The Privacy Research Group is a weekly meeting of students, professors, and industry professionals who are passionate about exploring, protecting, and understanding privacy in the digital age.

Joining PRG:

Because we deal with early-stage work in progress, attendance at meetings of the Privacy Research Group is generally limited to researchers and students who can commit to ongoing participation in the group. To discuss joining the group, please contact Eli Siems. If you are interested in these topics, but cannot commit to ongoing participation in PRG, you may wish to join the PRG-All mailing list.
 
PRG Student Fellows—Student members of PRG have the opportunity to become Student Fellows. Student Fellows help bring the exciting developments and ideas of the Research Group to the outside world. The primary Student Fellow responsibility is to maintain an active web presence through the ILI student blog, reporting on current events and developments in the privacy field and bringing the world of privacy research to a broader audience. Fellows also have the opportunity to help promote and execute exciting events and colloquia, and even present to the Privacy Research Group. Student Fellow responsibilities are a manageable and enjoyable addition to the regular meeting attendance required of all PRG members. The Student Fellow position is the first step for NYU students into the world of privacy research. Interested students should email Student Fellow Coordinator Eli Siems with a brief (1-2 paragraph) statement of interest or for more information.


PRG Calendar:


Fall 2018 [12:45-2:00pm, Furman Hall, 245 Sullivan Street, Room 120]

November 28: Ashley Gorham

November 14: Mark Verstraete

November 7: Jonathan Mayer

October 31: Sebastian Benthall

October 24: Yafit Lev-Aretz — Privacy and the Human Element
     ABSTRACT: The right to privacy has been traditionally discussed in terms of human observation and the formation of subsequent opinion or judgment. Starting with Warren and Brandeis' "right to be let alone," and continuing with the privacy torts, the early days of privacy in the legal sphere placed crucial emphasis on human presence. Frequently made arguments such as "I've got nothing to hide" on the one hand and "you are being watched" on the other go to the heart of the human element, which became an intuitive component around which the right to privacy has been structured, evolved, and interpreted over the years. Nowadays, however, most information flows do not involve a human in the loop, and while we are quite uncomfortable with human observation and subsequent judgment, algorithmic observation and judgment do not provoke similar discomfort. This discrepancy can account for the privacy paradox, which refers to the difference between stated positions on information collection and widespread participation in it. It can also explain the significant expansion of the privacy bundle in the past decade to include concerns such as discrimination, profiling, unjust enrichment, and online manipulation. In my work, I point to the failure of privacy as a policy goal and build on the work of Priscilla Regan and Dan Solove to locate this failure, beyond the use of wrong metaphors and the individual focus, in the mismatch between the strong human presence in privacy intuitions and the modern surveillance culture that increasingly capitalizes on diverse means of humanless tracking. Consequently, I call for a conceptual shift that keeps privacy within the boundaries of the human element and discusses all other informational risks under a parallel paradigm of legal protection.

October 17: Julia Powles — AI: The Stories We Weave; The Questions We Leave
     ABSTRACT: It has become almost automatic. While public conversation about artificial intelligence readily diverts into problems of the long future (the rise of the machines) and ingrained past (systemic inequality, now perpetuated and reinforced in data-driven systems), a small cadre of tech companies amasses unprecedented power on a planetary scale. This talk is an exploration and invitation. It interrogates the debates we have, and those we need, about AI, algorithms, rights, regulation, and the future. It examines what we talk about, why we talk about it, what we should ask and solve instead, and what is required to spur a richer, more imaginative, more innovative conversation about the world we wish to create.

October 10: Andy Gersick — Can We Have Honesty, Civility, and Privacy Online? Implications from Evolutionary Theories of Animal and Human Communication
     ABSTRACT: Early internet optimism centered on two unique affordances of online interaction that were expected to empower disenfranchised and diasporic groups: the mutability of online identities and the erasure of physical distance. The ability to interact from the safety of a distant and sometimes hidden vantage has remained a core feature of online social life, codified in the rules of social-media sites and considered in discussion of legal privacy rights. But it is now far from clear that moving our social lives online has “empowered” the disenfranchised, on balance. In fact, the disembodied and dispersed nature of online communities has increasingly appeared to fuel phenomena like trolling, cyberbullying and the deliberate spread of misinformation. Science on the evolution of communication has a lot to say about how social animals evaluate the trustworthiness of potential mates and rivals, allies and enemies. Most of that work shows that bluffing, false advertisement and other forms of deceptive signaling are only held in check when signal-receivers get the chance to evaluate the honesty of signal-producers through direct and repeated contact. It’s a finding that holds true across the animal kingdom, and it has direct implications for our current socio-political discourse. The antagonistic trolls and propagandistic sock-puppets that have invaded our politics are using deceptive strategies that are as old as the history of communication. What’s new, in human social evolution, is our vulnerability to those strategies within a virtual environment. I will discuss elements of evolutionary theory that seem relevant to online communication and internet privacy, and I hope to have a dialogue with attendees about (a) how those theories intersect with core elements of internet privacy law, and (b) whether we have to alter our basic expectations about online privacy if we want social-media interactions that favor cooperation over conflict.

October 3: Eli Siems — The Case for a Disparate Impact Regime Covering All Machine-Learning Decisions
     ABSTRACT: The potential for Machine Learning (ML) tools to produce discriminatory models is now well documented. The urgency of this problem is compounded both by the rapid spread of these tools into socially significant decision structures and by the unique obstacles ML tools pose to the correction of bias. These unique challenges fit into two categories: (1) the uniquely obfuscatory nature of correlational modeling and the threat of proxy variables standing in for impermissible considerations, and (2) the overriding tendency of ML tools to “freeze” historical disparities in place, and to replicate and even exacerbate them. Currently, two ML tools with identical biases stemming from identical issues will be reviewed differently depending on the context in which they are utilized. Under Title VII, for example, statistical evidence of discrimination would be sufficient to initiate a claim, but the same claim under the Constitution would be dismissed at the pleading stage without additional evidence of intent to discriminate. This paper attempts to work within the (profoundly flawed) strictures of existing Constitutional and statutory law to propose the adoption of a unified, cross-contextual regime that would allow a plaintiff challenging the decisions of an ML tool to utilize statistical evidence of discrimination to carry a claim beyond the initial pleading stage, empowering plaintiffs to demand a record of a tool’s design and the data upon which it trained. In support of extending a disparate impact regime to all instances of ML discrimination, I carefully analyze the Supreme Court’s treatment of statistical evidence of discrimination under both the Fourteenth Amendment and under statutory Civil Rights law. While the Supreme Court has repeatedly disavowed the application of disparate-impact style claims to Fourteenth Amendment Equal Protection, I argue that, for myriad reasons, its stated logic in doing so does not hold when the decision-maker in question is an ML tool. By analyzing Equal Protection holdings from the fields of government employment, death penalty sentencing, policing, and risk assessment as well as holdings under Title VII of the Civil Rights Act, the Fair Housing Act, and the Voting Rights Act, I identify the contextual qualities that have factored into the Court’s decisions to allow or disallow disparate impact evidence. I then argue that the court’s own reasoning in barring the use of such evidence in contexts like death penalty sentencing and policing decisions cannot apply to ML decisions, regardless of context.

September 26: Ari Waldman — Privacy's False Promise
     ABSTRACT: Privacy law—a combination of statutes, constitutional norms, regulatory orders, and court decisions—has never seemed stronger. The European Union’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CalCPA) work in parallel with the Federal Trade Commission’s broad regulatory arsenal to put limits on the collection, use, and manipulation of personal information. The United States Supreme Court has reclaimed the Fourth Amendment’s historical commitment to curtail pervasive police surveillance by requiring warrants for cell-site location data. And the EU Court of Justice has challenged the cross-border transfer of European citizens’ data, signaling that American companies need to do far more to protect personal information. This seems remarkably comprehensive. But the law’s veneer of protection is hiding the fact that it is built on a house of cards. Privacy law is failing to deliver its promised protections in part because the responsibility for fulfilling legal obligations is being outsourced to layers of compliance professionals who see privacy law through a corporate, rather than substantive, lens. This Article provides a comprehensive picture of this outsourcing market and argues that the industry’s players are having an outsized and constraining impact on the construction of privacy law in practice. Based on original primary source research into the ecosystem of privacy professionals, lawyers, and the third-party vendors on which they increasingly rely, I argue that because of a multilayered process of outsourcing corporate privacy duties—one in which privacy leads outsource privacy compliance responsibilities to their colleagues, their lawyers, and an army of third-party vendors—privacy law is in the middle of a process of legal endogeneity: mere symbols of compliance are replacing real progress on protecting the privacy of consumers.

September 19: Marijn Sax — Targeting Your Health or Your Wallet? Health Apps and Manipulative Commercial Practices
     ABSTRACT: Most popular health apps (e.g. MyFitnessPal, Headspace, Fitbit) are not just helpful tools aimed at improving the user's health; they are also commercial services that use the idea of health to monetize their user base. In order to do so, popular health apps (1) rely on advanced analytical tools to 'optimize' monetization, and (2) propagate a rather particular health discourse aimed at making users understand their own health in a way that serves the commercial interests of health apps. Given that health is very important to people and that health apps often try to mask their commercial intentions by appealing to the user's health, I argue that commercial health app practices are potentially manipulative. I offer a conception of manipulation to help explain how health app users could be manipulated by health apps. To address manipulation in health apps, it would be wise to not only focus on questions of informational privacy and data protection law, but also consider decisional privacy and unfair commercial practice law.

September 12: Mason Marks — Algorithmic Disability Discrimination
     ABSTRACT: In the Information Age, we continuously shed a trail of digital traces that are collected and analyzed by corporations, data brokers, and government agencies. Using artificial intelligence tools such as machine learning, they convert these traces into sensitive medical information and sort us into health and disability-related categories. I have previously described this process as mining for emergent medical data (EMD) because the health information inferred from digital traces often arises unexpectedly (and is greater than the sum of its parts). EMD is employed in epidemiological research, advertising, and a growing scoring industry that aims to sort and rank us. This paper describes how EMD-based profiling, targeted advertising, and scoring affect the health and autonomy of people with disabilities while circumventing existing health and anti-discrimination laws. Because many organizations that collect EMD are not covered entities under the Health Insurance Portability and Accountability Act (HIPAA), EMD-mining circumvents HIPAA's Privacy Rule. Moreover, because the algorithms involved are often inscrutable (or maintained as trade secrets), violations of anti-discrimination laws can be difficult to detect. The paper argues that the next generation of privacy and anti-discrimination laws must acknowledge that in the Information Age, health data does not originate solely within traditional medical contexts. Instead, it can be pieced together by artificial intelligence from the digital traces we scatter throughout real and virtual worlds.
 

Spring 2018

May 2: Ira Rubinstein Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
     ABSTRACT: The General Data Protection Regulation (GDPR) seeks to protect the privacy and security of EU citizens in a world that is vastly different from that of the 1995 Data Protection Directive. This is largely due to the rise of the Internet and digital technology, which together define how we communicate, access the world of ideas, and make ourselves into social creatures. The GDPR seeks to modernize European data protection law by establishing data protection as a fundamental right. It requires data controllers to respect the rights of individuals, including new rights of erasure and data portability, and to comply with new obligations, including accountability, a risk-based approach, impact assessments, and data protection by design and default (DPDD). Ideally, these new DPDD obligations will change business norms by bringing data protection to the forefront of product design. Although the GDPR strives to remain sufficiently broad and flexible to allow for creative solutions, it also adopts a belt and suspenders approach to regulation, imposing multiple, overlapping obligations on data controllers. What, then, is the specific task of the DPDD provision? It requires organizations to implement privacy-enhancing measures at the earliest stage of design and to select techniques that by default are the most protective of individuals' privacy and data protection. More specifically, Article 25 requires that "controllers shall ... implement appropriate technical and organisational measures ... in an effective manner... in order to meet the requirements of this Regulation and protect the rights of data subjects" and to ensure that "by default, only personal data which are necessary for each specific purpose of the processing are processed." This raises several questions, however. For example, do organizations achieve these goals by implementing specific measures over and above those they might otherwise put into effect to meet their obligations under the remainder of the Regulation? Are certain "technical and organizational measures" (like pseudonymisation and data minimisation) required or merely recommended? Are there specific design and engineering techniques that organizations should follow to satisfy their DPDD obligations? And how do organizations know when their efforts satisfy Article 25 requirements, especially when they have already complied with other obligations? In this paper, we examine what technology companies are doing currently to satisfy their obligations under Article 25 in the course of establishing overall GDPR compliance programs. We expect to find that companies with limited privacy resources are confining their efforts to a compliance-based approach, resulting in a patchwork of privacy practices rather than adoption of a privacy-based model of product design. And we predict that in a rush to achieve compliance, these companies will fail to implement the methods and practices that comprise privacy by design as that term is understood, not by regulators, but by engineers and designers as described in our earlier work (Rubinstein & Good, Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents). In other words, many firms will treat the DPDD obligation as just a checkbox requirement. We will investigate these claims via case studies.
However, we do not rely on surveys of a representative sampling of regulated firms or snowball sampling of industry practitioners whose work exemplifies the methods and practices that engineers and designers rely on to achieve specific privacy (and security) goals. Rather, we will analyze what two groups of vendors are offering their customers to help them operationalize the GDPR generally and Article 25 in particular. We will look at both privacy technology vendors (a new niche market of firms selling into the private-sector market of firms needing help with GDPR compliance) and cloud infrastructure vendors (like Microsoft) who are marketing their platform to large multinationals and SMEs as GDPR-ready. (If necessary, we may supplement this approach with telephone interviews but mostly for purposes of follow up questions rather than as a source of primary knowledge.) Finally, we report on the incentives and motivations behind these practical solutions and discuss how supervisory authorities might develop policies to encourage firms to adopt appropriate solutions and develop the necessary expertise to achieve them. In sum, we provide an analysis of Article 25 with the goal of helping EU regulators bridge the gap between the ideals and practice of data protection by design and default.

April 25: Elana Zeide — The Future Human Futures Market
     ABSTRACT: This paper considers the emerging market in student futures as a cautionary tale. Income sharing arrangements involve the explicit and literal commodification of “human capital” by for-profit third parties who broker income sharing agreements between private investors and students with promising predictive data profiles. This paper considers the problematic legal and ethical aspects of the predictive technologies driving these markets and draws a parallel to the role schools and third-party career platforms play in sorting, scoring, and predicting student futures as part of a formal education. These matching systems not only mete out opportunity but preempt access to opportunity (see Kerr & Earle).
Many coding “bootcamps” take an untraditional approach to student financing. Some offer a money-back employment “guarantee.” Others use “human capital” contracts. Instead of requiring students to pay tuition up front or take out onerous loans based on uncertain career paths, these schools claim a portion of a graduate’s wages upon gainful employment. A two-year software engineering program in San Francisco, for example, asks for no money upfront but then takes 17% of students’ internship earnings during the program and 17% of salaries for three years after finding a job. Other schools, advocates, and policymakers push for similar private education funding arrangements, including bills introduced in the U.S. Senate and House of Representatives in 2017. They promote “income share agreements” as more equitable and efficient for students than the traditional student loan system, where debt may be disproportionate to post-graduation wages. These arrangements raise numerous constitutional, legal, and ethical questions. Do students have to accept the first offer they receive? How can they be enforced? How might this arrangement shift who can obtain a post-secondary credential? Are they simply a modern version of indentured servitude? A less discussed but key component of the developing “futures” market is the role of opportunity brokers: third parties who design, implement, and “take the complexity out of” income sharing agreements. These for-profit companies match interested investors with promising “opportunities” based on proprietary predictive analytics that project future income. Some go beyond commodifying student futures to securitizing them: as one commentator writes “human capital - the present value of individuals’ future earnings - may soon become an important investable asset class, following in the footsteps of home mortgage debt.” Schools are themselves opportunity brokers, credential-creators, and career matchmakers that end up determining whose futures we support - individually, institutionally, or as a society. Scholars and popular entertainment offer chilling accounts of the dystopian aspects of a scored society, governed by anticipatory and proprietary data-models likely to reinforce existing patterns of privilege and inequity: ubiquitous surveillance systems that chill free expression, promote performativity, and create circumstances ripe for social control and engineering. Except we already have such a system in place: the formal education system. American schools not only provide whatever one considers “an education,” but also sort, score, and predict student potential. The tools they use to do so - textbooks, SATs, and standards like the Common Core - are subject to intense public scrutiny. Schools increasingly rely on for-profit vendors to provide the platforms and tools that deliver, assess, and document student progress. These include “personalized learning systems” that continuously monitor student progress and adapt instruction at scale - what some have called the “mass customization” of education. They use predictive analytics to classify students, infer characteristics, and predict optimal learning pathways. Higher education institutions also use predictive platforms to make recruiting and admissions decisions, award financial aid, and detect students at risk of dropping out. Social media platforms and people analytics firms increasingly mediate and automate candidate-employer matching.
This system might similarly not just deny but preempt access to opportunity, without accompanying due process provisions, and it is likely to do so in ways that reinforce today's inequities, creating a new segregation of education.

April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
     ABSTRACT: In 2017, Scott Skinner-Thompson published “Performative Privacy,” based in part on work with this group. In the article, he “identifies a new dimension of public privacy” and argues for a reading of certain public acts of anti-surveillance resistance as performative, and therefore to be legally understood as expressions of speech and protected as such. In this talk, I extend the framework of performative privacy from the perspective of performance studies, and discuss some new applications of critical theory and performance theory in contemporary issues of surveillance. As a discipline, performance studies, particularly its critique of speech as act and its intervention in the use of liveness in action, offers an opportunity to meaningfully trouble the distinction between efficacy and expression underlying the question of performative privacy. To test these limits and demonstrate the possible applications of performance theory, I follow the performative privacy framework in two directions. First, we’ll examine privacy’s impact on performance and aesthetics in the rise of the post-Snowden “surveillance art” movement. Then, I incorporate Clare Birchall’s emerging research on “shareveillance” to explore the question of efficacy in surveillance resistance and the resulting impacts of performance entering into privacy discourse.

April 11: John Nay Natural Language Processing and Machine Learning for Law and Policy Texts
     ABSTRACT: Almost all law is expressed in natural language; therefore, natural language processing (NLP) is a key component of efforts to automatically structure, explore and predict law and policy at scale. NLP converts unstructured text into a formal, structured representation that computers can analyze.  First, we provide a brief overview of the different types of law and policy texts and the different types of machine learning methods to process those texts. We introduce the core idea of representing words, sentences and documents as numbers. Then we describe NLP and machine learning tools for leveraging the text data to accomplish tasks. We describe methods for automatically summarizing content (sentiment analyses, text summaries, topic models), extracting content (entities, attributes and relations), retrieving information and documents, predicting outcomes related to text, and answering questions.
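For readers unfamiliar with the core idea of representing text as numbers, a minimal sketch follows. It is illustrative only and not drawn from the talk; the documents and query are hypothetical placeholders.

# Minimal sketch of turning legal text into numbers and using the resulting
# vectors for a simple document retrieval task. Illustrative only; the
# documents and the query below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The controller shall implement appropriate technical and organisational measures.",
    "A warrant is generally required before law enforcement may search a cell phone.",
    "Processors must notify the controller of a personal data breach without undue delay.",
]

# Each document becomes a vector of term weights (unstructured text -> numbers).
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)

# Retrieval: score every document against a query in the same vector space.
query_vector = vectorizer.transform(["notification duties after a data breach"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()
print(scores.argmax(), scores)  # index and scores of the best-matching document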

April 4: Sebastian Benthall — Games and Rules of Information Flow
     ABSTRACT: Attempts to characterize the nature of privacy must acknowledge the complexity of the concept. They tend to be either particularist (acknowledging many, unrelated, particular meanings) or contextualist (describing how the same concept manifests itself differently across social contexts). Both of these approaches are insufficient for making policy and technical design decisions about technical infrastructure that spans many different contexts. A new model is needed, one that is compatible with these theories but which characterizes privacy considerations in terms of the reality of information flow, not our social expectations of it. I build a model of information flow from the theories of Fred Dretske, Judea Pearl, and Helen Nissenbaum that is compatible with both intuitive causal reasoning and contemporary machine learning methods. This model clarifies that information flow is a combination of causal flow and nomic association, where the associations of information depend on the causal structure of which the flow is a part. This model also affords game-theoretic and mechanism-design extensions using the Multi-Agent Influence Diagram framework.
I employ this model to illustrate several different economic contexts involving personal information, as well as what happens when these contexts collapse. The model allows for a robust formulation of the difference between a tactical and a strategic information flow, which roughly correspond to the differences between the impact of a sudden data breach and the chilling effects of ongoing surveillance.

March 28: Yan Shvartzshnaider and Noah Apthorpe Discovering Smart Home IoT Privacy Norms using Contextual Integrity
     ABSTRACT: The proliferation of Internet of Things (IoT) devices for consumer “smart” homes raises concerns about user privacy. We present a survey method based on the Contextual Integrity (CI) privacy framework that can quickly and efficiently discover privacy norms at scale. We apply the method to discover privacy norms in the smart home context, surveying 1,731 American adults on Amazon Mechanical Turk. For $2,800 and in less than six hours, we measured the acceptability of 3,840 information flows representing a combinatorial space of smart home devices sending consumer information to first and third-party recipients under various conditions. Our results provide actionable recommendations for IoT device manufacturers, including design best practices and instructions for adopting our method for further research.
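As a rough illustration of what a "combinatorial space of information flows" means here, the sketch below enumerates survey prompts from lists of contextual integrity parameters. The parameter values are invented examples, not the lists used in the study.

# Sketch of enumerating contextual integrity (CI) information flows as survey
# prompts. The parameter values are invented examples, not the study's lists.
from itertools import product

devices = ["a sleep monitor", "a security camera", "a smart thermostat"]
information = ["its owner's location", "audio of its owner", "its owner's sleep habits"]
recipients = ["its manufacturer", "an Internet service provider", "a government intelligence agency"]
conditions = ["if the owner has given consent", "if the data are anonymized", "in an emergency"]

flows = [
    f"{device} records {info} and sends it to {recipient} {condition}."
    for device, info, recipient, condition in product(devices, information, recipients, conditions)
]

print(len(flows))  # size of the combinatorial space: 3 * 3 * 3 * 3 = 81 prompts
print(flows[0])    # one flow a respondent would rate for acceptability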

March 21: Cancelled

March 7: Cancelled

February 28: Thomas Streinz TPP’s Implications for Global Privacy and Data Protection Law
     ABSTRACT: On 8 March, the remaining eleven parties of the original Trans-Pacific Partnership (TPP)–Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, and Vietnam–will meet in Santiago, Chile to revive the TPP via the awkwardly (and arguably misleadingly) labelled Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). This is a surprising development for two reasons: 1) After President Trump withdrew the US from the original TPP in January 2017, most observers believed the agreement was dead for good. 2) The TPP11 parties preserved the vast majority of the provisions of the original TPP (with notable exceptions mainly in the investment and IP chapters) despite the fact that the agreement mainly followed US models of (so-called) free trade agreements (FTAs) and was in fact promoted as “Made in America” by the Office of the United States Trade Representative (USTR) during the Obama administration, which was particularly proud of a new set of rules that it branded as the "Digital2Dozen." The chapter on “electronic commerce,” which contains most but not all provisions relevant to internet law and regulation, was incorporated into CPTPP without any modifications and is bound to become the template for future trade agreements (including the ongoing renegotiations of NAFTA) without EU participation. In my presentation for PRG, I will focus on TPP’s (weak) provision on “personal information protection” (Article 14.8) and its innovative rules for free data flows (Article 14.11) and against data localization requirements (Article 14.13). I will explain and we should discuss why the EU views these rules as problematic from a privacy perspective. In its recent agreement with Japan, which is also a TPP party, this can was kicked down the road, but on 31 January 2018 the European Commission announced that it would endorse provisions for data flows and data protection in EU trade agreements. The crucial difference from the US model as incorporated in TPP is that the EU will likely require compliance with the General Data Protection Regulation (GDPR) as a condition for free data flows—complementing the existing adequacy assessment procedures and leveraging its trade muscle to promote the GDPR as the global standard.

February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
     ABSTRACT: Direct-to-consumer genetic sequencing, provided by companies like 23andMe, Ancestry, and Helix, has raised a myriad of scientific and legal issues ranging from the statistical interpretation of results to access, regulation, and user privacy. Interestingly, the most recent efforts have attempted to tie together direct-to-consumer testing with the blockchain and cryptocurrencies, but consumer protection and privacy concerns remain. In this presentation, we will provide a history of the direct-to-consumer genetic sequencing market and how it arrived at its current state. We will also highlight some of the legal and regulatory issues surrounding the activities of these companies in relation to FDA requirements, the Genetic Information Nondiscrimination Act of 2008 (GINA), and the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Finally, we will use current examples from the emerging market of direct-to-consumer gut microbiome sequencing kits as a case study of how the privacy policies of these companies are evolving in a developing market and what concerns customers could (and perhaps should) have when using these kits.

February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
     ABSTRACT: In Intellectual Property law, a trade secret is information which “derives economic value from not being generally known . . . to other persons who can obtain economic value from its disclosure or use.” (Uniform Trade Secrets Act). Unlike patent law, trade secret law ceases to protect against the use of information once that information becomes generally known. A trade secret, once disclosed, is the proverbial cat out of the bag. For this reason, courts have developed an evidentiary privilege protecting trade secrets from disclosure in trial unless a party shows that such disclosure is actually necessary to a just resolution. This privilege has developed over decades of civil litigation. Recently, a confluence of factors has led to an increase in assertions of the trade secret privilege in criminal trials. State police departments and prosecutors have begun contracting with private software developers for the use of algorithmic tools that generate either forensic proof to be used at trial, “risk assessment” to be used at sentencing, or data for policing. Criminal defendants have sought access to the source code for such programs only to be met with claims that the information sought is privileged as a trade secret. In addressing what a criminal defendant must show to overcome the privilege, some courts have directly applied the standard from civil common law, while others have imported key elements of that standard. Assuming that a defendant must always make some showing to justify the disclosure of “trade secret” source code in her criminal trial, her effective defense will require an understanding of the nature of her burden—must she show that the code is “necessary” to her defense (a replica of the civil standard), that the code is simply “material and relevant” (in line with basic criminal discovery standards), or something in between? This talk will draw from a spate of cases in which defendants sought the source code from “probabilistic genotyping” programs in order to define the contours of these standards as they have recently been applied. Centrally, it will identify the factors that have led courts to find that criminal defendants have failed to carry the burden of establishing either relevance or necessity of the source code. It will reveal that judges have relied on the same validation studies properly considered at the admissibility stage (where the court must determine the reliability of expert/scientific evidence) to determine that a defense review of the source code is either irrelevant or unnecessary. The idea that validation studies can defeat a defendant’s claim that source code is relevant or necessary to her defense fails to account for two key considerations—first, that a defendant may seek to challenge something other than the reliability of the software, and second, that validation of these tools may not be providing the type of assurance legally sufficient to defeat a defendant’s discovery requests. In addition to critiquing judicial reasoning, this talk will address deficiencies in defense pleadings and potential adaptations that may lead to more successful discovery motions in the future.

February 7: Madeline Byrd and Philip Simon Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
     ABSTRACT: In 2016, ProPublica published an article revealing the startlingly easy method Facebook’s advertising program provided to exclude protected classes from seeing employment, housing, and credit advertisements. The article raised numerous questions about potential liability and what other mechanisms advertisers could use to discriminate via Facebook’s platform. This presentation will address whether Facebook can be held liable for advertising discrimination based on the discriminatory uses of its platform by advertisers; the current state of U.S. discrimination laws with respect to targeted online advertising in general; and whether online platforms can escape liability through the Communications Decency Act (CDA) § 230. Our analysis of potential discriminatory uses will focus on research done by Krishna Gummadi and his team that explores Facebook’s advertising features (to be presented at FAT* ‘18, February 2018). Their paper identifies three ways in which advertisers can target users: PII-based targeting, attribute-based targeting, and look-alike audience targeting. Each targeting tool will be analyzed in the context of employment, housing, and credit discrimination laws to address whether these features can be illegally used by advertisers. Finally, we will address possible ways in which Facebook can be held liable for these illegal uses, despite any protection against liability that it may enjoy under CDA § 230.

January 31: Madelyn Sanfilippo Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks 
     ABSTRACT: Knowledge access is both constrained and supported by social, political, human, economic, and technological factors, making formal and informal governance of knowledge a set of complex sociotechnical constructs. Political theory surrounding polycentric governance has long structured inquiry into contexts in which public service provision is nested or competing. This talk will define and discuss applications of polycentric governance theory to sociotechnical knowledge services, in order to support empirically grounded policy-making. Polycentricity is often defined in terms of many nested or overlapping contexts or jurisdictions, which may compete with or complement one another, yet is also fundamentally about the many centers of decision-making within those contexts or jurisdictions. Sociotechnical polycentricity pertains not only to the complex exogenous policy environment, but also to endogenous decisions of firms or actors, which themselves overlap with this external environment. Extensive literature demonstrates how polycentricity illuminates complexity and supports policy recommendations or improvements, based on failures, complexity, or conflicts in cases; this talk will explore polycentric frames applied to questions around sociotechnical governance, including various examples centered on knowledge access and privacy.

January 24: Jason Schultz and Julia Powles Discussion about the NYC Algorithmic Accountability Bill 
     ABSTRACT: The New York City Council recently passed one of the first laws in the United States to address “algorithmic accountability.” The bill, NY 1696, proposed by Council Member James Vacca, creates a task force to explore how the city can best open up public agencies’ computerized decision-making tools to public scrutiny. This effort raises many technical, legal, and political questions about how algorithmic systems fit into the broader notions of responsible and responsive government. Julia Powles and Jason Schultz have each been involved in the debate over the bill and will lead a discussion of its contents, its context, and its next steps. See Julia Powles' recent New Yorker piece for more background.


Fall 2017

November 29: Kathryn Morris and Eli Siems Discussion of Carpenter v. United States
    
ABSTRACT:
In the 1970s, the Supreme Court decided a series of cases establishing within its Fourth Amendment jurisprudence a principle now known as the Third-Party Doctrine. One defendant’s bank records were seized and examined without a warrant. U.S. v. Miller, 425 U.S. 435 (1976). The phone numbers dialed by another were surreptitiously recorded by a pen register, also without a warrant. Smith v. Maryland, 442 U.S. 735 (1979). The Court reasoned that these were valid exercises of law enforcement authority and not violations of the Fourth Amendment, chiefly because the defendants had willingly turned this information over to a third party and, in doing so, forfeited any legitimate expectation that the information would be private and thus subject to constitutional protection. Under the Third-Party Doctrine, access to such materials by law enforcement does not constitute a “search.” But much of how we transmit information to third parties has changed. Recently, some Justices of the Supreme Court have signaled willingness to revisit some Fourth Amendment principles in light of modern developments. Taking a fresh look at the Search Incident to Arrest doctrine in 2014, Chief Justice Roberts issued sweeping statements indicating that smartphones are different enough from traditional objects of search or seizure to change Fourth Amendment calculations. Riley v. California, 573 U.S. __ (2014). Justice Sotomayor has called the Third-Party Doctrine “ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.” U.S. v. Jones, 132 S.Ct. 945 (2012). (Sotomayor, J. Concurring). Carpenter has petitioned the Supreme Court to rule on whether the warrantless collection of Cell-Site Location Information (CSLI) violated his Fourth Amendment rights, and the government is set to argue that this information is exempt from the warrant requirement under the Third-Party doctrine. Much will turn on Carpenter’s efforts to draw a meaningful distinction between CSLI and the pen register data in Smith. Carpenter will argue (in line with recent SCOTUS dicta) that cell data is different enough in terms of scope and potential intrusion that the third-party rule should not mechanically apply. He will also argue that his transmission of information to his cell provider was not voluntary and that this transmission should not be found to affect the legitimacy of his expectation of privacy. The Electronic Frontier Foundation’s amicus brief in support of Carpenter details how technical and practical considerations push against application of the Third-Party Doctrine to CSLI. The relevant portion of the Government’s brief is in section I of that argument. 

November 15: Leon Yin Anatomy and Interpretability of Neural Networks
     ABSTRACT:
From tumor spotting to facial identification, neural networks are designed to optimize and automate decision-making. In a recent blog post, Andrej Karpathy, the director of AI at Tesla, called neural networks "Software 2.0." But just how do neural networks work? Interpretability is an increasingly hot topic among practitioners and policymakers alike. This presentation dives into the anatomy of neural networks, from input to output, and everything in between. The aim of this presentation is to establish a baseline understanding of how neural networks operate internally, in the hope that it will inform how we interact with neural networks externally.
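As a reference point for the discussion, here is a minimal sketch of that anatomy: an input passes through weighted layers and nonlinearities to produce an output. The layer sizes and random weights are arbitrary illustrations, not a model from the talk.

# Minimal sketch of a neural network's forward pass: input -> hidden layer ->
# output probabilities. Layer sizes and random weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                         # input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # first layer weights and biases
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)  # output layer weights and biases

hidden = np.maximum(0, W1 @ x + b1)            # linear transform + ReLU nonlinearity
logits = W2 @ hidden + b2                      # second linear transform
probs = np.exp(logits) / np.exp(logits).sum()  # softmax turns scores into probabilities

print(probs)  # the output end of the anatomy; training adjusts W1, b1, W2, b2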

November 8: Ben Zevenbergen Contextual Integrity for Password Research Ethics?
     ABSTRACT:
Ben will present a draft chapter of his PhD, where he applies contextual integrity and the literature on research ethics to technical password research. While there are some benefits to password research, the origin of the research data is usually a hacked and leaked database containing millions of passwords. The aim of the chapter is not to criticize password research per se, but to test whether contextual integrity would be a useful framework for applying the concepts of research ethics.

November 1: Joe Bonneau An Overview of Smart Contracts
     ABSTRACT:
Smart contracts are an exciting and rapidly developing technology. Ethereum, the most popular platform for smart contracts, is already worth over $30 billion on hopes that smart contracts can revolutionize some types of contractual agreements. For example, Alice and Bob can agree to play a game of chess without meeting or trusting each other. A smart contract can guarantee that the loser pays the winner a bet with no traditional legal system to enforce the terms. Alice and Bob might live in different jurisdictions, or one of them might even be a robot. This talk will provide an overview of the technology and its limitations. It will also discuss the controversy behind the DAO, which highlights the difficulty of automated contract enforcement with no human oversight. Finally, several open questions about the legal implications of smart contracts will be presented.
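To make the chess-bet example concrete, the sketch below expresses the escrow logic in ordinary Python. An actual smart contract would be written in a contract language such as Solidity and enforced by the blockchain network, not by any single party running code like this; the class and names here are purely illustrative.

# Conceptual sketch (plain Python, not contract code) of the escrow logic in the
# chess-bet example: both players deposit a stake, and the pot is released to
# whoever the agreed result names, with no trusted enforcer in between.
class ChessBet:
    def __init__(self, stake):
        self.stake = stake
        self.deposits = {}
        self.settled = False

    def deposit(self, player, amount):
        assert player in ("alice", "bob") and amount == self.stake
        self.deposits[player] = amount

    def settle(self, winner):
        # On a real platform the winner would come from agreed-on rules or an
        # oracle reporting the game result, not from a trusted third party.
        assert not self.settled and winner in self.deposits and len(self.deposits) == 2
        self.settled = True
        return winner, sum(self.deposits.values())

bet = ChessBet(stake=10)
bet.deposit("alice", 10)
bet.deposit("bob", 10)
print(bet.settle("bob"))  # ('bob', 20): the loser's stake goes to the winner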

October 25: Sebastian Benthall Modeling Social Welfare Effects of Privacy Policies
     ABSTRACT:
According to Contextual Integrity, privacy norms are legitimized by a balance of societal values, contextual purposes, and individual ends. While several canonical arguments are used to make this point, formal reasoning about the social welfare consequences of privacy can shed light on policy design, especially when fine-grained computational policies, such as differential privacy, are available. Using the compact game-theoretic framework of Multi-Agent Influence Diagrams (Koller and Milch, 2003), we model several classes of information market and the impact of privacy regulations on them individually as well as in combination. We discover that the social welfare implications of privacy are not evenly distributed, and compare this result with empirical data about diversity in privacy preferences.

October 18: Sue Glueck Future-Proofing the Law
     ABSTRACT:
In July 2016, the Court of Appeals for the Second Circuit agreed with Microsoft that U.S. federal or state law enforcement cannot use traditional search warrants to seize emails of citizens of foreign countries that are located in data centers outside the United States.  On October 16, 2017, the Supreme Court granted the Department of Justice’s petition to review this decision.  Microsoft believes that the Electronic Communications Privacy Act (ECPA) – a law enacted decades before there was such a thing as cloud computing – was never intended to reach within other countries’ borders. But there’s a broader dimension to this issue:  The continued reliance on a law passed in 1986 will neither keep people safe nor protect people’s rights.  If U.S. law enforcement can obtain the emails of foreigners stored outside the United States, what’s to stop the government of another country from getting your emails even though they are located in the United States?  Microsoft believes that people’s privacy rights should be protected by the laws of their own countries and that information stored in the cloud should have the same protections as paper stored in your desk.  Please join Sue Glueck, Microsoft’s academic relations director, for a lively discussion of the issues implicated by this case.

October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
     ABSTRACT: Algorithms make decisions that permeate our lives. Explanations of the decisions can assist with improving algorithm performance and ensuring procedural fairness. We describe a taxonomy for algorithmic decision-making explanations. We argue that explanations of algorithmic decisions should be provided in terms of why a decision chosen is better than the alternative decision that could have been chosen, i.e. the difference in the outcomes that would occur in a world where the decision is taken and a world where an alternative decision is taken. Then the explanation should provide local and global explanations of the input-output behavior of the models feeding into the decision module. In walking through the components of an explanation, we focus on complex data-driven systems, but the methods are applicable to simpler models as long as their input-output behavior can be analyzed. For an empirical case study, we model and explain an example of algorithmic decision-making in cooperation games.
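A minimal sketch of the contrastive idea in the abstract, assuming a toy outcome model for a cooperation game; the function and numbers are hypothetical placeholders, not the paper's models.

# Sketch of a contrastive explanation: compare the predicted outcome under the
# chosen decision with the outcome under the alternative that could have been
# chosen. The outcome model and numbers are hypothetical placeholders.
def predicted_outcome(features, decision):
    """Toy model of expected payoff in a cooperation game (hypothetical)."""
    base = 0.4 if features["partner_cooperated_last_round"] else 0.1
    return base + (0.3 if decision == "cooperate" else 0.15)

features = {"partner_cooperated_last_round": True}
chosen, alternative = "cooperate", "defect"

gain = predicted_outcome(features, chosen) - predicted_outcome(features, alternative)
print(f"'{chosen}' was chosen over '{alternative}' because its predicted outcome "
      f"is higher by {gain:.2f} given the current inputs.")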

October 4: Finn Brunton — 'The Best Surveillance System We Could Imagine': Payment Networks and Digital Cash
     ABSTRACT:
This wide-ranging talk will be less about an argument -- there is one, but I don't think it's as germane to the PRG's interests -- than a history that I hope will start further discussion about privacy and payment. I will tell two linked stories: Almost fifty years ago, a team of computer scientists and electrical engineers developed "the best surveillance system we could imagine": a prototype electronic funds transfer (EFT) system, an early sketch of payment networks from Visa to PayPal. Electronic money is a medium for data: for records of purchases, locations and times, names and social networks. From money as online performance (think Venmo or WeChat's gift-money games) to the information-collection practices of different payment platforms, digital money can produce detailed dossiers and reward or punish particular choices in subtle ways. There is an alternative history to this one, however: the project of building anonymous digital cash -- money as a medium that provides no information but its own verification. This project, filled with tricky technical and social paradoxes to resolve, takes us from radical experiments and subcultures in the 1980s to Bitcoin, Zcash, online black markets, and digital money-laundering schemes and obfuscation attempts in the present day. These projects carry their own problems -- from legitimacy to money laundering -- that we can consider.

September 27: Julia Powles Promises, Polarities & Capture: A Data and AI Case Study
     ABSTRACT:
The case study:  In November 2015, 1.6 million Londoners' fully identified medical records were transferred to Google. The first the public heard of it was an explosive news story in April 2016. The claimed purpose? So that Google's AI arm, DeepMind, could develop an app for kidney injury alerts. The discussion:  What is the best way to animate concerns over privacy, public value-for-data, competition, and civic innovation? How do you avoid polarization? How do you motivate intervention and accountability? What is the optimal strategy at different layers and for different audiences? Julia will speak to the paper Google DeepMind and Healthcare in an Age of Algorithms, and the draft reply to DeepMind's response to the paper.

September 20: Madelyn Rose Sanfilippo AND Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
     ABSTRACT: News outlets increasingly capitalize on the potential of push notifications to drive engagement and enhance readership. Such changes in news reporting and consumption offer a new, largely overlooked, research perspective into the competing narratives about the definition of news, their impact on political participation, entrenchment of political views, the ubiquity of media environments, and anxiety in media consumption. Situated within discussions about fake news, how new technologies have changed journalism, and the nature of news consumption overall, this paper and a larger ongoing empirical project seek to explore: 1) how push notifications and the online “breaking news” phenomenon differ from traditional news reporting; 2) relationships between objectivity in journalism, reader affect, and trust; and 3) what this means for participatory politics and its relationship to the fourth estate. This article illustrates patterns and key insights about the impact of push notifications on journalism and changes in sentiment in news communication through a case study comparing reporting on President Nixon firing Special Prosecutor Archibald Cox in 1973 to the recent firing of FBI Director James Comey by President Trump. While headlines and push notifications vary significantly across news providers, push notifications are similar across platforms in distinguishing characteristics such as emotionally-loaded and subjective language. Both of these are defining elements of fake and deceptive news and may potentially account for some of the media mistrust in recent years.

September 13: Ignacio Cofone — Anti-Discriminatory Privacy
     ABSTRACT:
The paper examines the information dynamics of privacy and discrimination (Strahilevitz 2007, Roberts 2015) to design anti-discriminatory privacy rules, especially for statistical and algorithmic discrimination (Barocas and Selbst 2016, Kim 2017). To do so, it uses empirical studies of informational anti-discriminatory rules (Goldin and Rouse 1997, Agan and Starr 2016) and explores how privacy rules can overcome the limitations that these rules faced. It proposes that taste-based discrimination and statistical discrimination, a traditional distinction in economics, have the same information dynamic and should therefore be addressed similarly by privacy law. The common element between different kinds of discrimination is that, to effectively prevent them, informational rules must focus on blocking information flows that can be used to shift discrimination to other groups (e.g. former inmates versus black men). Anti-discriminatory privacy rules, in other words, should block not only undesirable information but also their proxies. The paper develops a theory on how to identify such proxies based on the cross-elasticity of information. It then applies this idea to algorithmic discrimination and proposes that the literature has so far brought legal solutions to an information problem. The paper proposes an information solution to the informational problem instead.
 

Spring 2017

April 26: Ben Zevenbergen Contextual Integrity as a Framework for Internet Research Ethics
April 19: Beate Roessler Manipulation
April 12: Amanda Levendowski Conflict Modeling
April 5: Madelyn Sanfilippo Privacy as Commons: A Conceptual Overview and Case Study in Progress
March 29: Hugo Zylberberg Reframing the fake news debate: influence operations, targeting-and-convincing infrastructure and exploitation of personal data
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa will lead discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial. 
March 8: Ira Rubinstein Privacy Localism
March 1: Luise Papcke Project on (Collaborative) Filtering and Social Sorting
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) Privacy and Innovation     
February 15: Argyri Panezi Academic Institutions as Innovators but also Data Collectors - Ethical and Other Normative Considerations
February 8: Katherine Strandburg Decisionmaking, Machine Learning and the Value of Explanation
February 1: Argyro Karanasiou A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
January 25: Scott Skinner-Thompson Equal Protection Privacy
 

Fall 2016

December 7: Tobias Matzner The Subject of Privacy
November 30: Yafit Lev-Aretz Data Philanthropy
November 16: Helen Nissenbaum Must Privacy Give Way to Use Regulation?
November 9: Bilyana Petkova Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
November 2: Scott Skinner-Thompson Recording as Heckling
October 26: Yan Shvartzshnaider Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
October 19: Madelyn Sanfilippo Privacy and Institutionalization in Data Science Scholarship
October 12: Paula Kift The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform

October 5: Craig Konnoth Health Information Equity
September 28: Jessica Feldman the Amidst Project
September 21: Nathan Newman UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
September 14: Kiel Brennan-Marquez Plausible Cause
 

Spring 2016

April 27: Yan Shvartzshnaider Privacy and IoT AND Rebecca Weinstein - Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gurses]

April 13: Florencia Marotta-Wurgler Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)

April 6: Ira Rubinstein Big Data and Privacy: The State of Play

March 30: Clay Venetis Where is the Cost-Benefit Analysis in Federal Privacy Regulation?

March 23: Daisuke Igeta An Outline of Japanese Privacy Protection and its Problems

                  Johannes Eichenhofer Internet Privacy as Trust Protection

March 9: Alex Lipton Standing for Consumer Privacy Harms

March 2: Scott Skinner-Thompson Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]

February 24: Daniel Susser Against the Collection/Use Distinction

February 17: Eliana Pfeffer Data Chill: A First Amendment Hangover

February 10: Yafit Lev-Aretz Data Philanthropy

February 3: Kiel Brennan-Marquez Feedback Loops: A Theory of Big Data Culture

January 27: Leonid Grinberg But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
 

Fall 2015

December 2: Leonid Grinberg But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race AND Kiel Brennan-Marquez - Spokeo and the Future of Privacy Harms
November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 11: Joris van Hoboken Privacy, Data Sovereignty and Crypto
November 4: Solon Barocas and Karen Levy Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton Of Fembots and Men: Privacy Insights from the Ashley Madison Hack

October 21: Paula Kift Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and co-author Nizan Geslevich Packin Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson Performative Privacy

September 9: Kiel Brennan-Marquez Vigilantes and Good Samaritan
 

Spring 2015

April 29: Sofia Grafanaki Autonomy Challenges in the Age of Big Data
                 David Krone Compliance, Privacy and Cyber Security Information Sharing
                 Edwin Mok Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing
                 Dan Rudofsky Modern State Action Doctrine in the Age of Big Data


April 22: Helen Nissenbaum 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken From Collection to Use Regulation? A Comparative Perspective
April 8: Bilyana Petkova Privacy and Federated Law-Making in the EU and the US: Defying the Status Quo?
April 1: Paula Kift — Metadata: An Ontological and Normative Analysis

March 25: Alex Lipton — Privacy Protections for the Secondary User of Consumer-Watching Technologies

March 11: Rebecca Weinstein (Cancelled)
March 4: Karen Levy & Alice Marwick — Unequal Harms: Socioeconomic Status, Race, and Gender in Privacy Research


February 25: Luke Stark — NannyScam: The Normalization of Consumer-as-Surveillor


February 18: Brian Choi A Prospect Theory of Privacy

February 11: Aimee Thomson — Cellular Dragnet: Active Cell Site Simulators and the Fourth Amendment

February 4: Ira Rubinstein — Anonymity and Risk

January 28: Scott Skinner-Thompson Outing Privacy

 

Fall 2014

December 3: Katherine Strandburg — Discussion of Privacy News [which can include recent court decisions, new technologies or significant industry practices]

November 19: Alice Marwick — Scandal or Sex Crime? Ethical and Privacy Implications of the Celebrity Nude Photo Leaks

November 12: Elana Zeide — Student Data and Educational Ideals: examining the current student privacy landscape and how emerging information practices and reforms implicate long-standing social and legal traditions surrounding education in America. The Proverbial Permanent Record [PDF]

November 5: Seda Guerses — Let's first get things done! On division of labor and practices of delegation in times of mediated politics and politicized technologies
October 29: Luke Stark — Discussion on whether “notice” can continue to play a viable role in protecting privacy in mediated communications and transactions given the increasing complexity of the data ecology and economy.
                 Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online
                 Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)
                 Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken —  The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead

October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue 

September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
 

Spring 2014

April 30: Seda Guerses — Privacy is Security is a prerequisite for Privacy is not Security is a delegation relationship
April 23: Milbank Tweed Forum Speaker — Brad Smith: The Future of Privacy
April 16: Solon Barocas — How Data Mining Discriminates - a collaborative project with Andrew Selbst, 2012-13 ILI Fellow
March 12: Scott Bulua & Amanda Levendowski — Challenges in Combatting Revenge Porn


March 5: Claudia Diaz — In PETs we trust: tensions between Privacy Enhancing Technologies and information privacy law. The presentation is drawn from the paper "Hero or Villain: The Data Controller in Privacy Law and Technologies," with Seda Guerses and Omer Tene.

February 26: Doc Searls Privacy and Business

February 19: Report from the Obfuscation Symposium, including brief tool demos and individual impressions

February 12: Ira Rubinstein The Ethics of Cryptanalysis — Code Breaking, Exploitation, Subversion and Hacking
February 5: Felix Wu — The Commercial Difference, which grows out of a piece just published in the Chicago Forum, The Constitutionality of Consumer Privacy Regulation

January 29: Organizational meeting
 

Fall 2013

December 4: Akiva Miller — Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy? & Malte Ziewitz What does transparency conceal?
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace

November 6: Karen Levy — Beating the Box: Digital Enforcement and Resistance
October 23: Brian Choi — The Third-Party Doctrine and the Required-Records Doctrine: Informational Reciprocals, Asymmetries, and Tributaries
October 16: Seda Gürses — Privacy is Don't Ask, Confidentiality is Don't Tell
October 9: Katherine Strandburg — Freedom of Association Constraints on Metadata Surveillance
October 2: Joris van Hoboken — A Right to be Forgotten
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting


Spring 2013

May 1: Akiva Miller — What Do We Worry About When We Worry About Price Discrimination
April 24: Hannah Bloch-Wehba and Matt Zimmerman — National Security Letters [NSLs]

April 17: Heather Patterson — Contextual Expectations of Privacy in User-Generated Mobile Health Data: The Fitbit Story
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA

April 3: Ira Rubinstein — Voter Privacy: A Modest Proposal
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day

March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
March 6: Mariana Thibes — Privacy at Stake, Challenging Issues in the Brazilian Context
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
 

Fall 2012

December 5: Martin French — Preparing for the Zombie Apocalypse: The Privacy Implications of (Contemporary Developments in) Public Health Intelligence
November 28: Scott Bulua and Catherine Crump — A framework for understanding and regulating domestic drone surveillance
November 21: Lital Helman — Corporate Responsibility of Social Networking Platforms
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
November 7: Sophie Hood — New Media Technology and the Courts: Judicial Videoconferencing
October 24: Matt Tierney and Ian Spiro — Cryptogram: Photo Privacy in Social Media
October 17: Frederik Zuiderveen Borgesius — Behavioural Targeting. How to regulate?

October 10: Discussion of 'Model Law'

October 3: Agatha Cole — The Role of IP address Data in Counter-Terrorism Operations & Criminal Law Enforcement Investigations: Looking towards the European framework as a model for U.S. Data Retention Policy
September 26: Karen Levy — Privacy, Professionalism, and Techno-Legal Regulation of U.S. Truckers
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data