Privacy Research Group


The Privacy Research Group is a weekly meeting of students, professors, and industry professionals who are passionate about exploring, protecting, and understanding privacy in the digital age.

Joining PRG

Because we deal with early-stage work in progress, attendance at meetings of the Privacy Research Group is generally limited to researchers and students who can commit to ongoing participation in the group. To discuss joining the group, please contact Tom McBrien. If you are interested in these topics, but cannot commit to ongoing participation in PRG, you may wish to join the PRG-All mailing list.
 
PRG Student Fellows—Student members of PRG have the opportunity to become Student Fellows. Student Fellows help bring the exciting developments and ideas of the Research Group to the outside world. The primary Student Fellow responsibility is to maintain an active web presence through the ILI student blog, reporting on current events and developments in the privacy field and bringing the world of privacy research to a broader audience. Fellows also have the opportunity to help promote and execute exciting events and colloquia, and even present to the Privacy Research Group. Student Fellow responsibilities are a manageable and enjoyable addition to the regular meeting attendance required of all PRG members. The Student Fellow position is the first step for NYU students into the world of privacy research. Interested students should email Student Fellow Coordinator Tom McBrien with a brief (1-2 paragraph) statement of interest or for more information.


PRG Calendar

 

Fall 2019 [12:45-2:00pm, Furman Hall, 245 Sullivan Street, Room 120]

December 4: Albert Fox Cahn
November 20: Margarita Boyarskaya & Solon Barocas [joint work with Hanna Wallach] — What is a Proxy and Why is it a Problem?

     ABSTRACT: Indirect discrimination via proxy variables is a notorious problem in algorithmic fairness. The goal of non-discrimination restricts the use of sensitive and/or legally protected features in statistical decision-making. However, even when the sensitive feature A is not directly provided as an input into a model, discrimination on the basis of A may persist – intentionally or inadvertently – via the use of so-called proxy variables. Current discussions in the computer science literature take an ad hoc approach in their treatment of proxy variables. We formalize various definitions of a proxy variable and describe the statistical relationships that such definitions entail. We answer the question of what it means to have ‘a proxy problem’ from both statistical and legal perspectives, and offer a detailed survey of the various ways decision-makers might respond to the problem. We challenge the common fallacy of replacing a contentious proxy with another variable that appears to be more relevant to the outcome but is nonetheless correlated with the sensitive feature, highlighting the way important distinctions between worldviews implicate different modeling choices. We suggest causal graphs as a tool for developing a principled approach to deciding whether to include, omit, or collect additional features in a model.
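
To make the proxy problem concrete, here is a minimal sketch (synthetic data and variable names of my own, not from the paper): even when the sensitive feature A is excluded from a model's inputs, a correlated proxy P lets the disparity tied to A persist in the model's scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: A is a sensitive feature, P is a proxy correlated with A,
# and the historical outcome Y depends (unfairly) on A.
A = rng.integers(0, 2, n)                      # sensitive feature (e.g., group membership)
P = np.where(rng.random(n) < 0.8, A, 1 - A)    # proxy: agrees with A 80% of the time
Y = (rng.random(n) < np.where(A == 1, 0.3, 0.6)).astype(int)  # biased historical outcome

# Model trained *without* A, using only the proxy P.
model = LogisticRegression().fit(P.reshape(-1, 1), Y)
scores = model.predict_proba(P.reshape(-1, 1))[:, 1]

# The disparity tied to A persists even though A was never an input.
print("mean score, A = 0:", round(float(scores[A == 0].mean()), 3))
print("mean score, A = 1:", round(float(scores[A == 1].mean()), 3))
```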

November 13: Mark Verstraete & Tal Zarsky — Data Breach Distortions

     ABSTRACT: Data breach notifications are likely to be revised in the seemingly inevitable federal privacy overhaul. In light of the looming regulatory changes, this Article interrogates the structure and efficacy of the diverse set of breach notification statutes. In doing so, this Article makes several crucial innovations to the debate over these legal requirements, their justifications, and the need for them. First, this Article argues that the normative foundations of data breach notification are complicated by overlooked features of cybersecurity and tort law—moral luck and activity levels, respectively. Moral luck is particularly relevant for data breaches because whether a firm experiences a breach is often partly a matter of luck or other external factors. Moreover, moral luck is particularly operative for data breaches because technological changes exacerbate the role of luck and the idiosyncratic disclosure remedy provides more intervention points for luck to operate. In addition to moral luck, the debate over data breach notification has overlooked another foundational inquiry—activity levels. Tort theorists recognize that tort law regulates both duty of care (the precautions a person must take) and activity levels (how often a person undertakes an activity). We examine how different activity levels—or opportunities for breach—are both influenced by and influence the effectiveness of breach notification statutes. To do so, we assess how different activity levels of firms interact with the normative goals of breach notification. Next, we situate the insights from moral luck and activity levels within the debate over the normative goals of data breach notification. In particular, we argue that moral luck and activity levels complicate the validity of different normative foundations for these laws. In doing so, we detail an array of possible normative goals of data breach notification, including deterrence, mitigation, information forcing, and restorative justice. Ultimately, we argue that moral luck and activity levels complicate these different normative values. We also demonstrate how these different normative values conflict with each other and involve inevitable trade-offs. Finally, we build on these earlier complications to craft a more informed data breach notification statute. In doing so, we examine key breach notification statutes, assessing whether and how their elements are properly structured given the distortion effects we discuss. We perform a regulatory design analysis and select which specific features should be included in a model data breach notification statute.

November 6: Aaron Shapiro — Dynamic Exploits: Calculative Asymmetries in the On-Demand Economy

     ABSTRACT: In this article, I argue that on-demand service firms secure their market power by cultivating and operationalising calculative asymmetries between platform managers and workers. Specifically, I analyse dynamic (or ‘surge’) pricing as an exemplary calculative technique. I show how the asymmetrical application of price-setting allows firms to leverage control over their workers at the aggregate level while maintaining the façade of autonomy at the individual level, thereby legitimising workers’ classification as independent contractors but solving the coordination problems the classification introduces. The article’s empirical contribution complements and extends previous critical research into the on-demand economy by analysing how management science models and simulates on-demand marketplaces to identify optimal management strategies. This literature provides novel insights into platform managers’ efforts to monopolise calculative agency at the expense of other market participants. The article concludes by considering the implications of the findings for our broader understanding of on-demand marketplaces. 

October 30: Tomer Kenneth — Who Can Move My Cheese? Other Legal Considerations About Smart-Devices

     ABSTRACT: This is a paper about your relationship with your fridge, which is about to get complicated. Technological innovations turn mundane devices into smart-devices, digitally operated and connected to the internet, part of the Internet of Things (IoT). These technological changes create exciting new possibilities but also pose numerous new legal challenges. Considerable legal scholarship is centered on using privacy and personal data to confront these challenges. Despite their importance, these concepts cannot answer all these challenges. This paper aims to broaden the scope of legal scholarship about IoT. It gazes beyond personal data and privacy issues and highlights other major legal concerns that smart-devices instigate. Drawing on a detailed analysis of the technology and some fundamental legal theory scholarship, this paper explores critical legal interests implicated by smart-fridges and other IoT technologies. It begins by discussing the distribution of first-order and second-order legal powers between the users and other possible operators of the smart-devices, stressing the need to revisit some core legal queries in light of IoT. Next, it explores the nature of personalization of IoT devices as limitations that operators impose on users, and emphasizes new and more pervasive limitations that IoT devices empower operators to impose. Finally, focusing on limitations that impede users’ freedom of choice and negative freedoms, this paper lays out a path for a more nuanced legal discussion about the harms that the imposition of such limitations brings about, juxtaposing the kind of imposing actors with the mode of limitation used. In doing so, the paper addresses vital legal challenges that are posed by IoT technology but are mostly overlooked by current legal scholarship. It aims to kick off a more vibrant conversation about theoretical legal questions, ones that the legal community will have to confront in the coming years as smart-devices become more prevalent. Specifically, it wishes to draw attention to those law and technology questions that are outside the scope of privacy. While this paper will focus on smart-fridges as a primary case-study, the lion’s share of the technological and legal discussions is applicable to other IoT devices and to the general IoT legal discourse.

October 23: Yafit Lev-Aretz & Madelyn Sanfilippo — Privacy and Religious Views

     ABSTRACT: Privacy is often an important value in religion. God’s omnipotence and pervasive surveillance within monotheistic faiths convey a general openness to flows of personal information to a divine recipient and religious leaders. Much has been written on privacy in religious practice and privacy as protecting religious freedoms. However, it is not well understood how religiosity or specific religious values impact individuals’ privacy attitudes, as the pervasiveness of religious values in different contexts varies in highly individualistic ways. Even less is known about the impact of religious values on privacy views with respect to commerce and the use of personal information by the private sector, despite the fact that technology increasingly brings commercialization into religion. As part of a larger research agenda to explore religion and privacy with respect to commercial contexts and consumer behavior, Madelyn and Yafit structure their empirical inquiries around the Contextual Integrity framework. This paper specifically compares privacy practices and policies for Christian, Islamic, and Jewish mobile apps across eleven established functional categories: sacred textual engagement, prayer, meditation, devotional worship, rituals, utilities, wisdom and leaders, media outlets, games, kids, and social media. Results indicate that intra-religious norms limit personal information flows for apps developed by religious actors and for apps categorically associated with religious observation and practice, yet do not limit information collection by commercial developers, for paid apps, or for lifestyle and entertainment apps. Variations were present within religions, as Evangelical Christian apps, in comparison to a Catholic subset of Christian apps, had significantly more permissions granted on average. Further, inter-religious differences are significant, with Islamic apps granted the most permissions and Jewish apps the fewest, on average. Apps from commercial developers often asked for extensive permissions, including access to cameras and microphones that were unassociated with any features. We also found that commercially developed religious children’s apps were often among the most extensive in collecting user data, with many not clearly explaining COPPA compliance in their privacy policies and others removed from app markets during the course of our study for COPPA violations.

October 16: Salome Viljoen — Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought

     ABSTRACT: The field of computer science is in a bind: on the one hand, computer scientists are increasingly eager to address social challenges; on the other, the field faces a growing awareness that many well-intentioned applications of algorithms in social contexts have led to significant harm. We argue that productively moving through this bind requires developing new practical reasoning methods for those engaged in algorithmic work. To understand what such an intervention looks like and what it may achieve, we look to the twentieth-century evolution in American legal thought from legal formalism to legal realism. Drawing on the lessons of legal realism, we propose a new mode of algorithmic thinking—"algorithmic realism"—that is attentive to the internal limits of algorithms as well as the social concerns that fall beyond the bounds of current algorithmic thinking. Algorithmic realism is a practical orientation to work, and thus will not on its own prevent every harmful impact of algorithms. Nevertheless, it will better equip engineers to reason about the sociality of their work, and provide a necessary first step toward reducing algorithmic harms.

October 9: Katja Langenbucher — Responsible A.I. Credit Scoring

     ABSTRACT: A core element of a lender’s decision when handing out a loan is the assessment of the borrower’s creditworthiness. Some of this work is done by the lender himself, for example by performing internal checks and by applying rating models to information he may have at his disposal. Other parts of this task are outsourced to intermediaries, such as data brokers or credit rating agencies. The latter deliver credit scores based on their proprietary rating methodology. [Abstract continued]

October 2: Michal Shur-Ofry — Robotic Collective Memory

     ABSTRACT: The various ways in which robots and AI will affect our future society are at the center of scholarly attention. This essay, conversely, concentrates on their possible impact on humanity’s past, or more accurately, on the ways societies will remember their joint past. We focus on the emerging use of technologies that combine AI, cutting-edge visualization techniques, and social robots, in order to store and communicate recollections of the past in an interactive human-like manner. We explore the use of these technologies by remembrance institutions and their potential impact on collective memory. Taking a close look at the case study of NDT (New Dimensions in Testimony), a project that uses ‘virtual witnesses’ to convey memories from the Holocaust and other mass atrocities, we highlight the significant value, and the potential vulnerabilities, of this new mode of memory construction. Against this background, we propose a novel concept of memory fiduciaries that can form the basis for a policy framework for robotic collective memory. Drawing on Jack Balkin’s concept of “information fiduciaries” on the one hand, and on studies of collective memory on the other, we explain the nature of and the justifications for memory fiduciaries. We then demonstrate, in broad strokes, the potential implications of this new conceptualization for various questions pertaining to collective memory constructed by AI and robots. By so doing, this Essay aims to start a conversation on the policies that would allow algorithmic collective memory to fulfill its potential, while minimizing its social costs. On a more general level, it brings to the fore a series of important policy questions pertaining to the intersection of new technologies and inter-generational collective memory.

September 25: Mark Verstraete — Inseparable Uses in Property and Information Law

     ABSTRACT: Property law generally provides owners broad discretion over how to use the things they control. A person who purchases a car can determine whether it is best used for transportation or conceptual art. That said, ownership is still loosely constrained by general private and public law obligations. For instance, tort law prevents owners from using a souvenir baseball bat to strike people. At bottom, though, decisions about use are so central to ownership that one prominent property scholar—Larissa Katz—suggests that ownership exists, in part, to grant authority over who can determine how to use a thing. However, some things do not fit squarely within the prevailing property paradigm that only reluctantly scrutinizes downstream uses. For example, body parts, rights of publicity, personal information, and creative works reside at the edge of property and lead to thorny questions about uses after acquisition. Broadly, these things maintain a connection to specific people, even after transfer, which arguably justifies imposing restrictions on potential uses of these things. This Article focuses on these ambiguous cases in order to assess the wisdom of crafting special rules about downstream uses, as well as to create a theory about when a thing is inseparable from the person that vindicates these limitations. And further, this Article argues that separability—or the circumstances in which a thing is distinct from particular people—should determine the scope and content of potential use restrictions. In order to develop this theory, however, this Article offers a new vision of separability that more closely scrutinizes and considers uses, arguing that separability depends on both the connection a thing has to the person and how it is used. Moral philosophers (such as Immanuel Kant and G.W.F. Hegel) as well as contemporary property theorists have attempted to provide a conceptual analysis of separability but have largely overlooked the importance of use for this analysis. This Article breaks new ground by arguing that separability is a function of both the connection that a thing retains to a person and how it is used. The normative upshot of this approach is that it provides guidance for policymakers and theorists to distinguish between uses that are connected to people—and likely justify regulatory intervention—and uses that are distinct and should be less searchingly reviewed. After developing this theory, this Article applies its insights to contested cases from property law and information law. Separability provides crucial insights about potential use restrictions for rights of publicity, creative works, and body parts. Within information law, separability marks a new roadmap for the governance of personal data. Rather than focusing on collection, regulatory interventions should focus more squarely on potential uses. And further, uses that are inseparable from the person should be the focus of intervention.

September 18: Gabe Nicholas & Michael Weinberg — Data, To Go: Privacy and Competition in Data Portability

     ABSTRACT: Today’s major social media platforms face serious scrutiny around privacy and anti-competitive behavior. Data portability, the principle that users should be able to move their data from one service to another, has been hailed as a way to improve competition without compromising privacy. As the logic goes, portability offers users ownership over their data by allowing them to download it, and at the same time, offers competitors opportunities to innovate, by letting them build and grow new products with uploaded incumbent data. This presentation will argue that, in the context of social network data, these dual goals of privacy and competitiveness from data portability are incompatible. When incumbents choose which data to make available in a portability regime, decisions that benefit the privacy of uploaders and third-party users harm the utility to competitors, and vice versa. There are at least three ways competitors might use social network data to create new products: to seed new profiles on a competing platform, to offer insights and novel applications through machine learning, or to recreate features from the incumbent platform and allow users to migrate over. This presentation will consider the privacy/competition trade-off for each of these three scenarios by looking at real data from Facebook’s portability platform, Download Your Information. In all three, the data Facebook makes available is likely insufficient to bring about meaningful competition. This could be improved by allowing users to export their social graphs, adding globally unique identifiers, or increasing the contextual data made available. However, these changes would compromise the privacy of uploading users and their connections, even ones who did not upload their own data to the new platform. Regulators who consider incorporating data portability should be specific with their goals, and in the case of social networks, choose between encouraging private data ownership and competition.

September 11: Ari Waldman — Privacy, Discourse, and Power

     ABSTRACT: This project is about the discourses of privacy and privacy law. It constructs the landscape of privacy discourse, where it has been, where it is going, and who it empowers along the way. The dominant discourse of privacy today, often called "notice and consent", is explicitly neoliberal. This regime has been roundly criticized by privacy scholars as a failure. And yet, for all its faults, notice-and-consent always made sense from a sociological or phenomenological perspective. That is, it was inadequate yet scrutable; because of the latter, we determined the former. Neoliberal privacy law is ineffective, but it was always accessible and open for interrogation from the ground up. That inadequate, yet relatable discourse, however, is now losing ground to the inscrutable, unaccountable discourse of technology designers. I argue that the same neoliberal social, political, and legal forces, superpowered by more advanced technology and a more powerful technologist profession, are shifting privacy law discourse from accessible concepts like choice to inaccessible computer code, from something regulators could interrogate to the “black box”  language of technology. The discourse of privacy law and, thus, power over its translation into practice, resides in the design team, where engineers, supervised by other engineers, make consequential choices about how, if at all, to interpret the requirements of privacy law and integrate them into the code of technologies they create. Based on primary source research, this project argues that the code-based discourse of engineers is gaining hegemonic power in privacy law, thereby defining privacy law and what it means in practice, stacking the deck against robust privacy protections, and undermining the promise of privacy laws already passed.


Spring 2019

April 24: Sheila Marie Cruz-Rodriguez — Contractual Approach to Privacy Protection in Urban Data Collection
     ABSTRACT: Data collection and data analysis methods are involved in a continuous iterative process; as a result, the tools we use to safeguard certain social interests may lag when compared against technology’s rapid pace. This article is written under the assumption that a sensor program is to be deployed in a jurisdiction[1] that has recognized a right to locational privacy, making the collection of certain urban data unlawful absent contractual agreement between the parties or a specific regulation exempting such collection from a consent requirement. This analysis only includes data collection by private parties for commercial purposes; issues arising from government data collection practices are outside the scope of this article. The article considers the effectiveness of individual and community methods for solving the issues arising from urban data collection. In the first section, I evaluate an individual method by examining whether the contractual approach to urban data collection effectively safeguards consumer privacy interests. This is achieved by examining the notice and consent elements of contracts. Next, I analyze the challenges facing each element of the contractual approach along with its exceptions as recognized in the relevant literature (de-identification and anonymization) in order to conclude that in my view they are inadequate approaches to protect consumer privacy. The second section considers the community method under three main approaches. First, the implementation of a consent board that would be a party to the data collection contract and would analyze the relevant terms in order to either supply or withhold consent for the collection of urban data. This approach would be a community solution that aims to solve several issues arising from the problems related to notice and consent. Next, a Civic Data Trust that would manage the requirements for collection and access to the data itself. The Data Trust would be a local entity that would have control over the urban data. Lastly, a private non-profit entity authorized by the legislative power to develop and enforce rules regarding urban data collection, use, and retention practices. Finally, the article discusses whether the challenges identified in the previous sections have been addressed by the alternatives discussed and identifies areas for further research.


[1] As an example, in R. v. Jarvis, 2019 SCC 10, the Supreme Court of Canada ruled on February 14, 2019, that individuals are entitled to a reasonable expectation of privacy in public spaces. This decision set a precedent that a person’s reasonable expectation of privacy can no longer be based purely on one’s location, but instead on a “totality of circumstances”.

April 17: Andrew Selbst — Negligence and AI's Human Users
     ABSTRACT: Negligence law is often asked to adapt to new technologies. So it is with artificial intelligence (AI). But AI is different. Drawing on examples in medicine, financial advice, data security, and driving in semi-autonomous vehicles, this Article argues that AI poses serious challenges for negligence law. By inserting a layer of inscrutable, unintuitive, and statistically-derived code in between a human decisionmaker and the consequences of that decision, AI disrupts our typical understanding of responsibility for choices gone wrong. The Article argues that AI’s unique nature introduces four complications into negligence: 1) unforeseeability of specific errors that AI will make; 2) capacity limitations when humans interact with AI; 3) introducing AI-specific software vulnerabilities into decisions not previously mediated by software; and 4) distributional concerns based on AI’s statistical nature and potential for bias. Tort scholars have mostly overlooked these challenges. This is understandable because they have been focused on autonomous robots, especially autonomous vehicles, which can easily kill, maim, or injure people. But this focus has neglected to consider the full range of what AI is. Outside of robots, AI technologies are not autonomous. Rather, they are primarily decision-assistance tools that aim to improve on the inefficiency, arbitrariness, and bias of human decisions. By focusing on a technology that eliminates users, tort scholars have concerned themselves with product liability and innovation, and as a result, have missed the implications for negligence law, the governing regime when harm comes from users of AI. The Article also situates these observations in broader themes of negligence law: the relationship between bounded rationality and foreseeability, the need to update reasonableness conceptions based on new technology, and the difficulties of merging statistical facts with individual determinations, such as fault. This analysis suggests that though there might be a way to create systems of regulatory support to allow negligence law to operate as intended, an approach to oversight that is not based in individual fault is likely to be more fruitful.

April 10: Sun Ping — Beyond Security: What Kind of Data Protection Law Should China Make?
     ABSTRACT: In September 2018, the draft Personal Information Protection Law was listed for the first time as a Class I item in the five-year legislative plan of the 13th Standing Committee of the National People’s Congress (NPCSC), which means China will finally produce an official draft of a data protection law (DPL) in the near future. As early as 2003, amid the rapid development of e-government, the Chinese government began attempting to draft a DPL, but five years later the effort had failed without any result, apart from an expert proposal drafted in the European style. From around 2010, security in cyberspace gradually became the dominant concern in public and political opinion, which provided a new incentive for Chinese DPL legislation. From then on, a new legislative mode, the security-driven mode, appeared in the articles and sections related to data protection scattered across criminal, civil, and administrative law. However, legislation under the security-driven mode covers only a limited part of the private sector, targets merely data breaches and illegal acquisition, lacks an impartial enforcement mechanism, and hardly balances competing values. Meanwhile, with rapid technological development and application, initiatives for collecting and using personal data in the public and private sectors have become more and more ambitious in the absence of a comprehensive legal framework for data protection. More importantly, because China is still struggling through its transition, the current data protection laws have no roots in constitutional values and hence no compatible, well-developed theory. The incoming draft DPL is an opportunity for China to re-comprehend its practice (datalization, centralization, and preconditionalization), to structure its own theory (Marxism and the informational person), to draw on experiences and lessons from Europe and the U.S., and to mitigate the risk of international misunderstanding and distrust.

April 3: Moran Yemini — Missing in "State Action": Toward a Pluralist Conception of the First Amendment
     ABSTRACT: Online speech intermediaries, particularly social platforms, have an enormous impact on internet users' freedom of expression. They determine the speech rules for most of the content generated and information exchanged today, and routinely interfere with users' speech, while enjoying practically unchecked power to block, filter, censor, manipulate, and surveil. Accordingly, our current system of free expression lacks one of the main requirements of a just system - the notion that no form of power is immune from the question of legitimacy. Scholarly responses to this situation tend to assign decreased weight to constitutional norms as means to impose duties on online intermediaries and promote internet users' speech, while focusing, instead, on other means, such as non-legal norms, legislative and administrative regulation, and technological design. This Article will swim against this current, arguing that a speech-promoting environment cannot be sustained without an effective constitutional check on online intermediaries' exercise of power. Unfortunately, existing First Amendment doctrine poses high barriers for structural reform in the existing power relations between online intermediaries and their end users: (1) the "state action" doctrine prevents users from raising speech-related claims against online intermediaries; and (2) an expansive interpretation of what constitutes "speech" serves as a Lochnerian vehicle for intermediaries to claim immunity from government regulation. This Article will discuss these doctrinal barriers, as well as possible modifications to existing doctrine, which could create an environment more supportive of users' speech. However, more importantly, the main contribution of this Article to existing scholarship centers on the argument that a deeper reassessment of traditional doctrinal assumptions is required in order for the First Amendment to fulfill its speech-protecting role in the digital age. The underlying premise of traditional thinking about speech-related constitutional conflicts conceptualizes such conflicts as necessarily bipolar, speaker-government, equations. Accordingly, courts and scholars ordinarily focus on asking whether "the state" is present on one side of the equation or whether "a speaker" exists on the other. This way of thinking about speech-related conflicts suffers from grave limitations when trying to cope with the realities of networks comprised of multiple speakers and multiple censors/regulators (with potential overlaps between these categories). The bipolar conception of the First Amendment is simply incompatible with the type of conflicts that pluralist networks generate. Consequently, if the First Amendment is to have a significant speech-protective meaning in the digital ecosystem, a more sophisticated analysis than the reigning bipolar conception of the First Amendment is necessary. This Article will propose such an alternative analysis, which shall be denominated a pluralist conception of the First Amendment.

March 27: Nick Vincent — Privacy and the Human Microbiome
     ABSTRACT: By even the most modest estimates, an individual human being may harbor over 30 trillion bacterial cells within and on her body. These bacterial communities thrive on our skin and hair, in our mouths, and throughout our digestive tracts, and they have the ability to carry and communicate far more information about us than we may at first realize. The composition of these communities is affected by what we eat, where we live, medications we have taken, and even the status of our overall health. This talk will address the burgeoning interest in using bacterial communities and the human microbiome in the spheres of forensic science and, more pertinent to our goals, privacy, before diving into an open discussion on the promises, drawbacks, and challenges of this technology. Is the thought of identifying the presence of a murderer at a crime scene, or using targeted advertising based on the bacterial communities in and around us, the stuff of science fiction? Or is it just around the corner?

March 13: Nick Mendez — Will You Be Seeing Me in Court? Risk of Future Harm, and Article III Standing After a Data Breach
     ABSTRACT: We examine the circuit split that has developed in the U.S. federal court system regarding an individual's ability to sue in federal court after their data has been compromised through a data breach or leak but has not been exploited in any other way. Following the U.S. Supreme Court's decision in Spokeo v. Robins, a plaintiff must show concrete and particularized harm to maintain standing to sue in federal court. Under Clapper, this may include imminent harm, including a substantial risk that the harm will occur. Four circuit courts (1st; 2d; 4th; 8th) have held that the mere exposure of data is insufficient to sustain an assertion that there is a substantial risk of harm, meaning therefore that the data breach victim has not experienced an injury in fact, and lacks standing to sue. Five circuits (DC; 3d; 6th; 7th; 9th) have held the opposite: that data in the hands of a hacker creates a substantial risk of harm that is sufficient to constitute an injury in fact, conferring standing on a data breach victim. Currently before the Court is a petition for certiorari on the 9th Cir. case In re Zappos that would potentially resolve the issue. We weigh the costs and benefits of each theory, and assess its applicability to other areas of privacy law. Attached as background reading are the cert petition and brief in opposition from Zappos.

March 6: Jake Goldenfein — Through the Handoff Lens: Are Autonomous Vehicles No-Win for Users?
     ABSTRACT: There is a great deal of hype around the potential social benefits of autonomous vehicles. Rather than simply challenge this rhetoric as not reflecting reality, or as promoting certain political agendas, we try to expose how the transport models described by tech companies, car manufacturers, and researchers each generate different political and ethical consequences for users. The paper explores three archetypes of autonomous vehicles - fully driverless cars, advanced driver assist systems, and connected cars. Within each archetype we describe how the components and actors of driving systems might be redistributed and transformed, how those transformations are embedded in the human-machine interfaces of the vehicle, and how that interface is largely determinative of the political and value propositions of autonomous vehicles for human ‘users’ – particularly with respect to privacy, autonomy, and responsibility. To that end, this paper introduces the analytical lens of ‘handoff’ for understanding the ramifications of the different configurations of actors and components associated with different models of autonomous vehicle futures. ‘Handoff’ is an approach to tracking societal values in sociotechnical systems. It exposes what is at stake in transitions of control between different components and actors in a system, i.e. human, regulatory, mechanical or computational. The handoff analytic tracks the reconfigurations and reorientations of actors and components when a system transitions for the sake of ‘functional equivalence’ (for instance through automation). The claim is that, in handing off certain functions to mechanical or computational systems like giving control of a vehicle to an automated system, the identity and operation of actor-components in a system change, and those changes have ethical and political consequences. Thinking through the handoff lens allows us to track those consequences and understand what is at stake for human values in the visions, claims and rhetoric around autonomous vehicles.

February 27: Cathy Dwyer — Applying the Contextual Integrity Framework to Cambridge Analytica
     ABSTRACT: A continuing challenge for the Digital Privacy research community has been the identification and articulation of norms regarding information flow. The Contextual Integrity Framework (CI) gives us a tool to discover these norms. An application of CI to 2,011 news articles about the Cambridge Analytica revelations of 2018 found evidence that mini-contexts could be useful in identifying norms regarding information flow that are fast-evolving and rarely articulated. These mini-contexts emerge in circumstances where norms and transmission principles are clearly expressed. I define a mini-context as a narrowly specified situation where information flow is clearly articulated and described. I call them mini-contexts, because they have a much reduced scope of relevance than broader contexts, such as those associated with the flow of medical or educational information. The political consulting firm Cambridge Analytica carried out extensive collection of personal data from Facebook, and used that data to develop predictive models of individuals in order to target political advertising during the 2016 ‘Brexit’ vote, and the US presidential election that same year. The result of this revelation was an international uproar, and spurred a global discussion about the use of personal information. From the Cambridge Analytica case, I will discuss two mini-contexts. The first is the “I Accept” mini-context. A recurring topic in the discussion around Cambridge Analytica is the degree to which individual users are themselves responsible for giving up their privacy. This context describes what information can be shared when a person clicks ‘I Accept,’ without reading the policy or terms. The second mini-context centers around the evaluation of personality types using data from online behavior. In this mini-context, there is evidence that the application of personality evaluations without explicit consent is a violation of appropriate information flows.

 

February 20: Ignacio Cofone & Katherine Strandburg — Strategic Games and Algorithmic Transparency
     ABSTRACT: We challenge a commonly held belief in industry, government, and academia: that algorithmic decision-making processes are better kept opaque or secret because otherwise people may “game the system,” leading to inaccurate or unfair results. We first show that the situations in which people can game the system, even with all information about the decision process, are narrow, and we suggest how to identify such situations. This depends on the proxies used: how easy they are to fake, and whether such “faking” changes the underlying feature that is measured. We then develop normative considerations to determine, within the subset of situations where decision subjects can effectively game, when this gaming justifies opacity and when transparency should be mandated irrespective of gaming. This should depend on the social costs of false positives and false negatives and the accuracy of proxies. Proxies with very high or very low false positives and false negatives should be disclosed, and whether proxies with high false positives and low false negatives, or the converse, should be disclosed depends on the relative social costs of each. In this way, we show that the situations in which algorithmic secrecy is justified are much narrower than is normally assumed, and we thereby hope to advance the discussion on algorithmic transparency.
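
As a rough numerical illustration of the disclosure calculus sketched above (my own toy figures, not the authors' model), one can compare the expected social cost of keeping a proxy secret with the cost of disclosing it and absorbing some gaming:

```python
# Toy comparison: expected social cost of a decision rule under secrecy vs.
# disclosure, given assumed error rates and the relative costs of errors.

def expected_cost(false_pos_rate: float, false_neg_rate: float,
                  cost_fp: float, cost_fn: float) -> float:
    return false_pos_rate * cost_fp + false_neg_rate * cost_fn

# Assumed figures, purely for illustration.
secret = expected_cost(false_pos_rate=0.05, false_neg_rate=0.10,
                       cost_fp=1.0, cost_fn=3.0)
# Disclosure lets some decision subjects game the proxy, shifting the error profile.
disclosed = expected_cost(false_pos_rate=0.08, false_neg_rate=0.07,
                          cost_fp=1.0, cost_fn=3.0)

print("expected social cost if kept secret:", secret)
print("expected social cost if disclosed:  ", disclosed)
print("transparency preferable on these assumptions:", disclosed < secret)
```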

February 13: Yan Shvartshnaider — Going Against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis
     ABSTRACT: We present a method for annotating and analyzing privacy policies that is theoretically grounded in the framework of contextual integrity (CI). Unlike previous annotation techniques, CI helps formalize and detect issues with descriptions of data collection and exchange. We demonstrate this method with a case study comparing Facebook’s privacy policy before and after the Cambridge Analytica scandal. Surprisingly, our analysis shows that despite providing additional details on information handling practices, the updated policy suffers from a substantial increase in ambiguous statements due to missing or excessive contextual details, such as what information is being transferred, from whom, by whom, to whom, and under what conditions. We also demonstrate that our approach can scale using crowdworking to produce CI annotations of large privacy policy corpora. We perform a proof-of-concept experiment with 99 crowdworkers annotating 48 excerpts of privacy policies from 17 companies. The resulting high precision annotations indicate that CI annotation and analysis could be widely applied to privacy policies in future research.
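
A minimal sketch of the idea behind a CI annotation (illustrative only, not the authors' annotation tool or schema): each described information flow is recorded as a five-parameter tuple, and a policy statement is flagged as ambiguous when any parameter is missing.

```python
from dataclasses import dataclass
from typing import List, Optional

# A contextual integrity (CI) flow is described by five parameters; a privacy
# policy statement that omits any of them is treated as ambiguous.
@dataclass
class CIFlow:
    sender: Optional[str]
    recipient: Optional[str]
    subject: Optional[str]
    attribute: Optional[str]                # what information is transferred
    transmission_principle: Optional[str]   # the condition under which it flows

    def missing_parameters(self) -> List[str]:
        return [name for name, value in vars(self).items() if value is None]

# Example annotation of a hypothetical policy sentence:
# "We share your contact information with advertising partners."
flow = CIFlow(sender="the service", recipient="advertising partners",
              subject="the user", attribute="contact information",
              transmission_principle=None)

print("ambiguous:", bool(flow.missing_parameters()),
      "| missing:", flow.missing_parameters())
```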

January 30: Sabine Gless — Predictive Policing: In Defense of 'True Positives'
     ABSTRACT: Predictive policing has triggered a heated debate around the issue of false positives. Biased machine training can wrongly classify individuals as high risk simply as a result of belonging to a particular ethnic group, and many agree such persons should not have to shoulder the burden of over-policing due to an inherent stochastic problem. This provocation, however, makes a case for the ‘true positives’. It claims that those who are caught red-handed, as a consequence of biased police profiling, offer the best opportunity to address the issue of biased profiling, as they have a high incentive to raise the problem of discrimination during criminal proceedings. While the line of argument starts with a purely pragmatic consideration, it can be grounded in a more general argument that discriminatory stops and searches are undesirable as inherently unfair and as a threat to social peace. To create an efficient legal tool against discriminatory law enforcement, the defence should be entitled to contest a conviction based on biased predictive policing, with a specific exclusionary rule protecting ‘true positives’ against the use of tainted evidence.


Fall 2018

December 5: Discussion of current issues

November 28: Ashley Gorham — Algorithmic Interpellation
     ABSTRACT: The use of algorithmic logic for purposes as different as military strikes and targeted advertisements alone ought to alarm us. And yet, despite their rapidly increasing presence in our lives, our understanding of where and how algorithms are used, as well as their material effects, remains minimal. To be fair, algorithms are a technical and therefore unsurprisingly intimidating topic, and just what an algorithm is is not immediately obvious to many, if not most, people. Even among those who think they have a sense of what an algorithm is, it is still hard to define. As Tarleton Gillespie (2016) notes, as social scientists, “[w]e find ourselves more ready to proclaim the impact of algorithms than to say what they are” (18). With this in mind, and in light of the pervasiveness of algorithms in contemporary society, we set out to clarify the operations of algorithms through the use of Althusser’s theory of ideology, and in particular his concept of interpellation. It is our main contention that algorithms operate as mechanisms of capitalist interpellation and that a proper understanding of algorithms must appreciate this aspect of their workings. The argument will proceed as follows: first, we will offer a brief and admittedly incomplete overview of the ways in which other scholars have conceptualized algorithms. Second, we will examine Althusser’s theory of ideology, and, as his theory is a complicated one, we will discuss it in some detail. Finally, we will apply Althusser’s theory to the operations of algorithms, considering how an algorithm is well understood as a mechanism that “gives us a name.”

November 14: Mark Verstraete — Data Inalienabilities
     ABSTRACT: This paper explores the theoretical links between personal information and alienability. More specifically, I present a conceptual framework for thinking about limitations on the alienability of personal data. To that end, I argue that restrictions on the alienability of personal data are justifiable based on both analogies to other objects that are subject to limitations on transfer and the unique nature of personal data. One set of alienability limitations is present in Intellectual Property and constrains the alienability of creative works. For instance, Copyright’s Termination Transfer Right gives authors an inalienable option to regain rights in their creative works after a set period of years. Similarly, the doctrine of moral rights allows authors to retain some control over their work even after sale—preventing purchasers from destroying or altering the work. A second suite of alienability restrictions governs entitlements that are intimately bound up in the body and personhood. Third, and finally, personal data is unique. Unlike many other artifacts that are transferred or sold, personal data cannot be fully severed from the people to whom the data refers. By contrast, traditional commodities like cars or furniture do not relate back to previous owners in the way that personal data does. Data subjects, on the other hand, have a continuing interest in the use of data about them that cannot be fully extinguished by transfer or sale.

November 7: Jonathan Mayer — Estimating Incidental Collection in Foreign Intelligence Surveillance
     ABSTRACT: Section 702 of the Foreign Intelligence Surveillance Act (FISA) authorizes the Intelligence Community to acquire communications within the United States when targeting non-U.S. persons outside the United States. Because of the increasingly global nature of communications, Section 702 intercepts foreseeably involve communications where a U.S. person is a party. This property of Section 702, dubbed "incidental collection," has been a subject of controversy for over a decade because it involves acquisition of a U.S. person's communications without a probable-cause warrant. Lawmakers on both sides of the aisle have called on the Intelligence Community to estimate the scale of incidental collection, in order to better understand how Section 702 operates and weigh Fourth Amendment considerations. Senior national security officials in the Obama and Trump administrations have acknowledged the value of estimating incidental collection, and the Intelligence Community has assessed possible methodologies and called for input from outside experts. In this session, I will present preliminary results from a working group on estimating incidental collection under Section 702. Princeton CITP convened leaders in national security, surveillance law, and privacy-preserving computation for a daylong session this summer in order to explore the problem and consider new methodologies. The group's scope was narrowly focused on estimation; it did not address broader policy or legal considerations for Section 702. I will explain points of consensus on data sources, statistics, and rejected methodologies, and I will present a possible path forward that leverages the latest privacy-preserving computation techniques to estimate incidental collection.
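
Purely as an illustration of the general idea (this is not the working group's methodology, and the numbers are invented), one way to estimate an incidental-collection rate from a labeled sample while protecting individual classifications is to release only a noised count:

```python
import numpy as np

# Hypothetical sketch: estimate the fraction of sampled communications that
# involve a U.S. person, releasing the count with Laplace noise so that no
# single communication's classification is revealed exactly.
rng = np.random.default_rng(1)

sample_size = 5_000
true_rate = 0.12                                  # assumed underlying rate
labels = rng.random(sample_size) < true_rate      # analyst's label for each sampled item

epsilon = 0.5                                     # privacy budget for the released count
noisy_count = labels.sum() + rng.laplace(scale=1.0 / epsilon)
estimate = noisy_count / sample_size

print(f"noisy estimate of incidental-collection rate: {estimate:.3f}")
```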

October 31: Sebastian Benthall — Trade, Trust, and Cyberwar
     ABSTRACT: In recent years, several nations have passed new policies restricting the use of information technology of foreign origin. These cybersecurity trade policies are legitimized by narratives around national security and privacy while also having "protectionist" economic implications. This talk frames cybersecurity trade policies in the broader theoretical context of war and trade. It then examines how cyberwar is different from other forms of trade conflict, and what implications this has for the potential for broad economic and political alignment on cybersecurity.

October 24: Yafit Lev-Aretz — Privacy and the Human Element
     ABSTRACT: The right to privacy has traditionally been discussed in terms of human observation and the formation of subsequent opinion or judgment. Starting with Warren and Brandeis' "right to be let alone," and continuing with the privacy torts, the early days of privacy in the legal sphere placed crucial emphasis on human presence. Oft-made arguments such as "I've got nothing to hide" on the one hand, and "you are being watched" on the other, go to the heart of the human element, which became an intuitive component around which the right to privacy has been structured, evolved, and interpreted over the years. Nowadays, however, most information flows do not involve a human in the loop, and while we are pretty uncomfortable with human observation and subsequent judgment, algorithmic observation and judgment do not provoke similar discomfort. This discrepancy can account for the privacy paradox, which refers to the difference between stated positions on information collection and widespread participation in it. It can also explain the significant expansion of the privacy bundle in the past decade to include concerns such as discrimination, profiling, unjust enrichment, and online manipulation. In my work, I point to the failure of privacy as a policy goal and build on the work of Priscilla Regan and Dan Solove to locate this failure, beyond the use of wrong metaphors and the individual focus, in the mismatch between the strong human presence in privacy intuitions and the modern surveillance culture that increasingly capitalizes on diverse means of humanless tracking. Consequently, I call for a conceptual shift that keeps privacy within the boundaries of the human element and discusses all other informational risks under a parallel paradigm of legal protection.

October 17: Julia Powles — AI: The Stories We Weave; The Questions We Leave
     ABSTRACT: It has become almost automatic. While public conversation about artificial intelligence readily diverts into problems of the long future (the rise of the machines) and ingrained past (systemic inequality, now perpetuated and reinforced in data-driven systems), a small cadre of tech companies amasses unprecedented power on a planetary scale. This talk is an exploration and invitation. It interrogates the debates we have, and those we need, about AI, algorithms, rights, regulation, and the future. It examines what we talk about, why we talk about it, what we should ask and solve instead, and what is required to spur a richer, more imaginative, more innovative conversation about the world we wish to create.

October 10: Andy Gersick — Can We Have Honesty, Civility, and Privacy Online? Implications from Evolutionary Theories of Animal and Human Communication
     ABSTRACT: Early internet optimism centered on two unique affordances of online interaction that were expected to empower disenfranchised and diasporic groups: the mutability of online identities and the erasure of physical distance. The ability to interact from the safety of a distant and sometimes hidden vantage has remained a core feature of online social life, codified in the rules of social-media sites and considered in discussion of legal privacy rights. But it is now far from clear that moving our social lives online has “empowered” the disenfranchised, on balance. In fact, the disembodied and dispersed nature of online communities has increasingly appeared to fuel phenomena like trolling, cyberbullying and the deliberate spread of misinformation. Science on the evolution of communication has a lot to say about how social animals evaluate the trustworthiness of potential mates and rivals, allies and enemies. Most of that work shows that bluffing, false advertisement and other forms of deceptive signaling are only held in check when signal-receivers get the chance to evaluate the honesty of signal-producers through direct and repeated contact. It’s a finding that holds true across the animal kingdom, and it has direct implications for our current socio-political discourse. The antagonistic trolls and propagandistic sock-puppets that have invaded our politics are using deceptive strategies that are as old as the history of communication. What’s new, in human social evolution, is our vulnerability to those strategies within a virtual environment. I will discuss elements of evolutionary theory that seem relevant to online communication and internet privacy, and I hope to have a dialogue with attendees about (a) how those theories intersect with core elements of internet privacy law, and (b) whether we have to alter our basic expectations about online privacy if we want social-media interactions that favor cooperation over conflict.

October 3: Eli Siems — The Case for a Disparate Impact Regime Covering All Machine-Learning Decisions
     ABSTRACT: The potential for Machine Learning (ML) tools to produce discriminatory models is now well documented. The urgency of this problem is compounded both by the rapid spread of these tools into socially significant decision structures and by the unique obstacles ML tools pose to the correction of bias. These unique challenges fit into two categories: (1) the uniquely obfuscatory nature of correlational modeling and the threat of proxy variables standing in for impermissible considerations, and (2) the overriding tendency of ML tools to “freeze” historical disparities in place, and to replicate and even exacerbate them. Currently, two ML tools with identical biases stemming from identical issues will be reviewed differently depending on the context in which they are utilized. Under Title VII, for example, statistical evidence of discrimination would be sufficient to initiate a claim, but the same claim under the Constitution would be dismissed at the pleading stage without additional evidence of intent to discriminate. This paper attempts to work within the (profoundly flawed) strictures of existing Constitutional and statutory law to propose the adoption of a unified, cross-contextual regime that would allow a plaintiff challenging the decisions of an ML tool to utilize statistical evidence of discrimination to carry a claim beyond the initial pleading stage, empowering plaintiffs to demand a record of a tool’s design and the data upon which it trained. In support of extending a disparate impact regime to all instances of ML discrimination, I carefully analyze the Supreme Court’s treatment of statistical evidence of discrimination under both the Fourteenth Amendment and under statutory Civil Rights law. While the Supreme Court has repeatedly disavowed the application of disparate-impact style claims to Fourteenth Amendment Equal Protection, I argue that, for myriad reasons, its stated logic in doing so does not hold when the decision-maker in question is an ML tool. By analyzing Equal Protection holdings from the fields of government employment, death penalty sentencing, policing, and risk assessment as well as holdings under Title VII of the Civil Rights Act, the Fair Housing Act, and the Voting Rights Act, I identify the contextual qualities that have factored into the Court’s decisions to allow or disallow disparate impact evidence. I then argue that the court’s own reasoning in barring the use of such evidence in contexts like death penalty sentencing and policing decisions cannot apply to ML decisions, regardless of context.
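
As background for the statistical-evidence discussion above, one common illustration (my own hypothetical numbers, not the paper's test) is the "four-fifths rule" used in disparate-impact analysis: compare the selection rates a model produces for different groups.

```python
# Hypothetical selection outcomes produced by an ML tool, by group.
decisions = {
    "group_a": (48, 100),  # (number selected, number of applicants) -- assumed figures
    "group_b": (30, 100),
}

rates = {group: selected / total for group, (selected, total) in decisions.items()}
impact_ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)
print(f"disparate impact ratio: {impact_ratio:.2f}")
print("below 0.80 threshold (prima facie disparity):", impact_ratio < 0.80)
```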

September 26: Ari Waldman — Privacy's False Promise
     ABSTRACT: Privacy law—a combination of statutes, constitutional norms, regulatory orders, and court decisions—has never seemed stronger. The European Union’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CalCPA) work in parallel with the Federal Trade Commission’s broad regulatory arsenal to put limits on the collection, use, and manipulation of personal information. The United States Supreme Court has reclaimed the Fourth Amendment’s historical commitment to curtail pervasive police surveillance by requiring warrants for cell-site location data. And the EU Court of Justice has challenged the cross-border transfer of European citizens’ data, signaling that American companies need to do far more to protect personal information. This seems remarkably comprehensive. But the law’s veneer of protection is hiding the fact that it is built on a house of cards. Privacy law is failing to deliver its promised protections in part because the responsibility for fulfilling legal obligations is being outsourced to layers of compliance professionals who see privacy law through a corporate, rather than substantive, lens. This Article provides a comprehensive picture of this outsourcing market and argues that the industry’s players are having an outsized and constraining impact on the construction of privacy law in practice. Based on original primary source research into the ecosystem of privacy professionals, lawyers, and the third-party vendors on which they increasingly rely, I argue that because of a multilayered process of outsourcing corporate privacy duties—one in which privacy leads outsource privacy compliance responsibilities to their colleagues, their lawyers, and an army of third-party vendors—privacy law is in the middle of a process of legal endogeneity: mere symbols of compliance are replacing real progress on protecting the privacy of consumers.

September 19: Marijn Sax — Targeting Your Health or Your Wallet? Health Apps and Manipulative Commercial Practices
     ABSTRACT: Most popular health apps (e.g. MyFitnessPal, Headspace, Fitbit) are not just helpful tools aimed at improving the user's health; they are also commercial services that use the idea of health to monetize their user base. In order to do so, popular health apps (1) rely on advanced analytical tools to 'optimize' monetization, and (2) propagate a rather particular health discourse aimed at making users understand their own health in a way that serves the commercial interests of health apps. Given that health is very important to people, and that health apps often try to mask their commercial intentions by appealing to the user's health, I argue that commercial health app practices are potentially manipulative. I offer a conception of manipulation to help explain how health app users could be manipulated by health apps. To address manipulation in health apps, it would be wise not only to focus on questions of informational privacy and data protection law, but also to consider decisional privacy and unfair commercial practice law.

September 12: Mason Marks — Algorithmic Disability Discrimination
     ABSTRACT: In the Information Age, we continuously shed a trail of digital traces that are collected and analyzed by corporations, data brokers, and government agencies. Using artificial intelligence tools such as machine learning, they convert these traces into sensitive medical information and sort us into health and disability-related categories. I have previously described this process as mining for emergent medical data (EMD) because the health information inferred from digital traces often arises unexpectedly (and is greater than the sum of its parts). EMD is employed in epidemiological research, advertising, and a growing scoring industry that aims to sort and rank us. This paper describes how EMD-based profiling, targeted advertising, and scoring affect the health and autonomy of people with disabilities while circumventing existing health and anti-discrimination laws. Because many organizations that collect EMD are not covered entities under the Health Insurance Portability and Accountability Act (HIPAA), EMD-mining circumvents HIPAA's Privacy Rule. Moreover, because the algorithms involved are often inscrutable (or maintained as trade secrets), violations of anti-discrimination laws can be difficult to detect. The paper argues that the next generation of privacy and anti-discrimination laws must acknowledge that in the Information Age, health data does not originate solely within traditional medical contexts. Instead, it can be pieced together by artificial intelligence from the digital traces we scatter throughout real and virtual worlds.
 

Spring 2018

May 2: Ira Rubinstein Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
April 25: Elana Zeide — The Future Human Futures Market
April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
April 11: John Nay Natural Language Processing and Machine Learning for Law and Policy Texts
April 4: Sebastian Benthall — Games and Rules of Information Flow
March 28: Yan Shvartzshnaider and Noah Apthorpe Discovering Smart Home IoT Privacy Norms using Contextual Integrity
February 28: Thomas Streinz TPP’s Implications for Global Privacy and Data Protection Law

February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
February 7: Madeline Bryd and Philip Simon Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
January 31: Madelyn Sanfilippo Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks 
January 24: Jason Schultz and Julia Powles Discussion about the NYC Algorithmic Accountability Bill


Fall 2017

November 29: Kathryn Morris and Eli Siems Discussion of Carpenter v. United States
November 15: Leon Yin Anatomy and Interpretability of Neural Networks
November 8: Ben Zevenbergen Contextual Integrity for Password Research Ethics?
November 1: Joe Bonneau An Overview of Smart Contracts
October 25: Sebastian Benthall Modeling Social Welfare Effects of Privacy Policies
October 18: Sue Glueck Future-Proofing the Law
October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
October 4: Finn Brunton — 'The Best Surveillance System we Could Imagine': Payment Networks and Digital Cash
September 27: Julia Powles Promises, Polarities & Capture: A Data and AI Case Study
September 20: Madelyn Rose Sanfilippo AND Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
September 13: Ignacio Cofone — Anti-Discriminatory Privacy
 

Spring 2017

April 26: Ben Zevenbergen Contextual Integrity as a Framework for Internet Research Ethics
April 19: Beate Roessler Manipulation
April 12: Amanda Levendowski Conflict Modeling
April 5: Madelyn Sanfilippo Privacy as Commons: A Conceptual Overview and Case Study in Progress
March 29: Hugo Zylberberg Reframing the fake news debate: influence operations, targeting-and-convincing infrastructure and exploitation of personal data
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa will lead discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial. 
March 8: Ira Rubinstein Privacy Localism
March 1: Luise Papcke Project on (Collaborative) Filtering and Social Sorting
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) Privacy and Innovation     
February 15: Argyri Panezi Academic Institutions as Innovators but also Data Collectors - Ethical and Other Normative Considerations
February 8: Katherine Strandburg Decisionmaking, Machine Learning and the Value of Explanation
February 1: Argyro Karanasiou A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
January 25: Scott Skinner-Thompson Equal Protection Privacy
 

Fall 2016

December 7: Tobias Matzner The Subject of Privacy
November 30: Yafit Lev-Aretz Data Philanthropy
November 16: Helen Nissenbaum Must Privacy Give Way to Use Regulation?
November 9: Bilyana Petkova Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
November 2: Scott Skinner-Thompson Recording as Heckling
October 26: Yan Shvartzshnaider Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
October 19: Madelyn Sanfilippo Privacy and Institutionalization in Data Science Scholarship
October 12: Paula Kift The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform

October 5: Craig Konnoth Health Information Equity
September 28: Jessica Feldman the Amidst Project
September 21: Nathan Newman UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
September 14: Kiel Brennan-Marquez Plausible Cause
 

Spring 2016

April 27: Yan Shvartzshnaider Privacy and IoT AND Rebecca Weinstein - Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gurses]

April 13: Florencia Marotta-Wurgler Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)

April 6: Ira Rubinstein Big Data and Privacy: The State of Play

March 30: Clay Venetis Where is the Cost-Benefit Analysis in Federal Privacy Regulation?

March 23: Daisuke Igeta An Outline of Japanese Privacy Protection and its Problems

                  Johannes Eichenhofer Internet Privacy as Trust Protection

March 9: Alex Lipton Standing for Consumer Privacy Harms

March 2: Scott Skinner-Thompson Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]

February 24: Daniel Susser Against the Collection/Use Distinction

February 17: Eliana Pfeffer Data Chill: A First Amendment Hangover

February 10: Yafit Lev-Aretz Data Philanthropy

February 3: Kiel Brennan-Marquez Feedback Loops: A Theory of Big Data Culture

January 27: Leonid Grinberg But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
 

Fall 2015

December 2: Leonid Grinberg But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race AND Kiel Brennan-Marquez - Spokeo and the Future of Privacy Harms
November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 11: Joris van Hoboken Privacy, Data Sovereignty and Crypto
November 4: Solon Barocas and Karen Levy Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton Of Fembots and Men: Privacy Insights from the Ashley Madison Hack

October 21: Paula Kift Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and co-author Nizan Geslevich Packin Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson Performative Privacy

September 9: Kiel Brennan-Marquez Vigilantes and Good Samaritan
 

Spring 2015

April 29: Sofia Grafanaki Autonomy Challenges in the Age of Big Data
                 David Krone Compliance, Privacy and Cyber Security Information Sharing
                 Edwin Mok Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing
                 Dan Rudofsky Modern State Action Doctrine in the Age of Big Data


April 22: Helen Nissenbaum 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken From Collection to Use Regulation? A Comparative Perspective
April 8: Bilyana Petkova — Privacy and Federated Law-Making in the EU and the US: Defying the Status Quo?
April 1: Paula Kift — Metadata: An Ontological and Normative Analysis

March 25: Alex Lipton — Privacy Protections for the Secondary User of Consumer-Watching Technologies

March 11: Rebecca Weinstein (Cancelled)
March 4: Karen Levy & Alice Marwick — Unequal Harms: Socioeconomic Status, Race, and Gender in Privacy Research


February 25: Luke Stark — NannyScam: The Normalization of Consumer-as-Surveillor


February 18: Brian Choi A Prospect Theory of Privacy

February 11: Aimee Thomson — Cellular Dragnet: Active Cell Site Simulators and the Fourth Amendment

February 4: Ira Rubinstein — Anonymity and Risk

January 28: Scott Skinner-Thompson Outing Privacy

 

Fall 2014

December 3: Katherine Strandburg — Discussion of Privacy News [which can include recent court decisions, new technologies or significant industry practices]

November 19: Alice Marwick — Scandal or Sex Crime? Ethical and Privacy Implications of the Celebrity Nude Photo Leaks

November 12: Elana Zeide — Student Data and Educational Ideals: examining the current student privacy landscape and how emerging information practices and reforms implicate long-standing social and legal traditions surrounding education in America. The Proverbial Permanent Record [PDF]

November 5: Seda Guerses — Let's first get things done! On division of labor and practices of delegation in times of mediated politics and politicized technologies
October 29: Luke Stark — Discussion on whether “notice” can continue to play a viable role in protecting privacy in mediated communications and transactions given the increasing complexity of the data ecology and economy.
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online

Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)

Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken —  The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead

October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue 

September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
 

Spring 2014

April 30: Seda Guerses — Privacy is Security is a prerequisite for Privacy is not Security is a delegation relationship
April 23: Milbank Tweed Forum Speaker — Brad Smith: The Future of Privacy
April 16: Solon Barocas — How Data Mining Discriminates - a collaborative project with Andrew Selbst, 2012-13 ILI Fellow
March 12: Scott Bulua & Amanda Levendowski — Challenges in Combatting Revenge Porn


March 5: Claudia Diaz — In PETs we trust: tensions between Privacy Enhancing Technologies and information privacy law. The presentation is drawn from the paper "Hero or Villain: The Data Controller in Privacy Law and Technologies,” with Seda Guerses and Omer Tene.

February 26: Doc Searls Privacy and Business

February 19: Report from the Obfuscation Symposium, including brief tool demos and individual impressions

February 12: Ira Rubinstein The Ethics of Cryptanalysis — Code Breaking, Exploitation, Subversion and Hacking
February 5: Felix Wu — The Commercial Difference, which grows out of a piece just published in the University of Chicago Legal Forum called The Constitutionality of Consumer Privacy Regulation

January 29: Organizational meeting
 

Fall 2013

December 4: Akiva Miller — Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy? & Malte Ziewitz What does transparency conceal?
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace

November 6: Karen Levy — Beating the Box: Digital Enforcement and Resistance
October 23: Brian Choi — The Third-Party Doctrine and the Required-Records Doctrine: Informational Reciprocals, Asymmetries, and Tributaries
October 16: Seda Gürses — Privacy is Don't Ask, Confidentiality is Don't Tell
October 9: Katherine Strandburg — Freedom of Association Constraints on Metadata Surveillance
October 2: Joris van Hoboken — A Right to be Forgotten
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting


Spring 2013

May 1: Akiva Miller — What Do We Worry About When We Worry About Price Discrimination
April 24: Hannah Bloch-Wehba and Matt Zimmerman — National Security Letters [NSLs]

April 17: Heather Patterson — Contextual Expectations of Privacy in User-Generated Mobile Health Data: The Fitbit Story
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA

April 3: Ira Rubinstein — Voter Privacy: A Modest Proposal
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day

March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
March 6: Mariana Thibes — Privacy at Stake, Challenging Issues in the Brazilian Context
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
 

Fall 2012

December 5: Martin French — Preparing for the Zombie Apocalypse: The Privacy Implications of (Contemporary Developments in) Public Health Intelligence
November 28: Scott Bulua and Catherine Crump — A framework for understanding and regulating domestic drone surveillance
November 21: Lital Helman — Corporate Responsibility of Social Networking Platforms
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
November 7: Sophie Hood — New Media Technology and the Courts: Judicial Videoconferencing
October 24: Matt Tierney and Ian Spiro — Cryptogram: Photo Privacy in Social Media
October 17: Frederik Zuiderveen Borgesius — Behavioural Targeting. How to regulate?

October 10: Discussion of 'Model Law'

October 3: Agatha Cole — The Role of IP address Data in Counter-Terrorism Operations & Criminal Law Enforcement Investigations: Looking towards the European framework as a model for U.S. Data Retention Policy
September 26: Karen Levy — Privacy, Professionalism, and Techno-Legal Regulation of U.S. Truckers
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data