April 26: Ben Zevenbergen
April 19: Beate Roessler
April 12, April 5 & March 29: ILI will sponsor the law school forum on “Privacy Dystopia” during the PRG slot. Presenters for the other two dates will be Madelyn Sanfilippo and Amanda Levendowski.
March 22: Current events - This will be a more in-depth discussion of two or three current privacy/information law-related issues.
March 8: Ira Rubinstein
March 1: Luise Papcke
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) - Privacy and Innovation
ABSTRACT: Calls to limit or refrain from privacy regulation rest on a variety of conflicting grounds, such as freedom of speech, safety, security, efficiency, and innovation. One of the most widely cited, but least clearly specified, such grounds is the stifling effect that privacy regulation is said to have on innovation. Regulatory intervention for the sake of privacy, goes the claim, is suspect because it will hinder the development of a variety of socially valuable and innovative products, technologies or business models.[i] The threat of stifled innovation is often invoked in essentially talismanic fashion by those opposed to privacy regulation, without evidence and with little detail as to precisely what kind of innovation is at risk, the nature and severity of the looming risk, or by what mechanism any particular regulatory proposal would make the risk materialize.[ii] Privacy scholarship also has devoted surprisingly little attention to these questions.[iii] In this project, we interrogate and analyze the interplay between privacy regulation and innovation, drawing upon insights from the privacy, innovation and regulatory literatures. In particular, we set the debate about privacy regulation and innovation into the context of studies of the effects of regulation on innovation in other arenas, such as health care, environmental policy and consumer safety.[iv] We show that the bare argument that privacy regulation will “stifle” innovation is overly simplistic. Innovation is not a commodity of which society simply has “more” or “less.” Like many other aspects of the legal and economic background within which innovation occurs, regulation shapes innovation and affects its direction and character as much as it affects the amount of innovation that occurs. 
Moreover, the implications of regulation for innovation will depend, in the privacy arena as elsewhere, on the design of the regulation.[v] While we do not deny that there may be normative tradeoffs to be made between certain types of innovation and certain instantiations of privacy values, we argue that privacy regulation cannot be pigeonholed exclusively as an enemy of technological development. Indeed, privacy may be an essential catalyst for innovation. Thus, viewing the relationship between privacy and innovation simplistically, as a zero-sum trade-off, does a disservice to the social importance of both. We set off by mapping and categorizing the contentions that have been made about the effect of privacy regulation on innovation during previous debates about privacy regulation. While some of the possible arguments are unique to privacy regulation, others are classic counter-regulation arguments that are generally unpersuasive without a concrete cost-benefit analysis tailored to a particular situation.[vi] We then disentangle and characterize the various ways in which regulation can interact with innovation. The relationship between privacy regulation and innovation may involve a variety of regulatory means and innovation systems. We home in on issues such as the direction of the putatively stifled innovation, the particular types of innovation that may be stifled, the possibility that regulation can re-direct innovation in socially desirable directions, the possibility of innovation in means for regulatory compliance, mechanisms connecting specific regulatory avenues with particular effects on innovation and the nature of the social costs and benefits that might emerge from these interactions. Beginning with existing literature in privacy and other fields, we also explore the various available regulatory design levers that affect how regulation and innovation interact. 
While the relationship between privacy regulation and innovation has much in common with the relationship between regulation and innovation more broadly, we also consider how a more careful analysis of the relationship between privacy and innovation might play out in particular regulatory debates in the privacy arena. For example, privacy regulation’s long-standing reliance on a notice and consent regime has been the subject of almost universal critique for its ineffectiveness in protecting privacy.[vii] Here we consider the implications for innovation of the significant gap between compliance with notice-and-consent-based regulation and the effective promotion of privacy values. Notice and consent regulation may be both ineffective and wasteful, misleading individual consumers about their privacy and prompting expenditure of resources on compliance measures that do not promote privacy goals.[viii] Other examples include the possibility that regulation promoting “privacy by design,” in which privacy protection measures are integrated into the software, might be a spur for privacy-enhancing innovation, and the opposite possibility that certain types of privacy regulation might divert resources away from innovation in privacy-preserving technologies and toward regulatory compliance initiatives.
[i] See, e.g., Richard Waters, Google Says Tighter EU Search Regulations Would ‘Hurt’ Innovation, The Financial Times, June 24, 2013; Colleen Taylor, Google Co-Founders Talk Regulation, Innovation, and More in Fireside Chat with Vinod Khosla, TechCrunch, July 6, 2014, https://techcrunch.com/2014/07/06/google-co-founders-talk-long-term-innovation-making-big-bets-and-more-in-fireside-chat-with-vinod-khosla/; Adam Thierer & Ryan Hagemann, Removing Roadblocks to Intelligent Vehicles and Driverless Cars, 5 Wake Forest J.L. & Pol’y 339, 349 (2015).
[ii] See Julie E. Cohen, The Surveillance-Innovation Complex: The Irony of the Participatory Turn, in The Participatory Condition 10 (Darin Barney et al. eds., 2015).
[iii] But see, e.g., Avi Goldfarb & Catherine Tucker, Privacy and Innovation, 12 Innovation Pol’y & the Economy 65, 77 (2012) (noting that privacy regulations will likely restrict innovation in the domain of the advertising-supported Internet) [hereinafter Goldfarb & Tucker]; Tal Z. Zarsky, The Privacy-Innovation Conundrum, 19 Lewis & Clark L. Rev. 115, 140-41 (2015) (stating that stronger privacy protections will reduce innovation).
[iv] See, e.g., Matthew Grennan & Robert Town, The FDA and the Regulation of Medical Device Innovation: A Problem of Information, Risk, and Access, 4 Penn Wharton Public Policy Initiative 1 (2016) (discussing the relationship between FDA regulations on coronary stents and consumer safety); Rebecca S. Eisenberg, Reexamining Drug Regulation from the Perspective of Innovation Policy, 160 J. Institutional & Theoretical Economics (JITE) 126 (2004) (discussing the impact of FDA regulations on new drug development); David Popp, Innovation and Climate Policy, 2 Annual Review of Resource Economics 283 (2010) (describing the impact of environmental regulations on the development of clean technologies).
[v] Dennis D. Hirsch & Ira S. Rubinstein, Better Safe than Sorry: Designing Effective Safe Harbor Programs for Consumer Privacy Legislation, 10 BNA Privacy & Security Law Report 1639, 1643-46 (2011).
[vi] See, e.g., Goldfarb & Tucker, at 77; Rahul Telang, A Privacy and Security Policy Infrastructure for Big Data, 10 I/S: J.L. & Pol’y for Info. Soc’y 783 (2015).
[vii] See, e.g., Daniel J. Solove, Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013); James P. Nehf, Open Book: The Failed Promise of Information Privacy in America 191 (2012); Richard Warner, Undermined Norms: The Corrosive Effect of Information Processing Technology on Informational Privacy, 55 St. Louis L.J. 1047, 1084–86 (2011).
[viii] See Protecting Consumer Privacy in an Era of Rapid Change (2010 FTC Report), available at http://www.ftc.gov/os/2010/12/101201privacyreport.pdf.
February 15: Argyri Panezi - Academic Institutions as Innovators but also Data Collectors - Ethical and Other Normative Considerations
ABSTRACT: In my presentation I wish to discuss the role of academic institutions as innovators, particularly when they are involved in data-driven research projects immediately related to members of their community (students, researchers, administration and faculty) but also to their local communities. With research projects on the Internet of Things and on Smart Cities taking off, there is arguably a need to discuss ethics, codes of conduct and perhaps responsibilities when institutions collect and manage different types of data needed for these projects. I am generally interested in the management of digital resources within academia. I define digital resources broadly to include data in digitized form and other digitized material that are machine-readable, and thus material in any form that when digitized can ultimately be processed as raw data. Academic institutions have long been familiar with circumstances in which their collection of data, incidental (for example for practical, administrative purposes) or purposeful (for research or for archival purposes), is subject to legal and ethical rules. One can draw analogies from several longstanding practices within academic environments: recruitment and admissions departments storing all kinds of sensitive data collected from candidates, academic libraries having access to data about their readers (which books are checked out), science labs conducting experiments in which members of the student body participate, etc. Is the involvement of academia in big-data research projects any different? During the presentation I will try to map the relevant legal issues and also indicate which types of academic research I focus on. A central question is which responsibilities arise when academic institutions partner with industry. There are a number of complex legal issues that arise in this context: an interesting mix of access issues (IP considerations), data protection, and security issues.
To illustrate this complexity, I will also present an example from my current research on digitization.
February 8: Katherine Strandburg - Decisionmaking, Machine Learning and the Value of Explanation
ABSTRACT: Much of the policy and legal debate about algorithmic decision-making has focused on issues of accuracy and bias. Equally important, however, is the question of whether algorithmic decisions are understandable by human observers: whether the relationship between algorithmic inputs and outputs can be explained. Explanation has long been deemed a crucial aspect of accountability, particularly in legal contexts. By requiring that powerful actors explain the bases of their decisions — the logic goes — we reduce the risks of error, abuse, and arbitrariness, thus producing more socially desirable decisions. Decision-making processes employing machine learning algorithms complicate this equation. Such approaches promise to refine and improve the accuracy and efficiency of decision-making processes, but the logic and rationale behind each decision often remain opaque to human understanding. Indeed, at a technical level, it is not clear that all algorithms can be made explainable and, at a normative level, it is an open question when and if the costs of making algorithms explainable outweigh the benefits. This presentation will begin to map out some of the issues that must be addressed in determining in what contexts, and under what constraints, machine learning approaches to governmental decision-making are appropriate.
February 1: Argyro Karanasiou - A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
ABSTRACT: The paper dissects the intricacies of Automated Decision Making (ADM) and urges for refining the current legal definition of AI when pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics and deep learning. ADM relies upon a plethora of algorithmic approaches and has already found a wide range of applications in marketing automation, social networks, computational neuroscience, robotics, and other fields. Whilst coming up with a toolkit to measure algorithmic determination in automated/semi-automated tasks may prove a tedious task for the legislator, our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user, and the algorithm; this can take various shapes and thus yield different answers to key issues regarding agency. The paper offers a fresh look at the concept of “Machine Intelligence”, which exposes certain vulnerabilities in its current legal interpretation. To highlight this argument, the analysis proceeds in two parts: Part 1 strives to provide a taxonomy of the various levels of automation that reflects distinct degrees of human–machine interaction and can thus serve as a point of reference for outlining distinct rights and obligations of the programmer and the consumer: driverless cars are used as a case study to explore the several layers of human and machine interaction. These different degrees of automation reflect various levels of complexity in the underlying algorithms, and pose very interesting questions in terms of regulating the algorithms that undertake dynamic driving tasks. Part 2 further discusses the intricate nature of the underlying algorithms and the artificial neural networks (ANN) that implement them, and considers how one can interpret and utilize observed patterns in acquired data.
Finally, the paper explores the scope for user empowerment and data transparency and discusses attendant legal challenges posed by these recent technological developments.
January 25: Scott Skinner-Thompson - Equal Protection Privacy
ABSTRACT: To the extent the right to privacy exists, it is often understood as universal. If not universal, then of particular importance to marginalized individuals. But in practice, people of privilege tend to fare far better when they bring privacy tort claims than do non-privileged individuals. This, despite doctrine suggesting that those who occupy prominent and public social positions are entitled to diminished privacy tort protections. This Article unearths disparities in public disclosure tort case outcomes, and uses the unequal results as a lens to expand our understanding of how constitutional equality principles might be used to rejuvenate beleaguered privacy tort law. Scholars and the Supreme Court have long recognized that state action applies to the common law, both because judges make the substantive rules of decision and because they enforce the law. Under this theory of state action, the First Amendment has been used as a means of limiting the extent of privacy and defamation torts. But if state action applies to tort law, should other constitutional provisions bear on the substance of common law torts? This Article argues that the answer is yes, and uses the unequal implications of prevailing public disclosure tort doctrine to explore whether constitutional equality principles can be used to reform the currently weak protections provided by black letter privacy tort law. By so doing, the Article also opens a doctrinally sound basis for a broader discussion of how constitutional liberty, due process, and equality norms might influence tort law across a variety of substantive contexts.
December 7: Tobias Matzner - The Subject of Privacy
ABSTRACT: The paper engages with theories which establish the value of privacy. It compares two accounts of privacy: the first as protecting a particular, private space like the home or the “private sphere”, and the second as the relative separation of social contexts. Most theories of the value of privacy pertain to the first category, where privacy is seen as a necessary space for an autonomous subject. Using various examples from current privacy research as well as normative positions, the paper shows that this focus on autonomy is problematic, and that the second account of privacy is much better suited to grasping the problems brought about by digital media. The paper goes on to show that the second account of privacy is often linked to the idea of “identity management”; i.e. privacy is not only meant to separate social contexts, but also to clear a space where free decisions can be taken about the personalities one assumes in these contexts. Such a view implies the first account of privacy within the second. Based on theories of Hannah Arendt and Judith Butler, the paper develops an alternative account of privacy and personality that better fits the problems of digital communication. Examples from empirical studies of teenagers’ behavior online illustrate how the implicit individualism in “identity management” can lead to victim blaming. The paper concludes by showing how the value of privacy can be conceived from this perspective. Rather than providing freedom in the sense of autonomy, privacy protects the freedom to be someone else in the future or in other places – someone who need not necessarily be an autonomous person. Thus, privacy eventually protects the fundamental value of plurality.
November 30: Yafit Lev-Aretz - Data Philanthropy
ABSTRACT: Everybody is busy collecting. The business of collecting data and extracting insights in pursuit of specified goals is thriving as never before. The privacy and security implications are terrifying: unlimited information about virtually anyone and anything is being recorded and archived in data banks that are subject to a variety of cyber threats. But alongside the risks lies an enormous opportunity: troves of data represent a boundless wealth of potential insights for the progress of knowledge and society. When the right information is matched with the right questions, numbers could be translated into real life value by answering pressing questions, mitigating common challenges, and guiding policy decisions. Because data is non-rivalrous, the same information could be analyzed for different purposes, and data that has been deemed useless for one could unlock a world of possibilities for another. Advocates of data sharing have been calling on private sector actors to voluntarily share their data for social impact. Robert Kirkpatrick, the head of the UN Global Pulse Initiative, an R&D lab that uses big data and real-time analytics to make policymaking more agile and effective, explained that “the public sector cannot fully exploit Big Data without leadership from the private sector,” and stressed: “what we need is action that goes beyond corporate social responsibility.” Similarly, Matt Stempeck, Microsoft’s Director of Civic Technology in New York City, wrote: “Companies shaping this data-driven world can contribute to the public good by working directly with public institutions and social organizations to bring their expertise and information assets to bear on shared challenges.” In many instances, this kind of giving has been termed “data philanthropy.” Following a comprehensive introduction to the data philanthropy discourse, this project aims at providing a better understanding of data collaborations, sharing incentives, and practical concerns.
Subsequently, using the Fair Information Practice Principles framework, the project will submit a set of policy recommendations to capitalize on the potential of data giving while minimizing the risks that could result from such collaborations.
November 16: Helen Nissenbaum - Must Privacy Give Way to Use Regulation?
ABSTRACT: In a departure from traditional modes of privacy regulation, there is growing support for regulating only certain uses of personal information while entirely deregulating its collection. Proponents argue that the safeguards usually associated with privacy protection can be achieved through judicious constraints on use, so that ex ante constraints on collection will not stifle the enormous potential of AI and big data. My paper questions this increasingly popular logic, not only because it is ambiguous to the point of incoherence, but also because it plays suspiciously well with the dominant business model of information industry incumbents. Although there is no denying the genuine and unprecedented challenges to privacy posed by data science, the paper argues that fully substituting restrictions on collection with use restrictions will weaken one of the cornerstones of a free society with little assurance of public welfare gains.
November 9: Bilyana Petkova - Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
ABSTRACT: Research shows that in the data privacy domain, the regulation promoted by frontrunner states in federated systems such as the United States or the European Union generates races to the top, not to the bottom. Institutional dynamics or the willingness of major interstate companies to work with a single standard generally create opportunities for the federal lawmaker to level up privacy protection. This article uses federalism to explore whether a similar pattern of convergence (toward the higher regulatory standard) emerges when it comes to the international arena, or whether we witness a more nuanced picture. I focus on the interaction of the European Union with the United States, looking at the migration of legal ideas across the (member) state jurisdictions with a focus on breach notification statutes and privacy officers. The article further analyses recent developments such as the invalidation of the Safe Harbor Agreement and the adoption of a Privacy Shield. I argue that instead of a one-way street, usually conceptualized as the EU ratcheting up standards in the US, the influences between the two blocs are mutual. Such influences are conditioned by the receptivity and ability of domestic actors in both the US and the EU to translate, and often, adapt the “foreign” to their respective contexts. Instead of converging toward a uniform standard, the different points of entry in the two federated systems contribute to the continuous development of two models of regulating commercial privacy that, thus far, remain distinct.
November 2: Scott Skinner-Thompson - Recording as Heckling
ABSTRACT: There are increasing calls for a right to public privacy, and often such calls are justified with reliance on the First Amendment. Similarly, there is a growing body of authority recognizing that recording of public space is also protected by the First Amendment. Both purported rights serve important First Amendment values—recording information can be critical to future speech and, as a form of confrontation to authority, is also a direct form of expression. Likewise, functional efforts to maintain privacy while navigating public space may help create an incubator for thought and future speech, and can also serve as a form of direct expressive resistance to surveillance regimes. But while recordings may be critical to government accountability and have important First Amendment benefits, they also have obvious privacy implications. How do we balance the right to record with the right to maintain privacy? When can the government regulate recording that attempts to breach the privacy shields erected by other citizens? I suggest that the concept of the heckler’s veto provides a promising rubric for analyzing attempts to regulate these sometimes competing forms of “speech.” This piece argues that just as a heckler’s suppression of another’s free speech justifies government regulation of the heckler’s speech, so too when recording (a form of speech) infringes on and pierces reasonable efforts to maintain privacy (also a form of speech), then the government may—through direct regulation or even tort law—limit the ability to record.
October 26: Yan Shvartzhnaider - Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
ABSTRACT: Designing programmable privacy logic frameworks that correspond to social, ethical, and legal norms has been a fundamentally hard problem. The theory of contextual integrity (CI) (Nissenbaum 2010) offers a model for conceptualizing privacy that is able to bridge technical design with ethical, legal, and policy approaches. While CI is capable of capturing the various components of contextual privacy in theory, it is challenging to discover and formally express these norms in operational terms. In this talk I will discuss our work in designing a framework for crowdsourcing privacy norms based on the theory of contextual integrity.
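A hypothetical sketch may help make "operational terms" concrete. One common way to formalize a CI norm is as a five-parameter tuple: sender, recipient, information subject, attribute, and transmission principle. The class and function names below are illustrative assumptions, not part of the speakers' framework:

```python
from dataclasses import dataclass

# Illustrative sketch only: a CI norm as a five-parameter tuple.
@dataclass(frozen=True)
class CINorm:
    sender: str
    recipient: str
    subject: str
    attribute: str
    transmission_principle: str

def flow_violates(flow: "CINorm", norms: list["CINorm"]) -> bool:
    """An information flow violates contextual integrity if it matches no accepted norm."""
    return flow not in norms

# Example norm: a doctor may share a patient's diagnosis with a
# specialist only with the patient's consent.
norms = [CINorm("doctor", "specialist", "patient", "diagnosis", "with consent")]
ok_flow = CINorm("doctor", "specialist", "patient", "diagnosis", "with consent")
bad_flow = CINorm("doctor", "advertiser", "patient", "diagnosis", "for profit")

print(flow_violates(ok_flow, norms))   # False: matches an accepted norm
print(flow_violates(bad_flow, norms))  # True: no accepted norm permits it
```

The hard part, as the abstract notes, is discovering the norms in the first place; crowdsourcing aims to fill in the tuples that a community actually endorses.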
October 19: Madelyn Sanfilippo - Privacy and Institutionalization in Data Science Scholarship
ABSTRACT: Meta-analysis of methodological institutionalization across three scholarly disciplines provides evidence that traditional statistical quantitative methods are not only more institutionalized and consistent, but are also drawn on to structure data-scientific approaches when institutionalization is sought for new and large-n quantitative methods. Among the strategies, norms, and rules within this body of literature are various institutionalisms surrounding issues of privacy, with stark contrasts in level of detail and attitude (such as compliance versus privacy as a social value) based on discipline and methodological approach. This talk will focus on key insights from recently completed work on institutionalization in data science scholarship and outline preliminary findings from work-in-progress pursuing insight into attitudinal and institutional differences toward privacy reflected in this literature.
October 12: Paula Kift - The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform
ABSTRACT: On June 2, 2015 Congress passed the USA FREEDOM Act, which, among other things, was intended to end the bulk collection of domestic telephony metadata that the National Security Agency (NSA) had been conducting under the authority of Section 215 of the USA PATRIOT Act. The metadata program sparked outrage among privacy and civil liberties advocates across the United States since it implied that, in the course of foreign intelligence investigations, the U.S. government was collecting the communication records of millions of Americans in bulk, in the absence of any particularized suspicion. The reliance on Section 215 of the PATRIOT Act as the legal basis for the program also raised significant statutory and constitutional concerns. This paper analyzes whether the passage of the USA FREEDOM Act was able to alleviate some of these concerns. It argues that, even though the FREEDOM Act made some headway towards limiting the scope, and improving the accountability, of domestic government surveillance programs, a significant risk remains that the U.S. government can continue collecting large amounts of communications metadata of Americans that are not strictly relevant to any authorized investigations. Most worryingly, the U.S. government may have simply shifted the bulk collection of domestic metadata to a different authority, sweeping up the telecommunication records of millions of Americans at home under the guise of foreign intelligence collection abroad.
October 5: Craig Konnoth - Health Information Equity
ABSTRACT: Over the last few years, the health information of numerous Americans has been collected and used for follow-on, secondary research to study correlations between medical conditions, genetic or behavioral profiles, and treatments. Recent federal legislation and regulations make it easier to use the data of the low-income, unwell, and elderly than that of others for this research. This imposes disproportionate security and autonomy burdens on these individuals. Those who are well off and pay out of pocket can effectively exempt their data from the publicly available information pot. This presents a problem which modern research ethics is not well equipped to address. Where it considers equity at all, it emphasizes underinclusion and the disproportionate distribution of research benefits, rather than overinclusion and the disproportionate distribution of burdens. I rely on basic intuitions of reciprocity and fair play, as well as broader accounts of social and political equity, to show that equity in burden distribution is a key aspect of the ethics of secondary research. To satisfy its demands we can use three sets of regulatory and policy levers. First, information collection for public research should expand beyond groups having the lowest welfare. Next, data analyses and queries should draw more equitably on data pools. Finally, we must create an entity to coordinate these solutions, using existing statutory authority if possible. Considering health information collection at a systematic level rather than at that of individual clinical encounters gives us insight into the broader role health information plays as a site of personhood, citizenship, and community.
September 28: Jessica Feldman - the Amidst Project
ABSTRACT: In this talk I will discuss the amidst project -- an ad-hoc, peer-to-peer, encrypted network for mobile phones -- and the fieldwork that led me to work on it. Drawing on 50+ interviews and surveys with activists, human rights workers, journalists, and engineers in Cairo, Istanbul, Madrid, and New York City, my doctoral dissertation considers surveillance, blocking, and alternate communications methods in the "movements of the squares" and their aftermath. As a response to this fieldwork, I am working with a team of engineers on the amidst network. As a mobile "mesh" network, amidst comes into being when a large group of people are assembled together, and uses each phone as a node to build the network, attempting to provide a solution to the problems of just-in-time blocking and infrastructural surveillance allowed for by centralized telecom. The project also experiments with decentralized, non-hierarchical, localized communication and security practices, which bring about some interesting problems, both philosophically and technically, regarding the fraught relationships among privacy, trust, accountability, and democratic publics.
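For readers unfamiliar with mesh networking, a minimal sketch of the underlying idea (each phone acts as a node and relays messages to its radio neighbors, so coverage emerges from the assembled crowd rather than from central infrastructure) might look like the following; this is illustrative only, not the amidst implementation, and omits encryption entirely:

```python
from collections import deque

def flood(adjacency: dict[str, list[str]], origin: str) -> set[str]:
    """Return the set of phones a message reaches via neighbor-to-neighbor relay."""
    reached = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, []):
            if neighbor not in reached:  # each phone re-broadcasts a message once
                reached.add(neighbor)
                queue.append(neighbor)
    return reached

# A small assembly: keys are phones, values are the phones in radio range.
phones = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"], "E": []}
print(sorted(flood(phones, "A")))  # ['A', 'B', 'C', 'D'] — 'E' is out of range
```

The sketch shows why such a network only "comes into being when a large group of people are assembled": with too few nodes in radio range of one another, messages cannot propagate.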
September 21: Nathan Newman - UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
ABSTRACT: While there has been a flurry of new scholarship on how employer use of data analysis may lead to subtle but potentially devastating individual discrimination in employment systems, there has been far less attention to the ways the deployment of big data may be driving down wages for most workers, including those who manage to be hired. This article details the ways big data can be, and in many cases actively is being, deployed to lower wages through hiring practices, in the ways raises are now being offered, and in the ways workplaces are organized (and disorganized) to lower employee bargaining power—and how new interpretations of labor law are beginning to and can in the future reshape the workplace to address these economic harms. Data analysis is increasingly helping to lower wages in companies, beginning in the hiring process, where pre-hire personality testing helps employers screen out employees who will agitate for higher wages and organize or support unionization drives in their companies. For employees who are hired, companies have massively expanded data-driven workplace surveillance that allows employers to assess which employees are most likely to leave and thereby limit pay increases largely to them, lowering wages over time for workers either less able to find new employment because of their age or less inclined in general to risk doing so. Data analysis and so-called “algorithmic management” have also allowed the centralized monitoring of far-flung workers organized nominally in subcontractors or as individual contractors, while traditional firms such as those in retail implement data-driven scheduling that resembles the “on-demand” employment of independent contractors. All of this shifts risk and “downtime” costs to employees and lowers their take-home pay, even as the fragmenting of the workplace makes it harder for workers to collectively organize for higher wages.
The article addresses how we should rethink and interpret existing labor law in each of these aspects of the employment process. The NLRB can reasonably construe many pre-hire employment tests as violating federal labor law’s prohibition on screening out union sympathizers, much as the EEOC has found many personality tests violate the Americans with Disabilities Act by allowing indirect identification of people with mental illness. Similarly, since big data analysis can reveal the pro-union sympathies of current employees, under existing prohibitions of “polling” employees for their views, a reasonable extension of the law would be to prohibit sharing any personal data collected by management that might reveal protected conduct or union sympathies with line managers or outside management consultants advising on labor campaigns. The Board can also level the informational playing field by making both hiring algorithms and those determining pay increases more available during collective bargaining. The Board is already moving to expand its “joint employer” doctrine to allow workers to challenge the fragmented workplace increasingly driven by algorithmic management; a clear recognition that algorithms establish exactly the kind of control over nominally independent contractors or subcontractors’ workers that entitles them to collective bargaining rights with a central employer would further strengthen worker bargaining power. Such a “collective action” approach to the problem is far more likely to succeed than other proposals focused on strengthening individual worker privacy or anti-discrimination rights in the workplace with regard to data-driven decision-making.
As scholars have noted, disadvantaged groups under the civil rights laws may have sharply different preferences in wage versus benefit packages, so a process that increases informational resources for all workers and allows them to negotiate together for the mix of wages, benefits, work conditions and other “public goods” in the workplace, including privacy protections, will better reflect the overall interests of employees than either a classic economic model based on a marginal worker’s “exit” or a “rights consciousness” litigation approach to rein in individual employment harms. In making this overall argument, the article partially addresses the debate on why wages have stagnated and even fallen below productivity gains over the last four decades, as the deployment of data technology has played a significant and growing role in helping employers extract a disproportionate share of employee productivity gains for the benefit of management and shareholders.
September 14: Kiel Brennan-Marquez - Plausible Cause
ABSTRACT: “Probable cause” is not about probability. It is about plausibility. To determine if an officer has the requisite suspicion to perform a search or seizure, what matters is not the statistical likelihood that a “person, house, paper or effect” is linked to criminal activity. What matters is whether criminal activity provides a convincing explanation of observed facts. For an inference to qualify as plausible, an observer must understand why the inference follows; she must be able to explain its relationship to the facts. Probable inferences, by contrast, do not require explanations. An inference can be probable—in a predictive sense, based on past trends—without a human observer understanding what makes it so. In many cases, plausibility and probability overlap. An inference that accounts for observed facts is often likely to be true, and vice versa. But there is an important sub-set of cases in which the two properties pull apart, raising deep questions about the underpinnings of Fourth Amendment suspicion: inferences generated by predictive algorithms. In this Article, I argue that casting suspicion in terms of plausibility, rather than probability, is both more consistent with established law and crucial to the Fourth Amendment’s normative integrity. Before law enforcement officials may intrude on private life, they must explain why they believe wrongdoing has occurred. This “explanation-giving” requirement has two key virtues. First, it facilitates governance; we cannot effectively regulate what we do not understand. Second, it allows judges to consider the “other side of the story”—the innocent version of events a suspect might offer on her own behalf—before warranting searches and seizures. In closing, I connect these virtues to broader themes of democratic theory. In a free society, legitimacy is not measured solely by outcomes. 
The exercise of state power must be explained—and the explanations must be responsive both to the democratic community writ large and to the specific individuals whose interests are infringed.
April 27: Yan Schvartzschnaider - Privacy and IoT AND Rebecca Weinstein - Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken - Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gurses]
April 13: Florencia Marotta-Wurgler - Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)
April 6: Ira Rubinstein - Big Data and Privacy: The State of Play
ABSTRACT: Big data undermines modern conceptions of privacy law in at least two ways. First, big data challenges the Fair Information Practices (FIPs), which form the basis of all modern privacy law, by exploding the core premises of informed choice and data minimization. Second, the classic FIPs seem ill-equipped to handle a new class of privacy violations and related harms in which algorithmic processes and/or inaccurate or biased data lead to discriminatory actions against protected groups. Regulators and policy experts have sought to address these problems in one of five ways: first, by extending the FIPs to directly address “profiling” (as in the new EU General Data Protection Regulation) or “out of context” data collection and use (as in the draft Obama privacy bill); second, by narrowing the FIPs to focus primarily on use regulations, while developing new balancing tests that weigh the costs and benefits of specific uses of big data under the FTC’s unfairness standard or similar criteria; third, by recasting the FIPs in terms of technological due process; fourth and fifth, by supplementing (or even supplanting) the FIPs by describing new business models premised on consumer empowerment or, finally, by developing new technological solutions under the banner of “fairness by design.” This essay seeks to capture and evaluate the current state of play in privacy and big data by analyzing the strengths and weaknesses of all five responses to the FIPs and, if possible, synthesizing a more satisfactory approach.
March 30: Clay Venetis - Where is the Cost-Benefit Analysis in Federal Privacy Regulation?
March 23: Daisuke Igeta - An Outline of Japanese Privacy Protection and its Problems
ABSTRACT: In Japan, no statute contains the word "privacy"; the concept of privacy has instead developed through many judicial cases. Because of this lack of statutory grounding, Japanese privacy law faces problems in both directions: some interests are overprotected, while others should be protected more strongly. I will introduce some of the important cases and discuss the modern privacy problems in Japan.
Johannes Eichenhofer - Internet Privacy as Trust Protection
ABSTRACT: This presentation argues for a legal conception of Internet Privacy based on the idea of trust protection. The protection of trust through legal certainty is considered one of the key elements of both German and European Law. It applies both to the relations of the individual to the State – governed by “public privacy rules” – and to private entities (e.g. Internet service providers), which are governed by “private privacy rules”. Even though the latter relationship is of enormous relevance for Internet users’ privacy, it finds only weak or even non-existent protection under current German or European Constitutional Law. This condition can be challenged from the perspective of Internet Privacy as trust protection.
March 9: Alex Lipton - Standing for Consumer Privacy Harms
ABSTRACT: Courts are struggling to apply traditional standing doctrine to claims involving modern consumer privacy harms, leading to inconsistent outcomes for plaintiffs alleging near-identical injuries. However, while the privacy interests and resulting injuries prove similar, not all consumer privacy claims are the same. This Article hypothesizes that federal courts are more likely to recognize consumer privacy harms as cognizable for standing when framed as statutory or contract-based harms, as opposed to tort-based harms. The distinction between statutory and tort-based harms aligns with the normative goals of standing law, which respects legislative recognition of novel harms and raises concerns where the judiciary attempts to extend its reach to injuries previously unrecognized by courts or Congress. The distinction between contract and tort-based harms has not been previously recognized in standing doctrine, but may reflect courts’ reticence to construct privacy standards outside of those agreed to between the parties (i.e. by contract), in line with the normative goals of consumer contract law. To test this hypothesis, I compare rates of dismissal on standing grounds between (1) statutory-based consumer privacy harm claims; (2) contract-based consumer privacy harm claims; and (3) tort-based consumer privacy harm claims, providing empirical support for the claim that courts are more likely to dismiss claims based in tort rather than statute or contract. Finally, I discuss the implications of the dismissal rates on standing grounds for the future of consumer privacy protection.
March 2: Scott Skinner-Thompson - Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]
February 24: Daniel Susser - Against the Collection/Use Distinction
February 17: Eliana Pfeffer - Data Chill: A First Amendment Hangover
February 10: Yafit Lev-Aretz - Data Philanthropy
February 3: Kiel Brennan-Marquez - Feedback Loops: A Theory of Big Data Culture
January 27: Leonid Grinberg - But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
BACKGROUND READING: Courts and Predictive Algorithms
November 4: Solon Barocas and Karen Levy - Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton - Of Fembots and Men: Privacy Insights from the Ashley Madison Hack
October 21: Paula Kift - Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe
ABSTRACT: In the summer of 2015, tens of thousands of forcibly displaced persons arrived at the borders of Europe. At least in one regard the continent was prepared: over the years it had developed an extensive surveillance assemblage that disparages asylum seekers as “crimmigrants” and subjects them to extensive systems of discipline and control, often long before embarking on their perilous journey to Europe. This paper treats privacy as an aspect of human dignity, and argues that denying asylum seekers informational, visual, physical, and decisional privacy reduces them to homines sacri, or bare life. The paper will analyze EU law and policy, German constitutional law and the media coverage of the refugee crisis based on theories of sovereignty, biopolitics, visual culture, social psychology, and critical border studies.
October 14: Yafit Lev-Aretz and co-author Nizan Geslevich Packin - Between Loans and Friends: On Social Credit and the Right to be Unpopular
ABSTRACT: Credit scoring systems calculate the specific level of risk that a person or entity brings to a particular transaction. These risk assessments are compiled into a credit score, a numerical expression of one’s financial health at a given point in time. Certain laws, such as the Fair Credit Reporting Act, the Fair and Accurate Credit Transactions Act, the Equal Credit Opportunity Act, and the recent Dodd-Frank Wall Street Reform and Consumer Protection Act, place limits on the type of information that can be used to calculate creditworthiness and the ways in which it may be put to use. These laws have been effectively applied to conventional formulas employed by traditional lenders in order to protect certain rights of those being evaluated. But in the last few years, new, aggressive, and loosely regulated lenders have become increasingly popular, especially among certain populations like millennials and the financially underserved. Some of these online marketplace lenders calculate their customers’ creditworthiness based on big-data analytics that are said to significantly increase the accuracy of the scoring methods. Specifically, some lenders have built their score-generating algorithms around behavioral data gleaned from social media and social networking information, including quantity and quality of social media presence; the identity and features of an applicant’s contacts; an applicant’s online social ties and interactions; contacts’ financial standing; an applicant’s personality attributes as extracted from her online footprints, and more. This Article studies the potential consequences of social credit systems that are predicated on a simple transaction: authorized use of highly personal information in return for better interest rates.
Following a description of the trend, the Article moves to analyze the inclination of rational and irrational customers to be socially active online and/or disclose all their online social-related information for financial ranking purposes. This examination includes, inter alia, customers’ preferences as well as mistakes, attempts to manipulate the system, customers’ self-doxing or lack thereof, and lenders’ inferences about their customers. The Article then explains the consequential harms that could result from social-based financial ranking – especially if it became the new creditworthiness baseline – focusing on (i) discrimination and social polarization ensuing from customers adapting their behavior to the biased and limited algorithmic modeling, (ii) the use of inaccurate or inappropriate data in automated processes, which could lead to flawed financial decisions, and (iii) broader privacy concerns. The social credit trend is then compared with other financially sound yet socially undesired practices, such as the use of medical information in creditworthiness assessments. The Article concludes by introducing a limited “right to be unpopular,” to accommodate the welcome aspects of social credit systems while mitigating many of the trend’s undesired consequences.
October 7: Daniel Susser - What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin - Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé - Group Privacy in a Digital Era
ABSTRACT: Big Data has blurred the boundaries between individual and group data. Through the sheer number and richness of databases and the increasing sophistication of algorithms, the “breadcrumbs” left behind by each one of us have not only multiplied to a degree that calls our individual privacy into question; they have also created new risks for groups, who can be targeted and discriminated against unbeknownst to themselves, or even unbeknownst to data analysts. This challenges us to enrich our approach to privacy. Where individual privacy might once have sufficed to rein in state and corporate surveillance and the neighbors’ curiosity, and to give individuals a measure of control over their reputations and security, today it can leave groups vulnerable to discrimination and targeting and, what’s more, leave them unaware of that risk. The concept of group privacy attempts to supplement individual privacy by addressing this blindspot.
September 16: Scott Skinner-Thompson - Performative Privacy
ABSTRACT: Conventional legal theory suggests that the right to privacy is non-existent once one enters the public realm. Still, some scholars contend that privacy ought to exist in public—but they justify this right to “public privacy” with reference to other, ancillary values privacy may serve (for instance, public privacy may be necessary to make the freedoms of movement and association meaningful in practice). This Article advances the pro-public-privacy theories one step further, arguing that demands for public privacy are more accurately conceptualized as a form of performative resistance against an ever-pervasive surveillance society. For example, when a person wears a hoodie in public to obscure her identity, she engages in a form of active, expressive resistance to the surveillance regime—communicating in no uncertain terms a refusal to be surveilled. This Article isolates and labels “performative privacy” as a social practice, and explains how this identification of public, performative privacy will provide doctrinal and discursive solutions to some of our most pressing social controversies. By demonstrating that demands for public privacy are inherently expressive, the Article helps establish that public privacy is grounded in the First Amendment and entitled to its robust protections. Discursively, directly linking public privacy performances with the well-ensconced freedom of expression will help shift societal reaction to such privacy demands from suspicion to embrace. Moreover, to the extent that acts of performative privacy cut across conflicts traditionally viewed in terms of racial, religious, or gender identity (Trayvon Martin’s hoodie, bans on head veils, and transgender demands for gender privacy are some examples), performative privacy has the potential to provide a more universal and unifying normative response to these conflicts.
David Krone - Compliance, Privacy and Cyber Security Information Sharing
Edwin Mok - Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing
Dan Rudofsky - Modern State Action Doctrine in the Age of Big Data
April 22: Helen Nissenbaum — 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken — From Collection to Use Regulation? A Comparative Perspective
March 11: Rebecca Weinstein (Cancelled)
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online
Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)
Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead
October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue
September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
January 29: Organizational meeting
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day
March 6: Mariana Thibes — Privacy at Stake, Challenging Issues in the Brazilian Context
March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data