Privacy Research Group

The Privacy Research Group is a weekly meeting of students, professors, and industry professionals who are passionate about exploring, protecting, and understanding privacy in the digital age.

Joining PRG:

Because we deal with early-stage work in progress, attendance at meetings of the Privacy Research Group is generally limited to researchers and students who can commit to ongoing participation in the group. To discuss joining the group, please contact Professor Helen Nissenbaum or Professor Katherine Strandburg. If you are interested in these topics, but cannot commit to ongoing participation in PRG, you may wish to join the PRG-All mailing list.

PRG Calendar

Fall 2015

September 9: Kiel Brennan-Marquez - Vigilantes and Good Samaritans
September 16: Scott Skinner-Thompson - Performative Privacy

ABSTRACT: Conventional legal theory suggests that the right to privacy is non-existent once one enters the public realm. Still, some scholars contend that privacy ought to exist in public—but they justify this right to “public privacy” with reference to other, ancillary values privacy may serve (for instance, public privacy may be necessary to make the freedoms of movement and association meaningful in practice). This Article advances the pro-public-privacy theories one step further, arguing that demands for public privacy are more accurately conceptualized as a form of performative resistance against an ever-pervasive surveillance society. For example, a person who wears a hoodie in public to obscure their identity is engaged in a form of active, expressive resistance to the surveillance regime—communicating in no uncertain terms a refusal to be surveilled. This Article isolates and labels “performative privacy” as a social practice, and explains how this identification of public, performative privacy will provide doctrinal and discursive solutions to some of our most pressing social controversies. By demonstrating that demands for public privacy are inherently expressive, the Article helps establish that public privacy is grounded in the First Amendment and entitled to its robust protections. Discursively, directly linking public privacy performances with the well-ensconced freedom of expression will help shift societal reaction to such privacy demands from suspicion to embrace. Moreover, to the extent that acts of performative privacy cut across conflicts traditionally viewed in terms of racial, religious, or gender identity (Trayvon Martin’s hoodie, bans on head veils, and transgender demands for gender privacy are some examples), performative privacy has the potential to provide a more universal and unifying normative response to these conflicts.

September 23: Jos Berens and Emmanuel Letouzé - Group Privacy in a Digital Era

ABSTRACT: Big Data has blurred the boundaries between individual and group data. Through the sheer number and richness of databases and the increasing sophistication of algorithms, the “breadcrumbs” left behind by each one of us have not only multiplied to a degree that calls our individual privacy into question; they have also created new risks for groups, who can be targeted and discriminated against unbeknownst to themselves, or even unbeknownst to data analysts. This challenges us to enrich our approach to privacy. Where individual privacy might once have sufficed to rein in state and corporate surveillance and the neighbors’ curiosity, and to give individuals a measure of control over their reputations and security, today it can leave groups vulnerable to discrimination and targeting and, what’s more, leave them unaware of that risk. The concept of group privacy attempts to supplement individual privacy by addressing this blind spot.

September 30: Helen Nissenbaum and Kirsten Martin - Confounding Variables Confounding Measures of Privacy
October 7: Daniel Susser - What's the Point of Notice?
October 14: Yafit Lev-Aretz and Nizan Geslevich Packin - Between Loans and Friends: On Social Credit and the Right to be Unpopular

ABSTRACT: Credit scoring systems calculate the specific level of risk that a person or entity brings to a particular transaction. These risk assessments are compiled into a credit score, a numerical expression of one’s financial health at a given point in time. Certain laws, such as the Fair Credit Reporting Act, the Fair and Accurate Credit Transactions Act, the Equal Credit Opportunity Act, and the recent Dodd-Frank Wall Street Reform and Consumer Protection Act, place limits on the type of information that can be used to calculate creditworthiness and the ways in which it may be put to use. These laws have been effectively applied to conventional formulas employed by traditional lenders in order to protect certain rights of those being evaluated. But in the last few years, new, aggressive, and loosely regulated lenders have become increasingly popular, especially among certain populations like millennials and the financially underserved. Some of these online marketplace lenders calculate their customers’ creditworthiness based on big-data analytics that are said to significantly increase the accuracy of the scoring methods. Specifically, some lenders have built their score-generating algorithms around behavioral data gleaned from social media and social networking information, including quantity and quality of social media presence; the identity and features of an applicant’s contacts; an applicant’s online social ties and interactions; contacts’ financial standing; an applicant’s personality attributes as extracted from her online footprints, and more. This Article studies the potential consequences of social credit systems that are predicated on a simple transaction: authorized use of highly personal information in return for better interest rates. Following a description of the trend, the Article moves to analyze the inclination of rational and irrational customers to be online socially active and/or disclose all their online social-related information for financial ranking purposes. This examination includes, inter alia, customers’ preferences as well as mistakes, attempts to manipulate the system, customers’ self-doxing or lack thereof, and lenders’ inferences about their customers. The Article then explains the potential consequential harms that could result from social-based financial ranking – especially if it became the new creditworthiness baseline – focusing on (i) discrimination and social polarization ensuing from customers adapting their behavior to the biased and limited algorithmic modeling, (ii) the use of inaccurate or inappropriate data in automated processes, which could lead to flawed financial decisions, and (iii) broader privacy concerns. The social credit trend is then compared with other financially sound yet socially undesired practices, such as the use of medical information in creditworthiness assessments. The Article concludes by introducing a limited “right to be unpopular” to accommodate the welcome aspects of social credit systems while mitigating many of the trend’s undesired consequences.

October 21: Paula Kift - Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe

ABSTRACT: In the summer of 2015, tens of thousands of forcibly displaced persons arrived at the borders of Europe. At least in one regard the continent was prepared: over the years it had developed an extensive surveillance assemblage that disparages asylum seekers as “crimmigrants” and subjects them to extensive systems of discipline and control, often long before they embark on their perilous journey to Europe. This paper treats privacy as an aspect of human dignity, and argues that denying asylum seekers informational, visual, physical, and decisional privacy reduces them to homines sacri, or bare life. The paper will analyze EU law and policy, German constitutional law, and the media coverage of the refugee crisis based on theories of sovereignty, biopolitics, visual culture, social psychology, and critical border studies.

October 28: Finn Brunton - Of Fembots and Men: Privacy Insights from the Ashley Madison Hack
November 4: Solon Barocas and Karen Levy - Understanding Privacy as a Means of Economic Redistribution
November 11: Joris van Hoboken - Privacy, Data Sovereignty and Crypto
November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice

BACKGROUND READING: Courts and Predictive Algorithms

November 25: Thanksgiving Break
December 2: Leonid Grinburg - Ad-block Wars

Spring 2015

April 29: Sofia Grafanaki - Autonomy Challenges in the Age of Big Data
                 David Krone - Compliance, Privacy and Cyber Security Information Sharing
                 Edwin Mok - Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing
                 Dan Rudofsky - Modern State Action Doctrine in the Age of Big Data

April 22: Helen Nissenbaum — 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken — From Collection to Use Regulation? A Comparative Perspective
ABSTRACT: In the debates about data privacy for the 21st century, we increasingly hear the argument that regulation should focus on the use of data instead of its initial collection. The argument for this shift tends to be pragmatic: the collection of personal data has become the normal state of affairs to such an extent that regulating personal-data-driven processes by limiting the collection of data (input) is no longer feasible or desirable. Instead, regulation should focus on issues related to the actual use (output). This paper will look at this position from a comparative perspective. It will first explore the different positions that have been expressed in the relevant literature, examine the place of data (collection) minimization and purpose limitation in the US and European regulatory systems, and analyze them comparatively, focusing on the different rationales underlying the regulation of ‘collection’ on the one hand and ‘use’ on the other.
April 8: Bilyana Petkova — Privacy and Federated Law-Making in the EU and the US: Defying the Status Quo?
ABSTRACT: The federated nature of lawmaking in both the United States and the European Union is seen to deliver sub-optimal results. In particular, in the US there are concerns about the increased fragmentation of American data privacy law and the lack of relevant federal consolidation, whereas in the EU the proposed General Data Protection Regulation and the overall data protection regime have generated opposition regarding the over-centralization of power in the European institutions. My argument is that the autonomy of state institutions and regulatory experimentation on the state level can defy the status quo, be that of too little or too much privacy consolidation. I look into the role of Member States’ parliaments and highest courts in the EU and of state attorneys general in the US. Arguably, regulatory experimentation with higher data privacy standards in individual states like Germany or California has the potential of generating a dynamic of horizontal adaptation among jurisdictions and industry players that the federal or EU tier can capitalize on to level up privacy protection.
April 1: Paula Kift — Metadata: An Ontological and Normative Analysis

ABSTRACT: When the legality of the bulk telephony metadata program was challenged, the NSA countered that it was not collecting the content but only the metadata of communications. The aim of this paper is to discover where the distinction between metadata and content data came from and whether this distinction still makes sense today. The first part of the paper relies on Klayman v. Obama and ACLU v. Clapper to look at the various dichotomies the courts have used to define metadata over time: content vs. non-content information, sensitive vs. non-sensitive information and private records vs. business records held by third parties. The second part of the paper engages in a normative analysis of the bulk telephony metadata program based on the framework of contextual integrity. The paper finds that the bulk telephony metadata program violates entrenched informational norms.

March 25: Alex Lipton — Privacy Protections for the Secondary User of Consumer-Watching Technologies

ABSTRACT: Consumer products increasingly record user data without regard to whether the recorded individual is the primary user—the purchaser of the product—or the secondary user—an individual who uses the product but is not the purchaser. This distinction proves especially significant when considering the product's privacy policy, which purports to establish user consent to expansive data use practices, and statutory protections governing the recording of user data, many of which include exceptions based on user consent. This Note examines one private regime for protecting consumer privacy—privacy policies—and several public regimes—including state wiretap laws, the Electronic Communications Privacy Act, and the Children's Online Privacy Protection Act—to illustrate how legal protections differ for primary and secondary users of consumer-watching technologies. I conclude by suggesting a framework for designing privacy protections for the secondary user of consumer-watching technologies.

March 11: Rebecca Weinstein (Cancelled)
March 4: Karen Levy & Alice Marwick — Unequal Harms: Socioeconomic Status, Race, and Gender in Privacy Research

ABSTRACT: (NOTE: this is a nascent idea and we're envisioning PRG as primarily a time for discussion of these issues, rather than a research presentation. We'll do a short presentation and then open it up to the group.) While privacy and surveillance affect different populations in disparate ways  (Gilman 2012), they are often treated as a monolithic concept by privacy researchers. While researchers in disciplines like women’s studies, sociology, and criminology have examined the impact of the welfare system (Eubanks 2006), the criminal justice system (Goffman 2014), and differential access to technology on privacy (Vickery 2014), these issues may not be labeled or easily recognized as privacy-related by the mainstream of privacy scholarship. Examining the major academic privacy conferences and scholarship of the last few years leads to the conclusion that the normative subject of much privacy research is middle-class, white, and male. However, it is the researchers, think-tank directors, advocates and activists attending these conferences whose work often informs public policy. By incorporating research that is often left out by privacy scholars, and by advocating for projects that discuss more diverse conceptualizations of “the user” or the subject, we can envision a future for privacy policy that incorporates a wider set of harms and needs, and encompasses the concerns of a larger base of citizens.

February 25: Luke Stark — NannyScam: The Normalization of Consumer-as-Surveillor

ABSTRACT: With the proliferation of surveillance technologies in the developed world over the past decade, norms of surveillance are appearing in novel forms and new ways across the terrain of everyday life. While there has been much academic scrutiny of certain aspects of this trend, such as surveillance in the workplace, the collection and analysis of consumer data through loyalty cards and other mechanisms, and location tracking through mobile digital devices, this paper explores an under-studied facet of quotidian surveillance: the construction of a new subject position, that of the consumer not just as surveilled but also as surveillor. The technologies and practices that are acting together to constitute this subject position are not new in themselves: these include the by-now familiar processes of online self-service, by which consumers are asked to navigate and analyze algorithmic systems in search of their desired product; consumer-grade surveillance products, which have progressed from nanny-cams and baby monitors to the much-maligned Elf on the Shelf toy; and, most recent and perhaps most troubling, systems of public-facing service-sector surveillance (such as Domino’s Pizza’s online order tracker) that give consumers oversight over service-sector workers without control or autonomy on either side. I argue that the confluence of these three sets of technical and social infrastructures places consumers midway within a hierarchy of everyday surveillance, recreating familiar inequalities of power, access and influence in the process. These systems require not only a material infrastructure for the normalization of surveillance, but also an ideological and emotional one. I suggest that making consumers responsible for the surveillance not only of their own affairs but also of the work of others classified as subordinate to them (such as children, service workers, and members of other disenfranchised groups) positions the figure of the consumer as the emotional manager of the experience of surveillance, and reifies a broader material trend within late capitalism: the equation of financial wealth with moral weight in a hierarchy of oversight that gives the wealthiest the most control and least accountability. Implicated within a chain of surveillance that has colonial echoes, the play of everyday social relations has thus become integrated into the broader networks of digital surveillance overseen by governments and corporate actors. In the paper, I will lay out a taxonomy of both historical and contemporary consumer surveillance products and services that supports my analysis above. I will explore how two trends (infantilization and expectations around parental power and responsibility, and the history of workplace surveillance) have become models for the transference of surveillance norms to consumers. Based on these models, I will examine possible legal and policy remedies for the political and social challenges posed by the normalization of the consumer-as-surveillor. These include well-known problems around a lack of what Cohen terms “semantic discontinuity” within daily life, leading to the diminution of creativity, agency and human flourishing; concerns also include the normalization of the emotional toll not simply of state surveillance, but also of surveillance by one’s fellow citizens in ways reminiscent of the living conditions within totalitarian states.

February 18: Brian Choi — A Prospect Theory of Privacy

ABSTRACT: Privacy law differs from other information law doctrines in that it is guided almost exclusively by moral intuition. What qualifies as a “violation” of privacy turns in large part on the moral reprehensibility of the act in question. By stark contrast, the intellectual property regimes are led primarily by economic considerations, and only secondarily by non-economic factors. Likewise, free speech doctrine is dominated by the value-agnostic “marketplace of ideas.” In other words, the major disconnect between intellectual “privacy” and intellectual “property” has been the relative priority assigned to the individual’s right to control versus the social cost-benefit of proscribing access by others. Yet, if data is a commodity of value, then the cultivation of such data is a social good, not just an individual entitlement. Where moral rhetoric has failed to advance robust recognition of privacy interests, utilitarian frameworks may prove more effective. In particular, prospect theory offers several useful insights. First, prospect theory posits that individuals should be allowed to rely on more than secrecy to guard informational resources. Presently, because recognition of privacy claims is weak, the production of private data depends heavily on secrecy. Thus, people will either invest in increasingly costly secrecy measures or opt out of the data economy entirely. Both lead to immense social waste. Second, prospect theory suggests that the assignment of informational rights encourages the sharing of information, thus reducing duplicative efforts to generate data. In the patent regime, the goal is commercialization of invention to promote the useful arts. In the privacy regime, the goal is the advancement of social progress through the collective pooling of private experience—which would otherwise remain stashed away. By viewing private data as a valuable commodity to be cultivated, rather than as an inevitable outgrowth of human interaction, we can lend fresh perspective as to what privacy is for.

February 11: Aimee Thomson — Cellular Dragnet: Active Cell Site Simulators and the Fourth Amendment

ABSTRACT: This Paper examines government use of active cell site simulators (ACSSs) and concludes that ACSS operations constitute a Fourth Amendment search. An ACSS, known colloquially as a stingray, triggerfish, or dirtbox, mimics a cell phone tower, forcing nearby cell phones to register with the device and divulge identifying and location information. Law enforcement officials regularly use ACSSs to identify and locate individuals, often with extreme precision, while sweeping up the identifying and location information of hundreds or thousands of third parties in the process. Despite the pervasive use of ACSSs at federal, state, and local levels, law enforcement duplicity concerning ACSS operations has prevented courts from closely examining their constitutionality. ACSS operations constitute a Fourth Amendment search under both the trespass paradigm and the privacy paradigm. Within the former, an ACSS emits radio signals that trespass on private "effects." Under the Jones reinvigoration of the trespass paradigm, radio signals "touch" cell phones for the purpose of obtaining information, constituting a Fourth Amendment trespass. Radio signals also trespass under common law property and tort regimes, and the Paper proposes a new rule, consistent with existing trespass jurisprudence, to target only those radio signals that intentionally and without consent cause an active physical change in the cell phone. Within the latter, ACSS operations constitute a Fourth Amendment search because they violate users' subjective expectations of privacy that society can and should recognize as reasonable, particularly if Fourth Amendment jurisprudence continues to eliminate secrecy as a proxy for privacy. Until courts decisively recognize warrantless ACSS operations as illegal, however, advocates and litigants can implement several interim remedial measures. An ACSS is an undeniably valuable law enforcement tool. Subjecting ACSS operations to Fourth Amendment strictures will not hinder their utility but rather ensure that this powerfully invasive technology is not abused.
February 4: Ira Rubinstein — Anonymity and Risk

ABSTRACT: The possibility of re-identifying anonymized data sets has sparked one of the most lively and important debates in privacy law. The credibility of anonymization, which anchors much of privacy law, is now open to attack. Critics of anonymization argue that almost any data set is vulnerable to a re-identification attack given the inevitability of related data becoming publicly available over time. Defenders of anonymization counter that despite the theoretical and demonstrated ability to mount such attacks, the likelihood of re-identification for most data sets remains minimal. As a practical matter, they argue, most data sets will remain anonymized using established techniques. Both sides of this debate are now entrenched in their positions, making increasingly technical arguments that are siloed from other relevant aspects of privacy law. As a result, a consensus is elusive. This article aims to help resolve this impasse between formalists (for whom mathematical proof is the touchstone of any meaningful policy) and pragmatists (for whom workable solutions always prevail over theoretical concerns) by reframing the debate away from the endpoint of anonymity and toward the process of risk management. In order to develop a clear, flexible, and workable legal framework for de-identification, we propose drawing from the related, more established area of data security. The law of data security is focused on mandating processes that decrease the likelihood of harm, even if threats are remote. Because there is no such thing as perfect protection, data security policy is decidedly focused on protocols, organizational structure, and the implementation of safeguards. Data security policy also largely refrains from overly specific rules, deferring instead to a reasonable adherence to industry standards. As the motivation for a consistent approach to de-identifying data increases, industry standards will inevitably develop in coordination with public policy and consumer protection goals. A reasonableness approach is also capable of incorporating both legal and technical solutions to the re-identification problem where appropriate. An effective strategy would be to combine contractual prohibitions on re-identification with scientific approaches to de-identification such as differential privacy and k-anonymity. In short, the law of de-identification should look more like the law of data security: process-based, contextual, and tolerant of risk. Our proposal also argues against a full embrace of the pragmatism and status quo advocated by defenders of anonymization. To begin with, the way this issue is framed is a problem. We join the critics in arguing that the terms “anonymous” and “anonymization” should be abandoned in our policy and discourse. Almost all uses of the term to describe data sets are misleading if not deceptive. Focusing on the language of risk will better set expectations. Additionally, anonymization critics have rightly pointed out that it is a mistake to rely too heavily upon risk assessments that cannot account for new data inputs and increasingly sophisticated analytical techniques. An effective risk-based approach to de-identification should accommodate risk models as well as important baseline protections for consumers. In this article, we aim to move past the criticism and defense of anonymization to propose a policy-driven and comprehensive risk-based de-identification framework.
Risk-based de-identification should do at least four things: 1) Cover all foreseeably personal data; 2) Tether degree of obligation and punishment to risk; 3) Embrace the full scope of potential harms and available remedies; and, perhaps most controversially, 4) Minimize or eliminate “release and forget” data sets. Risk-based de-identification is capable of bridging the gap between the formalists and the pragmatists. The approach recognizes that there is no perfect anonymity. Its focus is on process rather than endpoints. Yet effective risk-based de-identification also avoids a ruthless pragmatism by acknowledging the limits of current risk projection models and building in important protections for individual privacy. This policy-driven, integrated, and comprehensive approach will help us advance this important debate into the next stage of implementation.
January 28: Scott Skinner-Thompson — Outing Privacy

ABSTRACT: The government regularly outs information concerning people’s sexuality, gender identity, and HIV-status.  Notwithstanding the implications of such outings, the Supreme Court has yet to answer whether the Constitution contains a right to informational privacy—a right to limit the government’s ability to collect and disseminate personal information. In fact, the Court has on three occasions reluctantly “assumed” that there is such a right without authoritatively recognizing the right or defining its contours. This Article probes informational privacy theory and jurisprudence in order to better understand why the judiciary has been reluctant to fully embrace a robust constitutional right to informational privacy.  In short, the Article argues that while existing theories of informational privacy beneficially encourage us to broadly imagine the right and its possibilities, often focusing on informational privacy’s ability to promote individual dignity and autonomy, there is often a disconnect when courts attempt to translate current theories into workable doctrine.  The Article reorients and hones the focus of the purported right to informational privacy toward what the Due Process Clause suggests as the right’s two principal and more concrete values: preventing intimate, personal information from serving as the basis for potential discrimination and creating space for the formation of political thought.  By so doing, not only is a more precise theory of informational privacy constructed, but, instrumentally (and perhaps most importantly), courts will be more apt to recognize a constitutional informational privacy right and individuals will be better insulated from discrimination or marginalization on the basis of their intimate information or political beliefs.

Fall 2014

December 3: Katherine Strandburg — Discussion of Privacy News [which can include recent court decisions, new technologies or significant industry practices]

November 19: Alice Marwick — Scandal or Sex Crime? Ethical and Privacy Implications of the Celebrity Nude Photo Leaks

November 12: Elana Zeide — Student Data and Educational Ideals: examining the current student privacy landscape and how emerging information practices and reforms implicate long-standing social and legal traditions surrounding education in America. The Proverbial Permanent Record [PDF]

November 5: Seda Guerses — Let's first get things done! On division of labor and practices of delegation in times of mediated politics and politicized technologies
ABSTRACT: During particular historical junctures, characterized by crisis, deepening exploitation and popular revolt, referred to here as “sneaky moments”, hegemonic hierarchies are simultaneously challenged and reinvented, and, in the case of the latter, in due course subtly reproduced. The current divide between those engaged in the politics of technology and those participating in struggles for social justice requires reflection in this context. We argue that especially the delegation of technological matters to experienced "techies" or "technological platforms", and the corresponding flattening of politics and all political activities in the process of developing technical tools and platforms, exacerbate this problem. These tangible divergences in daily practice, however, are not only due to philosophical or political differences. They are also related to the ways in which specialization of work and scarcity of resources lead to a division of labor that often expresses itself across existing fault-lines of race, gender, class and age. Assuming that these moments in which collectives fall back on hegemonic divisions of labor are part and parcel of the divergence between technology politics and social justice politics, we want to ask: are these divisions of labor inevitable? In this paper, which is still in progress, we look specifically at the rise of consciousness about surveillance programs following the MENA uprisings and the Snowden revelations, and the way the counter-surveillance technology campaigns that ensued reconfigured the division of labor between social justice and tech freedom activists. Given the urgency of the moment as well as the momentum created in response to the revelations and news about government surveillance programs, numerous digital rights and freedoms organizations joined campaigns to promote encryption toolkits that "enhance privacy" and "reset the net" for "users around the globe". Through a close reading of these campaign websites, their forms of narration, vocabulary, design decisions, as well as their editorial and technical decisions, we explore how work has been divided between 'techies' and 'activists' and consider ways in which things could have been different.
October 29: Luke Stark — Discussion on whether “notice” can continue to play a viable role in protecting privacy in mediated communications and transactions given the increasing complexity of the data ecology and economy.
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online

Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)

Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead

October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue 

September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting

Spring 2014

April 30: Seda Guerses — Privacy is Security is a prerequisite for Privacy is not Security is a delegation relationship
ABSTRACT: Since the end of the 60s, computer scientists have engaged in research on privacy and information systems. Over the years, this research has led to a whole palette of “privacy solutions.” These solutions originate from diverse sub-fields of computer science, e.g., security engineering, databases, software engineering, HCI, and artificial intelligence. From a bird’s eye view, all of these researchers are studying privacy. However, a closer look reveals that each community of researchers relies on different, sometimes even conflicting, definitions of privacy, and on a variety of social and technical assumptions. For example, a good number of privacy researchers define privacy in terms of a known "security property": confidentiality. Others contest this approach and suggest that the binary understanding of privacy as concealment and violation of privacy as exposure is too simplistic and at times misleading. During my talk, I will lay out some of the elements of this particular contestation. I will do so by presenting the way in which the interplay between privacy and security is articulated by some of the researchers who participated in an empirical study of privacy research within computer science. This will be a follow-up to my PRG presentation in the Fall of 2013, where I presented some of the privacy definitions and conflicting assumptions as they were articulated by differential privacy researchers, data analysts and security engineers.

April 23: Milbank Tweed Forum Speaker — Brad Smith: The Future of Privacy
April 16: Solon Barocas — How Data Mining Discriminates - a collaborative project with Andrew Selbst, 2012-13 ILI Fellow
ABSTRACT: This presentation considers recent computer science scholarship on non-discriminatory data mining that has demonstrated—unwittingly, in some cases—the inherent limits of the notion of procedural fairness that grounds anti-discrimination law and the impossibility of avoiding a normative position on the fairness of specific outcomes.
April 9: Florencia Marotta-Wurgler — "The Anatomy of Privacy" - initial findings from her empirical study on privacy policies
April 2: Elana Zeide — "Student Privacy in Context: Intuition, Ignorance and Trust"
March 26: Heather Patterson — "When Health Information Goes Rogue: Privacy and Ethical Implications of Decontextualized Information Flows from Consumer Mobile Fitness Devices to Clinicians, Insurers, and Employers"
ABSTRACT: The rapid proliferation of health apps, digital sensors, and other participatory personal data collection devices points to an increasingly personalized future of health care, whereby individuals will track their own physiological and behavioral biomarkers in near real time and receive tailored feedback from an expanding team of commercial entities, social networks, and clinical care providers. Although much of the data processed by commercial sensors and apps is closely aligned with—and sometimes identical to—traditional health care data, its privacy and security are generally not subject to federal or state health privacy regulations by virtue of being held by non-HIPAA covered entities. Worryingly, the collection, integration, analysis, and distribution of this commercially tracked health data may expose individuals to the very privacy and security consequences that health privacy laws were developed to prevent, potentially disrupting the values of the health care system itself. This Article discusses technological, regulatory, and social drivers of digital health technology, reviews privacy harms associated with mobile self-tracking devices—focusing particularly on unconstrained and decontextualized information flows mediated by commercial “health data intermediaries”—and argues that the likely absorption of sensor data into the traditional medical ecosystem will present challenges to consumer privacy that current regulations are insufficient to address. It proposes that modern health privacy regimes ought to more fully take into account new data flow practices presented by emerging health technologies, both by affirmatively granting health technology users the right to exercise granular and contextual controls over their own health data, and by adopting by default an anti-discrimination framework preventing employers and insurers from penalizing individuals for health inferences made about them from sensors and other “Internet of Things” technologies.
March 12: Scott Bulua & Amanda Levendowski — Challenges in Combatting Revenge Porn

ABSTRACT: Revenge porn - sexually explicit images that are publicly shared online, without the consent of the pictured individual - has become a hot-button issue for journalists and academics, lawyers and activists. The phenomenon is surprisingly common: According to a McAfee survey, one in ten former partners threaten to post sexually explicit images of their exes online. An estimated 60 percent follow through. The harms caused by revenge porn can be very real - people featured on these sites often receive solicitations over social media, lose their jobs, or live in fear that their families and future employers will discover the photos. We will examine the challenges of combatting revenge porn: websites hosting this kind of content are afforded broad immunity under the Communications Decency Act Section 230; existing stalking and harassment laws rarely apply to the conduct of revenge porn submitters and websites; few privacy torts encompass the behaviors of revenge porn submitters; and proposed legislation often runs afoul of the First Amendment. Our discussion will center on why the revenge porn problem is so difficult to combat and offer suggestions on how to approach a revenge porn solution.

March 5: Claudia Diaz — In PETs we trust: tensions between Privacy Enhancing Technologies and information privacy law. The presentation is drawn from a paper, “Hero or Villain: The Data Controller in Privacy Law and Technologies,” with Seda Guerses and Omer Tene.

February 26: Doc Searls — Privacy and Business
ABSTRACT: Thoughtful conversations around privacy (such as ours) have tended to come mostly from legal, policy, social and ethical angles. When business comes up, it is often cast in the role of culprit. Today's online advertising business, for example, rationalizes surveillance, dismisses privacy concerns and opposes legislation and regulation protecting privacy. So, in today's privacy climate, one might ask, Can privacy be good for business? and, Can business be good for privacy? Doc Searls' answer to both questions is yes. Through ProjectVRM at Harvard's Berkman Center, Doc has been fostering developments that empower individuals as independent actors in the marketplace since 2006. The Intention Economy: When Customers Take Charge (Harvard Business Review Press, 2012) summarized that work and where it was headed at that time. Today there are more than a hundred VRM (vendor relationship management) developers, many of which are working specifically on protecting personal privacy and establishing its worth in the marketplace. Doc will report on that work, its background, where it is currently headed—and the growing role of privacy as both a market demand and a design goal.

February 19: Report from the Obfuscation Symposium, including brief tool demos and individual impressions

February 12: Ira Rubinstein — The Ethics of Cryptanalysis: Code Breaking, Exploitation, Subversion and Hacking

February 5: Felix Wu — The Commercial Difference, which grows out of a piece just published in the University of Chicago Legal Forum called The Constitutionality of Consumer Privacy Regulation

ABSTRACT: When it comes to the First Amendment, commerciality does, and should, matter. Building on the work of Meir Dan-Cohen and others, this article develops the view that the key distinguishing characteristic of commercial or corporate speech is that the interest at stake is “derivative,” in the sense that we care about the speech interest for reasons other than caring about the rights of the entity directly asserting a claim under the First Amendment. To say that the interest is derivative is not to say that it is unimportant, and one could find commercial and corporate speech interests to be both derivative and strong enough to apply heightened scrutiny to the restrictions that are the usual subject of debate, namely, restrictions on commercial advertising and restrictions on corporate campaigning. Distinguishing between derivative and intrinsic speech interests, however, helps to uncover two types of situations in which lesser or no scrutiny may be appropriate. The first is in the context of compelled speech. If the entity being compelled is not one whose rights we are concerned with, this undermines the rationale for subjecting speech compulsions to heightened scrutiny under the First Amendment. The second is in the context of speech among commercial entities. In these cases, the transaction may be among entities none of which merit First Amendment concern. Highlighting the difference that commerciality makes helps to explain better certain exceptions, or apparent exceptions, that existing case law already makes to heightened scrutiny, such as with respect to antitrust, securities, or labor law. It also provides insight in a number of current controversies, such as that over cigarette labeling. It has particularly important implications for consumer privacy regulation, suggesting that regulation of both the consumer data trade and commercial data collection merit significantly less scrutiny than might be applied to restrictions on the privacy-invasive practices of ordinary individuals.

January 29: Organizational meeting

Fall 2013

December 4: Akiva Miller — Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy? & Malte Ziewitz — What does transparency conceal?
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace
ABSTRACT: A basic question of labor law over the years has been how government can intervene to ensure that workers receive the information needed to exercise their rights. A contrary concern has been what rights property owners have under the First, Fourth, and Fifth Amendments and under federal labor law to restrict that information flow, both in their own interest and in interests claimed on behalf of their employees. A number of existing and proposed laws by state governments—including one before the Supreme Court this term—have sought to mandate physical access to employer property to be able to contact employees and/or customers. Since the goal of unions in gaining that access is to develop a "map" of the workplace, including a strong analysis of all social networks—who is friends with whom, churches and organizations people are affiliated with, and any other useful social information—such mandated access amounts to government granting an independent party access to a range of social network information about individuals in a workplace. Obviously, employers have their own power in the workplace, so the government strengthening this de facto bottom-up data mining by unions has historically been one of the key counterweights to that corporate power in the workplace. However, the Supreme Court has in recent decades struck down requirements by the NLRB to require that unions be given access to employer property in the name of state property rights and may this term strike down a state law mandating access in the name of the first amendment rights of the employer. This tilt of the law towards protecting employer rights to control data flow in their workplace has been a key factor in weakening labor unions and, as many argue, expanding economic inequality over the last generation. An implication of this analysis is: if a rights-based framework over information in the workplace has ill-served workers, are there implications for whether a rights-based framework over privacy may ill-serve consumers and citizens in broader debates on privacy and data collection?

November 6: Karen Levy — Beating the Box: Digital Enforcement and Resistance
ABSTRACT: I’ll be presenting some research from my dissertation, which (broadly) explores digital enforcement strategies – the use of technologies in place of, or in support of, traditional human rule enforcement regimes as a means to enact more ‘perfect’ behavioral regulation over subjects. Specifically, my research concerns new federal regulations mandating the electronic monitoring of long-haul truck drivers’ work time. Last year in PRG, I talked about how the organizational knowledge practices of trucking firms change around the proliferation of monitoring devices and divest truckers of occupational autonomy. This time around, I’d like to focus on two different areas: First, I explore how truckers (and others) resist monitoring using a variety of technical and organizational strategies, including physical tampering, data manipulation, and [something I’m calling] ‘collaborative omission.’ These tactics serve to construct new gaps between regulatory intent and social practice. But I could use your help in thinking them through in a more systematic way. Second, I consider the challenges faced by law enforcement officers—specifically, commercial vehicle inspectors—when enforcement efforts are augmented by machines. I’m finding that human/machine hybridity creates several challenges on the ground for these officers, including [something I’m calling] ‘decoy compliance’ among drivers that obfuscates actual legal noncompliance, as well as a last-mile problem in acquiring data from digital monitors in the trucks.

October 23: Brian Choi — The Third-Party Doctrine and the Required-Records Doctrine: Informational Reciprocals, Asymmetries, and Tributaries
ABSTRACT: Even as many have assailed the third-party doctrine and predicted its impending demise, few have heeded the parallel threat posed by the required-records doctrine. Although the third-party doctrine has been widely criticized as an overbroad exception to the Fourth Amendment, defining a coherent limiting principle has proved exceedingly difficult. The popular “mosaic” theory explains how the aggregation of many seemingly insignificant pieces of data from third parties can reveal comprehensive pictures of private activity ordinarily shielded by the Fourth Amendment. Yet, the inherent paradox of the mosaic theory is that it undercuts the project of distinguishing between third-party data that should be accessible without judicial warrant and third-party data that should not. Likewise, the required-records doctrine—which excludes records kept in compliance with general purpose recordkeeping requirements from the Fifth Amendment privilege against self-incrimination—has been so troubling that it has remained largely dormant since its creation. Efforts to construct a coherent limiting principle have been similarly lacking. Nevertheless, a recent set of tax enforcement cases has resuscitated the required-records doctrine and extended it to compel production of offshore bank account records from individual taxpayers. It is no coincidence that the third party doctrine also grew out of a tax enforcement case, holding that individual taxpayers may not shield financial records held by third-party banks. Three insights can be drawn from the project. The first is a warning that the required records doctrine is poised to follow in the footsteps of the third party doctrine. In both contexts, many of the early cases involved requests for financial records needed to prove tax evasion. With the third party doctrine, those early cases quickly generalized to encompass any document in the possession of a third party, including phone records, loan records, medical records, and more. With the required records doctrine, there is a similar shoddiness in the governing standard that would easily allow the same scope creep. If we dislike the current state of the third party doctrine, we should be wary of retracing the same steps under a different guise. Second, the juxtaposition provides a frame for unraveling our discomfort with both the required records doctrine and the third party doctrine. The easy cases might be those involving data that is readily obtainable through both avenues (e.g., duplicate records such as pay stubs or insurance forms), and those that are off limits under both doctrines (e.g., private diaries). On the other hand, the most vexing cases might be those involving data that can be obtained only through one avenue but not the other—allowing the government to pit one Amendment against another. For example, in the moment where a document is required but has not yet been created, that document still exists in thought only; the required records doctrine could demand it but the third party doctrine would not be able to reach it. That asymmetry also explains our discomfort with commercial aggregators who collect massive databases of personal information that the required records doctrine could never demand. Finally, the required records doctrine is a direct tributary of the third party doctrine, and has played a key role in shaping its watershed. 
Because business entities cannot assert the Fifth Amendment privilege, many businesses are automatically subject to recordkeeping and reporting requirements. Since the required data can include information about customers or other private citizens, the required records doctrine multiplies the potency of the third party doctrine.
October 16: Seda Guerses — Privacy is Don't Ask, Confidentiality is Don't Tell
ABSTRACT: Since the end of the 60s, computer scientists have engaged in research on privacy and information systems. Over the years, this research has led to a whole palette of “privacy solutions.” These solutions originate from diverse sub-fields of computer science, e.g., security engineering, databases, software engineering, HCI, and artificial intelligence. From a bird's eye view, all of these researchers are studying privacy. However, a closer look reveals that each community of researchers relies on different, sometimes even conflicting, definitions of privacy, and on a variety of social and technical assumptions. These researchers do have a tradition of assessing the (implicit) definitions and assumptions that underlie the studies in their respective sub-disciplines. However, a systematic evaluation of privacy research practice across the different computer science communities is so far absent. I hope to contribute to closing this research gap by presenting the preliminary results of an empirical study of privacy research in computer science.
October 9: Katherine Strandburg — Freedom of Association Constraints on Metadata Surveillance
ABSTRACT: Documents leaked this past summer confirm that the National Security Agency has acquired access to a huge database of domestic call traffic data, revealing information about times, dates, and numbers called. Although communication content traditionally has been the primary focus of concern about overreaching government surveillance, officials are increasingly interested in using sophisticated computer analysis of noncontent traffic data to “map” networks of associations. Despite the rising importance of digitally mediated association, current Fourth Amendment and statutory schemes provide only weak checks on government. The potential to chill association through overreaching relational surveillance is great. This Article argues that the First Amendment’s freedom of association guarantees can and do provide a proper framework for regulating relational surveillance and suggests how these guarantees might apply to particular forms of analysis of traffic data.
October 2: Joris van Hoboken — A Right to be Forgotten
ABSTRACT: In this talk I will present my ongoing work on the so-called 'right to be forgotten' and the underlying questions relating to balancing privacy and freedom of expression in the context of online services. This right to be forgotten was officially proposed in 2012 by the European Commission as a new element of the EU data protection rules. I will discuss the policy backgrounds of this proposal for a strengthened right to erasure and in particular its relation to new types of publicity facilitated by online intermediaries (search engines and social media in particular). As is clear from an analysis of the proposal which I recently conducted for the European Commission (attached as background), the right to be forgotten has captured the attention of many but fails to address, let alone solve, the hard issues at the interface of data privacy law, media law and intermediary liability regulations. While the EC proposal may actually be considered 'a right to be forgotten', the underlying questions of how to regulate personal data in online services remain.

September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting

Spring 2013

May 1: Akiva Miller — What Do We Worry About When We Worry About Price Discrimination
April 24: Hannah Bloch-Wehba and Matt Zimmerman — National Security Letters [NSLs]

April 17: Heather Patterson — Contextual Expectations of Privacy in User-Generated Mobile Health Data: The Fitbit Story
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA

April 3: Ira Rubinstein — Voter Privacy: A Modest Proposal
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day

March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
March 6: Mariana Thibes — Privacy at Stake: Challenging Issues in the Brazilian Context
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news

Fall 2012

December 5: Martin French — Preparing for the Zombie Apocalypse: The Privacy Implications of (Contemporary Developments in) Public Health Intelligence
November 28: Scott Bulua and Catherine Crump — A framework for understanding and regulating domestic drone surveillance
November 21: Lital Helman — Corporate Responsibility of Social Networking Platforms
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
November 7: Sophie Hood — New Media Technology and the Courts: Judicial Videoconferencing
October 24: Matt Tierney and Ian Spiro — Cryptogram: Photo Privacy in Social Media
October 17: Frederik Zuiderveen Borgesius — Behavioural Targeting. How to regulate?

October 10: Discussion of 'Model Law'

October 3: Agatha Cole — The Role of IP Address Data in Counter-Terrorism Operations & Criminal Law Enforcement Investigations: Looking towards the European framework as a model for U.S. Data Retention Policy
September 26: Karen Levy — Privacy, Professionalism, and Techno-Legal Regulation of U.S. Truckers
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data