Privacy Research Group

Welcome to the Privacy Research Group home page. The Privacy Research Group is a weekly meeting of students, professors, and industry professionals who are passionate about exploring, protecting, and understanding privacy in the digital age.

  • Visit the PRG Blog: Check out all the latest news from the Privacy Research Group.

Joining the PRG: Because we deal with early-stage work in progress, attendance at meetings of the Privacy Research Group is generally limited to researchers and students who can commit to ongoing participation in the group. To discuss joining the group, please contact Professor Helen Nissenbaum or Professor Katherine Strandburg. If you are interested in these topics, but cannot commit to ongoing participation in the PRG, you may wish to join the PRG-All mailing list.

PRG Calendar — Fall 2014

  • September 10: Organizational meeting
  • September 17: Sebastian Zimmeck - Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
  • September 24: Christopher Sprigman - Microsoft "Extraterritorial Warrants" Issue
  • October 1: Giancarlo Lee - Automatic Anonymization of Medical Documents
  • October 8: Joris van Hoboken
  • October 15: Karen Levy
  • October 22: Matthew Callahan
  • October 29: Luke Stark and Kate Crawford
  • November 5: Seda Guerses
  • November 12: Elana Zeide
  • November 19: Alice Marwick
  • December 3: Katherine Strandburg

Spring 2014

  • January 29: Organizational meeting
  • February 5: Felix Wu: "The Commercial Difference," which grows out of a piece just published in the University of Chicago Legal Forum called The Constitutionality of Consumer Privacy Regulation

    ABSTRACT: When it comes to the First Amendment, commerciality does, and should, matter. Building on the work of Meir Dan-Cohen and others, this article develops the view that the key distinguishing characteristic of commercial or corporate speech is that the interest at stake is “derivative,” in the sense that we care about the speech interest for reasons other than caring about the rights of the entity directly asserting a claim under the First Amendment. To say that the interest is derivative is not to say that it is unimportant, and one could find commercial and corporate speech interests to be both derivative and strong enough to apply heightened scrutiny to the restrictions that are the usual subject of debate, namely, restrictions on commercial advertising and restrictions on corporate campaigning. Distinguishing between derivative and intrinsic speech interests, however, helps to uncover two types of situations in which lesser or no scrutiny may be appropriate. The first is in the context of compelled speech. If the entity being compelled is not one whose rights we are concerned with, this undermines the rationale for subjecting speech compulsions to heightened scrutiny under the First Amendment. The second is in the context of speech among commercial entities. In these cases, the transaction may be among entities none of which merit First Amendment concern. Highlighting the difference that commerciality makes helps to explain better certain exceptions, or apparent exceptions, that existing case law already makes to heightened scrutiny, such as with respect to antitrust, securities, or labor law. It also provides insight in a number of current controversies, such as that over cigarette labeling. It has particularly important implications for consumer privacy regulation, suggesting that regulation of both the consumer data trade and commercial data collection merit significantly less scrutiny than might be applied to restrictions on the privacy-invasive practices of ordinary individuals.

  • February 12: Ira Rubinstein: "The Ethics of Cryptanalysis - Code Breaking, Exploitation, Subversion and Hacking"
  • February 19: Report from the Obfuscation Symposium, including brief tool demos and individual impressions
  • February 26: Doc Searls: "Privacy and Business"

    ABSTRACT: Thoughtful conversations around privacy (such as ours) have tended to come mostly from legal, policy, social and ethical angles. When business comes up, it is often cast in the role of culprit. Today's online advertising business, for example, rationalizes surveillance, dismisses privacy concerns and opposes legislation and regulation protecting privacy. So, in today's privacy climate, one might ask: Can privacy be good for business? And can business be good for privacy? Doc Searls' answer to both questions is yes. Through ProjectVRM at Harvard's Berkman Center, Doc has been fostering developments that empower individuals as independent actors in the marketplace since 2006. The Intention Economy: When Customers Take Charge (Harvard Business Review Press, 2012) summarized that work and where it was headed at that time. Today there are more than a hundred VRM (vendor relationship management) developers, many of which are working specifically on protecting personal privacy and establishing its worth in the marketplace. Doc will report on that work, its background, where it is currently headed, and the growing role of privacy as both a market demand and a design goal.
  • March 5: Claudia Diaz: "In PETs we trust: tensions between Privacy Enhancing Technologies and information privacy law." The presentation is drawn from a paper, "Hero or Villain: The Data Controller in Privacy Law and Technologies" (with Seda Guerses and Omer Tene), available at: https://www.cosic.esat.kuleuven.be/publications/article-2365.pdf
  • March 12: Scott Bulua & Amanda Levendowski: "Challenges in Combatting Revenge Porn"

    ABSTRACT: Revenge porn - sexually explicit images that are publicly shared online, without the consent of the pictured individual - has become a hot-button issue for journalists and academics, lawyers and activists. The phenomenon is surprisingly common: According to a McAfee survey, one in ten former partners threaten to post sexually explicit images of their exes online. An estimated 60 percent follow through. The harms caused by revenge porn can be very real - people featured on these sites often receive solicitations over social media, lose their jobs, or live in fear that their families and future employers will discover the photos. We will examine the challenges of combatting revenge porn: websites hosting this kind of content are afforded broad immunity under the Communications Decency Act Section 230; existing stalking and harassment laws rarely apply to the conduct of revenge porn submitters and websites; few privacy torts encompass the behaviors of revenge porn submitters; proposed legislation often runs afoul of the First Amendment. Our discussion will center on why the revenge porn problem is so difficult to combat and offer suggestions on how to approach a revenge porn solution.

    For further reading, here are two publications by Amanda Levendowski:

    Using Copyright to Combat Revenge Porn, 3 N.Y.U. J. Intell. Prop. & Ent. L. (forthcoming 2014), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2374119.

    Our Best Weapon Against Revenge Porn: Copyright Law?, The Atlantic (Feb. 4, 2014), available at http://www.theatlantic.com/technology/archive/2014/02/our-best-weapon-against-revenge-porn-copyright-law/283564.
  • March 26: Heather Patterson: "When Health Information Goes Rogue: Privacy and Ethical Implications of Decontextualized Information Flows from Consumer Mobile Fitness Devices to Clinicians, Insurers, and Employers"

    ABSTRACT: The rapid proliferation of health apps, digital sensors, and other participatory personal data collection devices points to an increasingly personalized future of health care, whereby individuals will track their own physiological and behavioral biomarkers in near real time and receive tailored feedback from an expanding team of commercial entities, social networks, and clinical care providers. Although much of the data processed by commercial sensors and apps is closely aligned with—and sometimes identical to—traditional health care data, its privacy and security are generally not subject to federal or state health privacy regulations by virtue of being held by non-HIPAA-covered entities. Worryingly, the collection, integration, analysis, and distribution of this commercially tracked health data may expose individuals to the very privacy and security consequences that health privacy laws were developed to prevent, potentially disrupting the values of the health care system itself. This Article discusses technological, regulatory, and social drivers of digital health technology, reviews privacy harms associated with mobile self-tracking devices—focusing particularly on unconstrained and decontextualized information flows mediated by commercial “health data intermediaries”—and argues that the likely absorption of sensor data into the traditional medical ecosystem will present challenges to consumer privacy that current regulations are insufficient to address. It proposes that modern health privacy regimes ought to more fully take into account new data flow practices presented by emerging health technologies, both by affirmatively granting health technology users the right to exercise granular and contextual controls over their own health data, and by adopting by default an anti-discrimination framework preventing employers and insurers from penalizing individuals for health inferences made about them from sensors and other “Internet of Things” technologies.
  • April 2: Elana Zeide: "Student Privacy in Context: Intuition, Ignorance and Trust"
  • April 9: Florencia Marotta-Wurgler: "The Anatomy of Privacy" - initial findings from her empirical study on privacy policies
  • April 16: Solon Barocas: "How Data Mining Discriminates" - a collaborative project with Andrew Selbst, 2012-13 ILI Fellow

    ABSTRACT: This presentation considers recent computer science scholarship on non-discriminatory data mining that has demonstrated—unwittingly, in some cases—the inherent limits of the notion of procedural fairness that grounds anti-discrimination law and the impossibility of avoiding a normative position on the fairness of specific outcomes.
  • April 23: Milbank Tweed Forum Speaker - Brad Smith: "The Future of Privacy"
  • April 30: Seda Guerses: "Privacy is Security is a prerequisite for Privacy is not Security is a delegation relationship"

    ABSTRACT: Since the end of the 1960s, computer scientists have engaged in research on privacy and information systems. Over the years, this research has led to a whole palette of “privacy solutions.” These solutions originate from diverse sub-fields of computer science, e.g., security engineering, databases, software engineering, HCI, and artificial intelligence. From a bird’s eye view, all of these researchers are studying privacy. However, a closer look reveals that each community of researchers relies on different, sometimes even conflicting, definitions of privacy, and on a variety of social and technical assumptions. For example, a good number of privacy researchers define privacy in terms of a known "security property": confidentiality. Others contest this approach and suggest that the binary understanding of privacy as concealment and violation of privacy as exposure is too simplistic and at times misleading. During my talk, I will lay out some of the elements of this particular contestation. I will do so by presenting the way in which the interplay between privacy and security is articulated by some of the researchers who participated in an empirical study of privacy research within computer science. This will be a follow-up to my PRG presentation in the Fall of 2013, where I presented some of the privacy definitions and conflicting assumptions as they were articulated by differential privacy researchers, data analysts and security engineers.

Fall 2013

  • September 11: Organizational meeting
  • September 18: Discussion - NSA/Pew Survey
  • September 25: Luke Stark: "The Emotional Context of Information Privacy"
  • October 2: Joris van Hoboken: "A Right to be Forgotten"

    ABSTRACT: In this talk I will present my ongoing work on the so-called 'right to be forgotten' and the underlying questions relating to balancing privacy and freedom of expression in the context of online services. This right to be forgotten was officially proposed in 2012 by the European Commission as a new element of the EU data protection rules. I will discuss the policy backgrounds of this proposal for a strengthened right to erasure and in particular its relation to new types of publicity facilitated by online intermediaries (search engines and social media in particular). As is clear from an analysis of the proposal which I recently conducted for the European Commission (attached as background), the right to be forgotten has captured the attention of many but fails to address, let alone solve, the hard issues at the interface of data privacy law, media law and intermediary liability regulations. While the EC proposal may actually be considered 'a right to be forgotten', the underlying questions of how to regulate personal data in online services remain.
  • October 9: Katherine Strandburg: "Freedom of Association Constraints on Metadata Surveillance"

    ABSTRACT: Documents leaked this past summer confirm that the National Security Agency has acquired access to a huge database of domestic call traffic data, revealing information about times, dates, and numbers called. Although communication content traditionally has been the primary focus of concern about overreaching government surveillance, officials are increasingly interested in using sophisticated computer analysis of noncontent traffic data to “map” networks of associations. Despite the rising importance of digitally mediated association, current Fourth Amendment and statutory schemes provide only weak checks on government. The potential to chill association through overreaching relational surveillance is great. This Article argues that the First Amendment’s freedom of association guarantees can and do provide a proper framework for regulating relational surveillance and suggests how these guarantees might apply to particular forms of analysis of traffic data.
  • October 16: Seda Gürses: "Privacy is Don't Ask, Confidentiality is Don't Tell"

    ABSTRACT: Since the end of the 60s, computer scientists have engaged in research on privacy and information systems. Over the years, this research has led to a whole palette of "privacy solutions." These solutions originate from diverse sub-fields of computer science, e.g., security engineering, databases, software engineering, HCI, and artificial intelligence. From a bird's eye view, all of these researchers are studying privacy. However, a closer look reveals that each community of researchers relies on different, sometimes even conflicting, definitions of privacy, and on a variety of social and technical assumptions. These researchers do have a tradition of assessing the (implicit) definitions and assumptions that underlie the studies in their respective sub-disciplines. However, a systematic evaluation of privacy research practice across the different computer science communities is so far absent. I hope to contribute to closing this research gap by presenting the preliminary results of an empirical study of privacy research in computer science.
  • October 23: Brian Choi: "The Third-Party Doctrine and the Required-Records Doctrine: Informational Reciprocals, Asymmetries, and Tributaries"

    ABSTRACT: Even as many have assailed the third-party doctrine and predicted its impending demise, few have heeded the parallel threat posed by the required-records doctrine. Although the third-party doctrine has been widely criticized as an overbroad exception to the Fourth Amendment, defining a coherent limiting principle has proved exceedingly difficult. The popular “mosaic” theory explains how the aggregation of many seemingly insignificant pieces of data from third parties can reveal comprehensive pictures of private activity ordinarily shielded by the Fourth Amendment. Yet, the inherent paradox of the mosaic theory is that it undercuts the project of distinguishing between third-party data that should be accessible without judicial warrant and third-party data that should not. Likewise, the required-records doctrine—which excludes records kept in compliance with general purpose recordkeeping requirements from the Fifth Amendment privilege against self-incrimination—has been so troubling that it has remained largely dormant since its creation. Efforts to construct a coherent limiting principle have been similarly lacking. Nevertheless, a recent set of tax enforcement cases has resuscitated the required-records doctrine and extended it to compel production of offshore bank account records from individual taxpayers. It is no coincidence that the third-party doctrine also grew out of a tax enforcement case, which held that individual taxpayers may not shield financial records held by third-party banks. Three insights can be drawn from the project. The first is a warning that the required-records doctrine is poised to follow in the footsteps of the third-party doctrine. In both contexts, many of the early cases involved requests for financial records needed to prove tax evasion. With the third-party doctrine, those early cases quickly generalized to encompass any document in the possession of a third party, including phone records, loan records, medical records, and more. With the required-records doctrine, there is a similar shoddiness in the governing standard that would easily allow the same scope creep. If we dislike the current state of the third-party doctrine, we should be wary of retracing the same steps under a different guise. Second, the juxtaposition provides a frame for unraveling our discomfort with both the required-records doctrine and the third-party doctrine. The easy cases might be those involving data that is readily obtainable through both avenues (e.g., duplicate records such as pay stubs or insurance forms), and those that are off limits under both doctrines (e.g., private diaries). On the other hand, the most vexing cases might be those involving data that can be obtained only through one avenue but not the other—allowing the government to pit one Amendment against another. For example, in the moment where a document is required but has not yet been created, that document still exists in thought only; the required-records doctrine could demand it but the third-party doctrine would not be able to reach it. That asymmetry also explains our discomfort with commercial aggregators who collect massive databases of personal information that the required-records doctrine could never demand. Finally, the required-records doctrine is a direct tributary of the third-party doctrine, and has played a key role in shaping its watershed. Because business entities cannot assert the Fifth Amendment privilege, many businesses are automatically subject to recordkeeping and reporting requirements. Since the required data can include information about customers or other private citizens, the required-records doctrine multiplies the potency of the third-party doctrine.
  • October 30: Danah Boyd: "Networked Harm"

    ABSTRACT: How might we think, across disciplines, through the social, legal, technical, and ethical issues that arise when new technologies create unexpected connections and people get implicated or harmed in cases outside of their control? To do so, we will offer a framework for thinking about networked harm and briefly lay out three cases that we think offer interesting contours for conversation (Maryland v. King, Google's Gmail litigation, and CA SB568). Questions on the table: Are our current models of harm too limited for thinking through cases where new technologies implicate people in new ways? How do we get past property models that focus on joint rights when thinking of people being implicated in each other's data traces? What are good frameworks for moving from an individual-centric model of harm to a network-based one?
  • November 6: Karen Levy: "Beating the Box: Digital Enforcement and Resistance"

    ABSTRACT: I’ll be presenting some research from my dissertation, which (broadly) explores digital enforcement strategies – the use of technologies in place of, or in support of, traditional human rule enforcement regimes as a means to enact more ‘perfect’ behavioral regulation over subjects. Specifically, my research concerns new federal regulations mandating the electronic monitoring of long-haul truck drivers’ work time. Last year in PRG, I talked about how the organizational knowledge practices of trucking firms change around the proliferation of monitoring devices and divest truckers of occupational autonomy. This time around, I’d like to focus on two different areas. First, I explore how truckers (and others) resist monitoring using a variety of technical and organizational strategies, including physical tampering, data manipulation, and [something I’m calling] ‘collaborative omission.’ These tactics serve to construct new gaps between regulatory intent and social practice. But I could use your help in thinking them through in a more systematic way. Second, I consider the challenges faced by law enforcement officers, specifically commercial vehicle inspectors, when enforcement efforts are augmented by machines. I’m finding that human/machine hybridity creates several challenges on the ground for these officers, including [something I’m calling] ‘decoy compliance’ among drivers that obfuscates actual legal noncompliance, as well as a last-mile problem in acquiring data from digital monitors in the trucks.
  • November 13: Akiva Miller: "Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy?" & Heather Patterson: Results of a recent Pew Research Privacy Survey
  • November 20: Nathan Newman: "Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace"

    ABSTRACT: A basic question of labor law over the years has been how government can intervene to ensure that workers receive the information needed to exercise their rights. A contrary concern has been what rights property owners have, under the First, Fourth, and Fifth Amendments and under federal labor law, to restrict that information flow, both in their own interest and in interests claimed on behalf of their employees. A number of existing and proposed state laws, including one before the Supreme Court this term, have sought to mandate physical access to employer property to be able to contact employees and/or customers. Since the goal of unions in gaining that access is to develop a "map" of the workplace, including a strong analysis of all social networks — who is friends with whom, which churches and organizations people are affiliated with, and any other useful social information — such mandated access amounts to government granting an independent party access to a range of social network information about individuals in a workplace. Obviously, employers have their own power in the workplace, so government strengthening of this de facto bottom-up data mining by unions has historically been one of the key counterweights to corporate power in the workplace. However, the Supreme Court has in recent decades struck down NLRB requirements that unions be given access to employer property in the name of state property rights, and may this term strike down a state law mandating access in the name of the First Amendment rights of the employer. This tilt of the law towards protecting employer rights to control data flow in their workplaces has been a key factor in weakening labor unions and, as many argue, expanding economic inequality over the last generation. An implication of this analysis: if a rights-based framework over information in the workplace has ill-served workers, are there implications for whether a rights-based framework over privacy may ill-serve consumers and citizens in broader debates on privacy and data collection?
  • December 4: Akiva Miller: "Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy?" & Malte Ziewitz: "What does transparency conceal?"

Spring 2013

Fall 2012

  • September 19: Nathan Newman: "Cost of Lost Privacy: Google, Antitrust and Control of User Data"
  • September 26: Karen Levy: "Privacy, Professionalism, and Techno-Legal Regulation of U.S. Truckers"
  • October 3: Agatha Cole: "The Role of IP address Data in Counter-Terrorism Operations & Criminal Law Enforcement Investigations: Looking towards the European framework as a model for U.S. Data Retention Policy"
  • October 10: Discussion of 'Model Law'
  • October 17: Frederik Zuiderveen Borgesius: "Behavioural Targeting. How to regulate?"
  • October 24: Matt Tierney and Ian Spiro: "Cryptogram: Photo Privacy in Social Media"
  • November 7: Sophie Hood: "New Media Technology and the Courts: Judicial Videoconferencing"
  • November 14: Travis Hall: "Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception"
  • November 21: Lital Helman: "Corporate Responsibility of Social Networking Platforms"
  • November 28: Scott Bulua and Catherine Crump: "A framework for understanding and regulating domestic drone surveillance"
  • December 5: Martin French: "Preparing for the Zombie Apocalypse: The Privacy Implications of (Contemporary Developments in) Public Health Intelligence"