Privacy Research Group


The Privacy Research Group is a weekly meeting of students, professors, and industry professionals who are passionate about exploring, protecting, and understanding privacy in the digital age.

Joining PRG

Because we deal with early-stage work in progress, attendance at meetings of the Privacy Research Group is generally limited to researchers and students who can commit to ongoing participation in the group. To discuss joining the group, please contact Tom McBrien. If you are interested in these topics, but cannot commit to ongoing participation in PRG, you may wish to join the PRG-All mailing list.
 
PRG Student Fellows—Student members of PRG have the opportunity to become Student Fellows. Student Fellows help bring the exciting developments and ideas of the Research Group to the outside world. The primary Student Fellow responsibility is to maintain an active web presence through the ILI student blog, reporting on current events and developments in the privacy field and bringing the world of privacy research to a broader audience. Fellows also have the opportunity to help promote and execute exciting events and colloquia, and even present to the Privacy Research Group. Student Fellow responsibilities are a manageable and enjoyable addition to the regular meeting attendance required of all PRG members. The Student Fellow position is the first step for NYU students into the world of privacy research. Interested students should email Student Fellow Coordinator Tom McBrien with a brief (1-2 paragraph) statement of interest or for more information.


PRG Calendar

 

Spring 2021: via Zoom, Fridays 2:30-4:00pm
 

April 16: Tomer Kenneth — Public Officials on Social Media

     ABSTRACT: Presidents, Governors, Mayors, and numerous other public officials use social media on a daily basis. They convey all sorts of messages: sharing parts of their personal lives, voicing their opinions about daily events, asking people to vote for them, blocking hecklers, and also giving governmental orders and directives. How should we understand public officials’ social media activity? In a recent line of (relatively famous) cases, courts found an easy answer to this question: they distinguish between public users, who are bound by the First Amendment, and private users, who are protected by it. This, I argue, is too easy. I trace the origins of this approach in political theory and criticize the dichotomy as insufficient, at least with regard to officials’ social media use. I suggest that courts, social media companies, officials, and the public all have an interest in further distinguishing between public messages and official messages. I then present some (very rudimentary) thoughts about how to do so.

April 9: Thomas Streinz — The Flawed Dualism of Facebook's Oversight Board

     ABSTRACT: In the pandemic-plagued fall of 2020, right before the presidential election in the United States, Facebook launched its “Oversight Board”. The Board’s mission is “to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook’s content policies”. The creation of the Oversight Board is an exercise in corporate institution building in response to the protracted debate about Facebook’s role as a global speech regulator. In establishing what is often but misleadingly called the “Supreme Court of Facebook”, the company seeks to shield itself from some of the contestation inherent in determining which content is allowed on the world’s leading social media platform with more than 2.7 billion monthly active users. The Oversight Board assesses this question based on Facebook’s “community guidelines”, the self-created and evolving substantive standards promulgated by Facebook itself. The Board is asked to pay particular attention to the impact of removing content on human rights norms protecting free expression, thereby entangling a self-created platform standard with established (if contested) standards of legality under international human rights law. The Board’s decisions are binding on Facebook unless their implementation would violate “the law”, thereby entangling the independently adjudicated legality of content under Facebook’s internal law with the legality of online content under national law as interpreted by national courts, some of which operate under oversight by regional human rights courts. The bylaws for Facebook’s Oversight Board appear to limit its jurisdiction to cases which are not legally determined by “the law” from the outset and let Facebook’s internal legal department make this determination. While Facebook’s desire to avoid legal liability is understandable, “the law” of online content legality is far from clear or stable; rather, it is complex, contested, and rapidly evolving. Letting Facebook’s legal department, rather than the Oversight Board itself, decide whether the permissibility of content is determined (only) by Facebook law or (also) by national law significantly curtails the Oversight Board’s ambit, power, and potential impact. It is also a missed opportunity from an international human rights law perspective, as it deprives the Oversight Board of the opportunity to clarify that content is legal under Facebook’s internal law in light of international human rights standards on free expression – thereby (indirectly) contesting determinations by national jurisdictions to the contrary. This reveals a “flawed dualism” that treats the multiple legalities of online content as neatly separated, or at least separable, instead of taking seriously their entanglement and potential for deliberative contestation. This paper proceeds on the assumption that Facebook’s Oversight Board is best understood as a global regulatory body tasked with adjudicating the permissibility of content on a global platform. It shows how the rules governing permissible content on Facebook are entangled with national and international law. While disentangling these multiple legalities is neither possible nor desirable, a decision needs to be made whether or not Facebook’s Oversight Board has jurisdiction in cases that are at least in part determined by “the law”. I argue that the Oversight Board itself should decide which cases are subject to review – not Facebook’s legal department. In advancing these arguments, the paper also contributes to the broader debate about the legitimacy of self-regulating global speech platforms.

April 2: Gabe Nicholas — Have Your Data and Eat it Too: Bridging the Gap between Data Sharing and Data Protection

     ABSTRACT: In policymaking circles, conversations about data protection (e.g., privacy, security, liability) and data sharing (e.g., interoperability, portability) have largely been siloed from one another, and some technology firms have exploited this gap in order to avoid regulatory scrutiny on both fronts. In this nascent paper, I want to argue that data protection and data sharing should not be seen as two distinct values but rather as parts of the same regulatory project. The contextual integrity framework shows that both data protection and data sharing relate to the appropriate flow of information. Consumer data that is neither protected nor shareable is thus in the untenable position of being deemed too dangerous to let consumers bring to another service, yet too valuable to corporate innovation to limit how firms can use it and where they can send it. Regulators should be particularly concerned about industries, such as agricultural tech, that "have their cake and eat it too" by avoiding both data protection and data sharing scrutiny.

March 26: Ira Rubinstein — Voter Microtargeting and the Future of Democracy

     ABSTRACT: In 2020, Rep. Anna Eshoo (D-CA) introduced H.R. 7014, a bill that prohibits online platforms, including social media, ad networks, and streaming services, from targeting political ads based on the demographic or behavioral data of users. According to Eshoo, “Microtargeting political ads fractures our open democratic debate into millions of private, unchecked silos, allowing for the spread of false promises, polarizing lies, disinformation, fake news, and voter suppression.” Her bill is one of several efforts at both the federal and state levels to protect the democratic process in response to a series of threats ranging from Russian hacking and trolling, to voter suppression, to disinformation campaigns and various other forms of computational propaganda. All of these threats to the democratic process rely to some extent on voter microtargeting. There are some who would argue that the Eshoo bill is dead in the water on First Amendment grounds. Indeed, earlier proposals to regulate online campaign advertising have pursued a safer path by focusing on transparency and privacy measures in an effort to avoid constitutional challenges. This Article suggests that such measures are insufficient to prevent further deterioration of democratic politics. Moreover, it is clear that if Congress fails to act going forward into the 2024 presidential campaign, it will surrender the regulation of campaign speech to powerful private actors like Facebook and Twitter, which engage in such tasks only in a highly self-interested fashion. Thus, this Article seeks to defend more radical solutions—such as Eshoo’s ban on most voter microtargeting—even in the face of potentially fatal First Amendment concerns. This Article briefly considers the (dire) state of contemporary First Amendment doctrine and the obstacles it poses to the Eshoo bill, but its primary goal is to develop a compelling justification for regulating voter microtargeting. It analyzes the harms of voter microtargeting from four distinct but complementary perspectives. First, it considers privacy concerns but, given the Court’s record when free speech and privacy come into conflict, concludes that privacy arguments alone will not suffice. Second, it examines arguments grounded in personal autonomy, that is, in the basic idea that democracy works only if citizens are self-governing and vote for candidates of their own choice. Obviously, democracy breaks down if voters are subjected to violence, intimidation, or other forms of direct interference with their voting rights, but voter microtargeting may also be understood as a form of political manipulation that interferes with voting rights. This line of argument quickly runs into the difficult task of distinguishing mass political persuasion—which is at the core of our First Amendment values—from mass political manipulation. If the latter is beyond the reach of campaign regulation, this potentially transforms the First Amendment from a bulwark of democracy into a suicide pact. Next, the Article turns to campaign activities that are widely perceived as both wrong and actionable under existing election law. These include the use of minority status as a key targeting factor in voter suppression activities. Both federal and state law prohibit such activities, but why exactly are they wrong? And what does this teach us about the need to regulate voter microtargeting? Finally, the Article brings these arguments together by emphasizing free choice as a core principle of fair electoral processes. This principle justifies the regulation of both overt and covert forms of improper influence on voters, including voter microtargeting. In sum, this Article argues that despite First Amendment concerns, we should regulate voter microtargeting on the grounds that, in this case, we enhance the free choice of voters by imposing constraints on how candidates can target their messages to voters.

March 19: Stav Zeitouni

     ABSTRACT: Control as an organizing principle for privacy has fallen out of favor in scholarly legal writing, although it continues to be the animating principle behind many notice-and-consent models on the books. Its dominance can be traced back to the work of Alan Westin, who relied heavily on surveys of lay perceptions of privacy. Over the years, philosophers and lawyers have pointed to the many downsides of this approach, but they have relied on very particular conceptions of control to do so. Meanwhile, psychological research on privacy and control has developed different understandings of these terms. This early-stage presentation will begin to sketch the convergences and divergences between these approaches, with the goal of building out a more robust understanding of information privacy as it plays out in interpersonal interactions.

March 12: Ngozi Nwanta

     ABSTRACT: Many development enthusiasts and financial institutions have posited that the collection and use of personally identifiable information as digital identity is crucial for efficiency, security, financial inclusion, healthcare, welfare distribution, and economic development. Hence, in recent years, several African countries, with funding from the World Bank and other financial institutions, and relying on the Indian Aadhaar identity model, have embarked on identification projects that would confer digital identity on individuals. Although digital identity holds much promise for the continent, it also poses serious harms, individually and collectively, when the power of such data (amassed through collection, processing, and use by tech companies, the state, and other actors) is harnessed while inadequately regulated and improperly defined, and when possession of a digital identity is situated as a prerequisite to accessing infrastructures. This potentially increases the risk of excluding individuals and challenges citizenship rights where the state relies on technology to resolve deep-rooted societal, institutional, and developmental challenges. In this early stage of the research, I analyze these harms through a human rights and capacity lens, discuss the inability of data protection models to resolve the associated harms, and propose a regional (African) framework for state responsibility in digital identification.

March 5: Aileen Nielsen

     ABSTRACT: In 2005, while defending post-9/11 domestic electronic surveillance programs, Judge Posner famously asserted that computers cannot invade anyone’s privacy. Rather, Judge Posner argued, automated surveillance programs should be seen as privacy-enhancing tools because they reduce the need for human review of information. Since that time, legal scholarship has come out on both sides of this issue, with arguments grounded in legislative history, case law, and survey work to defend or dismiss distinctions between the human and the robotic gaze. In this discussion of early-stage work, I present a vignette study designed to elucidate the distinctions that matter to laypeople in response to realistic scenarios of automated or human surveillance.

February 26: Tom McBrien

     ABSTRACT: In this (early-stage) project, I aim to examine the relationship between the First Amendment and data privacy, expanding on some recent First Amendment scholarship to develop a clearer justification for general privacy statutes under the First Amendment. The Supreme Court's explanation of the First Amendment's "coverage"—the threshold question of what is considered "speech" and what is not—is incomplete and inconsistent. This is surprising because of how important it is: labeling something as speech greatly reduces the government's ability to regulate it. This inconsistency has, in part, resulted in "First Amendment opportunism"—incentives for litigants to argue that wide swaths of activity are constitutionally protected speech. Since the 1970s, and especially throughout the past fifteen years, many activities previously considered conduct have been ruled to be speech. See, e.g., Virginia State Pharmacy Board v. Virginia Citizens Consumer Council (1976) (advertising); Citizens United v. FEC (2010) (corporate political advertisements); Brown v. Entertainment Merchants Association (2011) (selling violent video games to minors); Janus v. AFSCME (2018) (public-sector union dues); United States v. Caronia (2012) (marketing drugs for "off-label" purposes the FDA did not approve of). This opportunism may soon have serious impacts on legislatures' ability to protect people's privacy. Many legal scholars, courts, and advocacy organizations now accept that collecting and selling data is—in many cases—speech protected by the First Amendment and subject to intermediate or strict scrutiny. The Supreme Court went so far as to say (in an aside) that "The creation and dissemination of information is speech for First Amendment purposes." Sorrell v. IMS Health (2011). The ongoing legal challenge to the AI company Clearview AI's practices centers largely on whether Illinois's Biometric Information Privacy Act violates Clearview's First Amendment rights. Concerns about Sorrell v. IMS Health pervaded the drafting and amendment process of the California Consumer Privacy Act, and more states are passing their own versions of general privacy laws that may be challenged on First Amendment grounds if they sufficiently threaten Internet companies' business models. Certain privacy proposals, such as Balkin and Zittrain's information fiduciary idea, seem motivated largely by an attempt to side-step these First Amendment issues. While data=speech proponents have some powerful precedent and momentum on their side, there is one glaring inconsistency: the same legal reasoning that justifies treating Internet users' personal information as Internet companies' protected speech should apply similarly to other areas of law such as trade secrecy, antitrust, evidence rules, securities regulation, and workplace sexual harassment—to name just a few. Yet, when faced with First Amendment challenges to some of these regulations, the Supreme Court has pointedly ignored the arguments. In an age where the Supreme Court finds speech everywhere it looks, what makes these areas special? Data=speech proponents struggle to explain this inconsistency other than by hinting or claiming outright that these fields are also unconstitutional. Some First Amendment scholars, such as Erica Goldberg, Robert Post, and Amanda Shanor, have moved away from searching for a unified theory of First Amendment coverage premised in legal doctrine. Instead, they search for a consistent descriptive account of First Amendment coverage rooted in activities' social contexts. Examining how certain activities—such as political speech, pornography, bigoted public comments, bigoted workplace comments, testimony during trials, or collusion between business competitors—interact with and impact social contexts helps these scholars describe why the Supreme Court is likely to label something as speech or not. I plan to apply their insights to the activity of selling user data to third parties. I hope this analysis will help to more clearly situate privacy regulations between the universe of constitutional "uncovered" laws and the universe of unconstitutional speech-suppressing laws.

February 19: Ari Ezra Waldman

     ABSTRACT: This is a project about misinformation. While legal scholars have been active over the last four years identifying legal definitions of and developing legal responses to the problem of misinformation, including assessing the constitutionality of those responses under the current Supreme Court's First Amendment jurisprudence, less attention has been paid to how the law is already changing as a result of misinformation. This project brings together literature in sociology and social network theory about how information spreads with the doctrinal standards used in judicial review of government action. Consider, for example, the seemingly frivolous litigation stemming from the former president's baseless challenges to only some of the results of the 2020 election. Each of those claims was based on things that simply weren't true. A claim that Republican vote watchers were not allowed to stand the appropriate number of feet away from vote counting was false. Statements in complaints alleging massive voter fraud were baseless, and found to be so when lawyers were asked by judges in court to detail their claims, or when the time came to file motions. That is, the basic rule against lying in court—and the associated professional sanctions for doing so—appeared to erect a wall against law being made based on misinformation. But that is not always the case. Nor is this particular set of baseless claims the last one courts will face. In fact, I would like to argue that the law is deeply and structurally vulnerable to erosion by misinformation because standards and practices either insufficiently envisioned the possibility of lies or were explicitly created to achieve partisan and power gains with the support of misinformation. AEW Notes: Please note that this is a very early-stage project, currently at the research phase, so this discussion is more about hypotheses and thinking through avenues for research than anything else.

February 12: Albert Fox Cahn

     ABSTRACT: So-called “smart cities” technologies encompass an ever-growing list of data-driven government services, impacting everything from the way our children are taught, to the way our streets are cleaned, to the ways that we are policed. While this category of state and municipal infrastructure is not novel, public investment in smart cities services and infrastructure has been significantly accelerated by the transition to remote work and remote education during the COVID-19 pandemic. This interactive discussion will evaluate the current status of smart cities development, the existing legal structures governing use and procurement, and early work on the Just Cities development guidelines. These guidelines aim to empower local activists around the country to engage more effectively in the debate over how smart cities technologies are used.

February 5: Salome Viljoen & Seb Benthall — Data Market Discipline: From Financial Regulation to Data Governance

     ABSTRACT: Privacy regulation has traditionally been the remit of consumer protection, with privacy harm cast as a contractual harm arising from the interpersonal exchanges between data subjects and data collectors. This frames surveillance of people by companies as primarily a consumer harm. In this paper, we argue that the modern economy of personal data is better understood as an extension of the financial system. The data economy intersects with capital markets in ways that may increase systemic and systematic financial risks. We contribute a new regulatory approach to privacy harms: as a source of risk correlated across households, firms, and the economy as a whole. We consider adapting the tools of macroprudential regulation designed to mitigate financial crises to the market for personal data. We identify both promises and pitfalls in viewing individual privacy through the lens of the financial system.

January 29: Mason Marks  — Biosupremacy: Data Protection, Antitrust, and Monopolistic Power Over Human Behavior

     ABSTRACT: For decades, leading technology companies have acquired other firms, avoided antitrust enforcement, and grown so powerful that their influence over human affairs equals that of many governments. Their power stems from data collected by devices that people welcome into their homes, workplaces, schools, and public spaces. When paired with artificial intelligence, this vast surveillance network profiles people to sort them into increasingly specific categories. However, this "sensing net" was not implemented solely to observe and analyze human behavior; it was also designed to exert control. Accordingly, it is paired with a matching network of influence, the "control net," that leverages intelligence from the sensing net to manipulate people's behavior, nudging them through personalized newsfeeds, targeted advertising, dark patterns, and other forms of coercive choice architecture. Dual networks of sensing and control form a global digital panopticon, a modern analog of Bentham's eighteenth-century building designed for total surveillance. It monitors billions of students, employees, patients, prisoners, and members of the public. Moreover, it enables a pernicious type of influence that Foucault defined as biopower: the ability to measure and influence populations to shift social norms.  This Article argues that a handful of companies are vying for a dominant share of biopower to achieve biosupremacy, monopolistic power over humanity's future. It analyzes how firms concentrate biopower through conglomerate and concentric mergers that add software and devices to their sensing and control networks. Acquiring sensors in new markets enables cross-market data flows that send information back to the acquiring firm across sectoral boundaries. Conglomerate mergers also expand the control net, establishing beachheads from which platforms exert biopower to assault social norms.  Many fields of law would benefit from incorporating the concepts of biopower and biosupremacy into legal doctrine. This Article focuses on antitrust, a branch of law concerned with restraining private power. It argues that antitrust regulators should expand their conception of consumer harm to account for the costs imposed by panoptic surveillance and the impact of coercive choice architecture on product quality. They should revive conglomerate merger control, abandoned in the 1970s, and update it for the Digital Age. Specifically, they should halt mergers that concentrate biopower, prohibit the use of dark patterns, and mandate data siloes to block cross-market data flows. To prevent platforms from locking consumers into panoptic walled gardens, which concentrates biopower, regulators should force tech companies to implement data portability and platform interoperability. 
 

Fall 2020
 

December 4: Florencia Marotta-Wurgler & David Stein — Teaching Machines to Think Like Lawyers

     ABSTRACT: On Friday, we will present the framework of a large-scale project we worked on this summer, in which a team of fourteen law students coded and annotated a large sample of privacy policies along over 100 dimensions to measure the extent to which such policies comply with the CCPA, the GDPR, and current US guidelines. We also sought to replicate the coding of other public databases of privacy policies used for machine learning analysis. In addition to presenting the framework, we will discuss the variables we track, coding tools, annotation protocols, and some preliminary findings, in hopes of receiving feedback that we can incorporate into the project going forward.

November 20: Andrew Weiner

     ABSTRACT: In the wake of a data breach, data processors and controllers race to figure out what happened, why it happened, and how to remediate the issues that led to the breach. This process often is the responsibility of technical analysts and investigators, engineers who work on the breached product or platform, and cybersecurity subject matter experts. When the breach contains personal information of a data subject, the race to complete the investigative process intensifies. While understanding and remediating a personal information data breach is obviously high priority, the investigation process does not necessarily include a key question: do we have to tell anyone about this whole shindig? The question of whether to notify parties external to the breached organization is not a decision just for policy or communications teams. Rather, a patchwork of worldwide data breach notification statutes and regulations creates legal obligations that lawyers must navigate to determine whether a data breach triggers data subject and/or regulator notification. Data breach notification laws have important goals—giving data subjects transparency into who has their data and incentivizing organizations to take data protection and cybersecurity seriously. However, the quality and effectiveness of the laws dictate how well those goals are met. This presentation will discuss data breach notification laws' successes, shortcomings, and inconsistencies and propose improvements to the patchwork of laws.

November 13: Cancelled for Northeast Privacy Scholars Workshop

November 6: Mark Verstraete — Cybersecurity Spillovers

     ABSTRACT: The state of cybersecurity is notoriously in disarray. Data security incidents occur frequently, and firms have little apparent motivation to increase the security that they provide. Yet this talk examines an interesting feature of the cybersecurity ecosystem; namely, that some clients of cloud storage receive a security windfall by virtue of other clients who use the same provider. We label this phenomenon "cybersecurity spillovers" and examine how these spillovers occur, conceptual uncertainty about these benefits, and their normative payoffs. To understand this phenomenon, the talk canvasses the unique structural features of cybersecurity that allow additional security to pass between clients. The essential feature is the public nature of the cloud: many different clients use the same cloud infrastructure, some clients will need higher security, and the service provider has strong incentives to apply the resulting changes at the platform level rather than for specific clients. Finally, the talk addresses how this additional security may be compensated and what this means for security spillovers and externalities more generally. In closing, we gesture at the broader normative implications that follow from cybersecurity spillovers.

October 30: Ari Ezra Waldman — Privacy Law's Two Paths

     ABSTRACT: Privacy law is on a path to irrelevance. But it doesn't have to be that way. Both leading privacy laws today—the GDPR and the CCPA—reflect a "Twentieth Century Synthesis" (Britton-Purdy, Grewal, Kapczynski, and Rahman (forthcoming)) in which law is oriented toward neoliberal and managerial values like efficiency, productivity, and innovation (Cohen 2019). The GDPR and the CCPA explicitly shift regulatory responsibilities from government to regulated entities themselves, a form of "collaborative governance" that relies on best practices, compliance, codes of conduct, internal corporate structures, and assessments based on executive attestation (Kaminski 2018; Waldman 2020). Even the proposals in the U.S. Congress and in various state capitols aimed at enhancing privacy protections follow this path. We seem incapable of stepping outside the narrow Overton Window handed to us by the neoliberal consensus of the last 50 years. Why is that? One reason is that the social practice of privacy law itself is built to normalize corporate data extraction rather than rein it in. The social practice of privacy law is the detritus of privacy law on the ground. Indeed, law is best understood as a series of practices that endogenously construct legal rules from the ground up. The social practices of privacy law—everything from chief privacy offices to privacy impact assessments, audit trails to click-to-agree buttons, statements like "we care about your privacy" or "your privacy is important to us" to notifications about privacy policy changes—are performative acts that express what privacy law is and normalize it as what privacy law should be. Normalization is the social and psychological process through which common things come to be understood as acceptable, ordinary, and, ultimately, good. Put another way, the privacy laws we have today not only represent the Twentieth Century Synthesis, but they entrench it through a series of on-the-ground performances that construct our privacy-related identities and normalize privacy self-governance as the only possible outcome. The implication of this is profound: scholars talk about needing the political will to shift to a regulatory system that reflects a political economy approach to governance. The privacy laws we have today are eroding that political will among policymakers, privacy professionals, and even individuals.

October 23: Aileen Nielsen — Tech's Attention Problem

     ABSTRACT: The plasticity of human preferences and behavior is a readily accepted proposition in economics and psychology. Within this domain of plasticity is the experimentally demonstrated fact that humans likely have only a certain number of “mental cycles” per day with which to make decisions and defend their interests. This poses particularly compelling problems for human privacy and autonomy in an era where we increasingly spend our time in digital environments that are privately owned and engineered to maximize the utility of the entities who own that infrastructure. This work makes an argument in four parts in response to the private infrastructures that drive our digital attention economies. First, I discuss how human attention is under attack in the digital sector, fueled by scientific knowledge from psychology and economics, and show how the resulting attention harms routinely fail to achieve legal recognition and protection. Second, I propose a taxonomy of simple metrics that can, with both scientific justification and conceptual simplicity, operationalize attention in common digital products. Third, I experimentally measure likely marketplace reactions to these metrics in a realistic scenario relating to mobile apps. Finally, I examine a variety of regulatory and policy measures that could be implemented with such attention metrics. Across these parts, I make the case that there are practical but insufficiently explored options to quantify and regulate pervasive consumer harms in digital attention economies.

October 16: Caroline Alewaerts — UN Global Pulse

     ABSTRACT: UN Global Pulse is the UN Secretary-General’s initiative on big data and artificial intelligence (AI) for sustainable development, humanitarian action, and peace. It was established a decade ago based on a recognition that digital data offer opportunities to gain a better understanding of changes in human well-being and to get real-time feedback on how well policy responses are working. UN Global Pulse has since been expanding the boundaries of its research and policy work, ensuring close alignment with the transformative innovation efforts of the Executive Office of the Secretary-General, in which it operates. In this presentation, I will provide an overview of UN Global Pulse’s technology and policy work both within and outside the UN, and of how it is working to accelerate the discovery, development, and adoption of privacy-protective and rights-based big data and AI applications that can transform how we operate and help communities everywhere achieve the Sustainable Development Goals (SDGs).

October 9: Salome Viljoen — Data as a Democratic Medium: From Individual to Relational Data Governance

     ABSTRACT: Discussions about personal data often involve claims (explicit or implicit) regarding what data is or is “like,” why we should care about its collection, and what we should do about datafication—the transformation of information about people into a commodity. This Article evaluates the legal merit of these claims via their empirical and normative consequences. To do so, it engages with two enduring problems vexing U.S. data governance. First, the “sociality problem”: how can data governance law better account for the downstream social effects of data collection? Second, the “legitimacy problem”: how can data governance law distinguish legitimate and illegitimate downstream uses without relying on the failed mechanism of individual notice and choice? Part One documents the significance of data processing for the digital economy and evaluates how the predominant legal regimes that discipline data collection—contract and privacy law—code data as an individual medium. This conceptualization is referred to throughout the Article as “data as individual medium” (DIM). Part Two explores the disconnect between DIM and how the data political economy produces social value and social risk. It first shows that data’s capacity to transmit social relational meaning is central to how data produces economic value and social risk, yet is legally irrelevant under DIM. Part Three evaluates two prominent proposals that have emerged in response to datafication: propertarian and dignitarian reforms to data governance. While both approaches have merit, because they conceive of data as an individual medium they are unable to resolve either the sociality problem or the legitimacy problem. Part Four proposes an alternative approach: data as a democratic medium (DDM). DDM fosters data governance that is attentive to data’s social effects as well as to the purposes that drive data production and the conditions under which it occurs. Part Four concludes by outlining key principles and directions for what DDM regimes could look like in practice.

October 2: Gabe Nicholas — Surveillance Delusion: Lessons from the Vietnam War

     ABSTRACT: Surveillance systems allow states to “see” into the lives of individuals. Sometimes that vision is an illusion — other times it is a delusion. In this paper, I offer a case study of one such delusional surveillance system: Operation Igloo White, a sensor-software system built in 1968 by the US Air Force to track and bomb North Vietnamese supply lines in the Laotian jungle. Through technical documentation, declassified military histories, and original interviews with veterans, I argue that the Air Force overlooked, ignored, or hid a preponderance of evidence that Igloo White failed to accurately “see” what was happening on the ground. The US government tricked itself, willfully or otherwise, into believing its surveillance system was effective. I call this phenomenon surveillance delusion. State surveillance systems are particularly susceptible to delusion because unlike surveillance capitalist systems, they have no profit motive to be accurate. As James Scott argues, modern states use surveillance to make citizens “legible” in order to govern society by scientific principles. This imperative depends not on accurate observation but on the stringent, invisible categorization of individuals. Harms of surveillance delusion are thus externalized to the surveilled. Part One of this paper defines surveillance delusion and contextualizes it in the broader surveillance studies literature on dataism and datafication. Part Two gives a case study of Operation Igloo White and describes three areas in which it failed to “see”: data integrity, or how well a sensor measures an intended ground truth; data quality, or how well a metric works for its intended purpose; and data politics, or how control over data allocates power. Part Three explains how the Air Force deluded itself about these blindnesses. Part Four reconsiders three modern domestic surveillance systems through the lens of delusion — app-based contact tracing, predictive policing, and the US-Mexico border wall.

September 25: Angelina Fisher & Thomas Streinz — Confronting Data Inequality

     ABSTRACT: Data conveys significant social, economic, and political power. For this reason, unequal control over data – a pervasive form of digital inequality – is a problem for economic development, human agency, and collective self-determination that needs to be addressed. This paper takes some steps in this direction by analyzing the extent to which extant law facilitates unequal control over data and by suggesting ways in which legal interventions might lead to more equal control over data. The paper distinguishes between unequal control over data as an asset, on the one hand, and unequal control over the infrastructures that generate, process, store, transfer, and use data, on the other. We hypothesize that the former is a function of the latter. Existing law tends to ignore the salience of infrastructural control over data and seeks to regulate data as an object to be transferred, protected, and shared. Private law technologies are dominant in this regard, while states increasingly bind themselves under international economic law not to redistribute or localize control over data. While there are no easy solutions to the problem of data inequality, we suggest that retaining flexibility to experiment with different approaches, demanding enhanced transparency, pooling data and bargaining power, and creating mechanisms for differentiated and conditional access to data may help in confronting data inequality going forward. We begin the paper by considering how data is conceptualized. Here we highlight two broad discourses: one sees data as an asset or resource that creates value for different entities (e.g., enterprises, communities, countries); the other sees data not as “a natural kind” but rather as a relational and contextual concept that is shaped by assemblages of digital infrastructures, social and organizational practices, histories and ideologies, and legal instruments, practices, and institutions. We bring these two discourses together to (a) illustrate the relationship between data (as an output) and the infrastructures that constitute it (“data infrastructures”) and (b) examine specific inequalities that flow from unequal control over data infrastructures. We highlight the outsized role of commercial enterprises in control over data infrastructures with reference to e-commerce, communication, and IoT data management platforms. We also consider the role that cloud computing plays in centralizing infrastructural control. (Part I) Having presented the problematique with which the paper is concerned, we turn to the role of legal technologies. Here we consider (a) to what extent different legal regimes and instruments, in their current approaches to the regulation of data, facilitate, entrench, or simply ignore infrastructural control and (b) how law can be deployed to address the type of data inequality we identify in the paper. We posit that to do the latter, the regulation of data through law needs to move away from conceptualizing data-as-an-asset and instead focus on regulating data infrastructures. (Part II) In the concluding part, we put forth some interventions that might usefully be deployed to address data inequality. Although the proposals in this section apply to a variety of actors and contexts, our primary audience here is policymakers in developing countries. Our suggestions include a mix of legal, technical, and political interventions, urging contextual, experimental, and flexible approaches. These include requirements for transparency, pooling of political power, building bottom-up data governance arrangements that provide differentiated and conditional access to data, and leveraging international organizations as international data governors. (Part III)

September 18: Danny Huang — Watching IoTs That Watch Us: Studying IoT Security & Privacy at Scale

     ABSTRACT: Many consumers today are increasingly concerned about IoT security and privacy. There is much media hype about home cameras being hacked or voice assistants eavesdropping on conversations. However, it is unclear exactly what the security and privacy threats are, how prevalent they are, and what the implications are for users, policymakers, and manufacturers, because there is no reliable large-scale data from the wild. In my talk, I'll describe a new method to systematically collect a large-scale real-world dataset of IoT device network traffic. I'll show examples of security and privacy threats we identified on various IoT devices, along with a discussion of the potential legal issues.

September 11: Seb Benthall — Accountable Context for Web Applications

     ABSTRACT: We consider the challenge of accountable privacy, fairness, and ethics for web applications. We begin with a case for studying specific software architectures. Computer science paradigms have come under disciplinary criticism from STS and engineering disciplines. These criticisms are defused by introducing realistic and general models from software engineering. We find some of the criticisms of the computer science literature, especially those against the use of formalism, misplaced; they are better understood as limits imposed by the design of the web and web services. The web is designed to maximize connectivity and minimize control: this entails that web resources are exposed indiscriminately to myriad social contexts. Web services (e.g., REST) are designed to allow for "anarchic scalability" and "independent deployability" across multiple, and shifting, organizational boundaries. We find this networked, relational, inter-organizational nature of web services to be an otherwise unaddressed reason for the poor accountability of computer systems. We propose a system of labeling and documentation for web applications that would remedy this opacity.


Spring 2020

April 29: Aileen Nielsen — "Pricing" Privacy: Preliminary Evidence from Vignette Studies Inspired by Economic Anthropology
April 22: Ginny Kozemczak — Dignity, Freedom, and Digital Rights: Comparing American and European Approaches to Privacy
April 15: Privacy and COVID-19 Policies
April 8: Ira Rubinstein — Urban Privacy
April 1: Thomas Streinz — Data Governance in Trade Agreements: Non-territoriality of Data and Multi-Nationality of Corporations
March 25: Christopher Morten — The Big Data Regulator, Rebooted: Why and How the FDA Can and Should Disclose Confidential Data on Prescription Drugs
March 4: Lilla Montanagni — Regulation 2018/1807 on the Free Flow of Non-Personal Data: Yet Another Piece in the Data Puzzle in the EU?
February 26: David Stein — Flow of Data Through Online Advertising Markets
February 19: Seb Benthall — Towards Agent-Based Computational Modeling of Informational Capitalism
February 12: Yafit Lev-Aretz & Madelyn Sanfilippo — One Size Does Not Fit All: Applying a Single Privacy Policy to (too) Many Contexts
February 5: Jake Goldenfein & Seb Benthall — Data Science and the Decline of Liberal Law and Ethics
January 29: Albert Fox Cahn — Reimagining the Fourth Amendment for the Mass Surveillance Age
January 22: Ido Sivan-Sevilia — Europeanization on Demand? The EU's Cybersecurity Certification Regime Between the Rationale of Market Integration and the Core Functions of the State

 

Fall 2019

December 4: Ari Waldman — Discussion on Proposed Privacy Bills
November 20: Margarita Boyarskaya & Solon Barocas [joint work with Hanna Wallach] — What is a Proxy and why is it a Problem?
November 13: Mark Verstraete & Tal Zarsky — Data Breach Distortions
November 6: Aaron Shapiro — Dynamic Exploits: Calculative Asymmetries in the On-Demand Economy
October 30: Tomer Kenneth — Who Can Move My Cheese? Other Legal Considerations About Smart Devices
October 23: Yafit Lev-Aretz & Madelyn Sanfilippo — Privacy and Religious Views
October 16: Salome Viljoen — Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought
October 9: Katja Langenbucher — Responsible A.I. Credit Scoring
October 2: Michal Shur-Ofry — Robotic Collective Memory   
September 25: Mark Verstraete — Inseparable Uses in Property and Information Law
September 18: Gabe Nicholas & Michael Weinberg — Data, To Go: Privacy and Competition in Data Portability 
September 11: Ari Waldman — Privacy, Discourse, and Power


Spring 2019

April 24: Sheila Marie Cruz-Rodriguez — Contractual Approach to Privacy Protection in Urban Data Collection
April 17: Andrew Selbst — Negligence and AI's Human Users
April 10: Sun Ping — Beyond Security: What Kind of Data Protection Law Should China Make?
April 3: Moran Yemini — Missing in "State Action": Toward a Pluralist Conception of the First Amendment
March 27: Nick Vincent — Privacy and the Human Microbiome
March 13: Nick Mendez — Will You Be Seeing Me in Court? Risk of Future Harm, and Article III Standing After a Data Breach
March 6: Jake Goldenfein — Through the Handoff Lens: Are Autonomous Vehicles No-Win for Users
February 27: Cathy Dwyer — Applying the Contextual Integrity Framework to Cambridge Analytica
February 20: Ignacio Cofone & Katherine Strandburg — Strategic Games and Algorithmic Transparency
February 13: Yan Shvartzshnaider — Going Against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis
January 30: Sabine Gless — Predictive Policing: In Defense of 'True Positives'


Fall 2018

December 5: Discussion of current issues
November 28: Ashley Gorham — Algorithmic Interpellation
November 14: Mark Verstraete — Data Inalienabilities
November 7: Jonathan Mayer — Estimating Incidental Collection in Foreign Intelligence Surveillance
October 31: Sebastian Benthall — Trade, Trust, and Cyberwar
October 24: Yafit Lev-Aretz — Privacy and the Human Element
October 17: Julia Powles — AI: The Stories We Weave; The Questions We Leave
October 10: Andy Gersick — Can We Have Honesty, Civility, and Privacy Online? Implications from Evolutionary Theories of Animal and Human Communication
October 3: Eli Siems — The Case for a Disparate Impact Regime Covering All Machine-Learning Decisions
September 26: Ari Waldman — Privacy's False Promise
September 19: Marijn Sax — Targeting Your Health or Your Wallet? Health Apps and Manipulative Commercial Practices
September 12: Mason Marks — Algorithmic Disability Discrimination
 

Spring 2018

May 2: Ira Rubinstein — Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
April 25: Elana Zeide — The Future Human Futures Market
April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
April 11: John Nay — Natural Language Processing and Machine Learning for Law and Policy Texts
April 4: Sebastian Benthall — Games and Rules of Information Flow
March 28: Yan Shvartzshnaider and Noah Apthorpe — Discovering Smart Home IoT Privacy Norms using Contextual Integrity
February 28: Thomas Streinz — TPP’s Implications for Global Privacy and Data Protection Law

February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
February 7: Madeline Bryd and Philip Simon — Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
January 31: Madelyn Sanfilippo — Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks
January 24: Jason Schultz and Julia Powles — Discussion about the NYC Algorithmic Accountability Bill


Fall 2017

November 29: Kathryn Morris and Eli Siems — Discussion of Carpenter v. United States
November 15: Leon Yin — Anatomy and Interpretability of Neural Networks
November 8: Ben Zevenbergen — Contextual Integrity for Password Research Ethics?
November 1: Joe Bonneau — An Overview of Smart Contracts
October 25: Sebastian Benthall — Modeling Social Welfare Effects of Privacy Policies
October 18: Sue Glueck — Future-Proofing the Law
October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
October 4: Finn Brunton — 'The Best Surveillance System we Could Imagine': Payment Networks and Digital Cash
September 27: Julia Powles — Promises, Polarities & Capture: A Data and AI Case Study
September 20: Madelyn Rose Sanfilippo and Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
September 13: Ignacio Cofone — Anti-Discriminatory Privacy
 

Spring 2017

April 26: Ben Zevenbergen — Contextual Integrity as a Framework for Internet Research Ethics
April 19: Beate Roessler — Manipulation
April 12: Amanda Levendowski — Conflict Modeling
April 5: Madelyn Sanfilippo — Privacy as Commons: A Conceptual Overview and Case Study in Progress
March 29: Hugo Zylberberg — Reframing the Fake News Debate: Influence Operations, Targeting-and-Convincing Infrastructure and Exploitation of Personal Data
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa will lead discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial. 
March 8: Ira Rubinstein — Privacy Localism
March 1: Luise Papcke — Project on (Collaborative) Filtering and Social Sorting
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) — Privacy and Innovation
February 15: Argyri Panezi — Academic Institutions as Innovators but also Data Collectors: Ethical and Other Normative Considerations
February 8: Katherine Strandburg — Decisionmaking, Machine Learning and the Value of Explanation
February 1: Argyro Karanasiou — A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
January 25: Scott Skinner-Thompson — Equal Protection Privacy
 

Fall 2016

December 7: Tobias Matzner — The Subject of Privacy
November 30: Yafit Lev-Aretz — Data Philanthropy
November 16: Helen Nissenbaum — Must Privacy Give Way to Use Regulation?
November 9: Bilyana Petkova — Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
November 2: Scott Skinner-Thompson — Recording as Heckling
October 26: Yan Shvartzshnaider — Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
October 19: Madelyn Sanfilippo — Privacy and Institutionalization in Data Science Scholarship
October 12: Paula Kift — The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform

October 5: Craig Konnoth — Health Information Equity
September 28: Jessica Feldman — The Amidst Project
September 21: Nathan Newman — UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
September 14: Kiel Brennan-Marquez — Plausible Cause
 

Spring 2016

April 27: Yan Shvartzshnaider — Privacy and IoT AND Rebecca Weinstein — Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken — Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gurses]

April 13: Florencia Marotta-Wurgler — Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)

April 6: Ira Rubinstein — Big Data and Privacy: The State of Play

March 30: Clay Venetis — Where is the Cost-Benefit Analysis in Federal Privacy Regulation?

March 23: Daisuke Igeta — An Outline of Japanese Privacy Protection and its Problems

                  Johannes Eichenhofer — Internet Privacy as Trust Protection

March 9: Alex Lipton — Standing for Consumer Privacy Harms

March 2: Scott Skinner-Thompson — Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]

February 24: Daniel Susser — Against the Collection/Use Distinction

February 17: Eliana Pfeffer — Data Chill: A First Amendment Hangover

February 10: Yafit Lev-Aretz — Data Philanthropy

February 3: Kiel Brennan-Marquez — Feedback Loops: A Theory of Big Data Culture

January 27: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
 

Fall 2015

December 2: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race AND Kiel Brennan-Marquez — Spokeo and the Future of Privacy Harms
November 18: Angèle Christin — Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 11: Joris van Hoboken — Privacy, Data Sovereignty and Crypto
November 4: Solon Barocas and Karen Levy — Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton — Of Fembots and Men: Privacy Insights from the Ashley Madison Hack

October 21: Paula Kift — Human Dignity and Bare Life: Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and Nizan Geslevich Packin — Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser — What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin — Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé — Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson — Performative Privacy

September 9: Kiel Brennan-Marquez — Vigilantes and Good Samaritan
 

Spring 2015

April 29: Sofia Grafanaki — Autonomy Challenges in the Age of Big Data
                 David Krone — Compliance, Privacy and Cyber Security Information Sharing
                 Edwin Mok — Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing
                 Dan Rudofsky — Modern State Action Doctrine in the Age of Big Data


April 22: Helen Nissenbaum 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken From Collection to Use Regulation? A Comparative Perspective
April 8: Bilyana Petkova Privacy and Federated Law-Making in the EU and the US: Defying the Status Quo?
April 1: Paula Kift — Metadata: An Ontological and Normative Analysis

March 25: Alex Lipton — Privacy Protections for the Secondary User of Consumer-Watching Technologies

March 11: Rebecca Weinstein (Cancelled)
March 4: Karen Levy & Alice Marwick — Unequal Harms: Socioeconomic Status, Race, and Gender in Privacy Research


February 25: Luke Stark — NannyScam: The Normalization of Consumer-as-Surveillor


February 18: Brian Choi A Prospect Theory of Privacy

February 11: Aimee Thomson — Cellular Dragnet: Active Cell Site Simulators and the Fourth Amendment

February 4: Ira Rubinstein — Anonymity and Risk

January 28: Scott Skinner-Thompson Outing Privacy

 

Fall 2014

December 3: Katherine Strandburg — Discussion of Privacy News [which can include recent court decisions, new technologies or significant industry practices]

November 19: Alice Marwick — Scandal or Sex Crime? Ethical and Privacy Implications of the Celebrity Nude Photo Leaks

November 12: Elana Zeide — Student Data and Educational Ideals: examining the current student privacy landscape and how emerging information practices and reforms implicate long-standing social and legal traditions surrounding education in America. Paper: The Proverbial Permanent Record

November 5: Seda Gürses — Let's first get things done! On division of labor and practices of delegation in times of mediated politics and politicized technologies
October 29: Luke Stark — Discussion on whether “notice” can continue to play a viable role in protecting privacy in mediated communications and transactions given the increasing complexity of the data ecology and economy.
                 Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online
                 Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)
                 Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead

October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue 

September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
 

Spring 2014

April 30: Seda Gürses — Privacy is Security is a prerequisite for Privacy is not Security is a delegation relationship
April 23: Milbank Tweed Forum Speaker — Brad Smith: The Future of Privacy
April 16: Solon Barocas — How Data Mining Discriminates - a collaborative project with Andrew Selbst, 2012-13 ILI Fellow
March 12: Scott Bulua & Amanda Levendowski — Challenges in Combatting Revenge Porn


March 5: Claudia Diaz — In PETs we trust: tensions between Privacy Enhancing Technologies and information privacy law. The presentation is drawn from the paper “Hero or Villain: The Data Controller in Privacy Law and Technologies” with Seda Gürses and Omer Tene.

February 26: Doc Searls Privacy and Business

February 19: Report from the Obfuscation Symposium, including brief tool demos and individual impressions

February 12: Ira Rubinstein The Ethics of Cryptanalysis — Code Breaking, Exploitation, Subversion and Hacking
February 5: Felix Wu — The Commercial Difference, which grows out of a piece just published in the Chicago Forum called The Constitutionality of Consumer Privacy Regulation

January 29: Organizational meeting
 

Fall 2013

December 4: Akiva Miller — Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy? & Malte Ziewitz What does transparency conceal?
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace

November 6: Karen Levy — Beating the Box: Digital Enforcement and Resistance
October 23: Brian Choi — The Third-Party Doctrine and the Required-Records Doctrine: Informational Reciprocals, Asymmetries, and Tributaries
October 16: Seda Gürses — Privacy is Don't Ask, Confidentiality is Don't Tell
October 9: Katherine Strandburg — Freedom of Association Constraints on Metadata Surveillance
October 2: Joris van Hoboken — A Right to be Forgotten
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting


Spring 2013

May 1: Akiva Miller — What Do We Worry About When We Worry About Price Discrimination?
April 24: Hannah Bloch-Wehba and Matt Zimmerman — National Security Letters [NSLs]

April 17: Heather Patterson — Contextual Expectations of Privacy in User-Generated Mobile Health Data: The Fitbit Story
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA

April 3: Ira Rubinstein — Voter Privacy: A Modest Proposal
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day

March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
March 6: Mariana Thibes — Privacy at Stake, Challenging Issues in the Brazilian Context
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
 

Fall 2012

December 5: Martin French — Preparing for the Zombie Apocalypse: The Privacy Implications of (Contemporary Developments in) Public Health Intelligence
November 28: Scott Bulua and Catherine Crump — A framework for understanding and regulating domestic drone surveillance
November 21: Lital Helman — Corporate Responsibility of Social Networking Platforms
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
November 7: Sophie Hood — New Media Technology and the Courts: Judicial Videoconferencing
October 24: Matt Tierney and Ian Spiro — Cryptagram: Photo Privacy in Social Media
October 17: Frederik Zuiderveen Borgesius — Behavioural Targeting. How to regulate?

October 10: Discussion of 'Model Law'

October 3: Agatha Cole — The Role of IP address Data in Counter-Terrorism Operations & Criminal Law Enforcement Investigations: Looking towards the European framework as a model for U.S. Data Retention Policy
September 26: Karen Levy — Privacy, Professionalism, and Techno-Legal Regulation of U.S. Truckers
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data