Privacy Research Group

The Privacy Research Group is a weekly meeting of students, professors, and industry professionals who are passionate about exploring, protecting, and understanding privacy in the digital age.

Joining PRG

Because we deal with early-stage work in progress, attendance at meetings of the Privacy Research Group is generally limited to researchers and students who can commit to ongoing participation in the group. To discuss joining the group, please contact Tom McBrien. If you are interested in these topics, but cannot commit to ongoing participation in PRG, you may wish to join the PRG-All mailing list.
 
PRG Student Fellows—Student members of PRG have the opportunity to become Student Fellows. Student Fellows help bring the exciting developments and ideas of the Research Group to the outside world. The primary Student Fellow responsibility is to maintain an active web presence through the ILI student blog, reporting on current events and developments in the privacy field and bringing the world of privacy research to a broader audience. Fellows also have the opportunity to help promote and execute exciting events and colloquia, and even to present to the Privacy Research Group. Student Fellow responsibilities are a manageable and enjoyable addition to the regular meeting attendance required of all PRG members. The Student Fellow position is the first step for NYU students into the world of privacy research. Interested students should email Student Fellow Coordinator Tom McBrien with a brief (1-2 paragraph) statement of interest, or simply to ask for more information.


PRG Calendar

 

Spring 2021: via Zoom, Fridays 2:30-4:00pm
 

April 16:

April 9:

April 2:

March 26:

March 19:

March 12:

March 5:

February 26:

February 19:

February 12:

February 5:

January 29:

 

Fall 2020
 

December 4: Florencia Marotta-Wurgler & David Stein — Teaching Machines to Think Like Lawyers

     ABSTRACT: We will present the framework of a large-scale project we worked on this summer, in which a team of fourteen law students coded and annotated a large sample of privacy policies along over 100 dimensions to measure the extent to which such policies comply with the CCPA, the GDPR, and current US guidelines. We also sought to replicate the coding of other public databases of privacy policies used for machine learning analysis. In addition to presenting the framework, we will discuss the variables we track, our coding tools and annotation protocols, and some preliminary findings, in hopes of receiving feedback that we can incorporate into the project going forward.
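To give a concrete sense of what such an annotation framework involves, the sketch below shows in Python what a single record in a hand-coded compliance dataset of this kind might look like, along with a toy compliance tally. The field names and dimensions here are invented for illustration and are not the project's actual schema.

# Hypothetical record format for a privacy-policy annotation project.
from dataclasses import dataclass

@dataclass
class PolicyAnnotation:
    policy_id: str    # which privacy policy was coded
    dimension: str    # e.g., "right_to_deletion" (one of 100+ dimensions)
    regime: str       # "CCPA", "GDPR", or "US_guidelines"
    compliant: bool   # the annotator's compliance judgment
    annotator: str    # which student coder produced the label
    excerpt: str = "" # supporting text quoted from the policy

def compliance_rate(annotations, regime):
    """Share of annotated dimensions judged compliant under a given regime."""
    relevant = [a for a in annotations if a.regime == regime]
    return sum(a.compliant for a in relevant) / len(relevant) if relevant else 0.0

sample = [
    PolicyAnnotation("acme-2020", "right_to_deletion", "CCPA", True, "coder_03",
                     "You may request that we delete your personal information."),
    PolicyAnnotation("acme-2020", "data_sale_opt_out", "CCPA", False, "coder_07"),
]
print(f"CCPA compliance rate: {compliance_rate(sample, 'CCPA'):.0%}")

Structured records like these are also what make it possible to cross-check a project's labels against other public databases of annotated privacy policies.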

November 20: Andrew Weiner

     ABSTRACT: In the wake of a data breach, data processors and controllers race to figure out what happened, why it happened, and how to remediate the issues that led to the breach. This process is often the responsibility of technical analysts and investigators, engineers who work on the breached product or platform, and cybersecurity subject matter experts. When the breach contains personal information of a data subject, the race to complete the investigative process intensifies. While understanding and remediating a personal information data breach is obviously a high priority, the investigation process does not necessarily include a key question: Do we have to tell anyone about this whole shindig? The question of whether to notify parties external to the breached organization is not a decision just for policy or communications teams. Rather, a patchwork of worldwide data breach notification statutes and regulations creates legal obligations that lawyers must navigate to determine whether a data breach triggers data subject and/or regulator notification. Data breach notification laws have important goals -- giving data subjects transparency into who has their data and incentivizing organizations to take data protection and cybersecurity seriously. However, the quality and effectiveness of the laws dictate how well those goals are met. This presentation will discuss data breach notification laws' successes, shortcomings, and inconsistencies and propose improvements to the patchwork of laws.

November 13: Cancelled for Northeast Privacy Scholars Workshop

November 6: Mark Verstraete — Cybersecurity Spillovers

     ABSTRACT: The state of cybersecurity is notoriously in disarray. Data security incidents occur frequently and firms have little apparent motivation to increase the security that they provide. Yet this talk examines an interesting feature of the cybersecurity ecosystem; namely, that some clients of cloud storage receive a security windfall by virtue of other clients who use the same provider. We label this phenomenon "cybersecurity spillovers" and examine how these spillovers occur, the conceptual uncertainty surrounding these benefits, and their normative payoffs. To understand this phenomenon, this talk will canvass the unique structural features of cybersecurity that allow additional security to pass between clients. Here, the essential feature is the public nature of the cloud: many different clients use the same cloud infrastructure, some of those clients will need higher security, and the service provider has strong incentives to apply security improvements at the platform level rather than for specific clients. Finally, this talk addresses how this additional security may be compensated and what this means for security spillovers and externalities more generally. In closing, we gesture at the broad normative implications that follow from cybersecurity spillovers.

October 30: Ari Ezra Waldman — Privacy Law's Two Paths

     ABSTRACT: Privacy law is on a path to irrelevance. But it doesn't have to be that way. Both leading privacy laws today--the GDPR and the CCPA--reflect a "Twentieth Century Synthesis" (Britton-Purdy, Grewal, Kapczynski, and Rahman (forthcoming)) in which law is oriented toward neoliberal and managerial values like efficiency, productivity, and innovation (Cohen 2019). The GDPR and the CCPA explicitly shift regulatory responsibilities from government to regulated entities themselves, a form of "collaborative governance" that relies on best practices, compliance, codes of conduct, internal corporate structures, and assessments based on executive attestation (Kaminski 2018; Waldman 2020). Even the proposals in the U.S. Congress and in various state capitols aimed at enhancing privacy protections follow this path. We seem incapable of stepping outside the narrow Overton Window handed to us by the neoliberal consensus of the last 50 years. Why is that? One reason is that the social practice of privacy law itself is built to normalize corporate data extraction rather than rein it in. The social practice of privacy law is the detritus of privacy law on the ground. Indeed, law is best understood as a series of practices that endogenously construct legal rules from the ground up. The social practices of privacy law--everything from chief privacy offices to privacy impact assessments, audit trails to click-to-agree buttons, statements like "we care about your privacy" or "your privacy is important to us" to notifications about privacy policy changes--are performative acts that express what privacy law is and normalize it as what privacy law should be. Normalization is the social and psychological process through which common things come to be understood as acceptable, ordinary, and, ultimately, good. Put another way, the privacy laws we have today not only represent the Twentieth Century Synthesis, but they entrench it through a series of on-the-ground performances that construct our privacy-related identities and normalize privacy self-governance as the only possible outcome. The implication of this is profound: Scholars talk about needing the political will to shift to a regulatory system that reflects a political economy approach to governance. The privacy laws we have today are eroding that political will, among policymakers, privacy professionals, and even individuals.

October 23: Aileen Nielsen — Tech's Attention Problem

     ABSTRACT: The plasticity of human preferences and behavior is a readily accepted proposition in economics and psychology. Within this domain of plasticity is the experimentally demonstrated fact that humans likely have only a certain number of “mental cycles” per day with which to make decisions and defend their interests. This poses particularly compelling problems related to human privacy and autonomy in an era where we increasingly spend our time in digital environments that are privately owned and engineered to maximize the utility of the entities who own that infrastructure. This work makes an argument in four parts in response to the private infrastructures that drive our digital attention economies. First, I discuss how human attention is under attack in the digital sector, fueled by scientific knowledge from psychology and economics, and also show how the resulting attention harms routinely fail to achieve legal recognition and protection. Second, I propose a taxonomy of metrics, both scientifically justified and conceptually simple, to operationalize attention in common digital products. Third, I experimentally measure likely marketplace reactions to these metrics in a realistic scenario relating to mobile apps. Finally, I examine a variety of regulatory and policy measures that could be implemented with such attention metrics. In proceeding in these parts, I make the case that there are practical but insufficiently explored options to quantify and regulate pervasive consumer harms in digital attention economies.
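The following toy calculation illustrates the kind of conceptually simple attention metric the abstract describes: sessionizing timestamped app-usage events and reporting session counts and active minutes. The event format and the five-minute inactivity threshold are assumptions made here for illustration, not Nielsen's actual metrics.

# Toy attention metrics from timestamped usage events (illustrative only).
from datetime import datetime, timedelta

def sessionize(timestamps, gap=timedelta(minutes=5)):
    """Group sorted event timestamps into sessions split at inactivity gaps."""
    sessions, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] > gap:
            sessions.append(current)
            current = [t]
        else:
            current.append(t)
    sessions.append(current)
    return sessions

events = [datetime(2020, 10, 23, 9, 0) + timedelta(minutes=m)
          for m in (0, 1, 2, 30, 31, 90)]
sessions = sessionize(events)
active = sum((s[-1] - s[0] for s in sessions), timedelta())
print(f"{len(sessions)} sessions, {active.seconds // 60} active minutes")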

October 16: Caroline Alewaerts — UN Global Pulse

     ABSTRACT: UN Global Pulse is the UN Secretary-General’s initiative on big data and artificial intelligence (AI) for sustainable development, humanitarian action, and peace. It was established a decade ago based on a recognition that digital data offer opportunities to gain a better understanding of changes in human well-being, and to get real-time feedback on how well policy responses are working. UN Global Pulse has since been expanding the boundaries of its research and policy work, ensuring close alignment with the transformative innovation efforts of the Executive Office of the Secretary-General in which it operates. In this presentation, I will provide an overview of UN Global Pulse technology and policy work both within and outside of the UN, and how it is working to accelerate the discovery, development and adoption of privacy-protective and rights-based big data and AI applications that can transform how we operate and help communities everywhere achieve the Sustainable Development Goals (SDGs). 

October 9: Salome Viljoen — Data as a Democratic Medium: From Individual to Relational Data Governance

     ABSTRACT: Discussions about personal data often involve claims (explicit or implicit) regarding what data is or is “like,” why we should care about its collection, and what we should do about datafication—the transformation of information about people into a commodity. This Article evaluates the legal merit of these claims via their empirical and normative consequences. To do so, it engages with two enduring problems vexing U.S. data governance. First, the “sociality problem”: how can data governance law better account for the downstream social effects of data collection? Second, the “legitimacy problem”: how can data governance law distinguish legitimate and illegitimate downstream uses without relying on the failed mechanism of individual notice and choice? Part One documents the significance of data processing for the digital economy and evaluates how the predominant legal regimes that discipline data collection—contract and privacy law—code data as an individual medium. This conceptualization is referred to throughout the Article as “data as individual medium” (DIM). Part Two explores the disconnect between DIM and how the data political economy produces social value and social risk. First it shows that data’s capacity to transmit social relational meaning is central to how data produces economic value and social risk, yet is legally irrelevant under DIM. Part Three evaluates two prominent proposals that have emerged in response to datafication: propertarian and dignitarian reforms to data governance. While both approaches have merit, because they conceive of data as an individual medium they are unable to resolve either the sociality problem or the legitimacy problem. Part Four proposes an alternative approach: data as a democratic medium (DDM). DDM fosters data governance that is attentive to data’s social effects as well as to the purposes that drive data production and the conditions under which it occurs. Part Four concludes by outlining key principles and directions for what DDM regimes could look like in practice. 

October 2: Gabe Nicholas — Surveillance Delusion: Lessons from the Vietnam War

     ABSTRACT: Surveillance systems allow states to “see” into the lives of individuals. Sometimes that vision is an illusion — other times it is a delusion. In this paper, I offer a case study of one such delusional surveillance system: Operation Igloo White, a sensor-software system built in 1968 by the US Air Force to track and bomb North Vietnamese supply lines in the Laotian jungle. Through technical documentation, declassified military histories, and original interviews with veterans, I argue that the Air Force overlooked, ignored, or hid a preponderance of evidence that Igloo White failed to accurately “see” what was happening on the ground. The US government tricked itself, willfully or otherwise, into believing its surveillance system was effective. I call this phenomenon surveillance delusion. State surveillance systems are particularly susceptible to delusion because unlike surveillance capitalist systems, they have no profit motive to be accurate. As James Scott argues, modern states use surveillance to make citizens “legible” in order to govern society by scientific principles. This imperative depends not on accurate observation but the stringent, invisible categorization of individuals. Harms of surveillance delusion are thus externalized to the surveilled. Part One of this paper defines surveillance delusion and contextualizes it in the broader surveillance studies literature on dataism and datafication. Part Two gives a case study of Operation Igloo White and describes three areas in which it failed to “see”: data integrity, or how well a sensor measures an intended ground truth; data quality, or how well a metric works for its intended purpose; and data politics, or how control over data allocates power. Part Three explains how the Air Force deluded itself about these blindnesses. Part Four reconsiders three modern domestic surveillance systems through the lens of delusion — app-based contact tracing, predictive policing, and the US-Mexico border wall.

September 25: Angelina Fisher & Thomas Streinz — Confronting Data Inequality

     ABSTRACT: Data conveys significant social, economic, and political power. For this reason, unequal control over data – a pervasive form of digital inequality – is a problem for economic development, human agency, and collective self-determination that needs to be addressed. This paper takes some steps in this direction by analyzing the extent to which extant law facilitates unequal control over data and by suggesting ways in which legal interventions might lead to more equal control over data. The paper distinguishes between unequal control over data as an asset on the one hand and unequal control over the infrastructures that generate, process, store, transfer, and use data on the other hand. We hypothesize that the former is a function of the latter. Existing law tends to ignore the salience of infrastructural control over data and seeks to regulate data as an object to be transferred, protected, and shared. Private law technologies are dominant in this regard while states increasingly bind themselves under international economic law to not redistribute or localize control over data. While there are no easy solutions to the problem of data inequality, we suggest that retaining flexibility to experiment with different approaches, demanding enhanced transparency, pooling of data and bargaining power, and differentiated and conditional access to data mechanisms may help in confronting data inequality going forward. We begin the paper by considering how data is conceptualized. Here we highlight two broad discourses: one sees data as an asset or resource that creates value for different entities (e.g., enterprises, communities, countries, etc.). The other sees data as not “a natural kind” but rather a relational and contextual concept that is shaped by assemblages of digital infrastructures, social and organizational practices, histories and ideologies, and legal instruments, practices, and institutions. We bring these two discourses together to (a) illustrate the relationship between data (as an output) and the infrastructures that constitute it (“data infrastructures”) and (b) examine specific inequalities that flow from unequal control over data infrastructures. We highlight the outsized role of commercial enterprises in control over data infrastructures with reference to e-commerce, communication, and IoT data management platforms. We also consider the role that cloud computing plays in centralizing infrastructural control. (Part I) Having presented the problematique with which the paper is concerned, we turn to the role of legal technologies. Here we consider (a) to what extent different legal regimes and instruments in their current approaches to regulation of data facilitate, entrench, or simply ignore infrastructural control and (b) how law can be deployed to address the type of data inequality we identify in the paper. We posit that to do the latter, the regulation of data through law needs to move away from conceptualizing data-as-an-asset and instead focus on regulating data infrastructures. (Part II) In the concluding part, we put forth some interventions that might usefully be deployed to address data inequality. Although proposals in this section apply to a variety of actors and contexts, our primary audience here is policymakers in developing countries. Our suggestions include a mix of legal, technical, and political interventions, urging contextual, experimental, and flexible approaches. These include requirements for transparency, pooling of political power, building up bottom-up data governance arrangements that provide differentiated and conditional access to data, and leveraging international organizations as international data governors. (Part III)

September 18: Danny Huang — Watching IoTs That Watch Us: Studying IoT Security & Privacy at Scale

     ABSTRACT: Many consumers today are increasingly concerned about IoT security and privacy. There is much media hype about home cameras being hacked or voice assistants eavesdropping on conversations. However, it is unclear exactly what the security and privacy threats are, how prevalent they are, and what their implications are for users, policymakers, and manufacturers, because there is no reliable large-scale data from the wild. In my talk, I'll describe a new method to systematically collect a large-scale real-world dataset of IoT device network traffic. I'll show you examples of security and privacy threats we identified on various IoT devices, along with a discussion of the potential legal issues.
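As a rough illustration of what per-device traffic collection involves, the sketch below sniffs packets on a local network and tallies, for each source device, the remote hosts it contacts. This is a minimal sketch under stated assumptions, not Huang's actual pipeline: it assumes the third-party scapy library, root privileges, and a network you are authorized to monitor.

# Tally which remote hosts each local device talks to (illustrative sketch).
from collections import Counter, defaultdict
from scapy.all import IP, sniff  # pip install scapy; capture needs root

contacts = defaultdict(Counter)  # source IP -> Counter of destination IPs

def record(pkt):
    if IP in pkt:
        contacts[pkt[IP].src][pkt[IP].dst] += 1

sniff(prn=record, count=1000, store=False)  # capture 1,000 packets
for device, destinations in contacts.items():
    print(device, destinations.most_common(3))

A real study would additionally label which device is which, resolve destinations to services, and inspect payloads for personal information, which is where the legal questions the talk raises begin.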

September 11: Seb Benthall — Accountable Context for Web Applications

     ABSTRACT: We consider the challenge of accountable privacy, fairness, and ethics for web applications. We begin with a case for studying specific software architectures. Computer science paradigms have come under disciplinary criticism from STS and engineering disciplines. These criticisms are defused by introducing realistic and general models from software engineering. We find some of the criticisms of the computer science literature, especially those against the use of formalism, to be misplaced, and instead better understood as limits imposed by the design of the web and web services. The design of the web is to maximize connectivity and minimize control: this entails that web resources are exposed indiscriminately to myriad social contexts. The design of web services (e.g. REST) is to allow for "anarchic scalability" and "independent deployability" across multiple, and shifting, organizational boundaries. We find this networked, relational, inter-organizational nature of web services to be an otherwise unaddressed reason for the poor accountability of computer systems. We propose a system of labeling and documentation for web applications that would remedy this opacity.
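To make the labeling idea concrete, here is a hypothetical example of what a machine-readable label for a single web resource might record; every field name is invented here for illustration rather than taken from the paper.

# A hypothetical, machine-readable context label for one web resource.
import json

resource_label = {
    "resource": "https://shop.example.com/api/orders",
    "operator": "Example Shop, Inc.",
    "social_context": "retail purchase",            # context the resource serves
    "upstream_services": [                          # organizational boundaries crossed
        {"provider": "payments.example-psp.com", "purpose": "payment processing"},
        {"provider": "cdn.example-cloud.net", "purpose": "content delivery"},
    ],
    "data_categories": ["contact", "payment", "purchase history"],
    "accountability_contact": "privacy@shop.example.com",
}

print(json.dumps(resource_label, indent=2))

Documentation of this kind would expose exactly the cross-organizational flows that, on the abstract's account, current web architecture leaves opaque.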


Spring 2020

April 29: Aileen Nielsen — "Pricing" Privacy: Preliminary Evidence from Vignette Studies Inspired by Economic Anthropology

     ABSTRACT: Despite the widespread existence and economic significance of digital markets in which personal data is bought, sold, and collected by commercial firms, it is far from clear that a market pricing approach to privacy best describes current attitudes towards personal data. Likewise, it's not clear that any solution to existing privacy problems represented by such data markets will be aptly addressed through the provision of legal rights to personal data where such rights are framed in terms of economic control and entitlements. Here, I present preliminary pilot studies inspired by economic anthropology to explore whether and to what degree market paradigms may be unduly limiting the conversation regarding legal remedies to privacy problems.

April 22: Ginny Kozemczak — Dignity, Freedom, and Digital Rights: Comparing American and European Approaches to Privacy

     ABSTRACT: Federal courts in the U.S. generally agree that Internet Protocol (IP) addresses are not subject to Fourth Amendment protection. An Internet user’s 32-bit anonymous number is largely considered unprotectable under the Third-Party Doctrine, under which there is no requirement that the government obtain a warrant. At the same time, the European Court of Justice and the European Court of Human Rights have issued decisions holding that, in certain contexts, IP addresses are protectable, private data for which a warrant is required – relying on Article 7 of the Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights respectively. This paper proposes a new framework to explain why American and European privacy protections of IP addresses diverge, moving beyond a dichotomous model of dignity and liberty, as some comparative privacy law literature suggests. Instead, I present a model that attempts to capture societal expectations of how privacy laws ought to function in relation to governments and the private sector.

April 15: Privacy and COVID-19 Policies

     ABSTRACT: As coronavirus has spread across the globe, how should policymakers and citizens think about privacy issues? The student fellows will start with a brief review of different policy and tech responses to the virus followed by some framing questions and then an open discussion. Questions will include: Is there a necessary tradeoff between the efficacy of public health responses and privacy? How do we think about and evaluate tradeoffs? How do we balance privacy against freedom of movement and other values? What are appropriate transnational responses?

April 8: Ira Rubinstein — Urban Privacy

     ABSTRACT: As Alan Westin observed in Privacy and Freedom, “Anonymity occurs when the individual is in public places or performing public acts but still seeks, and finds, freedom from identification and surveillance.” Most accounts of anonymity so understood are submerged within discussions of the Fourth Amendment under the rubric of “privacy in public.” These accounts reconsider the “plain view” and “third party” doctrines in light of the evolution from sense enhancing technologies to smartphones, GPS, and the Internet of Things, with the goal of extending constitutional protections to new forms of surveillance. Missing from these accounts is any consideration of public places and their character or function in city life—that is, how the public realm works, what it contributes to urban experience, and why cities have (or should have) an interest in preserving the public realm against a variety of countervailing forces including surveillance. In the public realm, strangers of diverse backgrounds and behavior encounter and learn to tolerate one another and this occurs (at least in part) because they are free from identification and surveillance. This Essay develops an account of privacy in public that emphasizes the public realm and people’s efforts to manage the boundaries between themselves and the strangers they encounter when they are out in public. This is what I mean by “urban privacy” and I will argue that the need for urban privacy provides city government with a separate and distinct set of interests for resisting surveillance as compared with the usual Fourth (and First) Amendment interests cited in most discussions. In two earlier papers, I have analyzed “privacy localism” and (with Bilyana Petkova) cities as privacy activists and data stewards. This Essay is part of a larger project to understand privacy in the city. It brings together two closely aligned intellectual traditions that have surprisingly little scholarly interaction: urban studies (sociology and ethnography) analyzing the city as a social order composed of multiple realms or territories with their own norms and informal rules (Goffman, Lofland, Jacobs, Sennett); and privacy as boundary management (beginning with Altman and later refined by Nippert-Eng, Cohen and Kaminski). This Essay proceeds as follows: first, I analyze the public realm as a social territory (descriptive) that is generative of urban diversity and tolerance (normative). Next, I reconstruct the idea of privacy in public as a form of boundary management. This is followed by a discussion of smart city surveillance technologies (IoT, public Wi-Fi, smartphone apps, policing devices, big data analytics), their uses (optimization, efficiency, surveillance), and their impact on city life. Finally, I consult multiple sources to determine if there is any evidence of a detrimental impact on the public realm. This includes empirical studies of CCTV in London (the city that symbolizes the urban panopticon) as well as indirect evidence in accounts of smart cities, in popular culture, and in preliminary reviews of the Chinese social credit system. This Essay then draws some tentative conclusions.

April 1: Thomas Streinz — Data Governance in Trade Agreements: Non-territoriality of Data and Multi-Nationality of Corporations

     ABSTRACT: The US has pioneered provisions in recent ‘trade’ agreements promoting the cross-border transfer of data (‘free data flows’) while constraining states’ ability to require the use of domestic computing facilities (‘data localization’). These provisions feature in the Trans-Pacific Partnership (TPP) agreement, from which the US later withdrew but that the remaining eleven members revived as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), in the US-Mexico-Canada agreement (USMCA), and, most recently, in the dedicated Japan-US Digital Trade Agreement (JUSDTA). While these agreements are ostensibly only binding for the Parties that have signed and ratified them, in reality, multi-national corporations will be able to avail themselves of the prerequisite corporate nationality to force countries towards uniform and open transnational data governance regimes - in contrast to the EU's preferred model of differentiated transnational data governance (distinguishing between countries with and without an "adequate" level of data protection). The paper discusses the implications for data governance in trade agreements and for global data governance more broadly. Thanks to all of you for indulging me with another paper on data governance and international economic law. I very much enjoyed the PRG discussion of my paper on TPP's model for the global digital economy (published as "Digital Megaregulation Uncontested?" in our volume "Megaregulation Contested: Global Economic Ordering After TPP"). This paper is a continuation of this prior work and also heavily influenced by our Global Data Law project in the Guarini Global Law & Tech initiative with Benedict Kingsbury and Angelina Fisher.

March 25: Christopher Morten — The Big Data Regulator, Rebooted: Why and How the FDA Can and Should Disclose Confidential Data on Prescription Drugs

     ABSTRACT: Medicines are complex products, and it is often extraordinarily difficult to know whether they cure or kill. The FDA holds an enormous reservoir of data on these medicines, data that sheds light on that precise question, and yet the agency currently discloses only a trickle to researchers, doctors, patients, and the public at large. This paper explains why and how the FDA can and should “reboot” its disclosure rules to disclose much more data on safety and efficacy of prescription drugs, to protect patients, advance science, and safeguard democracy. Though the need for this data is clearer than ever, last Term a Supreme Court case threatened the viability of one existing tool through which independent researchers have historically obtained clinical data from the FDA: Freedom of Information Act (FOIA) requests. We present a wealth of new evidence about the urgency of the problem together with a novel argument for proactive data disclosure—what we term “data publicity”—that can be achieved without any legislative reform. We provide a roadmap to data publicity that navigates the two main challenges to data sharing: protecting the privacy of individuals who participate in trials and defeating the claims that companies make that this data is and should remain confidential. Along the way, we show that trade secrecy law does not create an impossible barrier to disclosure, contrary to the view of the pharmaceutical industry. Our analysis illuminates a broader problem that is woven through the regulatory state in our information age: corporations urge us to buy their products and services because they are technologically innovative, yet increasingly they hide the inner workings of those technologies from us. The model we offer here could, we suggest, become a template for other regulatory agencies to permit meaningful democratic oversight of industry and revitalize the agencies themselves in an age of information capitalism.

March 4: Lilla Montanagni — Regulation 2018/1807 on the Free Flow of Non-Personal Data: Yet Another Piece in the Data Puzzle in the EU?

     ABSTRACT: In light of the importance of data for the European economy, the EU institutions have undertaken various actions towards the creation of comprehensive and predictable legal rules for the free flow, access and use of data. One of the latest additions in this context is Regulation 2018/1807 on the free flow of non-personal data, which bans localization obligations and encourages portability for non-personal data. The attempt to strengthen the competitiveness of the European Union’s data economy by adopting hard and soft law instruments is, however, creating an extremely complex regulatory environment characterized by various shortcomings. Such over-complication of the system runs the risk of producing a regime that would hardly function in practice. As a result, the emerging system of the much-acclaimed common space for data is hampered by the necessity of complying with a framework of rules that are not sufficiently coordinated. In the paper I am working on, I try to make sense of how the Regulation on the free flow of non-personal data fits into the current overall debate on a common space for data by considering its consistency against other instruments adopted by the European institutions, such as the GDPR. I also try to verify how the rules on free flow interact with those on access and use of data, which have become a priority in the EU Commission's recent European strategy for data (adopted on 19 February 2020).

February 26: David Stein — Flow of Data Through Online Advertising Markets

     ABSTRACT: "I'll be presenting on the flow of data through online advertising markets, how advertisers use that data to generate revenue, and what some of the emerging trends are in the space. The presentation is from a technologist's perspective and contains no normative claims. I'm hoping to spark a discussion on which, if any, parts of the "ad-tech" ecosystem have avoided regulatory and scholarly scrutiny."

February 19: Seb Benthall — Towards Agent-Based Computational Modeling of Informational Capitalism

     ABSTRACT: Interlinked concerns of privacy, security, fairness, accountability, and transparency motivate active research in both technical design and public policy. Whereas the design of information systems and state and corporate governance were once separable concerns, today technology is part of governance. Theories of society or the economy that do not include the role of computation are incomplete, as are theories of the implications of technology that are removed from socioeconomic context. This [very preliminary!] research reviews the state of agent-based computational modeling as a social scientific method, one in which computation is endogenous to the model itself. Such models may be used to better understand information capitalist institutions such as platforms and data refineries. This talk considers the promise and challenges of such an approach.
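As a flavor of the method, the toy model below is a minimal agent-based simulation written for this summary, not drawn from the talk: consumers repeatedly choose between two platforms in proportion to each platform's accumulated data, and the chosen platform's data stock grows, so concentration emerges endogenously from the feedback loop. All parameters are arbitrary.

# Toy agent-based model: data-driven platform concentration (illustrative).
import random

random.seed(0)
data = {"platform_a": 1.0, "platform_b": 1.0}  # accumulated data stocks

for step in range(10_000):
    # Each consumer picks a platform with probability proportional to its
    # data stock (a stand-in for data-driven service quality).
    total = data["platform_a"] + data["platform_b"]
    choice = ("platform_a" if random.random() < data["platform_a"] / total
              else "platform_b")
    data[choice] += 0.01  # the interaction generates data for that platform

total = sum(data.values())
print({name: round(stock / total, 3) for name, stock in data.items()})
# With these dynamics, one platform typically pulls far ahead of the other.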

February 12: Yafit Lev-Aretz & Madelyn Sanfilippo — One Size Does Not Fit All: Applying a Single Privacy Policy to (too) Many Contexts

     ABSTRACT: The Walt Disney Company’s holdings span film, television, music, radio, gaming, finance, theater, consumer goods, property, amusement parks, and digital media industries. As of 2019, it is the largest media conglomerate in the world. Its content, from Disney Junior to Marvel to ESPN to ABC News, reaches diverse populations on diverse platforms, from radio and television to Hulu and social media, as well as on Broadway or at resorts all over the world from Orlando to Shanghai. Yet a single privacy policy governs the vast majority of user information flows associated with this commercial empire. This is not an anomaly. Various entities apply a single privacy policy across many modes of interaction, from commercial publishers applying one policy for their websites and apps, including for subsidiaries for children or religious populations, to the Red Cross applying a single policy to donors, those learning about disasters, and victims applying for aid during crises. There are a variety of legitimate reasons why this might be. From a business perspective, this is logical as mergers and acquisitions have dramatically consolidated the media landscape. The use of a one-size-fits-all privacy policy is also common in major firms with many subsidiaries, as with Alphabet and Google, and when third parties appropriate privacy policies of established market players (notwithstanding the copyright infringement implications). The practice also extends to various forms of interaction across many platforms as firms pursue user and consumer outreach. In this application, there are arguments to be made for consistent privacy practices by firms, across platforms. In contrast to legitimate business purposes, there are legitimate objections and concerns about applying a single privacy policy across diverse constituencies and forms of engagement, particularly when protected classes, such as children, or sensitive contexts, such as health or education, are involved. Furthermore, there are real challenges and consequences that result from a one-size-fits-all privacy policy approach. Specifically, the use of general and overbroad terms results in little to no guidance on how personal information is collected in specific interactions and used in specific contexts. In other words, while privacy policies are often criticized for being too long and incomprehensible to a non-lawyer, many of these policies are too general to be at all informative in specific contexts, even if a user carefully read the fine print. This project investigates applications of a single privacy policy to different user interactions across various platforms and contexts. Cases are introduced to illustrate the prevalence of the practice as well as the different types of associated contextual challenges. We characterize the specific challenges associated with applying a single policy to: different types of users, including to protected classes; commercial and social interactions; different types of platforms; data collection for both online and offline contexts; and passive and active engagement. Our analysis goes beyond considerations of privacy compliance to show the importance of contextual policies in a world that moves toward products and services consolidation.

February 5: Jake Goldenfein & Seb Benthall — Data Science and the Decline of Liberal Law and Ethics

     ABSTRACT: The advent and application of data science has opened a new domain of normative inquiry. Often this inquiry is framed in terms of liberal ethics in places, such as Western democratic states, where liberalism is the basis of legal institutions. Liberalism, as a political and ethical framework, and as expressed in law, theorizes that rational individuals, endowed with private autonomy and property, can achieve legitimate distributional outcomes by exchanging goods in free and public markets. Three ways liberal data science ethics attempts to solve the ethical challenges of data science include: reinforcing privacy with data protection, preventing the undermining of individual rationality through manipulation, and considering the relationship between data and property. However, data science has resulted in a normative crisis because it contradicts assumptions of liberalism. Data science reveals that rationality is bounded and limited by data access, challenging liberalism’s concept of individual moral subjectivity. Corporate rationalities employ data science to create privatized markets that defy liberal market theory, with questionable distributional legitimacy. We argue that data science therefore necessitates ethics and regulation beyond what liberal theory can offer. 

January 29: Albert Fox Cahn — Reimagining the Fourth Amendment for the Mass Surveillance Age

     ABSTRACT: Fourth Amendment doctrine plays a diminishing role in protecting privacy in the mass surveillance age, as technological changes outstrip doctrinal advancements, and greater reliance is placed on statutory, regulatory, and private sector privacy protections.  Crucially, these technological advancements have eroded the most potent historical barrier to surveillance, cost. As economically-driven particularity fades, doctrinally-driven particularity must increase to maintain the status quo level of privacy protections.  I assert that in expanding constitutional protections, we will need to change one of the unspoken assumptions of prior holdings.  Historically, Fourth Amendment holdings delineate searches based on whether or not a type of surveillance required the unique force of the state (e.g., physical searches, wiretaps, etc.). Doctrines like the Third-Party Doctrine and Plain View Doctrine curtail warrant requirements based on whether or not a member of the public could have accomplished the alleged search without state action. But I claim the Court’s 2018 holding in Carpenter v. U.S. is an inflection point where we see Fourth Amendment protections extend to searches that could be carried out by the general public.  These changes necessitate a fundamental shift in doctrine, and I suggest that potential models could be (1) reasonable expectation part II, which is triggered by use, not collection, of data; (2) Minimization; (3) Differentiating counter-terrorism use and admissibility at trial; (4) Expanded Title III-style requirements for invasive technologies; and (5) categorical bans for highly-invasive technologies.

January 22: Ido Sivan-Sevilla — Europeanization on Demand? The EU's Cybersecurity Certification Regime Between the Rationale of Market Integration and the Core Functions of the State

     ABSTRACT: The literature on EU integration distinguishes between market integration and the integration of core state powers. Whereas market integration emphasizes joint gains from harmonized trade settings and usually settles the shape of European regulation based on the largest common multiple, with a decisive role for the EU's supranational institutions, the integration of core state powers involves contested mobilization of national capacities to the EU level, capacities derived from the state's monopoly on national security, defense, coercion, and taxation. For such integration, state elites are expected to prefer intergovernmental over supranational arrangements with a narrowly defined EU mandate. For the integration of cybersecurity certification, however, it is unclear what type of ‘post-integration’ policy design to expect. The security certification of products and infrastructures is both a market integration issue - creating a market for certified products in the age of the ‘Internet of Things’ (e.g. certifying smart meters) - and an integration of core state powers, mobilizing the capacity to set standards and certify nationally sensitive infrastructures (e.g. smart power grids). Therefore, this study asks (1) how and to what extent has the cybersecurity certification regime been integrated, and (2) what explains considerable differences in such integration across different regime components? Through a process-tracing analysis based on 40 policy documents and 18 interviews, this paper deconstructs EU cybersecurity certification into standardization, accreditation, certification, and evaluation components; studies the designed multi-level interactions for each of these components; analyzes and labels each of the regime components according to its national, intergovernmental, or supranational nature; and explains how, and to what extent, classical theories of EU integration - supranationalism (through cultivated and functional spillovers) and liberal intergovernmentalism - explain how cybersecurity certification has been ‘Europeanized.’ I find that the chosen policy design for the different regime components provides functional and political solutions that suggest a ‘Europeanization on Demand’ model, which allows member states to closely control and limit the extent of integration to specific economic sectors.

Fall 2019

December 4: Ari Waldman — Discussion on Proposed Privacy Bills
November 20: Margarita Boyarskaya & Solon Barocas [joint work with Hanna Wallach] — What is a Proxy and why is it a Problem?
November 13: Mark Verstraete & Tal Zarsky — Data Breach Distortions
November 6: Aaron Shapiro — Dynamic Exploits: Calculative Asymmetries in the On-Demand Economy
October 30: Tomer Kenneth — Who Can Move My Cheese? Other Legal Considerations About Smart-Devices
October 23: Yafit Lev-Aretz & Madelyn Sanfilippo — Privacy and Religious Views
October 16: Salome Viljoen — Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought
October 9: Katja Langenbucher — Responsible A.I. Credit Scoring
October 2: Michal Shur-Ofry — Robotic Collective Memory   
September 25: Mark Verstraete — Inseparable Uses in Property and Information Law
September 18: Gabe Nicholas & Michael Weinberg — Data, To Go: Privacy and Competition in Data Portability 
September 11: Ari Waldman — Privacy, Discourse, and Power


Spring 2019

April 24: Sheila Marie Cruz-Rodriguez — Contractual Approach to Privacy Protection in Urban Data Collection
April 17: Andrew Selbst — Negligence and AI's Human Users
April 10: Sun Ping — Beyond Security: What Kind of Data Protection Law Should China Make?
April 3: Moran Yemini — Missing in "State Action": Toward a Pluralist Conception of the First Amendment
March 27: Nick Vincent — Privacy and the Human Microbiome
March 13: Nick Mendez — Will You Be Seeing Me in Court? Risk of Future Harm, and Article III Standing After a Data Breach
March 6: Jake Goldenfein — Through the Handoff Lens: Are Autonomous Vehicles No-Win for Users
February 27: Cathy Dwyer — Applying the Contextual Integrity Framework to Cambridge Analytica
February 20: Ignacio Cofone & Katherine Strandburg — Strategic Games and Algorithmic Transparency
February 13: Yan Shvartshnaider — Going Against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis
January 30: Sabine Gless — Predictive Policing: In Defense of 'True Positives'


Fall 2018

December 5: Discussion of current issues
November 28: Ashley Gorham — Algorithmic Interpellation
November 14: Mark Verstraete — Data Inalienabilities
November 7: Jonathan Mayer — Estimating Incidental Collection in Foreign Intelligence Surveillance
October 31: Sebastian Benthall — Trade, Trust, and Cyberwar
October 24: Yafit Lev-Aretz — Privacy and the Human Element
October 17: Julia Powles — AI: The Stories We Weave; The Questions We Leave
October 10: Andy Gersick — Can We Have Honesty, Civility, and Privacy Online? Implications from Evolutionary Theories of Animal and Human Communication
October 3: Eli Siems — The Case for a Disparate Impact Regime Covering All Machine-Learning Decisions
September 26: Ari Waldman — Privacy's False Promise
September 19: Marijn Sax — Targeting Your Health or Your Wallet? Health Apps and Manipulative Commercial Practices
September 12: Mason Marks — Algorithmic Disability Discrimination
 

Spring 2018

May 2: Ira Rubinstein — Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
April 25: Elana Zeide — The Future Human Futures Market
April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
April 11: John Nay — Natural Language Processing and Machine Learning for Law and Policy Texts
April 4: Sebastian Benthall — Games and Rules of Information Flow
March 28: Yan Shvartzshnaider and Noah Apthorpe — Discovering Smart Home IoT Privacy Norms using Contextual Integrity
February 28: Thomas Streinz — TPP’s Implications for Global Privacy and Data Protection Law
February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
February 7: Madeline Bryd and Philip Simon — Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
January 31: Madelyn Sanfilippo — Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks
January 24: Jason Schultz and Julia Powles — Discussion about the NYC Algorithmic Accountability Bill


Fall 2017

November 29: Kathryn Morris and Eli Siems — Discussion of Carpenter v. United States
November 15: Leon Yin — Anatomy and Interpretability of Neural Networks
November 8: Ben Zevenbergen — Contextual Integrity for Password Research Ethics?
November 1: Joe Bonneau — An Overview of Smart Contracts
October 25: Sebastian Benthall — Modeling Social Welfare Effects of Privacy Policies
October 18: Sue Glueck — Future-Proofing the Law
October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
October 4: Finn Brunton — 'The Best Surveillance System we Could Imagine': Payment Networks and Digital Cash
September 27: Julia Powles — Promises, Polarities & Capture: A Data and AI Case Study
September 20: Madelyn Rose Sanfilippo AND Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
September 13: Ignacio Cofone — Anti-Discriminatory Privacy
 

Spring 2017

April 26: Ben Zevenbergen — Contextual Integrity as a Framework for Internet Research Ethics
April 19: Beate Roessler — Manipulation
April 12: Amanda Levendowski — Conflict Modeling
April 5: Madelyn Sanfilippo — Privacy as Commons: A Conceptual Overview and Case Study in Progress
March 29: Hugo Zylberberg — Reframing the fake news debate: influence operations, targeting-and-convincing infrastructure and exploitation of personal data
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa — Discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial.
March 8: Ira Rubinstein — Privacy Localism
March 1: Luise Papcke — Project on (Collaborative) Filtering and Social Sorting
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) — Privacy and Innovation
February 15: Argyri Panezi — Academic Institutions as Innovators but also Data Collectors: Ethical and Other Normative Considerations
February 8: Katherine Strandburg — Decisionmaking, Machine Learning and the Value of Explanation
February 1: Argyro Karanasiou — A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
January 25: Scott Skinner-Thompson — Equal Protection Privacy
 

Fall 2016

December 7: Tobias Matzner — The Subject of Privacy
November 30: Yafit Lev-Aretz — Data Philanthropy
November 16: Helen Nissenbaum — Must Privacy Give Way to Use Regulation?
November 9: Bilyana Petkova — Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
November 2: Scott Skinner-Thompson — Recording as Heckling
October 26: Yan Shvartzshnaider — Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
October 19: Madelyn Sanfilippo — Privacy and Institutionalization in Data Science Scholarship
October 12: Paula Kift — The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform
October 5: Craig Konnoth — Health Information Equity
September 28: Jessica Feldman — The Amidst Project
September 21: Nathan Newman — UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
September 14: Kiel Brennan-Marquez — Plausible Cause
 

Spring 2016

April 27: Yan Shvartzshnaider — Privacy and IoT AND Rebecca Weinstein — Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken — Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gürses]

April 13: Florencia Marotta-Wurgler — Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)

April 6: Ira Rubinstein — Big Data and Privacy: The State of Play

March 30: Clay Venetis — Where is the Cost-Benefit Analysis in Federal Privacy Regulation?

March 23: Daisuke Igeta — An Outline of Japanese Privacy Protection and its Problems AND Johannes Eichenhofer — Internet Privacy as Trust Protection

March 9: Alex Lipton — Standing for Consumer Privacy Harms

March 2: Scott Skinner-Thompson — Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]

February 24: Daniel Susser — Against the Collection/Use Distinction

February 17: Eliana Pfeffer — Data Chill: A First Amendment Hangover

February 10: Yafit Lev-Aretz — Data Philanthropy

February 3: Kiel Brennan-Marquez — Feedback Loops: A Theory of Big Data Culture

January 27: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
 

Fall 2015

December 2: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race AND Kiel Brennan-Marquez — Spokeo and the Future of Privacy Harms
November 18: Angèle Christin — Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 11: Joris van Hoboken — Privacy, Data Sovereignty and Crypto
November 4: Solon Barocas and Karen Levy — Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton — Of Fembots and Men: Privacy Insights from the Ashley Madison Hack
October 21: Paula Kift — Human Dignity and Bare Life: Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and Nizan Geslevich Packin — Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser — What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin — Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé — Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson — Performative Privacy
September 9: Kiel Brennan-Marquez — Vigilantes and Good Samaritan
 

Spring 2015

April 29: Sofia Grafanaki — Autonomy Challenges in the Age of Big Data
                 David Krone — Compliance, Privacy and Cyber Security Information Sharing
                 Edwin Mok — Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing
                 Dan Rudofsky — Modern State Action Doctrine in the Age of Big Data


April 22: Helen Nissenbaum — 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken — From Collection to Use Regulation? A Comparative Perspective
April 8: Bilyana Petkova — Privacy and Federated Law-Making in the EU and the US: Defying the Status Quo?
April 1: Paula Kift — Metadata: An Ontological and Normative Analysis

March 25: Alex Lipton — Privacy Protections for the Secondary User of Consumer-Watching Technologies

March 11: Rebecca Weinstein (Cancelled)
March 4: Karen Levy & Alice Marwick — Unequal Harms: Socioeconomic Status, Race, and Gender in Privacy Research


February 25: Luke Stark — NannyScam: The Normalization of Consumer-as-Surveillor


February 18: Brian Choi — A Prospect Theory of Privacy

February 11: Aimee Thomson — Cellular Dragnet: Active Cell Site Simulators and the Fourth Amendment

February 4: Ira Rubinstein — Anonymity and Risk

January 28: Scott Skinner-Thompson — Outing Privacy

 

Fall 2014

December 3: Katherine Strandburg — Discussion of Privacy News [which can include recent court decisions, new technologies or significant industry practices]

November 19: Alice Marwick — Scandal or Sex Crime? Ethical and Privacy Implications of the Celebrity Nude Photo Leaks

November 12: Elana Zeide — Student Data and Educational Ideals: examining the current student privacy landscape and how emerging information practices and reforms implicate long-standing social and legal traditions surrounding education in America. Paper: The Proverbial Permanent Record

November 5: Seda Gürses — Let's first get things done! On division of labor and practices of delegation in times of mediated politics and politicized technologies
October 29: Luke Stark — Discussion on whether “notice” can continue to play a viable role in protecting privacy in mediated communications and transactions, given the increasing complexity of the data ecology and economy:
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online

Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)

Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead

October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue 

September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
 

Spring 2014

April 30: Seda Gürses — Privacy is Security is a prerequisite for Privacy is not Security is a delegation relationship
April 23: Milbank Tweed Forum Speaker — Brad Smith: The Future of Privacy
April 16: Solon Barocas — How Data Mining Discriminates - a collaborative project with Andrew Selbst, 2012-13 ILI Fellow
March 12: Scott Bulua & Amanda Levendowski — Challenges in Combatting Revenge Porn


March 5: Claudia Diaz — In PETs we trust: tensions between Privacy Enhancing Technologies and information privacy law. The presentation is drawn from the paper “Hero or Villain: The Data Controller in Privacy Law and Technologies” with Seda Gürses and Omer Tene.

February 26: Doc Searls — Privacy and Business

February 19: Report from the Obfuscation Symposium, including brief tool demos and individual impressions

February 12: Ira Rubinstein — The Ethics of Cryptanalysis: Code Breaking, Exploitation, Subversion and Hacking
February 5: Felix Wu — The Commercial Difference, which grows out of a piece just published in the Chicago Forum called The Constitutionality of Consumer Privacy Regulation

January 29: Organizational meeting
 

Fall 2013

December 4: Akiva Miller — Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy? AND Malte Ziewitz — What does transparency conceal?
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace

November 6: Karen Levy — Beating the Box: Digital Enforcement and Resistance
October 23: Brian Choi — The Third-Party Doctrine and the Required-Records Doctrine: Informational Reciprocals, Asymmetries, and Tributaries
October 16: Seda Gürses — Privacy is Don't Ask, Confidentiality is Don't Tell
October 9: Katherine Strandburg — Freedom of Association Constraints on Metadata Surveillance
October 2: Joris van Hoboken — A Right to be Forgotten
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting


Spring 2013

May 1: Akiva Miller — What Do We Worry About When We Worry About Price Discrimination
April 24: Hannah Bloch-Wehba and Matt Zimmerman — National Security Letters [NSLs]

April 17: Heather Patterson — Contextual Expectations of Privacy in User-Generated Mobile Health Data: The Fitbit Story
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA

April 3: Ira Rubinstein — Voter Privacy: A Modest Proposal
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day

March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
March 6: Mariana Thibes — Privacy at Stake: Challenging Issues in the Brazilian Context
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
 

Fall 2012

December 5: Martin French — Preparing for the Zombie Apocalypse: The Privacy Implications of (Contemporary Developments in) Public Health Intelligence
November 28: Scott Bulua and Catherine Crump — A framework for understanding and regulating domestic drone surveillance
November 21: Lital Helman — Corporate Responsibility of Social Networking Platforms
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
November 7: Sophie Hood — New Media Technology and the Courts: Judicial Videoconferencing
October 24: Matt Tierney and Ian Spiro — Cryptogram: Photo Privacy in Social Media
October 17: Frederik Zuiderveen Borgesius — Behavioural Targeting: How to Regulate?

October 10: Discussion of 'Model Law'

October 3: Agatha Cole — The Role of IP Address Data in Counter-Terrorism Operations & Criminal Law Enforcement Investigations: Looking towards the European framework as a model for U.S. Data Retention Policy
September 26: Karen Levy — Privacy, Professionalism, and Techno-Legal Regulation of U.S. Truckers
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data