Privacy Research Group

The Privacy Research Group is a weekly meeting of students, professors, and industry professionals who are passionate about exploring, protecting, and understanding privacy in the digital age.

Joining PRG

Because we deal with early-stage work in progress, attendance at meetings of the Privacy Research Group is generally limited to researchers and students who can commit to ongoing participation in the group. To discuss joining the group, please contact Nicholas Tilmes. If you are interested in these topics, but cannot commit to ongoing participation in PRG, you may wish to join the PRG-All mailing list.
 
PRG Student Fellows—Student members of PRG have the opportunity to become Student Fellows. Student Fellows help bring the exciting developments and ideas of the Research Group to the outside world. The primary Student Fellow responsibility is to maintain an active web presence through the ILI student blog, reporting on current events and developments in the privacy field and bringing the world of privacy research to a broader audience. Fellows also have the opportunity to help promote and execute exciting events and colloquia, and even present to the Privacy Research Group. Student Fellow responsibilities are a manageable and enjoyable addition to the regular meeting attendance required of all PRG members. The Student Fellow position is the first step for NYU students into the world of privacy research. Interested students should email Student Fellow Coordinator Nicholas Tilmes with a brief (1-2 paragraph) statement of interest or for more information.


PRG Calendar

Spring 2024

April 17: Elettra Bietti - Data is Infrastructure

     ABSTRACT: In the context of the advent of generative AI and changing platform business strategies, data’s role as a currency cannot be overstated. The mass collection of data, its storage, use and reproduction in algorithmic training and processing are key to platform companies’ profits. This paper frames data as an infrastructural and contextual phenomenon. It argues that conceptualizing data as infrastructure prompts a redirection of data governance efforts, across privacy and antitrust law, toward greater contextual awareness. Data is what it does. Unlike oil or other resources and commodities to which it has been compared, data is not a static physical object that exists “out there” and that can be traded on a marketplace. Instead, perhaps more like water, it is a fluid, contextual, materially embedded phenomenon that acquires the multiplicity of functions that we project onto it. Data embeds the purposes, assumptions and rationales of those who produce, collect, use, share and monetize it. As noted by Jathan Sadowski, data in the platform economy is used not only to surveil, profile and target people with content and ads, but also to optimize systems; to manage, control and discipline processes; to model probabilities; to build new products and to grow the value of existing assets. It follows that data is economically and socially relevant, and thus legally relevant, primarily as versatile infrastructure and not as a commodity. In the existing platform context, the most significant uses of data about humans are internal to large platform companies like Meta and Alphabet and fuel the accumulation and exchange of other resources such as curated and refined forms of engagement. Viewing data as infrastructure has the potential to redirect digital governance efforts across privacy, data protection law and antitrust.

April 10: Angela Zhang - The Promise and Perils of China's Regulation of Artificial Intelligence

     ABSTRACT: In recent years, China has emerged as a pioneer in formulating some of the world’s earliest and most comprehensive regulations concerning artificial intelligence (AI) services. Thus far, much attention has focused on the restrictive nature of these rules, raising concerns that they might constrain Chinese AI development. This article is the first to draw attention to the expressive powers of Chinese AI legislation, particularly its information and coordination functions, in enabling the AI industry. Recent legislative measures, such as the interim measures to regulate generative AI and various local AI laws, offer little protective value to the Chinese public. Instead, these laws have sent a strong pro-growth signal to the industry while attempting to coordinate various stakeholders to accelerate technological progress. China’s strategically lenient approach to regulation may therefore offer its AI firms a short-term competitive advantage over their European and U.S. counterparts. However, such leniency risks creating potential regulatory lags that could escalate into AI-induced accidents and even disasters. The dynamic complexity of China’s regulatory tactics thus underscores the urgent need for increased international dialogue and collaboration with the country to tackle the safety challenges in AI governance.

April 3: Sebastian Benthall - Complex Sociotechnical System Alignment

     ABSTRACT: A common refrain in the field of AI ethics today is that AI should be aligned with human values. Current research and practice attempts this worthy aim with such techniques as reward modeling and constitutional prompting. In this work, I draw on work from cognitive psychology, systems theory, and science and technology studies to critique the way this problem is normally framed. It reifies AI as an autonomous entity rather than considering how it is embedded in and dependent on a sociotechnical system. It also elides the differences between artificial and living systems. A more realistic look at human values as expressions of human social organization, including human law, invites a new approach to thinking about ethical and accountable sociotechnical system design using complex systems theory. More than a philosophical critique, this suggests a novel frontier for AI research.

March 27: Katja Langenbucher - Financial Profiling

     ABSTRACT: This early-stage project explores financial profiling, understood as “automated processing of personal data with the aim of making a prediction about a person” that involves “financial resources or essential services such as housing, electricity, and telecommunication services”. I describe profiling as a searching and a signaling device. To those who provide access to financial resources or services, profiling is a searching device. For those who seek access, it works as a signaling device. Against this background, I discuss the role of regulation from both perspectives. For the provider of financial resources and essential services, I submit that existing regulation of profiling is largely concerned with those traditionally understood as decision-makers. I point towards predatory pricing and personalized pricing, as well as prudential and paternalistic statistical discrimination. I highlight limits to that approach and move on to propose a focus on profiling in line with the AI Act, the ECJ’s recent interpretation of the GDPR, and US regulation of credit reporting agencies. For those who seek resources and services, I focus on the role of regulation as enabling them to send an appropriate signal. This presupposes that they understand the relevant signal. Along those lines, I applaud the ECJ’s wide reading of the GDPR but reject the court’s restriction to situations where profiling is of “paramount importance” to a decision-maker.

March 13: Thomas Streinz - With Pride and Without Prejudice: Constructing European Data Law around the GDPR

     ABSTRACT: The EU has recently enacted a flurry of new legislation in the digital domain, including the Data Governance Act (DGA), Digital Services Act (DSA), Digital Markets Act (DMA), Data Act (DA), and most recently the Artificial Intelligence Act (AIA). These laws have different regulatory objectives and employ different regulatory approaches, yet they can all be conceptualized to some extent as “data laws”. They regulate to varying degrees what, when, where, how, and why data is to be accessed, shared, and transferred. For this reason, these new data laws need to position themselves vis-à-vis the EU’s established data protection law, especially the General Data Protection Regulation (GDPR). The contested legislative process and the final legislative outcome reveal that all new European data law gravitates around the GDPR. In other words, the EU’s regulatory strategy in the digital domain proceeds with pride for and without prejudice to the GDPR: scholarly criticism of the GDPR’s design and track record apparently does not penetrate the Brussels bubble, as its political economy coalesces around its landmark data protection law. This presentation questions the viability of constructing European data law in this way, as the new European data law is not actually “without prejudice” to the GDPR. Overlaps and tensions between the various legislative acts will eventually have to be resolved. If the EU wants to achieve its regulatory objectives, it may have to re-calibrate the relationship between data protection law and other domains of data law.

March 6: Yafit Lev-Aretz and Aileen Nielsen - Understanding Privacy as a Public Health Priority

     ABSTRACT: Privacy harms have taken on the dimensions of a massive crisis. Traditional legal frameworks, which predominantly rely on notice and consent or tort theories, have proven inadequate in addressing privacy’s proliferating challenges. Scholars have proposed novel definitions of privacy to factor in the social elements of privacy and the externalities of privacy decisions. Scholars have also proposed alternatives to individualized privacy self-management to improve the conceptual rationalizations of privacy law. While engaging and thought-provoking, these promising scholarly efforts have so far failed to inspire meaningful policy changes or to articulate successful privacy advocacy litigation strategies. Recent litigation targeting social media companies, whose business models intrinsically implicate personal data, presents a compelling opportunity to advance novel legal theories for holding firms accountable for harmful data practices. Arguments conceptualizing excessive user data extraction as an unlawful public nuisance, as in Seattle School District No. 1 v. Meta Platforms, Snap Inc., TikTok, Alphabet, et al., offer a path forward to discipline market players through a litigation theory - public nuisance - that was previously employed for public health concerns, as with the opioid epidemic. Another novel litigation tactic is to attack social media harms by addiction in vulnerable populations, as shown by the case brought by dozens of states against Meta in Arizona v. Meta Platforms, again employing a public health theory and taking lessons from previous public health litigation, as with e-cigarettes and childhood obesity. In this paper, we call for a conscious paradigm shift in privacy scholarship and privacy law. We argue that the current state of privacy should be perceived as a public health concern, and possibly even a public health crisis. Reframing privacy as an essential component of public health, we argue, provides both rhetorical power to drive reforms and practical guidance for privacy law and policymaking. We contextualize the public health framing of privacy harms by tracing parallels to major public health crises that also once focused on individual responsibility - tobacco use, obesity, and the opioid epidemic. In all three cases, early tort litigation faced obstacles as the harms were blamed on individual responsibility rather than industry practices, mirroring current legal conceptions of privacy harms. Additional parallels exist, like difficulties establishing regulatory authority, state and local legislation filling federal gaps, and corporate efforts to obscure research on how industry practices shape individual behaviors. Rather than solely seeking individual consent, a public health framework recognizes privacy's broader societal impacts and emphasizes preventative protections. This approach justifies regulatory interventions that balance individual rights against collective welfare. Public health governance provides tested methodologies for curbing behaviors, like unfettered data collection, that jeopardize community well-being. In reconceiving privacy through this established lens, policymakers can implement solutions tailored to current systemic threats, moving beyond atomized notions of harm toward much-needed collective safeguards. Finally, we close by defending the notion of understanding privacy as a matter of public health in particular rather than as a public good more generally.

February 28: Ari Waldman - Compromised Advocates: Civil Society and the Future of Privacy Law

     ABSTRACT: This Article tells the inside story of the American Data Privacy and Protection Act (ADPPA) and the role of privacy nonprofit organizations in crafting it. It presents original research in the form of discourse analyses of primary source documents and interviews with Congressional staff and advocates at five privacy nonprofit organizations identified by Congressional staff as critical to drafting the bill. The Article situates ADPPA as a weak and ultimately ineffectual attempt at regulating the harms of data extractive capitalism at the federal level, demonstrates why ADPPA turned out the way it did, and why civil society organizations advocated for particular provisions and not others. It then asks and answers a question critical for the future of privacy law: Why would privacy advocates draft and advocate for a weak law? Scholars are used to answering questions like this by turning to sociological explanations about civil society’s organizational atrophy, its oligarchic tendencies, and its context, or to political science explanations about special interests or coalition advocacy. That conventional approach is persuasive, yet incomplete. It ignores the effects of the law. This Article argues that background law, the dynamics of policymaking, and proceduralist or legalistic conceptions of privacy channeled privacy advocacy toward milquetoast reform, contributing to a weak bill that would change little about the status quo. ADPPA may not have been signed into law, but it is critical that lawyers and legal scholars learn its lessons now: it is the single best snapshot we have of where privacy law is and where it is going. To help us end the cycle of mistakes and middling reform that keeps our privacy unprotected, especially as rapidly advancing artificial intelligence tools drive an expanding thirst for personal data, this Article goes behind the scenes to show how law weakens the democratic voice in policy and sustains data-extractive capitalism. In making this argument, the Article makes three contributions to sociolegal studies. Its original research pulls back the curtain on privacy civil society, a woefully understudied player in constructing privacy law, and challenges existing literature that sees privacy nonprofits as far more effective than they really are. It also widens the aperture for scholars trying to understand the role of law in creating social, economic, and institutional relations and the role of social groups and institutions in creating law. Finally, the Article contributes to our understanding of how legislation is drafted in today’s dysfunctional Congress. It concludes by looking forward, using the ADPPA case study to inform future fights to protect privacy in the information economy.

February 21: PRG Student Fellows - Executive Order on Safe, Secure, and Trustworthy Development and Use of AI

February 14: Stein & Florencia Marotta-Wurgler - Training 'Legal Thinking': An Automated Approach to Interpreting Privacy Policies

     ABSTRACT: Privacy policies govern firms’ collection, use, sharing, and security of personal information of consumers. These rich and complex legal documents include contractual promises related to the collection, use, sharing, and protection of personally identifiable information, as well as mandated disclosures dictated by data protection regimes such as the European Union’s GDPR and California’s CCPA. Privacy policies tend to be detailed, lengthy, and complex, making them difficult for consumers to understand and for regulators to use in policing firm behavior. Our project joins recent efforts to classify the terms in privacy policies to help automate their analysis using machine learning. Machine learning relies on human-coded examples to train, adjust, and test the capabilities of artificial intelligence algorithms (AIs). Until very recently, AIs’ ability to process large, unstructured texts was limited. As a result, datasets designed for legal tech focused on short phrases and simple legal concepts. Current AI technology, however, possesses an increased ability to process text, largely through the use of large language models (LLMs). To date, most applications of LLM-based legal tech have relied on untested AIs trained mostly on generic, non-legal datasets. The legal training data that exists is designed around the limitations of the previous generation of AIs, and focuses on the meaning of short sentences and individual clauses, not on entire documents or collections of documents. Our paper makes three contributions. First, we introduce an approach and toolset for labeling online contracts that generates datasets tailored for training and testing this new class of higher-capability AIs’ ability to process legal documents. Our coding labels encompass most terms commonly found in privacy policies and map directly to relevant legal benchmarks across the U.S. and the E.U. Second, we demonstrate how a dataset generated using our approach can be used to test and modify LLMs. We offer some preliminary results in the case of privacy policies, where we “tune” LLMs to label key aspects of privacy policies and automate our coding process. Third, we make our data and tools publicly available for others to use and extend.
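
     As a toy illustration of the human-coded train/test workflow the abstract describes (a sketch with made-up clauses and labels, not the authors’ toolset, labeling scheme, or data):

# Toy sketch: human-coded privacy-policy clauses train and test a simple
# classifier, mirroring the label-then-train-then-test cycle described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical human-coded examples (clause text -> label).
clauses = [
    "We share your personal information with third-party advertisers.",
    "You may request deletion of your account data at any time.",
    "We collect your IP address and device identifiers automatically.",
    "Data is retained only as long as necessary to provide the service.",
    "Our affiliates may receive your browsing history.",
    "You can opt out of marketing emails in your account settings.",
]
labels = ["third_party_sharing", "user_rights", "data_collection",
          "data_retention", "third_party_sharing", "user_rights"]

# Hold out some of the human-coded examples to *test* the model.
X_train, X_test, y_train, y_test = train_test_split(
    clauses, labels, test_size=2, random_state=0)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(list(zip(X_test, model.predict(X_test))))  # predictions on held-out clauses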

February 7: Fabien Lechevalier & Marie Potel-Saville - Moving from Dark to Fair Patterns: Regulation & countermeasures for human-centered digital

     ABSTRACT: Dark patterns or deceptive patterns can be defined as techniques for deceiving or manipulating users through interfaces that have the substantial effect of subverting or altering a user's autonomy, decision-making or choice as part of their online activities. These techniques are used, for example, to lead users to share ever more personal data, to pay more for products or services, to prevent them from canceling subscriptions, or to make it more difficult, or even impossible, to exercise their rights. The context of use of these services generates decision-making based on System 1 (Kahneman) and heuristics, which is fast and inexpensive in terms of cognitive costs. Beyond the direct consequences visible on an individual scale, these techniques contribute to the reinforcement of generalized behavioral manipulation practices that call into question our collective relationship to technological progress, when technology is not used in humans’ best interests, and our social contract in the digital age. This presentation aims to provide an overview of the regulatory framework governing dark patterns, to identify its shortcomings, and to propose sustainable regulatory solutions that genuinely take human cognitive limits into account.

January 31: Moritz Schramm - Platform Administrative Law: A Research Agenda

     ABSTRACT: Scholarship on online platforms is at a crossroads. Everyone agrees that platforms must be reformed. Many agree that platforms should respect certain guarantees known primarily from public law like transparency, accountability, and reason-giving. However, how to install public law-inspired structures like rights protection, review, accountability, deference, hierarchy and discretion, participation, etc. in hyper-capitalist organizations remains a mystery. This article proposes a new conceptual and, by extension, normative framework to analyze and improve platform reform: Platform Administrative Law (PAL). Thinking about platform power through the lens of PAL serves two functions. On the one hand, PAL describes the bureaucratic reality of digital domination by actors like Meta, X, Amazon, or Alibaba. PAL brings into view the mélange of normative material, and its infrastructural consequences, governing the power relationship between platform and individual. It allows us to take stock of the distinctive norms, institutions, and infrastructural set-ups enabling and constraining platform power. In that sense, PAL originates – paradoxically – from private actors. On the other hand, PAL draws from ‘classic’ administrative law to offer normative guidance to incrementally infuse ‘good administration’ into platforms. Many challenges platforms face can be thought of as textbook examples of administrative law. Maintaining efficiency while paying attention to individual cases, acting proportionately despite resource constraints, acting in fundamental rights-sensitive fields, implementing external accountability feedback, maintaining coherence in rule-enforcement, etc. – all this is administrative law. Thereby, PAL describes the imperfect and fragmented administrative regimes of platforms and draws inspiration from ‘classic’ administrative law for platforms. Consequently, PAL helps reestablish the supremacy of legitimate rules over technicity and profit in the context of platforms.

January 24: Priyanka Nanayakkara - Will Challenges of Understanding Differential Privacy Prevent it from Becoming Policy?

     ABSTRACT: Differential privacy (DP) is a state-of-the-art approach to privacy-preserving data analysis. Since its invention in 2006, researchers and practitioners have investigated its promise for satisfying various legal requirements, such as those elaborated in Title 13, the GDPR, and HIPAA. If DP is used to meet such requirements, a range of parties—including policymakers, data analysts, and the public—will increasingly be required to make decisions related to its deployment. However, DP is notoriously difficult for non-experts to understand and reason about, and prior evidence suggests that challenges to understanding it may prevent the buy-in necessary for DP to become policy. Whether these challenges can be successfully overcome in a way that results in broad trust remains to be seen. In this talk, I will discuss the implications of three categories of challenges to understanding DP and offer recommendations for how relevant parties may address these challenges to meet policy requirements.
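
     As background for readers new to DP (the standard textbook guarantee, not taken from the talk itself): a randomized mechanism \mathcal{M} is \varepsilon-differentially private if, for every pair of datasets D, D' differing in a single record and every set S of possible outputs,

     \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S].

     Smaller \varepsilon means the two output distributions are closer, so any single person's record has less influence on what an observer can infer.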

Fall 2023

November 29: Monika Leszczynska - Defining the Boundaries of Marketing Influence: Public Perception and Unfair Trade Practices in the Digital Era

     ABSTRACT: Companies are deploying increasingly sophisticated techniques to influence consumer choices and preferences in the digital environment. As yet, however, it is unclear whether and how consumer law should respond to such practices. This paper explores a valuable benchmark to inform an answer to this question: public norms and perceptions regarding online marketing practices. Understanding such perceptions is a crucial factor in assessing the legitimacy of consumer protection law and potential areas for reform. Based on an experimental vignette study, I examine the moral acceptability of several online marketing practices, as well as the factors that underlie these judgments. I demonstrate that practices leading to privacy harms are perceived as less morally acceptable than those causing no harm. Additionally, I show that some practices specifically invite moral condemnation relative to a neutral choice design, independent of the presence and type of harm involved. My findings suggest that there may well be a reason to expand the scope of unfair trade practices laws to include the scrutiny of online marketing strategies targeting consumer decisions that could potentially result in privacy harms. If such strategies pose a significant threat to consumer autonomy, the requirement to demonstrate tangible harm for classifying a practice as unfair should be eliminated. Furthermore, I suggest that the notion of unfairness should indeed encompass the potential threat to freedom of choice, with its assessment closely linked to consumers' perspectives.

November 15: Aileen Nielsen & Arna Woemmel - Ageism unrestrained: the perplexing lack of action to protect older adults in the digital world

     ABSTRACT: Discrimination against older people is a significant global threat to both individual well-being and society as a whole, as recently emphasized by the World Health Organization. Yet in the digital space – a key site for shaping social values and norms – ageism appears to be prevalent, unhindered, and largely undiscussed. In this Comment, we present surprisingly easy-to-find instances of discrimination against older people (ageism) in the digital space. These examples underscore that ageism is pervasive across various settings, including online platforms, search engines, and tech-related policy decisions. Alarmingly, key stakeholders such as the machine learning research community, policymakers, and tech companies appear remarkably passive in addressing these issues. We write to prompt further research and practical interventions into the prevalence of digital ageism and the best measures to combat it. The ML community, policymakers, and tech companies must act to curb its further proliferation.

November 8: Aniket Kesari - A Legal Framework for Explainable AI

     ABSTRACT: What makes a good artificial intelligence (AI) explanation? A foundational legal principle is that decision-makers must explain their reasons: judges write opinions, government agencies write reports detailing why they deny benefits in areas such as entitlements and immigration, and credit lenders need to inform applicants about the reasons for denying an application. As more of these decisions become automated with machine learning and AI tools, the notion of reason-giving has received renewed attention within the legal community. Black-box algorithms can improve the speed and accuracy of legal decision-making, but it can be difficult to scrutinize the reasons underlying their predictions. Even when it is possible to scrutinize the reasons, simple appeals to intuition may falter as these methods are adept at uncovering patterns that elude humans. An active literature in explainable AI has produced a growing library of methods for explaining algorithmic predictions and decisions. But explainable AI has largely focused on the needs of software developers to debug, rather than the interests of decision subjects to understand. The legal-ethical debates, on the one hand, and explainable AI innovations, on the other, have mostly proceeded independently without a connecting conversation. We bridge this gap by introducing a typology for good legal explanations in algorithmic decision-making contexts. Are explanations global (explaining the behavior of the system as a whole) or local (explaining the decision as it pertains to a particular data subject)? Are they contrastive (detailing what the data subject could have changed to receive a different decision) or non-contrastive (simply giving the model's predictions)? Ensuring that the bedrock principle of giving explanations is preserved, even as more high-stakes decisions are made with AI, will be a paramount law and policy issue. Explanations pave the way for other parts of a functioning legal system including the right to appeal adverse decisions, transparency in government decisions, and building public trust in institutions. We conclude by showing how our framework can be used to capture the benefits of AI decision-making while still producing good explanations. 
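
     To make the typology concrete, a small sketch (our own illustration with made-up data, not the paper's framework or code): a local explanation concerns one applicant's prediction, and a contrastive one searches for a change that would flip it.

# Toy sketch of a local, contrastive explanation for a credit decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [income in $k, debt in $k]; 1 = approve.
X = np.array([[80, 10], [65, 15], [30, 40], [25, 35], [90, 5], [20, 50]])
y = np.array([1, 1, 0, 0, 1, 0])
model = LogisticRegression().fit(X, y)

applicant = np.array([[35, 38]])
print("decision:", model.predict(applicant)[0])  # local: this applicant only

# Contrastive: the smallest income increase (in $1k steps) that flips a denial
# tells the applicant what could have changed, not merely what was predicted.
for extra in range(101):
    if model.predict(applicant + [[extra, 0]])[0] == 1:
        print(f"approved if income were {extra}k higher")
        break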

November 1: Toussaint Nothias - The Idea of Digital Colonialism: An Intellectual History At the Intersection of Research and Digital Rights Advocacy

     ABSTRACT: Our societies are in the process of grappling with the harmful and global impact of a wide range of data-driven technologies. Conversations about the oppressive dimensions of predictive algorithms, the privacy implications of facial recognition technology, the biases of NLP models and the blindspots of automated content moderation are increasingly widespread in tech policy and civil society communities, as well as in academia and the tech industry itself. Driving this reckoning, a growing community of scholars and civil society voices call for challenging what they see as harmful instances of ‘digital colonialism’. This paper proposes an intellectual history of this movement to critique and oppose digital colonialism. In the last five years, scholars from a wide range of disciplines have turned to this concept (or variations like techno-colonialism, tech colonialism, tech imperialism, data colonialism, algorithmic colonization or digital coloniality) as a novel explanatory framework to understand the societal, economic and political role of digital technologies on a global level. These include scholars in law (Coleman, 2019), computer science (Birhane, 2019), social theory (Couldry and Mejias, 2019), anthropology (Amrute, 2019), communication (Madianou, 2019; Oyedemi, 2019; Ricaurte, 2019), sociology (Kwet, 2019), and political science (Hicks, 2019). In this paper, I ask: why did scholars from varied disciplines turn to this idea of the colonial features of digital technologies? Why did they develop similar frameworks, at the same time, and at this specific historical juncture? In answering these questions, I make two main arguments. On the one hand, I argue that there are significant historical precedents to these ideas – including the STS literature on postcolonial computing from the early 2010s as well as the political economy literature on electronic colonialism and media imperialism in the 1970s and 80s. In other words, these ideas are not altogether new. They are part of an historical continuum of rich scholarly thinking about technology and coloniality. On the other hand, I argue that digital rights activists have been actively developing as well as popularizing these ideas over the last decade. I draw on cases from Kenya (related to election data and digital ID) and India (the net neutrality campaign), and writings by activists and artists, to illustrate the prominence and circulation of these ideas in digital rights communities at the same time as their emergence in academic publications. Together, these two findings invite us to conceptualize global knowledge production about technology and privacy as a dialectic process in which scholarly and activist communities are often co-creators.

October 25: Sebastian Benthall - Regulatory CI: Adaptively Regulating Privacy as Contextual Integrity

     ABSTRACT: Privacy regulators are captured by outdated privacy paradigms that challenge their ability to anticipate and prevent harms to social values due to inappropriate flows of information. Pivoted around a positive definition of privacy, Contextual Integrity (CI) can inform regulators by modeling how information flows can or cannot be legitimized by contextual purposes, societal values, and individual ends. Regulating according to CI is challenging in practice, however, because of the need to dynamically operationalize the social values put at risk by information flows, and because the flows themselves are opaque, complex, and require constant updates to regulatory models. We call for a shift in the object to be regulated, moving away from regulating ’data’ to regulating ’information flows’, and propose adaptive regulation techniques to apply this new approach. At the core of our proposal is Regulatory CI, a formalization of contextual integrity in which information flows are modeled and audited using Bayesian networks and causal game theory. These models are used in three parallel learning cycles of the adaptive regulatory process: (a) assessment of new risks, (b) real-time monitoring of existing threat actors, and (c) validation assessment of existing regulatory instruments to update and work around information flow models: Stakeholders develop a scientific model of privacy risk, calibrate it to data collected from society, and predict and test the impact of regulatory measures on beneficial social outcomes. We use the Cambridge Analytica scandal to demonstrate existing gaps in privacy regulations and the novelty of our proposal.
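
     As a minimal sketch of what probabilistic modeling of an information flow can look like (a toy example of ours, not the paper's Regulatory CI formalism; all probabilities are invented): a three-node network in which the context of a data transfer affects how likely a flow is and how likely a flow is to violate the context's informational norm. Enumerating the joint distribution lets an auditor ask diagnostic questions, such as which context most likely produced a violating flow.

# Toy Bayesian-network-style model of an information flow (illustrative only;
# hypothetical probabilities). Nodes: Context, Flow (data shared?), Violation.
from itertools import product

P_context = {"health": 0.5, "commerce": 0.5}
P_flow = {"health": 0.2, "commerce": 0.7}  # P(flow | context)
P_viol = {("health", True): 0.8, ("health", False): 0.0,
          ("commerce", True): 0.3, ("commerce", False): 0.0}  # P(viol | ctx, flow)

# Enumerate the joint distribution P(context, flow, violation).
joint = {}
for ctx, flow, viol in product(P_context, [True, False], [True, False]):
    p = P_context[ctx]
    p *= P_flow[ctx] if flow else 1 - P_flow[ctx]
    p *= P_viol[(ctx, flow)] if viol else 1 - P_viol[(ctx, flow)]
    joint[(ctx, flow, viol)] = p

# Audit query: given that a norm violation occurred, which context produced it?
p_viol = sum(p for (c, f, v), p in joint.items() if v)
for ctx in P_context:
    p_joint = sum(p for (c, f, v), p in joint.items() if v and c == ctx)
    print(f"P({ctx} | violation) = {p_joint / p_viol:.2f}")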

October 18: Michal Shur-Ofry - Multiplicity as an AI Governance Practice

     ABSTRACT: The recent proliferation of artificial intelligence large language models (LLMs) could mark a watershed moment in the interaction between AI and humans. As the enormous potential of large language models is starting to unfold, this research explores their systemic implications. Much of the public and scholarly discussion to date has focused on the risks of LLMs generating information that is false, misleading, or inaccurate. This study suggests that LLMs can impact social perceptions even when the output they generate is reliable and valuable. Relying on multidisciplinary research in computer science, sociology, communication and cultural studies, this article takes a close look at the technological paradigm underlying LLMs, and unravels the human judgements that ultimately affect their output. It then describes three case studies, based on experiments with ChatGPT, that demonstrate how LLMs can affect users’ perceptions, even when they generate valuable and relevant responses on issues such as historical figures, television series, or culinary options. The analysis indicates that the outputs of LLMs are likely to be geared toward the popular and reflect a mainstream and concentrated worldview, rather than a multiplicity of contents and narratives. This inclination could have adverse societal effects—from undermining cultural diversity, to limiting the multiplicity of narratives that build collective memory, to narrowing users’ perceptions and impeding democratic dialogue. The analysis further indicates that the power of LLMs to influence their users’ perceptions could be particularly significant, due to a series of design and technological traits that exacerbate the asymmetrical power relations between LLMs and their users. To address these challenges, the article proposes a novel policy response: recognizing multiplicity as an AI governance principle. Multiplicity implies exposing users, or at least alerting them, to the existence of multiple options, contents and narratives, and encouraging them to seek additional information. The analysis explains why current AI governance principles, such as explainability and transparency, are insufficient for alleviating the aforesaid concerns, and how adopting multiplicity as part of AI ethical and regulatory principles could directly address them. It then suggests ways for incorporating multiplicity into AI governance, concentrating on two nonexhaustive directions: Multiplicity-by-Design and Second (AI) Opinions. Finally, the study explores potential legal frameworks that can accommodate multiplicity as an AI governance principle. It concludes that adopting multiplicity as an AI governance principle will allow society to benefit from the integration of generative AI tools into our daily lives without jeopardizing the intricacies of the human experience.

October 11: Michael Goodyear - Infringing Information Architectures

     ABSTRACT: Information architectures underpin daily life, from television programming to social media. At the same time, new information distribution ecosystems that have the potential to change how we create and distribute information are rapidly evolving. However, they face an underexamined existential crisis from intellectual property law. Rights owners allege that, unlike a copycat mimicking a single painting or logo, online platforms should be liable for all their users’ copyright and trademark infringements. This is architectural infringement. If these claims succeeded, the aggregate damages could be catastrophic and significantly chill innovation. Architectural infringement’s threat to innovation is hardly new, but current judicial, legislative, and scholarly analyses are misleading and incomplete. This Article explains how, in response to these potentially ruinous claims, courts and Congress consistently refined infringement liability tests to accommodate new information architectures from video recorders to online file sharing. Together, these refinements resulted in today’s nuanced intellectual property liability framework. But these ad hoc refinements, while well-meaning, lacked a uniform normative purpose. By examining tort theory and prior architectural infringement cases, this Article proposes consciousness—awareness of the specific infringement and one’s role in furthering the infringement through action or inaction—as a framework for consistently and effectively responding to new architectural infringement claims. Guided by this framework, courts should further refine copyright and trademark law in response to new technologies such as blockchain, NFTs, and Web 3.0 ecosystems to reduce the risk of overbroad architectural infringement and facilitate greater innovation and information dissemination.

October 4: Yafit Lev-Aretz - Humanized Choice Architecture

     ABSTRACT: The field of choice architecture, which focuses on strategically influencing decisions by altering the context in which choices are made, has rapidly evolved in the past two decades. Early choice architecture relied on general nudges tailored to exploit common cognitive biases. With the rise of data-driven profiling came the “hypernudge,” highly personalized nudges that are adapted in real-time based on the user. While hypernudges raised concerns about manipulation, conversational AI represents an even more radical shift, introducing humanized choice architecture. Unlike previous nudges, interactive AI systems simulate human attributes to forge quasi-social connections with users. This enables highly customized nudges that leverage the user’s instinct to anthropomorphize machines. Even with full awareness that they are engaging with programmed code rather than with a human, the allure of machines that convincingly simulate human attributes leaves users prone to AI influence. The sophisticated nudging powers of conversational AI thus stem from a human instinct that overrides logic. This inversion of dynamics creates risks of manipulation that may infringe on decision-making autonomy regardless of the voluntary and conscious nature of user engagement with AI. Consumer protection laws that focus on overt manipulation may prove insufficient to address the novel risks posed by humanized choice architecture. In this work, I explore alternative legal frameworks that may offer meaningful safeguards, even when choices are made voluntarily and with full information. Specifically, the FTC’s Section 5 authority against unfair practices, along with the contract law doctrines of undue influence and unconscionability, may provide protections in cases of abusive AI nudging. Through these frameworks, policy could be adapted to determine when voluntary anthropomorphizing reflects a legally actionable compromise of personal autonomy in the age of AI. This approach could enable more nuanced safeguards tailored to the distinct regulatory challenges posed by AI's emergence as an intimate, humanized choice architect.

September 27: Alexis Shore - Governing the screenshot feature: Fighting interpersonal breaches of privacy through law and policy

     ABSTRACT: Case law has widely recognized screenshots of digital messages as validating evidence of unlawful behaviors. While this exemplifies a valued, utilitarian purpose of the screenshot feature, little attention has been paid to the screenshot feature as a threat to private digital communications. In fact, many digital messaging platforms authorize individuals to surreptitiously capture and share conversations using the screenshot feature without notice to the original information owner. This dismantles the ability to have true intellectual or intimate privacy within supposedly private digital mediums. Given their expressive function, law and policy have the power to influence not only technology design but also the societal norms around screenshot collection, use, and sharing of private digital conversations. Drawing on relevant case law and FTC rulemakings, the findings of this study highlight inconsistencies in the law and draw guidance from the FTC for regulating behaviors akin to screenshot collection and sharing.

September 20: Moritz Schramm - How the European Union and Big Tech reshape Judicial Power

     ABSTRACT: The proposed monograph, titled ‘Emulated Guardians: How the EU and Big Tech Reshape Judicial Power’, offers an original perspective on a fundamental problem of contemporary law: how to protect rights and control power if ever more power is exercised by actors beyond public authority? Using the example of content moderation on social media platforms, the book tells a fascinating story about the European Union’s and big corporations’ reflexive struggle for authority and legitimacy in global governance. The book develops an interdisciplinary theoretical framework and relies on exclusive empirical material, produced through qualitative interviews with lawmakers, managers, staffers, and activists. The book argues that one increasingly common approach to controlling private power and racking up public legitimacy is to re-use tried and tested vocabularies, mechanisms, and institutions known for controlling public power, especially those from public law. Particularly prominent among such seasoned mechanisms are courts or, more generally, adjudicators. The language of rights and constitutionalism gave rise to novel but ultimately ambiguous international adjudicators like Meta’s Oversight Board and permeates the EU’s newly established out-of-court dispute settlement bodies under the EU’s new platform law, the Digital Services Act (DSA). These adjudicators – which I conceptualize as Emulated Guardians – will decide cases relevant for millions, perhaps billions, of users. Building on exclusive access to interviewees at the European institutions and Meta’s Oversight Board and extensive document review, the book critically evaluates Emulated Guardians’ genesis, practice, and political and legal repercussions. The book connects various contemporary debates, e.g., regarding the EU’s Digital Services Act, the Brussels Effect, content moderation, Meta’s Oversight Board, business and human rights, law & tech, global administrative law, and digital constitutionalism. While situated firmly in a vibrant trans-Atlantic discourse, the book is the first monograph on novel adjudicators like the Oversight Board and the DSA’s out-of-court dispute settlement bodies specifically, and the broader phenomenon of Emulated Guardians in general.

September 13: David Stein – Rethinking IP (and Competition) in the Age of Online Software

     ABSTRACT: Current IP rules do not work for online consumer software. Software-specific IP doctrine formed during the era of installable software, which has high upfront costs and is easy to copy. IP rights helped companies recoup development costs by granting them the exclusive right to make and sell copies. But online software has low upfront costs and is not susceptible to copying, rendering IP protection unnecessary. Limits on software IP were designed to foster competition by letting market entrants replicate the interfaces of incumbent products. Online, copying incentives point in the other direction. The limits on software IP let incumbents raise barriers to entry by copying from newcomers. The net effect is an IP regime that exacerbates preexisting tendencies towards market concentration and depressed innovation in markets for online consumer services. Given the growing role online services play in data collection, commerce, and speech, these broken innovation and competition incentives have far-reaching effects. Fixing those incentives is urgent. Policymakers and commentators blame the concentration of online services on structural market failures and turn to antitrust remedies for solutions. This pervasive narrative focuses on a symptom, not the cause. I argue that tech concentration is an artifact of IP law’s failure to keep up with technology. This article proposes a program for IP reform: we should replace the trade-motivated aspects of software IP law with expanded trade regulation. Drawing on common-law misappropriation as a model, I sketch one politically pragmatic option for implementing those reforms. Beyond this article’s focus on software innovation, it serves as a case study describing the mechanics behind a law falling out of sync with technology. As such, it may help policymakers avoid similar legislative and regulatory pitfalls as they regulate emerging and fast-changing technologies.
 

Spring 2023

April 19: Anne Bellon - Seeing through the screen. Transparency as regulation in the digital economy
April 12: Gabriel Nicholas, Christopher Morton & Salome Viljoen - Researcher Access to Social Media Data: Lessons from Clinical Trial Data Sharing
April 5: Amanda Parsons & Salome Viljoen - How Law Collides with Informational Capitalism
March 29: Cade Mallett - Judicial Review of Administrative Action Based on AI
March 22: Stein - Innovation Protection for Platform Competition
March 8: Aileen Nielsen & Yafit Lev-Aretz - Disclosure and Our Moral Calculus: Do Data Use Disclosures Change Data Subjects’ Sense of Culpability
March 1: Ari Ezra Waldman - Privacy Civil Society
February 22: Thomas Streinz - Contingencies of the Brussels Effect in the Digital Domain
February 15: Sebastian Benthall - New Computational Approaches to Information Policy Research
February 8: Argyri Panezi, Leon Anidjar, and Nizan Geslevich Packin - The Metaverse Privacy Problem: If you built it, it will come
February 1: Aniket Kesari - The Consumer Review Fairness Act and the Reputational Sanctions Market
January 25: Michelle Shen - The Brussels Effect as a ‘New-School’ Regulation Globalizing Democracy: A Comparative Review of the CLOUD Act and the European-United States Data Privacy FrameworkAlgorithmic Turn

Fall 2022

November 30: Ira Rubinstein - Artificial Speech and the First Amendment: A Skeptical View
November 16: Michal Gal - Synthetic Data: Legal Implications of the Data-Generation Revolution
November 9: Ashit Srivastava - Default Protectionist Tracing Applications: Erosion of Cooperative Federalism
November 2: María Angel - Privacy's Algorithmic Turn
October 26: Mimee Xu - Netflix and Forget
October 19: Paul Friedl - Dis/similarities in the Design and Development of Legal and Algorithmic Normative Systems: the Case of Perspective API
October 12: Katja Langenbucher - Fair Lending in the Age of AI
October 5: Ari Waldman - Gender Data in the Automated State
September 28: Elettra Bietti - The Structure of Consumer Choice: Antitrust and Utilities' Convergence in Digital Platform Markets
September 21: Mark Verstraete - Adversarial Information Law
September 14: Aniket Kesari - Do Data Breach Notification Laws Work?


Spring 2022

April 27: Stefan Bechtold - Algorithmic Explanations in the Field
April 20: Molly de Blanc - Employing the Right to Repair to Address Consent Issues in Implanted Medical Devices

April 13: Sergio Alonso de Leon - IP law in the data economy: The problematic role of trade secrets and database rights for the emerging data access rights
April 6: Michelle Shen – Criminal Defense Strategy and Brokering Innovation in the Digital and Scientific Era: Justice for Whom?
March 30: Elettra Bietti – From Data to Attention Infrastructures: Regulating Extraction in the Attention Platform Economy
March 23: Aniket Kesari - A Computational Law & Economics Toolkit for Balancing Privacy and Fairness in Consumer Law
March 9: Gabriel Nicholas - Administering Social Data: Lessons for Social Media from Other Sectors
March 2: Jiaying Jiang - Central Bank Digital Currencies and Consumer Privacy Protection
February 23: Aileen Nielsen & Karel Kubicek - How Does Law Make Code? The Timing and Content of Open Source Responses to GDPR and CCPA

February 16: Stein - Unintended Consequences: How Data Protection Laws Leave our Data Less Protected
February 9: Stav Zeitouni - Propertization in Information Privacy
February 2: Ben Sundholm - AI in Clinical Practice: Reconceiving the Black-Box Problem
January 26: Mark Verstraete - Probing Personal Data

 

Fall 2021

December 1: Ira Rubinstein & Tomer Kenneth - Health Misinformation, Online Platforms, and Government Action
November 17: Aileen Nielsen - Can an algorithm be too accurate?
November 10: Thomas Streinz - Data Capitalism
November 3: Barbara Kayondo - A Governance Framework for Enhancing Patient’s Data Privacy Protection in Electronic Health Information Systems
October 27: Sebastian Benthall - Fiduciary Duties for Computational Systems
October 20: Jiaying Jiang - Technology-Enabled Co-Regulation as a New Regulatory Approach to Blockchain Implementation
October 13: Aniket Kesari - Privacy Law Diffusion Across U.S. State Legislatures
October 6: Katja Langenbucher - The EU Proposal for an AI Act – tested on algorithmic credit scoring
September 29: Francesca Episcopo - PrEtEnD – PRivate EnforcemenT in the EcoNomy of Data
September 22: Ben Green - The Flaws of Policies Requiring Human Oversight of Government Algorithms
September 15: Ari Waldman - Misinformation Project in Need of Pithy
 

Spring 2021

April 16: Tomer Kenneth — Public Officials on Social Media
April 9: Thomas Streinz — The Flawed Dualism of Facebook's Oversight Board
April 2: Gabe Nicholas — Have Your Data and Eat it Too: Bridging the Gap between Data Sharing and Data Protection
March 26: Ira Rubinstein — Voter Microtargeting and the Future of Democracy
March 19: Stav Zeitouni
March 12: Ngozi Nwanta
March 5: Aileen Nielsen
February 26: Tom McBrien
February 19: Ari Ezra Waldman
February 12: Albert Fox Cahn
February 5: Salome Viljoen & Seb Benthall — Data Market Discipline: From Financial Regulation to Data Governance
January 29: Mason Marks — Biosupremacy: Data Protection, Antitrust, and Monopolistic Power Over Human Behavior
 

Fall 2020

December 4: Florencia Marotta-Wurgler & David Stein — Teaching Machines to Think Like Lawyers
November 20: Andrew Weiner
November 6: Mark Verstraete — Cybersecurity Spillovers
October 30: Ari Ezra Waldman — Privacy Law's Two Paths
October 23: Aileen Nielsen — Tech's Attention Problem
October 16: Caroline Alewaerts — UN Global Pulse
October 9: Salome Viljoen — Data as a Democratic Medium: From Individual to Relational Data Governance
October 2: Gabe Nicholas — Surveillance Delusion: Lessons from the Vietnam War
September 25: Angelina Fisher & Thomas Streinz — Confronting Data Inequality
September 18: Danny Huang — Watching IoTs That Watch Us: Studying IoT Security & Privacy at Scale
September 11: Seb Benthall — Accountable Context for Web Applications
   

Spring 2020

April 29: Aileen Nielsen — "Pricing" Privacy: Preliminary Evidence from Vignette Studies Inspired by Economic Anthropology
April 22: Ginny Kozemczak — Dignity, Freedom, and Digital Rights: Comparing American and European Approaches to Privacy
April 15: Privacy and COVID-19 Policies
April 8: Ira Rubinstein — Urban Privacy
April 1: Thomas Streinz — Data Governance in Trade Agreements: Non-territoriality of Data and Multi-Nationality of Corporations
March 25: Christopher Morten — The Big Data Regulator, Rebooted: Why and How the FDA Can and Should Disclose Confidential Data on Prescription Drugs
March 4: Lilla Montagnani — Regulation 2018/1807 on the Free Flow of Non-Personal Data: Yet Another Piece in the Data Puzzle in the EU?
February 26: Stein — Flow of Data Through Online Advertising Markets
February 19: Seb Benthall — Towards Agent-Based Computational Modeling of Informational Capitalism
February 12: Yafit Lev-Aretz & Madelyn Sanfilippo — One Size Does Not Fit All: Applying a Single Privacy Policy to (too) Many Contexts
February 5: Jake Goldenfein & Seb Benthall — Data Science and the Decline of Liberal Law and Ethics
January 29: Albert Fox Cahn — Reimagining the Fourth Amendment for the Mass Surveillance Age
January 22: Ido Sivan-Sevilia — Europeanization on Demand? The EU's Cybersecurity Certification Regime Between the Rationale of Market Integration and the Core Functions of the State

 

Fall 2019

December 4: Ari Waldman — Discussion on Proposed Privacy Bills
November 20: Margarita Boyarskaya & Solon Barocas [joint work with Hanna Wallach] — What is a Proxy and why is it a Problem?
November 13: Mark Verstraete & Tal Zarsky — Data Breach Distortions
November 6: Aaron Shapiro — Dynamic Exploits: Calculative Asymmetries in the On-Demand Economy
October 30: Tomer Kenneth — Who Can Move My Cheese? Other Legal Considerations About Smart-Devices
October 23: Yafit Lev-Aretz & Madelyn Sanfilippo — Privacy and Religious Views
October 16: Salome Viljoen — Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought
October 9: Katja Langenbucher — Responsible A.I. Credit Scoring
October 2: Michal Shur-Ofry — Robotic Collective Memory   
September 25: Mark Verstraete — Inseparable Uses in Property and Information Law
September 18: Gabe Nicholas & Michael Weinberg — Data, To Go: Privacy and Competition in Data Portability 
September 11: Ari Waldman — Privacy, Discourse, and Power


Spring 2019

April 24: Sheila Marie Cruz-Rodriguez — Contractual Approach to Privacy Protection in Urban Data Collection
April 17: Andrew Selbst — Negligence and AI's Human Users
April 10: Sun Ping — Beyond Security: What Kind of Data Protection Law Should China Make?
April 3: Moran Yemini — Missing in "State Action": Toward a Pluralist Conception of the First Amendment
March 27: Nick Vincent — Privacy and the Human Microbiome
March 13: Nick Mendez — Will You Be Seeing Me in Court? Risk of Future Harm, and Article III Standing After a Data Breach
March 6: Jake Goldenfein — Through the Handoff Lens: Are Autonomous Vehicles No-Win for Users
February 27: Cathy Dwyer — Applying the Contextual Integrity Framework to Cambridge Analytica
February 20: Ignacio Cofone & Katherine Strandburg — Strategic Games and Algorithmic Transparency
February 13: Yan Shvartshnaider — Going Against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis
January 30: Sabine Gless — Predictive Policing: In Defense of 'True Positives'


Fall 2018

December 5: Discussion of current issues
November 28: Ashley Gorham — Algorithmic Interpellation
November 14: Mark Verstraete — Data Inalienabilities
November 7: Jonathan Mayer — Estimating Incidental Collection in Foreign Intelligence Surveillance
October 31: Sebastian Benthall — Trade, Trust, and Cyberwar
October 24: Yafit Lev-Aretz — Privacy and the Human Element
October 17: Julia Powles — AI: The Stories We Weave; The Questions We Leave
October 10: Andy Gersick — Can We Have Honesty, Civility, and Privacy Online? Implications from Evolutionary Theories of Animal and Human Communication
October 3: Eli Siems — The Case for a Disparate Impact Regime Covering All Machine-Learning Decisions
September 26: Ari Waldman — Privacy's False Promise
September 19: Marijn Sax — Targeting Your Health or Your Wallet? Health Apps and Manipulative Commercial Practices
September 12: Mason Marks — Algorithmic Disability Discrimination
 

Spring 2018

May 2: Ira Rubinstein — Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
April 25: Elana Zeide — The Future Human Futures Market
April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
April 11: John Nay — Natural Language Processing and Machine Learning for Law and Policy Texts
April 4: Sebastian Benthall — Games and Rules of Information Flow
March 28: Yan Shvartshnaider and Noah Apthorpe — Discovering Smart Home IoT Privacy Norms using Contextual Integrity
February 28: Thomas Streinz — TPP’s Implications for Global Privacy and Data Protection Law

February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
February 7: Madeline Bryd and Philip Simon — Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
January 31: Madelyn Sanfilippo — Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks
January 24: Jason Schultz and Julia Powles — Discussion about the NYC Algorithmic Accountability Bill


Fall 2017

November 29: Kathryn Morris and Eli Siems — Discussion of Carpenter v. United States
November 15: Leon Yin — Anatomy and Interpretability of Neural Networks
November 8: Ben Zevenbergen — Contextual Integrity for Password Research Ethics?
November 1: Joe Bonneau — An Overview of Smart Contracts
October 25: Sebastian Benthall — Modeling Social Welfare Effects of Privacy Policies
October 18: Sue Glueck — Future-Proofing the Law
October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
October 4: Finn Brunton — 'The Best Surveillance System We Could Imagine': Payment Networks and Digital Cash
September 27: Julia Powles — Promises, Polarities & Capture: A Data and AI Case Study
September 20: Madelyn Rose Sanfilippo and Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
September 13: Ignacio Cofone — Anti-Discriminatory Privacy
 

Spring 2017

April 26: Ben Zevenbergen — Contextual Integrity as a Framework for Internet Research Ethics
April 19: Beate Roessler — Manipulation
April 12: Amanda Levendowski — Conflict Modeling
April 5: Madelyn Sanfilippo — Privacy as Commons: A Conceptual Overview and Case Study in Progress
March 29: Hugo Zylberberg — Reframing the Fake News Debate: Influence Operations, Targeting-and-Convincing Infrastructure and Exploitation of Personal Data
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa will lead discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial. 
March 8: Ira Rubinstein — Privacy Localism
March 1: Luise Papcke — Project on (Collaborative) Filtering and Social Sorting
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) — Privacy and Innovation
February 15: Argyri Panezi — Academic Institutions as Innovators but also Data Collectors: Ethical and Other Normative Considerations
February 8: Katherine Strandburg — Decisionmaking, Machine Learning and the Value of Explanation
February 1: Argyro Karanasiou — A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
January 25: Scott Skinner-Thompson — Equal Protection Privacy
 

Fall 2016

December 7: Tobias Matzner — The Subject of Privacy
November 30: Yafit Lev-Aretz — Data Philanthropy
November 16: Helen Nissenbaum — Must Privacy Give Way to Use Regulation?
November 9: Bilyana Petkova — Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
November 2: Scott Skinner-Thompson — Recording as Heckling
October 26: Yan Shvartshnaider — Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
October 19: Madelyn Sanfilippo — Privacy and Institutionalization in Data Science Scholarship
October 12: Paula Kift — The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform
October 5: Craig Konnoth — Health Information Equity
September 28: Jessica Feldman — The Amidst Project
September 21: Nathan Newman — UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
September 14: Kiel Brennan-Marquez — Plausible Cause
 

Spring 2016

April 27: Yan Shvartshnaider — Privacy and IoT AND Rebecca Weinstein — Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken — Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gürses]

April 13: Florencia Marotta-Wurgler — Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)

April 6: Ira Rubinstein — Big Data and Privacy: The State of Play

March 30: Clay Venetis — Where is the Cost-Benefit Analysis in Federal Privacy Regulation?

March 23: Daisuke Igeta — An Outline of Japanese Privacy Protection and Its Problems; Johannes Eichenhofer — Internet Privacy as Trust Protection

March 9: Alex Lipton — Standing for Consumer Privacy Harms

March 2: Scott Skinner-Thompson — Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]

February 24: Daniel Susser — Against the Collection/Use Distinction

February 17: Eliana Pfeffer — Data Chill: A First Amendment Hangover

February 10: Yafit Lev-Aretz — Data Philanthropy

February 3: Kiel Brennan-Marquez — Feedback Loops: A Theory of Big Data Culture

January 27: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
 

Fall 2015

December 2: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race AND Kiel Brennan-Marquez — Spokeo and the Future of Privacy Harms
November 18: Angèle Christin — Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 11: Joris van Hoboken — Privacy, Data Sovereignty and Crypto
November 4: Solon Barocas and Karen Levy — Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton — Of Fembots and Men: Privacy Insights from the Ashley Madison Hack

October 21: Paula Kift — Human Dignity and Bare Life: Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and Nizan Geslevich Packin — Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser — What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin — Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé — Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson — Performative Privacy

September 9: Kiel Brennan-Marquez — Vigilantes and Good Samaritans
 

Spring 2015

April 29: Sofia Grafanaki — Autonomy Challenges in the Age of Big Data; David Krone — Compliance, Privacy and Cyber Security Information Sharing; Edwin Mok — Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing; Dan Rudofsky — Modern State Action Doctrine in the Age of Big Data

April 22: Helen Nissenbaum — 'Respect for Context' as a Benchmark for Privacy: What It Is and Isn't
April 15: Joris van Hoboken — From Collection to Use Regulation? A Comparative Perspective
April 8: Bilyana Petkova — Privacy and Federated Law-Making in the EU and the US: Defying the Status Quo?
April 1: Paula Kift — Metadata: An Ontological and Normative Analysis

March 25: Alex Lipton — Privacy Protections for the Secondary User of Consumer-Watching Technologies

March 11: Rebecca Weinstein (Cancelled)
March 4: Karen Levy & Alice Marwick — Unequal Harms: Socioeconomic Status, Race, and Gender in Privacy Research

February 25: Luke Stark — NannyScam: The Normalization of Consumer-as-Surveillor

February 18: Brian Choi — A Prospect Theory of Privacy

February 11: Aimee Thomson — Cellular Dragnet: Active Cell Site Simulators and the Fourth Amendment

February 4: Ira Rubinstein — Anonymity and Risk

January 28: Scott Skinner-Thompson — Outing Privacy

 

Fall 2014

December 3: Katherine Strandburg — Discussion of Privacy News [which can include recent court decisions, new technologies or significant industry practices]

November 19: Alice Marwick — Scandal or Sex Crime? Ethical and Privacy Implications of the Celebrity Nude Photo Leaks

November 12: Elana Zeide — The Proverbial Permanent Record [PDF]: examining the current student privacy landscape and how emerging information practices and reforms implicate long-standing social and legal traditions surrounding education in America

November 5: Seda Gürses — Let's first get things done! On division of labor and practices of delegation in times of mediated politics and politicized technologies
October 29: Luke Stark — Discussion on whether "notice" can continue to play a viable role in protecting privacy in mediated communications and transactions given the increasing complexity of the data ecology and economy.
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online

Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)

Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgment in Europe: Taking Stock and Looking Ahead

October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue 

September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
 

Spring 2014

April 30: Seda Gürses — Privacy is Security is a prerequisite for Privacy is not Security is a delegation relationship
April 23: Milbank Tweed Forum Speaker — Brad Smith: The Future of Privacy
April 16: Solon Barocas — How Data Mining Discriminates, a collaborative project with Andrew Selbst, 2012-13 ILI Fellow
March 12: Scott Bulua & Amanda Levendowski — Challenges in Combatting Revenge Porn

March 5: Claudia Diaz — In PETs We Trust: Tensions between Privacy Enhancing Technologies and Information Privacy Law [drawn from the paper "Hero or Villain: The Data Controller in Privacy Law and Technologies" with Seda Gürses and Omer Tene]

February 26: Doc Searls — Privacy and Business

February 19: Report from the Obfuscation Symposium, including brief tool demos and individual impressions

February 12: Ira Rubinstein — The Ethics of Cryptanalysis: Code Breaking, Exploitation, Subversion and Hacking
February 5: Felix Wu — The Commercial Difference [which grows out of The Constitutionality of Consumer Privacy Regulation, just published in the Chicago Forum]

January 29: Organizational meeting
 

Fall 2013

December 4: Akiva Miller — Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy? & Malte Ziewitz — What does transparency conceal?
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace

November 6: Karen Levy — Beating the Box: Digital Enforcement and Resistance
October 23: Brian Choi — The Third-Party Doctrine and the Required-Records Doctrine: Informational Reciprocals, Asymmetries, and Tributaries
October 16: Seda Gürses — Privacy is Don't Ask, Confidentiality is Don't Tell
October 9: Katherine Strandburg — Freedom of Association Constraints on Metadata Surveillance
October 2: Joris van Hoboken — A Right to be Forgotten
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting


Spring 2013

May 1: Akiva Miller — What Do We Worry About When We Worry About Price Discrimination?
April 24: Hannah Bloch-Wehba and Matt Zimmerman — National Security Letters [NSLs]

April 17: Heather Patterson — Contextual Expectations of Privacy in User-Generated Mobile Health Data: The Fitbit Story
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump — Cotterman Case; Paula Helm — Anonymity in AA

April 3: Ira Rubinstein — Voter Privacy: A Modest Proposal
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day

March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
March 6: Mariana Thibes — Privacy at Stake: Challenging Issues in the Brazilian Context
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
 

Fall 2012

December 5: Martin French — Preparing for the Zombie Apocalypse: The Privacy Implications of (Contemporary Developments in) Public Health Intelligence
November 28: Scott Bulua and Catherine Crump — A framework for understanding and regulating domestic drone surveillance
November 21: Lital Helman — Corporate Responsibility of Social Networking Platforms
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
November 7: Sophie Hood — New Media Technology and the Courts: Judicial Videoconferencing
October 24: Matt Tierney and Ian Spiro — Cryptogram: Photo Privacy in Social Media
October 17: Frederik Zuiderveen Borgesius — Behavioural Targeting: How to Regulate?

October 10: Discussion of 'Model Law'

October 3: Agatha Cole — The Role of IP Address Data in Counter-Terrorism Operations & Criminal Law Enforcement Investigations: Looking towards the European framework as a model for U.S. Data Retention Policy
September 26: Karen Levy — Privacy, Professionalism, and Techno-Legal Regulation of U.S. Truckers
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data