PRG Calendar
Spring 2023
April 19: Anne Bellon - Seeing through the screen. Transparency as regulation in the digital economy
ABSTRACT: Considering the hegemonic and gatekeeping power of large platforms, new regulatory initiatives have been adopted in Europe to increase public supervision over digital markets. A common feature of these laws is "transparency obligations" requiring platforms to report on their moderation efforts, the profiling of their users, or their algorithms. Transparency thus appears as a central aspect of platform regulation, if not its main goal: the Digital Services Act aims to create "a safer and more transparent digital environment." Yet the notion of transparency is far from self-explanatory. Rather, it is a multifaceted concept discussed and studied by different literatures and traditions that do not always refer to the same process or disclosing organization. Finding its roots in liberal philosophy, then considered a grounding value for the Internet, transparency stands for heterogeneous practices and requirements gathered around the idea of good governance. Open data shared by public administrations, annual transparency reports published by Big Tech, and standard-setting in international finance each display a particular vision of transparency and raise different issues regarding their enforcement and efficacy. The paper discusses the notion of transparency, its philosophical and political origins and concrete instantiations, in order to understand how it became such a central language and issue in European digital regulation. I introduce a formal distinction between transparency as accountability, as control, or as openness and examine how these categories are combined in recent regulatory laws. I then study transparency practices and limitations applied to large platforms such as YouTube, Facebook and Twitter. Finally, I offer some thoughts about future enforcement of transparency regulation for the digital economy.
April 12: Gabriel Nicholas, Christopher Morton & Salome Viljoen - Researcher Access to Social Media Data: Lessons from Clinical Trial Data Sharing
ABSTRACT: As the problems of misinformation, child welfare, and heightened political polarization on social media platforms grow more salient, lawmakers and advocates are pushing to grant independent researchers access to social media data to better understand these problems. Yet researcher access is controversial. Privacy advocates and companies raise the potential privacy threats of researchers using such data irresponsibly. In addition, social media companies raise concerns over trade secrecy: the data these companies hold and the algorithms powered by that data are secretive sources of competitive advantage. This Article shows that one way to navigate this difficult strait is by drawing on lessons from the successful governance program that has emerged to regulate the sharing of clinical trial data. Like social media data, clinical trial data implicates both individual privacy and trade secrecy concerns. Nonetheless, clinical trial data’s governance regime was gradually legislated, regulated, and brokered into existence, managing the interests of industry, academia, and other stakeholders. The result is a functionally successful (if yet imperfect) clinical trial data-sharing ecosystem. Part I sketches the status quo of researchers’ access to social media data and provides a novel taxonomy of the problems that arise under this regime. Part II reviews the legal structures governing how clinical trial data is shared and traces the history of scandals, investigations, industry protest, and legislative response that gave rise to the mix of mandated sharing and experimental programs we have today. Part III applies lessons from clinical trial data sharing to social media data, and charts a strategic course forward. Two primary lessons emerge: First, law without institutions to implement the law is insufficient, and second, data access regimes must be tailored to the data they make available.
April 5: Amanda Parsons & Salome Viljoen - How Law Collides with Informational Capitalism
ABSTRACT: This Article argues that social data (i.e. data about people) production presents a form of value production that is historically particular to, and defining of, informational capitalism. Social data production materializes and stores value (and risk) in ways that are distinct from other value forms. In our view, this departure in data’s value proposition fuels the need to depart, in legal thinking about data, from the basic intuitions and first principles of the disparate legal regimes encountering social data production.
March 29: Cade Mallett - Judicial Review of Administrative Action Based on AI
ABSTRACT: When reviewing agency action for arbitrariness, courts must initially determine how "hard" a look to take at the substance of the action. The increasing use of AI as a basis for agency action threatens to complicate this threshold analysis significantly, as both agencies and courts commonly lack expertise in creating and reviewing AI. While lower courts commonly conclude, almost by rote, that "hard look" review of agency action applies, the Court's precedent in this area is decidedly more deferential, requiring a case-by-case assessment of the extent to which an agency leverages its substantive expertise in taking the action. Drawing on both the Court's expertise-based analysis and a review of the policy considerations underlying the decision to grant deference, this paper contributes a framework for courts to use in choosing the level of deference to grant agency action based on AI.
March 22: Stein - Innovation Protection for Platform Competition
ABSTRACT: The digital platform industry is dominated by a few players wielding immense influence over public discourse, access to information, consumer privacy, and online marketplaces. This concentration of power has raised concerns regarding consumer choice, reduced innovation, and increased prices in digital platform markets. Regulators and commentators have proposed various strategies to counteract concentration in digital platform markets, ranging from behavioral remedies to structural interventions. This article posits that the proposed remedies may inadvertently exacerbate market concentration by failing to address an underlying market failure rooted in intellectual property (IP) rules. I argue that current IP rules disproportionately favor incumbent online services and erect barriers to entry for small firms—which are crucial for disruptive innovation—and create barriers to growth that prevent firms' transition from nascent to actual competitors in the market. Automation of computer programming and the rise of remotely operated online software mean that disruptive interface designs are one of the only differentiators available to smaller companies. Since interface innovation receives almost no IP protection, incumbents use their existing infrastructure to saturate the market with copies before newcomers can build capacity. To address these concerns, I argue that IP protection for computer programs should be expanded for software interfaces and reduced along almost every other dimension. Decades of commentary and case law argue against interface protection but do not anticipate the new problems raised by AI and the internet. Still, my proposal is carefully limited. Drawing on doctrinal approaches used in recent data misappropriation cases, I propose a pragmatic, market-context-aware, quasi-property right tailored to protect disruptive innovations in software interface design.
March 8: Aileen Nielsen & Yafit Lev-Aretz - Disclosure and Our Moral Calculus: Do Data Use Disclosures Change Data Subjects' Sense of Culpability?
ABSTRACT: Do disclosures change the subjective moral calculus of information transactions for data subjects? Privacy regulation has long resorted to operationalizing individual control through notice and consent. The disclosure model, however, has been widely criticized by privacy scholars on philosophical, social, economic, and practical grounds. In this work, we add to this rich body of privacy scholarship by investigating shifts in subjective culpability induced by disclosures of data practices. Specifically, we set out to study whether data subjects feel culpable when privacy disclosures are readily available and accessible to them, yet they fail to inform themselves. The control paradigm of privacy purports to provide individuals with control over their personal information, mainly through notice and consent. But privacy scholars have consistently demonstrated that the notice and consent model fails to give individuals meaningful control over their personal information. The control paradigm has also been criticized for its limited conceptual framing of privacy values, particularly for ignoring dignitarian and socially inflected privacy harms. Such critiques have prompted the development of alternative proposals for appropriate privacy behaviors and laws, such as Helen Nissenbaum's contextual integrity framework, which informs this empirical investigation. Our research aims to assess whether heightened disclosure of illegitimate information flows could result in harm when individuals are formally given the means to access terms of service, yet choose not to read them. That individuals choose not to read even when disclosures are presented in accessible form and language is well established in privacy and contract law commentaries. Indeed, the impracticality of self-managing one's privacy choices has been compellingly established on many occasions, in work such as McDonald and Cranor (2008) and Marotta-Wurgler (2010), even when the disclosure is easily comprehensible (as in Svirsky (2022)). We hypothesize that providing granular and accessible disclosures of illegitimate information flows will make individuals who failed to read the disclosures feel worse, potentially shifting blame from the collector to themselves. This greater subjective sense of culpability among ordinary people is likely not mitigated by any compensating increase in individuals' ability to avoid undesired outcomes or even to process the disclosed information. We hope to show that disclosures are not only unsuccessful in offering control over personal information, but are also potentially harmful in laundering otherwise illegitimate information flows by triggering a sense of guilt in individuals. In an initial study, we found that participants exposed to different levels of intrusiveness in a disclosure notification showed different levels of regret regarding a decision not to read the terms of service. At the same time, differing levels of disclosure did not change people's expected future behavior or their attribution of moral responsibility as divided between the web user and the firm. This suggests that the effect of disclosures is most likely to create a subjective sense of regret or culpability without any compensatory benefits. This project, which we believe is the first to empirically study the shifting-blame dynamics of heightened disclosures, contributes to empirical studies in both law and moral philosophy.
In law, in addition to the robust literature criticizing notice and consent cited above (and in our bibliography), we import emergent insights from the consumer contracts setting, as in the work of Furth-Matzkin and Sommers (2020) and of Wilkinson-Ryan (2020), who identified a pattern whereby consumers rationalize otherwise unfair and even illegal contractual provisions. Likewise, we will contribute to the experimental literature on moral philosophy by understanding whether knowledge, in the absence of the ability to change outcomes, increases or shifts judgments of moral blame, continuing work by Knobe and Doris (2010) that seeks to understand how moral culpability is understood and assigned by ordinary people.
March 1: Ari Ezra Waldman - Privacy Civil Society
ABSTRACT: Privacy law and policy has attracted significant interest from civil society. Non-profit policy advocacy organizations—including the Electronic Privacy Information Center (EPIC), the Future of Privacy Forum (FPF), the Center for Democracy and Technology (CDT), as well as myriad other organizations that focus at least part of their policy research and advocacy on commercial privacy—advise policymakers in private, testify before legislatures, write white papers that propose model legislation, and advocate for specific changes in the law. The organizations themselves attract millions of dollars in funding, both from Big Tech and from independent foundations. These organizations have seats at the table, and yet there has been no systematic study of their role in constructing (or deconstructing) privacy law. This project, which is at an early stage, seeks to understand what nonprofit privacy law advocacy organizations do, why they do it, and how social forces have contributed to their participation in a wave of privacy laws that will do very little to actually protect privacy. What I have called a "second wave" of privacy law features ineffectual individual rights of control and internal compliance procedures (as well as some other things), many of which have been part of proposals and model legislation from advocacy organizations for some time. Even if we disagree on these proposals' effectiveness, it is still remarkable that many of these organizations have called for the same provisions in new privacy laws. Why? For this project, I will be going inside three nonprofit privacy advocacy organizations and interviewing their staffs and leadership. Do their positions reflect the relatively ambivalent cultural orientation toward privacy in the U.S.? Do their positions simply reflect what their donors want, what staffs think is possible, or the overriding need for organizations to maintain a seat at the table regardless of the substance of the proposal? The literature identifies several social forces that influence organizations like these. I want to see what has caused privacy advocacy organizations to do what they do.
February 22: Thomas Streinz - Contingencies of the Brussels Effect in the Digital Domain
ABSTRACT: The EU has been hailed as a global data regulator. European policymakers have embraced this "Brussels Effect" as the EU embarks on an ambitious new regulatory agenda to regulate the digital economy within Europe and beyond. But the extent to which EU law has shaped the digital domain globally has been overstated and should not be taken for granted. After fighting vigorously against its adoption, companies now often claim to embrace the EU's General Data Protection Regulation (GDPR) and to adhere to it globally. However, in practice, the GDPR's enforcement record is mixed at best and companies' assurances do not always hold up to closer scrutiny. The EU's recently adopted Data Governance Act (DGA), Digital Services Act (DSA), and Digital Markets Act (DMA) and the proposals for an Artificial Intelligence Act (AIA) and Data Act (DA) are unlikely to generate wholesale Brussels Effects. Instead, companies will pick and choose if, when, and how to implement European data law globally.
February 15: Sebastian Benthall - New Computational Approaches to Information Policy Research
ABSTRACT: For information policy in the United States to keep up with advances in cloud computing, app development, and artificial intelligence, new computational approaches are needed. Policy analysis suggests that regulatory efforts based on consumer and data protection have been ineffective. Rather, new regulatory efforts aim to reduce conflicts of interest between data processors and data subjects, and to address broader financial risks rather than individual consumer harms. New research approaches are needed to evaluate these proposals. We discuss the design of fiduciary AI and the use of heterogeneous agent modeling to model complex interactions between computation, business, society, and regulation.
February 8: Argyri Panezi, Leon Anidjar, and Nizan Geslevich Packin - The Metaverse Privacy Problem: If you built it, it will come
ABSTRACT: How realistic is the idea of a decentralized and privacy-enhancing Web 3.0? Are data governance and other legal tools currently employed to address the various information law and privacy challenges of Web 2.0 sufficient to tackle the new challenges that Web 3.0 brings about? These central questions set the stage for this Article's inquiry: how do we (re-)conceptualize privacy challenges in Web 3.0 in general, and in the metaverse in particular? The Article begins by describing the metaverse and discussing its technological foundation and associated privacy concerns. It explains how privacy risks stem from the vast amount of data generated, gathered, and exchanged in the metaverse, comprising personal data but also data constantly tracing behavior and interactions. Most importantly, it argues that in the metaverse data has an evolved role: it is no longer merely a valuable resource, as understood in Web 1.0 and Web 2.0; in Web 3.0, data is the infrastructure itself. Furthermore, the Article introduces a multidimensional conceptualization of data exchanges in the metaverse, traced at three levels of analysis: micro, macro, and meso. To mitigate this complexity and its consequences for privacy protection, the Article makes normative suggestions, namely by analysing the potential benefits of a market for privacy disclosure obligations. The conclusion reflects upon the long-term normative implications of the transition towards Web 3.0, revisiting the decades-old debate about the need (or not) to invent new rules and legal approaches to address legal problems in cyberspace.
February 1: Aniket Kesari - The Consumer Review Fairness Act and the Reputational Sanctions Market
ABSTRACT: How do statutes that protect consumers' rights to write reviews shape the reputational sanctions market? In 2016, Congress passed the Consumer Review Fairness Act (CRFA), commonly championed as the "right to Yelp" law. The law makes contract provisions that prevent honest consumer reviews unenforceable, but creates carve-outs for abusive, libelous, or false/misleading reviews. However, a number of states have similar laws that do not provide such a carve-out. These laws arguably create an important avenue for consumers to impose reputational sanctions on bad businesses, possibly as a substitute for legal sanctions. However, bad-faith consumers and competitors can also impose costs on businesses by posting dishonest, troll, or unfair reviews. This Article explores how the CRFA and similar state laws affect this reputational sanctions market. Using a difference-in-differences design, I show that the Illinois law that provides no carve-outs caused a small (30/month) increase in negative reviews and a small (1.5/month) decrease in troll-like reviews, but these results were not statistically significant. A computational text analysis leveraging sentiment analysis and embedding regression reveals no evidence that the content of reviews was altered by the CRFA.
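A minimal sketch of the kind of difference-in-differences comparison described above, run on simulated monthly review counts rather than the Article's Yelp data; the state names, effective date, effect size, and column names are illustrative assumptions only:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated monthly negative-review counts for a treated state whose law has no
    # carve-outs (IL) and an untreated comparison state (IN). All numbers are invented.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "state": np.repeat(["IL", "IN"], 48),
        "month": np.tile(np.arange(48), 2),
    })
    df["treated"] = (df["state"] == "IL").astype(int)
    df["post"] = (df["month"] >= 24).astype(int)        # hypothetical effective date
    df["negative_reviews"] = (
        200
        + 30 * df["treated"] * df["post"]               # built-in "true" effect of 30/month
        + rng.normal(scale=10, size=len(df))
    )

    # Difference-in-differences: the treated:post coefficient estimates the change in
    # negative reviews attributable to the law, net of state and time differences.
    model = smf.ols("negative_reviews ~ treated + post + treated:post", data=df).fit()
    print(model.params["treated:post"], model.pvalues["treated:post"])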
January 25: Michelle Shen - The Brussels Effect as a 'New-School' Regulation Globalizing Democracy: A Comparative Review of the CLOUD Act and the European-United States Data Privacy Framework
ABSTRACT: Cross-border data sharing is increasingly relevant for state purposes, entangling questions of balancing individuals' data privacy rights with state interests. The CLOUD Act's limited extraterritorial reach has prevented United States (U.S.) law enforcement from accessing data managed by U.S.-based companies stored on European soil. The primary issue this Note addresses is whether the EU-U.S. DPF (Data Privacy Framework), as a bilateral agreement between the EU and the U.S. incorporating U.S. laws as authority, may expand the extraterritorial reach of U.S. law enforcement to obtain data while maintaining privacy protection as a fundamental right. This Note asserts that the EU-U.S. DPF has three main benefits compared to the CLOUD Act. First, the EU-U.S. DPF can overcome the jurisdictional and comity issues the CLOUD Act faced in enabling U.S. law enforcement to obtain data stored in Europe because it is a bilateral agreement rather than a federal statute. Second, the EU-U.S. DPF is easier to implement domestically because it directly incorporates U.S. federal law and EU law and provides explicit instructions to courts. Third, the EU-U.S. DPF better protects privacy rights by giving companies and users direct pathways to challenge government demands for data. Normatively, the EU-U.S. DPF better embodies democratic ideals compared to the CLOUD Act because it expands claim-making in the U.S. court system to a greater number of individuals (such as EU citizens). However, neither the EU-U.S. DPF nor the CLOUD Act can independently enable claimants to actually receive remedies. Further, the EU-U.S. DPF may result in global disparity in citizens' access to privacy rights and may force nations to compromise their sovereign values. Lastly, this Note proposes a global treaty to coordinate foreign nations' privacy standards as a solution to uphold user privacy, enable law enforcement access to data, and honor nations' sovereignty.
Fall 2022
November 30: Ira Rubinstein - Artificial Speech and the First Amendment: A Skeptical View
ABSTRACT: What is the proper treatment under the First Amendment of speech or text generated by Artificial Intelligence (AI)? This is an increasingly relevant topic given the power of machine learning (ML), a subset of AI, to successfully perform a wide range of expressive tasks such as translation, summary, speech recognition, and responding to speech or text inputs with speech or text outputs. Furthermore, a text generation program known as GPT-3 has grabbed headlines for its impressive ability to produce text that is very hard to distinguish from text written by humans. Should the First Amendment treat these outputs as protected speech? In this presentation, I argue that this issue matters for both theoretical and practical purposes. On the theoretical side, artificial speech (my term for AI-generated speech) raises novel issues: Is it covered under existing First Amendment categories? Does such coverage require a human speaker? Or are human speakers superfluous as long as artificial speech is understandable and valuable to listeners? On the practical side, courts (and legislatures) have begun to address algorithmic or ML outputs in a number of settings such as search results, content moderation, and content recommendations, but with limited understanding of how AI works. And now the Supreme Court is poised to address algorithmic content recommendations in the Section 230 context. Down the road, even more is at stake. For example, the use of programs like GPT-3 to generate high-quality, cheap, and personalized misinformation and fake news is extremely worrisome. Also, it is easy to anticipate First Amendment challenges to future AI regulations seeking to address fairness, accountability, and transparency. But should the First Amendment protect artificial hate speech? Or block or constrain AI regulation? Is this inevitable under current theory and doctrine, or is there any way to redirect these outcomes? The presentation advances a somewhat radical claim, namely, that artificial speech should not enjoy First Amendment protection at all, or at least not to the same extent as human speech. This claim rests on three related arguments showing that the text GPT-3 generates is quite different from human speech:
1. It does not originate with a human speaker and AI is not a person.
2. It does not generate messages that realize any distinctively First Amendment values (such as knowledge via the marketplace of ideas, democracy and self-governance, or autonomy).
3. And these deficiencies undermine the value of artificial speech to listeners.
The argument boils down to this: it makes no sense to protect artificial speech under the First Amendment because ML programs are not the kind of things whose outputs should enjoy such protection. In other words, granting First Amendment protection to artificial speech is a “category mistake.”
November 16: Michal Gal - Synthetic Data: Legal Implications of the Data-Generation Revolution
ABSTRACT: A data-generation revolution is underway. Until recently, most of the data used for algorithmic decision-making was collected from events that take place in the physical world ("real" data). Yet it is forecasted that by 2024, 60% of the data used to train artificial intelligence systems around the world will be synthetic (!). Synthetic data is artificially generated data that has analytical value. For some purposes, synthetic datasets can replace real data by preserving or mimicking their properties. For others, they can complement real data in ways that increase their accuracy or their privacy or security protection. The importance of this data revolution for our economies and societies cannot be overstated. It affects data access and data flows, potentially changing the competitive dynamics in markets where real data cannot be easily collected, and potentially affecting decision-making in many spheres of our lives. In many ways, synthetic data does to data what synthetic threads did to cotton. This data-generation revolution requires us to reevaluate and potentially restructure our current legal data governance regime, which was designed with real data in mind. As we show, synthetic data challenges the current equilibrium erected by our laws among the values to be protected, including data utility, privacy, security, and human rights. For instance, by revolutionizing data access, synthetic data challenges assumptions regarding the height of access barriers to data. As such, it may affect the need for and the application of antitrust and direct regulation to some firms whose comparative advantage is data-based. Even more importantly, by potentially making data about individuals more granular and by increasing the accuracy and completeness of such data used for decision-making about individuals, synthetic data also challenges the governance structures and basic principles underpinning current privacy laws. Indeed, many argue that synthetic data does not constitute personal data and thus avoids the application of privacy laws. We challenge this claim. We also show that synthetic data exposes deep conceptual flaws in the data governance framework. It raises fundamental questions, such as whether data that does not correspond to any person in the original dataset should still be treated as personal data, and how inferences based on real data should be treated. Third, we reevaluate the justifications for legal requirements regarding data quality, such as data completeness and accuracy, as well as those relating to fairness and informed decision-making, such as data transparency and explainability. The claim is often made that such obligations enhance social welfare. Yet, as we show, synthetic data changes the optimal balance between the protected values, potentially leading to different optimal legal requirements in different contexts. For example, where synthetic data significantly increases consumer-welfare-enhancing decision-making, yet the causality at its basis cannot be easily explained, requirements to look under the hood of datasets might not always be welfare-maximizing. Accordingly, this article seeks to bring state-of-the-art data generation methods into the legal debate, and to propose legal reforms that capture the unique characteristics of synthetic data. While some of the challenges also arise with the use of real data, synthetic data puts these challenges on steroids.
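A toy illustration of the definitional point above: synthetic records drawn from a model fitted to "real" data can preserve its analytical properties (here, a correlation between two invented variables) without reproducing any original record. The variables and distribution are assumptions for illustration, not drawn from the paper:

    import numpy as np

    rng = np.random.default_rng(1)

    # Pretend "real" data: 1,000 records of (age, income), correlated.
    real = rng.multivariate_normal(mean=[35, 52_000],
                                   cov=[[25, 3_000], [3_000, 4_000_000]], size=1_000)

    # Fit a simple generative model (here just the empirical mean and covariance)
    # and sample fresh synthetic records from it.
    mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)
    synthetic = rng.multivariate_normal(mean, cov, size=1_000)

    print(np.corrcoef(real.T)[0, 1])        # correlation in the real data
    print(np.corrcoef(synthetic.T)[0, 1])   # roughly matched in the synthetic data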
November 9: Ashit Srivastava - Default Protectionist Tracing Applications: Erosion of Cooperative Federalism
ABSTRACT: The (in)determinate growth in the number of suspected patients during the current pandemic has naturally alarmed the Union and the State governments alike. With no cure in sight, prevention seems to be the only option available to governments, and tracing applications appear to be playing a fruitful role in this endeavor. However, tracing applications across several levels (namely the Union and the States) are also raising questions of an unprecedented nature about our peculiar federalism. Although there is already a tracing application launched at the central level by the Union Government, the Aarogya Setu application, several State Governments have floated their own tracing applications; at present, there are around 15 to 20 tracing applications at the State level. These applications bring a sense of informatics competitiveness, out of which arises a crucial question: if an individual registers on a State tracing application (say, the COVA Punjab application), is he or she exempt from registering on the Aarogya Setu application? Beyond this, another question pertains to privacy: with 15 to 20 tracing applications floated at the State level, mostly built by private developers, informational privacy becomes a concern, knowing that third parties would have access to the information. A further significant problem concerns interoperability: with several applications from different developers, their methods of anonymization and the source code of each application will all differ. These differences make the interoperability of information nearly impossible; therefore, data collected by the Union cannot readily be shared with the States, and vice versa.
November 2: María Angel - Privacy's Algorithmic Turn
ABSTRACT: Recently, the concept of information privacy discussed in American privacy law scholarship has experienced an algorithmic turn. In the wake of the transition from computer databases to artificial intelligence and algorithmic decision-making systems, scholars have begun to consider not only new types of information privacy harms (e.g., discrimination, algorithmic manipulation, procedural injustices) but also novel tools to address them (e.g., substantive rules and prohibitions, democratic forms of data governance). Inspired by Paul M. Schwartz and William M. Treanor’s 2003 paper The New Privacy, this article presents evidence of this new development in American privacy law scholarship. In the last ten years, American privacy legal scholars have gradually transformed information privacy into a post-algorithmic concept that, rather than enabling individuals to protect themselves from traditional privacy harms, is expected to act as a tool for the government to protect society against data extraction and its consequent power asymmetries. It is the objective of this article to describe this development, explore some of its sociotechnical reasons, and identify some of its more relevant implications. As with the "new privacy," time will tell if this new evolution of the concept of information privacy was worthwhile. For now, the fact that it is happening is certainly worth acknowledging.
October 26: Mimee Xu - Netflix and Forget
ABSTRACT: In the age of algorithmic curation, break-ups, pregnancy losses, and bereavements are especially painful. Suppose a sentimental person, who has streamed rom-coms exclusively with their significant other, breaks up. Consider an expecting mom who has shopped for baby goods and then miscarries. Their streaming and shopping recommendations do not update, and serve as reminders of their wounds. On one hand, Americans know their private data is used for developing algorithms. Their daily activities, from card swipes to social media updates, will eventually be used to curate their newsfeeds, recommend products, suggest movies, and target them for advertising. However, little is known about the curating technology's capacity to retain knowledge after the original data is removed. Oftentimes, even if the database "forgets", the downstream software built using the data will not immediately update. In fact, for lucrative and sticky information like user pregnancy, removing its influence by feeding the recommendation system new trends may still take a very long time. If a user wants to move on, they may reasonably demand that some past records be expunged from Netflix's recommendation engines. Fortunately, the technical problem isn't hard: most recommendation engines are built on collaborative filtering, which tends to follow a bi-linear format with the same mathematical form. My own research develops an "untraining" algorithm for these systems. The goal is to update downstream recommendations to reflect the removal of random training data without incurring the cost of re-training. I show that my un-training procedure is exact, fast, and cheap, and can be applied to trained recommendation models without altering the way companies develop these technologies in the first place. Potentially, a whole suite of systems — Netflix's movie recommendations, TikTok and YouTube's recommendations, algorithmic newsfeed ranking on Twitter — could apply a lightweight "forgetting" layer when users want the whole system to adapt to their deletions in real time.
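A minimal sketch of the bi-linear collaborative-filtering setup the abstract refers to, with a naive least-squares re-fit of one user's factors standing in for the talk's exact untraining procedure; all factors, items, and ratings below are simulated assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    n_users, n_items, k = 200, 100, 8
    U = rng.normal(size=(n_users, k))        # user factors
    V = rng.normal(size=(n_items, k))        # item factors
    scores = U @ V.T                         # bi-linear prediction: scores[u, i] = U[u] . V[i]

    user = 7
    watched = rng.choice(n_items, size=30, replace=False)
    ratings = scores[user, watched] + rng.normal(scale=0.1, size=30)

    # The user asks to expunge 20 of those titles (say, the rom-coms): keep only the
    # remaining 10 and re-solve that user's factor vector against what is left.
    keep, kept_ratings = watched[:10], ratings[:10]
    U[user] = np.linalg.lstsq(V[keep], kept_ratings, rcond=None)[0]

    print(scores[user, :5])                  # recommendations before deletion
    print((U @ V.T)[user, :5])               # recommendations after the naive re-fit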
October 19: Paul Friedl - Dis/similarities in the Design and Development of Legal and Algorithmic Normative Systems: the Case of Perspective API
ABSTRACT: For several decades now, legal scholars and other social scientists have been interested in conceiving of technologies as regulatory media and comparing their normative affordances to law’s regulatory characteristics. In line with current technological innovations, scholars have recently also started to explore the normative nature of Machine Learning and Artificial Intelligence systems. Most of this scholarship, however, adopts a largely theoretical perspective. This article takes a different approach and attempts to provide the discussion with a more empirical grounding. It does so by investigating the construction of one particular Machine Learning system, the content moderation system Perspective API developed by Google. Its open-source development and a voluminous trove of publicly available documentation render Perspective API a virtually unique resource to study the inner logics of Machine Learning systems development. Based on an in-depth analysis of these logics, the article fleshes out similarities and dissimilarities concerning the normative structure of algorithmic and legal systems with regard to four different subjects: normative tutelage, evaluative diversity, modes of evolution and standards of evaluation. The article then relates these findings to the European Union’s proposal for a Digital Services Act and shows how they might help in readying the Act for the realities of large-scale automated content moderation engines.
October 12: Katja Langenbucher - Fair Lending in the Age of AI
ABSTRACT: Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change over the last few years, due to the advent of big data and machine learning. Against that background, the paper explores the potential and the challenges of AI underwriting models to further inclusion in the context of credit decisions. First, my interest lies with the set of "invisible prime" minority applicants who perform better under AI models than under traditional metrics. Broader data and more refined models based on machine learning help to detect such applicants without triggering prohibitive costs. Second, I explore the risks of AI-enhanced underwriting. Historic training data shape these models in several ways. In the long run, this could decrease the efficiency of AI models. The EU proposal for an AI Act illustrates a possible approach to quality-oriented regulatory intervention. Further risks concern algorithmic discrimination. The paper highlights the difficulties of fitting algorithmic discrimination into the traditional regime of disparate treatment and disparate impact. It suggests that received antidiscrimination law, with its focus on the distinct motivational factors leading up to a decision, is ill-equipped to deal with algorithmic decision-making. This is especially obvious for AI models that process a large number of variables and offer few or no options for reverse-engineering to disentangle the weight of different variables. The paper concludes with a first attempt to outline the contours of fair lending law in the age of AI.
October 5: Ari Waldman - Gender Data in the Automated State
ABSTRACT: The state is data-driven and automated. State agencies collect, share, and use personal data in algorithmic systems designed to verify identity, secure spaces, and predict behaviors. The conventional account of the automated administrative state suggests that automation has arisen in a legal regulatory void. This Article challenges that account. Using a case study of sex and gender data in the automated state based on a novel data set of forms, public record requests, and direct interviews, this Article demonstrates how the law, both on the books and on the ground, mandates, incentivizes, and fosters a particular kind of automation that binarizes gender data and excludes transgender, nonbinary, and gender-nonconforming individuals. In myriad areas of public life—from voting to professional licensure—the state collects, shares, and uses sex and gender data, saddling gender-diverse populations with all the dangers of automation but without any of the benefits of legibility. I trace the law's gender data pathways every step of the way, from their collection on forms, through their sharing via intergovernmental agreements, and finally to their use in automated systems procured by agencies and legitimized by procedural privacy compliance. At each point, the law mandates and fosters automated governance that privileges efficiency over inclusive goals. The law's role in creating this automated state has mostly been hidden from view. It is a puzzle of statutes, on-the-ground policymaking, interagency agreements, efficiency mandates, and policy-by-procurement. In piecing this puzzle together, this Article provides a novel, critical account of the automated state that challenges the conventional wisdom of an automated state devoid of discretion and reliant on engineering expertise. The reality is far more complex. I demonstrate that the state is reliant on discretion and devoid of understanding of the power and limits of sex and gender data, to the detriment of gender-diverse populations.
September 28: Elettra Bietti - The Structure of Consumer Choice: Antitrust and Utilities' Convergence in Digital Platform Markets
ABSTRACT: The regulation of digital platforms is frequently framed as a legal and institutional trade-off. Should policy makers "regulate" or should they "break up" Big Tech? Should they decentralize digital power or should they transform companies like Google into accountable bottlenecks? These dichotomies reflect an impoverished – and deregulatory – understanding of the scope of antitrust law and the nature of regulation in the digital economy. Antitrust, which includes remedies such as break-ups, is conceived as a body of law which acts marginally to preserve pre-legal and decentralized market processes. Utilities and other regulatory schemes are viewed as rigid modes of intervention in production that interfere with free competition and pre-structure consumer choice and innovation. These dichotomies fail to reflect a more nuanced reality where decentralizing and centralizing efforts, which structure and enable digital markets, overlap across legal domains. The Article defends a conceptual move away from disciplinary silos and discontinuous remedial solutions and toward a joint approach to law in digital ecosystems. In practice, antitrust and regulatory law are converging in a new way. Antitrust cases are increasingly sensitive to infrastructural power, and digital market regulation is becoming consciously procompetitive. As such, deregulatory justifications for the distinction between antitrust and regulation, and between pre-legal and legally constructed market dynamics, are weakening. Antitrust is but one branch of law that structures and enables competition. Regulation does not undermine but instead can promote competition, innovation, and consumer choice. Relying on the case of Google and its regulation between 1998 and 2022, the Article situates antitrust and public utility efforts as part of a spectrum of coextensive regulatory approaches to digital markets. It configures the space of regulatory possibility across ex ante and ex post, centralizing and decentralizing strategies. Its aim is to guide a move away from siloed or acontextual efficiency or deregulatory justifications and toward situated legal-regulatory decisions about the collective needs and choices that markets like search or online advertising can advance. The role of law in constructing markets that are more pluralistic is underexplored. There is a need for more horizontal forms of production, including bottom-up entrepreneurship, cooperatives, and public options. The digital platform economy is a place to begin experimenting with new ways of structuring production in line with the public interest. The question is not whether to break up or regulate Big Tech; it is what forms of competition, innovation, and choice are needed in a digital society.
September 21: Mark Verstraete - Adversarial Information Law
ABSTRACT: American information privacy law has tacitly accepted anti-adversarialism in existing law and in popular proposals for reform. Rather than accepting conflict between users and platforms, privacy law has both implicitly and explicitly insisted on cooperation between these two opposing groups. This Article contends that the anti-adversarial turn in information privacy law is ineffective and normatively undesirable. As political theorists have long recognized, adversarialism—or conflict between mutually opposed groups—provides important benefits. Adversarialism is necessary to create the social differentiation that is foundational for the formation of political identity; moreover, adversarialism restores passion to politics and acts as a bulwark against stagnation and resignation.
September 14: Aniket Kesari - Do Data Breach Notification Laws Work?
ABSTRACT: Over 2.8 million Americans have reported being victims of identity theft in recent years, costing the U.S. economy at least $13 billion in 2020. In response to this growing problem, all 50 states have enacted some form of data breach notification law in the past 20 years. Despite their prevalence, evaluating the efficacy of these laws has remained elusive. This Article fills this gap, while further creating a new taxonomy to understand when these laws work and when they do not. Legal scholars have generally treated data breach notification laws as doing just one thing—disclosing information to consumers. But this approach ignores rich variation: differences in disclosure requirements to regulators and credit monitoring agencies; varied mechanisms for public and private enforcement; and a range of thresholds that define how firms should assess the likelihood that a data breach will ultimately harm consumers. This Article leverages the Federal Trade Commission's Consumer Sentinel database to build a comprehensive dataset measuring identity theft report rates since 2000. Using staggered adoption synthetic control – a popular method for policy evaluation that has yet to be widely applied in empirical legal studies – this Article finds that whether these laws work depends on which of these different strands of legal provisions are employed. In particular, while baseline disclosure requirements and private rights of action have small effects, requiring firms to notify state regulators reduces identity theft report rates by approximately 10%. And surprisingly, laws that fail to exclude low-risk breaches from reporting requirements are counterproductive, increasing identity theft report rates by 4%. The Article ties together these results within a functional typology: namely, whether legal provisions (1) enable consumer mitigation of data breach harms, or (2) encourage organizations to invest in better data security. It explains how these results and this typology provide lessons for current federal and state proposals to expand or amend the scope of breach notification laws. A new federal law that simply mimics existing baseline requirements is unlikely to have an additional effect and may preempt further innovations. At the state level, introducing private rights of action may help at the margins, but likely suffers from well-identified issues of adequately establishing standing and damages. States that close loopholes surrounding breach requirements for encrypted data see lower identity theft report rates, which suggests that other states may be wise to tighten these requirements as well. Looking forward, states should experiment with solutions such as automatically enrolling consumers in identity theft protection services or providing direct incentives for strong data security.
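A bare-bones synthetic-control sketch of the general method the Article builds on, simplified to a single treated state (not the staggered-adoption estimator) and run on simulated identity-theft report rates rather than Consumer Sentinel data:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    T_pre, T_post, n_donors = 36, 24, 20
    donors = rng.normal(50, 5, size=(T_pre + T_post, n_donors))   # donor-state report rates
    treated = donors[:, :3].mean(axis=1)                          # treated state tracks a few donors
    treated[T_pre:] -= 5                                          # built-in post-law drop (~10%)

    # Choose non-negative donor weights summing to one that best reproduce the
    # treated state's pre-treatment trajectory.
    def pre_treatment_loss(w):
        return np.sum((treated[:T_pre] - donors[:T_pre] @ w) ** 2)

    res = minimize(pre_treatment_loss, np.full(n_donors, 1 / n_donors),
                   bounds=[(0, 1)] * n_donors,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})

    # The post-treatment gap between the treated state and its synthetic counterpart
    # is the estimated effect of the law on report rates.
    synthetic = donors @ res.x
    print((treated[T_pre:] - synthetic[T_pre:]).mean())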
Spring 2022
April 27: Stefan Bechtold - Algorithmic Explanations in the Field
April 20: Molly de Blanc - Employing the Right to Repair to Address Consent Issues in Implanted Medical Devices
April 13: Sergio Alonso de Leon - IP law in the data economy: The problematic role of trade secrets and database rights for the emerging data access rights
April 6: Michelle Shen – Criminal Defense Strategy and Brokering Innovation in the Digital and Scientific Era: Justice for Whom?
March 30: Elettra Bietti – From Data to Attention Infrastructures: Regulating Extraction in the Attention Platform Economy
March 23: Aniket Kesari - A Computational Law & Economics Toolkit for Balancing Privacy and Fairness in Consumer Law
March 9: Gabriel Nicholas - Administering Social Data: Lessons for Social Media from Other Sectors
March 2: Jiaying Jiang - Central Bank Digital Currencies and Consumer Privacy Protection
February 23: Aileen Nielsen & Karel Kubicek - How Does Law Make Code? The Timing and Content of Open Source Responses to GDPR and CCPA
February 16: Stein - Unintended Consequences: How Data Protection Laws Leave our Data Less Protected
February 9: Stav Zeitouni - Propertization in Information Privacy
February 2: Ben Sundholm - AI in Clinical Practice: Reconceiving the Black-Box Problem
January 26: Mark Verstraete - Probing Personal Data
Fall 2021
December 1: Ira Rubinstein & Tomer Kenneth - Health Misinformation, Online Platforms, and Government Action
November 17: Aileen Nielsen - Can an algorithm be too accurate?
November 10: Thomas Streinz - Data Capitalism
November 3: Barbara Kayondo - A Governance Framework for Enhancing Patient’s Data Privacy Protection in Electronic Health Information Systems
October 27: Sebastian Benthall - Fiduciary Duties for Computational Systems
October 20: Jiaying Jiang - Technology-Enabled Co-Regulation as a New Regulatory Approach to Blockchain Implementation
October 13: Aniket Kesari - Privacy Law Diffusion Across U.S. State Legislatures
October 6: Katja Langenbucher - The EU Proposal for an AI Act – tested on algorithmic credit scoring
September 29: Francesca Episcopo - PrEtEnD – PRivate EnforcemenT in the EcoNomy of Data
September 22: Ben Green - The Flaws of Policies Requiring Human Oversight of Government Algorithms
September 15: Ari Waldman - Misinformation Project in Need of Pithy
Spring 2021
April 16: Tomer Kenneth — Public Officials on Social Media
April 9: Thomas Streinz — The Flawed Dualism of Facebook's Oversight Board
April 2: Gabe Nicholas — Have Your Data and Eat it Too: Bridging the Gap between Data Sharing and Data Protection
March 26: Ira Rubinstein — Voter Microtargeting and the Future of Democracy
March 19: Stav Zeitouni
March 12: Ngozi Nwanta
March 5: Aileen Nielsen
February 26: Tom McBrien
February 19: Ari Ezra Waldman
February 12: Albert Fox Cahn
February 5: Salome Viljoen & Seb Benthall — Data Market Discipline: From Financial Regulation to Data Governance
January 29: Mason Marks — Biosupremacy: Data Protection, Antitrust, and Monopolistic Power Over Human Behavior
Fall 2020
December 4: Florencia Marotta-Wurgler & David Stein — Teaching Machines to Think Like Lawyers
November 20: Andrew Weiner
November 6: Mark Verstraete — Cybersecurity Spillovers
October 30: Ari Ezra Waldman — Privacy Law's Two Paths
October 23: Aileen Nielsen — Tech's Attention Problem
October 16: Caroline Alewaerts — UN Global Pulse
October 9: Salome Viljoen — Data as a Democratic Medium: From Individual to Relational Data Governance
October 2: Gabe Nicholas — Surveillance Delusion: Lessons from the Vietnam War
September 25: Angelina Fisher & Thomas Streinz — Confronting Data Inequality
September 18: Danny Huang — Watching IoTs That Watch Us: Studying IoT Security & Privacy at Scale
September 11: Seb Benthall — Accountable Context for Web Applications
Spring 2020
April 29: Aileen Nielsen — "Pricing" Privacy: Preliminary Evidence from Vignette Studies Inspired by Economic Anthropology
April 22: Ginny Kozemczak — Dignity, Freedom, and Digital Rights: Comparing American and European Approaches to Privacy
April 15: Privacy and COVID-19 Policies
April 8: Ira Rubinstein — Urban Privacy
April 1: Thomas Streinz — Data Governance in Trade Agreements: Non-territoriality of Data and Multi-Nationality of Corporations
March 25: Christopher Morten — The Big Data Regulator, Rebooted: Why and How the FDA Can and Should Disclose Confidential Data on Prescription Drugs
March 4: Lilla Montagnani — Regulation 2018/1807 on the Free Flow of Non-Personal Data: Yet Another Piece in the Data Puzzle in the EU?
February 26: Stein — Flow of Data Through Online Advertising Markets
February 19: Seb Benthall — Towards Agent-Based Computational Modeling of Informational Capitalism
February 12: Yafit Lev-Aretz & Madelyn Sanfilippo — One Size Does Not Fit All: Applying a Single Privacy Policy to (too) Many Contexts
February 5: Jake Goldenfein & Seb Benthall — Data Science and the Decline of Liberal Law and Ethics
January 29: Albert Fox Cahn — Reimagining the Fourth Amendment for the Mass Surveillance Age
January 22: Ido Sivan-Sevilia — Europeanization on Demand? The EU's Cybersecurity Certification Regime Between the Rationale of Market Integration and the Core Functions of the State
Fall 2019
December 4: Ari Waldman — Discussion on Proposed Privacy Bills
November 20: Margarita Boyarskaya & Solon Barocas [joint work with Hanna Wallach] — What is a Proxy and why is it a Problem?
November 13: Mark Verstraete & Tal Zarsky — Data Breach Distortions
November 6: Aaron Shapiro — Dynamic Exploits: Calculative Asymmetries in the On-Demand Economy
October 30: Tomer Kenneth — Who Can Move My Cheese? Other Legal Considerations About Smart-Devices
October 23: Yafit Lev-Aretz & Madelyn Sanfilippo — Privacy and Religious Views
October 16: Salome Viljoen — Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought
October 9: Katja Langenbucher — Responsible A.I. Credit Scoring
October 2: Michal Shur-Ofry — Robotic Collective Memory
September 25: Mark Verstraete — Inseparable Uses in Property and Information Law
September 18: Gabe Nicholas & Michael Weinberg — Data, To Go: Privacy and Competition in Data Portability
September 11: Ari Waldman — Privacy, Discourse, and Power
Spring 2019
April 24: Sheila Marie Cruz-Rodriguez — Contractual Approach to Privacy Protection in Urban Data Collection
April 17: Andrew Selbst — Negligence and AI's Human Users
April 10: Sun Ping — Beyond Security: What Kind of Data Protection Law Should China Make?
April 3: Moran Yemini — Missing in "State Action": Toward a Pluralist Conception of the First Amendment
March 27: Nick Vincent — Privacy and the Human Microbiome
March 13: Nick Mendez — Will You Be Seeing Me in Court? Risk of Future Harm, and Article III Standing After a Data Breach
March 6: Jake Goldenfein — Through the Handoff Lens: Are Autonomous Vehicles No-Win for Users
February 27: Cathy Dwyer — Applying the Contextual Integrity Framework to Cambridge Analytica
February 20: Ignacio Cofone & Katherine Strandburg — Strategic Games and Algorithmic Transparency
February 13: Yan Shvartzshnaider — Going Against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis
January 30: Sabine Gless — Predictive Policing: In Defense of 'True Positives'
Fall 2018
December 5: Discussion of current issues
November 28: Ashley Gorham — Algorithmic Interpellation
November 14: Mark Verstraete — Data Inalienabilities
November 7: Jonathan Mayer — Estimating Incidental Collection in Foreign Intelligence Surveillance
October 31: Sebastian Benthall — Trade, Trust, and Cyberwar
October 24: Yafit Lev-Aretz — Privacy and the Human Element
October 17: Julia Powles — AI: The Stories We Weave; The Questions We Leave
October 10: Andy Gersick — Can We Have Honesty, Civility, and Privacy Online? Implications from Evolutionary Theories of Animal and Human Communication
October 3: Eli Siems — The Case for a Disparate Impact Regime Covering All Machine-Learning Decisions
September 26: Ari Waldman — Privacy's False Promise
September 19: Marijn Sax — Targeting Your Health or Your Wallet? Health Apps and Manipulative Commercial Practices
September 12: Mason Marks — Algorithmic Disability Discrimination
Spring 2018
May 2: Ira Rubinstein — Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
April 25: Elana Zeide — The Future Human Futures Market
April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
April 11: John Nay — Natural Language Processing and Machine Learning for Law and Policy Texts
April 4: Sebastian Benthall — Games and Rules of Information Flow
March 28: Yan Shvartzshnaider and Noah Apthorpe — Discovering Smart Home IoT Privacy Norms using Contextual Integrity
February 28: Thomas Streinz — TPP’s Implications for Global Privacy and Data Protection Law
February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
February 7: Madeline Bryd and Philip Simon — Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
January 31: Madelyn Sanfilippo — Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks
January 24: Jason Schultz and Julia Powles — Discussion about the NYC Algorithmic Accountability Bill
Fall 2017
November 29: Kathryn Morris and Eli Siems — Discussion of Carpenter v. United States
November 15: Leon Yin — Anatomy and Interpretability of Neural Networks
November 8: Ben Zevenbergen — Contextual Integrity for Password Research Ethics?
November 1: Joe Bonneau — An Overview of Smart Contracts
October 25: Sebastian Benthall — Modeling Social Welfare Effects of Privacy Policies
October 18: Sue Glueck — Future-Proofing the Law
October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
October 4: Finn Brunton — 'The Best Surveillance System we Could Imagine': Payment Networks and Digital Cash
September 27: Julia Powles — Promises, Polarities & Capture: A Data and AI Case Study
September 20: Madelyn Rose Sanfilippo and Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
September 13: Ignacio Cofone — Anti-Discriminatory Privacy
Spring 2017
April 26: Ben Zevenbergen — Contextual Integrity as a Framework for Internet Research Ethics
April 19: Beate Roessler — Manipulation
April 12: Amanda Levendowski — Conflict Modeling
April 5: Madelyn Sanfilippo — Privacy as Commons: A Conceptual Overview and Case Study in Progress
March 29: Hugo Zylberberg — Reframing the fake news debate: influence operations, targeting-and-convincing infrastructure and exploitation of personal data
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa will lead discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial.
March 8: Ira Rubinstein — Privacy Localism
March 1: Luise Papcke — Project on (Collaborative) Filtering and Social Sorting
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) — Privacy and Innovation
February 15: Argyri Panezi — Academic Institutions as Innovators but also Data Collectors - Ethical and Other Normative Considerations
February 8: Katherine Strandburg — Decisionmaking, Machine Learning and the Value of Explanation
February 1: Argyro Karanasiou — A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
January 25: Scott Skinner-Thompson — Equal Protection Privacy
Fall 2016
December 7: Tobias Matzner — The Subject of Privacy
November 30: Yafit Lev-Aretz — Data Philanthropy
November 16: Helen Nissenbaum — Must Privacy Give Way to Use Regulation?
November 9: Bilyana Petkova — Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
November 2: Scott Skinner-Thompson — Recording as Heckling
October 26: Yan Shvartzshnaider — Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
October 19: Madelyn Sanfilippo — Privacy and Institutionalization in Data Science Scholarship
October 12: Paula Kift — The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform
October 5: Craig Konnoth — Health Information Equity
September 28: Jessica Feldman — the Amidst Project
September 21: Nathan Newman — UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
September 14: Kiel Brennan-Marquez — Plausible Cause
Spring 2016
April 27: Yan Shvartzshnaider — Privacy and IoT AND Rebecca Weinstein — Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken — Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gurses]
April 13: Florencia Marotta-Wurgler — Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)
April 6: Ira Rubinstein — Big Data and Privacy: The State of Play
March 30: Clay Venetis — Where is the Cost-Benefit Analysis in Federal Privacy Regulation?
March 23: Daisuke Igeta — An Outline of Japanese Privacy Protection and its Problems; Johannes Eichenhofer — Internet Privacy as Trust Protection
March 9: Alex Lipton — Standing for Consumer Privacy Harms
March 2: Scott Skinner-Thompson — Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]
February 24: Daniel Susser — Against the Collection/Use Distinction
February 17: Eliana Pfeffer — Data Chill: A First Amendment Hangover
February 10: Yafit Lev-Aretz — Data Philanthropy
February 3: Kiel Brennan-Marquez — Feedback Loops: A Theory of Big Data Culture
January 27: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
Fall 2015
November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 4: Solon Barocas and Karen Levy — Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton — Of Fembots and Men: Privacy Insights from the Ashley Madison Hack
October 21: Paula Kift — Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and co-author Nizan Geslevich Packin — Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser — What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin — Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé — Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson — Performative Privacy
September 9: Kiel Brennan-Marquez — Vigilantes and Good Samaritan
Spring 2015
April 22: Helen Nissenbaum — 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken — From Collection to Use Regulation? A Comparative Perspective
March 11: Rebecca Weinstein (Cancelled)
Fall 2014
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online
Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)
Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead
October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue
September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
Spring 2014
January 29: Organizational meeting
Fall 2013
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting
Spring 2013
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day
March 6: Mariana Thibes — Privacy at Stake, Challenging Issues in the Brazilian Context
March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
Fall 2012
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data