November 29: Monika Leszczynska - Defining the Boundaries of Marketing Influence: Public Perception and Unfair Trade Practices in the Digital Era
ABSTRACT: Companies are deploying increasingly sophisticated techniques to influence consumer choices and preferences in the digital environment. As yet, however, it is unclear whether and how consumer law should respond to such practices. This paper explores a valuable benchmark to inform an answer to this question: public norms and perceptions regarding online marketing practices. Understanding such perceptions is a crucial factor in assessing the legitimacy of consumer protection law and potential areas for reform. Based on an experimental vignette study, I examine the moral acceptability of several online marketing practices, as well as the factors that underlie these judgments. I demonstrate that practices leading to privacy harms are perceived as less morally acceptable than those causing no harm. Additionally, I show that some practices specifically invite moral condemnation relative to a neutral choice design, independent of the presence and type of harm involved. My findings suggest that there may well be reason to expand the scope of unfair trade practices laws to include scrutiny of online marketing strategies targeting consumer decisions that could result in privacy harms. Where such strategies pose a significant threat to consumer autonomy, the requirement to demonstrate tangible harm before classifying a practice as unfair should be eliminated. Furthermore, I suggest that the notion of unfairness should encompass the potential threat to freedom of choice, with its assessment closely linked to consumers' perspectives.
November 15: Aileen Nielsen & Arna Woemmel - Ageism unrestrained: the perplexing lack of action to protect older adults in the digital world
ABSTRACT: Discrimination against older people is a significant global threat to both individual well-being and society as a whole, as recently emphasized by the World Health Organization. Yet in the digital space – a key source for shaping social values and norms – ageism appears to be prevalent, unhindered, and largely undiscussed. In this Comment, we present surprisingly easy-to-find instances of discrimination against older people (ageism) in the digital space. These examples underscore that ageism is pervasive across various settings, including online platforms, search engines, and tech-related policy decisions. Alarmingly, key stakeholders such as the machine learning research community, policymakers, and tech companies appear remarkably passive in addressing these issues. We write to prompt further research into the prevalence of digital ageism and practical interventions to combat it; the ML community, policymakers, and tech companies must act to curb its further proliferation.
November 8: Aniket Kesari - A Legal Framework for Explainable AI
ABSTRACT: What makes a good artificial intelligence (AI) explanation? A foundational legal principle is that decision-makers must explain their reasons: judges write opinions, government agencies write reports detailing why they deny benefits in areas such as entitlements and immigration, and credit lenders need to inform applicants about the reasons for denying an application. As more of these decisions become automated with machine learning and AI tools, the notion of reason-giving has received renewed attention within the legal community. Black-box algorithms can improve the speed and accuracy of legal decision-making, but it can be difficult to scrutinize the reasons underlying their predictions. Even when it is possible to scrutinize the reasons, simple appeals to intuition may falter as these methods are adept at uncovering patterns that elude humans. An active literature in explainable AI has produced a growing library of methods for explaining algorithmic predictions and decisions. But explainable AI has largely focused on the needs of software developers to debug, rather than the interests of decision subjects to understand. The legal-ethical debates, on the one hand, and explainable AI innovations, on the other, have mostly proceeded independently without a connecting conversation. We bridge this gap by introducing a typology for good legal explanations in algorithmic decision-making contexts. Are explanations global (explaining the behavior of the system as a whole) or local (explaining the decision as it pertains to a particular data subject)? Are they contrastive (detailing what the data subject could have changed to receive a different decision) or non-contrastive (simply giving the model's predictions)? Ensuring that the bedrock principle of giving explanations is preserved, even as more high-stakes decisions are made with AI, will be a paramount law and policy issue. 
Explanations pave the way for other parts of a functioning legal system including the right to appeal adverse decisions, transparency in government decisions, and building public trust in institutions. We conclude by showing how our framework can be used to capture the benefits of AI decision-making while still producing good explanations.
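The abstract's two axes of explanation (global vs. local, contrastive vs. non-contrastive) can be made concrete with a toy sketch. Everything below — the lending model, its feature weights, and the approval threshold — is invented for illustration and is not from the paper:

```python
# Toy linear credit-scoring model used only to illustrate the typology
# of explanations; weights and threshold are hypothetical.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # score >= THRESHOLD -> application approved


def score(applicant):
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)


def global_explanation():
    """Global: describes the behavior of the system as a whole."""
    return f"Decisions weight features as {WEIGHTS}; scores >= {THRESHOLD} are approved."


def local_noncontrastive(applicant):
    """Local, non-contrastive: reports the prediction for one data subject."""
    s = score(applicant)
    return f"Score {s:.2f}: {'approved' if s >= THRESHOLD else 'denied'}."


def local_contrastive(applicant):
    """Local, contrastive: what the subject could change to flip a denial."""
    s = score(applicant)
    if s >= THRESHOLD:
        return "Approved as-is."
    # Solve for the single-feature change (here: income) that reaches the threshold.
    needed = (THRESHOLD - s) / WEIGHTS["income"]
    return f"Denied; an income increase of {needed:.2f} would flip the decision."


applicant = {"income": 1.2, "debt": 0.5, "years_employed": 1.0}
print(global_explanation())
print(local_noncontrastive(applicant))
print(local_contrastive(applicant))
```

The contrastive variant is the counterfactual form the abstract describes: it tells the subject what would have produced a different decision, rather than merely restating the model's output.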
November 1: Toussaint Nothias - The Idea of Digital Colonialism: An Intellectual History At the Intersection of Research and Digital Rights Advocacy
ABSTRACT: Our societies are in the process of grappling with the harmful and global impact of a wide range of data-driven technologies. Conversations about the oppressive dimensions of predictive algorithms, the privacy implications of facial recognition technology, the biases of NLP models and the blindspots of automated content moderation are increasingly widespread in tech policy and civil society communities, as well as in academia and the tech industry itself. Driving this reckoning, a growing community of scholars and civil society voices calls for challenging what they see as harmful instances of ‘digital colonialism’. This paper proposes an intellectual history of this movement to critique and oppose digital colonialism. In the last five years, scholars from a wide range of disciplines have turned to this concept (or variations like techno-colonialism, tech colonialism, tech imperialism, data colonialism, algorithmic colonization or digital coloniality) as a novel explanatory framework to understand the societal, economic and political role of digital technologies on a global level. These include scholars in law (Coleman, 2019), computer science (Birhane, 2019), social theory (Couldry and Mejias, 2019), anthropology (Amrute, 2019), communication (Madianou, 2019; Oyedemi, 2019; Ricaurte, 2019), sociology (Kwet, 2019), and political science (Hicks, 2019). In this paper, I ask: why did scholars from varied disciplines turn to this idea of the colonial features of digital technologies? Why did they develop similar frameworks, at the same time, and at this specific historical juncture? In answering these questions, I make two main arguments. On the one hand, I argue that there are significant historical precedents to these ideas – including the STS literature on postcolonial computing from the early 2010s as well as the political economy literature on electronic colonialism and media imperialism in the 1970s and 80s.
In other words, these ideas are not altogether new. They are part of an historical continuum of rich scholarly thinking about technology and coloniality. On the other hand, I argue that digital rights activists have been actively developing and popularizing these ideas over the last decade. I draw on cases from Kenya (related to election data and digital ID) and India (net neutrality campaign), and writings by activists and artists, to illustrate the prominence and circulation of these ideas in digital rights communities alongside their emergence in academic publications. Together, these two findings invite us to conceptualize global knowledge production about technology and privacy as a dialectic process in which scholarly and activist communities are often co-creators.
October 25: Sebastian Benthall - Regulatory CI: Adaptively Regulating Privacy as Contextual Integrity
ABSTRACT: Privacy regulators are captured by outdated privacy paradigms that challenge their ability to anticipate and prevent harms to social values caused by inappropriate flows of information. Pivoting around a positive definition of privacy, Contextual Integrity (CI) can inform regulators by modeling how information flows can or cannot be legitimized by contextual purposes, societal values, and individual ends. Regulating according to CI is challenging in practice, however, because of the need to dynamically operationalize the social values put at risk by information flows, and because the flows themselves are opaque, complex, and require constant updates to regulatory models. We call for a shift in the object to be regulated, moving away from regulating ‘data’ to regulating ‘information flows’, and propose adaptive regulation techniques to apply this new approach. At the core of our proposal is Regulatory CI, a formalization of contextual integrity in which information flows are modeled and audited using Bayesian networks and causal game theory. These models are used in three parallel learning cycles of the adaptive regulatory process: (a) assessment of new risks, (b) real-time monitoring of existing threat actors, and (c) validation assessment of existing regulatory instruments to update and refine information flow models. Stakeholders develop a scientific model of privacy risk, calibrate it to data collected from society, and predict and test the impact of regulatory measures on beneficial social outcomes. We use the Cambridge Analytica scandal to demonstrate existing gaps in privacy regulations and the novelty of our proposal.
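The core mechanism — modeling an information flow as a Bayesian network and predicting the effect of a regulatory intervention on it — can be sketched in miniature. The network structure and every probability below are invented for illustration; the paper's formalization is richer and adds causal game theory for strategic actors:

```python
# Toy sketch of Regulatory CI: an information flow as a tiny Bayesian
# network (flow -> norm violation -> harm), audited by exact enumeration.
# All probabilities are hypothetical.

P_FLOW = 0.6  # P(data flows to a third party)

# P(privacy harm | flow occurred, flow violated contextual norms)
P_HARM = {
    (True, True): 0.5,
    (True, False): 0.1,
    (False, True): 0.0,  # no flow -> no flow-based harm
    (False, False): 0.0,
}


def p_harm(p_violation_given_flow):
    """Marginal P(harm), enumerating the two binary parent variables."""
    total = 0.0
    for flow in (True, False):
        pf = P_FLOW if flow else 1 - P_FLOW
        for viol in (True, False):
            pv = p_violation_given_flow if viol else 1 - p_violation_given_flow
            total += pf * pv * P_HARM[(flow, viol)]
    return total


# A regulator can test a candidate instrument before enacting it: suppose
# a rule would halve the rate of norm-violating flows.
baseline = p_harm(0.7)
with_rule = p_harm(0.35)
print(f"P(harm) baseline: {baseline:.3f}; with rule: {with_rule:.3f}")
```

This is the shape of the adaptive loop the abstract describes: calibrate the model to observed data, then predict and test the effect of a regulatory measure by intervening on the model rather than on society first.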
October 18: Michal Shur-Ofry - Multiplicity as an AI Governance Practice
ABSTRACT: The recent proliferation of artificial intelligence large language models (LLMs) could mark a watershed moment in the interaction between AI and humans. As the enormous potential of large language models is starting to unfold, this research explores their systemic implications. Much of the public and scholarly discussion to date has focused on the risks of LLMs generating information that is false, misleading, or inaccurate. This study suggests that LLMs can impact social perceptions, even when the output they generate is reliable and valuable. Relying on multidisciplinary research in computer science, sociology, communication and cultural studies, this article takes a close look at the technological paradigm underlying LLMs, and unravels the human judgements that ultimately affect their output. It then describes three case studies, based on experiments with ChatGPT, that demonstrate how LLMs can affect users’ perceptions, even when they generate valuable and relevant responses on issues such as historical figures, television series, or culinary options. The analysis indicates that the outputs of LLMs are likely to be geared toward the popular and reflect a mainstream and concentrated worldview, rather than a multiplicity of contents and narratives. This inclination could have adverse societal effects—from undermining cultural diversity, to limiting the multiplicity of narratives that build collective memory, narrowing users’ perceptions, or impeding democratic dialogue. The analysis further indicates that the power of LLMs to influence their users’ perceptions could be particularly significant, due to a series of design and technological traits that exacerbate the asymmetrical power relations between LLMs and their users. To address these challenges, the article proposes a novel policy response: recognizing multiplicity as an AI governance principle.
Multiplicity implies exposing users, or at least alerting them, to the existence of multiple options, contents and narratives, and encouraging them to seek additional information. The analysis explains why current AI governance principles, such as explainability and transparency, are insufficient for alleviating the aforesaid concerns, and how adopting multiplicity as part of AI ethical and regulatory principles could directly address them. It then suggests ways of incorporating multiplicity into AI governance, concentrating on two non-exhaustive directions: Multiplicity-by-Design and Second (AI) Opinions. Finally, the study explores potential legal frameworks that can accommodate multiplicity as an AI governance principle. It concludes that integrating multiplicity as an AI governance principle will allow society to benefit from the integration of generative AI tools into our daily lives without jeopardizing the intricacies of the human experience.
October 11: Michael Goodyear - Infringing Information Architectures
ABSTRACT: Information architectures underpin daily life, from television programming to social media. At the same time, new information distribution ecosystems that have the potential to change how we create and distribute information are rapidly evolving. However, they face an underexamined existential crisis from intellectual property law. Rights owners allege that, unlike a copycat mimicking a single painting or logo, online platforms should be liable for all their users’ copyright and trademark infringements. This is architectural infringement. If these claims succeeded, the aggregate damages could be catastrophic and significantly chill innovation. Architectural infringement’s threat to innovation is hardly new, but current judicial, legislative, and scholarly analyses are misleading and incomplete. This Article explains how, in response to these potentially ruinous claims, courts and Congress consistently refined infringement liability tests to accommodate new information architectures, from video recorders to online file sharing. Together, these refinements resulted in today’s nuanced intellectual property liability framework. But these ad hoc refinements, while well-meaning, lacked a uniform normative purpose. By examining tort theory and prior architectural infringement cases, this Article proposes consciousness—awareness of the specific infringement and of one’s role in furthering it through action or inaction—as a framework for consistently and effectively responding to new architectural infringement claims. Guided by this framework, courts should further refine copyright and trademark law in response to new technologies such as blockchain, NFTs, and Web 3.0 ecosystems to reduce the risk of overbroad architectural infringement and facilitate greater innovation and information dissemination.
October 4: Yafit Lev-Aretz - Humanized Choice Architecture
ABSTRACT: The field of choice architecture, which focuses on strategically influencing decisions by altering the context in which choices are made, has rapidly evolved in the past two decades. Early choice architecture relied on general nudges tailored to exploit common cognitive biases. With the rise of data-driven profiling came the “hypernudge,” highly personalized nudges that are adapted in real-time based on the user. While hypernudges raised concerns about manipulation, conversational AI represents an even more radical shift, introducing humanized choice architecture. Unlike previous nudges, interactive AI systems simulate human attributes to forge quasi-social connections with users. This enables highly customized nudges that leverage the user’s instinct to anthropomorphize machines. Even with full awareness that they are engaging with programmed code rather than with a human, the allure of machines that convincingly simulate human attributes leaves users prone to AI influence. The sophisticated nudging powers of conversational AI thus stem from a human instinct that overrides logic. This inversion of dynamics creates risks of manipulation that may infringe on decision-making autonomy regardless of the voluntary and conscious nature of user engagement with AI. Consumer protection laws that focus on overt manipulation may prove insufficient to address the novel risks posed by humanized choice architecture. In this work, I explore alternative legal frameworks that may offer meaningful safeguards, even when choices are voluntary and informed. Specifically, the FTC’s Section 5 authority against unfair practices, along with the contract law doctrines of undue influence and unconscionability, may provide protections in cases of abusive AI nudging. Through these frameworks, policy could be adapted to determine when voluntary anthropomorphizing reflects a legally actionable compromise of personal autonomy in the age of AI.
This approach could enable more nuanced safeguards tailored to the distinct regulatory challenges posed by AI's emergence as an intimate, humanized choice architect.
September 27: Alexis Shore - Governing the screenshot feature: Fighting interpersonal breaches of privacy through law and policy
ABSTRACT: Case law has widely recognized screenshots of digital messages as validating evidence of unlawful behaviors. While this exemplifies a valued, utilitarian purpose of the screenshot feature, little attention has been paid to the screenshot feature as a threat to private digital communications. In fact, many digital messaging platforms allow individuals to surreptitiously capture and share conversations using the screenshot feature, without notice to the original information owner. This dismantles the ability to have true intellectual or intimate privacy within supposedly private digital mediums. Given its expressive function, law and policy have the power to influence not only technology design, but also the societal norms around screenshot collection, use, and sharing of private digital conversations. Drawing on relevant case law and FTC rulemakings, the findings of this study highlight inconsistencies in the law and derive guidance from the FTC's regulation of behaviors akin to screenshot collection and sharing.
September 20: Moritz Schramm - How the European Union and Big Tech Reshape Judicial Power
ABSTRACT: The proposed monograph, titled ‘Emulated Guardians: How the EU and Big Tech Reshape Judicial Power’, offers an original perspective on a fundamental problem of contemporary law: how to protect rights and control power if ever more power is exercised by actors beyond public authority? Using the example of content moderation on social media platforms, the book tells a fascinating story about the European Union’s and big corporations’ reflexive struggle for authority and legitimacy in global governance. The book develops an interdisciplinary theoretical framework and relies on exclusive empirical material, produced through qualitative interviews with lawmakers, managers, staffers, and activists. The book argues that one increasingly common approach to controlling private power and accruing public legitimacy is to re-use tried and tested vocabularies, mechanisms, and institutions known for controlling public power, especially those from public law. Particularly prominent among such seasoned mechanisms are courts or, more generally, adjudicators. The language of rights and constitutionalism gave rise to novel but ultimately ambiguous international adjudicators like Meta’s Oversight Board and permeates the out-of-court dispute settlement bodies newly established under the EU’s new platform law, the Digital Services Act (DSA). These adjudicators – which I conceptualize as Emulated Guardians – will decide cases relevant for millions, perhaps billions, of users. Building on exclusive access to interviewees at the European institutions and Meta’s Oversight Board and extensive document review, the book critically evaluates Emulated Guardians’ genesis, practice, and political and legal repercussions. The book connects various contemporary debates, e.g., regarding the EU’s Digital Services Act, the Brussels Effect, content moderation, Meta’s Oversight Board, business and human rights, law & tech, global administrative law, and digital constitutionalism.
While situated firmly in a vibrant trans-Atlantic discourse, the book is the first monograph on novel adjudicators like the Oversight Board and the DSA’s out-of-court dispute settlement bodies specifically, and on the broader phenomenon of Emulated Guardians in general.
September 13: David Stein – Rethinking IP (and Competition) in the Age of Online Software
ABSTRACT: Current IP rules do not work for online consumer software. Software-specific IP doctrine formed during the era of installable software, which has high upfront costs and is easy to copy. IP rights helped companies recoup development costs by granting them the exclusive right to make and sell copies. But online software has low upfront costs and is not susceptible to copying, rendering IP protection unnecessary. Limits on software IP were designed to foster competition by letting market entrants replicate the interfaces of incumbent products. Online, copying incentives point in the other direction. The limits on software IP let incumbents raise barriers to entry by copying from newcomers. The net effect is an IP regime that exacerbates preexisting tendencies towards market concentration and depressed innovation in markets for online consumer services. Given the growing role online services play in data collection, commerce, and speech, these broken innovation and competition incentives have far-reaching effects. Fixing those incentives is urgent. Policymakers and commentators blame the concentration of online services on structural market failures and turn to antitrust remedies for solutions. This pervasive narrative focuses on a symptom, not the cause. I argue that tech concentration is an artifact of IP law’s failure to keep up with technology. This article proposes a program for IP reform: we should replace the trade-motivated aspects of software IP law with expanded trade regulation. Drawing on common-law misappropriation as a model, I sketch one politically pragmatic option for implementing those reforms. Beyond this article’s focus on software innovation, it serves as a case study describing the mechanics behind a law falling out of sync with technology. As such, it may help policymakers avoid similar legislative and regulatory pitfalls as they regulate emerging and fast-changing technologies.
April 19: Anne Bellon - Seeing through the screen. Transparency as regulation in the digital economy
ABSTRACT: Considering the hegemonic and gatekeeping power of large platforms, new regulatory initiatives have been adopted in Europe to increase public supervision over digital markets. A common feature of this legislation is “transparency obligations” requiring platforms to inform about their moderation efforts, the profiling of their users, or their algorithms. Transparency thus appears as a central aspect of platform regulation, if not its main goal: to create “a safer and more transparent digital environment” under the Digital Services Act. Yet the notion of transparency is far from self-explanatory. Rather, it is a multifaceted concept discussed and studied by different literatures and traditions that do not always refer to the same processes or disclosing organizations. Finding its roots in liberal philosophy, and later considered a grounding value for the Internet, transparency stands for heterogeneous practices and requirements gathered around the idea of good governance. Open data shared by public administrations, annual transparency reports published by Big Tech, or standards-setting in international finance each display a particular vision of transparency and raise different issues regarding their enforcement and efficacy. The paper discusses the notion of transparency, its philosophical and political origins and concrete instantiations, in order to understand how it became such a central language and issue in European digital regulation. I introduce a formal distinction between transparency as accountability, as control, or as openness, and examine how these categories are combined in recent regulatory laws. I then study transparency practices and their limitations as applied to large platforms such as YouTube, Facebook and Twitter. Finally, I offer some thoughts about future enforcement of transparency regulation for the digital economy.
April 12: Gabriel Nicholas, Christopher Morton & Salome Viljoen - Researcher Access to Social Media Data: Lessons from Clinical Trial Data Sharing
ABSTRACT: As the problems of misinformation, child welfare, and heightened political polarization on social media platforms grow more salient, lawmakers and advocates are pushing to grant independent researchers access to social media data to better understand these problems. Yet researcher access is controversial. Privacy advocates and companies raise the potential privacy threats of researchers using such data irresponsibly. In addition, social media companies raise concerns over trade secrecy: the data these companies hold and the algorithms powered by that data are secretive sources of competitive advantage. This Article shows that one way to navigate this difficult strait is by drawing on lessons from the successful governance program that has emerged to regulate the sharing of clinical trial data. Like social media data, clinical trial data implicates both individual privacy and trade secrecy concerns. Nonetheless, clinical trial data’s governance regime was gradually legislated, regulated, and brokered into existence, managing the interests of industry, academia, and other stakeholders. The result is a functionally successful (if yet imperfect) clinical trial data-sharing ecosystem. Part I sketches the status quo of researchers’ access to social media data and provides a novel taxonomy of the problems that arise under this regime. Part II reviews the legal structures governing how clinical trial data is shared and traces the history of scandals, investigations, industry protest, and legislative response that gave rise to the mix of mandated sharing and experimental programs we have today. Part III applies lessons from clinical trial data sharing to social media data, and charts a strategic course forward. Two primary lessons emerge: First, law without institutions to implement the law is insufficient, and second, data access regimes must be tailored to the data they make available.
April 5: Amanda Parsons & Salome Viljoen - How Law Collides with Informational Capitalism
ABSTRACT: This Article argues that social data (i.e. data about people) production presents a form of value production that is historically particular to, and defining of, informational capitalism. Social data production materializes and stores value (and risk) in ways that are distinct from other value forms. In our view, this departure in data’s value proposition fuels the need to depart, in legal thinking about data, from the basic intuitions and first principles of the disparate legal regimes encountering social data production.
March 29: Cade Mallett - Judicial Review of Administrative Action Based on AI
ABSTRACT: When reviewing agency action for arbitrariness, courts must initially determine how “hard” a look to take at the substance of the agency action. The increasing use of AI as a basis for agency action threatens to complicate this threshold analysis significantly, as both agencies and courts commonly lack significant expertise in creating and reviewing AI. While lower courts commonly determine, by rote, that “hard look” review applies, the Court’s precedent in this area is decidedly more deferential, requiring a case-by-case assessment of the extent to which an agency leverages its substantive expertise in taking the action. Leveraging both the Court’s expertise-based analysis and a review of the policy considerations underlying the decision to grant deference, this paper contributes a framework for courts to use in choosing the level of deference to grant agency action based on AI.
March 22: David Stein - Innovation Protection for Platform Competition
ABSTRACT: The digital platform industry is dominated by a few players wielding immense influence over public discourse, access to information, consumer privacy, and online marketplaces. This concentration of power has raised concerns regarding consumer choice, reduced innovation, and increased prices in digital platform markets. Regulators and commentators have proposed various strategies to counteract concentration in digital platform markets, ranging from behavioral remedies to structural interventions. This article posits that the proposed remedies may inadvertently exacerbate market concentration by failing to address an underlying market failure rooted in intellectual property (IP) rules. I argue that current IP rules disproportionately favor incumbent online services, erect barriers to entry for small firms—which are crucial for disruptive innovation—and create barriers to growth that prevent firms’ transition from nascent to actual competitors in the market. The automation of computer programming and the rise of remotely-operated online software mean that disruptive interface designs are one of the only differentiators available to smaller companies. Since interface innovation receives almost no IP protection, incumbents use their existing infrastructure to saturate the market with copies before newcomers can build capacity. To address these concerns, I argue that IP protection for computer programs should be expanded for software interfaces and reduced along almost every other dimension. Decades of commentary and case law argue against interface protection, but do not anticipate the new problems raised by AI and the internet. Still, my proposal is carefully limited. Drawing on doctrinal approaches used in recent data misappropriation cases, I propose a pragmatic, market-context-aware, quasi-property right tailored to protect disruptive innovations in software interface design.
March 8: Aileen Nielsen & Yafit Lev-Aretz - Disclosure and Our Moral Calculus: Do Data Use Disclosures Change Data Subjects’ Sense of Culpability?
ABSTRACT: Do disclosures change the subjective moral calculus of information transactions for data subjects? Privacy regulation has long resorted to operationalizing individual control through notice and consent. The disclosure model, however, has been widely criticized by privacy scholars on philosophical, social, economic, and practical grounds. In this work, we add to this rich body of privacy scholarship by investigating shifts in subjective culpability induced by disclosures of data practices. Specifically, we set out to study whether data subjects feel culpable when privacy disclosures are readily available and accessible to them, yet they fail to inform themselves. The control paradigm of privacy purports to provide individuals with control over their personal information, mainly through notice and consent. But privacy scholars have consistently demonstrated that the notice and consent model fails to give individuals meaningful control over their personal information. The control paradigm has also been criticized for its limited conceptual framing of privacy values, particularly for ignoring dignitarian and socially-inflected privacy harms. Such critiques have prompted the development of alternative proposals for appropriate privacy behaviors and laws, such as Helen Nissenbaum’s contextual integrity framework, which informs this empirical investigation. Our research aims to assess whether heightened disclosure of illegitimate information flows could result in harm when individuals are formally given the means to access terms of service, yet choose not to read them. That individuals choose not to read even when disclosures are presented in accessible form and language is well established in privacy and contract law commentaries.
Indeed, the impracticality of self-managing one's privacy choices (in work such as McDonald and Cranor (2008) and Marotta-Wurgler (2010)), even when the disclosure is easily comprehensible (as in Svirsky (2022)), has been compellingly established on many occasions. We hypothesize that providing granular and accessible disclosures of illegitimate information flows will make individuals who failed to read the disclosures feel worse, potentially shifting blame from the collector to themselves. This greater subjective sense of culpability among ordinary people is likely not offset by any compensating increase in individuals’ ability to avoid undesired outcomes or even to process the disclosed information. We hope to show that disclosures not only fail to offer control over personal information but are also potentially harmful, laundering otherwise illegitimate information flows by triggering a sense of guilt in individuals. In an initial study, we found that participants exposed to different levels of intrusiveness in a disclosure notification showed different levels of regret about a decision not to read the terms of service. At the same time, differing levels of disclosure did not change people’s expected future behavior or their attribution of moral responsibility as divided between the web user and the firm. This suggests that disclosures most likely create a subjective sense of regret or culpability without any compensating benefits. This project, which we believe is the first to empirically study the blame-shifting dynamics of heightened disclosures, contributes to empirical studies in both law and moral philosophy.
In law, in addition to the robust literature criticizing notice and consent cited above (and in our bibliography), we import emergent insights from the consumer contracts setting, as in the work of Furth-Matzkin and Sommers (2020) and of Wilkinson-Ryan (2020), who identified a pattern whereby consumers rationalize otherwise unfair and even illegal contractual provisions. Likewise, we will contribute to the experimental literature on moral philosophy by understanding whether knowledge, in the absence of the ability to change outcomes, increases or shifts judgments of moral blame, continuing work by Knobe and Doris (2010) that seeks to understand how moral culpability is understood and assigned by ordinary people.
March 1: Ari Ezra Waldman - Privacy Civil Society
ABSTRACT: Privacy law and policy has attracted significant interest from civil society. Non-profit policy advocacy organizations—including the Electronic Privacy Information Center (EPIC), the Future of Privacy Forum (FPF), and the Center for Democracy and Technology (CDT), as well as myriad other organizations that focus at least part of their policy research and advocacy on commercial privacy—advise policymakers in private, testify before legislatures, write white papers that propose model legislation, and advocate for specific changes in the law. The organizations themselves attract millions of dollars in funding, both from Big Tech and from independent foundations. These organizations have seats at the table, and yet there has been no systematic study of their role in constructing (or deconstructing) privacy law. This project, which is at an early stage, seeks to understand what nonprofit privacy law advocacy organizations do, why they do it, and how social forces have contributed to their participation in a wave of privacy laws that will do very little to actually protect privacy. What I have called a "second wave" of privacy law features ineffectual individual rights of control and internal compliance procedures (among other things), many of which have long been part of proposals and model legislation from advocacy organizations. Even if we disagree about these proposals' effectiveness, it is still remarkable that so many of these organizations have called for the same provisions in new privacy laws. Why? For this project, I will be going inside three nonprofit privacy advocacy organizations and interviewing their staffs and leadership. Do their positions reflect the relatively ambivalent cultural orientation toward privacy in the U.S.? Do their positions simply reflect what their donors want, what staffs think is possible, or the overriding need for organizations to maintain a seat at the table regardless of the substance of the proposal?
The literature identifies several social forces that may influence these organizations; I want to see which of them have caused privacy advocacy organizations to do what they do.
February 22: Thomas Streinz - Contingencies of the Brussels Effect in the Digital Domain
ABSTRACT: The EU has been hailed as a global data regulator. European policymakers have embraced this “Brussels Effect” as the EU embarks on an ambitious new agenda to regulate the digital economy within Europe and beyond. But the extent to which EU law has shaped the digital domain globally has been overstated and should not be taken for granted. After fighting vigorously against its adoption, companies now often claim to embrace the EU’s General Data Protection Regulation (GDPR) and to adhere to it globally. In practice, however, the GDPR’s enforcement record is mixed at best, and companies’ assurances do not always hold up to closer scrutiny. The EU’s recently adopted Data Governance Act (DGA), Digital Services Act (DSA), and Digital Markets Act (DMA), and the proposals for an Artificial Intelligence Act (AIA) and Data Act (DA), are unlikely to generate wholesale Brussels Effects. Instead, companies will pick and choose if, when, and how to implement European data law globally.
February 15: Sebastian Benthall - New Computational Approaches to Information Policy Research
ABSTRACT: For information policy in the United States to keep up with advances in cloud computing, app development, and artificial intelligence, new computational approaches are needed. Policy analysis suggests that regulatory efforts based on consumer and data protection have been ineffective. Newer regulatory efforts instead aim to reduce conflicts of interest between data processors and data subjects, and to address broader financial risks rather than individual consumer harms. New research approaches are needed to evaluate these proposals. We discuss the design of fiduciary AI and the use of heterogeneous agent modeling to capture complex interactions between computation, business, society, and regulation.
February 8: Argyri Panezi, Leon Anidjar, and Nizan Geslevich Packin - The Metaverse Privacy Problem: If you built it, it will come
ABSTRACT: How realistic is the idea of a decentralized and privacy-enhancing Web 3.0? Are data governance and the other legal tools currently employed to address the information law and privacy challenges of Web 2.0 sufficient to tackle the new challenges that Web 3.0 brings about? These central questions set the stage for this Article’s inquiry: how do we (re-)conceptualize privacy challenges in Web 3.0 in general, and in the metaverse in particular? The Article begins by describing the metaverse and discussing its technological foundation and associated privacy concerns. It explains how privacy risks stem from the vast amount of data generated, gathered, and exchanged in the metaverse, comprising personal data but also data constantly tracing behavior and interactions. Most importantly, it argues that in the metaverse, data has an evolved role: it is no longer merely a valuable resource, as understood in Web 1.0 and Web 2.0; in Web 3.0, data is the infrastructure itself. The Article then introduces a multidimensional conceptualization of data exchanges in the metaverse, traced at three levels of analysis: micro, macro, and meso. To mitigate this complexity and its consequences for privacy protection, the Article makes normative suggestions, analyzing the potential benefits of a market for privacy disclosure obligations. The conclusion reflects on the long-term normative implications of the transition toward Web 3.0, revisiting the decades-old debate about whether new rules and legal approaches are needed to address legal problems in cyberspace.
February 1: Aniket Kesari - The Consumer Review Fairness Act and the Reputational Sanctions Market
ABSTRACT: How do statutes that protect consumers’ right to write reviews shape the reputational sanctions market? In 2016, Congress passed the Consumer Review Fairness Act (CRFA), commonly championed as the “right to Yelp” law. The law makes contract provisions that prevent honest consumer reviews unenforceable, but creates carve-outs for abusive, libelous, or false/misleading reviews. A number of states, however, have similar laws that do not provide such carve-outs. These laws arguably create an important avenue for consumers to impose reputational sanctions on bad businesses, possibly as a substitute for legal sanctions. At the same time, bad-faith consumers and competitors can impose costs on businesses by posting dishonest, troll, or unfair reviews. This Article explores how the CRFA and similar state laws affect this reputational sanctions market. Using a difference-in-differences design, I show that the Illinois law, which provides no carve-outs, was associated with a small increase in negative reviews (30/month) and a small decrease in troll-like reviews (1.5/month), though these results were not statistically significant. A computational text analysis leveraging sentiment analysis and embedding regression reveals no evidence that the content of reviews was altered by the CRFA.
January 25: Michelle Shen - The Brussels Effect as a ‘New-School’ Regulation Globalizing Democracy: A Comparative Review of the CLOUD Act and the European-United States Data Privacy Framework
ABSTRACT: Cross-border data sharing is increasingly relevant for state purposes, entangling questions of balancing individuals’ data privacy rights with state interests. The CLOUD Act’s limited extraterritorial reach has prevented United States (U.S.) law enforcement from accessing data managed by U.S.-based companies but stored on European soil. The primary issue this Note addresses is whether the EU-U.S. Data Privacy Framework (DPF), as a bilateral agreement between the EU and the U.S. incorporating U.S. laws as authority, may expand the extraterritorial reach of U.S. law enforcement to obtain data while maintaining privacy protection as a fundamental right. This Note asserts that the EU-U.S. DPF has three main benefits compared to the CLOUD Act. First, the EU-U.S. DPF can overcome the jurisdictional and comity issues the CLOUD Act faced in enabling U.S. law enforcement to obtain data stored in Europe because it is a bilateral agreement rather than a federal statute. Second, the EU-U.S. DPF is easier to implement domestically because it directly incorporates U.S. federal law and EU law and provides explicit instructions to courts. Third, the EU-U.S. DPF better protects privacy rights by giving companies and users direct pathways to challenge government demands for data. Normatively, the EU-U.S. DPF better embodies democratic ideals than the CLOUD Act because it expands claim-making in the U.S. court system to a greater number of individuals (such as EU citizens). However, neither the EU-U.S. DPF nor the CLOUD Act can independently enable claimants to actually receive remedies. Further, the EU-U.S. DPF may result in global disparities in citizens’ access to privacy rights and may force nations to compromise their sovereign values. Lastly, this Note proposes a global treaty to coordinate foreign nations’ privacy standards as a solution that upholds user privacy, enables law enforcement access to data, and honors nations’ sovereignty.
November 30: Ira Rubenstein - Artificial Speech and the First Amendment: A Skeptical View
November 16: Michal Gal - Synthetic Data: Legal Implications of the Data-Generation Revolution
November 9: Ashit Srivastava - Default Protectionist Tracing Applications: Erosion of Cooperative Federalism
November 2: María Angel - Privacy's Algorithmic Turn
October 26: Mimee Xu - Netflix and Forget
October 19: Paul Friedl - Dis/similarities in the Design and Development of Legal and Algorithmic Normative Systems: the Case of Perspective API
October 12: Katja Langenbucher - Fair Lending in the Age of AI
October 5: Ari Waldman - Gender Data in the Automated State
September 28: Elettra Bietti - The Structure of Consumer Choice: Antitrust and Utilities' Convergence in Digital Platform Markets
September 21: Mark Verstraete - Adversarial Information Law
September 14: Aniket Kesari - Do Data Breach Notification Laws Work?
April 27: Stefan Bechtold - Algorithmic Explanations in the Field
April 20: Molly de Blanc - Employing the Right to Repair to Address Consent Issues in Implanted Medical Devices
April 13: Sergio Alonso de Leon - IP law in the data economy: The problematic role of trade secrets and database rights for the emerging data access rights
April 6: Michelle Shen – Criminal Defense Strategy and Brokering Innovation in the Digital and Scientific Era: Justice for Whom?
March 30: Elettra Bietti – From Data to Attention Infrastructures: Regulating Extraction in the Attention Platform Economy
March 23: Aniket Kesari - A Computational Law & Economics Toolkit for Balancing Privacy and Fairness in Consumer Law
March 9: Gabriel Nicholas - Administering Social Data: Lessons for Social Media from Other Sectors
March 2: Jiaying Jiang - Central Bank Digital Currencies and Consumer Privacy Protection
February 23: Aileen Nielsen & Karel Kubicek - How Does Law Make Code? The Timing and Content of Open Source Responses to GDPR and CCPA
February 16: Stein - Unintended Consequences: How Data Protection Laws Leave our Data Less Protected
February 9: Stav Zeitouni - Propertization in Information Privacy
February 2: Ben Sundholm - AI in Clinical Practice: Reconceiving the Black-Box Problem
January 26: Mark Verstraete - Probing Personal Data
December 1: Ira Rubinstein & Tomer Kenneth - Health Misinformation, Online Platforms, and Government Action
November 17: Aileen Nielsen - Can an algorithm be too accurate?
November 10: Thomas Streinz - Data Capitalism
November 3: Barbara Kayondo - A Governance Framework for Enhancing Patient’s Data Privacy Protection in Electronic Health Information Systems
October 27: Sebastian Benthall - Fiduciary Duties for Computational Systems
October 20: Jiaying Jiang - Technology-Enabled Co-Regulation as a New Regulatory Approach to Blockchain Implementation
October 13: Aniket Kesari - Privacy Law Diffusion Across U.S. State Legislatures
October 6: Katja Langenbucher - The EU Proposal for an AI Act – tested on algorithmic credit scoring
September 29: Francesca Episcopo - PrEtEnD – PRivate EnforcemenT in the EcoNomy of Data
September 22: Ben Green - The Flaws of Policies Requiring Human Oversight of Government Algorithms
September 15: Ari Waldman - Misinformation Project in Need of Pithy
April 16: Tomer Kenneth — Public Officials on Social Media
April 9: Thomas Streinz — The Flawed Dualism of Facebook's Oversight Board
April 2: Gabe Nicholas — Have Your Data and Eat it Too: Bridging the Gap between Data Sharing and Data Protection
March 26: Ira Rubinstein — Voter Microtargeting and the Future of Democracy
March 19: Stav Zeitouni
March 12: Ngozi Nwanta
March 5: Aileen Nielsen
February 26: Tom McBrien
February 19: Ari Ezra Waldman
February 12: Albert Fox Cahn
February 5: Salome Viljoen & Seb Benthall — Data Market Discipline: From Financial Regulation to Data Governance
January 29: Mason Marks — Biosupremacy: Data Protection, Antitrust, and Monopolistic Power Over Human Behavior
December 4: Florencia Marotta-Wurgler & David Stein — Teaching Machines to Think Like Lawyers
November 20: Andrew Weiner
November 6: Mark Verstraete — Cybersecurity Spillovers
October 30: Ari Ezra Waldman — Privacy Law's Two Paths
October 23: Aileen Nielsen — Tech's Attention Problem
October 16: Caroline Alewaerts — UN Global Pulse
October 9: Salome Viljoen — Data as a Democratic Medium: From Individual to Relational Data Governance
October 2: Gabe Nicholas — Surveillance Delusion: Lessons from the Vietnam War
September 25: Angelina Fisher & Thomas Streinz — Confronting Data Inequality
September 18: Danny Huang — Watching IoTs That Watch Us: Studying IoT Security & Privacy at Scale
September 11: Seb Benthall — Accountable Context for Web Applications
April 29: Aileen Nielsen — "Pricing" Privacy: Preliminary Evidence from Vignette Studies Inspired by Economic Anthropology
April 22: Ginny Kozemczak — Dignity, Freedom, and Digital Rights: Comparing American and European Approaches to Privacy
April 15: Privacy and COVID-19 Policies
April 8: Ira Rubinstein — Urban Privacy
April 1: Thomas Streinz — Data Governance in Trade Agreements: Non-territoriality of Data and Multi-Nationality of Corporations
March 25: Christopher Morten — The Big Data Regulator, Rebooted: Why and How the FDA Can and Should Disclose Confidential Data on Prescription Drugs
March 4: Lilla Montanagni — Regulation 2018/1807 on the Free Flow of Non-Personal Data: Yet Another Piece in the Data Puzzle in the EU?
February 26: Stein — Flow of Data Through Online Advertising Markets
February 19: Seb Benthall — Towards Agent-Based Computational Modeling of Informational Capitalism
February 5: Jake Goldenfein & Seb Benthall — Data Science and the Decline of Liberal Law and Ethics
January 29: Albert Fox Cahn — Reimagining the Fourth Amendment for the Mass Surveillance Age
January 22: Ido Sivan-Sevilia — Europeanization on Demand? The EU's Cybersecurity Certification Regime Between the Rationale of Market Integration and the Core Functions of the State
December 4: Ari Waldman — Discussion on Proposed Privacy Bills
November 20: Margarita Boyarskaya & Solon Barocas [joint work with Hanna Wallach] — What is a Proxy and why is it a Problem?
November 13: Mark Verstraete & Tal Zarsky — Data Breach Distortions
November 6: Aaron Shapiro — Dynamic Exploits: Calculative Asymmetries in the On-Demand Economy
October 30: Tomer Kenneth — Who Can Move My Cheese? Other Legal Considerations About Smart-Devices
October 23: Yafit Lev-Aretz & Madelyn Sanfilippo — Privacy and Religious Views
October 16: Salome Viljoen — Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought
October 9: Katja Langenbucher — Responsible A.I. Credit Scoring
October 2: Michal Shur-Ofry — Robotic Collective Memory
September 25: Mark Verstraete — Inseparable Uses in Property and Information Law
September 18: Gabe Nicholas & Michael Weinberg — Data, To Go: Privacy and Competition in Data Portability
September 11: Ari Waldman — Privacy, Discourse, and Power
April 24: Sheila Marie Cruz-Rodriguez — Contractual Approach to Privacy Protection in Urban Data Collection
April 17: Andrew Selbst — Negligence and AI's Human Users
April 10: Sun Ping — Beyond Security: What Kind of Data Protection Law Should China Make?
April 3: Moran Yemini — Missing in "State Action": Toward a Pluralist Conception of the First Amendment
March 27: Nick Vincent — Privacy and the Human Microbiome
March 13: Nick Mendez — Will You Be Seeing Me in Court? Risk of Future Harm, and Article III Standing After a Data Breach
March 6: Jake Goldenfein — Through the Handoff Lens: Are Autonomous Vehicles No-Win for Users
February 27: Cathy Dwyer — Applying the Contextual Integrity Framework to Cambridge Analytica
February 20: Ignacio Cofone & Katherine Strandburg — Strategic Games and Algorithmic Transparency
January 30: Sabine Gless — Predictive Policing: In Defense of 'True Positives'
December 5: Discussion of current issues
November 28: Ashley Gorham — Algorithmic Interpellation
November 14: Mark Verstraete — Data Inalienabilities
November 7: Jonathan Mayer — Estimating Incidental Collection in Foreign Intelligence Surveillance
October 31: Sebastian Benthall — Trade, Trust, and Cyberwar
October 24: Yafit Lev-Aretz — Privacy and the Human Element
October 17: Julia Powles — AI: The Stories We Weave; The Questions We Leave
October 10: Andy Gersick — Can We Have Honesty, Civility, and Privacy Online? Implications from Evolutionary Theories of Animal and Human Communication
October 3: Eli Siems — The Case for a Disparate Impact Regime Covering All Machine-Learning Decisions
September 26: Ari Waldman — Privacy's False Promise
September 19: Marijn Sax — Targeting Your Health or Your Wallet? Health Apps and Manipulative Commercial Practices
September 12: Mason Marks — Algorithmic Disability Discrimination
May 2: Ira Rubinstein — Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
April 25: Elana Zeide — The Future Human Futures Market
April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
April 11: John Nay — Natural Language Processing and Machine Learning for Law and Policy Texts
April 4: Sebastian Benthall — Games and Rules of Information Flow
March 28: Yan Shvartzshnaider and Noah Apthorpe — Discovering Smart Home IoT Privacy Norms using Contextual Integrity
February 28: Thomas Streinz — TPP’s Implications for Global Privacy and Data Protection Law
February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
February 7: Madeline Bryd and Philip Simon — Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
January 31: Madelyn Sanfilippo — Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks
January 24: Jason Schultz and Julia Powles — Discussion about the NYC Algorithmic Accountability Bill
November 29: Kathryn Morris and Eli Siems — Discussion of Carpenter v. United States
November 15: Leon Yin — Anatomy and Interpretability of Neural Networks
November 8: Ben Zevenbergen — Contextual Integrity for Password Research Ethics?
November 1: Joe Bonneau — An Overview of Smart Contracts
October 25: Sebastian Benthall — Modeling Social Welfare Effects of Privacy Policies
October 18: Sue Glueck — Future-Proofing the Law
October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
October 4: Finn Brunton — 'The Best Surveillance System We Could Imagine': Payment Networks and Digital Cash
September 27: Julia Powles — Promises, Polarities & Capture: A Data and AI Case Study
September 20: Madelyn Rose Sanfilippo and Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
September 13: Ignacio Cofone — Anti-Discriminatory Privacy
April 26: Ben Zevenbergen — Contextual Integrity as a Framework for Internet Research Ethics
April 19: Beate Roessler — Manipulation
April 12: Amanda Levendowski — Conflict Modeling
April 5: Madelyn Sanfilippo — Privacy as Commons: A Conceptual Overview and Case Study in Progress
March 29: Hugo Zylberberg — Reframing the fake news debate: influence operations, targeting-and-convincing infrastructure and exploitation of personal data
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa will lead discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial.
March 8: Ira Rubinstein — Privacy Localism
March 1: Luise Papcke — Project on (Collaborative) Filtering and Social Sorting
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) — Privacy and Innovation
February 15: Argyri Panezi — Academic Institutions as Innovators but also Data Collectors - Ethical and Other Normative Considerations
February 8: Katherine Strandburg — Decisionmaking, Machine Learning and the Value of Explanation
February 1: Argyro Karanasiou — A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
January 25: Scott Skinner-Thompson — Equal Protection Privacy
December 7: Tobias Matzner — The Subject of Privacy
November 30: Yafit Lev-Aretz — Data Philanthropy
November 16: Helen Nissenbaum — Must Privacy Give Way to Use Regulation?
November 9: Bilyana Petkova — Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
November 2: Scott Skinner-Thompson — Recording as Heckling
October 26: Yan Shvartzshnaider — Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
October 19: Madelyn Sanfilippo — Privacy and Institutionalization in Data Science Scholarship
October 12: Paula Kift — The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform
October 5: Craig Konnoth — Health Information Equity
September 28: Jessica Feldman — the Amidst Project
September 21: Nathan Newman — UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
September 14: Kiel Brennan-Marquez — Plausible Cause
April 27: Yan Shvartzshnaider — Privacy and IoT; Rebecca Weinstein — Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken — Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gurses]
April 13: Florencia Marotta-Wurgler — Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)
April 6: Ira Rubinstein — Big Data and Privacy: The State of Play
March 30: Clay Venetis — Where is the Cost-Benefit Analysis in Federal Privacy Regulation?
March 23: Daisuke Igeta — An Outline of Japanese Privacy Protection and its Problems; Johannes Eichenhofer — Internet Privacy as Trust Protection
March 9: Alex Lipton — Standing for Consumer Privacy Harms
March 2: Scott Skinner-Thompson — Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]
February 24: Daniel Susser — Against the Collection/Use Distinction
February 17: Eliana Pfeffer — Data Chill: A First Amendment Hangover
February 10: Yafit Lev-Aretz — Data Philanthropy
February 3: Kiel Brennan-Marquez — Feedback Loops: A Theory of Big Data Culture
January 27: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 4: Solon Barocas and Karen Levy — Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton — Of Fembots and Men: Privacy Insights from the Ashley Madison Hack
October 21: Paula Kift — Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and co-author Nizan Geslevich Packin — Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser — What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin — Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé — Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson — Performative Privacy
September 9: Kiel Brennan-Marquez — Vigilantes and Good Samaritan
April 22: Helen Nissenbaum — 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken — From Collection to Use Regulation? A Comparative Perspective
March 11: Rebecca Weinstein (Cancelled)
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online
Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)
Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead
October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue
September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
January 29: Organizational meeting
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day
March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
March 6: Mariana Thibes — Privacy at Stake, Challenging Issues in the Brazilian Context
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data