April 27: Stefan Bechtold - Algorithmic Explanations in the Field
ABSTRACT: The increasing use of algorithms in legal and economic decision-making has led to calls for a "right to explanation" for decision subjects. Such explanations are desired in particular where decision-making algorithms are opaque, for example with machine learning or artificial intelligence. Even a specified right to explanation leaves open many questions, in particular how decisions made by black-box algorithms can and should be explained. In this project, we propose an organizing framework for explanations of algorithmic decision-making. Drawing on this framework, we design a controlled field experiment to produce evidence on how decision subjects perceive and respond to different types of explanations. We thereby analyze the possible behavioral effects of a right to explanation and investigate what types of explanations might be legally and ethically useful for human decision subjects.
April 20: Molly de Blanc - Employing the Right to Repair to Address Consent Issues in Implanted Medical Devices
ABSTRACT: I'll be discussing a part of my thesis project. I'm interested in the role repair (inclusive of modification and customization) could play in addressing inadequacies of consent for empowering implanted medical device patients (those with, e.g., pacemaker-defibrillators, neurostimulation/neuromodulation devices, optical implants, and various diabetes, blood glucose monitoring, and insulin control devices). These consent issues relate to the limits implanted medical device patients face in retracting or modifying consent to their devices once consent is given. In addition to a software-focused discussion on repair, I advocate for people to control their data for their own sake and for the sake of others. This project takes inspiration from the #WereNotWaiting diabetes movement, OpenAPS and Nightscout, the work of advocates like Hugo Campos, Dana Lewis, Karen Sandler, and Ben West, and lots of anecdotes.
April 13: Sergio Alonso de Leon - IP law in the data economy: The problematic role of trade secrets and database rights for the emerging data access rights
ABSTRACT: Who has the data? Who gets access to it? The way in which we answer these questions is profoundly consequential. In the fast-evolving context of the data economy, we should deliberate on how we relate to private claims on data and information in our society. There is no specific property framework for data, yet existing instruments of IP law can help to define a zone of exclusivity. This piece raises a series of questions for this much-needed conversation: what data means for the law; why the law should foster data sharing, sometimes overcoming the reluctance of data holders; and whether claims over data based on instruments in the orbit of IP are legitimate. I defend two main ideas: data is a new, different ‘object of the law’; and trade secrets and sui generis database rights can be instrumentalised to erect ‘legal walls’ around data.
April 6: Michelle Shen – Criminal Defense Strategy and Brokering Innovation in the Digital and Scientific Era: Justice for Whom?
ABSTRACT: As the use of science and technology increases in the US criminal justice system (CJS), scholars and CJS practitioners debate whether it promotes public safety or contributes to the mass incarceration of low-income communities of color. In assessing the impact of technological advances on the CJS, empirical studies typically focus on law enforcement or on government. This paper offers the perspective of public defender offices through a mixed-methods network analysis of the DNA Unit and Digital Forensics Unit (DFU) at The Legal Aid Society. It finds that: 1) public defender offices resemble collegial organizational structures, which allow for variance in structure; 2) differences in the type of science correspond to differences in each unit's optimal pattern of legal and scientific advice exchange for distributing knowledge; and 3) greater levels of reciprocal scientific advice-seeking among attorneys correspond to greater levels of reciprocal legal advice-seeking. While attorneys specializing in the use of DNA science (DNA Unit) depended on each other for both legal and scientific advice, attorneys specializing in digital forensics (DFU) primarily depended on each other for legal advice and on on-site technologists for scientific advice. DNA Unit attorneys also exchanged greater levels of advice with each other overall. This paper theorizes that this difference stems from the differences between the development of DNA science (well established in the academic literature) and that of digital forensics (a constantly evolving field). Furthermore, the positive correlation between reciprocal scientific advice-seeking and legal advice-seeking in the DNA Unit implies that as technology and science are integrated into the legal system, the epistemology of the legal constructions of truth and justice itself necessarily changes. However, most public defender offices do not have adequate resources to afford on-site scientists and technologists. Therefore, the increased use of technology combined with the deprivation of defense counsel resources raises serious concerns about amplifying mass incarceration. Michelle Shen is a current law student at The University of Chicago. This project was conducted as her MSc Sociology dissertation at The University of Oxford. She will be working at the Center for Democracy and Technology and will return to LAS this summer at the Digital Forensics Unit (the subject of this project). She aims to contribute to socio-legal research on networks, technology, and law throughout her legal career. Accordingly, she would love any feedback on how to tighten up her research and on any relevant questions in the law it may raise.
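A minimal sketch of the kind of reciprocity measure the network analysis refers to, using networkx on hypothetical advice-seeking edges (the edge lists, node names, and helper function are my illustration, not the paper's data or code):

```python
# Minimal sketch (not the author's code): measuring reciprocity in
# hypothetical directed advice-seeking networks with networkx.
import networkx as nx

# Hypothetical directed edges: (seeker, adviser).
scientific_advice = [("att_A", "att_B"), ("att_B", "att_A"),
                     ("att_C", "tech_1"), ("att_D", "tech_1")]
legal_advice = [("att_A", "att_B"), ("att_B", "att_A"),
                ("att_C", "att_D"), ("att_D", "att_C")]

def reciprocity(edges):
    """Share of directed ties that are reciprocated."""
    g = nx.DiGraph(edges)
    return nx.reciprocity(g)

print("scientific advice reciprocity:", reciprocity(scientific_advice))
print("legal advice reciprocity:", reciprocity(legal_advice))
```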
March 30: Elettra Bietti – From Data to Attention Infrastructures: Regulating Extraction in the Attention Platform Economy
ABSTRACT: Rethinking the regulation of advertising-based platform business models such as Facebook/Meta and Google/Alphabet, which I call attention platforms, is an urgent task. Two decades of regulatory apathy and intellectual fragmentation have produced siloed approaches to the regulation of data and content that leave many urgent political, economic and environmental issues unaddressed. In this paper, I argue that current approaches to regulating data and datafication – in particular approaches that regulate personal data or approaches that focus on social data – fail to address the most pervasive forms of extraction and harm in the attention platform economy: those that stem from addiction, over-consumption, virality, and fragmentation of the public sphere. Data governance is structurally unequipped to prioritize the emergence of just attention infrastructures. Shifting priorities, I argue, requires a move toward horizontal power to shape attention infrastructures, a focus on more just advertising and funding systems, and experimentation with attention minimization measures, that is, friction and incentives against attention capture.
March 23: Aniket Kesari - A Computational Law & Economics Toolkit for Balancing Privacy and Fairness in Consumer Law
ABSTRACT: Both law and computer science are concerned with developing frameworks for protecting privacy and ensuring fairness. Both fields often consider these two values separately and develop legal doctrines and machine learning metrics in isolation from one another. Yet privacy and fairness values can conflict, especially when considered alongside the utility of an algorithm. The computer science literature often treats this problem as an “impossibility theorem”: we can have privacy or fairness, but not both. Legal doctrine is similarly constrained by a focus on the inputs to a decision: did the decisionmaker intend to use information about protected attributes? Despite these challenges, there is a way forward. The law has integrated economic frameworks to consider tradeoffs in other domains, and a similar approach can clarify policymakers’ thinking around balancing utility, privacy, and fairness. This piece illustrates this idea by bridging the law and computer science literatures, using a law & economics lens to formalize the notion of a Privacy-Fairness-Utility frontier, and demonstrating this framework on a consumer lending dataset. An open-source Python software library and GUI will be made available to assist regulators and academics in conducting algorithmic audits using this framework.
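As a rough illustration of what tracing a Privacy-Fairness-Utility frontier can look like in code, the sketch below sweeps a noise scale over synthetic lending-style data and records accuracy (utility) and a demographic-parity gap (fairness) at each point. The noise injection is a crude stand-in for a privacy mechanism; this is my illustration, not the paper's library or method:

```python
# Minimal sketch: one point on a privacy-fairness-utility frontier per noise level.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # hypothetical protected attribute
income = rng.normal(50 + 10 * group, 15, n)   # synthetic feature correlated with group
default = (income + rng.normal(0, 10, n) < 55).astype(int)
X, y = income.reshape(-1, 1), 1 - default     # predict repayment

def frontier_point(noise_scale):
    """Fit on noised features (privacy proxy); return (utility, fairness gap)."""
    X_priv = X + rng.normal(0, noise_scale, X.shape)
    clf = LogisticRegression().fit(X_priv, y)
    pred = clf.predict(X)
    utility = (pred == y).mean()                                   # accuracy
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())   # demographic parity gap
    return utility, gap

for scale in [0, 5, 15, 40]:
    u, g = frontier_point(scale)
    print(f"noise={scale:>3}: utility={u:.3f}, parity gap={g:.3f}")
```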
March 9: Gabriel Nicholas - Administering Social Data: Lessons for Social Media from Other Sectors
ABSTRACT: As the problems of competition, misinformation, and heightened political polarization grow more salient for social media companies, there is increased pressure on these companies to grant certain outsiders access to social data. Granting access has several benefits: it can aid understanding of the dynamics of online communication, further social scientific research, develop robust independent analysis of algorithmic decision-making, and guide effective regulation. Yet access is not without controversy. Privacy advocates raise concerns over the sensitivity of social data held by companies and the spotty track record of governments and researchers in using access to such data responsibly. In addition, companies that gather and hold this data raise concerns over trade secrecy: social data is a key component of how companies maintain a competitive advantage, allowing them to develop better algorithmic products and reap the benefits of network effects. Navigating the Scylla and Charybdis of privacy and trade secrecy poses a notable challenge to realizing the benefits of social data sharing. This year, I am working on two projects that relate to giving researchers access to social media data. The first is an inchoate law review article with Salome Viljoen and Chris Morten on what lessons can be learned from the data governance mechanisms around sharing medical data. The other is a research paper I am working on at the Center for Democracy & Technology about lessons social media can learn from how other sectors share data with researchers. These are very early-stage projects, so I look forward to your feedback and inspiration!
March 2: Jiaying Jiang - Central Bank Digital Currencies and Consumer Privacy Protection
February 23: Aileen Nielsen & Karel Kubicek - How Does Law Make Code? The Timing and Content of Open Source Responses to GDPR and CCPA
ABSTRACT: How does law make its way into code? Concrete and richly textured examples of how law creates or modifies computer code could provide crucial feedback to legislators and policymakers on the dynamics of how and when law shapes computer code. With the advent of recent data protection laws alongside substantial gains in the size, scope, and relevance of open source software, there is a novel opportunity to study these dynamics. We will present an early-stage study that constructs a novel data set to address questions about the behavioral and temporal dynamics of how open source communities have responded to recent data protection statutes.
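One plausible first step toward such a data set, sketched here under my own assumptions (a hypothetical repository path and simple keyword matching on commit subjects), is to time-stamp commits that mention GDPR or CCPA relative to each law's effective date; the authors' actual pipeline may differ:

```python
# Minimal sketch: flag commits mentioning GDPR/CCPA and compute timing
# relative to the laws' effective dates. REPO is a hypothetical path.
import re
import subprocess
from datetime import date

REPO = "/path/to/some/open-source-repo"   # hypothetical
LAW_DATES = {"GDPR": date(2018, 5, 25), "CCPA": date(2020, 1, 1)}

log = subprocess.run(
    ["git", "-C", REPO, "log", "--pretty=%H|%ad|%s", "--date=short"],
    capture_output=True, text=True, check=True).stdout

for line in log.splitlines():
    sha, day, subject = line.split("|", 2)
    for law, effective in LAW_DATES.items():
        if re.search(rf"\b{law}\b", subject, re.IGNORECASE):
            delta = (date.fromisoformat(day) - effective).days
            print(f"{sha[:8]} {day} mentions {law} ({delta:+d} days from effective date)")
```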
February 16: Stein - Unintended Consequences: How Data Protection Laws Leave our Data Less Protected
ABSTRACT: Over the past two decades, regulators have enacted hundreds of measures to prevent and mitigate the harms of data breaches. Yet the frequency, scope, and harm of data breaches continue to rise steadily. Most policymakers and commentators agree: the law doesn’t hold firms sufficiently accountable when they fail to protect their users’ data, resulting in private under-investment in data security. This article presents an alternative explanation for the apparent failure of data breach laws. Rather than failing to induce the right _level_ of investment, this paper argues that current laws encourage the wrong _allocation_ of resources.
This explanation builds on three observations:
First, the ability to detect breaches depends on the relative sophistication of the attacker and defender. On average, a breach eludes detection for over half a year, not counting the unknown number of never-discovered breaches. Current laws focus on preventing breaches entirely or mitigating the costs of discovered breaches. The duration and detectability of breaches—major areas of focus in cybersecurity practice—are conspicuously absent from the literature on data policy.

Second, the shift to “cloud” and mobile computing in the late ‘00s triggered a wave of specialization and reliance on community-maintained infrastructure. The most severe failures of data protection in the past decade originated from failures in these communal infrastructures. Policy and scholarship almost exclusively focus on the efforts and discoveries of individual firms; when a firm discovers a breach, they assume responsibility for mitigating it. The increased cost of taking responsibility for shared infrastructure exacerbates an already problematic collective action problem.

Finally, there is a massive labor shortage in information security. The recent exponential growth of data use creates a similar growth in the need for trained security engineers. Yet no vocational or university training pipeline exists, and apprenticeship-based training has failed to meet demand. For some important classes of data protection work, the number of experienced security experts is essentially fixed.

These observations combine to paint a dire picture. The current legal regime directs much of the limited supply of skilled security expertise _away_ from the parts of the internet most in need of protection, leaving common infrastructure more likely to face undetectable attacks. Consumers may never find out how they lost control of their data, leaving no opportunity for mitigation or redress. While these dynamics are new to the internet, they are not unique to data and software. Drawing on lessons from environmental and aircraft safety regulations, this paper suggests that augmenting current data breach laws with strategic infrastructure investment and public-private partnerships could mitigate these failures.
February 9: Stav Zeitouni - Propertization in Information Privacy
ABSTRACT: After lying dormant for some years, the information privacy propertization debates are upon us again. In both the past and the present, much is made in these discussions of the question of whether information privacy should be propertized. By contrast, I argue that, given the way information privacy has been legislated, propertization is, in several important ways, already a descriptive fact. Supporting this claim requires an exploration of what legislated information privacy entails, along with an elucidation of the meaning of propertization. Therefore, I begin by examining three prominent data protection laws (the EU’s GDPR, California’s CCPA, and China’s PIPL), focusing on two of the main loci of these kinds of laws: individual control and personal data. From here it is easier to explore the question of what it means for an area of law to be “propertized”. In particular, I identify three facets of propertization that show that information privacy is well on its way to being propertized. First, propertization occurs when the language and discourse of ownership and property become more prevalent in relation to information privacy. Calls for protecting “my data” or, even more pointedly, for assigning property rights in data, are the most relevant examples of this discourse. It is sometimes difficult, however, to gauge what kind of entitlements these calls contemplate. To clarify them, and to assess whether they comport with typical property entitlements, another level of analysis is needed. The second facet wrestles with several theories regarding the structure and core of property (bundle of rights, new essentialism, pluralism, etc.). Even without settling this debate, it is possible to identify several general entitlements which are commonly (though not always) found in property and which are increasingly found in information privacy as well. Finally, the third facet of propertization draws connections between specific entitlements as legislated in data protection laws and particular property entitlements legislated in other laws. Through a series of concrete examples I hope to show how a particular kind of control links property entitlements to existing data protection entitlements.
February 2: Ben Sundholm - AI in Clinical Practice: Reconceiving the Black-Box Problem
ABSTRACT: Today, data is being collected and analyzed on a massive scale to enhance healthcare and medicine. This phenomenon affords many benefits, but it also poses significant challenges. The so-called “black-box problem” is among the most serious of such challenges. The black-box problem refers to the fact that the opaque nature of some artificial intelligence (AI) systems means clinicians cannot fully understand the cascade of calculations producing certain outputs. When clinicians rely on AI outputs to treat patients, but they cannot fully understand how the AI technology arrived at the conclusion it did, who is responsible when an AI recommendation results in an injury to a patient? I propose that we consider whether common enterprise liability (CEL) can address the concerns raised by the black-box problem. Although there are several available justifications for CEL (e.g., placing liability in the hands of the cheapest cost avoider), I propose a rational justification for applying CEL to the black-box problem. My proposal draws heavily from Alan Gewirth’s principle of generic consistency. I intend my suggestions to be conversation starters rather than definitive claims.
January 26: Mark Verstraete - Probing Personal Data
ABSTRACT: Personal data is an essential concept within privacy law. As Schwartz and Solove suggest in their groundbreaking paper on personal information, the boundaries of privacy are fixed by personal data. That is, personal data is required for privacy claims and, by contrast, without personal data any claims are not “privacy” claims. While personal data marks the boundaries of privacy and offers a triggering condition for privacy claims, the boundaries of personal data themselves remain contested. We argue that merely analyzing the connection between a person and information does not capture what is unique about data. Instead, we argue that personal data as a coherent concept depends on how data is used. To account for the role of use, we introduce the philosophical concept of separability in order to make determinations about which uses are connected to the person and which are not. We argue that separability provides a desirable foundation for crafting a theory of personal data that captures individual interests in data. Separability marks an improvement both conceptually and normatively over earlier theories of personal data. Conceptually, separability allows us to better identify when a person’s interests are at stake, which is the foundational question for determining when information should be treated as personal data. Normatively, separability provides a rigorous philosophical foundation for privacy law that better incorporates autonomy and dignity values, thus better offsetting modern privacy harms like manipulation.
December 1: Ira Rubinstein & Tomer Kenneth - Health Misinformation, Online Platforms, and Government Action
ABSTRACT: Everyone – from Dr. Fauci to anti-vaxxers – agrees that health misinformation is socially undesirable. Health misinformation is undesirable because it affects people and their health choices, and because those choices can lead to healthy lives or to illness and death – for individuals and societies. Health misinformation is a form of speech; it uses whatever speech technology is most prevalent at a given time and place. It is therefore not surprising that health misinformation about COVID-19 spread through social media, and many public health experts soon regarded it as an ‘infodemic.’ In this paper, we adopt the standard definition of misinformation and apply it only to health-related claims. Our focus is on occurrences of health misinformation on online platforms – that is, on ‘online health misinformation,’ false information about health issues disseminated online. To distinguish information from misinformation – false claims from true ones – we rely on a long-standing tradition in democratic states of deferring to the best available science-based medical knowledge. Online health misinformation has posed a serious problem in recent years, one that peaked during the pandemic. Health misinformation spread online about every aspect of the pandemic – the source of the disease, its nature, the efficacy of mitigating actions, the safety of drugs. Online platforms and governments soon realized that it is impossible to ignore this problem and that not acting against health misinformation is irresponsible. We explore what online platforms and the government do, can do, and should do to mitigate the problem of health misinformation.
November 17: Aileen Nielsen - Can an algorithm be too accurate?
ABSTRACT: Much research on social and legal concerns about the increasing use of algorithms has focused on ways to detect or prevent algorithmic misbehavior or mistake. However, there are also harms that result when algorithms perform too well rather than too poorly. This paper makes the case that significant harms can occur because algorithms are too accurate and proposes a novel conceptual tool for reining in such harms: accuracy bounding. Accuracy bounding would limit the performance of algorithms with respect to their information-producing qualities. This technique could provide an intuitive and flexible means to address concerns arising from undesirably accurate algorithms. Accuracy bounding could be complementary to many existing proposed governance and accountability tools for algorithms, such as fairness audits and cyber-security best practices. It also represents a new version of existing tactics in law, policy, and technical disciplines, all of which have previously traded off performance capabilities against other values. Thus, accuracy bounding could provide a useful addition to the proposed regulatory toolbox of methods to address concerns about the increasing use of algorithms.
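To make the idea concrete, here is one hypothetical way accuracy bounding could be operationalized for a binary classifier: randomly flip a fraction of otherwise-correct predictions so that measured accuracy cannot exceed a chosen ceiling. This is my illustration of the concept, not the paper's proposal:

```python
# Minimal sketch (hypothetical operationalization, not the paper's method):
# degrade binary predictions so expected accuracy does not exceed a ceiling.
import numpy as np

def bound_accuracy(preds, ceiling, rng=np.random.default_rng(0)):
    """Randomly flip predictions so that perfect accuracy drops to `ceiling`."""
    preds = np.asarray(preds)
    # Flipping with probability p turns accuracy a into a*(1-p) + (1-a)*p;
    # for a = 1.0 the flip rate that yields `ceiling` is 1 - ceiling.
    flip_rate = 1.0 - ceiling
    flips = rng.random(preds.shape) < flip_rate
    return np.where(flips, 1 - preds, preds)

perfect = np.array([1, 0, 1, 1, 0, 0, 1, 0] * 100)  # imagine these match ground truth
capped = bound_accuracy(perfect, ceiling=0.8)
print("share unchanged:", (capped == perfect).mean())
```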
November 10: Thomas Streinz - Data Capitalism
November 3: Barbara Kayondo - A Governance Framework for Enhancing Patient’s Data Privacy Protection in Electronic Health Information Systems
ABSTRACT: With advances in information technologies, most medical facilities have turned to electronic health systems in order to serve their patients better. However, this innovation has come with a shortcoming: how to effectively protect the privacy of patients' data. Several frameworks, laws, and best practices exist, but the literature indicates that they are fragmented. There is a need to invest in interdisciplinary research that integrates legal requirements, information security governance frameworks, best practices, and methodological requirements into one comprehensive privacy governance framework. Moreover, an effective governance framework incorporates requirements that affect the creation, management, and disposition of all organisational information, including privacy requirements. Therefore, in this study we propose a governance framework for enhancing the protection of patients' data privacy in electronic health information systems. The study will adopt a pragmatist research paradigm with an abductive approach and a design research methodology. We will review the literature on legal requirements, technical measures, standards, and governance related to privacy protection. Our sample size will be 70 respondents. Data will be collected from both primary and secondary sources using interview guides, questionnaires, and document review guides. Qualitative and quantitative data will be analysed using NVivo and the Statistical Package for the Social Sciences, respectively.
October 27: Sebastian Benthal - Fiduciary Duties for Computational Systems
ABSTRACT: A fiduciary duty is a legally recognized requirement that a trusted agent act with loyalty and care toward the principal who employs them. Fiduciary duties apply to a broad range of professional roles, including experts such as doctors and lawyers, as well as investment advisors and corporate executives. Recent legal scholarship, policy proposals, and court cases have applied fiduciary principles to computational systems. We consider the rationale for fiduciary duties and their applicability to digital services and AI, and find them to be appropriate legal tools for addressing the fairness, accountability, transparency, and justice of these systems. We then operationalize fiduciary duties in computational terms. We model fiduciary duty compliance for information flows between parties, including confidentiality and disclosure requirements. We also address the duties of loyalty and care more broadly, proposing considerations for Loyal and Careful AI.
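A minimal sketch of what operationalizing fiduciary duties over information flows could look like, under my own simplifying assumptions (flows as sender/recipient/attribute/purpose tuples checked against toy confidentiality and disclosure rules); this is an illustration, not the authors' model:

```python
# Minimal sketch: check hypothetical information flows against simple
# confidentiality and disclosure rules owed by an "agent" to a "principal".
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    attribute: str
    purpose: str

CONFIDENTIAL = {"health_status", "location_history"}      # may not leave the agent
MUST_DISCLOSE = {"data_breach", "conflict_of_interest"}    # must reach the principal

def violates_confidentiality(flow: Flow) -> bool:
    return flow.attribute in CONFIDENTIAL and flow.recipient not in {"principal", "agent"}

def satisfies_disclosure(flows: list[Flow], attribute: str) -> bool:
    return any(f.attribute == attribute and f.recipient == "principal" for f in flows)

flows = [Flow("agent", "ad_network", "location_history", "targeting"),
         Flow("agent", "principal", "data_breach", "notification")]

print("confidentiality violations:", [f for f in flows if violates_confidentiality(f)])
print("breach disclosed:", satisfies_disclosure(flows, "data_breach"))
```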
October 20: Jiaying Jiang - Technology-Enabled Co-Regulation as a New Regulatory Approach to Blockchain Implementation
ABSTRACT: Blockchain technology has great potential to reshape the financial industry. However, the existing policy and regulatory regimes fail to provide a supportive environment for blockchain technology to fulfill its potential. In this article, I propose technology-enabled co-regulation as a new approach to blockchain implementation, especially in the financial markets. This approach has two distinctive elements: a collaborative environment and a technology-enabled mechanism. A collaborative environment consists of regulatory and industry sandboxes in which regulators and industry representatives can experiment with novel ideas. A technology-enabled mechanism is empowered by regulatory technologies (RegTech) and supervisory technologies (SupTech) that support compliance with regulatory and reporting requirements and facilitate supervisory obligations. This technology-enabled co-regulation can help to achieve policy and regulatory goals: a fair and efficient market, financial stability, consumer and investor protection, law enforcement efficiency, and, most importantly, technology innovation. Technology-enabled co-regulation is preferable to traditional command-and-control regulation and self-regulation. Its collaborative and technological elements are also more advanced than those of simple co-regulation. To reach this conclusion, I conducted an impact assessment of proposed regulatory options. The impact assessment consists of five analytic steps, asking the following questions: What problems have emerged from existing policies and regulations? What are the objectives of the proposed regulations? What are the regulatory options? What are the possible impacts? How do the options compare?
October 13: Aniket Kesari - Privacy Law Diffusion Across U.S. State Legislatures
ABSTRACT: How does privacy legislation diffuse across state legislatures? Legal scholars have long argued that the U.S. maintains a "privacy federalism" in which the states take the lead on experimenting with different privacy legislation. However, new literature in political science and computational social science argues that state legislatures copy legislation from interest groups and other legislatures. State legislatures vary in terms of their resources: some meet frequently and have full-time staffs, while others meet for only a few weeks each year with little staffing support. This study examines the extent to which states experiment with novel privacy laws or simply piggyback off other legislatures. Combining a comprehensive dataset of state privacy legislation with existing datasets on company privacy policies, this study then looks at whether companies change compliance strategies across different states.
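A small sketch of one way bill-to-bill copying could be measured, using TF-IDF cosine similarity over hypothetical bill excerpts (the excerpts are my own paraphrases for illustration, not the study's data or method):

```python
# Minimal sketch: pairwise text similarity between hypothetical bill excerpts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

bills = {
    "CA_excerpt": "A consumer shall have the right to request that a business delete personal information.",
    "VA_excerpt": "A consumer has the right to request that a controller delete personal data.",
    "Novel_bill": "The commission shall study biometric surveillance in public housing.",
}

names = list(bills)
tfidf = TfidfVectorizer().fit_transform(bills.values())
sim = cosine_similarity(tfidf)

for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i < j:
            print(f"{a} vs {b}: {sim[i, j]:.2f}")   # higher = more textual reuse
```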
October 6: Katja Langenbucher - The EU Proposal for an AI Act – tested on algorithmic credit scoring
ABSTRACT: I will discuss how algorithmic credit scoring would work under the EU’s proposal for an AI Act of 2021. The proposal understands ML-based creditworthiness assessments as “high risk” due to the risks they present for fundamental human rights. Under the new approach, this triggers a host of compliance requirements. I am doubtful whether these requirements (which are drafted in the spirit of product regulation and risk management) are fit for dealing with risks to human rights.
September 29: Francesca Episcopo - PrEtEnD – PRivate EnforcemenT in the EcoNomy of Data
ABSTRACT: Big tech companies use aggregated personal data for a variety of business-related activities – provision of services, data analytics, profiling, commercial operation in data markets, etc. Legislation has been progressively adopted, updated, and coordinated all over the world to keep up with the needs of the data-driven economy, as well as its risks and challenges at the national, regional, and international level. A multiform and complex ‘data law’ is thus taking shape from the overlap of different general and sector-specific measures, e.g., in antitrust, IP, consumer and data protection law. All these instruments set various constraints on the activities of data-savvy companies, and yet gaps in both regulation and enforcement leave major social and economic issues open. The present project aims to analyze and address one of these ‘blind spots’, namely the Tech Giants’ capacity to escape civil liability in cases of unauthorized use of personal data. Indeed, across various jurisdictions, the following scenario often occurs: an unauthorized use of personal data is discovered, on some occasions a public-law response is taken, and data subjects seek to have the infringement of their privacy and data protection rights made good, either on an individual or a collective basis. However, procedural or substantive requirements – related, for instance, to the lack of provable ‘concrete and individualized harm’ or the difficulty of ‘quantifying the recoverable damages’ – often stand in their way and prevent them from receiving judicial protection and from acting as private attorneys general of privacy and data protection rules. The project aims to address these problems from a comparative and multidisciplinary perspective. In particular, it aims to (i) analyze the legal constraints limiting private parties’ capacity to hold data companies accountable for their breaches, and their relevance in terms of legal and economic trends; (ii) clarify how these problems have radiating effects globally (e.g., affecting the enforceability of claims in private international law disputes); (iii) identify which solutions could be drafted both de iure condito and de iure condendo to allow the desired internalization of costs and judicial protection; and (iv) critically assess their desirability and expected efficacy, based on their capacity to achieve the desired results (internalization of costs and judicial protection), as well as on their compatibility with other fundamental aspects of data law, economic law, and fundamental rights protection.
September 22: Ben Green - The Flaws of Policies Requiring Human Oversight of Government Algorithms
ABSTRACT: Policymakers around the world are increasingly considering how to prevent government uses of algorithms from producing injustices. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. However, the functional quality of this regulatory approach has not been thoroughly interrogated. In this article, I survey 40 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, human oversight policies legitimize government use of flawed and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a more rigorous approach for determining whether and how to incorporate algorithms into government decision-making. First, policymakers must critically consider whether it is appropriate to use an algorithm at all in a specific context. Second, before deploying an algorithm alongside human oversight, vendors or agencies must conduct preliminary evaluations of whether people can effectively oversee the algorithm.
September 15: Ari Waldman - Misinformation Project in Need of Pithy Title
ABSTRACT: The legal literature on disinformation is primarily focused on how the law should respond to the problem of misleading information cascading across the media ecosystem. The related social science literature focuses on how misinformation spreads, including the role of social networks, platform design, and the political economy of informational capitalism, and on its goals – namely, to sow discord, erode trust, and undermine democratic institutions. I think the goal of some disinformation is far more specific and legal in nature. The sociolegal literature has not explored the extent to which legal institutions are already vulnerable to disinformation and how disinformation may be specifically designed to achieve legal advantage for the ideological partisans who spread it. Through a case study of disinformation from right-wing sources about abortion, transgender rights, religious liberty, and election integrity, this project identifies legally relevant trends in disinformation campaigns specifically intended to take advantage of legal doctrines that are agnostic about facts.
April 16: Tomer Kenneth — Public Officials on Social Media
April 9: Thomas Streinz — The Flawed Dualism of Facebook's Oversight Board
April 2: Gabe Nicholas — Have Your Data and Eat it Too: Bridging the Gap between Data Sharing and Data Protection
March 26: Ira Rubinstein — Voter Microtargeting and the Future of Democracy
March 19: Stav Zeitouni
March 12: Ngozi Nwanta
March 5: Aileen Nielsen
February 26: Tom McBrien
February 19: Ari Ezra Waldman
February 12: Albert Fox Cahn
February 5: Salome Viljoen & Seb Benthall — Data Market Discipline: From Financial Regulation to Data Governance
January 29: Mason Marks — Biosupremacy: Data Protection, Antitrust, and Monopolistic Power Over Human Behavior
December 4: Florencia Marotta-Wurgler & David Stein — Teaching Machines to Think Like Lawyers
November 20: Andrew Weiner
November 6: Mark Verstraete — Cybersecurity Spillovers
October 30: Ari Ezra Waldman — Privacy Law's Two Paths
October 23: Aileen Nielsen — Tech's Attention Problem
October 16: Caroline Alewaerts — UN Global Pulse
October 9: Salome Viljoen — Data as a Democratic Medium: From Individual to Relational Data Governance
October 2: Gabe Nicholas — Surveillance Delusion: Lessons from the Vietnam War
September 25: Angelina Fisher & Thomas Streinz — Confronting Data Inequality
September 18: Danny Huang — Watching IoTs That Watch Us: Studying IoT Security & Privacy at Scale
September 11: Seb Benthall — Accountable Context for Web Applications
April 29: Aileen Nielsen — "Pricing" Privacy: Preliminary Evidence from Vignette Studies Inspired by Economic Anthropology
April 22: Ginny Kozemczak — Dignity, Freedom, and Digital Rights: Comparing American and European Approaches to Privacy
April 15: Privacy and COVID-19 Policies
April 8: Ira Rubinstein — Urban Privacy
April 1: Thomas Streinz — Data Governance in Trade Agreements: Non-territoriality of Data and Multi-Nationality of Corporations
March 25: Christopher Morten — The Big Data Regulator, Rebooted: Why and How the FDA Can and Should Disclose Confidential Data on Prescription Drugs
March 4: Lilla Montanagni — Regulation 2018/1807 on the Free Flow of Non Personal Data: Yet Another Piece in the Data Puzzle in the EU?
February 26: Stein — Flow of Data Through Online Advertising Markets
February 19: Seb Benthall — Towards Agent-Based Computational Modeling of Informational Capitalism
February 5: Jake Goldenfein & Seb Benthall — Data Science and the Decline of Liberal Law and Ethics
January 29: Albert Fox Cahn — Reimagining the Fourth Amendment for the Mass Surveillance Age
January 22: Ido Sivan-Sevilia — Europeanization on Demand? The EU's Cybersecurity Certification Regime Between the Rationale of Market Integration and the Core Functions of the State
December 4: Ari Waldman — Discussion on Proposed Privacy Bills
November 20: Margarita Boyarskaya & Solon Barocas [joint work with Hanna Wallach] — What is a Proxy and why is it a Problem?
November 13: Mark Verstraete & Tal Zarsky — Data Breach Distortions
November 6: Aaron Shapiro — Dynamic Exploits: Calculative Asymmetries in the On-Demand Economy
October 30: Tomer Kenneth — Who Can Move My Cheese? Other Legal Considerations About Smart-Devices
October 23: Yafit Lev-Aretz & Madelyn Sanfilippo — Privacy and Religious Views
October 16: Salome Viljoen — Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought
October 9: Katja Langenbucher — Responsible A.I. Credit Scoring
October 2: Michal Shur-Ofry — Robotic Collective Memory
September 25: Mark Verstraete — Inseparable Uses in Property and Information Law
September 18: Gabe Nicholas & Michael Weinberg — Data, To Go: Privacy and Competition in Data Portability
September 11: Ari Waldman — Privacy, Discourse, and Power
April 24: Sheila Marie Cruz-Rodriguez — Contractual Approach to Privacy Protection in Urban Data Collection
April 17: Andrew Selbst — Negligence and AI's Human Users
April 10: Sun Ping — Beyond Security: What Kind of Data Protection Law Should China Make?
April 3: Moran Yemini — Missing in "State Action": Toward a Pluralist Conception of the First Amendment
March 27: Nick Vincent — Privacy and the Human Microbiome
March 13: Nick Mendez — Will You Be Seeing Me in Court? Risk of Future Harm, and Article III Standing After a Data Breach
March 6: Jake Goldenfein — Through the Handoff Lens: Are Autonomous Vehicles No-Win for Users
February 27: Cathy Dwyer — Applying the Contextual Integrity Framework to Cambridge Analytica
February 20: Ignacio Cofone & Katherine Strandburg — Strategic Games and Algorithmic Transparency
January 30: Sabine Gless — Predictive Policing: In Defense of 'True Positives'
December 5: Discussion of current issues
November 28: Ashley Gorham — Algorithmic Interpellation
November 14: Mark Verstraete — Data Inalienabilities
November 7: Jonathan Mayer — Estimating Incidental Collection in Foreign Intelligence Surveillance
October 31: Sebastian Benthall — Trade, Trust, and Cyberwar
October 24: Yafit Lev-Aretz — Privacy and the Human Element
October 17: Julia Powles — AI: The Stories We Weave; The Questions We Leave
October 10: Andy Gersick — Can We Have Honesty, Civility, and Privacy Online? Implications from Evolutionary Theories of Animal and Human Communication
October 3: Eli Siems — The Case for a Disparate Impact Regime Covering All Machine-Learning Decisions
September 26: Ari Waldman — Privacy's False Promise
September 19: Marijn Sax — Targeting Your Health or Your Wallet? Health Apps and Manipulative Commercial Practices
September 12: Mason Marks — Algorithmic Disability Discrimination
May 2: Ira Rubinstein — Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
April 25: Elana Zeide — The Future Human Futures Market
April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
April 11: John Nay — Natural Language Processing and Machine Learning for Law and Policy Texts
April 4: Sebastian Benthall — Games and Rules of Information Flow
March 28: Yan Shvartzshnaider and Noah Apthorpe — Discovering Smart Home IoT Privacy Norms using Contextual Integrity
February 28: Thomas Streinz — TPP’s Implications for Global Privacy and Data Protection Law
February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
February 7: Madeline Bryd and Philip Simon — Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
January 31: Madelyn Sanfilippo — Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks
January 24: Jason Schultz and Julia Powles — Discussion about the NYC Algorithmic Accountability Bill
November 29: Kathryn Morris and Eli Siems — Discussion of Carpenter v. United States
November 15: Leon Yin — Anatomy and Interpretability of Neural Networks
November 8: Ben Zevenbergen — Contextual Integrity for Password Research Ethics?
November 1: Joe Bonneau — An Overview of Smart Contracts
October 25: Sebastian Benthall — Modeling Social Welfare Effects of Privacy Policies
October 18: Sue Glueck — Future-Proofing the Law
October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
October 4: Finn Brunton — 'The Best Surveillance System we Could Imagine': Payment Networks and Digital Cash
September 27: Julia Powles — Promises, Polarities & Capture: A Data and AI Case Study
September 20: Madelyn Rose Sanfilippo AND Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
September 13: Ignacio Cofone — Anti-Discriminatory Privacy
April 26: Ben Zevenbergen — Contextual Integrity as a Framework for Internet Research Ethics
April 19: Beate Roessler — Manipulation
April 12: Amanda Levendowski — Conflict Modeling
April 5: Madelyn Sanfilippo — Privacy as Commons: A Conceptual Overview and Case Study in Progress
March 29: Hugo Zylberberg — Reframing the fake news debate: influence operations, targeting-and-convincing infrastructure and exploitation of personal data
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa will lead discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial.
March 8: Ira Rubinstein — Privacy Localism
March 1: Luise Papcke — Project on (Collaborative) Filtering and Social Sorting
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) — Privacy and Innovation
February 15: Argyri Panezi — Academic Institutions as Innovators but also Data Collectors - Ethical and Other Normative Considerations
February 8: Katherine Strandburg — Decisionmaking, Machine Learning and the Value of Explanation
February 1: Argyro Karanasiou — A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
January 25: Scott Skinner-Thompson — Equal Protection Privacy
December 7: Tobias Matzner — The Subject of Privacy
November 30: Yafit Lev-Aretz — Data Philanthropy
November 16: Helen Nissenbaum — Must Privacy Give Way to Use Regulation?
November 9: Bilyana Petkova — Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
November 2: Scott Skinner-Thompson — Recording as Heckling
October 26: Yan Shvartzshnaider — Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
October 19: Madelyn Sanfilippo — Privacy and Institutionalization in Data Science Scholarship
October 12: Paula Kift — The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform
October 5: Craig Konnoth — Health Information Equity
September 28: Jessica Feldman — the Amidst Project
September 21: Nathan Newman — UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
September 14: Kiel Brennan-Marquez — Plausible Cause
April 27: Yan Shvartzshnaider — Privacy and IoT AND Rebecca Weinstein — Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken — Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gurses]
April 13: Florencia Marotta-Wurgler — Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)
April 6: Ira Rubinstein — Big Data and Privacy: The State of Play
March 30: Clay Venetis — Where is the Cost-Benefit Analysis in Federal Privacy Regulation?
March 23: Daisuke Igeta — An Outline of Japanese Privacy Protection and its Problems; Johannes Eichenhofer — Internet Privacy as Trust Protection
March 9: Alex Lipton — Standing for Consumer Privacy Harms
March 2: Scott Skinner-Thompson — Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]
February 24: Daniel Susser — Against the Collection/Use Distinction
February 17: Eliana Pfeffer — Data Chill: A First Amendment Hangover
February 10: Yafit Lev-Aretz — Data Philanthropy
February 3: Kiel Brennan-Marquez — Feedback Loops: A Theory of Big Data Culture
January 27: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 4: Solon Barocas and Karen Levy — Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton — Of Fembots and Men: Privacy Insights from the Ashley Madison Hack
October 21: Paula Kift — Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and co-author, Nizan Geslevich Packin — Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser — What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin — Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé — Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson — Performative Privacy
September 9: Kiel Brennan-Marquez — Vigilantes and Good Samaritan
April 22: Helen Nissenbaum — 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken — From Collection to Use Regulation? A Comparative Perspective
March 11: Rebecca Weinstein (Cancelled)
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online
Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)
Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead
October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue
September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
January 29: Organizational meeting
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day
March 6: Mariana Thibes — Privacy at Stake, Challenging Issues in the Brazilian Context
March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data