May 2: Ira Rubinstein — Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
ABSTRACT: The General Data Protection Regulation (GDPR) seeks to protect the privacy and security of EU citizens in a world that is vastly different from that of the 1995 Data Protection Directive. This is largely due to the rise of the Internet and digital technology, which together define how we communicate, access the world of ideas, and make ourselves into social creatures. The GDPR seeks to modernize European data protection law by establishing data protection as a fundamental right. It requires data controllers to respect the rights of individuals, including new rights of erasure and data portability, and to comply with new obligations including accountability, a risk-based approach, impact assessments, and data protection by design and default (DPDD). Ideally, these new DPDD obligations will change business norms by bringing data protection to the forefront of product design. Although the GDPR strives to remain sufficiently broad and flexible to allow for creative solutions, it also adopts a belt-and-suspenders approach to regulation, imposing multiple, overlapping obligations on data controllers. What, then, is the specific task of the DPDD provision? It requires organizations to implement privacy-enhancing measures at the earliest stage of design and to select techniques that by default are the most protective of individuals' privacy and data protection. More specifically, Article 25 requires that "controllers shall ... implement appropriate technical and organisational measures ... in an effective manner ... in order to meet the requirements of this Regulation and protect the rights of data subjects" and to ensure that "by default, only personal data which are necessary for each specific purpose of the processing are processed." This raises several questions, however.
For example, do organizations achieve these goals by implementing specific measures over and above those they might otherwise put into effect to meet their obligations under the remainder of the Regulation? Are certain "technical and organizational measures" (like pseudonymisation and data minimisation) required or merely recommended? Are there specific design and engineering techniques that organizations should follow to satisfy their DPDD obligations? And how do organizations know when their efforts satisfy Article 25 requirements, especially when they have already complied with other obligations? In this paper, we examine what technology companies are doing currently to satisfy their obligations under Article 25 in the course of establishing overall GDPR compliance programs. We expect to find that companies with limited privacy resources are confining their efforts to a compliance-based approach, resulting in a patchwork of privacy practices rather than adoption of a privacy-based model of product design. And we predict that in a rush to achieve compliance, these companies will fail to implement the methods and practices that comprise privacy by design as that term is understood, not by regulators, but by engineers and designers as described in our earlier work (Rubinstein & Good, Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents). In other words, many firms will treat the DPDD obligation as just a checkbox requirement. We will investigate these claims via case studies. However, we do not rely on surveys of a representative sampling of regulated firms or snowball sampling of industry practitioners whose work exemplifies the methods and practices that engineers and designers rely on to achieve specific privacy (and security) goals. Rather, we will analyze what two groups of vendors are offering their customers to help them operationalize the GDPR generally and Article 25 in particular.
We will look at both privacy technology vendors (a new niche market of firms selling into the private-sector market of firms needing help with GDPR compliance) and cloud infrastructure vendors (like Microsoft) who are marketing their platform to large multinationals and SMEs as GDPR-ready. (If necessary, we may supplement this approach with telephone interviews, but mostly for purposes of follow-up questions rather than as a source of primary knowledge.) Finally, we report on the incentives and motivations behind these practical solutions and discuss how supervisory authorities might develop policies to encourage firms to adopt appropriate solutions and develop the necessary expertise to achieve them. In sum, we provide an analysis of Article 25 with the goal of helping EU regulators bridge the gap between the ideals and practice of data protection by design and default.
April 25: Elana Zeide — The Future Human Futures Market
ABSTRACT: This paper considers the emerging market in student futures as a cautionary tale. Income sharing arrangements involve the explicit and literal commodification of “human capital” by for-profit third parties who broker income sharing agreements between private investors and students with promising predictive data profiles. This paper considers the problematic legal and ethical aspects of the predictive technologies driving these markets and draws a parallel to the role schools and third-party career platforms play in sorting, scoring, and predicting student futures as part of a formal education. These matching systems not only mete out opportunity but preempt access to opportunity (see Kerr & Earle). Many coding “bootcamps” take an untraditional approach to student financing. Some offer a money-back employment “guarantee.” Others use “human capital” contracts. Instead of requiring students to pay tuition up front or take out onerous loans based on uncertain career paths, schools claim a portion of a graduate’s wages upon gainful employment. A two-year software engineering program in San Francisco, for example, asks for no money upfront but then takes 17% of students’ internship earnings during the program and 17% of salaries for three years after finding a job. Other schools, advocates, and policymakers push for similar private education funding arrangements, including bills introduced in the U.S. Senate and House of Representatives in 2017. They promote “income share agreements” as more equitable and efficient for students than the traditional student loan system, where debt may be disproportionate to post-graduation wages. These arrangements raise numerous constitutional, legal, and ethical questions. Do students have to accept the first offer they receive? How can they be enforced? How might this arrangement shift who can obtain a post-secondary credential? Are they simply a modern version of indentured servitude?
A less discussed but key component of the developing “futures” market is the role of opportunity brokers: third parties who design, implement, and “take the complexity out of” income sharing agreements. These for-profit companies match interested investors with promising “opportunities” based on proprietary predictive analytics that project future income. Some go beyond commodifying student futures to securitizing them: as one commentator writes, “human capital - the present value of individuals’ future earnings - may soon become an important investable asset class, following in the footsteps of home mortgage debt.” Schools are themselves opportunity brokers, credential-creators, and career matchmakers that end up determining whose futures we support - individually, institutionally, or as a society. Scholars and popular entertainment offer chilling accounts of the dystopian aspects of a scored society, governed by anticipatory and proprietary data-models likely to reinforce existing patterns of privilege and inequity: ubiquitous surveillance systems that chill free expression, promote performativity, and create circumstances ripe for social control and engineering. Except we already have such a system in place: the formal education system. American schools not only provide whatever one considers “an education,” but also sort, score, and predict student potential. The tools they use to do so - textbooks, SATs, and standards like the Common Core - are subject to intense public scrutiny. Schools increasingly rely on for-profit vendors to provide the platforms and tools that deliver, assess, and document student progress. These include “personalized learning systems” that continuously monitor student progress and adapt instruction at scale - what some have called the “mass customization” of education. They use predictive analytics to classify students, infer characteristics, and predict optimal learning pathways.
Higher education institutions also use predictive platforms to make recruiting and admissions decisions, award financial aid, and detect students at risk of dropping out. Social media platforms and people analytics firms increasingly mediate and automate candidate-employer matching. This system might similarly not just deny but preempt access to opportunity without accompanying due process provisions. And it is likely to do so in ways that reinforce today's inequities - creating a new segregation of education.
April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
ABSTRACT: In 2017, Scott Skinner-Thompson published “Performative Privacy,” based in part on work with this group. In the article, he “identifies a new dimension of public privacy” and argues for a reading of certain public acts of anti-surveillance resistance as performative, and therefore to be legally understood as expressions of speech and protected as such. In this talk, I extend the framework of performative privacy from the perspective of performance studies, and discuss some new applications of critical theory and performance theory in contemporary issues of surveillance. As a discipline, performance studies, particularly its critique of speech as act and its intervention in the use of liveness in action, offers an opportunity to meaningfully trouble the distinction between efficacy and expression underlying the question of performative privacy. To test these limits and demonstrate the possible applications of performance theory, I follow the performative privacy framework in two directions. First, we’ll examine privacy’s impact on performance and aesthetics in the rise of the post-Snowden “surveillance art” movement. Then, I incorporate Clare Birchall’s emerging research on “shareveillance” to explore the question of efficacy in surveillance resistance and the resulting impacts of performance entering into privacy discourse.
April 11: John Nay — Natural Language Processing and Machine Learning for Law and Policy Texts
ABSTRACT: Almost all law is expressed in natural language; therefore, natural language processing (NLP) is a key component of efforts to automatically structure, explore and predict law and policy at scale. NLP converts unstructured text into a formal, structured representation that computers can analyze. First, we provide a brief overview of the different types of law and policy texts and the different types of machine learning methods to process those texts. We introduce the core idea of representing words, sentences and documents as numbers. Then we describe NLP and machine learning tools for leveraging the text data to accomplish tasks. We describe methods for automatically summarizing content (sentiment analyses, text summaries, topic models), extracting content (entities, attributes and relations), retrieving information and documents, predicting outcomes related to text, and answering questions.
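The abstract's core idea of representing documents as numbers can be made concrete with the simplest such scheme, a bag-of-words count vector. The following is a minimal sketch in Python; the example sentences and whitespace tokenizer are our own illustration, not material from the talk:

```python
from collections import Counter

def bag_of_words(docs):
    """Represent each document as a vector of word counts over a shared vocabulary."""
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({word for tokens in tokenized for word in tokens})
    # One row per document, one column per vocabulary word.
    vectors = [[Counter(tokens)[word] for word in vocab] for tokens in tokenized]
    return vocab, vectors

docs = ["the statute amends the code", "the court interprets the statute"]
vocab, vectors = bag_of_words(docs)
```

Real NLP pipelines replace these sparse counts with dense learned embeddings, but the principle is the same: once text is a vector, standard machine learning tools apply.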
April 4: Sebastian Benthall — Games and Rules of Information Flow
ABSTRACT: Attempts to characterize the nature of privacy must acknowledge the complexity of the concept. They tend to be either particularist (acknowledging many, unrelated, particular meanings) or contextualist (describing how the same concept manifests itself differently across social contexts). Both these approaches are insufficient for making policy and technical design decisions about technical infrastructure that spans many different contexts. A new model is needed, one that is compatible with these theories but which characterizes privacy considerations in terms of the reality of information flow, not our social expectations of it. I build a model of information flow from the theories of Fred Dretske, Judea Pearl, and Helen Nissenbaum that is compatible with both intuitive causal reasoning and contemporary machine learning methods. This model clarifies that information flow is a combination of causal flow and nomic association, where the associations of information depend on the causal structure of which the flow is a part. This model also affords game-theoretic and mechanism-design extensions using the Multi-Agent Influence Diagram framework. I employ this model to illustrate several different economic contexts involving personal information, as well as what happens when these contexts collapse. The model allows for a robust formulation of the difference between a tactical and a strategic information flow, which roughly correspond to the differences between the impact of a sudden data breach and the chilling effects of ongoing surveillance.
March 28: Yan Shvartzshnaider and Noah Apthorpe — Discovering Smart Home IoT Privacy Norms using Contextual Integrity
ABSTRACT: The proliferation of Internet of Things (IoT) devices for consumer “smart” homes raises concerns about user privacy. We present a survey method based on the Contextual Integrity (CI) privacy framework that can quickly and efficiently discover privacy norms at scale. We apply the method to discover privacy norms in the smart home context, surveying 1,731 American adults on Amazon Mechanical Turk. For $2,800 and in less than six hours, we measured the acceptability of 3,840 information flows representing a combinatorial space of smart home devices sending consumer information to first and third-party recipients under various conditions. Our results provide actionable recommendations for IoT device manufacturers, including design best practices and instructions for adopting our method for further research.
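The survey's 3,840 information flows arise from crossing the CI parameters (sender device, recipient, attribute, transmission condition) combinatorially. A minimal sketch of that construction in Python, with small hypothetical parameter lists of our own (the study's actual lists are larger, which is how the full cross product reaches 3,840):

```python
from itertools import product

# Hypothetical CI parameter values for illustration only.
senders = ["sleep monitor", "security camera", "door lock"]
recipients = ["device manufacturer", "ISP", "government"]
attributes = ["its owner's location", "audio of its owner"]
conditions = ["if the owner has given consent", "if the data are anonymized"]

# Each combination is one information flow for respondents to rate.
flows = [
    f"a {s} sends {a} to the {r} {c}"
    for s, r, a, c in product(senders, recipients, attributes, conditions)
]
```

Generating prompts this way lets a single survey instrument cover the whole parameter space systematically rather than hand-writing each vignette.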
March 21: Cancelled
March 7: Cancelled
February 28: Thomas Streinz — TPP’s Implications for Global Privacy and Data Protection Law
ABSTRACT: On 8 March, the remaining eleven parties of the original Trans-Pacific Partnership (TPP)–Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, and Vietnam–will meet in Santiago, Chile to revive the TPP via the awkwardly (and arguably misleadingly) labelled Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). This is a surprising development for two reasons: 1) After President Trump withdrew the US from the original TPP in January 2017, most observers believed the agreement was dead for good. 2) The TPP11 parties preserved the vast majority of the provisions of the original TPP (with notable exceptions mainly in the investment and IP chapters) despite the fact that the agreement mainly followed US models of (so called) free trade agreements (FTAs) and was in fact promoted as “Made in America” by the Office of the United States Trade Representative (USTR) during the Obama administration, which was particularly proud of a new set of rules that it branded as the "Digital2Dozen." The chapter on “electronic commerce,” which contains most but not all provisions with relevance for internet law and regulation, got incorporated into CPTPP without any modifications and is bound to become the template for future trade agreements (including the ongoing renegotiations of NAFTA) without EU participation. In my presentation for PRG, I will focus on TPP’s (weak) provision on “personal information protection” (Article 14.8) and its innovative rules for free data flows (Article 14.11) and against data localization requirements (Article 14.13). I will explain and we should discuss why the EU views these rules as problematic from a privacy perspective. In its recent agreement with Japan, which is also a TPP party, this issue was kicked down the road, but on 31 January 2018 the European Commission announced that it would endorse provisions for data flows and data protection in EU trade agreements.
The crucial difference to the US model as incorporated in TPP is that the EU will likely require compliance with the General Data Protection Regulation (GDPR) as a condition for free data flows—complementing the existing adequacy assessment procedures and leveraging its trade muscle to promote the GDPR as the global standard.
February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
ABSTRACT: Direct-to-consumer genetic sequencing, provided by companies like 23andMe, Ancestry, and Helix, has opened a myriad of scientific and legal issues ranging from the statistical interpretation of results to access, regulation, and user privacy. Interestingly, the most recent efforts have attempted to tie together direct-to-consumer testing with the blockchain and cryptocurrencies, but consumer protection and privacy concerns remain. In this presentation, we will provide a history of the direct-to-consumer genetic sequencing market and how we have arrived at the current market. We will also highlight some of the legal and regulatory issues surrounding the activities of these companies in relation to FDA requirements, the Genetic Information Nondiscrimination Act of 2008 (GINA), and the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Finally, we will use current examples from the emerging market of direct-to-consumer gut microbiome sequencing kits as a study for how privacy policies of these companies are evolving in a developing market and what concerns customers could (and perhaps should) have when using these kits.
February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
ABSTRACT: In Intellectual Property law, a trade secret is information which “derives economic value from not being generally known . . . to other persons who can obtain economic value from its disclosure or use.” (Uniform Trade Secrets Act). Unlike patent law, trade secret law will cease to protect against the use of information once that information becomes generally known. A trade secret, once disclosed, is the proverbial cat out of the bag. For this reason, courts have developed an evidentiary privilege protecting trade secrets from disclosure in trial unless a party shows that such disclosure is actually necessary to a just resolution. This privilege has developed over decades of civil litigation. Recently, a confluence of factors has led to an increase in assertions of the trade secret privilege in criminal trials. State police departments and prosecutors have begun contracting with private software developers for the use of algorithmic tools that generate either forensic proof to be used at trial, “risk assessment” to be used at sentencing, or data for policing. Criminal defendants have sought access to the source code for such programs only to be met with claims that the information sought is privileged as a trade secret. In addressing what a criminal defendant must show to overcome the privilege, some courts have directly applied the standard from civil common law, while others have imported key elements of that standard. Assuming that a defendant must always make some showing to justify the disclosure of “trade secret” source code in her criminal trial, her effective defense will require an understanding of the nature of her burden—must she show that the code is “necessary” to her defense (a replica of the civil standard), that the code is simply “material and relevant” (in line with basic criminal discovery standards), or something in between?
This talk will draw from a spate of cases in which defendants sought the source code from “probabilistic genotyping” programs in order to define the contours of these standards as they have recently been applied. Centrally, it will identify the factors that have led courts to find that criminal defendants have failed to carry the burden of establishing either relevance or necessity of the source code. It will reveal that judges have relied on the same validation studies properly considered at the admissibility stage (where the court must determine the reliability of expert/scientific evidence) to determine that a defense review of the source code is either irrelevant or unnecessary. The idea that validation studies can defeat a defendant’s claim that source code is relevant or necessary to her defense fails to account for two key considerations—first, that a defendant may seek to challenge something other than the reliability of the software, and second, that validation of these tools may not be providing the type of assurance legally sufficient to defeat a defendant’s discovery requests. In addition to critiquing judicial reasoning, this talk will address deficiencies in defense pleadings and potential adaptations that may lead to more successful discovery motions in the future.
February 7: Madeline Byrd and Philip Simon — Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
ABSTRACT: In 2016, ProPublica published an article revealing the startlingly easy method Facebook’s advertising program provided to exclude protected classes from seeing employment, housing, and credit advertisements. The article raised numerous questions about potential liability and what other mechanisms advertisers could use to discriminate via Facebook’s platform. This presentation will address whether Facebook can be held liable for advertising discrimination based on the discriminatory uses of its platform by advertisers; the current state of U.S. discrimination laws with respect to targeted online advertising in general; and whether online platforms can escape liability through the Communications Decency Act (CDA) § 230. Our analysis of potential discriminatory uses will focus on research done by Krishna Gummadi and his team that explores Facebook’s advertising features (to be presented at FAT* ’18, February 2018). Their paper identifies three ways in which advertisers can target users: PII-based targeting, attribute-based targeting, and look-alike audience targeting. Each targeting tool will be analyzed in the context of employment, housing, and credit discrimination laws to address whether these features can be illegally used by advertisers. Finally, we will address possible ways in which Facebook can be held liable for these illegal uses, despite any protection against liability that it may enjoy under CDA § 230.
January 31: Madelyn Sanfilippo — Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks
ABSTRACT: Knowledge access is both constrained and supported by social, political, human, economic, and technological factors, making formal and informal governance of knowledge a set of complex sociotechnical constructs. Political theory surrounding polycentric governance has long structured inquiry into contexts in which public service provision is nested or competing. This talk will define and discuss applications of polycentric governance theory to sociotechnical knowledge services, in order to support empirically grounded policy-making. Polycentricity is often defined in terms of many nested or overlapping contexts or jurisdictions, which may compete with or complement one another, yet is also fundamentally about the many centers of decision-making within those contexts or jurisdictions. Sociotechnical polycentricity pertains not only to the complex exogenous policy environment, but also to endogenous decisions of firms or actors, which themselves overlap with this external environment. Extensive literature demonstrates how polycentricity illuminates complexity and supports policy recommendations or improvements, based on failures, complexity, or conflicts in cases; this talk will explore polycentric frames applied to questions around sociotechnical governance, including various examples centered on knowledge access and privacy.
January 24: Jason Schultz and Julia Powles — Discussion about the NYC Algorithmic Accountability Bill
ABSTRACT: The New York City Council recently passed one of the first laws in the United States to address “algorithmic accountability.” The bill, NY 1696, proposed by council member James Vacca, creates a task force to explore how the city can best open up public agencies’ computerized decision-making tools to public scrutiny. This effort raises many technical, legal, and political questions about how algorithmic systems fit into the broader notions of responsible and responsive government. Julia Powles and Jason Schultz have each been involved in the debate over the bill and will lead a discussion of its contents, its context, and its next steps. See Julia Powles’ recent New Yorker piece for some more background.
November 29: Kathryn Morris and Eli Siems — Discussion of Carpenter v. United States
ABSTRACT: In the 1970s, the Supreme Court decided a series of cases establishing within its Fourth Amendment jurisprudence a principle now known as the Third-Party Doctrine. One defendant’s bank records were seized and examined without a warrant. U.S. v. Miller, 425 U.S. 435 (1976). The phone numbers dialed by another were surreptitiously recorded by a pen register, also without a warrant. Smith v. Maryland, 442 U.S. 735 (1979). The Court reasoned that these were valid exercises of law enforcement authority and not violations of the Fourth Amendment, chiefly because the defendants had willingly turned this information over to a third party and, in doing so, forfeited any legitimate expectation that the information would be private and thus subject to constitutional protection. Under the Third-Party Doctrine, access to such materials by law enforcement does not constitute a “search.” But much of how we transmit information to third parties has changed. Recently, some Justices of the Supreme Court have signaled willingness to revisit some Fourth Amendment principles in light of modern developments. Taking a fresh look at the Search Incident to Arrest doctrine in 2014, Chief Justice Roberts issued sweeping statements indicating that smartphones are different enough from traditional objects of search or seizure to change Fourth Amendment calculations. Riley v. California, 573 U.S. __ (2014). Justice Sotomayor has called the Third-Party Doctrine “ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.” U.S. v. Jones, 132 S.Ct. 945 (2012). (Sotomayor, J. Concurring). Carpenter has petitioned the Supreme Court to rule on whether the warrantless collection of Cell-Site Location Information (CSLI) violated his Fourth Amendment rights, and the government is set to argue that this information is exempt from the warrant requirement under the Third-Party doctrine. 
Much will turn on Carpenter’s efforts to draw a meaningful distinction between CSLI and the pen register data in Smith. Carpenter will argue (in line with recent SCOTUS dicta) that cell data is different enough in terms of scope and potential intrusion that the third-party rule should not mechanically apply. He will also argue that his transmission of information to his cell provider was not voluntary and that this transmission should not be found to affect the legitimacy of his expectation of privacy. The Electronic Frontier Foundation’s amicus brief in support of Carpenter details how technical and practical considerations push against application of the Third-Party Doctrine to CSLI. The relevant portion of the Government’s brief is in section I of that argument. Browse SCOTUSblog for additional coverage and filings. Background reading: Brief for the United States and Brief of Amicus Curiae Electronic Frontier Foundation.
November 15: Leon Yin — Anatomy and Interpretability of Neural Networks
ABSTRACT: From tumor spotting to facial identification, neural networks are designed to optimize and automate decision-making. In a recent blog post, Andrej Karpathy-- the director of AI at Tesla, called neural networks Software 2.0. But just how do neural networks work? Interpretability is an increasingly hot topic among practitioners and policymakers alike. This presentation dives into the anatomy of neural networks, from input to output, and everything in between. The aim of this presentation is to establish a baseline understanding of how neural networks operate internally, in hopes that it will inform how we interact with neural networks externally.
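The "anatomy" the talk describes, layers of weighted sums passed through nonlinearities, can be sketched in a few lines of plain Python. This is a toy two-layer network with arbitrary illustrative weights, not anything from the presentation:

```python
def relu(v):
    """Elementwise nonlinearity: negative activations are zeroed out."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """Fully connected layer; weights[j] holds the incoming weights of neuron j."""
    return [sum(x * w for x, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

# Toy parameters: 2 inputs -> 2 hidden neurons -> 1 output.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[2.0, 1.0]], [0.5]

def forward(x):
    h = relu(dense(x, W1, b1))   # hidden layer: linear map + nonlinearity
    return dense(h, W2, b2)      # output layer: a score or logit
```

Interpretability questions arise because in trained networks the weights number in the millions, so individual entries of W1 and W2 no longer carry human-readable meaning.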
November 8: Ben Zevenbergen — Contextual Integrity for Password Research Ethics?
ABSTRACT: Ben will present a draft chapter of his PhD, where he applies contextual integrity and the literature on research ethics to technical password research. While there are some benefits to password research, the origin of the research data is usually a hacked and leaked database containing millions of passwords. The aim of the chapter is not to criticize password research per se, but to test whether contextual integrity would be a useful framework for applying the concepts of research ethics. The paper Ben will use as a case example is “Empirical Analysis of Password Reuse and Modification across Online Services”: https://arxiv.org/abs/1706.01939. Ben has previously blogged about this research here: https://www.considerati.com/publications/blog/research-password-dumps-good-bad/
November 1: Joe Bonneau — An Overview of Smart Contracts
ABSTRACT: Smart contracts are an exciting and rapidly developing technology. Ethereum, the most popular platform for smart contracts, is already worth over $30 billion on hopes that they can revolutionize some types of contractual agreement. For example, Alice and Bob can agree to play a game of chess, without meeting or trusting each other. A smart contract can guarantee that the loser pays the winner a bet with no traditional legal system to enforce the terms. Alice and Bob might live in different jurisdictions, or one of them might even be a robot. This talk will provide an overview of the technology and its limitations. It will also discuss the controversy behind the DAO, which highlights the difficulty of automated contract enforcement with no human oversight. Finally, several open questions about the legal implications of smart contracts will be presented.
October 25: Sebastian Benthall — Modeling Social Welfare Effects of Privacy Policies
ABSTRACT: According to Contextual Integrity, privacy norms are legitimized by a balance of societal values, contextual purposes, and individual ends. While several canonical arguments are used to make this point, formal reasoning about the social welfare consequences of privacy can shed light on policy design, especially when fine-grained computational policies, such as differential privacy, are available. Using the compact game-theoretic framework of Multi-Agent Influence Diagrams (Koller and Milch, 2003), we model several classes of information markets and the impact of privacy regulations on them individually as well as in combination. We discover that the social welfare implications of privacy are not evenly distributed, and compare this result with empirical data about diversity in privacy preferences.
October 18: Sue Glueck — Future-Proofing the Law
ABSTRACT: In July 2016, the Court of Appeals for the Second Circuit agreed with Microsoft that U.S. federal or state law enforcement cannot use traditional search warrants to seize emails of citizens of foreign countries that are located in data centers outside the United States. On October 16, 2017, the Supreme Court granted the Department of Justice’s petition to review this decision. Microsoft believes that the Electronic Communications Privacy Act (ECPA) – a law enacted decades before there was such a thing as cloud computing – was never intended to reach within other countries’ borders. But there’s a broader dimension to this issue: The continued reliance on a law passed in 1986 will neither keep people safe nor protect people’s rights. If U.S. law enforcement can obtain the emails of foreigners stored outside the United States, what’s to stop the government of another country from getting your emails even though they are located in the United States? Microsoft believes that people’s privacy rights should be protected by the laws of their own countries and that information stored in the cloud should have the same protections as paper stored in your desk. Please join Sue Glueck, Microsoft’s academic relations director, for a lively discussion of the issues implicated by this case.
October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
ABSTRACT: Algorithms make decisions that permeate our lives. Explanations of those decisions can assist with improving algorithm performance and ensuring procedural fairness. We describe a taxonomy for algorithmic decision-making explanations. We argue that explanations of algorithmic decisions should be provided in terms of why the chosen decision is better than an alternative decision that could have been chosen, i.e. the difference between the outcomes that would occur in a world where the decision is taken and a world where the alternative decision is taken. The explanation should then provide local and global explanations of the input-output behavior of the models feeding into the decision module. In walking through the components of an explanation, we focus on complex data-driven systems, but the methods are applicable to simpler models as long as their input-output behavior can be analyzed. For an empirical case study, we model and explain an example of algorithmic decision-making in cooperation games.
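The contrastive criterion described in the abstract (explain a decision by the difference in outcomes between the world where it is taken and the world where an alternative is taken) can be sketched in a few lines. The function names and the toy loan model here are illustrative assumptions, not the paper's code:

```python
def contrastive_explanation(model, features, decision, alternative):
    """Explain a decision by the difference in predicted outcomes between
    the world where it is taken and the world where the alternative is."""
    chosen_outcome = model(features, decision)
    alt_outcome = model(features, alternative)
    return {
        "chosen_outcome": chosen_outcome,
        "alternative_outcome": alt_outcome,
        "difference": chosen_outcome - alt_outcome,
    }

# Toy loan model: approving yields expected repayment value, denying yields 0.
def toy_model(features, decision):
    return features["repay_prob"] * features["amount"] if decision == "approve" else 0.0

explanation = contrastive_explanation(
    toy_model, {"repay_prob": 0.9, "amount": 1000.0}, "approve", "deny"
)
# explanation["difference"] is the contrastive justification for approving.
```

Local and global explanations of the underlying model's input-output behavior would then be layered on top of this outcome comparison.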
October 4: Finn Brunton — 'The Best Surveillance System we Could Imagine': Payment Networks and Digital Cash
ABSTRACT: This wide-ranging talk will be less about an argument -- there is one, but I don't think it's as germane to the PRG's interests -- than a history that I hope will start further discussion about privacy and payment. I will tell two linked stories: Almost fifty years ago, a team of computer scientists and electrical engineers developed "the best surveillance system we could imagine": a prototype electronic funds transfer (EFT) system, an early sketch of payment networks from Visa to PayPal. Electronic money is a medium for data: for records of purchases, locations and times, names and social networks. From money as online performance (think Venmo or WeChat's gift-money games) to the information-collection practices of different payment platforms, digital money can produce detailed dossiers and reward or punish particular choices in subtle ways. There is an alternative history to this one, however: the project of building anonymous digital cash -- money as a medium that provides no information but its own verification. This project, filled with tricky technical and social paradoxes to resolve, takes us from radical experiments and subcultures in the 1980s to Bitcoin, Zcash, online black markets, and digital money-laundering schemes and obfuscation attempts in the present day. These projects carry their own problems -- from legitimacy to money laundering -- that we can consider.
September 27: Julia Powles — Promises, Polarities & Capture: A Data and AI Case Study
ABSTRACT: The case study: In November 2015, 1.6 million Londoners' fully identified medical records were transferred to Google. The first the public heard of it was an explosive news story in April 2016. The claimed purpose? So that Google's AI arm, DeepMind, could develop an app for kidney injury alerts. The discussion: What is the best way to animate concerns over privacy, public value-for-data, competition, and civic innovation? How do you avoid polarization? How do you motivate intervention and accountability? What is the optimal strategy at different layers and for different audiences? Julia will speak to the paper Google DeepMind and Healthcare in an Age of Algorithms, and the draft reply to DeepMind's response to the paper.
September 20: Madelyn Rose Sanfilippo and Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
ABSTRACT: News outlets increasingly capitalize on the potential of push notifications to drive engagement and enhance readership. Such changes in news reporting and consumption offer a new, largely overlooked, research perspective into the competing narratives about the definition of news, their impact on political participation, entrenchment of political views, the ubiquity of media environments, and anxiety in media consumption. Situated within discussions about fake news, how new technologies have changed journalism, and the nature of news consumption overall, this paper and a larger ongoing empirical project seek to explore: 1) how push notifications and the online “breaking news” phenomenon differ from traditional news reporting; 2) relationships between objectivity in journalism, reader affect and trust; and 3) what this means for participatory politics and its relationship to the fourth estate. This article illustrates patterns and key insights about the impact of push notifications on journalism and changes in sentiment in news communication through a case study comparing reporting on President Nixon firing Special Prosecutor Archibald Cox in 1973 to the recent firing of FBI Director James Comey by President Trump. While headlines and push notifications vary significantly by news provider, push notifications are similar across platforms in distinguishing characteristics such as emotionally loaded and subjective language. Both of these are defining elements of fake and deceptive news and may potentially account for some of the media mistrust of recent years.
September 13: Ignacio Cofone — Anti-Discriminatory Privacy
ABSTRACT: The paper examines the information dynamics of privacy and discrimination (Strahilevitz 2007, Roberts 2015) to design anti-discriminatory privacy rules, especially for statistical and algorithmic discrimination (Barocas and Selbst 2016, Kim 2017). To do so, it uses empirical studies of informational anti-discriminatory rules (Goldin and Rouse 1997, Agan and Starr 2016) and explores how privacy rules can overcome the limitations that these rules faced. It proposes that taste-based discrimination and statistical discrimination, a traditional distinction in economics, have the same information dynamic and should therefore be addressed similarly by privacy law. The common element between different kinds of discrimination is that, to effectively prevent them, informational rules must focus on blocking information flows that can be used to shift discrimination to other groups (e.g. former inmates versus black men). Anti-discriminatory privacy rules, in other words, should block not only undesirable information but also their proxies. The paper develops a theory on how to identify such proxies based on the cross-elasticity of information. It then applies this idea to algorithmic discrimination and proposes that the literature has so far brought legal solutions to an information problem. The paper proposes an information solution to the informational problem instead.
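The paper's cross-elasticity test for proxies is not specified computationally; a crude correlation screen over invented data at least conveys the intuition of flagging features that could stand in for a blocked attribute (the data, the cutoff, and the helper below are illustrative assumptions, not the paper's method):

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Invented data: does a permitted feature track a blocked attribute closely
# enough to serve as a proxy for it?
blocked_attribute = [1, 1, 0, 0, 1, 0, 1, 0]                    # e.g. former-inmate status
candidate_feature = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.95, 0.15]  # e.g. a neighborhood score

r = pearson(blocked_attribute, candidate_feature)
flag_as_proxy = abs(r) > 0.8  # illustrative cutoff, not from the paper
```

A cross-elasticity analysis would of course be richer than a single correlation cutoff; the point is only that blocking proxies, and not merely the undesirable information itself, can be operationalized.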
April 26: Ben Zevenbergen — Contextual Integrity as a Framework for Internet Research Ethics
ABSTRACT: This doctoral work investigates to what extent the theory of Contextual Integrity can be used (or enhanced) to inform an ethics review procedure for Internet research. The project uses a structured case methodology, which can be used to test and enhance a theory. The analytical framework from the literature is the starting point in this methodology; it consists of internet research ethics methodologies, contextual integrity, and the principles of purpose limitation and data minimization. The thesis then assesses three cases through this lens that increase in technical complexity, whereby the findings of one case feed into the analysis of the next. After the three case studies have been completed, the thesis concludes with a chapter about how the analytical framework developed throughout the thesis, where the methodology succeeds, and where there may be issues that will need to be addressed. The cases to be addressed are projects from 1) Internet measurement, 2) data and algorithmic transparency, and 3) artificial intelligence. Please note that Ben has only recently changed the focus of his research from privacy engineering to Internet research ethics. His former supervisor had to leave academia, so he merged his ongoing side project on research ethics with the methodology and analytical lens of his PhD thesis. The thesis is thus very much an ongoing work. Please have a look at this guideline document, which is the result of Ben’s side project and is very much informed/inspired by Contextual Integrity: http://networkedsystemsethics.net/
April 19: Beate Roessler — Manipulation
ABSTRACT: The problem we want to discuss is what precisely it is in techniques like behavioural targeting that is worrying. These techniques seem to influence our behavior and our actions in certain ways, and we want to get at the reasons why these ways of influencing us could be illegitimate and harmful. What we suspect is that it is a form of manipulation that makes these techniques harmful; therefore, we are going to unpack the concept of manipulation, try to make conceptual distinctions that can be linked back to the cases, and make some suggestions as to what the harm of manipulation consists in.
April 12: Amanda Levendowski — Conflict Modeling
ABSTRACT: Conflict modeling offers a methodology rooted in case studies to identify and prioritize online conflicts and think about ways to mitigate the risks of those conflicts. Online systems—from social media platforms like Facebook and Twitter, to communities like reddit, to online games like League of Legends—are rife with conflict, and are notoriously bad at dealing with it. Abuse, clashes, and tensions (broadly "conflict") can arise between users or between users and the system itself, and online systems too often respond to conflict with ad hoc riffs on the Politician's Fallacy: We have a problem, we must do something, this is something, so we must do this. Except that “this” can end up causing other types of conflict. Conflict modeling adapts security threat modeling into a similarly systemic and predictable approach for spotting conflict. Conflict modeling draws from computer science literature related to threat modeling and value-sensitive design and builds on the legal literature regarding adapting threat modeling to privacy problems to offer a taxonomy of the kinds of conflicts that can arise on a system—broadly, safety, comfort, usability, legal, privacy, and transparency conflicts—as well as known techniques for mitigating those conflicts.
April 5: Madelyn Sanfilippo — Privacy as Commons: A Conceptual Overview and Case Study in Progress
ABSTRACT: Conceptualizing privacy in terms of information flows within a knowledge commons augments Helen Nissenbaum’s “privacy as contextual integrity” approach. Nissenbaum’s framework focuses on “appropriate flow of personal information”, as determined by contextual norms. The Governing Knowledge Commons (GKC) framework, which builds on Ostrom’s Institutional Analysis and Development (IAD) framework, highlights the development and sharing of knowledge resources among community members according to rules-in-use. Comparing Nissenbaum’s framework with the privacy commons approach highlights the reciprocal relationships between constraint and control over personal information and openness and sharing. For example, a group might make a collective decision to deploy the Chatham House Rules, constraining information flows to outsiders as a means of encouraging greater sharing among members. By viewing privacy as information flow rules-in-use constructed within a specific commons arrangement, the GKC framework goes beyond recognizing the importance of existing norms of appropriate information flow, drawing attention to the formal and informal governance mechanisms by which rules-in-use for information flows are created and maintained and providing tools for analyzing those mechanisms. This work also builds on multifaceted conceptualizations of privacy, such as those articulated by Solove and Bennett. This presentation will provide an overview of a series of projects addressing commons governance of privacy, including conceptual work and meta-analysis of privacy issues in knowledge commons cases with Katherine Strandburg and Brett Frischmann, as well as a case study, in progress, of policy networks as privacy commons. Following a theoretical explication of the GKC privacy commons framework, the Chatham House Rules example and the ongoing case study will be discussed.
March 29: Hugo Zylberberg — Reframing the fake news debate: influence operations, targeting-and-convincing infrastructure and exploitation of personal data
ABSTRACT: Privacy advocates and the national security community have long been at odds with each other. Starting with the first Crypto Wars, the framing of encryption issues as a tradeoff between privacy and security (i.e. if you want more security, you will have to give some of your privacy away) in the digital world has offered these two communities a zero-sum game to play, as in the public debate around backdoors. But in a world where privately-owned targeting-and-convincing infrastructures can be leased by organizations to efficiently influence people’s decision-making processes, privacy and security are no longer opposed - rather, they are two sides of the same coin. In this article, we describe the privacy-security tradeoff and explain how it evolves in the era of surveillance capitalism and targeting-and-convincing infrastructures. Crucially, these infrastructures rely on the collection of personal data on a massive scale, which enables us to make a security argument for data protection.
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa will lead discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial. Background reading:
1. Smart Toys:
- An article describing the data breach that recently affected the smart toy "CloudPets": https://www.forbes.com/sites/leemathews/2017/02/28/cloudpets-data-leak-is-a-privacy-nightmare-for-parents-and-kids/#6179cc4ab0bf
- An article describing the existing tension between the aspirations of the toy manufacturing industry and the requirements of children's privacy legislation: https://www.cnet.com/news/smart-toys-connected-internet-of-things-voice-recording-coppa-children-privacy-parents-kids/
- Report from Senator Bill Nelson on privacy and security concerns relating to smart toys: https://www.billnelson.senate.gov/sites/default/files/12.14.16_Ranking_Member_Nelson_Report_on_Connected_Toys.pdf
2. CIA Surveillance Techniques Leak:
- Background article from the Guardian: https://www.theguardian.com/media/2017/mar/07/wikileaks-publishes-biggest-ever-leak-of-secret-cia-documents-hacking-surveillance
- Update on Wikileaks' efforts to share the zero day exploits with tech companies: https://techcrunch.com/2017/03/17/wikileaks-tech-companies-demands/
- Tech companies’ recent responses to the leaks: https://www.benzinga.com/news/17/03/9188862/intel-others-respond-to-vault-7-cia-wikileaks-with-new-security-tools
3. Amazon Echo Recordings:
- Discussion of Amazon’s First Amendment argument against producing the recordings: https://www.forbes.com/sites/thomasbrewster/2017/02/23/amazon-echo-alexa-murder-trial-first-amendment-rights/#59f5ad145d81
- Report on Amazon dropping the argument and releasing the data http://www.pbs.org/newshour/rundown/amazon-releases-echo-data-murder-case-dropping-first-amendment-argument/
March 8: Ira Rubinstein — Privacy Localism
ABSTRACT: This is an early-stage presentation of an article in which I hope to offer the first in-depth study of what I will call “privacy localism.” Using case studies of three activist cities, Seattle, Oakland, and New York City, I will examine the origins, motivations, and outcomes of city-based efforts to develop privacy principles and practices while providing city services, pursuing smart city and open data initiatives, and carrying out both local police and counterterrorism activities. Cities are data rich environments for obvious reasons: large populations that generate a vast array of data through their use of city services, their encounters with local police, and their daily interaction with a variety of widely-deployed surveillance technologies such as license plate readers, police dashboard and body cameras, and gunfire location services. “Smart cities” collect even more data and local police forces draw on all these data sources for crime prevention and criminal investigation purposes. In the past few years, cities like Seattle and Oakland have begun to engage in privacy localism, launching privacy initiatives defining how they collect, use, and dispose of data and imposing citywide requirements on the funding, acquisition, and use of surveillance technologies. For example, Seattle has adopted an ordinance relating to its use of surveillance equipment and requiring City departments (1) to obtain City Council approval prior to acquiring certain surveillance equipment and (2) to propose protocols related to proper use and deployment of such equipment and addressing data retention, storage and access of any data obtained through its use. Oakland is taking similar steps as is NYC (as of just last week). How did these developments come about? 
What is their scope and likelihood of success given that cities have a very weak hand to play in the face of (1) the competing needs and interests of their own local police forces; (2) their limited powers under state constitutions and statutes, which make them almost entirely subject to state control; (3) their reliance on federal grants from the DHS or the DOJ, which make purchases of new police technologies subject to a variety of federal data sharing and other requirements, and which may violate local privacy principles or surveillance ordinances; and (4) the high likelihood of federal and state privacy laws preempting local rulemaking initiatives? This paper makes four claims: First, that cities are salient to privacy debates for at least six reasons, which I discuss under the following headings: Localism, urbanization, urban tech, public spaces, local police surveillance, and stalemate at the federal level. Second, that cities have limited but sufficient power to protect local citizens’ privacy. This section draws on the federalism literature to develop a theoretical framework in which cities occupy a discretionary space in which they may engage in three main privacy-related activities: smart city self-governance; regulation of local police surveillance; and resistance to federal laws or practices to which they object, such as the USA Patriot Act (think privacy “sanctuary” cities). Third, using case studies of Seattle, Oakland, and NYC, that cities are actively engaged in all three areas. And, finally, that cities can and should do more.
March 1: Luise Papcke — Project on (Collaborative) Filtering and Social Sorting
ABSTRACT: I am working on a larger project about (collaborative) filtering and social sorting and how it challenges, or supports, various tenets of liberal theory. There is of course copious theoretical work about the nature of surveillance, interrogating for instance how we have moved from Benthamian/Foucauldian panopticon-style surveillance to the ban-opticon (Bigo) of keeping people out on the basis of information about them and/or the synopticon (Mathiesen) describing how we are implicated in the surveillance of ourselves and of each other. All models describe how the new wealth of information is used to reinforce old or establish new discriminatory patterns in the marketplace, in social contexts and in governmental practices. Due to recent electoral results, the debate about how the filtering of news information may have contributed to 'bubbles' and a further erosion of the already polarized public discourse has if anything intensified. In this part of my project, I survey the practices of social sorting. I take a closer look at how the categorization of citizens that is at the basis of surveillance practices actually works and what effects that has on the equal standing of citizens in the public and civic spheres. What data collected by which (bureaucratic/governmental) institutions come to play a role in how citizens can pursue their interests in the political and civic spheres? Which classifications affect citizenship standing the most, and are they simply a reinforcement of ‘old’ discriminatory patterns, or do they contain significant new elements? Finally, is it reasonable to distinguish between public institutions and private third parties making discriminations on the basis of such categorization, given that discriminatory treatment in the market may have very strong effects on citizenship standing?
This part of the project being still rather early-stage, my presentation will map out the different categorization models to discuss what categorization practices seem especially detrimental to equal citizenship standing.
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) — Privacy and Innovation
ABSTRACT: Calls to limit or refrain from privacy regulation rest on a variety of conflicting grounds, such as freedom of speech, safety, security, efficiency, and innovation. One of the most widely cited, but least clearly specified, such grounds is the stifling effect that privacy regulation is said to have on innovation. Regulatory intervention for the sake of privacy, goes the claim, is suspect because it will hinder the development of a variety of socially valuable and innovative products, technologies or business models.[i] The threat of stifled innovation is often invoked in essentially talismanic fashion by those opposed to privacy regulation, without evidence and with little detail as to precisely what kind of innovation is at risk, the nature and severity of the looming risk, or by what mechanism any particular regulatory proposal would make the risk materialize.[ii] Privacy scholarship also has devoted surprisingly little attention to these questions.[iii] In this project, we interrogate and analyze the interplay between privacy regulation and innovation, drawing upon insights from the privacy, innovation and regulatory literatures. In particular, we set the debate about privacy regulation and innovation into the context of studies of the effects of regulation on innovation in other arenas, such as health care, environmental policy and consumer safety.[iv] We show that the bare argument that privacy regulation will “stifle” innovation is overly simplistic. Innovation is not a commodity of which society simply has “more” or “less.” Like many other aspects of the legal and economic background within which innovation occurs, regulation shapes innovation and affects its direction and character as much as it affects the amount of innovation that occurs. 
Moreover, the implications of regulation for innovation will depend, in the privacy arena as elsewhere, on the design of the regulation.[v] While we do not deny that there may be normative tradeoffs to be made between certain types of innovation and certain instantiations of privacy values, we argue that privacy regulation cannot be pigeonholed exclusively as an enemy of technological development. Indeed, privacy may be an essential catalyst for innovation. Thus, viewing the relationship between privacy and innovation simplistically, as a zero-sum trade-off, does a disservice to the social importance of both. We set off by mapping and categorizing the contentions that have been made about the effect of privacy regulation on innovation during previous debates about privacy regulation. While some of the possible arguments are unique to privacy regulation, others are classic counter-regulation arguments that are generally unpersuasive without a concrete cost-benefit analysis tailored to a particular situation.[vi] We then disentangle and characterize the various ways in which regulation can interact with innovation. The relationship between privacy regulation and innovation may involve a variety of regulatory means and innovation systems. We home in on issues such as the direction of the putatively stifled innovation, the particular types of innovation that may be stifled, the possibility that regulation can re-direct innovation in socially desirable directions, the possibility of innovation in means for regulatory compliance, mechanisms connecting specific regulatory avenues with particular effects on innovation and the nature of the social costs and benefits that might emerge from these interactions. Beginning with existing literature in privacy and other fields, we also explore the various available regulatory design levers that affect how regulation and innovation interact. 
While the relationship between privacy regulation and innovation has much in common with the relationship between regulation and innovation more broadly, we also consider how a more careful analysis of the relationship between privacy and innovation might play out in particular regulatory debates in the privacy arena. For example, privacy regulation’s long-standing reliance on a notice and consent regime has been the subject of almost universal critique based on its effectiveness in protecting privacy.[vii] Here we consider the implications of the significant gap between compliance with notice and consent based regulation and the effective promotion of privacy values for innovation. Notice and consent regulation may be both ineffective and wasteful, misleading individual consumers about their privacy and prompting expenditure of resources on compliance measures that do not promote privacy goals.[viii] Other examples include the possibility that regulation promoting “privacy by design,” in which privacy protection measures are integrated into the software, might be a spur for privacy-enhancing innovation and the opposite possibility that certain types of privacy regulation might divert resources away from innovation in privacy-preserving technologies and toward regulatory compliance initiatives.
[i] See, e.g., Richard Waters, Google Says Tighter EU Search Regulations Would ‘Hurt’ Innovation, The Financial Times, June 24, 2013; Colleen Taylor, Google Co-Founders Talk Regulation, Innovation, and More in Fireside Chat with Vinod Khosla, TechCrunch, Jul. 6, 2014, https://techcrunch.com/2014/07/06/google-co-founders-talk-long-term-innovation-making-big-bets-and-more-in-fireside-chat-with-vinod-khosla/; Adam Thierer & Ryan Hagemann, Removing Roadblocks to Intelligent Vehicles and Driverless Cars, 5 Wake Forest J.L. & Pol’y 339, 349 (2015).
[ii] See Julie E. Cohen, The Surveillance-Innovation Complex: The Irony of the Participatory Turn, in The Participatory Condition 10 (Darin Barney et al. eds., 2015).
[iii] But see, e.g., Avi Goldfarb & Catherine Tucker, Privacy and Innovation, 12 Innovation Pol’y & the Economy 65, 77 (2012) (noting that privacy regulations will likely restrict innovation in the domain of the advertising-supported Internet) [‘Goldfarb and Tucker’]; Tal Z. Zarsky, The Privacy-Innovation Conundrum, 19 Lewis & Clark L. Rev. 115, 140-41 (2015) (stating that stronger privacy protections will reduce innovation).
[iv] See, e.g., Matthew Grennan & Robert Town, The FDA and the Regulation of Medical Device Innovation: A Problem of Information, Risk, and Access, 4 Penn Wharton Public Policy Initiative 1 (2016) (discussing the relationship between FDA regulations on coronary stents and consumer safety); Rebecca S. Eisenberg, Reexamining Drug Regulation from the Perspective of Innovation Policy, 160 J. Institutional & Theoretical Economics (JITE) 126 (2004) (discussing the impact of FDA regulations on new drug development); David Popp, Innovation and Climate Policy, 2 Annual Review of Resource Economics 283 (2010) (describing the impact of environmental regulations on the development of clean technologies).
[v] Dennis D. Hirsch & Ira S. Rubinstein, Better Safe than Sorry: Designing Effective Safe Harbor Programs for Consumer Privacy Legislation, 10 BNA Privacy & Security Law Report 1639, 1643-46 (2011).
[vi] See, e.g., Goldfarb & Tucker, at 77; Rahul Telang, A Privacy and Security Policy Infrastructure for Big Data, 10 I/S: J. L. & Pol’y for Info Soc’y 783 (2015).
[vii] See, e.g., Daniel J. Solove, Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013); James P. Nehf, Open Book: The Failed Promise of Information Privacy in America 191 (2012); Richard Warner, Undermined Norms: The Corrosive Effect of Information Processing Technology on Informational Privacy, 55 St. Louis L.J. 1047, 1084–86 (2011).
[viii] See Protecting Consumer Privacy in an Era of Rapid Change (2010 FTC Report), available at http://www.ftc.gov/os/2010/12/101201privacyreport.pdf.
February 15: Argyri Panezi — Academic Institutions as Innovators but also Data Collectors - Ethical and Other Normative Considerations
ABSTRACT: In my presentation I wish to discuss the role of academic institutions as innovators, particularly when they are involved in data-driven research projects immediately related to members of their community (students, researchers, administration and faculty) but also to their local communities. With research projects on the Internet of Things and on Smart Cities taking off, there is arguably a need to discuss ethics, codes of conduct and perhaps responsibilities when institutions collect and manage different types of data needed for these projects. I am generally interested in the management of digital resources within academia. I define digital resources broadly to include data in digitized form and other digitized material that is machine-readable, thus material in any form that, when digitized, can ultimately be processed as raw data. Academic institutions have long been familiar with circumstances in which their collection of data, incidental (for example for practical, administrative purposes) or purposeful (for research or for archival purposes), is subject to legal and ethical rules. One can look at several examples to draw analogies from, in longstanding practices within academic environments: recruitment and admissions departments storing all kinds of sensitive data submitted by candidates, academic libraries having access to data about their readers (which books are checked out), science labs conducting experiments in which members of the student body participate, etc. Is the involvement of academia in big-data research projects any different? During the presentation I will try to map the relevant legal issues and also suggest what types of academic research I focus on. A central question is which responsibilities arise when academic institutions partner with industry. There are a number of complex legal issues that arise in this context: an interesting mix of access issues (IP considerations), data protection, and security issues.
To exemplify the complexity I will also be presenting an example coming from my current research in digitization.
February 8: Katherine Strandburg — Decisionmaking, Machine Learning and the Value of Explanation
ABSTRACT: Much of the policy and legal debate about algorithmic decision-making has focused on issues of accuracy and bias. Equally important, however, is the question of whether algorithmic decisions are understandable by human observers: whether the relationship between algorithmic inputs and outputs can be explained. Explanation has long been deemed a crucial aspect of accountability, particularly in legal contexts. By requiring that powerful actors explain the bases of their decisions — the logic goes — we reduce the risks of error, abuse, and arbitrariness, thus producing more socially desirable decisions. Decision-making processes employing machine learning algorithms complicate this equation. Such approaches promise to refine and improve the accuracy and efficiency of decision-making processes, but the logic and rationale behind each decision often remains opaque to human understanding. Indeed, at a technical level, it is not clear that all algorithms can be made explainable and, at a normative level, it is an open question when and if the costs of making algorithms explainable outweigh the benefits. This presentation will begin to map out some of the issues that must be addressed in determining in what contexts, and under what constraints, machine learning approaches to governmental decision-making are appropriate.
February 1: Argyro Karanasiou — A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
ABSTRACT: The paper dissects the intricacies of Automated Decision Making (ADM) and urges refining the current legal definition of AI when pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics, and deep learning. ADM relies upon a plethora of algorithmic approaches and has already found a wide range of applications in marketing automation, social networks, computational neuroscience, robotics, and other fields. Whilst coming up with a toolkit to measure algorithmic determination in automated/semi-automated tasks might prove a tedious task for the legislator, our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user, and the algorithm; this can take various shapes and thus yield different answers to key issues regarding agency. The paper offers a fresh look at the concept of “Machine Intelligence”, which exposes certain vulnerabilities in its current legal interpretation. To highlight this argument, the analysis proceeds in two parts: Part 1 strives to provide a taxonomy of the various levels of automation that reflects distinct degrees of human–machine interaction and can thus serve as a point of reference for outlining the distinct rights and obligations of the programmer and the consumer; driverless cars are used as a case study to explore the several layers of human and machine interaction. These different degrees of automation reflect various levels of complexity in the underlying algorithms, and pose very interesting questions in terms of regulating the algorithms that undertake dynamic driving tasks. Part 2 further discusses the intricate nature of the underlying algorithms and the artificial neural networks (ANN) that implement them, and considers how one can interpret and utilize observed patterns in acquired data. 
Finally, the paper explores the scope for user empowerment and data transparency and discusses attendant legal challenges posed by these recent technological developments.
January 25: Scott Skinner-Thompson — Equal Protection Privacy
ABSTRACT: To the extent the right to privacy exists, it is often understood as universal; if not universal, then of particular importance to marginalized individuals. But in practice, people of privilege tend to fare far better when they bring privacy tort claims than do non-privileged individuals. This, despite doctrine suggesting that those who occupy prominent and public social positions are entitled to diminished privacy tort protections. This Article unearths disparate outcomes in public disclosure tort cases, and uses the unequal results as a lens to expand our understanding of how constitutional equality principles might be used to rejuvenate beleaguered privacy tort law. Scholars and the Supreme Court have long recognized that state action applies to the common law, both because judges make the substantive rules of decision and because they enforce them. Under this theory of state action, the First Amendment has been used as a means of limiting the extent of privacy and defamation torts. But if state action applies to tort law, should other constitutional provisions bear on the substance of common law torts? This Article argues that the answer is yes, and uses the unequal implications of prevailing public disclosure tort doctrine to explore whether constitutional equality principles can be used to reform the currently weak protections provided by black-letter privacy tort law. By so doing, the Article also opens a doctrinally sound basis for a broader discussion of how constitutional liberty, due process, and equality norms might influence tort law across a variety of substantive contexts.
December 7: Tobias Matzner — The Subject of Privacy
ABSTRACT: The paper engages with theories that establish the value of privacy. It compares two accounts of privacy: the first as protecting a particular private space, such as the home or the “private sphere”, and the second as the relative separation of social contexts. Most theories of the value of privacy pertain to the first account, where privacy is seen as a necessary space for an autonomous subject. Using various examples from current privacy research as well as normative positions, the paper shows that this focus on autonomy is problematic, and that the second account of privacy is much better suited to grasp the problems brought about by digital media. The paper goes on to show that the second account of privacy is often linked to the idea of “identity management”; i.e., privacy is not only meant to separate social contexts, but also to clear a space where free decisions can be taken about the personalities one assumes in these contexts. Such a view implies the first account of privacy within the second. Based on theories of Hannah Arendt and Judith Butler, the paper develops an alternative account of privacy and personality that better fits the problems of digital communication. Examples from empirical studies of teenagers’ behavior online illustrate how the implicit individualism in “identity management” can lead to victim blaming. The paper concludes by showing how the value of privacy can be conceived from this perspective. Rather than providing freedom in the sense of autonomy, privacy protects the freedom to be someone else in the future or at other places, someone who need not necessarily be an autonomous person. Thus, privacy ultimately protects the fundamental value of plurality.
November 30: Yafit Lev-Aretz — Data Philanthropy
ABSTRACT: Everybody is busy collecting. The business of collecting data and extracting insights in pursuit of specified goals has never thrived more. The privacy and security implications are terrifying: unlimited information about virtually anyone and anything is being recorded and archived in data banks that are subject to a variety of cyber threats. But alongside the risks lies an enormous opportunity: troves of data represent a boundless wealth of potential insights for the progress of knowledge and society. When the right information is matched with the right questions, numbers can be translated into real-life value by answering pressing questions, mitigating common challenges, and guiding policy decisions. Because data is non-rivalrous, the same information can be analyzed for different purposes, and data that has been deemed useless for one purpose could unlock a world of possibilities for another. Advocates of data sharing have been calling on private sector actors to voluntarily share their data for social impact. Robert Kirkpatrick, the head of the UN Global Pulse Initiative, an R&D lab that uses big data and real-time analytics to make policymaking more agile and effective, explained that “the public sector cannot fully exploit Big Data without leadership from the private sector,” and stressed that “what we need is action that goes beyond corporate social responsibility.” Similarly, Matt Stempeck, Microsoft’s Director of Civic Technology in New York City, wrote: “Companies shaping this data-driven world can contribute to the public good by working directly with public institutions and social organizations to bring their expertise and information assets to bear on shared challenges.” In many instances, this kind of giving has been termed “data philanthropy.” Following a comprehensive introduction to the data philanthropy discourse, this project aims at providing a better understanding of data collaborations, sharing incentives, and practical concerns. 
Subsequently, using the Fair Information Practice Principles framework, the project will submit a set of policy recommendations to capitalize on the potential of data giving while minimizing the risks that could result from such collaborations.
November 16: Helen Nissenbaum — Must Privacy Give Way to Use Regulation?
ABSTRACT: In a departure from traditional modes of privacy regulation, there is growing support for regulating only certain uses of personal information while entirely deregulating its collection. Proponents argue that the safeguards usually associated with privacy protection can be achieved through judicious constraints on use, so that ex ante constraints on collection will not stifle the enormous potential of AI and big data. My paper questions this increasingly popular logic, not only because it is ambiguous to the point of incoherence but also because it plays suspiciously well with the dominant business model of information industry incumbents. Although there is no denying the genuine and unprecedented challenges to privacy posed by data science, the paper argues that fully substituting use restrictions for restrictions on collection will weaken one of the cornerstones of a free society with little assurance of public welfare gains.
November 9: Bilyana Petkova — Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
ABSTRACT: Research shows that in the data privacy domain, the regulation promoted by frontrunner states in federated systems such as the United States or the European Union generates races to the top, not to the bottom. Institutional dynamics or the willingness of major interstate companies to work with a single standard generally create opportunities for the federal lawmaker to level up privacy protection. This article uses federalism to explore whether a similar pattern of convergence (toward the higher regulatory standard) emerges when it comes to the international arena, or whether we witness a more nuanced picture. I focus on the interaction of the European Union with the United States, looking at the migration of legal ideas across the (member) state jurisdictions with a focus on breach notification statutes and privacy officers. The article further analyses recent developments such as the invalidation of the Safe Harbor Agreement and the adoption of a Privacy Shield. I argue that instead of a one-way street, usually conceptualized as the EU ratcheting up standards in the US, the influences between the two blocs are mutual. Such influences are conditioned by the receptivity and ability of domestic actors in both the US and the EU to translate, and often, adapt the “foreign” to their respective contexts. Instead of converging toward a uniform standard, the different points of entry in the two federated systems contribute to the continuous development of two models of regulating commercial privacy that, thus far, remain distinct.
November 2: Scott Skinner-Thompson — Recording as Heckling
ABSTRACT: There are increasing calls for a right to public privacy, and often such calls are justified with reliance on the First Amendment. Similarly, there is a growing body of authority recognizing that recording of public space is also protected by the First Amendment. Both purported rights serve important First Amendment values—recording information can be critical to future speech and, as a form of confrontation to authority, is also a direct form of expression. Likewise, functional efforts to maintain privacy while navigating public space may help create an incubator for thought and future speech, and can also serve as a form of direct expressive resistance to surveillance regimes. But while recordings may be critical to government accountability and have important First Amendment benefits, they also have obvious privacy implications. How do we balance the right to record with the right to maintain privacy? When can the government regulate recording that attempts to breach the privacy shields erected by other citizens? I suggest that the concept of the heckler’s veto provides a promising rubric for analyzing attempts to regulate these sometimes competing forms of “speech.” This piece argues that just as a heckler’s suppression of another’s free speech justifies government regulation of the heckler’s speech, so too when recording (a form of speech) infringes on and pierces reasonable efforts to maintain privacy (also a form of speech), then the government may—through direct regulation or even tort law—limit the ability to record.
October 26: Yan Shvartzhnaider — Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
ABSTRACT: Designing programmable privacy logic frameworks that correspond to social, ethical, and legal norms has been a fundamentally hard problem. The theory of Contextual integrity (CI) (Nissenbaum 2010) offers a model for conceptualizing privacy that is able to bridge technical design with ethical, legal, and policy approaches. While CI is capable of capturing the various components of contextual privacy in theory, it is challenging to discover and formally express these norms in operational terms. In this talk I will discuss our work in designing a framework for crowdsourcing privacy norms based on the theory of contextual integrity.
October 19: Madelyn Sanfilippo — Privacy and Institutionalization in Data Science Scholarship
ABSTRACT: Meta-analysis of methodological institutionalization across three scholarly disciplines provides evidence that not only are traditional statistical quantitative methods more institutionalized and consistent, but also are drawn on to structure data scientific approaches when institutionalization is sought for new and large n quantitative methods. Among the strategies, norms, and rules within this body of literature are various institutionalisms surrounding issues of privacy, with stark contrasts in level of detail and attitudes–such as compliance versus privacy as a social value—based on discipline and methodological approaches. This talk will focus on key insights from recently completed work on institutionalization in data science scholarship and outline preliminary findings from work-in-progress pursuing insight into attitudinal and institutional differences reflected in this literature toward privacy.
October 12: Paula Kift — The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform
ABSTRACT: On June 2, 2015, Congress passed the USA FREEDOM Act, which, among other things, was intended to end the bulk collection of domestic telephony metadata that the National Security Agency (NSA) had been conducting under the authority of Section 215 of the USA PATRIOT Act. The metadata program sparked outrage among privacy and civil liberties advocates across the United States, since it implied that, in the course of foreign intelligence investigations, the U.S. government was collecting the communication records of millions of Americans in bulk, in the absence of any particularized suspicion. The reliance on Section 215 of the PATRIOT Act as the legal basis for the program also raised significant statutory and constitutional concerns. This paper analyzes whether the passage of the USA FREEDOM Act was able to alleviate some of these concerns. It argues that, even though the FREEDOM Act made some headway towards limiting the scope, and improving the accountability, of domestic government surveillance programs, a significant risk remains that the U.S. government can continue collecting large amounts of communications metadata of Americans that are not strictly relevant to any authorized investigation. Most worryingly, the U.S. government may have simply shifted the bulk collection of domestic metadata to a different authority, sweeping up the telecommunication records of millions of Americans at home under the guise of foreign intelligence collection abroad.
October 5: Craig Konnoth — Health Information Equity
ABSTRACT: As of the last few years, the health information of numerous Americans is being collected and used for follow-on, secondary research to study correlations between medical conditions, genetic or behavioral profiles, and treatments. Recent federal legislation and regulations make it easier to use the data of the low income, unwell, and elderly, than that of others, for this research. This imposes disproportionate security and autonomy burdens on these individuals. Those who are well off and pay out of pocket can effectively exempt their data from the publicly available information pot. This presents a problem which modern research ethics is not well equipped to address. Where it considers equity at all, it emphasizes underinclusion and the disproportionate distribution of research benefits, rather than overinclusion and disproportionate distribution of burdens. I rely on basic intuitions of reciprocity and fair play, as well as broader accounts of social and political equity to show that equity in burden distribution is a key aspect of the ethics of secondary research. To satisfy its demands we can use three sets of regulatory and policy levers. First, information collection for public research should expand beyond groups having the lowest welfare. Next, data analyses and queries should more equitably draw on data pools. Finally, we must create an entity to coordinate these solutions using existing statutory authority if possible. Considering health information collection at a systematic level rather than that of individual clinical encounters gives us insight into the broader role health information plays as a site of personhood, citizenship, and community.
September 28: Jessica Feldman — the Amidst Project
ABSTRACT: In this talk I will discuss the amidst project -- an ad-hoc, peer-to-peer, encrypted network for mobile phones -- and the fieldwork that led me to work on it. Drawing on 50+ interviews and surveys with activists, human rights workers, journalists, and engineers in Cairo, Istanbul, Madrid, and New York City, my doctoral dissertation considers surveillance, blocking, and alternate communications methods in the "movements of the squares" and their aftermath. As a response to this fieldwork, I am working with a team of engineers on the amidst network. As a mobile "mesh" network, amidst comes into being when a large group of people are assembled together, and uses each phone as a node to build the network, attempting to provide a solution to the problems of just-in-time blocking and infrastructural surveillance allowed for by centralized telecom. The project also experiments with decentralized, non-hierarchical, localized communication and security practices, which bring about some interesting problems, both philosophically and technically, regarding the fraught relationships among privacy, trust, accountability, and democratic publics.
September 21: Nathan Newman — UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
ABSTRACT: While there has been a flurry of new scholarship on how employers’ use of data analysis may lead to subtle but potentially devastating individual discrimination in employment systems, there has been far less attention to the ways the deployment of big data may be driving down wages for most workers, including those who manage to be hired. This article details the ways big data can be, and in many cases actively is being, deployed to lower wages through hiring practices, through the ways raises are now offered, and through the ways workplaces are organized (and disorganized) to lower employee bargaining power, and how new interpretations of labor law are beginning to, and can in the future, reshape the workplace to address these economic harms. Data analysis is increasingly helping to lower wages in companies, beginning with the hiring process, where pre-hire personality testing helps employers screen out employees who would agitate for higher wages or organize or support unionization drives in their companies. For employees who are hired, companies have massively expanded data-driven workplace surveillance that allows employers to assess which employees are most likely to leave and thereby limit pay increases largely to them, lowering wages over time for workers either less able to find new employment because of their age or less inclined in general to risk doing so. Data analysis and so-called “algorithmic management” have also allowed the centralized monitoring of far-flung workers organized nominally in subcontractors or as individual contractors, while traditional firms, such as those in retail, implement data-driven scheduling that resembles the “on-demand” employment of independent contractors. All of this shifts risk and “downtime” costs to employees and lowers their take-home pay, even as the fragmenting of the workplace makes it harder for workers to collectively organize for higher wages. 
The article addresses how we should rethink and interpret existing labor law in each of these aspects of the employment process. The NLRB can reasonably construe many pre-hire employment tests as violating federal labor law’s prohibition on screening out union sympathizers, much as the EEOC has found that many personality tests violate the Americans with Disabilities Act by allowing indirect identification of people with mental illness. Similarly, since big data analysis can reveal the pro-union sympathies of current employees, under existing prohibitions on “polling” employees for their views, a reasonable extension of the law would be to prohibit sharing any personal data collected by management that might reveal protected conduct or union sympathies with line managers or outside management consultants involved in advising on labor campaigns. The Board can also level the informational playing field by making both hiring algorithms and those determining pay increases more available during collective bargaining. The Board is already moving to expand its “joint employer” doctrine to allow workers to challenge the fragmented workplace increasingly driven by algorithmic management, and a clear recognition that algorithms establish exactly the control over nominally independent contractors or subcontractors’ workers that entitles them to collective bargaining rights with a central employer would further strengthen worker bargaining power. Such a “collective action” approach to the problem is far more likely to succeed than other proposals focused on strengthening individual workers’ privacy or anti-discrimination rights in the workplace with regard to data-driven decision-making. 
As scholars have noted, disadvantaged groups under the civil rights laws may have sharply different preferences in wage versus benefit packages, so a process that increases informational resources for all workers and allows them to negotiate together for the mix of wages, benefits, work conditions, and other “public goods” in the workplace, including privacy protections, will better reflect the overall interests of employees than either a classic economic model based on a marginal worker’s “exit” or a “rights consciousness” litigation approach to reining in individual employment harms. In making this overall argument, the article partially addresses the debate over why wages have stagnated and even fallen below productivity gains over the last four decades, as the deployment of data technology has played a significant and growing role in helping employers extract a disproportionate share of employee productivity gains for the benefit of management and shareholders.
September 14: Kiel Brennan-Marquez — Plausible Cause
ABSTRACT: “Probable cause” is not about probability. It is about plausibility. To determine if an officer has the requisite suspicion to perform a search or seizure, what matters is not the statistical likelihood that a “person, house, paper or effect” is linked to criminal activity. What matters is whether criminal activity provides a convincing explanation of observed facts. For an inference to qualify as plausible, an observer must understand why the inference follows; she must be able to explain its relationship to the facts. Probable inferences, by contrast, do not require explanations. An inference can be probable—in a predictive sense, based on past trends—without a human observer understanding what makes it so. In many cases, plausibility and probability overlap. An inference that accounts for observed facts is often likely to be true, and vice versa. But there is an important sub-set of cases in which the two properties pull apart, raising deep questions about the underpinnings of Fourth Amendment suspicion: inferences generated by predictive algorithms. In this Article, I argue that casting suspicion in terms of plausibility, rather than probability, is both more consistent with established law and crucial to the Fourth Amendment’s normative integrity. Before law enforcement officials may intrude on private life, they must explain why they believe wrongdoing has occurred. This “explanation-giving” requirement has two key virtues. First, it facilitates governance; we cannot effectively regulate what we do not understand. Second, it allows judges to consider the “other side of the story”—the innocent version of events a suspect might offer on her own behalf—before warranting searches and seizures. In closing, I connect these virtues to broader themes of democratic theory. In a free society, legitimacy is not measured solely by outcomes. 
The exercise of state power must be explained—and the explanations must be responsive both to the democratic community writ large and to the specific individuals whose interests are infringed.
April 27: Yan Shvartzhnaider — Privacy and IoT AND Rebecca Weinstein — Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken — Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gurses]
April 13: Florencia Marotta-Wurgler — Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)
April 6: Ira Rubinstein — Big Data and Privacy: The State of Play
March 30: Clay Venetis — Where is the Cost-Benefit Analysis in Federal Privacy Regulation?
March 23: Daisuke Igeta — An Outline of Japanese Privacy Protection and its Problems
Johannes Eichenhofer — Internet Privacy as Trust Protection
March 9: Alex Lipton — Standing for Consumer Privacy Harms
March 2: Scott Skinner-Thompson — Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]
February 24: Daniel Susser — Against the Collection/Use Distinction
February 17: Eliana Pfeffer — Data Chill: A First Amendment Hangover
February 10: Yafit Lev-Aretz — Data Philanthropy
February 3: Kiel Brennan-Marquez — Feedback Loops: A Theory of Big Data Culture
January 27: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 4: Solon Barocas and Karen Levy — Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton — Of Fembots and Men: Privacy Insights from the Ashley Madison Hack
October 21: Paula Kift — Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and co-author Nizan Geslevich Packin — Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser — What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin — Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé — Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson — Performative Privacy
September 9: Kiel Brennan-Marquez — Vigilantes and Good Samaritans
David Krone — Compliance, Privacy and Cyber Security Information Sharing
Edwin Mok — Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing
Dan Rudofsky — Modern State Action Doctrine in the Age of Big Data
April 22: Helen Nissenbaum — 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken — From Collection to Use Regulation? A Comparative Perspective
March 11: Rebecca Weinstein (Cancelled)
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online
Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)
Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead
October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue
September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
January 29: Organizational meeting
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day
March 6: Mariana Thibes — Privacy at Stake: Challenging Issues in the Brazilian Context
March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data