Privacy Research Group

The Privacy Research Group is a weekly meeting of students, professors, and industry professionals who are passionate about exploring, protecting, and understanding privacy in the digital age.

Joining PRG:

Because we deal with early-stage work in progress, attendance at meetings of the Privacy Research Group is generally limited to researchers and students who can commit to ongoing participation in the group. To discuss joining the group, please contact Professor Katherine Strandburg or Paula Kift. If you are interested in these topics, but cannot commit to ongoing participation in PRG, you may wish to join the PRG-All mailing list.
 

PRG Calendar

Fall 2016

December 7:
November 30:
November 23:
November 16:
November 9:
November 2:
October 26:
October 19:
October 12:
October 5:
September 28:
September 21: Nathan Newman - UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
     ABSTRACT:
While there has been a flurry of new scholarship on how employer use of data analysis may lead to subtle but potentially devastating individual discrimination in employment systems, there has been far less attention to the ways the deployment of big data may be driving down wages for most workers, including those who manage to be hired. This article details the ways big data can be and in many cases is actively being deployed to lower wages through hiring practices, in the ways raises are now being offered, and in the ways workplaces are organized (and disorganized) to lower employee bargaining power—and how new interpretations of labor law are beginning to and can in the future reshape the workplace to address these economic harms. Data analysis is increasingly helping to lower wages in companies beginning in the hiring process, where pre-hire personality testing helps employers screen out employees who will agitate for higher wages and organize or support unionization drives in their companies. For employees who are hired, companies have massively expanded data-driven workplace surveillance that allows employers to assess which employees are most likely to leave and thereby limit pay increases largely to them, lowering wages over time for workers either less able to find new employment because of their age or less inclined in general to risk doing so. Data analysis and so-called “algorithmic management” have also allowed the centralized monitoring of far-flung workers organized nominally in subcontractors or as individual contractors, while traditional firms such as those in retail implement data-driven scheduling that resembles the “on-demand” employment of independent contractors. All of this shifts risk and “downtime” costs to employees and lowers their take-home pay, even as the fragmenting of the workplace makes it harder for workers to collectively organize for higher wages. The article addresses how we should rethink and interpret existing labor law in each of these aspects of the employment process. The NLRB can reasonably construe many pre-hire employment tests as violating federal labor law’s prohibition on screening out union sympathizers, much as the EEOC has found many personality tests violate the Americans with Disabilities Act by allowing indirect identification of people with mental illness. Similarly, since big data analysis can reveal the pro-union sympathies of current employees, under existing prohibitions on “polling” employees for their views, a reasonable extension of the law would be to prohibit sharing with line managers, or with outside management consultants advising on labor campaigns, any personal data collected by management that might reveal protected conduct or union sympathies. The Board can also level the informational playing field by making both hiring algorithms and those determining pay increases more available during collective bargaining. The Board is already moving to expand its “joint employer” doctrine to allow workers to challenge the fragmented workplace increasingly driven by algorithmic management, and a clear recognition that algorithms establish exactly the control over nominally independent contractors and subcontractors’ workers that entitles them to collective bargaining rights with a central employer would further strengthen worker bargaining power.
Such a “collective action” approach to the problem is far more likely to succeed than other proposals focused on strengthening individual worker privacy or anti-discrimination rights in the workplace with regard to data-driven decision-making. As scholars have noted, disadvantaged groups under the civil rights laws may have sharply different preferences in wage versus benefit packages, so a process that increases informational resources for all workers and allows them to negotiate together for the mix of wages, benefits, work conditions and other “public goods” in the workplace, including privacy protections, will better reflect the overall interests of employees than either a classic economic model based on a marginal worker’s “exit” or a “rights consciousness” litigation approach to redressing individual employment harms. In making this overall argument, the article partially addresses the debate over why wages have stagnated and even fallen below productivity gains over the last four decades, arguing that the deployment of data technology has played a significant and growing role in helping employers extract a disproportionate share of employee productivity gains to the benefit of management and shareholders.

September 14: Kiel Brennan-Marquez - Plausible Cause
     ABSTRACT:
“Probable cause” is not about probability. It is about plausibility. To determine if an officer has the requisite suspicion to perform a search or seizure, what matters is not the statistical likelihood that a “person, house, paper or effect” is linked to criminal activity. What matters is whether criminal activity provides a convincing explanation of observed facts. For an inference to qualify as plausible, an observer must understand why the inference follows; she must be able to explain its relationship to the facts. Probable inferences, by contrast, do not require explanations. An inference can be probable—in a predictive sense, based on past trends—without a human observer understanding what makes it so. In many cases, plausibility and probability overlap. An inference that accounts for observed facts is often likely to be true, and vice versa. But there is an important sub-set of cases in which the two properties pull apart, raising deep questions about the underpinnings of Fourth Amendment suspicion: inferences generated by predictive algorithms. In this Article, I argue that casting suspicion in terms of plausibility, rather than probability, is both more consistent with established law and crucial to the Fourth Amendment’s normative integrity. Before law enforcement officials may intrude on private life, they must explain why they believe wrongdoing has occurred. This “explanation-giving” requirement has two key virtues. First, it facilitates governance; we cannot effectively regulate what we do not understand. Second, it allows judges to consider the “other side of the story”—the innocent version of events a suspect might offer on her own behalf—before warranting searches and seizures. In closing, I connect these virtues to broader themes of democratic theory. In a free society, legitimacy is not measured solely by outcomes. The exercise of state power must be explained—and the explanations must be responsive both to the democratic community writ large and to the specific individuals whose interests are infringed.
 

Spring 2016

April 27: Yan Schvartzschnaider - Privacy and IoT AND Rebecca Weinstein - Net Neutrality's Impact on FCC Regulation of Privacy Practices

April 20: Joris van Hoboken - Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gurses]


April 13: Florencia Marotta-Wurgler - Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)
     ABSTRACT:
The approach to the protection of information privacy in the United States has been sharply criticized as weak and incomplete, consisting mostly of a handful of area-specific laws and market self-regulation, and lacking any clear and general guidelines or enforcement. Yet many commentators, regulators, and industry participants believe that the Federal Trade Commission’s use of Section 5 to bring actions against firms engaged in unfair and deceptive practices has effectively filled this void by creating and enforcing guidelines which industry participants then follow. In this paper, we ask whether they actually do so. Specifically, after the FTC brings a Section 5 action against a given firm for a specific type of unfair or deceptive practice, do other firms that engage (or claim to engage) in the same problematic practice respond to the consent agreement? We address this question in a specific setting. We take weekly snapshots of the privacy policies of 230 firms from markets where the FTC has been active and where privacy concerns are nontrivial—social networks, dating sites, cloud computing, message boards, news and reviews, and gaming—beginning in 2010 and ending in 2015. Our focus is on a small, relatively “clean” set of actions involving practices that are fully identifiable from the terms of the privacy policy, with a special focus on compliance with the US-EU Safe Harbor Agreement after the FTC brings enforcement actions against offending firms. In these situations, our fine-grained, longitudinal data allow for simple before-and-after analyses. While we cannot see whether firms comply with the terms in their policies, of course, we can observe whether their privacy policies themselves violate the terms of FTC enforcement actions and whether, as many believe, firms modify their policies to comply with new guidelines. We can also observe whether firms respond to FTC actions in other ways, such as by reducing the number of actionable promises. We find that the broader effects of these FTC actions, which involve important online privacy issues brought against some of the most prominent online firms, are typically undetectably small, and in one case slightly perverse. Even when the FTC specifically identifies a term as deceptive or unfair, websites that have the term in their policy virtually always keep it.

April 6: Ira Rubinstein - Big Data and Privacy: The State of Play

     ABSTRACT: Big data undermines modern conceptions of privacy law in at least two ways. First, big data challenges the Fair Information Practices (FIPs), which form the basis of all modern privacy law, by exploding the core premises of informed choice and data minimization. Second, the classic FIPs seem ill-equipped to handle a new class of privacy violations and related harms in which algorithmic processes and/or inaccurate or biased data lead to discriminatory actions against protected groups. Regulators and policy experts have sought to address these problems in one of five ways: first, by extending the FIPs to directly address “profiling” (as in the new EU General Data Protection Regulation) or "out of context" data collection and use (as in the draft Obama privacy bill); second, by narrowing the FIPs to focus primarily on use regulations, while developing new balancing tests that weigh the costs and benefits of specific uses of big data under the FTC’s unfairness standard or similar criteria; third, by recasting the FIPs in terms of technological due process; fourth and fifth, by supplementing (or even supplanting) the FIPs, whether by describing new business models premised on consumer empowerment or by developing new technological solutions under the banner of “fairness by design.” This essay seeks to capture and evaluate the current state of play in privacy and big data by analyzing the strengths and weaknesses of all five responses to the FIPs and, if possible, synthesizing a more satisfactory approach.

March 30: Clay Venetis - Where is the Cost-Benefit Analysis in Federal Privacy Regulation?


March 23: Daisuke Igeta - An Outline of Japanese Privacy Protection and its Problems
     ABSTRACT: In Japan, no statute includes the word "privacy"; the concept of privacy has instead developed through many judicial cases. Because of this lack of statutory grounding, Japan faces problems on both sides: in some cases privacy is overprotected, while in others it should be protected more strongly. I will introduce some of the important cases and the modern problems in Japan.


Johannes Eichenhofer - Internet Privacy as Trust Protection
     ABSTRACT: This presentation argues for a legal conception of Internet privacy based on the idea of trust protection. The protection of trust through legal certainty is considered one of the key elements of both German and European law. It applies both to the individual's relations with the State – governed by “public privacy rules” – and to relations with private entities (e.g. Internet service providers) – governed by “private privacy rules”. Even though the latter relationship is of enormous relevance for Internet users’ privacy, it finds only weak or even non-existent protection under current German or European constitutional law. This condition can be challenged from the perspective of Internet privacy as trust protection.


March 9: Alex Lipton - Standing for Consumer Privacy Harms
     ABSTRACT: Courts are struggling to apply traditional standing doctrine to claims involving modern consumer privacy harms, leading to inconsistent outcomes for plaintiffs alleging near-identical injuries. However, while the privacy interests and resulting injuries prove similar, not all consumer privacy claims are the same. This Article hypothesizes that federal courts are more likely to recognize consumer privacy harms as cognizable for standing when framed as statutory or contract-based harms, as opposed to tort-based harms. The distinction between statutory and tort-based harms aligns with the normative goals of standing law, which respects legislative recognition of novel harms and raises concerns where the judiciary attempts to extend its reach to injuries previously unrecognized by courts or Congress. The distinction between contract and tort-based harms has not been previously recognized in standing doctrine, but may reflect courts’ reticence to construct privacy standards outside of those agreed to between the parties (i.e. by contract), in line with the normative goals of consumer contract law. To test this hypothesis, I compare rates of dismissal on standing grounds between (1) statutory-based consumer privacy harm claims; (2) contract-based consumer privacy harm claims; and (3) tort-based consumer privacy harm claims, providing empirical support for the claim that courts are more likely to dismiss claims based in tort rather than statute or contract. Finally, I discuss the implications of the dismissal rates on standing grounds for the future of consumer privacy protection.


March 2: Scott Skinner-Thompson - Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]


February 24: Daniel Susser - Against the Collection/Use Distinction


February 17: Eliana Pfeffer - Data Chill: A First Amendment Hangover


February 10: Yafit Lev-Aretz - Data Philanthropy


February 3: Kiel Brennan-Marquez - Feedback Loops: A Theory of Big Data Culture


January 27: Leonid Grinberg - But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
 

Fall 2015

December 2: Leonid Grinberg - But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race AND Kiel Brennan-Marquez - Spokeo and the Future of Privacy Harms

November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
     BACKGROUND READING: Courts and Predictive Algorithms
 
November 11: Joris van Hoboken - Privacy, Data Sovereignty and Crypto

November 4: Solon Barocas and Karen Levy - Understanding Privacy as a Means of Economic Redistribution

October 28: Finn Brunton - Of Fembots and Men: Privacy Insights from the Ashley Madison Hack

October 21: Paula Kift - Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe
     ABSTRACT: In the summer of 2015, tens of thousands of forcibly displaced persons arrived at the borders of Europe. At least in one regard the continent was prepared: over the years it had developed an extensive surveillance assemblage that disparages asylum seekers as “crimmigrants” and subjects them to extensive systems of discipline and control, often long before they embark on their perilous journey to Europe. This paper treats privacy as an aspect of human dignity, and argues that denying asylum seekers informational, visual, physical, and decisional privacy reduces them to homines sacri, or bare life. The paper will analyze EU law and policy, German constitutional law and the media coverage of the refugee crisis based on theories of sovereignty, biopolitics, visual culture, social psychology, and critical border studies.

October 14: Yafit Lev-Aretz and Nizan Geslevich Packin - Between Loans and Friends: On Social Credit and the Right to be Unpopular
     ABSTRACT: Credit scoring systems calculate the specific level of risk that a person or entity brings to a particular transaction. These levels of risk are compiled into a credit score, a numerical expression of one’s financial health at a given point in time. Certain laws, such as the Fair Credit Reporting Act, the Fair and Accurate Credit Transactions Act, the Equal Credit Opportunity Act, and the recent Dodd-Frank Wall Street Reform and Consumer Protection Act, place limits on the type of information that can be used to calculate creditworthiness and the ways in which it may be put to use. These laws have been effectively applied to conventional formulas employed by traditional lenders in order to protect certain rights of those being evaluated. But in the last few years, new, aggressive, and loosely regulated lenders have become increasingly popular, especially among certain populations such as millennials and the financially underserved. Some of these online marketplace lenders calculate their customers’ creditworthiness based on big-data analytics that are said to significantly increase the accuracy of the scoring methods. Specifically, some lenders have built their score-generating algorithms around behavioral data gleaned from social media and social networking information, including the quantity and quality of social media presence; the identity and features of an applicant’s contacts; an applicant’s online social ties and interactions; contacts’ financial standing; an applicant’s personality attributes as extracted from her online footprints; and more. This Article studies the potential consequences of social credit systems that are predicated on a simple transaction: authorized use of highly personal information in return for better interest rates. Following a description of the trend, the Article moves to analyze the inclination of rational and irrational customers to be socially active online and/or disclose all their online social-related information for financial ranking purposes. This examination includes, inter alia, customers’ preferences as well as mistakes, attempts to manipulate the system, customers’ self-doxing or lack thereof, and lenders’ inferences about their customers. The Article then explains the potential harms that could result from social-based financial ranking – especially if it became the new creditworthiness baseline – focusing on (i) discrimination and social polarization ensuing from customers adapting their behavior to the biased and limited algorithmic modeling, (ii) the use of inaccurate or inappropriate data in automated processes, which could lead to flawed financial decisions, and (iii) broader privacy concerns. The social credit trend is then compared with other financially sound yet socially undesired practices, such as the use of medical information in creditworthiness assessments. The Article concludes by introducing a limited “right to be unpopular” to accommodate the welcome aspects of social credit systems while mitigating many of the trend’s undesired consequences.

October 7: Daniel Susser - What's the Point of Notice?

September 30: Helen Nissenbaum and Kirsten Martin - Confounding Variables Confounding Measures of Privacy

September 23: Jos Berens and Emmanuel Letouzé - Group Privacy in a Digital Era
     ABSTRACT: Big Data has blurred the boundaries between individual and group data. Through the sheer number and richness of databases and the increasing sophistication of algorithms, the “breadcrumbs” left behind by each one of us have not only multiplied to a degree that calls our individual privacy into question; they have also created new risks for groups, who can be targeted and discriminated against unbeknownst to themselves, or even unbeknownst to data analysts. This challenges us to enrich our approach to privacy. Where individual privacy might once have sufficed to rein in state and corporate surveillance and the neighbors’ curiosity, and to give individuals a measure of control over their reputations and security, today it can leave groups vulnerable to discrimination and targeting and, what’s more, leave them unaware of that risk. The concept of group privacy attempts to supplement individual privacy by addressing this blindspot.

September 16: Scott Skinner-Thompson - Performative Privacy
     ABSTRACT: Conventional legal theory suggests that the right to privacy is non-existent once one enters the public realm. Still, some scholars contend that privacy ought to exist in public—but they justify this right to “public privacy” with reference to other, ancillary values privacy may serve (for instance, public privacy may be necessary to make the freedoms of movement and association meaningful in practice). This Article advances the pro-public-privacy theories one step further, arguing that demands for public privacy are more accurately conceptualized as a form of performative resistance against an ever-pervasive surveillance society. For example, when a person wears a hoodie in public to obscure their identity, they are engaged in a form of active, expressive resistance to the surveillance regime—communicating in no uncertain terms a refusal to be surveilled. This Article isolates and labels “performative privacy” as a social practice, and explains how this identification of public, performative privacy will provide doctrinal and discursive solutions to some of our most pressing social controversies. By demonstrating that demands for public privacy are inherently expressive, the Article helps establish that public privacy is grounded in the First Amendment and entitled to its robust protections. Discursively, directly linking public privacy performances with the well-ensconced freedom of expression will help shift societal reaction to such privacy demands from suspicion to embrace. Moreover, to the extent that acts of performative privacy cut across conflicts traditionally viewed in terms of racial, religious, or gender identity (Trayvon Martin’s hoodie, bans on head veils, and transgender demands for gender privacy are some examples), performative privacy has the potential to provide a more universal and unifying normative response to these conflicts.
 
September 9: Kiel Brennan-Marquez - Vigilantes and Good Samaritans
 

Spring 2015

April 29: Sofia Grafanaki — Autonomy Challenges in the Age of Big Data
          David Krone — Compliance, Privacy and Cyber Security Information Sharing
          Edwin Mok — Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing
          Dan Rudofsky — Modern State Action Doctrine in the Age of Big Data


April 22: Helen Nissenbaum — 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken — From Collection to Use Regulation? A Comparative Perspective
April 8: Bilyana Petkova — Privacy and Federated Law-Making in the EU and the US: Defying the Status Quo?
April 1: Paula Kift — Metadata: An Ontological and Normative Analysis

March 25: Alex Lipton — Privacy Protections for the Secondary User of Consumer-Watching Technologies

March 11: Rebecca Weinstein (Cancelled)
March 4: Karen Levy & Alice Marwick — Unequal Harms: Socioeconomic Status, Race, and Gender in Privacy Research


February 25: Luke Stark — NannyScam: The Normalization of Consumer-as-Surveillor


February 18: Brian Choi — A Prospect Theory of Privacy

February 11: Aimee Thomson — Cellular Dragnet: Active Cell Site Simulators and the Fourth Amendment

February 4: Ira Rubinstein — Anonymity and Risk

January 28: Scott Skinner-Thompson — Outing Privacy

 

Fall 2014

December 3: Katherine Strandburg — Discussion of Privacy News [which can include recent court decisions, new technologies or significant industry practices]

November 19: Alice Marwick — Scandal or Sex Crime? Ethical and Privacy Implications of the Celebrity Nude Photo Leaks

November 12: Elana Zeide — Student Data and Educational Ideals: examining the current student privacy landscape and how emerging information practices and reforms implicate long-standing social and legal traditions surrounding education in America. BACKGROUND READING: The Proverbial Permanent Record [PDF]

November 5: Seda Guerses — Let's first get things done! On division of labor and practices of delegation in times of mediated politics and politicized technologies
October 29: Luke Stark — Discussion on whether “notice” can continue to play a viable role in protecting privacy in mediated communications and transactions given the increasing complexity of the data ecology and economy.
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online

Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)

Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgement in Europe: Taking Stock and Looking Ahead

October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue 

September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
 

Spring 2014

April 30: Seda Guerses — Privacy is Security is a prerequisite for Privacy is not Security is a delegation relationship
April 23: Milbank Tweed Forum Speaker — Brad Smith: The Future of Privacy
April 16: Solon Barocas — How Data Mining Discriminates - a collaborative project with Andrew Selbst, 2012-13 ILI Fellow
March 12: Scott Bulua & Amanda Levendowski — Challenges in Combatting Revenge Porn


March 5: Claudia Diaz — In PETs we trust: tensions between Privacy Enhancing Technologies and information privacy law. The presentation is drawn from the paper “Hero or Villain: The Data Controller in Privacy Law and Technologies,” with Seda Guerses and Omer Tene.

February 26: Doc Searls — Privacy and Business

February 19: Report from the Obfuscation Symposium, including brief tool demos and individual impressions

February 12: Ira Rubinstein — The Ethics of Cryptanalysis: Code Breaking, Exploitation, Subversion and Hacking
February 5: Felix Wu — The Commercial Difference, which grows out of a piece just published in the University of Chicago Legal Forum called The Constitutionality of Consumer Privacy Regulation

January 29: Organizational meeting
 

Fall 2013

December 4: Akiva Miller — Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy? AND Malte Ziewitz — What does transparency conceal?
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace

November 6: Karen Levy — Beating the Box: Digital Enforcement and Resistance
October 23: Brian Choi — The Third-Party Doctrine and the Required-Records Doctrine: Informational Reciprocals, Asymmetries, and Tributaries
October 16: Seda Gürses — Privacy is Don't Ask, Confidentiality is Don't Tell
October 9: Katherine Strandburg — Freedom of Association Constraints on Metadata Surveillance
October 2: Joris van Hoboken — A Right to be Forgotten
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting


Spring 2013

May 1: Akiva Miller — What Do We Worry About When We Worry About Price Discrimination?
April 24: Hannah Bloch-Wehba and Matt Zimmerman — National Security Letters [NSLs]

April 17: Heather Patterson — Contextual Expectations of Privacy in User-Generated Mobile Health Data: The Fitbit Story
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump — Cotterman Case; Paula Helm — Anonymity in AA

April 3: Ira Rubinstein — Voter Privacy: A Modest Proposal
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day

March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
March 6: Mariana Thibes — Privacy at Stake, Challenging Issues in the Brazilian Context
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
 

Fall 2012

December 5: Martin French — Preparing for the Zombie Apocalypse: The Privacy Implications of (Contemporary Developments in) Public Health Intelligence
November 28: Scott Bulua and Catherine Crump — A framework for understanding and regulating domestic drone surveillance

November 21: Lital Helman — Corporate Responsibility of Social Networking Platforms

November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception

November 7: Sophie Hood — New Media Technology and the Courts: Judicial Videoconferencing
October 24: Matt Tierney and Ian Spiro — Cryptogram: Photo Privacy in Social Media
October 17: Frederik Zuiderveen Borgesius — Behavioural Targeting. How to regulate?

October 10: Discussion of 'Model Law'

October 3: Agatha Cole — The Role of IP address Data in Counter-Terrorism Operations & Criminal Law Enforcement Investigations: Looking towards the European framework as a model for U.S. Data Retention Policy
September 26: Karen Levy — Privacy, Professionalism, and Techno-Legal Regulation of U.S. Truckers
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data