Closing Statements


Revelations that Cambridge Analytica mined Facebook data in the 2016 election sparked new concerns about data privacy and put social media platforms under more pressure to safeguard users’ data. But the reach and influence of Big Data are only growing. We asked Professor Jason Schultz, director of the Technology Law and Policy Clinic, co-director of the Engelberg Center on Innovation Law & Policy, and research lead for law and policy at NYU’s new AI Now Institute, to talk about how to protect consumer data, how artificial intelligence will change the legal profession, and why he stopped using Facebook.

What did the Facebook–Cambridge Analytica scandal reveal about issues in data privacy?

The Facebook–Cambridge Analytica controversy revealed what we had actually suspected and partially known for a very long time: There’s almost no oversight, internal regulation, auditing, or accountability within most tech companies concerning their data sharing practices. 

What are consumers risking if companies like Facebook do not safeguard our data?

Until the internet, the vast majority of computers and databases were isolated. Now we’ve had massive integration and cross-pollination, not only of existing data sets but also through the creation of vast new data sets about us that cover almost every aspect of our lives: who we know, what we do, where we go, how we work, what we like, what we don’t like. Every single decision that you make, and every single judgment that could be made about you that affects your life, is potentially influenced not only by the data you submit voluntarily but also by data collected about you without your knowledge.

For instance, consider your cell-site location information. When you travel around, your cell phone is almost constantly pinging cell towers. Those towers collect information about your phone; they know exactly who you are and which phone you have. That information is being collected about us all the time, without our consent and without our knowledge. As the Supreme Court’s recent Carpenter case highlighted, there are concerns about government surveillance, but there are also risks of commercial exploitation of this information. For example, what if you go out to bars at certain times of night? What if you visit certain treatment centers or bookstores on a regular basis? Many companies will pay to exploit this information, including many employers.

The idea that you can take control of your data, or make meaningful choices about your privacy and how your data is used, is an outdated notion.

Could a framework like Europe’s General Data Protection Regulation (GDPR) work for the United States? 

If you are a European citizen, GDPR may solve some of these problems, but there are a lot of loopholes. Technology companies can engineer for safety in all kinds of contexts. Google, Facebook, and Amazon have really good security; breaking into their servers is nearly impossible. But these companies have not applied the same rigor to data protection on a social scale. We need to set standards for them to meet. We have law enforcement agencies and consumer protection regulators who hold people accountable. We have environmental standards. In the same way that we value the long-term importance of clean air, clean water, and fertile soil, we need to value protecting individual data from harmful uses.

Where are we today with artificial intelligence?

There is a lot of hype coming out of the AI industry, but fundamentally, these technologies are often a combination of advanced algorithmic systems, large data sets, and massive surveillance and sensor networks funneled into prediction or decision-making processes. In their infancy, such systems were primarily used to make relatively simple technological or commercial choices, such as which ads to display on a website. But as their complexity grows, system designers are expanding their use to much more serious questions, such as “Are you dangerous? Are you a good hire?” These uses raise the stakes for core legal rights, such as equal protection and due process.

There are firms out there that claim to help companies hire people using AI. Every person who comes in for a job interview is recorded. That recording is fed into a system that analyzes every little movement of your face to determine what it thinks is going on inside your brain and what emotions you’re having, to see whether you fit the culture of the company. So it’s not even about your answers. It’s about how your face responds to the questions. Then they rate you. And they say, “This person is a good or bad fit on this scale for the position.”

What if your company has a history of hiring white men for technical jobs? If you task an AI to analyze your hiring data and recommend a profile for a successful job candidate, there is a risk that it will create the profile of someone who is a man and who is white. 

The systems have no context to understand issues that we’ve had around race, inequality, class, gender, sexuality. So many of the solutions proposed are technical solutions from the computer science world. While those people are very smart, unless they’re connected to the broader, multidisciplinary understanding of the issues, they may not know what is at the core of the problem. 
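To make that risk concrete, here is a minimal, hypothetical sketch in Python. The synthetic data and the model are invented for illustration and are not any vendor’s actual system; the point is only that a model trained on historical hiring decisions skewed toward one group learns to score candidates from that group higher, even when skill is held equal.

    # Hypothetical sketch (invented synthetic data, not any real hiring system):
    # a model trained on historically biased hiring decisions reproduces the bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic "historical" data: skill is what should matter, but past hiring
    # decisions also depended on a demographic attribute (1 = majority group).
    skill = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)
    hired = (skill + 2.0 * group + rng.normal(scale=0.5, size=n)) > 1.5

    # The demographic attribute ends up as a feature, as often happens
    # implicitly via proxies (name, zip code, school, hobbies, and so on).
    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # Two equally skilled candidates, differing only in group membership:
    candidates = np.array([[1.0, 1], [1.0, 0]])
    # The majority-group candidate receives a much higher predicted "hire" probability.
    print(model.predict_proba(candidates)[:, 1])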

How will AI affect the practice of law in the next five to 10 years?

We’ve already seen a fair amount of automation in the e-discovery world. The next area where I see this happening is contract comparison. So say there’s a merger between two giant corporations that have millions of contracts with vendors, and you want to understand which contracts are compatible and which raise issues. The economics of law firms are going to have to adjust to the fact that these huge, fee-generating initial intake moments can be automated.
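As a rough illustration of one building block such tools might use, here is a hypothetical sketch that flags vendor clauses diverging from a standard clause using simple TF-IDF text similarity. The clause texts and the 0.5 threshold are invented for illustration; real contract-analysis products are far more sophisticated.

    # Hypothetical sketch of automated contract comparison: flag vendor clauses
    # that diverge most from a standard clause, using TF-IDF cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    standard_clause = "Either party may terminate this agreement with 30 days written notice."
    vendor_clauses = {
        "Vendor A": "Either party may terminate this agreement upon 30 days prior written notice.",
        "Vendor B": "This agreement may only be terminated by the supplier, with 90 days notice.",
        "Vendor C": "Termination requires mutual written consent of both parties.",
    }

    docs = [standard_clause] + list(vendor_clauses.values())
    tfidf = TfidfVectorizer().fit_transform(docs)

    # Similarity of each vendor clause to the standard clause; low scores get flagged.
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    for (name, _), score in zip(vendor_clauses.items(), scores):
        flag = "review" if score < 0.5 else "ok"  # illustrative threshold
        print(f"{name}: similarity {score:.2f} -> {flag}")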

The ability of AI to produce a legal opinion or analysis is years away, if it ever arrives.

Are you on Facebook or other social media platforms? 

I was initially on Facebook, but I saw that their data practices were going down a very bad road, so I deleted my account years ago. Twitter is much more conservative, and LinkedIn is also much more conservative about how much they share and with whom. But honestly, I just assume that anything I share can be made public.

This Q&A was edited and condensed for clarity. Posted September 4, 2018.