At a Latham & Watkins Forum, panelists debate accountability for decisions made by artificial intelligence

When governments rely on artificial intelligence and other automated systems to make decisions that affect human lives, who can ultimately be held responsible for those decisions? Panelists debated that question and related issues at a Latham & Watkins Forum on “Accountability in the Age of Artificial Intelligence” on February 21.


Professor of Clinical Law Jason Schultz, research lead for law and policy at the AI Now Institute, moderated a discussion that included Justice Mariano-Florentino Cuéllar of the California Supreme Court; Rachel Goodman ’10, staff attorney with the American Civil Liberties Union’s Racial Justice Program; and Rashida Richardson, legislative counsel at the New York Civil Liberties Union.

Select remarks:

Jason Schultz: “The humans in the system of the government, the humans in the system of the courts, other places, we do have some sense they can be held accountable, maybe politically, maybe in the press, maybe actually legally. We sort of think on some level we have a handle on what humans care about, and we can try to think about different deterrence mechanisms or compensation or even punishment in some contexts. But what some people struggle with [is that] as more of these [automated] decisions or adjudications or terminations get made, machines aren’t going to care about the sort of things that the humans care about…. Does that change how we think about accountability?”

Rachel Goodman ’10: “In a lot of places where government entities or even private entities make decisions about us, we have this expectation… that decisions are really being made about you as an individual.… That is not what these [automated] systems, generally speaking, are doing, right? They are creating averages and probabilities and saying we can deduce from lots of people—who look mostly like you in the ways we have decided are salient in the design of this system—that this figure who approximates you as translated through this data ought to be allowed to board the plane or continue receiving your Medicaid benefits or be allowed to be free pending your trial. And that is a really profound shift.”

Mariano-Florentino Cuéllar: “When we hear the arguments for relying on algorithms, they often seem to assert two things… that are in some way in tension with each other. One argument is [that] with the technology we have right now that is based on pattern recognition and not just on the use of symbolic logic, we can actually capture the quirks of human decision making in a meaningful way, because we take lots of different types of data and we don’t make as many assumptions about it. But also this technology can help us overcome human bias. Everyone in the audience should recognize these two things; it’s really actually hard to reconcile them.”

Rashida Richardson: “I also think that attention to decision making is really important, because I think the problem that both systems-based decisions and human decisions have is that for too long we were assuming that both could be objective, and the truth is that they’re both subjective. When thinking about the accountability question... I think it’s working with the assumption that all of these decisions are subjective and informed by some level of bias. Can we correct for that? Is that the right way to ensure accountability, to make certain assumptions about failures and then correct? I don’t know the answer to that, but that’s one thing that we’re grappling with.”

Watch the full video of the event (1 hour, 11 minutes).

Posted March 13, 2018