NYU Law and NYU’s AI Now Institute analyze the ways emerging technology encroaches on civil liberties

Automated decision-making systems are designed to help eliminate human error and bias. But since they are designed by humans, how can these technologies be prevented from repeating and encoding existing biases? In a close partnership between NYU’s AI Now Institute and NYU Law, legal scholars such as AI Now Policy Director Rashida Richardson and Professor of Clinical Law Jason Schultz are analyzing how emerging technologies affect civil liberties, and how Americans can advocate for stronger legal protections.

Founded at NYU in 2017 by Kate Crawford, NYU Distinguished Research Professor and Principal Researcher at Microsoft, and Meredith Whittaker, NYU Distinguished Research Scientist and founder and lead of Google’s Open Research group, the AI Now Institute focuses on developing new methods for measuring, analyzing, and improving AI systems within the rapidly changing social contexts in which they are embedded. The institute draws on a range of NYU expertise, partnering with leading experts at the Law School; the Tandon School of Engineering; the Steinhardt School of Culture, Education, and Human Development; the Center for Data Science; the Courant Institute of Mathematical Sciences; and the Stern School of Business. The AI Now Institute also partners with the American Civil Liberties Union and the Partnership on AI.

Recently, AI Now collaborated with the Law School to study the impact of predictive technologies in the criminal justice system. In a new paper titled “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” Richardson, Schultz, and Crawford found that predictive policing algorithms are prone to racial bias because they are often powered by racially biased data.

Rashida Richardson

These predictive systems analyze past crime patterns to try to predict where and when crimes are likely to occur, or who is likely to be a victim or perpetrator of a crime. But due to a lack of transparency and oversight, information about which data power these algorithms and how widely they are used is rarely released to the public, Richardson says.

“I don’t think anyone knows an actual number in use,” she says. “There is an opaqueness in the process at multiple levels.”

In their research, Richardson, Schultz, and Crawford identified nine police jurisdictions that used algorithms built on data collected while those jurisdictions were under government investigation for corrupt, racially biased, or otherwise illegal policing practices.

Because black and brown communities are often unconstitutionally overpoliced, the paper argues, these predictive systems risk creating a racist feedback loop for those communities. If the systems train their “thinking” on data derived from unlawful practices, they are liable to recommend the same unlawful policing practices going forward.

“Police data reflects police practices and policies,” the paper says. “If a group or geographic area is disproportionately targeted for unjustified police contacts and actions, this group or area will be overrepresented in the data, in ways that often suggest greater criminality.”
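The mechanics of that feedback loop can be sketched in a few lines of code. The toy simulation below is not from the paper; the numbers, the area names, and the simplified “hotspot” patrol rule are illustrative assumptions, meant only to show how an initial skew in the records can reinforce itself.

```python
# Toy simulation of the "dirty data" feedback loop. The setup, numbers, and
# winner-take-all "hotspot" allocation rule are illustrative assumptions, not
# the authors' model: two areas with identical underlying crime rates, one of
# which starts with more recorded incidents because it was over-policed.

TRUE_CRIME_RATE = 0.10           # same underlying rate in both areas
recorded = {"A": 200, "B": 100}  # historical records already skewed toward A
PATROLS_PER_YEAR = 100

for year in range(1, 6):
    # "Predictive" allocation: send all patrols to the area with the most
    # recorded incidents (a simplified hotspot rule).
    hotspot = max(recorded, key=recorded.get)

    # New records are generated where patrols are, so only the hotspot's
    # numbers grow -- the data keeps confirming the original skew.
    recorded[hotspot] += int(PATROLS_PER_YEAR * TRUE_CRIME_RATE)

    print(f"year {year}: hotspot={hotspot}, recorded={recorded}")
```

In this sketch both areas have the same underlying crime rate, yet the area that starts out overrepresented in the records receives all of the patrols, and only its numbers grow. The recorded gap widens every year instead of correcting itself, which is the dynamic the authors warn about.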

Richardson, Schultz, and Crawford call for greater standardization and transparency in what police deem crime-relevant data. The authors also note that it may be impractical to address these risks adequately in predictive AI systems that rely on historical police data, given the history of bias in policing.

Jason Schultz

The Law School’s Center on Race, Inequality, and the Law (CRIL) has also partnered with AI Now to advocate for reform and transparency in the criminal justice system. In November 2018, CRIL and AI Now submitted public comments on the Pennsylvania Commission on Sentencing’s Sentence Risk Assessment Instrument. The pilot program used biographical and criminal-history data to power an assessment tool meant to aid judges in gauging a defendant’s risk of recidivism, of skipping bail, and of other outcomes.

“Risk assessments used in sentencing perpetuate racial bias, inappropriately shape judges’ perceptions of individual cases, and fail to reduce incarceration or improve public safety,” wrote the authors of the public comments, Richardson, Schultz, CRIL Executive Director Vincent Southerland, and Professor of Clinical Law Anthony Thompson.

They proposed using impact assessment tools to evaluate the Sentence Risk Assessment Instrument, a process that would solicit public input on the use of predictive systems and provide regular progress reports detailing how the technology is being used.

The Law School and AI Now have worked together on other recent projects. A workshop co-sponsored by the Brennan Center for Justice examined the Trump Administration’s use of data harvesting to target immigrant communities. Another event, co-hosted with the Institute on International Law and Justice, focused on Crawford’s “Anatomy of an AI System,” which examines the Amazon Echo as an anatomical map of human labor, data, and planetary resources.

Looking to the future of the partnership, Schultz points to several ways in which NYU Law students can get involved: “If you’re interested in the intersection between law and machine learning and racial justice, or you’re interested in the intersection between law and housing policy and the ways in which automation is impacting labor and housing for poor people, there are experts in those social science and technical domains that AI Now brings into the picture.”

He adds: “If you’re interested in these questions, the AI Now Institute provides an opportunity to get involved in that research that no other law school has.”

Posted May 9, 2019