Structured decision-making and technology
The Center is exploring the use of risk assessment instruments, algorithmic tools, and artificial intelligence in the criminal legal system and other systems that govern people's lives.
These tools have been designed, deployed, and advanced as mechanisms to improve decision-making, but they carry the potential to exacerbate and reify the racial bias that already infects those systems of governance. The Center convenes researchers, advocates, and national leaders on algorithmic tools and technologies and collaborates with social justice and technology-focused organizations to produce reports, toolkits, and scholarship that more fully illuminate the impact these tools have on communities of color.
As new insights emerge, we engage in advocacy at the local and national levels to ensure that decision-makers have the information they need so that, if and when these tools are deployed, they reduce rather than exacerbate racial harm and inequality. Some examples of the Center's work in this space include:
- Joint Statement with AI Now Institute on the Pennsylvania Commission on Sentencing's Risk Assessment Instrument
- The Use of Pretrial "Risk Assessment" Instruments: A Shared Statement of Civil Rights Concerns
- Membership on the New York City Automated Decision Systems Task Force
- Litigating Algorithms 2018 and 2019
- Report with ACLU: What Does Fairness Look Like? Conversations on Race, Risk Assessment Tools, and Pretrial Justice