Structured decision-making and technology

The Center is exploring the use of risk assessment instruments, algorithmic tools, and artificial intelligence in the criminal legal system and other systems that govern people's lives.

These tools have been designed, deployed, and advanced as mechanisms to improve decision-making, but they carry the potential to exacerbate and reify the racial bias that already infects these systems of governance. The Center convenes researchers, advocates, and national leaders on algorithmic tools and technologies, and collaborates with social justice and technology-focused organizations to produce reports, toolkits, and scholarship that more fully illuminate the impact these tools have on communities of color.

As new insights emerge, we engage in advocacy at the local and national levels to ensure decision-makers are armed with the right information, so that if and when these tools are deployed, they are used to reduce, rather than exacerbate, racial harm and inequality. Some examples of the Center's work in this space include:

Event Spotlight: How Can Artificial Intelligence Be Used for Good in the Criminal Legal System? 

On Wednesday, September 13th, the Center hosted a virtual conversation with a team of legal scholars, policy advocates, and computer scientists to explore a new direction for AI-informed decision-making as a tool for advancing justice. View the event recording below!

Freedom of Information Act (FOIA) Training for Beginners

In this training, co-sponsored by the Center on Race, Inequality, and the Law and the Electronic Frontier Foundation, participants learn the ins and outs of initiating and navigating records requests to the federal government under the Freedom of Information Act (FOIA).

Race and Technology News Updates

Court Cases

City and State Updates

Federal Updates

General News Updates

Global Updates

People and Events to Note

Archive of Past Updates