Algorithms and Explanations


Thursday-Friday, April 27-28, 2017
NYU School of Law

Abstract: Explanation has long been deemed a crucial aspect of accountability.  By requiring that powerful actors explain the bases of their decisions — the logic goes — we reduce the risks of error, abuse, and arbitrariness, thus producing more socially desirable decisions.  Decisionmaking processes employing machine learning algorithms and similar data-driven approaches complicate this equation.  Such approaches promise to refine and improve the accuracy and efficiency of decisionmaking, but the logic and rationale behind each decision remain opaque to human understanding.  The conference will grapple with the question of when and to what extent decisionmakers should be legally or ethically obligated to provide humanly meaningful explanations of individual decisions to those who are affected or to society at large.  The ILI is grateful to Microsoft Corporation for its generous support of this conference.

Thursday, April 27

8:30-9:00         Breakfast

9:00-9:15         Introductory Remarks

I. Reasons for Reasons from Law and Ethics

Kevin Stack, Vanderbilt (Law)
Katherine Strandburg, NYU (Law)
Andrew Selbst, Yale (Information Society Project) & Georgetown (Law)
Moderator: Helen Nissenbaum, NYU (MCC) & Cornell Tech (Information)

10:15-10:30     Break

II. Automated Decisionmaking and Challenges to Explanation-Giving

Duncan Watts, Microsoft Research
Jenna Burrell, UC Berkeley (Information)
Solon Barocas, Microsoft Research
Moderator: Bilyana Petkova, NYU (Law)

11:45-1:00       Lunch

1:00-3:45         Session (with a 15-minute break)

III. Modes of Explanation in Machine Learning: What Is Possible and What Are the Tradeoffs?

Foster Provost, NYU (Stern School of Business)
Krishna Gummadi, Max Planck Institute for Software Systems
Anupam Datta, Carnegie Mellon (CS/ECE)
Enrico Bertini, NYU (Engineering)
Alexandra Chouldechova, Carnegie Mellon (Public Policy/Statistics)
Zachary Lipton, UCSD (Computer Science and Engineering)
Moderator: John Nay, Vanderbilt (Computational Decision Science)

3:45-4:00         Break

IV. Regulatory Approaches to Explanation

Sandra Wachter, University of Oxford, Oxford Internet Institute
Deven Desai, Georgia Tech (Law and Ethics)
Alison Howard, Microsoft
Moderator: Ira Rubinstein, NYU (Law)

5:15-5:30         Happy Hour

Happy Hour Discussion

Jer Thorp, Office for Creative Research & NYU Tisch ITP

Friday, April 28

8:30-9:00         Breakfast

V. Explainability in Context – Health

Francesca Rossi, IBM Watson Lab
Rich Caruana, Cornell (CS)
Federico Cabitza, Università degli Studi di Milano-Bicocca (Human-Computer Interaction)
Moderator: Ignacio Cofone, Yale (Law)

10:15-10:30     Break

VI. Explainability in Context – Consumer Credit

Dan Raviv, Lendbuzz
Aaron Rieke, Upturn
Frank Pasquale, University of Maryland (Law)
Moderator: Yafit Lev-Aretz, NYU (Law)

11:45-1:30       Lunch

VII. Explainability in Context – Media

Gilad Lotan, Buzzfeed
Nicholas Diakopoulos, University of Maryland (Journalism)
Brad Greenberg, Yale (Information Society Project)
Moderator: Madelyn Sanfilippo, NYU (Law)

2:45-3:00         Break

VIII. Explainability in Context – The Courts

Julius Adebayo, FastForward Labs
Paul Rifelj, Wisconsin Public Defenders
Andrea Roth, UC Berkeley (Law)
Moderator: Amanda Levendowski, NYU (Law)

4:15-4:30         Break

IX. Explainability in Context – Policing and Surveillance

Jeremy Heffner, HunchLab
Dean Esserman, Police Foundation
Kiel Brennan-Marquez, NYU (Law)
Moderator: Rebecca Wexler, Yale Public Interest Fellow at The Legal Aid Society