Algorithms and Explanations

Thursday-Friday, April 27-28, 2017
NYU School of Law

Abstract: Explanation has long been deemed a crucial aspect of accountability.  By requiring that powerful actors explain the bases of their decisions — the logic goes — we reduce the risks of error, abuse, and arbitrariness, thus producing more socially desirable decisions.  Decisionmaking processes employing machine learning algorithms and similar data-driven approaches complicate this equation.  Such approaches promise to refine and improve the accuracy and efficiency of decisionmaking, but the logic and rationale behind each decision often remain opaque to human understanding.  The conference will grapple with the question of when and to what extent decisionmakers should be legally or ethically obligated to provide humanly meaningful explanations of individual decisions to those who are affected or to society at large. The ILI is grateful to Microsoft Corporation for its generous support of this conference.

Agenda
Thursday, April 27

8:30-9:00         Breakfast

9:00-9:15         Introductory Remarks

9:15-10:15
I. Reasons for Reasons from Law and Ethics (video)

Kevin Stack, Vanderbilt (Law)
Katherine Strandburg, NYU (Law)
Andrew Selbst, Yale (Information Society Project) & Georgetown (Law)
Moderator: Helen Nissenbaum, NYU (MCC) & Cornell Tech (Information)

10:15-10:30     Break

10:30-11:45
II. Automated Decisionmaking and Challenges to Explanation-Giving (video)

Duncan Watts, Microsoft Research
Jenna Burrell, UC Berkeley (Information)
Solon Barocas, Microsoft Research
Moderator: Bilyana Petkova, NYU (Law)

11:45-1:00       Lunch

1:00-3:45 (with a 15-minute break)
III. Modes of Explanation in Machine Learning: What Is Possible and What Are the Tradeoffs? (video)
Foster Provost, NYU (Stern School of Business)
Krishna Gummadi, Max Planck Institute for Software Systems
Anupam Datta, Carnegie Mellon (CS/ECE)
Enrico Bertini, NYU (Engineering)
Alexandra Chouldechova, Carnegie Mellon (Public Policy/Statistics)
Zachary Lipton, UCSD (Computer Science and Engineering)
Moderator: John Nay, Vanderbilt (Computational Decision Science)

3:45-4:00         Break

4:00-5:15
IV. Regulatory Approaches to Explanation (video)
Sandra Wachter, University of Oxford, Oxford Internet Institute
Deven Desai, Georgia Tech (Law and Ethics)
Alison Howard, Microsoft
Moderator: Ira Rubinstein, NYU (Law)

5:15-5:30         Happy Hour

5:30-6:30
Happy Hour Discussion (video)

Jer Thorp, Office for Creative Research & NYU Tisch ITP


Friday, April 28

8:30-9:00         Breakfast

9:00-10:15
V. Explainability in Context – Health (video)

Francesca Rossi, IBM Watson Lab
Rich Caruana, Cornell (CS)
Federico Cabitza, Università degli Studi di Milano-Bicocca (Human-Computer Interaction)
Moderator: Ignacio Cofone, Yale (Law)

10:15-10:30     Break

10:30-11:45
VI. Explainability in Context – Consumer Credit (video)

Dan Raviv, Lendbuzz
Aaron Rieke, Upturn
Frank Pasquale, University of Maryland (Law)
Moderator: Yafit Lev-Aretz, NYU (Law)

11:45-1:30       Lunch

1:30-2:45
VII. Explainability in Context – Media (video)

Gilad Lotan, Buzzfeed
Nicholas Diakopoulos, University of Maryland (Journalism)
Brad Greenberg, Yale (Information Society Project)
Moderator: Madelyn Sanfilippo, NYU (Law)

2:45-3:00         Break

3:00-4:15
VIII. Explainability in Context – The Courts (video)

Julius Adebayo, FastForward Labs
Paul Rifelj, Wisconsin Public Defenders
Andrea Roth, UC Berkeley (Law)
Moderator: Amanda Levendowski, NYU (Law)

4:15-4:30         Break

4:30-5:45
IX. Explainability in Context – Policing and Surveillance

Jeremy Heffner, HunchLab
Dean Esserman, Police Foundation
Kiel Brennan-Marquez, NYU (Law)
Moderator: Rebecca Wexler, Yale Public Interest Fellow at The Legal Aid Society

#algoexpla17