Inner Workings

Mechanical Illustration

Katherine Strandburg analyzes arguments for and against greater transparency in the algorithmic decision-making that increasingly affects us all.

In 1770, the Hungarian inventor Wolfgang von Kempelen debuted a chess-playing machine known as the Mechanical Turk, which defeated the likes of Napoleon Bonaparte and Benjamin Franklin. Spectators marveled, until the truth finally came to light: A human chess master hid inside the machine to challenge its unwitting opponents.

People in the 21st century are witnessing equally opaque machine-powered processes whose stakes are far higher than those of a celebrity chess match. Both governments and private entities frequently assert that the workings of increasingly common algorithmic decision-making processes—in applications as varied as auditing taxpayers, preventing terrorism, measuring credit risk, and pricing life insurance—require secrecy to prevent decision subjects (individuals affected by a given decision) from gaming the system. But a recent article in the McGill Law Journal by Alfred B. Engelberg Professor of Law Katherine Strandburg and Ignacio Cofone, an assistant professor at McGill University Faculty of Law, challenges that claim. 

Katherine Strandburg

Strandburg and Cofone first began thinking collaboratively about this issue in 2017, when Cofone was a research fellow at NYU Law’s Information Law Institute (ILI), where Strandburg serves as faculty director. Strandburg became more deeply interested in the subject when the ILI organized an “Algorithms and Explanations” conference that year.

Much of the discussion centered on questions of bias, and rightly so, says Strandburg. “So we focused on a different aspect, which is this question of ‘How do we get explanations to people of what’s going on with these algorithms?’”

At the 2017 conference, Strandburg recalls, she became familiar with how claims of trade secrecy for decision-making algorithms render the workings of those algorithms opaque, even to the people affected by the resulting decisions. This angle caught her attention, given her expertise in intellectual property and her experience co-editing The Law and Theory of Trade Secrecy: A Handbook of Contemporary Research (2011) with Pauline Newman Professor of Law Rochelle Dreyfuss. Along with Dreyfuss and Julia Powles, then an ILI fellow, Strandburg co-hosted a conference on trade secrets and algorithmic systems in 2018. 

At the same time, Strandburg had begun working with Cofone on an article scrutinizing claims made by users of algorithmic systems that transparency is undesirable. “We had the feeling…that sometimes this gaming argument was being used as an excuse so that we don’t have to debate what we should do about trade secrecy or how much an algorithm really is not explainable,” she says.

Strandburg and Cofone’s finished article, “Strategic Games and Algorithmic Secrecy,” argues that blanket claims about the dangers of disclosure are exaggerated and that, in the majority of cases, “socially beneficial disclosure regimes” can be created. The co-authors assert that gaming is most often possible when decision makers rely on imperfect proxies only loosely connected to the ideal criteria for reaching a determination: for example, if a software company screened potential hires by treating subscriptions to top computer magazines, rather than college grades, as a proxy for relevant job skills.
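A toy sketch, written in Python, makes the proxy point concrete. The field names, rules, and thresholds below are invented for illustration and are not drawn from the article or any real screening system; the sketch simply contrasts a screen built on a loosely connected proxy, which becomes trivial to game once disclosed, with one built on a criterion closer to the skill the decision maker actually cares about.

```python
# Hypothetical illustration of gameable vs. less gameable screening rules.
# All names and thresholds are invented; this is not any real system.

from dataclasses import dataclass

@dataclass
class Applicant:
    magazine_subscriptions: int   # cheap-to-fake proxy, loosely tied to job skill
    coding_test_score: float      # criterion more tightly tied to the skill itself (0-100)

def proxy_screen(a: Applicant) -> bool:
    """Gameable rule: once disclosed, anyone can satisfy it by buying subscriptions."""
    return a.magazine_subscriptions >= 3

def criterion_screen(a: Applicant) -> bool:
    """Less gameable rule: passing it requires actually having the relevant skill."""
    return a.coding_test_score >= 70.0

weak = Applicant(magazine_subscriptions=5, coding_test_score=40.0)
strong = Applicant(magazine_subscriptions=0, coding_test_score=85.0)

print(proxy_screen(weak), criterion_screen(weak))      # True False
print(proxy_screen(strong), criterion_screen(strong))  # False True
```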

They add that most decision subjects cannot falsify the input data used in algorithms, such as age, prior arrests, or test scores, and that the true values of such variables are essentially unalterable. Even putting those hurdles aside, without knowing precisely how the different variables are weighted, especially in a complex machine learning–based algorithm, subjects would still find gaming difficult.
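To see why unknown weights frustrate gaming, consider a minimal Python sketch. The feature names and weights below are invented and the model is a deliberately simple linear score, not the kind of complex system the authors discuss; the point is only that a subject who does not know even the sign of a weight can guess wrong about which change will help.

```python
# Hypothetical illustration: hidden weights make a score hard to game.
# Feature names and weights are invented; real scoring systems are far more complex.

# Weights known to the decision maker but hidden from the decision subject.
HIDDEN_WEIGHTS = {"income": 0.4, "credit_utilization": -0.8, "recent_inquiries": -0.3}

def score(features: dict) -> float:
    """Toy linear score: a weighted sum of the input variables."""
    return sum(HIDDEN_WEIGHTS[name] * value for name, value in features.items())

subject = {"income": 1.0, "credit_utilization": 0.9, "recent_inquiries": 0.2}
baseline = score(subject)

# The subject guesses that "shopping around" for more credit will help,
# but the hidden weight on recent_inquiries is negative, so the guess backfires.
guess = dict(subject, recent_inquiries=subject["recent_inquiries"] + 0.5)

print(f"baseline score: {baseline:+.2f}")
print(f"change after the blind guess: {score(guess) - baseline:+.2f}")
```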

Changing one’s actual conduct, they say, is the most plausible way for a decision subject to alter the data an algorithm takes in. But if the proxies used in decision making are well connected to the desired criteria, they explain, such “gaming” may not be a bad thing, as when consumers alter their spending habits in order to bolster their credit scores before seeking a mortgage.

The article suggests that some decision makers may underestimate the value of disclosure both to society and to decision subjects. A lack of disclosure, Strandburg and Cofone argue, can obscure socially undesirable algorithm design, and using the risk of gaming to keep algorithms secret might lead to a lack of accountability, inaccurate decisions, bias, arbitrariness, or unfairness. Conversely, they write, greater transparency could allow decision subjects “to challenge the factual or other bases for erroneous decisions, and to undertake the socially beneficial strategic behaviors.”

The co-authors also stress that even machine learning–based algorithms rely on human input; if the design is flawed, a computer will merely execute the problematic algorithm flawlessly. Strandburg expresses concern that the growing complexity of algorithmic processes makes it difficult for decision makers themselves to understand what’s going on. “It’s now possible for a decision maker to say, ‘Well, we’ll just use this algorithm, and it’s a trade secret,’ or ‘We can’t disclose it because it’s so complicated.’… I read a lot of the old literature on explanations and why they’re such a big deal in due process, and part of it is that individuals deserve the explanation. But part of it also is that giving the explanation forces the decision maker to justify why they’re doing it.”

She worries, too, about the “human in the loop” problem, invoking the example of an automated decision-making tool used by a judge weighing pretrial detention for a defendant. “I get some piece of information that just says, ‘This person is low risk,’ ‘This person is high risk,’ ‘This person is 8 on the scale.’ But I don’t know what information has gone into that,” Strandburg says. “What factors have already been taken into account? Which factors have not been taken into account? What am I supposed to do with it? If I’m supposed to combine it with other evidence, how do I know which evidence has already been taken into account? I think that is a huge neglected concern.”

Strandburg’s example is not merely hypothetical. In State v. Loomis (2016), Eric Loomis challenged Wisconsin’s use of closed-source risk assessment software in sentencing him to six years in prison, arguing that his due process rights were violated when the judge determined the sentence in part by using a risk assessment score gauging the likelihood of recidivism, even though the workings of the software that generated it were hidden behind trade secrecy claims. The Wisconsin Supreme Court rejected his claim, and in 2017 the US Supreme Court declined to hear his appeal.

Ewert v. Canada (2018), in which the Supreme Court of Canada ruled that federal prison authorities must demonstrate the cross-cultural validity of the psychological and statistical assessments they use to make decisions about Indigenous prisoners, shows that US judges are not the only ones examining the impact of decision-making tools, says Cofone. He recently spoke to a group of appeals court judges at the Canadian Institute for the Administration of Justice about the regulation of algorithmic transparency. “There is judicial interest in Canada as to how seriously to take objections related to trade secrecy and gaming,” he says.

State and local governments have begun to grapple with some of these issues, but Strandburg acknowledges that she does not see a consensus emerging soon. On one point, however, she expresses considerable clarity.

“Frequently this debate proceeds as though the two alternatives are game or don’t disclose,” she says. “Of course, there is often a third alternative: Construct a less gameable algorithm, and disclose.”

Posted March 16, 2020

Updated August 13, 2020