By Emma Young
Let’s say you’ve been found guilty of stealing a car. Would you prefer that a judge decided your punishment — or an algorithm?
Algorithms are increasingly taking over from people in making decisions in everything from the hiring of new employees to healthcare, as well as criminal punishment. But, as the authors of a new paper in the Journal of Experimental Psychology: General note, there is mounting public concern about just how algorithms reach their decisions. In some US states, for example, companies that use algorithms in hiring are now obliged to explain the steps of the process.
However, “this emphasis on making algorithmic decision-making transparent, although well-motivated, raises a paradox,” argue Andrea Bonezzi at New York University and colleagues.
Judges, recruiters and doctors aren’t required to explain every decision. So why do we have such a problem with algorithms doing the same thing? The team thinks it’s because we misguidedly believe that we understand human decision-making better than algorithmic decision-making. In fact, they argue, human decision-makers “are often just as much of a black box as the algorithms that are meant to replace them”.
To explore this, the team ran a series of studies. In the first, groups of online participants were asked to consider one of three scenarios in which either a human expert or an algorithm had to make a decision. One scenario involved evaluating the risk of a criminal defendant re-offending; another was about deciding whether an MRI scan revealed the presence of a disease; and the final one concerned examining a video interview to decide whom to hire. Participants then rated how well they understood the process that the human or algorithm would use to make their decision. In all scenarios, those who read about the human decision-maker reported having a better understanding of this process than those who read about the algorithm.
But another group of participants was first asked to explain how a person, or an algorithm, would actually go about making these decisions. (This strategy is known to make people more realistic about what they actually do — and don’t — understand.) These participants gave lower “understanding” scores for the decision-making process of both the human expert and the algorithm — but there was a bigger impact on scores relating to the human expert, and in some cases this meant that the difference in scores for a person vs an algorithm vanished. “These results show that people foster a stronger illusion of understanding human than algorithmic decision-making,” the team concludes.
In a subsequent online experiment, some participants were told that it was easy for non-experts to evaluate a defendant’s risk of re-offending, and so to make a decision about whether or not to grant parole. This led them to have a greater “illusion of understanding” of the human expert’s process, but did not affect their reported understanding of the algorithm’s process. This suggests that we think we “get” a human expert’s decision-making more because we project our understanding of our own decision-making process more onto other people than onto algorithms. This might not be surprising, but it helps to build the team’s case. As do the results of a third experiment.
In this, participants who were asked to reflect on what made them different to either a human radiologist or an algorithm subsequently gave lower “understanding” scores for the radiologist, but not the algorithm. The team argues that this shows that being forced to confront how different we are to a human expert disrupts our projection of our own decision-making processes onto them, reducing our pro-human bias.
There are other explanations, though, for why we are inclined to trust acknowledged human experts more than algorithms. The journey to becoming a judge, say, is highly competitive and takes many years. Such high-achievers might not be expected to reveal their decision-making process because, whatever exactly it entails, their career success implies that it’s sound. But the same logic could apply to an algorithm — data showing a solid track record of making good calls would alleviate concerns about its decision-making process, or about accepting its judgements.
Algorithms can “often out-perform” human decision-makers, the team argues. They conclude — and you might want to take a mental breath because it’s a long sentence, but one I think worth including in full: “Because the inner workings of modern algorithms are often inexplicable, holding inscrutable yet more accurate algorithms to transparency standards higher than those imposed on less accurate human counterparts that we delude ourselves to understand may ultimately be impractical and perhaps detrimental to social welfare.” Fair enough. But the qualifier “yet more accurate” is an important one. Providing clear evidence of this in relation to hiring, healthcare and punishment (as well as other fields in which algorithmic decisions affect people) would surely do more to convince a sceptical public to trust in algorithms.