In a recently published study, researchers observed that, in decision-making situations, we tend to trust machines and algorithms more than other humans. The study, funded by the U.S. Army to the tune of $300,000, was led by researchers from the Department of Information Technology at the University of Georgia. Three experiments were conducted to verify these results.
A range of psychological experiments
To carry out this study, 1,500 people were recruited online to count the number of people present in a series of photographs. With each new experiment, the number of individuals in the photographs increased. For each photograph, participants had to choose between the suggestion of a group of 5,000 people (the crowd) and that of an algorithm trained beforehand on a database of 5,000 images.
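The core finding can be illustrated with a small simulation of this choice task. The sketch below is purely illustrative: the function names, probabilities, and the assumption that deference to the algorithm grows with photo difficulty are modeling choices of ours, not parameters or data from the study.

```python
import random

def choose_advice(n_in_photo, rng, base=0.5, slope=0.005):
    """One simulated participant picks an advice source for a photo.

    Assumption (ours, not the study's): the probability of deferring
    to the algorithm rises with the number of people in the photo,
    capped at 0.95.
    """
    p_algorithm = min(0.95, base + slope * n_in_photo)
    return "algorithm" if rng.random() < p_algorithm else "crowd"

def run_condition(n_in_photo, n_participants=1500, seed=0):
    """Fraction of simulated participants who defer to the algorithm."""
    rng = random.Random(seed)
    picks = [choose_advice(n_in_photo, rng) for _ in range(n_participants)]
    return picks.count("algorithm") / n_participants

# Harder photos contain more people; deference to the algorithm grows.
for n in (15, 50, 100):
    print(n, run_condition(n))
```

Under these assumed parameters, the simulated algorithm-deference rate climbs as the photographs get more crowded, mirroring the pattern the researchers report.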
This study is the first step in a longer research program, as Aaron Schecter explains:
The end goal is to look at groups of humans and machines making decisions and find out how we can get them to trust each other and how that changes their behavior.
Eric Bogert, the lead author of the publication detailing the study, explains the value of these three experiments:
Algorithms are capable of performing a vast number of tasks, and that number is growing almost daily. There seems to be a cognitive bias toward relying on algorithms as everyday tasks become more difficult, and this effect is stronger than the bias toward relying on the advice of other people.
The results are clear: comparing the three experiments, the greater the number of people in the photograph, the more likely subjects were to trust the algorithm over the advice given by the crowd. This effect persists even when the quality of the advice and the subjects' numeracy and accuracy are controlled for. One explanation Aaron Schecter proposes is that the individuals tested generally regard a counting task as more appropriate for a trained algorithm than for a human being.
Schecter continues his analysis by discussing people’s perception of algorithms and AI:
One of the common problems with AI is when it’s used to extend credit or approve someone for loans. While it’s a subjective decision, there are a lot of numbers in there – like income and credit score – so people feel like it’s a good job for an algorithm. But we know that reliance leads to discriminatory practices in many cases because of social factors that aren’t taken into account.
However, another finding partially contradicts the initial results: subjects also tended to discount inaccurate advice more strongly when it was labeled as algorithmic than when equally inaccurate advice was labeled as crowd-sourced.