How Our Google Searches Reveal Our Prejudices

If you search for an African-American-sounding name on Google, you are more likely to see an advertisement for instantcheckmate.com, a website offering criminal background checks, than if you enter a non-African-American-sounding name. Why is this? It could be because Google or instantcheckmate.com has applied an overtly unjust rule that says African-American-sounding names should trigger advertisements for criminal background checks. Unsurprisingly, both Google and instantcheckmate.com strongly deny this.

What instead seems to be happening—although we can’t know for sure—is that Google decides which advertisements should be displayed by applying a neutral rule: if people who enter search term X tend to click on advertisement Y, then advertisement Y should be displayed more prominently to those who enter search term X. The resulting injustice is not caused by an overtly unjust rule or poor-quality data: instead, we get a racist result because people’s previous searches and clicks have themselves exhibited racist patterns.
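
To make the shape of that rule concrete, here is a minimal sketch in Python of a click-driven ad ranker. The search terms, advertisements, and click log are invented stand-ins, and the real systems are vastly more sophisticated; the point is only that a rule which never mentions race can still echo racially patterned clicks.

```python
from collections import defaultdict

# Hypothetical click log: (search term, ad shown, was it clicked?).
# The names and ads are invented stand-ins, not real data.
click_log = [
    ("deshawn williams", "criminal background check", True),
    ("deshawn williams", "office furniture", False),
    ("geoffrey baker", "criminal background check", False),
    ("geoffrey baker", "office furniture", True),
]

clicks = defaultdict(int)       # clicks per (term, ad) pair
impressions = defaultdict(int)  # times each (term, ad) pair was shown

for term, ad, clicked in click_log:
    impressions[(term, ad)] += 1
    clicks[(term, ad)] += int(clicked)

def best_ad(term, candidate_ads):
    """Pick the ad with the highest observed click-through rate for this term.
    The rule is 'neutral': it looks only at past clicks, never at race."""
    def ctr(ad):
        shown = impressions[(term, ad)]
        return clicks[(term, ad)] / shown if shown else 0.0
    return max(candidate_ads, key=ctr)

ads = ["criminal background check", "office furniture"]
print(best_ad("deshawn williams", ads))  # -> criminal background check
print(best_ad("geoffrey baker", ads))    # -> office furniture
```

Because a ranker like this only amplifies whatever its users clicked before, biased clicks in means biased advertising out.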

“If you type ‘Why do gay guys . . . ’, Google offers the completed question, ‘Why do gay guys have weird voices?’”

Something similar happens if you use Google’s autocomplete system, which offers full questions in response to the first few words typed in. If you type ‘Why do gay guys . . . ’, Google offers the completed question, ‘Why do gay guys have weird voices?’ One study shows that ‘relatively high proportions’ of the autocompleted questions about black people, gay people, and males are ‘negative’ in nature:

For black people, these questions involved constructions of them as lazy, criminal, cheating, under-achieving and suffering from various conditions such as dry skin or fibroids. Gay people were negatively constructed as contracting AIDS, going to hell, not deserving equal rights, having high voices or talking like girls.

These are clear cases of algorithmic injustice. A system that propagates negative stereotypes about certain groups cannot be said to be treating them with equal status and esteem. And it can have distributive consequences too. For instance, more advertisements for high-income jobs are shown to men than to women. This necessarily means an expansion in economic opportunity for men and a contraction in such opportunity for women.

“These are clear cases of algorithmic injustice.”

What appears to be happening in these cases is that ‘neutral’ algorithms, applied to statistically representative data, reproduce injustices that already exist in the world. Google’s algorithm autocompletes the question ‘Why do women . . .’ to ‘Why do women talk so much?’ because so many users have asked it in the past.

It raises a mirror to our own prejudices.
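
That mechanism can be sketched under the simplifying (and admittedly crude) assumption that suggestions are ranked purely by how often past users typed them; the query log below is invented for illustration.

```python
from collections import Counter

# Invented query log standing in for what past users have typed.
past_queries = Counter({
    "why do women talk so much": 9,
    "why do women live longer than men": 4,
    "why do woodpeckers peck wood": 2,
})

def autocomplete(prefix, query_log, k=3):
    """Suggest up to k past queries that start with the prefix,
    most frequently asked first."""
    matches = [q for q in query_log if q.startswith(prefix)]
    return sorted(matches, key=lambda q: -query_log[q])[:k]

print(autocomplete("why do women", past_queries))
# ['why do women talk so much', 'why do women live longer than men']
```

The top suggestion is simply whichever question was asked most often, which is exactly how past prejudice becomes a present-day prompt.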

As time goes on, digital systems learning from humans will pick up on even the most subtle of injustices. Recently a neural network unleashed on a database of 3 million English words learned to answer simple analogy problems. Asked Paris is to France as Tokyo is to [?], the system correctly responded Japan. But asked man is to computer programmer as woman is to [?], the system’s response was homemaker. Asked father is to doctor as mother is to [?], the system replied nurse. He is to architect was met with she is to interior designer. This study revealed something shocking but, on reflection, not all that surprising: that the way humans use language reflects unjust gender stereotypes. So long as digital systems ‘learn’ from flawed, messy, imperfect humans, we can expect neutral algorithms to result in more injustice.
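
The analogy test in that study relies on ordinary vector arithmetic over word embeddings: subtract the vector for the first word from the second, add the third, and return the nearest remaining word. Below is a toy sketch with hand-made two-dimensional vectors (the real work used high-dimensional embeddings learned from a large news corpus, which is precisely where the stereotypes crept in).

```python
import numpy as np

# Hand-made 2-D vectors, purely to illustrate the arithmetic.
# Real embeddings have hundreds of dimensions and are learned from text.
vocab = {
    "paris":  np.array([1.0, 0.1]),
    "france": np.array([1.0, 1.0]),
    "tokyo":  np.array([2.0, 0.1]),
    "japan":  np.array([2.0, 1.0]),
    "sushi":  np.array([2.2, 0.2]),
}

def analogy(a, b, c, vocab):
    """Solve 'a is to b as c is to ?' by finding the word whose vector
    is closest (by cosine similarity) to b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    def cosine(u, v):
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vocab[w], target))

print(analogy("paris", "france", "tokyo", vocab))  # -> japan
```

When the vectors are learned from real human text rather than written by hand, the geometry absorbs the corpus’s gendered associations, which is how ‘man is to computer programmer as woman is to homemaker’ can fall out of a purely mechanical calculation.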

These examples are troubling because they challenge an instinct shared by many—particularly, I have noticed, in tech circles—which is that a rule is just if it treats everyone the same. I call this the neutrality fallacy.

“So long as digital systems ‘learn’ from flawed, messy, imperfect humans, we can expect neutral algorithms to result in more injustice.”

Those who unthinkingly adopt the neutrality fallacy tend to assume that code offers an exciting prospect for justice precisely because it can be used to apply rules that are impersonal, objective, and dispassionate. Code, they say, is free from the passions, prejudices, and ideological commitments that lurk inside every flawed human heart. Digital systems might finally provide the ‘view from nowhere’ that philosophers have sought for so long.

The fallacy lies in assuming that neutrality is the same thing as justice; it is not always. Yes, in some contexts it’s important to be neutral as between groups, as when a judge decides between two conflicting accounts of the same event. But treating disadvantaged groups the same as everyone else can, in fact, reproduce, entrench, and even generate injustice. The Nobel Peace Prize winner Desmond Tutu once remarked, ‘If an elephant has its foot on the tail of a mouse and you say that you are neutral, the mouse will not appreciate your neutrality.’ His point was that a neutral rule can easily be unjust. To add insult to injury, the neutrality fallacy gives these injustices the veneer of objectivity, making them seem natural and inevitable when they are not. ‘Neutrality,’ taught the Nobel laureate Elie Wiesel, ‘helps the oppressor, never the victim.’

The above is an edited extract from Future Politics: Living Together in a World Transformed by Tech by Jamie Susskind (Oxford University Press).