
Bias is a bad word

September 2022

The meaning of a word is its use in the language.

It is not profane or malicious words we ought to fear. It is those that are imprecise, coercive, or open to multiple conflicting interpretations that chill discussion and break codebases.

What is bias? In machine learning the word already has several neutral, technical meanings: the intercept term of a linear model, the inductive bias of an architecture, the statistical bias of an estimator. None of these implies prejudice.
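As a reminder of how ordinary the term is, here is a minimal sketch of the humblest kind of bias: the intercept of a linear model, recovered by least squares. The numbers are made up for illustration.

```python
import numpy as np

# In ML, "bias" is often just the intercept of a linear model:
# y = w @ x + b, where b shifts predictions independently of the input.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))    # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
true_b = 4.0                     # the "bias" term, nothing sinister about it
y = x @ true_w + true_b

# Fit by least squares, appending a column of ones to absorb the bias.
X = np.hstack([x, np.ones((len(x), 1))])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("recovered bias:", coef[-1])  # ~4.0
```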

We need bias to accurately depict the world. There is a bias for tables to have legs, or for birds to fly.

Yet “bias” is also a synonym for prejudice and discrimination, the word often used to describe a model that is not fair.

When you want to use “bias”, use “priors” instead.

“Priors” gives the same intuitive sense of meaning, but it is far less loaded.

The use of “bias” makes papers harder to read. Caliskan et al. [1] are extremely cautious about saying anything good about bias; instead, they reach for words like “veridical” (truthful) so as not to be caught red-handed suggesting that a “bias” might be correct. A “bad” word forces us into needless jargon to express what we mean.

The model isn’t “wrong” to have priors - it has no id to motivate malicious action or to discriminate. It simply reflects the probabilistic nature of what it has been trained on. Sadly, that includes priors we collectively frown on; for instance, it has learned from its corpus of text that professions have gender. What the literature hints at is that “accuracy” and “reality” aren’t necessarily aligned with our desires. From a demographic point of view, there are gendered professions - but on some level, we don’t want this to be the case.
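To make this concrete, here is a minimal sketch, in the spirit of the association tests in Caliskan et al. [1], of how such priors surface in off-the-shelf word embeddings. It assumes gensim and its downloadable “glove-wiki-gigaword-50” vectors; the word lists are illustrative, not the ones from the paper.

```python
import gensim.downloader as api
import numpy as np

vecs = api.load("glove-wiki-gigaword-50")  # pretrained GloVe vectors

def cos(a, b):
    """Cosine similarity between two words' embedding vectors."""
    a, b = vecs[a], vecs[b]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for profession in ["nurse", "engineer", "librarian", "carpenter"]:
    print(profession,
          "she:", round(cos(profession, "she"), 3),
          "he:", round(cos(profession, "he"), 3))

# Professions historically dominated by one gender sit measurably closer
# to that gender's pronoun: a prior the model learned from the corpus.
```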

Now, not all of these priors are facts - there is no universal rule that insects are unpleasant - but they are all based on the reality of the text we have written down.

Do not mistake me for thinking these priors can’t have negative consequences - they absolutely can. But claiming models have “biases” is deceptive and misplaces the blame. You can be biased; the model just has priors. The machine has picked up our habits, much like a child learning a “bad” word - it isn’t “wrong” to do so; we just wish it wouldn’t.

Perhaps, much like children, we can teach machines to be better than us.

  [1] Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. “Semantics derived automatically from language corpora contain human-like biases.” Science 356.6334 (2017): 183-186.