Bias Runner 2049

Story-with-a-meaning post by Ray Poynter, 11 October 2018


Tom Torquemada was looking forward to his interview today; he was off to the Global Broadcasting Corporation to talk about his work, and he loved his work. Tom was a Bias Runner, one of a team that hunted down and retired errant AI systems.

Tom had been thinking overnight about the best, non-technical way to describe what an errant AI system was and how he and his colleagues identified them. The scale of the problem was clear to everybody – nearly everything today, in 2049, was determined by AI. Machines and bots determined who got a job, who got the next home loan, who might commit the next crime, and whether in this brave new world your schooling/conditioning would result in you being a labourer or an artist. But with this transfer of power to the AI machines came a fear: a fear that the machines might not play fair, that they might be biased or simply error-prone. The job of the Bias Runners was to find the biased or error-prone machines and ‘retire’ them.

There were two key types of problems that the Bias Runners were looking for: ‘biased machines’ and ‘unstable machines’. A biased machine was one that had been coded to treat, or had learned to treat, people in ways that society deemed biased. If an HR machine was less likely to hire women, or less likely to hire gay men, then it was biased and would be retired.

An unstable machine was one that usually worked but which sometimes made terrible mistakes. The mathematicians had told Tom that this was due to a problem called ‘over-fitting’. They described the problem to him in terms of how men’s trousers used to be made. In the olden days you could buy trousers in three leg lengths and a range of waist sizes. This meant that for a few people the trousers fitted perfectly, and for most people they were good enough. Over-fitting is what happened when the Tyrell Corporation measured 1000 men and produced a range of trousers with 1000 variations (many more than in the old system). These 1000 sizes fitted the test men perfectly, but they did not do such a good job for the wider population. People with shorter legs or larger waists were much less likely to find suitable trousers.
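The trouser analogy can be put into a few lines of code. The sketch below is purely illustrative (invented numbers, and a single waist measurement standing in for a full fitting): a simple sizing rule fitted to the broad pattern is compared with a rule that memorises each of the 1000 measured men. The memorised rule fits the sample perfectly and does noticeably worse on everyone else, which is over-fitting in a nutshell.

```python
# Illustrative sketch of over-fitting, using the trouser analogy.
# All numbers are invented; 'waist' is the only measurement used.
import numpy as np

rng = np.random.default_rng(0)

def make_men(n):
    """Simulate men: a waist size and the leg length that would actually fit them."""
    waist = rng.uniform(70, 110, n)                # waist in cm
    leg = 0.6 * waist + 30 + rng.normal(0, 2, n)   # broad pattern + individual variation
    return waist, leg

waist_sample, leg_sample = make_men(1000)   # the 1000 measured men
waist_pop, leg_pop = make_men(5000)         # the wider population

# Simple sizing: a straight-line rule fitted to the broad pattern.
slope, intercept = np.polyfit(waist_sample, leg_sample, 1)
def simple_rule(waist):
    return slope * waist + intercept

# "1000 variations": memorise every measured man and give each new customer
# the trousers of the most similar measured man.
def memorised_rule(waist):
    nearest = np.abs(waist_sample[None, :] - waist[:, None]).argmin(axis=1)
    return leg_sample[nearest]

def avg_error(rule, waist, leg):
    return np.mean(np.abs(rule(waist) - leg))

print("Error on the 1000 measured men:",
      round(avg_error(simple_rule, waist_sample, leg_sample), 2), "(simple) vs",
      round(avg_error(memorised_rule, waist_sample, leg_sample), 2), "(memorised)")
print("Error on the wider population: ",
      round(avg_error(simple_rule, waist_pop, leg_pop), 2), "(simple) vs",
      round(avg_error(memorised_rule, waist_pop, leg_pop), 2), "(memorised)")
```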

The Bias Runners checked out AI machines with their own bit of kit, a Turing 2500. The essence of what the Turing 2500 did was to feed millions of cases to the AI it was testing and evaluate whether the outcomes were fair. Nobody could really tell how the AI machines were working, but it was possible (with care) to construct an almost exhaustive set of inputs and measure the decisions the machine made. The key to spotting bias and instability was whether decisions changed when non-relevant or trivial changes were made to the input. If changing the name from Tom to Tomasina made a job offer less likely, then the machine was biased and would be retired. If changing the name from Tom to Thomas changed the result, then the machine was unstable (a victim of over-fitting) and would be retired.
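The core of that check is simple enough to sketch in code. The snippet below is not from the story; the hiring bot, field names and name swaps are all invented for illustration. It just replays each case with a supposedly irrelevant detail changed and flags the machine if any decision flips.

```python
# Illustrative sketch of the counterfactual check described above.
# The hiring bot, field names and name swaps are all invented.
from typing import Callable, Dict, List, Tuple

Case = Dict[str, str]

def counterfactual_check(
    decide: Callable[[Case], bool],
    cases: List[Case],
    field: str,
    swaps: List[Tuple[str, str]],
) -> List[Case]:
    """Return the cases where swapping a supposedly irrelevant value flips the decision."""
    failures = []
    for case in cases:
        for old, new in swaps:
            if case.get(field) != old:
                continue
            variant = dict(case, **{field: new})
            if decide(case) != decide(variant):
                failures.append(case)
    return failures

# A deliberately flawed toy hiring bot: it should only look at experience,
# but it quietly reacts to the applicant's name as well.
def flawed_hiring_bot(case: Case) -> bool:
    experienced = int(case["years_experience"]) >= 3
    return experienced and not case["name"].endswith("a")   # the hidden bias

cases = [
    {"name": "Tom", "years_experience": "5"},
    {"name": "Tom", "years_experience": "1"},
]

# Tom -> Tomasina probes for bias; Tom -> Thomas probes for instability.
failures = counterfactual_check(
    flawed_hiring_bot, cases,
    field="name",
    swaps=[("Tom", "Tomasina"), ("Tom", "Thomas")],
)
print("retire the machine:" if failures else "machine passes:", failures)
```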

However, Tom was also aware that he needed to be careful during the interview. There was a dirty secret about the Bias Running process that most people did not fully appreciate. Humans determined what counted as biased, and some people felt the decisions that had been made had institutionalised some forms of discrimination. For example, back in 2040 it had been decided that the criminal and financial records of your parents were relevant variables. So, if the AI machine rejected you because you were Tomasina instead of Tom, it was biased and would be retired. But if the machine rejected you because your mother or father had earned a low wage, or because they had lived in an area with a high crime rate, that was acceptable. As a consequence, social mobility had declined: people from richer/safer backgrounds prospered, while those from more challenging backgrounds found it ever harder to climb the ladder.

As Tom entered the broadcasting centre he thought about that old Latin phrase “Quis custodiet ipsos custodes?” – who watches the watchers?


Note: this story with a meaning is set in 2049, but yesterday (10 October 2018) the media covered a story about Amazon having to ‘retire’ an AI program that was intended to help with hiring people. They retired the bot because it was shown to be sexist. The machine had been given the history of job applications, interviews and hires and set about replicating the process. However, it turned out that Amazon had been systematically (perhaps unconsciously) sexist in its hiring process for years, and that is what the supervised-machine-learning AI learned.