
Are robots racist? The trouble with human-trained AI

Posted on Nov 8, 2019 by Phil Rhodes

Are robots racist? Machine learning is only as smart as the humans training it. Given human fallibility, what does our AI-augmented future look like?

Since Babbage’s Analytical Engine in 1837, the purpose of computers has been to do the donkey work for human beings. Machine learning is already making it possible for computers to do vastly more of that work: for the media industry alone, the ability to catalogue thousands of hours of footage without requiring thousands of hours of human time is a revelation. 

But artificial intelligence is only as intelligent as the people training it, and it has occasionally shown itself to be supremely stupid. One of its most notorious public errors came in 2015, when Google technology tagged images of black people as ‘gorillas’. Google’s solution to the problem was ultimately simply to remove the terms ‘gorilla’, ‘chimp’, ‘chimpanzee’ and ‘monkey’ from its image categories.

Stuart Coleman is founder of infoNation, a consultancy focusing on data and technology, and a former director of the Open Data Institute. He describes himself as “a technologist” who has spent a lot of time around early internet technology. In recent years, he has focused on implementing data science tools and techniques.

“There’s a lot of people in the last few years talking about AI. It’s a bit like ‘big data’ three years ago,” says Coleman.

Despite the publicity around Google’s gorilla gaffe, Coleman observes that AI problems are generally kept behind closed doors. “The power of machine learning and data science tools and technologies is held by a small selection of companies. Though the tools are being adopted in the mainstream, there hasn’t been enough diversity of activity to see what goes wrong – and the big companies aren’t sharing what goes wrong.” 

Getting machine learning right, then, demands just the sort of care that companies might be tempted to skip in the face of huge potential savings – the very savings machine learning promises to deliver.

“I just came from a meeting with a major company who want to wipe out 30% of their workforce,” continues Coleman. “And yes, they can do that. But they have to be really careful.”

What’s inside the black box?

One concern about machine learning is that while it can draw powerful conclusions and make decisions, it offers no hints as to how it has come to those conclusions. Machine learning is a ‘black box’ process, where humans can monitor the inputs and outputs, but precisely how those numbers are crunched is obscure. 

As with a human being, the machine makes choices based on the sum of its experiences – or the sum of the data supplied to it as training examples. But delving into the resulting configuration of the machine reveals something that looks, to the outsider, like noise and might be so complex as to completely defy any attempt at analysis. And unlike a human being, the machine cannot explain its decisions.
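
To make that concrete, here is a minimal sketch in Python using scikit-learn (an illustration only, not any system discussed in this article). Even for a toy neural network, everything the machine has ‘learned’ lives in matrices of raw weights, which can be printed but not meaningfully read:

# Minimal sketch: a trained neural network's internal state is just
# matrices of numbers, easy to inspect but hard to interpret.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Train a tiny multi-layer perceptron on a standard toy dataset.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# The inputs and outputs are easy to observe...
print("prediction for the first sample:", model.predict(X[:1]))

# ...but the 'reasoning' is only visible as weight matrices that look like noise.
for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {weights.shape}:")
    print(weights.round(2))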

The impenetrability of AI is a subject well known to Adriane Chapman, associate professor of Computer Science at the University of Southampton. Chapman describes her area of interest as “how we process our data to make it useful”. 

“Machine learning,” says Chapman, “is a hugely powerful tool, but within machine learning there are many different techniques. In general these have, until now, been treated very much as a black box. Introspection is hard – we’ve built this tool, it does this thing, but in terms of neural networks, one answer pops out of the end and it’s not easy to determine why. Not impossible, but not easy.”

This might seem like a lot of concern over a comparatively minor issue – after all, who cares why an AI reaches a particular decision, so long as that decision is right? But no machine learning conclusion will ever be completely error-free. In some circumstances a decision might be challenged and, in order to defend it, it will be necessary to have some understanding of how it was reached. Yet on grounds of simple expediency, AI might be employed for all sorts of tasks for which it is not suitable at all.

Finding fairness

Stuart Coleman prefers not to name “an organisation that developed a computer vision algorithm that could look at things and infer lots of knowledge”. 

“Their mission was to share machine learning algorithms for the benefit of society – and they realised that the most compelling use for this was really underworld stuff. In a society where it’s deemed OK to be watched, people are living in a Minority Report situation. You cannot go into most urban areas in China without the state knowing where you are, who you’re meeting… It’s very easy to be unaware of the influence these platforms have. There are very few sanctuaries to escape this stuff, because we all crave the convenience.”

Machine learning has been applied to fields as diverse, and as impactful, as commercial health insurance, business information and politics, but perhaps the most concerning developments have been in criminal justice. As the University of Southampton’s Chapman explains, parts of the US court system began using machine learning to predict the likely future behaviour of offenders. 

“The classic example of this is COMPAS, in the US, which is used for recidivism. They took lots of penal records and they were looking at who was likely to recidivate, and then gave the results to the judges,” she explains. “Studies have shown that judges have a high percentage of personal bias in sentencing. COMPAS was given to judges as a tool to try to alleviate this problem. However, COMPAS has been shown to be horribly, horribly racist and biased, because of the data that was given to it for training.”

The data given to a machine learning algorithm during its training ultimately guides the decisions it will make. The way in which that data can introduce bias is complex and hard to predict. 
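
A hedged sketch of how this can happen, using entirely synthetic data (every variable here is invented for illustration): even when the protected attribute is withheld from the model, a correlated proxy such as a postcode can leak it, and a model trained on historically biased labels will faithfully reproduce the bias.

# Sketch with fabricated synthetic data: biased historical labels plus a
# proxy feature are enough to make a model reproduce the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)            # the genuinely relevant feature

# Historical labels: past decision-makers penalised group 1 regardless of skill.
label = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# The model never sees 'group' directly, but a correlated proxy leaks it
# (think of a postcode that happens to track the protected attribute).
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: positive-decision rate = {pred[group == g].mean():.2f}")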

“It’s a field of work that I think is incredibly interesting right now. It’s called FAT: fairness, accountability and transparency. It’s basically saying there’s a problem if you’ve got gobs of data and think that if you can just plug it into a machine and get some interesting answer out of it, then it’s useful. One of the problems with machine learning is that people measure accuracy, precision, recall, mean square error, but they’re all mathematical measurements,” Chapman says.

The challenge, she feels, is not only teaching a machine to tell apples from oranges, but also ensuring it is fair, or that its results are used fairly. “Fairness is a social and ethical concept; in machine learning we often treat it as a statistical concept. Often, when we abstract these problems, the things we are measuring are accuracy or mathematical concepts, but ethics and fairness are not mathematical concepts. We need to be able to correct for this.” In 2019, this is still a work in progress.
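
In code, the distinction Chapman draws is easy to show. The sketch below (again with fabricated arrays) computes the mathematical measurements she lists, then a simple statistical stand-in for fairness – the gap in positive-prediction rates between groups – which none of the standard metrics reveal:

# Sketch: standard metrics are computed from labels and predictions alone;
# a fairness check also needs the protected group. All data is fabricated.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(2)
n = 2_000
group = rng.integers(0, 2, n)
y_true = rng.integers(0, 2, n)

# Fabricated predictions: start from 90% agreement with the truth, then
# withhold positive predictions from group 1 far more often.
y_pred = np.where(rng.random(n) < 0.9, y_true, 1 - y_true)
y_pred = np.where((group == 1) & (rng.random(n) < 0.4), 0, y_pred)

print(f"accuracy : {accuracy_score(y_true, y_pred):.3f}")
print(f"precision: {precision_score(y_true, y_pred):.3f}")
print(f"recall   : {recall_score(y_true, y_pred):.3f}")

# Demographic parity difference: a statistical stand-in for 'fairness'.
gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
print(f"positive-prediction rate gap between groups: {gap:.3f}")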

“That’s where the research community is right now. We know certain types of biases. The question is, can we predict how data with different types of known biases affects different types of question?” Apologising for the Rumsfeldism, Chapman admits: “There’s always going to be those unknown unknowns, but the community is trying to understand. By looking at the data itself, can we at least advise on how it is biased and how that’s likely to affect some of the outcomes?”

Regulation

One thing that comes up in any business discussion of AI’s risks and benefits is the fear that the heavy hand of regulation might imminently descend. infoNation’s Coleman argues that the UK government is quite progressive in this area: Professor Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton, sits on a UK AI and ethics panel, and the UK is arguably at the forefront of governmental commitment to the issue.

Coleman explains: “The problem you’re going to have is that, in the same way the internet defies jurisdictions, the web is where this stuff is going to be working. You’re not going to be able to control it at a jurisdictional level. It’s not like encryption; the pace at which these machines will learn and make decisions and do things is such that you can’t account for it in regulation. You need guiding principles and you need a big enough stick. You do need a form of regulation, but it needs to be different. There’s huge amounts of excitement about the value and the intellectual property created. The people in government and the regulator are always going to be behind. The only way to learn is through trial and error. What we want to prevent is such errors as are catastrophic.”

Chapman’s focus is on exactly that sort of prevention. “Ultimately, I think machine learning is going to be much like a car,” she says. “It’s a powerful, useful tool, but we’re not at self-driving level yet. I think we as a society, and government regulations, are going to have to think through what happens when things go wrong.” 

Describing her position as a middle ground, Chapman concludes: “The community is starting to become aware of the problem that, socially, it is unacceptable for a tool to do certain things. But we want the tools. A chunk of research is to figure out how to catch and change what we do in terms of machine learning, so problems are less likely to happen.”

This article first appeared in the November 2019 issue of FEED magazine.
