A new investigation has been announced into bias in the algorithms that play an increasingly influential role in allocating resources. It is to be carried out by the newly established Centre for Data Ethics and Innovation (CDEI), based at the Cabinet Office. The investigation is in partnership with the Cabinet Office’s own Race Disparity Unit (RDU), which seeks to measure and explain outcomes in public services and how they differ between ethnic groups. So what will be investigated?

An algorithm is a series of instructions for solving a problem. Executed by computers, algorithms have increasingly come to influence our daily lives. They are used to determine what information we see online, as well as in decision-making in such areas as finance and insurance. Often, they make decisions based on historical data.
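To make the idea concrete, here is a minimal sketch of such a decision rule in Python. The lending scenario, field names and thresholds are all invented for illustration – no real system is being described.

```python
# A toy decision algorithm: approve an application if past applicants
# with similar histories usually repaid. All data invented for illustration.
historical = [
    {"years_employed": 5, "repaid": True},
    {"years_employed": 1, "repaid": False},
    {"years_employed": 7, "repaid": True},
    {"years_employed": 2, "repaid": False},
]

def decide(applicant_years, history):
    # Find past applicants with a similar employment record.
    similar = [h for h in history
               if abs(h["years_employed"] - applicant_years) <= 1]
    if not similar:
        return "refer to a human"  # no precedent in the data
    repaid_rate = sum(h["repaid"] for h in similar) / len(similar)
    return "approve" if repaid_rate >= 0.5 else "decline"

print(decide(6, historical))  # approve: similar applicants repaid
print(decide(2, historical))  # decline: similar applicants defaulted
```

Any pattern in the historical records, fair or unfair, is carried straight through into new decisions – which is precisely where the concerns about bias arise.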

We have become aware of how algorithms may produce discriminatory results. Indeed, there have been some outright, grade-A racist howlers, such as Google identifying images of black people as gorillas. There are also reports of facial recognition technology that functions perfectly well on white people but flounders when tried on anyone else. This is said to be because the facial data used to build the algorithms came predominantly from white people. And one investigation into an American algorithm that was supposed to identify those more likely to re-offend found it was no better than guesswork, and unfairly discriminated against black people.

A press release quoted CDEI chair Roger Taylor as saying:

“These are complex issues and we will need to take advantage of the expertise that exists across the UK and beyond. If we get this right, the UK can be the global leader in responsible innovation.”

Therein lies the crux of the matter – “if we get this right”. Algorithms are a double-edged sword. They have the potential to make all our lives easier by speeding up decision-making or allocating scarce resources more efficiently. But by their very nature, they discriminate. The trick is to ensure they discriminate on grounds that are fair.

Given the number of people from ethnic minority backgrounds working in tech, it is hard to imagine that ‘racist’ algorithms stem from racial prejudice against minorities rather than from a lack of foresight. It is furthermore hard to imagine that companies like Google are not already acutely aware of these issues, given what we know from James Damore about the political values that prevail within them. And the potential losses from getting it wrong, in terms of reputational damage, must surely create a strong incentive structure that will act in lieu of specific government regulations. Couple that with the gains to be made from getting it right – companies want as much business as possible, and excluding people for bad reasons makes no sense.

Algorithms also have to be given the freedom to look and learn. There is the danger of what Thomas Sowell has termed “the vision of the anointed”. That is to say, an elite group of academics, civil servants, and policy-makers decide they know better than anyone else what should and should not be, and seek to force the issue. The truth, however, to paraphrase Shakespeare, is that there are more things in heaven and earth than are dreamt of in their philosophy. Simply put, things are complex and not easily understood.

The types of people who make up both the CDEI and the RDU seem to fit the bill. Some on the board of the CDEI have little if any expertise in the design of algorithms, such as the Bishop of Oxford and the geneticist Lord Winston. While they are renowned authorities within their respective fields, the idea that they can step outside them and harness the chaos of the tech sector and computer algorithms is far-fetched. As Sowell would inevitably point out, such people would pay no cost if their interventions were found wanting. And has Lord Winston, for instance, a self-confessed “lover of fine wine, pre-war cars and Arsenal Football Club”, run out of infertile couples to help?

The CDEI’s current remit is to evaluate and make recommendations for governance. But we already have democratic structures in place that can do these things, namely select committees and all-party parliamentary groups, plus the full gamut of civil society and academia. It is expected that the CDEI will come to have statutory powers, but this would just be an extension of non-elected, technocratic government, with a potentially limitless mandate, no clear definition of what success would look like, and no expiry date.

The Race Disparity Unit has devoted itself to monitoring gaps between broadly defined ethnic groups and closing them. It is a flagship programme of Theresa May’s currently tottering government. But as Sowell points out, such ethnic groups differ in all manner of ways in their attributes and preferences, so there is no reason to expect equality of outcomes between them.

Given the commitment of the RDU to equality of outcomes between groups, there is a risk that the CDEI will come to advocate excessive regulation of algorithms, in effect seeking to substitute one set of biases (real or otherwise) for another – that each group should have outcomes in proportion to its share of the population, as measured in the census, and that algorithms should be instrumental in delivering this.

There are some issues related to algorithms that the CDEI should not concern itself with, despite the intuitive appeal of the belief that they are discriminatory.

An article in the Financial Times gives the example of Sikhs in Southall being refused bank loans. Computer algorithms were rejecting them because many among them were thrifty types with no record of having taken out loans. But such an algorithm would penalise all thrifty types – it just so happens that the value of thrift is not evenly spread across all groups. These people were being denied loans not because of their race or religion, but because of their individual histories.

This was simply a bad algorithm, in that it was penalising those who would, in all probability, be ideal people to lend to. Thrifty people pay back their debts, and it would simply be a question of redesigning the algorithm to mark them, correctly, as low-risk. The problem had nothing to do with race or religion; it was a badly designed yet easily correctable algorithm. This is a crucial point – algorithms that can go wrong but can be changed with a little effort are much better than ones bound to a preconceived set of notions about what ought to be, notions that may not be appropriate.
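To show how small the fix can be, here is a minimal sketch of how such a flaw might arise and how it could be corrected. The scoring rules and field names are hypothetical – the FT article does not describe the bank’s actual algorithm.

```python
# Hypothetical credit-scoring rules, purely illustrative.

def flawed_score(applicant):
    """Treats 'no loan history' as high-risk, penalising thrifty
    applicants who have simply never needed to borrow."""
    if not applicant["past_loans"]:
        return 0.0  # no history -> rejected outright
    repaid = sum(1 for loan in applicant["past_loans"] if loan["repaid"])
    return repaid / len(applicant["past_loans"])

def corrected_score(applicant):
    """Falls back on other evidence of reliability (here, a savings
    record) when there is no loan history to go on."""
    if not applicant["past_loans"]:
        # A consistent saver with no debts is plausibly low-risk.
        return 0.9 if applicant["regular_saver"] else 0.5
    repaid = sum(1 for loan in applicant["past_loans"] if loan["repaid"])
    return repaid / len(applicant["past_loans"])

thrifty = {"past_loans": [], "regular_saver": True}
print(flawed_score(thrifty))     # 0.0 -> refused
print(corrected_score(thrifty))  # 0.9 -> accepted
```

The correction changes a single branch: instead of treating an empty loan history as disqualifying, the algorithm looks for other evidence of reliability.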

The CDEI investigation needs to be well grounded in an understanding of its limitations. Algorithms are developed largely through trial and error, and they need to be allowed the space to fail as well as succeed. While we may sometimes be rightly appalled by what they produce, this is an inevitable cost that stems from their very nature, and it needs to be weighed against the benefits they deliver. But as I have already said, there is a strong incentive structure in place that can act in lieu of government regulation. And if algorithms are producing genuinely discriminatory results based on race or religion, then existing legal sanctions will take care of them, while the Equality and Human Rights Commission must surely already have the statutory powers to investigate.

Algorithms can be improved through better, wiser design. But there is a real threat from those with the vision of the anointed, who would wish to impose their preconceptions upon them. This may lead to ill-designed algorithms that are bad for everyone and not easily improved. The CDEI should restrict itself to issuing guidelines of good practice and leave the designers of algorithms with enough freedom to fail, as this is a necessary condition for their refinement. One example would be encouraging designers to train their algorithms on a wide range of data, encompassing people from all walks of life and ethnic backgrounds, as sketched below.
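As a hedged illustration of what such a guideline might mean in practice, the following sketch checks how well each group is represented in a training set before any model is built. The field names and the 10 per cent threshold are my own assumptions, not anything the CDEI has proposed.

```python
from collections import Counter

# Hypothetical pre-training check: flag any group that makes up
# too small a share of the training data to be learned reliably.
def representation_report(records, group_key="group", min_share=0.10):
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    for group, n in sorted(counts.items()):
        share = n / total
        status = "OK" if share >= min_share else "UNDER-REPRESENTED"
        print(f"{group}: {share:.0%} {status}")

# Invented example: 19 faces from group A, 1 from group B.
training_set = [{"group": "A"}] * 19 + [{"group": "B"}]
representation_report(training_set)
# A: 95% OK
# B: 5% UNDER-REPRESENTED
```

A report like this does not dictate what the algorithm must conclude; it merely warns the designer that some groups are too thinly represented for the results to be trusted.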
