
If you’re reading this, the algorithm said yes

Jack Simpson

08 Sep 2017

Earlier this year, United Airlines instructed two burly airport police officers to forcibly remove a paying passenger from one of its planes. Why? Because an algorithm had worked out how many tickets the flight could be overbooked by without ending up with more people than seats once the inevitable cancellations came in. Except the algorithm got it wrong. And the now infamous Dr David Dao wasn’t having any of it. Did a similar algorithm decide Dr Dao should be the one booted off the flight? That was the question asked in a recent 99% Invisible podcast – I’d recommend having a listen if you’re into this subject.

Algorithms shape the world in more ways than most people realise. Invisible, but constantly working away in the background of our lives, in every interaction we have with a brand or device. Those algorithms want to learn whether we’re valuable customers, whether we can repay loans, what we want to see on social media, how healthy we are, and how likely we are to buy certain products or use certain services.

All fantastically useful things for both brands and consumers, of course. And aside from making everyday tasks like online shopping easier, algorithms, with the help of artificial intelligence, have achieved everything from deploying emergency services more efficiently to detecting certain cancers.

But there are other things to consider, such as the increasing reliance on maths to make decisions for us, even when we haven’t asked it to. And the assumption that it’s always right. The logic is that by replacing subjective judgement with objective measurement you remove human biases and make better, fairer, more accurate decisions. Algorithms don’t have an agenda. Nor are they capable of prejudice or discrimination. They do have to be built by humans, however, who comfortably tick all of the above boxes. So are they really as objective as we think?
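To see how much human judgement hides inside a decision like the overbooking one, here’s a toy sketch of the kind of sum involved. This is not United’s actual model – the no-show rate and the acceptable risk of bumping someone are numbers I’ve invented purely for illustration:

```python
from math import comb

def bump_probability(tickets_sold, seats, show_up_rate):
    """Chance that more ticketed passengers turn up than there are seats,
    treating each passenger as an independent coin flip (a big simplification)."""
    return sum(
        comb(tickets_sold, k) * show_up_rate**k * (1 - show_up_rate)**(tickets_sold - k)
        for k in range(seats + 1, tickets_sold + 1)
    )

def max_overbooking(seats, show_up_rate, risk_tolerance):
    """Largest number of tickets that can be sold while keeping the chance
    of bumping anyone below the chosen risk tolerance."""
    tickets = seats
    while bump_probability(tickets + 1, seats, show_up_rate) <= risk_tolerance:
        tickets += 1
    return tickets

# Assumed numbers, purely for the example: a 180-seat plane, 92% of ticket
# holders typically show up, and the airline tolerates a 5% chance of bumping.
print(max_overbooking(seats=180, show_up_rate=0.92, risk_tolerance=0.05))
```

The point of the sketch is that the whole ‘objective’ answer rests on a human-chosen no-show rate and a human-chosen appetite for risk. Get those inputs wrong and, as on that United flight, the maths is confidently wrong too.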

Accidental propaganda

As someone who comes from a digital marketing background, I’m most comfortable talking about algorithms in the context of things like search engines and social media. And the latter makes an interesting case study for the unnoticed impact algorithms have on how we see the world, particularly when you look at events like political elections.

Because social media platforms’ goal is to show you stuff that’s useful or interesting to you, they naturally filter out the stuff that isn’t. This creates a kind of ideology loop, where we’re shielded from anything outside our normal belief system. If you’re an avid Labour supporter in the run-up to a General Election, for example, you’re going to see positive stories about Labour and negative stories about the Conservatives, because those are the things you’re most likely to have interacted with in the past.

I often scoff at my parents for getting all their news from one paper. I ask them how they can possibly have a balanced view of the world if everything is filtered through one editor’s subjective lens. But I get much of my news from social media. And if those sites are just showing me stuff that chimes with my existing biases, how does that give me a more varied viewpoint than reading one publication? The only real difference is I’m the (unwitting) editor.

This isn’t to say social media platforms are doing anything wrong. They’re giving us what we want because – well – that’s what we want. But that’s the point here: these algorithms are reflecting the priorities of their human designers. They want you to see the things you like in the hope it’ll prompt you to interact with a particular site more.

As I said, I’m not here to criticise the likes of Facebook and Twitter – I use social media platforms multiple times a day, both in my work and personal life. But as this phenomenon gains more publicity, I predict we’ll start seeing calls for change from groups or organisations that feel their content is unfairly filtered out. In fact, some people have already begun to voice similar opinions…
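To make that ideology loop concrete, here’s a deliberately simplified sketch of engagement-based ranking. It isn’t how Facebook or Twitter actually rank a feed – the topic labels and engagement history are invented for the example – but it shows the general shape of the feedback loop:

```python
from collections import Counter

def rank_feed(stories, engagement_history):
    """Order stories by how often the user has engaged with each topic before.
    `stories` is a list of (headline, topic) pairs; `engagement_history` is a
    list of topics the user has previously clicked, liked or shared."""
    topic_counts = Counter(engagement_history)
    return sorted(stories, key=lambda story: topic_counts[story[1]], reverse=True)

history = ["labour", "labour", "labour", "football"]  # made-up engagement data
stories = [
    ("Labour pledges NHS funding boost", "labour"),
    ("Conservatives unveil tax plan", "conservatives"),
    ("Weekend football round-up", "football"),
]

for headline, _ in rank_feed(stories, history):
    print(headline)
# The Conservative story sinks to the bottom, the user engages with whatever
# sits at the top, the history skews further, and the loop tightens on itself.
```

Nothing in that sketch is malicious. The narrowing is simply a by-product of optimising for engagement – which is exactly the point.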

A new age of transparency

In her somewhat provocatively titled book, Weapons of Math Destruction, data scientist Cathy O’Neil suggests that while algorithms aren’t inherently bad, they do perhaps need more regulation. One example she cited in the 99% Invisible podcast was the use of algorithms in criminal law to determine sentencing. The historic data going into those algorithms is already fraught with human prejudice, she argues, and doesn’t always take into account the social realities around crime and punishment.

The antidote to this sort of algorithm failure, she says, lies in doing much more to measure their impact, particularly on those who may be disproportionately harmed by them. She also calls for greater transparency: giving people more opportunity to see what an algorithm isn’t showing them, and providing more information when an algorithm makes a decision about them. University of Maryland law professor Frank Pasquale shares similar views, suggesting the European Union should expand its data protection laws to create a ‘right of explanation’ when an algorithmic decision impacts a consumer.

Clearly this is going to be a huge point of discussion in the coming years as algorithms become even more influential. Whichever direction that discourse takes, the important thing to remember is this: algorithms are not inherently bad. Nor are they inherently good. They simply reflect the priorities and judgements of the humans who built them. People still have ethical accountability – you can’t pass that buck to a piece of software. And it’s people – not algorithms – that will ultimately decide the kind of world we live in.