If you heard two pieces of music, one composed by Bach and the other by an algorithm, do you think you could tell the difference?
In an ExCeL conference hall filled with IT-savvy IPExpo attendees this week, only 50% of people could. I wasn’t one of them.
It was one of the practical examples* mathematics lecturer and author Hannah Fry used to make us question our understanding of algorithms and how they are deployed.
The IPExpo speaker said her awakening to our attitudes towards algorithms happened in Berlin, where she was presenting her research into mathematical modelling of the 2011 London riots. Her delivery, she said, didn’t go down well with the audience.
It was then that she realised that algorithms aren’t ‘good’ or ‘bad’ in isolation; a lot depends on the human context.
In another piece of audience participation, Hannah Fry posed the question: “If you were guilty of a crime and had the choice of being judged by a human or an algorithm, which would you choose?”
Remarkably the majority of people chose the human judge.
It’s surprising really, given human fallibility in assessing complex issues and our susceptibility to unconscious bias. Algorithms rarely make the same type of mistakes.
The problem, of course, is that algorithms don’t have the ability to fully understand context or the same subtle nuances that humans can. When they get it wrong, they really get it wrong.
Another issue is that people don’t understand how algorithms work, or whether there are flaws built into them. This leads to misunderstanding and mistrust.
Hannah Fry spoke of a case in the US, where a government agency was making important decisions about people’s finances based on an algorithm that turned out to be no more than an Excel spreadsheet, riddled with errors.
“Once something is dressed up, it has an air of authority, which makes it difficult to challenge,” she said. “As we go into the AI world, it opens the doors to all sorts of people exploiting the system.”
Humans and algorithms working together
Despite this, it’s clear that Hannah Fry is no algorithm basher, citing cases where they have transformed people’s lives for the better, especially in healthcare. The issue is accepting them with blind faith.
“Why don’t we accept that neither algorithms nor humans are perfect? So why don’t we use the best of both?
“Let’s take algorithms off their pedestal,” she added. “We have to think about the failings and trust issues of the humans that use them.”
At IPExpo, most of the vendors deploy complex algorithms as part of their offering. But there didn’t seem to be much evidence of blind faith in their power. Humans still seem to be vital to their use.
As Cisco’s Chintan Patel – who was the keynote speaker just before Hannah Fry – said, the IT industry can’t scale to support the hyper-connected future we are entering without harnessing the power of AI and machine learning.
Telling the algorithm story
As my colleague Jack Simpson wrote recently, we’re starting to question how AI will impact our industry.
But the question for me is really, how can our industry impact the algorithm debate?
At Harvard, we always talk about making technology personal: being the bridge between people and technology.
Part of this lies in explaining how algorithms work and being transparent about how they are deployed.
Comms isn’t just about the educational side, though. Listen to most futurists and they’ll tell you the likely future is one where people and machines work side by side.
This is a really exciting opportunity.
Just think of the advances we can make in healthcare, transport, security and even plain old-fashioned customer service.
This is the story that really needs telling: not ‘computer says no’, and not blind acceptance either, but bringing the future potential to life in a way that inspires people to think big and leads to informed debates about the challenges ahead.
Perhaps the comms industry will benefit from algorithms to help us do it…?
* Source: “Experiments in Musical Intelligence” by David Cope