“Robots do not misbehave – they are misprogrammed!” – Professor Maja Pantic, 20/03/2017
Four of the most interesting minds in tech were out in force on Monday night at the Guardian’s ‘How Will AI Change Our World?’ event in King’s Cross.
In what read like a who’s who of the AI intelligentsia, the four experts on the panel were:
- Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex
- Maja Pantic, professor of affective and behavioural computing at Imperial College London
- Anders Sandberg, senior research fellow at the Future of Humanity Institute
- Alan Winfield, professor of robot ethics at UWE, Bristol.
Here’s what they had to say…
Why is everyone talking about AI?
The discussion began with early blows traded as Professor Maja Pantic recoiled at Professor Anil Seth’s suggestion that Ex Machina was indeed a good film.
But when asked what they thought AI meant by Guardian Science editor and host for the evening Ian Sample, each had an eloquent answer to give.
“A simulation of natural intelligence,” said Alan Winfield.
“A set of techniques to learn things and solve problems that humans can’t” was Maja Pantic’s response.
“Simply doing the right thing at the right time,” said Anil Seth.
So why the sudden resurgence in AI?
To answer that question, it’s important to realise that AI has seen waves of enthusiasm before – notably in Japan, where plenty of public funding was poured into the industry.
Yet as progress stagnated, funding was cut; these periods became known as “AI winters”, prompting Anders Sandberg to quip: “As a Scandinavian, it’s always winter for me!”
The difference nowadays is that a lot of the funding for AI is private – backed by multi-billion-pound tech companies.
Crucially, though, the availability of huge data sets, and the supercomputers to run them through, has allowed for quicker progress in the field.
Narrow AI vs. general AI
As conversation drifted into the relevance and benefits of the Turing Test (“No need for it!” were the words of Professor Pantic), it was important to distinguish between narrow AI and general AI.
The former involves robots or computers that are only able to do one thing (for example, Google’s AlphaGo beating the world champion Go player). The latter involves a system capable of doing everything (or most things): beating the world champion Go player, making a cup of tea and helping you complete a crossword.
What are the risks (and how do we overcome them)?
The fluffier side of AI is amusing and fascinating in equal measure, but what of the potentially darker side of artificial intelligence? What are the risks? What are the ethics involved in robots? Could we create a robot that would ultimately destroy us?
One of the more intriguing answers to this question came from part-time Professor and part-time all-round great guy Anil Seth:
“It’s not a question of what they do to us, but what we do to them!”
My colleague Hollie looked somewhat concerned at this point…
Seth was alluding to a potential ethical code for robots: as soon as they start to develop feelings and consciousness, we would need a whole new rulebook.
“How about taxing robots?” asked a fiendishly keen audience member.
Professor Winfield replied: “Rather than taxing robots, who in all likelihood will not own any property or things, we should look at taxing the big companies making the robots.”
Are we really going to lose our jobs?
And what about jobs? Will AI ‘make workers great again’, or will it replace them?
The consensus was that there will be job disruption, but rather than jobs simply being replaced, there will be a symbiosis between humans and robots.
Doctors, it seems, are already working this way.
As for a question on AI increasing the wealth gap and creating more inequality, Professor Winfield had some strong words:
“It’s a fallacy that Luddites [the early 19th century English workers who destroyed machinery, especially in cotton and woollen mills, which they perceived to be threatening their jobs] were anti-technology. They were anti-poverty.”
In this vein, Winfield recommended that big tech companies proposing AI for mainstream jobs should lobby for a negative income tax or universal basic income, so as not to create the same kind of resentment towards technology.
After an hour-and-a-half of conversation which was anything but robotic in tone, we came away as excited as we were going in. Upon leaving the auditorium I was left furiously pondering the use of predictive analytics and the fourth industrial revolution.
There’s a lot more to AI than meets the eye. The panel thinks we’re hundreds of years of incremental steps away from general AI, but we’re on the cusp of some amazing discoveries.
Don’t expect to lose your job to C-3PO just yet!