Elon Musk and the killer AI
by Mark Smith

"At least when there's an evil dictator, that human is going to die. But for an AI, there would be no death," says Elon Musk, the man behind Tesla cars and rocket-maker SpaceX, in a new documentary[1]. "It would live forever. And then you'd have an immortal dictator from which we can never escape."

The AI that will one day kill or enslave us all has been a staple of science fiction for decades. Skynet, the killer AI in the Terminator films, was activated in August 1997 and had begun enslaving humanity by the end of the month. This is the kind of efficiency that should have us asking what Siri and Alexa have been doing with their time.

Musk is not a science fiction writer. He's a serial entrepreneur who has worked in technology for 20 years, so when he warns about the dangers of AI, as he does in the documentary Do You Trust This Computer?[2], people listen.

Is Elon right to be worried? In 2001, Musk visited Moscow to try to buy refurbished intercontinental ballistic missiles for what would become SpaceX. It didn't go well: he was spat at by one of the Russian rocket scientists, who, according to a colleague, thought Musk was "full of shit"[3]. If this is how humans respond to Elon, it's not surprising that he's worried about how machines will treat him.

The reality is that we are a long way from being able to create an AI apocalypse. Watch footage of state-of-the-art robotics research and you'll see scientists still struggling to teach a robot to open a door.

As I wrote in my blog post AI Isn't Magic, the AI we have today is very good at automating human intelligence. We can 'teach' it repetitive tasks, such as checking images for signs of cancer, and it does them very well. It doesn't get tired or bored; it just applies the same set of rules over and over.
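
To make that concrete, here's a minimal sketch in Python - the features, weights and threshold are invented for illustration, not taken from any real screening model - of what 'applying a certain set of rules repeatedly' means in practice:

```python
# A minimal sketch with made-up numbers: after training, the model's
# parameters are frozen, and screening is just the same arithmetic
# run identically on every image.
def suspicion_score(features):
    weights = [0.8, 1.2, -0.5]   # frozen after training; purely illustrative
    bias = -0.3
    return sum(w * x for w, x in zip(weights, features)) + bias

def screen(scans):
    # Scan #1 and scan #1,000,000 get the same treatment: no fatigue, no boredom.
    return ["flag for review" if suspicion_score(s) > 0 else "clear"
            for s in scans]

print(screen([[0.9, 0.4, 0.1], [0.1, 0.05, 0.8]]))  # ['flag for review', 'clear']
```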

But the general intelligence humans have is hard to replicate. We can figure out how to solve problems, use tools and transfer existing skills to new tasks. We understand very little about how brains do that.

It's possible, for example, for Alexa to know, via sensors, where you have parked your car and tell you when you ask. But Alexa doesn't understand the concept of a car or what it is for. And Alexa will not, by itself, figure out how to drive one. Likewise, the AI that drives a car cannot teach itself to check cancer scans.

Imagine that we create an AI to improve the environment. One way to do that would be to destroy humanity. We're a big part of the problem. But the AI won't consider that option unless it's within the parameters that it's been given. And even if it somehow came up with the idea by itself, destroying humanity would require it to develop a concept of destruction, a military strategy and the means of taking control of a weapons stockpile.
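
A toy sketch makes the point - the agent below is hypothetical, and its menu of actions and numbers are invented. An optimiser can only rank the options it has been given; anything outside its action space simply doesn't exist for it:

```python
# Hypothetical environmental-policy agent: it picks the best action from a
# fixed menu. "Destroy humanity" isn't on the menu, so it can never be
# chosen - or even considered.
ACTIONS = {
    "plant trees":     {"co2_reduction": 5},
    "subsidise solar": {"co2_reduction": 8},
    "tax carbon":      {"co2_reduction": 12},
}

def best_action(actions):
    # Exhaustive search over the given options and nothing else.
    return max(actions, key=lambda name: actions[name]["co2_reduction"])

print(best_action(ACTIONS))  # -> 'tax carbon'
```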

We might never be able to create a computer that could make conceptual leaps like that.

That said, AI has already killed some people, and it will kill more. It just does so by accident. We've already seen people killed by self-driving cars. A recent report described an AI tasked with landing a plane gently on an aircraft carrier[4]. It discovered that slamming the plane down with enormous force made the simulator's force reading overflow to zero, which registered as a perfectly gentle landing and earned a flawless score. The pilot would die, but the AI would get the best score.

This was simply a loophole in the scoring program, and it was discovered in a simulation, long before the AI was allowed to land real planes. Loopholes like this will usually be fixed, but there will be accidents.
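
Researchers call this kind of bug specification gaming. A toy sketch - the 16-bit register and the numbers are invented, loosely following the anecdote in [4] - shows how a scoring loophole can reward exactly the wrong behaviour:

```python
def landing_score(impact_force):
    # Pretend the simulator stores impact force in a 16-bit register,
    # so huge values silently wrap around instead of raising an error.
    measured = impact_force % 65536
    return -measured  # zero measured force = a "perfect" gentle landing

print(landing_score(500))    # honest soft landing: score -500
print(landing_score(65536))  # catastrophic slam wraps to 0: "perfect" score
```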

The real AI concern we should be addressing today is what to do about all the jobs that AI will make redundant. A widely circulated graphic shows that the most common job in most American states is ‘truck driver’. Computers will soon do those jobs.

Typically, technology makes some jobs obsolete but also creates new ones. This is true to an extent with AI - we'll need people to design, program and manage it, for example. However, there are likely to be far fewer of those jobs than the ones they replace. That's partly why many - on the right and the left - have begun considering the idea of a universal basic income, which would reduce people's need to work.

This is more pressing than the killer AI. Just as The Terminator, released in 1984, placed its killer AI only 13 years in the future, Musk is assuming we will advance more quickly than we really will. Managing an AI with general intelligence is a question for our great-grandchildren.

_______________________________________________________________________________________

[1] https://www.cnbc.com/2018/04/06/elon-musk-warns-ai-could-create-immortal-dictator-in-documentary.html

[2] http://doyoutrustthiscomputer.org

[3] https://www.bloomberg.com/graphics/2015-elon-musk-spacex/

[4] http://aiweirdness.com/post/172894792687/when-algorithms-surprise-us
