Musk says AI 'more dangerous than nukes' - expert stays optimistic
Tue, 13th Mar 2018

Speaking at South by Southwest (SXSW) on Sunday, entrepreneur Elon Musk shared his startling views on artificial intelligence (AI).

AI 'scares the hell' out of Musk, who believes that in the wrong hands it could be 'more dangerous than nukes'.

Musk is known for his exploits that stretch the boundaries of regulations, but he says AI is one area where he is willing to make an exception.

"This is a case where you have a very serious danger to the public, therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely -- this is extremely important," Musk says.

"Some AI experts think they know more than they do and they think they're smarter than they are ... this tends to plague smart people, they define themselves by their intelligence and they don't like the idea that a machine can be way smarter than them so they just discount the idea, which is fundamentally flawed. I'm very close to the cutting edge in AI and it scares the hell out of me."

Musk used the example of AlphaGo and its successor AlphaGo Zero, the AI-powered Go-playing systems, to illustrate a rate of improvement that, according to Musk, no one predicted.

Over six to nine months, AlphaGo went from being unable to beat a reasonably good Go player to defeating both current and former world champions. AlphaGo Zero then beat AlphaGo 100-0.

"The rate of improvement is really dramatic, but we have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that's the single biggest existential crisis that we face, and the most pressing one," Musk says.

"The danger of AI is much greater than the danger of nuclear warheads, by a lot and nobody would suggest that we allow anyone to just build nuclear warheads if they want -- that would be insane. Mark my words: AI is far more dangerous than nukes, by far, so why do we have no regulatory oversight, this is insane."

Musk believes it's crucial that a regulatory framework is put in place for the development of digital superintelligence before further innovation and advances are made.

High-Tech Bridge CEO Ilia Kolochenko says the term AI is both amorphous and ubiquitous, with many people unwittingly using it for a wide range of unrelated topics and technologies.

“We are still far from strong AI capable of replacing humans in many different areas without continuous, and thus expensive, training,” says Kolochenko.

“On the other hand, machine learning (ML) technologies have proven their efficiency and capacity to outperform humans in many precise, albeit narrow, tasks, such as playing Go or even chess.”

Kolochenko says financial institutions currently use ML algorithms to better score mortgage and leasing customers, insurers use ML to forecast customers' susceptibility to disease, and even law enforcement has started using ML and big data to forecast crime and plan patrol routes more effectively – but he doesn't see any danger to humanity from AI.

“Nonetheless, there is no risk to humanity from AI, other than unemployment in particular sectors that can be fully automated by machines,” says Kolochenko.

“I, however, remain optimistic: humans will still be able to concentrate their efforts on something more creative and valuable for society.”