Elon Musk isn’t worried about killer robots, he’s worried about the development of unregulated, self-learning super AI

Naysayers and doomsday heralds aren’t in short supply in this world, and they’re largely ignored by the “saner” populace. Unfortunately, when one of these heralds turns out to be someone like the Tony Stark-esque Elon Musk, it’s in our best interest to at least pay attention.

Elon Musk. Reuters.

Musk has been vocal about his fear of artificial intelligence (AI), stating in no uncertain terms that “AI is humanity’s biggest existential threat.” He uses every opportunity that comes his way to decry the state of AI policy and call for regulation of this “threat”.

Only recently, a passing comment on a video of Boston Dynamics’ back-flipping robot once again sparked a debate about Musk’s views and a rogue (or benevolent) future AI. At last count, his tweet had received 245,500 likes, 80,800 retweets and over 5,300 comments, with many commenters coming out in support of Musk and his views.

But what really is Musk’s stance on AI? Why is this man, who is clearly skeptical of AI, investing so heavily in its development? Why is he excited when an AI he funded defeats the best human players at DOTA 2? More importantly, how do you trust this man’s word when AI is increasingly becoming the cornerstone of his businesses? Is he just a hypocrite who is, as some seem to think, using fear to further his own ends?

How irrational is Elon Musk’s fear of AI?

Without understanding Musk’s perspective on AI, it’s easy to see why such questions seem relevant. In truth, Musk’s stance stems from his perception of the dangers that unregulated AI development can pose. Whether or not you agree with his views, he makes some compelling points.

Musk’s investments in DeepMind (now owned by Google) and in OpenAI (an open-source AI research initiative) were not made purely for scientific and research purposes. As Musk explains in an interview with Vanity Fair, the investment in DeepMind was a way of keeping an eye on the state of AI development. The investment in OpenAI serves an even nobler cause, one we’ll return to once you have more context.

AI is inevitable

When it comes to super AI, its advent is inevitable; the only question is when it will arrive.

Musk believes that companies like Google, which are investing heavily in AI, might have the best intentions, but that scientists will be so engrossed in their work that they won’t realise what they’re creating.

To prevent such an eventuality, Musk has called for regulation of AI. “Public risks require public oversight,” he says.

People like Facebook founder Mark Zuckerberg have rubbished Musk’s claims. Zuckerberg goes so far as to label them “hysterical”, arguing that AI is in too nascent a stage of development to merit regulation and oversight. Musk counters that AI is developing far too rapidly for regulation to wait. And if one can argue that Musk is just “buffing his brand”, one can just as easily argue that Zuckerberg’s secret plans for world domination would be hampered by AI regulation.

Another notable tech figure who’s worried about AI is Microsoft co-founder Bill Gates. Gates is worried about what the advent of AI means for humanity, but isn’t as worried about regulation as Musk. When The Wall Street Journal asked Gates about Musk’s views, Gates said, “The so-called control problem that Elon is worried about isn’t something that people should feel is imminent.”

AI is powerful

AI is a powerful tool. Our current implementations of AI aren’t particularly intelligent. AI today is really good at improving the efficiency of repetitive tasks and working within a set of predefined rules.

The AI that defeated the world’s best Go player and the OpenAI project that trounced DOTA 2’s best players in a controlled environment are big achievements, yes, but AlphaGo can’t play any other game, and that DOTA bot can’t even play the full game. Other “AI” like Google Assistant, Siri and Alexa are also quite primitive, incapable of holding real conversations or doing anything beyond the simplest of tasks.

https://www.youtube.com/watch?v=7U4-wvhgx0w

And these are just the examples of AI that we can see. The complicated ad engines that practically print money for Facebook and Google run on AI; AI is helping scientists discover new proteins, helping engineers come up with better designs, and heralding the self-driving car revolution.

AI today is useful rather than intelligent, but that still doesn’t mean it can’t be dangerous. Right now, Google and Facebook, both leaders in AI tech, appear to be using AI for noble causes and for the slightly less noble pursuit of padding their respective bottom lines. AI is filtering harmful content out of search results, targeting advertisements, helping to translate languages and more.

However, this very same AI can be used to wreak havoc and destroy the world as we know it. Take the US elections, for instance, where state-sponsored Russian trolls allegedly swayed the outcome with a targeted ad campaign. Now imagine if Facebook or Google decided to do the same thing. With their vast resources and absolute control over their domains, these companies have the power to do it, and all it would take is a modification to the algorithms that power said AI. In other words, Google and Facebook could use AI to sway the opinion of the masses, and we wouldn’t even know about it.

The Terminator movies epitomise our irrational fear of robots

It’s not just malicious intent, however. Take the example of Tay, Microsoft’s short-lived AI experiment. The chatbot started out as an innocent enough project: it was to learn through interaction with people on the internet. Within hours, Tay was spouting racist, anti-Semitic comments. Who is to say that we won’t condition even the most innocent of self-learning AIs towards evil?

In an earlier interview, Intel’s Artificial Intelligence Products Group (AIPG) head Amir Khosrowshahi told us, “The problems that are of immediate concern and are difficult to address are building AI models which have an inherent bias in them. If you are trying to build a model to solve a problem then you need to ensure that your data set is vast, to avoid any onset of bias, as that could lead to some bad things.”

And when you’re talking about large data sets, the problem of privacy comes in, which is another matter altogether.

Again, the problem, in Musk's view, is that AI is being developed in isolation, with no regulation and no oversight.

Embracing AI

In Musk’s mind, the future of humanity is a brain-computer interface that will keep us permanently connected to the cloud. In his Vanity Fair interview, Musk explains that we’re already partway there, that our phones are already an extension of ourselves. The only shortcoming on that front is that voice and text input is very slow.

"Choose hope over fear", Mark Zuckerberg. Image: Reuters

"Choose hope over fear", Mark Zuckerberg. File photo of the Facebook CEO. Reuters

Zuckerberg has an entirely different opinion. He wants everyone to “choose hope over fear”, and believes that AI is not as big a threat as disease or violence. Popular culture is quick to paint AIs as sentient, human-hating entities that will decide to wipe out humanity the moment they become self-aware. One need only mention Terminator for our minds to fill with images of unstoppable, implacable, red-eyed killing machines systematically wiping out the last dregs of humanity. But then again, there are those rare works, such as Arthur C Clarke’s A Time Odyssey, Small Wonder or even Star Wars, that show another possible side to AI.

The Singularity

AI need not be evil, and Musk is not saying that it is. All he seems to be asking for is regulation and policy that ensures oversight of the creation, use and development of AI technology. In fact, he doesn’t even talk about malicious AI. What if AI simply becomes so incontrovertibly advanced that humans are irrelevant, maybe not even playthings? How much thought do you give to that army of ants industriously tunnelling through the walls of your home?

At a Vox Media Code Conference, Musk claimed that AI will be so much more intelligent than us that “Robots will use humans as pets once they achieve a subset of artificial intelligence known as ‘superintelligence.’”

A technological singularity is inevitable

AI today is built using a technique called machine learning, in which algorithms “learn” from large datasets. To oversimplify the process: these algorithms are designed to analyse data, identify patterns and then use those patterns to perform tasks. Suppose you want to train an image-recognition AI to identify cats in photos. You give the algorithm a large collection of photos, some with cats and some without. Images with cats are labelled as such, and the algorithm identifies ‘cat’ patterns in those images. It will note things like the shape of the animal, its eyes, the shape of its ears, and so on. The algorithm can then look for these patterns in unlabelled images and determine whether or not there’s a cat in them. Enough training with such images improves the algorithm to the point where it becomes “intelligent” enough to recognise a cat in almost any photo.
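
For the curious, this is roughly what such a training loop looks like in code. The snippet below is a minimal sketch in Python using the scikit-learn library, with random placeholder arrays standing in for real labelled cat photos; an actual system would use a deep neural network and millions of images.

```python
# Minimal sketch of supervised learning. The data here is a random
# placeholder, not real photos -- swap in labelled images in practice.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in dataset: 200 "photos" flattened to 64x64 grayscale pixel vectors,
# each labelled 1 (cat) or 0 (no cat).
X = rng.random((200, 64 * 64))
y = rng.integers(0, 2, size=200)

# Hold some images back so we can test the model on photos it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# "Learning" means fitting the model to the labelled examples, so that it
# picks up whatever pixel patterns correlate with the 'cat' label.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The trained model can now guess labels for unseen images.
print("accuracy on unseen images:", model.score(X_test, y_test))
```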

The more data and the longer the training, the better the AI becomes.

Chinese Go player Ke Jie attempts to defeat Google's AlphaGo. Image: Reuters

In learning to play Go, for example, DeepMind’s engineers designed an algorithm that learned by watching and playing games of Go. Once the algorithm could play, it spent the equivalent of endless hours playing against itself and against other players, improving to the point where it could defeat any human Go player in existence. Google’s Waymo self-driving car project employs a similar technique, except that its vehicles do much of their practice on virtual roads.
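
To give a flavour of how self-play works, here’s a toy sketch in Python: a simple reinforcement-learning agent teaches itself the stick game Nim by playing thousands of games against itself, with no human input beyond the rules. This illustrates the principle only; AlphaGo’s actual method combined deep neural networks, tree search and self-play at a vastly greater scale.

```python
# Toy self-play: an agent learns Nim (take 1-3 sticks from a pile of 15;
# whoever takes the last stick loses) purely by playing against itself.
import random
from collections import defaultdict

Q = defaultdict(float)       # Q[(sticks_left, move)] -> learned value of a move
EPSILON, ALPHA = 0.1, 0.1    # exploration rate, learning rate

def choose_move(sticks):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < EPSILON:                    # occasionally explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(sticks, m)])  # otherwise exploit

for _ in range(200_000):                             # endless "virtual" games
    sticks, history = 15, []
    while sticks > 0:
        move = choose_move(sticks)
        history.append((sticks, move))
        sticks -= move
    # The player who took the last stick lost. Walking backwards through the
    # game, the reward alternates sign because the two players take turns.
    reward = -1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# After training, the agent has rediscovered the optimal strategy
# (always leave your opponent a pile of 4n + 1 sticks) by itself.
for s in (15, 14, 6):
    best = max([m for m in (1, 2, 3) if m <= s], key=lambda m: Q[(s, m)])
    print(f"with {s} sticks left, take {best}")
```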

The point we’re trying to make here is that these algorithms (which eventually become AI) reach a point where human input is unnecessary, and the speed of learning increases exponentially. In fact, the goal of AI creators is to build ever more efficient algorithms that learn faster, more efficiently and by themselves: the ultimate self-learning algorithm.

With such a process, we’ll eventually reach a technological singularity, an event where a super-intelligent AI will enter a “runaway reaction” of self-improvement cycles. An “intelligence explosion”, if you will. At this point, this super-intelligence will outstrip human intelligence and usher in an era of technological change unlike any other.

Musk has clearly accepted that AI is inevitable, as is the singularity, but he also believes, and rightly so, that the power of AI in the wrong hands can wreak havoc on the world. Rather than let that power concentrate in the hands of a few (corporations and governments, perhaps), he seems to think that the best defence is to create a super AI first and then hand it over to everyone. His investment in OpenAI furthers that goal.

Musk does not fear the robot; he fears the unregulated algorithm. The Terminators did not destroy humanity; Skynet did.

Whatever else we may think of Musk, his plans and his fears, this is a man who knows more about AI than the average person, and his argument for AI regulation makes a lot of sense. We would do well to at least heed his warnings.


Published Date: Nov 28, 2017 10:57 am | Updated Date: Nov 28, 2017 11:46 am