While Elon Musk worries about a rogue AI killing humans, Mark Zuckerberg focuses on AI's potential benefits

The disagreement between the two billionaires is about how regulatory agencies should approach AI.

Elon Musk has responded to barbs from Mark Zuckerberg over Artificial Intelligence (AI) technologies, asserting that Zuckerberg's knowledge of the subject is "limited". The disagreement between the two billionaires, both heads of major tech companies, is about how regulatory agencies should approach AI. Zuckerberg calls himself an optimist and sees no reason why the development of novel technologies should be held back. Musk, on the other hand, believes that AI is one of the few emerging industries that requires proactive regulation instead of reactive regulation.

Zuckerberg was grilling in his backyard during a Facebook Live session, answering questions from users. One of the questions pointed out that Elon Musk's biggest fear for the future was the way AI would affect humans, and asked Zuckerberg for his thoughts on how AI could affect the world. Scrub to about 50 minutes into the video (35 minutes before the end) to get to the question.

Zuckerberg responded, "I have pretty strong opinions on this. I am really optimistic. Well, I am an optimistic person in general. I think you can build things, and the world gets better. But with AI especially, I am really optimistic, and I think that people who are naysayers and kind of try to drum up these doomsday scenarios are... I just don't understand it, it is really negative. And, in some ways I think it is pretty irresponsible. Because in the next five to ten years, AI is going to deliver so many improvements in the quality of our lives."

"If you think about just safety and health, and keeping people safe, AI is already helping us basically diagnose diseases better, match up drugs with people depending on what they are sick (with), that's the way they get treated better. So, it is helping a whole lot of people getting treated with better healthcare than they would have access to before. If you look at self-driving cars, they are going to be safer than people driving cars. That is only a matter of time, and one of the top causes of death for people is car accidents, still, and you know if you could eliminate that with AI, that is going to be just a dramatic improvement in people's lives," he continues.

Zuckerberg then goes on to address some of the fears that people have about what AI could become in the future, "Whenever I hear people saying you know, 'Oh! AI is going to hurt people in the future', I think yeah, you know technology can generally always be used for good and bad, and you have to be careful about how you build it, careful about what you build and how it is going to be used. But, people who are arguing about slowing down the process of building AI, I just find that really questionable. I have a hard time wrapping my head around that, because if you are arguing against AI, then you are arguing against safer cars that aren't going to have accidents. And, you are arguing against being able to diagnose people when they are sick. I just don't see how in good conscience some people can do that."

The interview that sparked the question to Zuckerberg took place at the 2017 Summer Meeting of the National Governors Association in the United States. The particular question on how Musk sees AI in the future is asked at around 48 minutes into the video. Musk says, "I have exposure to the most cutting edge AI. I think people should be really concerned about it. I keep sounding the alarm bell, but you know, until people see like robots going down the streets killing people, they don't know how to react, you know because it seems so ethereal. I think we should be really concerned about AI."

Elon Musk

Musk then goes on to explain his point that there is a need for appropriate regulation of AI technologies, as they can get disastrously out of hand. "AI is a rare case where I think there should be proactive regulation instead of reactive. I think by the time we are reactive in AI regulation, it's too late. Normally, the way regulations are set up is that a whole bunch of bad things happen, there is public outcry, and then after many years, the regulatory agencies are set up to regulate that industry."

Then came the "doomsday prophecy", the worst-case scenario. Musk said, "There is a bunch of opposition from the companies who don't like being told what to do by regulators, and it takes forever. That, in the past, has been bad, but not something which represented a fundamental risk to the existence of civilisation. AI is a fundamental risk to the existence of the human civilisation. In a way that car accidents, airplane crashes, faulty drugs, or bad food were not. They were harmful to a set of individuals within society of course, but they were not harmful to society as a whole. AI is a fundamental existential risk for the human civilisation, and I don't think people really appreciate that."

In terms of regulation, there are two primary issues with AI. The lesser threat is the potential disruption of the job market. The greater threat is the risk that AI poses to human civilisation. The advent of artificial intelligence has led to a fear of humans losing out on jobs, especially among technophobes. The issue here is that robots can work longer hours at lower cost. Market research firm Gartner has warned that AI could replace highly trained law, medical and IT professionals as soon as 2022. According to UK-based public-services think tank Reform, AI could swallow up to 2.5 lakh (250,000) human jobs by 2030. There are, however, those who dismiss these fears. According to Shanmugh Natarajan, Executive Director and Vice President (Products) at Adobe, AI will play the role of an assistant, enabling humans to work more productively and efficiently. Nasscom VP KS Viswanathan agrees, claiming that AI will actually create more jobs, and that humans will only benefit by having fewer repetitive tasks to do.

Zuckerberg sees a future where humans will have a lot more free time and will primarily engage in creative pursuits instead of slogging away at work. AI stands to bring tremendous benefits in terms of healthcare, road safety, and financial and legal services. Remember the safer, AI-powered cars that Zuckerberg is talking about? Well, Musk makes them. In fact, Musk envisions a future where humans are actually banned from driving cars because of how dangerous it is. Musk is also backing AI technology that promises to utterly transform what it means to be human, removing the need for any medium for two or more individuals to communicate: he is funding Neuralink, a startup that wants to link human brains to computers. The real problem arises when humans are taken out of the equation entirely.

The Taranis combat drone by BAE Systems.

The existential threat to humanity is a little more difficult to comprehend. The United Nations is only starting to formulate policies for lethal autonomous weapons systems (LAWS), armed drones that can operate independently of humans. On a battlefield, there may be factors that make a human show mercy or avoid unnecessary killing. These are judgements that robots are less likely to make, as they depend entirely on the programming or commands provided to them. Kalashnikov, the weapons company that makes the infamous AK-47 rifles, has developed 20-ton tank drones. A Russian-made robot reportedly escaped from its lab and had to be arrested by the police. Put all of these together and you have a recipe for disaster. Suddenly, Musk's scenario of a rogue robot killing people on the streets does not seem so far-fetched.

Musk is by no means the only one trying to sound the alarm bells. The Future of Life Institute lists four existential threats to humans: nuclear weapons, biotechnology, climate change, and artificial intelligence. An open letter by over 8,000 signatories calls for research into how to prevent the pitfalls of AI. Experts in the domain who have signed the letter include Elon Musk, Steve Wozniak, Jen-Hsun Huang, and Stephen Hawking.

