Google's AI charter is a good start, but it will not be enough to stave off the inevitable AI apocalypse

The objectives of the new policy state that Google’s use of AI must be socially beneficial, unbiased and safe. But Google has also left several loopholes.

When the world leader in artificial intelligence (AI) technology partners with the world’s largest military, visions of red-eyed robots laying waste to humanity don’t seem that far-fetched.

For some reason, this didn’t occur to Google. To several thousand of its employees, and to the public at large, it was patently obvious. The trigger was Project Maven.


Project Maven, officially the Algorithmic Warfare Cross-Functional Team, is a US Department of Defense (DoD) project aimed at militarising AI. Under the project, Google would provide the DoD with AI tools to help analyse drone footage and enable military-grade surveillance. The project would use Google’s TensorFlow technology to identify people and objects of interest.
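For readers curious about the underlying technology: detecting people and objects in imagery is a standard TensorFlow workload. The sketch below, assuming a publicly available pretrained detector from TensorFlow Hub, shows roughly what that looks like. Maven’s actual code has not been made public, and the model URL and file name here are illustrative stand-ins, not anything Google has disclosed.

    # A minimal, illustrative object-detection sketch in TensorFlow.
    # This is NOT Project Maven's code (which is not public); the TF Hub
    # model and "frame.jpg" are stand-ins chosen for illustration only.
    import tensorflow as tf
    import tensorflow_hub as hub

    # Load a publicly available pretrained SSD MobileNet detector.
    detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

    # Decode one video frame and add a batch dimension: [1, H, W, 3], uint8.
    frame = tf.io.decode_jpeg(tf.io.read_file("frame.jpg"), channels=3)
    batch = tf.expand_dims(frame, axis=0)

    # The model returns bounding boxes, COCO class IDs and confidence scores.
    results = detector(batch)
    boxes = results["detection_boxes"][0]      # normalised [ymin, xmin, ymax, xmax]
    classes = results["detection_classes"][0]  # e.g. class 1 is "person" in COCO
    scores = results["detection_scores"][0]

    # Report only confident detections.
    for box, cls, score in zip(boxes, classes, scores):
        if score >= 0.5:
            print(f"class={int(cls)} score={float(score):.2f} box={box.numpy()}")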

Google should not be in the business of war

On learning about the project, over 4,000 Google employees signed a petition demanding that Project Maven be cancelled. As reported by The New York Times, employees also demanded that Google announce a policy that it would not “ever build warfare technology.”

Google claimed that Project Maven was “non-offensive”. As reported by Bloomberg News, several Google engineers, who did not wish to be named, disagreed.

This happened in April 2018. By 15 May, as many as a dozen Google employees had quit over the company’s refusal to back down from the project. The employees who quit told Gizmodo that their decision wasn’t centred on Project Maven alone. They feared that Google’s open work culture, which encourages employees to question everything, including policy decisions and business dealings, was being steadily eroded. They also said that company executives now seemed less interested in listening to workers’ objections than before.

Google CEO Sundar Pichai published the charter after weeks of protest from employees forced the company to reconsider its stance on AI. Reuters

Google did not budge. A week later, reports emerged that Google had erased its famous, and unofficial, “Don’t be evil” motto from its code of conduct, replacing it with, as Gizmodo put it, a new “Do the right thing” motto.

A change of heart

Despite Google’s initial reluctance to back off from Maven, something finally seems to have gotten through. Google announced that it would not renew the Project Maven contract when it expires in March 2019 and, more importantly, unveiled a new AI policy statement titled “AI at Google: Our principles”.

The charter, as laid out by CEO Sundar Pichai, describes the company’s attitude towards AI and how its impact will be monitored, limited and controlled.

The objectives of the new policy state that Google’s use of AI must be socially beneficial, unbiased and safe. The aim is to prevent or limit harmful or abusive application of Google’s AI technology.

The new guidelines also state that Google will not pursue technologies that cause overall harm; weapons or technologies designed to “facilitate injury to people”; technologies that violate human rights; or surveillance applications that contravene internationally accepted norms.

Google pledges that its AI will not be used in any application that will harm a human being

A close observer will note that Google has left itself loopholes to continue using its AI technologies to further its business.

“Technologies that gather or use information for surveillance violating internationally accepted norms”, for example, gives Google the leeway to keep gathering data on people for its own purposes, as it already does to serve ads. Notably, the objectives refer only to harm to people; some have noted that this leaves Google free to pursue applications like cyber warfare and weapons that target infrastructure.

Google also states that while its technology will not be used in weapons, it will continue to work with the military in areas like cybersecurity, training, and search and rescue.

That loophole in the charter matters greatly for Google’s business dealings: the company is bidding for two DoD cloud-services contracts, which Bloomberg reports are together worth over $18 billion.

Google’s shareholders will be loath to give up such lucrative contracts, especially considering that the Pentagon’s spending on cloud computing is only going up.

Google’s charter is a good start and, despite the loopholes, a sound basis on which any leading tech company could build its AI policy. But it’s not enough.

AI will take over the world; it’s only a question of time. And when that happens, one can only hope that a well-established ethical and moral framework is already in place to keep the world from falling apart.
