The year 2016 was a big one for AI, or Artificial Intelligence, in the public consciousness. It seemed like people were talking about it everywhere, so surely there’s no need to discuss it further? The thing is, many of the most intelligent people in the world consider the potential creation of true AI to be the most revolutionary development in human history, and they’ve been extremely vocal about both its promise and its perils. So as we start 2017 (a year that some are calling the tipping point for AI), maybe it’s time for the rest of us to start thinking a little harder about it too.
The road to strong AI
Let’s start by clarifying some things — AI already exists in the world, but so far only in the form of very narrow, task-based systems: the ones that live in most of our phones, our maps and, more recently, our translation apps, collectively referred to as narrow AI (or ANI). The breakthrough we’re waiting for is the creation of AGI — Artificial General Intelligence — a system able to tackle many different types of tasks at essentially a human level. Beyond that, the truly revolutionary, world-changing stuff will happen if and when we reach Artificial Super Intelligence (ASI).
Getting from narrow AI up to human level has been a long journey, and nobody is quite sure exactly when it might be completed, though the general consensus is that it should happen at some point within the next 20–30 years. It could, however, happen far sooner than that (and the jump from AGI to ASI could happen more quickly still), and that’s because of one single development more than any other — machine learning.
Learning how to learn
A tech buzzword of the last few years, 'machine learning' essentially refers to the process of getting a computer to teach itself how to perform certain tasks by feeding it huge amounts of data and giving it only minimal human feedback. The theory has been part of computer science for decades, but it’s only recently that we’ve had the processing power and the sheer volume of raw data to make the technique practical (it’s estimated that 90 percent of the information in the world was created in the last two years — Big Data indeed). Machine learning is at the core of almost every current tech company’s narrow AI programs, from Amazon’s recommendations to Facebook’s face-recognition features. Google is a leader in the field, using it to power Translate, Maps and its ubiquitous search algorithms.
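To make the idea concrete, here is a deliberately tiny sketch of that "teach itself from data" process. Nothing here comes from any real product; the rule, the numbers and the function name are all invented for illustration. The program is never told the rule "output = 2 × input"; it only sees examples, and nudges a guess until the examples stop proving it wrong.

```python
# A toy illustration of machine learning: the rule is inferred from
# example data rather than written by a programmer. All names and
# numbers here are invented for illustration.

def learn_slope(examples, steps=1000, lr=0.01):
    """Fit y = w * x to (x, y) pairs by simple gradient descent."""
    w = 0.0  # start with no knowledge of the rule
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y     # how wrong the current guess is
            w -= lr * error * x   # nudge w to shrink the error
    return w

# Training data generated by the hidden rule y = 2x.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]
w = learn_slope(data)
print(round(w, 2))  # converges close to 2.0: learned, not programmed
```

Real systems use millions of examples and millions of adjustable numbers instead of one, but the loop — guess, measure the error, nudge, repeat — is the same basic shape.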
Google was also behind AlphaGo, the AI project that in March 2016 beat the world champion Lee Sedol in four out of five games of the ancient Chinese board game Go. This is a huge deal, because Go is an intricate game with more possible board positions than there are atoms in the observable universe. So instead of simply number-crunching through every possible outcome to determine the best move (which is how computers conquered chess years ago), an AI playing Go had to develop a rudimentary form of intuition, almost in the way an experienced human player does.
When machine learning is applied to the task of developing AI itself, it becomes a recursive loop — a machine that’s constantly learning to become better at learning, and becomes faster and faster at it. It’s this recursive process that could exponentially increase the pace of AI development, and that’s the part that has many people in the AI field voicing their concerns. While it may yet take a decade or more to get close to human-level AI (the flat part of the exponential curve), the jump from ‘near human’ to ‘smarter than any human ever’, and beyond that to ‘so smart that it’s incomprehensible to humans’, could take place in just months or even weeks. Humanity usually isn’t great at adapting quickly (even less so if we suddenly found ourselves no longer the dominant intelligence on Earth), and that’s why it’s so critical to start thinking about the potential outcomes while we can. The countdown has already begun.
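The arithmetic behind that worry is easy to check for yourself. The sketch below assumes made-up numbers — capability doubling each cycle, with every cycle running 1.5× faster than the last — purely to show the shape of the curve, not to predict anything:

```python
# A toy model of recursive self-improvement: each cycle doubles
# capability AND shortens the next cycle. All numbers are invented.

def days_to_reach(target, capability=1.0, cycle_days=365.0, speedup=1.5):
    """Return (total elapsed days, list of cycle lengths) until
    repeated doubling pushes capability past `target`."""
    days = 0.0
    cycles = []
    while capability < target:
        days += cycle_days
        cycles.append(cycle_days)
        capability *= 2          # each cycle doubles capability
        cycle_days /= speedup    # ...and makes the next cycle faster
    return days, cycles

total, cycles = days_to_reach(1000)  # ~10 doublings to 1000x capability
print(len(cycles))           # number of doubling cycles
print(round(cycles[0]))      # the first doubling takes a full year...
print(round(cycles[-1], 1))  # ...the last one takes under ten days
```

The first doubling takes a year; by the tenth, a doubling takes days. That compression of the late cycles is the whole argument for thinking ahead while the curve still looks flat.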
The many, many ways AI could end us
So what does this mean for us in the long run? The simple truth is that nobody knows for sure – super-intelligent AI could be an existential threat to humanity, or it could be a transcendent force. Tim Urban details both extremes in a long but super-compelling piece on his blog Wait But Why. The dangers that AI researchers fear, though, aren’t the typical Hollywood killer-robot scenarios. More than a malicious AI out to ‘kill all humans’, the real threat would probably come from an AI’s indifference to us in pursuit of some poorly programmed core goal (in this context, any AI programmed with core goals that don’t include some version of “ensure that all humans continue to be in a healthy, happy state, and also don’t destroy the planet or its natural resources” counts as poorly programmed).
So let’s say we build AIs well enough that they don’t destroy us accidentally, and we also devise ways to prevent them from actively rising against us (one important piece could be setting out rights and freedoms for artificial intelligences, in a rare instance of humans learning from the abuses of the past). Even then, we’re not out of the woods. Ever since the Industrial Revolution, human society has mostly settled into various forms of capitalist consumer culture, in which people do work of economic value in exchange for money to purchase whatever they want or need. It’s a messy system with many serious flaws, but it has propelled us forward for much of the last few centuries. That system breaks down entirely, however, when machines can perform any task as well as or better than a person — which means that suddenly, most humans have no economic value.
This is far from a theoretical question, as even the basic automation we’ve seen so far, coupled with other market forces like globalisation, has led to rapidly growing inequality as whole categories of jobs become obsolete (a shift that may also be behind the political unrest seen in many countries in 2016). The conventional wisdom has been that as technology erases old (unskilled and manual) jobs, it replaces them with new types of (higher-skilled) work. It’s questionable whether that was ever entirely true in the past, but it certainly won’t be in the future — some estimates say that nearly half of current jobs could disappear within the next 20 years, and that’s based only on current technology and narrow AIs. It won’t just be the manual, routine types of jobs either — machines are already being trained to write news articles and compose music (so if you’re a smug artist thinking that creative jobs are safe — surprise!).
One solution being widely discussed is a ‘Universal Basic Income’. Simply put, a government would pay each citizen enough money to live on, for their whole lives, without any of the conditions and criteria that underpin traditional welfare systems. People could still choose to work in order to afford a better lifestyle, but it would be a matter of individual choice.
There are strong arguments to be made for a universal income, and early pilot programs suggest it might be a viable solution. It’s also a solution that preserves the flexibility of our current capitalist systems, and for many countries that already have strong welfare benefits it’s a logical next step. There are still many questions and stumbling blocks of course, such as how such a program would be funded, but if automation also drastically reduces the costs of production and removes the need to pay people for meaningless work, then a reasonable quality of life could be far more affordable than in the past.
A universal income might only be an interim step though — a way to preserve people’s lifestyles in a world that, while changing rapidly, is still recognisable. But a world with a true superintelligence could very quickly become entirely unrecognisable. After all, a benevolent ASI could potentially halt climate change (by harnessing new energy sources and devising ways to safely cool the planet), eradicate disease (by developing new gene-manipulation techniques and bio-nanotechnology), end starvation (new farming techniques and lab-grown produce? Easy!), and relieve practically every kind of human suffering. We could end up living in a world of absolute abundance, where the concepts of income and earning are themselves obsolete. How ironic would it be if the destiny of capitalist corporate culture (and its relentless drive for automation in search of profit) was to make itself redundant?
The promise of ASI society
This might sound completely alien — after all, work challenges us and gives us something to pursue, and we all enjoy the status that comes with a well-earned bonus, a nice house or a fancy car. But if those things were as ubiquitous as air and water, they’d cease to be status symbols, and we’d find other goals. Perhaps we’d judge each other on who can write the best poetry, or play the best cricket shot (it wouldn’t matter that a machine might be able to do it better, just as it doesn’t matter today that driving is faster than running — the Olympics still exist). Our goal might simply become the pursuit of knowledge for its own sake. Buckminster Fuller may have been far more prescient than we realised when he said, “The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.”
If this sounds like some kind of utopian vision of society from Star Trek, well, that’s because we could actually get to that point (except that instead of depending on a heroic Captain Kirk to save the day, the starship’s ASI computer would have anticipated and pre-empted any alien conflicts). And meanwhile, the rest of us on Earth could choose to spend our time however we pleased as artists, athletes, philosophers, or wanderers. If that’s the end of the world that AI might bring, then just tell me where to sign up.
Updated Date: Jan 01, 2017 08:28 AM