By Calum Chace | April 27, 2018
The media today is full of stories about artificial intelligence, and there is universal agreement that it is a very big deal. But ironically, most people are not paying close attention. This is probably because the stories are confused and confusing. Some say that robots will take all our jobs and then turn into murderous Terminators. Others say that is all hype, and there is much less going on than meets the eye.
And so most people shudder slightly, shrug their shoulders and get on with the business of living. And who can blame them?
When you pull back from the headlines, much of the difference between the two camps is about timing. No, robots will not take all our jobs by 2019, but can we be so sanguine about 2039? And while few AI researchers agree with Ray Kurzweil’s confident prediction that we will create the first artificial general intelligence (an AI with all the cognitive abilities of an adult human) by 2029, surveys indicate that most of them think it likely to happen this century.
There are many reasons to be excited about AI and what it will do for us and to us. It is already making the world more intelligible, and making our products and services more capable and more efficient. This progress will continue – at an exponential rate. If we are smart and perhaps a bit lucky, we can make our world a truly wonderful place.
There are also many reasons to be concerned about AI. People worry about privacy, transparency, security, bias, inequality, isolation, killer robots, oligopoly and algocracy. These are all important issues, but none of them is likely to throw our civilisation into reverse gear, much less destroy us completely. There are two issues which could do precisely that: the technological and the economic singularities.
The technological singularity is the moment when (and if) we create an artificial general intelligence which continues to improve its cognitive performance and becomes a superintelligence. If we succeed in ensuring that the first superintelligence really, really likes humanity - and understands us better than we understand ourselves - then the future of humanity is glorious almost beyond imagination. The solutions to all our major problems should be within our grasp, including poverty, illness, war and even death. If we don’t manage that ... well, the outcome could be a lot less cheerful. Ensuring that we do manage it is probably the single most important task facing us this century – and perhaps ever, along with not blowing ourselves up with nuclear weapons, or unleashing a pathogen which kills everyone.
Before we reach the technological singularity we will probably experience the economic singularity – the point when we have to accept that most people can no longer get jobs, and we need a new type of economy. The stakes here are not so high. If we mis-manage the transition, it is unlikely that every human will die. (Not impossible, though, as in the turmoil, someone might initiate a catastrophic nuclear war.) Civilisation would presumably regress, perhaps drastically, but our species would survive to try again. Trying again is something we are good at.
On the other hand, assuming it is coming at all, the economic singularity is coming sooner than the technological singularity. The technological singularity is more important but less urgent, while the economic singularity is less important but more urgent.
The economic singularity is not here yet. The impact of cognitive automation is being felt in modest ways here and there, but the US, the UK, and many other leading economies are close to full employment because there are still plenty of jobs that humans can do. (Some of them don’t pay very well, but there are jobs.) This will not last.
Self-driving cars will be ready for prime time in five years or so. When they arrive, inexorable economic logic dictates that professional drivers will start to be laid off rather quickly. At the same time, most other sectors of the economy will be seeing the effects of advanced AI. The outcome can be wonderful – a world where machines do the boring jobs could be one where humans get on with the important parts of life: exploring, learning, playing, socialising, having fun. But it is not obvious how to get from here to there: we need a plan, and we need to communicate that plan to avoid a dangerous panic.
It will probably take at least five years to develop that plan and generate a consensus around it. So we have to start now. We need to set up think tanks and research institutes all over the world, properly funded and staffed full-time by smart people with diverse backgrounds and diverse intellectual training. In the context of the importance of the challenge, the resources required are trivial - probably a few tens of millions of dollars - but raising them will nonetheless require significant political support.
At the moment, our politicians and policy makers are distracted. The US is understandably mesmerised by the antics of the 45th President, and in the UK, Brexit has swallowed the political class whole. Other countries have their own distractions, and the pain of the recession which started in 2008 endures. Artificial intelligence is poised to create the biggest changes humanity has ever been through, and yet it hardly featured at all in recent elections.
But the race is far from run. Politicians do respond to the public mood. (The most talented ones anticipate it slightly, although they are careful not to get too far ahead of us, or we sack them.) If we demand they pay attention to the coming impact of AI, they will. It is time to make that demand, and you can help. Talk to your friends and colleagues about this: get the conversation going. Insist that your political representatives pay attention.
A wonderful world can be ours if we rise to the challenges posed by the exponential growth of our most powerful technology, and navigate the two singularities successfully. Let’s grasp that wonderful future!