"Calum's Rule"

Disagreements that suggest profound differences of philosophy sometimes turn out to be merely a matter of timing: the parties don't disagree about whether a thing will happen; they just disagree about how long it will take. For instance, timing is at the root of apparently fundamental differences of opinion about the technological singularity.

Elon Musk is renowned for his warnings about superintelligence:

“With artificial intelligence, we are summoning the demon. You know all those stories where there's the guy with the pentagram and the holy water, and he's like, yeah, he's sure he can control the demon? Doesn't work out.”

“We are the biological boot-loader for digital super-intelligence.”

Comments like this have attracted fierce criticism:

“I don't work on not turning AI evil today for the same reason I don't worry about the problem of overpopulation on the planet Mars.” (Andrew Ng)

"We're very far from having machines that can learn the most basic things about the world in the way humans and animals can do. Like, yes, in particular areas, machines have superhuman performance, but in terms of general intelligence, we're not even close to a rat. This makes a lot of questions people are asking themselves premature." (Yann LeCun)

"Superintelligence is beyond the foreseeable horizon." (Oren Etzioni)

If you dig a little deeper, these people don't disagree with Musk that superintelligence is possible, even likely, or that its arrival could be an existential threat to humans. What they disagree about is the likely timing, and the difference isn't as great as you might think. Ng thinks, “There could be a race of killer robots in the far future,” but he doesn't specify when. LeCun seems to think it could happen this century: “if there were any risk of [an “AI apocalypse”], it wouldn't be for another few decades in the future.” And Etzioni's comment was based on a survey in which most respondents set the minimum time frame at a mere 25 years. As Stephen Hawking famously wrote, “If a superior alien civilisation sent us a message saying, ‘We'll arrive in a few decades,' would we just reply, ‘OK, call us when you get here—we'll leave the lights on'? Probably not.”

Although it is less obvious, I suspect a similar misunderstanding is at play in discussions about the other singularity — the economic one, the possibility of technological unemployment and what comes next. Martin Ford is one of the people warning us that we may face a jobless future:

“A lot of people assume automation is only going to affect blue-collar people, and that so long as you go to university you will be immune to that … But that's not true, there will be a much broader impact.”

The opposing camp includes most of the people running the tech giants:

“People keep saying what happens to jobs in the era of automation? I think there will be more jobs, not fewer.”

“… Your future is you with a computer, not you replaced by a computer...”

“[I am] a job elimination denier.” (Eric Schmidt)

“There are many things AI will never be able to do… When there is a lot of artificial intelligence, real intelligence will be scarce, real empathy will be scarce, real common sense will be scarce. So, we can have new jobs that are actually predicated on those attributes.” (Satya Nadella)

For perfectly good reasons, these people mainly think in time horizons of up to five years, maybe ten at a stretch. And in that time, they are surely right to say that technological unemployment is unlikely. For machines to throw us out of a job, they have to be able to do it cheaper, better, and/or faster. Automation has been doing that for centuries: elevator operator and secretary are very niche occupations these days. When a job is automated, the employer's process becomes more efficient. This creates wealth, and wealth creates demand, and thus new jobs. This will continue to happen — unless and until the day arrives when the machines can do almost all the work that we do for money.

If and when that day arrives, any new jobs created as old ones are destroyed will be taken by machines, not humans. And our most important task as a species at that point will be to figure out a happy ending to that particular story.

Will that day arrive, and if so, when? People often say that Moore's Law is dead or dying, but it isn't true. It has been evolving ever since Gordon Moore noticed, back in 1965, that his company was putting twice as many transistors on each chip every year. (In 1975 he adjusted the time to two years, and shortly afterward it was adjusted again, to eighteen months.) The cramming of transistors has slowed recently, but we are seeing an explosion of new types of chips, and Chris Bishop, the head of Microsoft Research in the UK, argues that we are seeing the start of a Moore's Law for software: “I think we're seeing … a similar, singular moment in the history of software … The rate-limiting step now is … the data, and what's really interesting is the amount of data in the world is — guess what — it's growing exponentially! And that's set to continue for a long, long time to come.”

So there is plenty more Moore and plenty more exponential growth. The machines we have in 10 years will be 128 times more powerful than the ones we have today. In 20 years, they will be 8,000 times more powerful, and in 30 years, a million times more powerful. If you take the prospect of exponential growth seriously, and you look far enough ahead, the idea that machines will do pretty much all the things we do for money cheaper, better, and faster than us becomes a distinct possibility that is hard to dismiss.
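As a rough illustration of the arithmetic behind those figures, here is a minimal back-of-envelope sketch in Python. It assumes, purely for illustration, that computing power doubles roughly every eighteen months and rounds each horizon to a whole number of doublings:

# A back-of-envelope sketch of the doubling arithmetic behind those figures.
# Assumption (for illustration only): computing power doubles roughly every
# 18 months, and each horizon is rounded to a whole number of doublings.

DOUBLING_PERIOD_YEARS = 1.5

def relative_power(years):
    """Relative computing power after `years`, rounded to whole doublings."""
    doublings = round(years / DOUBLING_PERIOD_YEARS)
    return 2 ** doublings

for horizon in (10, 20, 30):
    print(f"In {horizon} years: ~{relative_power(horizon):,}x today's power")

# Prints roughly 128x, 8,192x (the "8,000 times" above), and 1,048,576x
# ("a million times" above).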

So I would like to propose a new rule, and with no superfluous humility, I'm calling it Calum's Rule:

“All forecasts should specify the time frame.”

If we all follow this injunction, I suspect we will disagree much less. (And in case that counts as a forecast, the time frame is immediate: let’s start now.)


https://twitter.com/cccalum

https://uk.linkedin.com/in/calum-chace-bb68168

http://www.pandoras-brain.com/

Calum Chace, columnist, is an author and speaker on artificial intelligence. Calum writes about how, in the course of this century, AI will change pretty much everything about being human. His books: “The Economic Singularity” addresses the coming wave of cognitive automation; “Surviving AI” looks further ahead to the arrival of strong AI (aka human-level AI, or artificial general intelligence), which will lead to superintelligence; and “Pandora's Brain” is a novel about the first superintelligence on earth.

In Calum's 30-year business career, he's worked in, and consulted to, hundreds of businesses. He began his career with the BBC, then BP, and later worked as a strategy consultant with KPMG. Calum has also founded and helped to run a series of entrepreneurial businesses.