Decades of Fearing Automation but Hoping for Augmentation
Our guesses, for the most part, have been off the mark.
I’m reading Jill Lepore’s book If/Then about the origins of analyzing human behavior data with computers. One interesting aspect of it is the automation paranoia arising from the introduction of the IBM 704 mainframe computer in 1954 (the year I was born). The book even includes an image from an automation-focused campaign leaflet for John F. Kennedy’s 1960 presidential campaign.
Further research led to a website that discussed automation concerns, and a newspaper story from four days before my birth that could have been published today with minor changes in technology and gender wording (emphases from the blog post author, Matt Novak):
What future competition the office worker will meet from the mechanical brain still seems to be in doubt. In the United States, live office employees are more than holding their own, the Labor Department reports. They now number about eight million, 64 per cent more than in 1940. The department predicts clerical employment will continue to expand despite all the automatic files, and cash registers, adding machines and “thinkers” coming on the market.
However, the International Labor Organization has reported in a worldwide survey, the transfer of work from men to machines is proceeding in offices much faster than it did in industry. Office machine production in the United States especially is booming, sales being about four times what they were before the war. As a result, says the report, many businesses, including especially banks and insurance companies, are reducing office personnel. The report predicts that in the long run world demand for such jobs will exceed the supply.
It is likely that the office worker, while continuing in demand, will, like the horse and the bicycle, take on new functions as machines absorb more office routine. Hence those expecting to compete with the office robots will need more diversified education. They would do well to acquire a few skills that the robots cannot duplicate.
So the concern that computers and software are going to take our jobs has been around for a long time. I’ve been focused on it for more than a decade. At about this time in 2016, my old friend and co-author Julia Kirby and I were planning the launch of our new book, Only Humans Need Apply: Winners and Losers in the Age of Smart Machines. It was not the opening salvo on the question of whether and when AI would replace human workers. A year earlier, Stanford lecturer Jerry Kaplan had published Humans Need Not Apply, suggesting that human jobs were about to become obsolete (they haven’t), and futurist Martin Ford had published Rise of the Robots, which posited that even burger flippers would soon lose their jobs (they haven’t). Even earlier, in 2013, two Oxford researchers published a widely cited paper called “The Future of Employment,” which used a technical analysis of tasks and AI capabilities to predict that 47% of US jobs were “automatable” with AI (they haven’t been automated).
Our book was one of the first to argue in favor of augmentation, but it didn’t sell terribly well. I think this is a problem for optimistic books in general; at least, none of my optimistic books have sold well, and I suspect people prefer scary books to reassuring ones. It wasn’t the writing, either: Julia is a great writer, and the book was definitely well-written.
But it turns out that in addition to being reassuring we were also correct—at least for a decade. We argued that augmentation of workers with AI and vice-versa was both the most likely and the smartest way to think about AI and jobs. Over the last ten years augmenting work with AI has been far more common than automating jobs out of existence. Even though we did not anticipate the rise of generative AI in our book, that powerful technology has helped hundreds of millions of people do their jobs more quickly and (in some cases) effectively, but by and large they are still employed.
In the book (and in an article preceding it by a year) we also described five “steps” that motivated humans could take to collaborate effectively with AI:
“Step in”—keep your regular job but learn how AI works and apply it to your tasks (I discussed this here recently)
“Step up”—oversee the application of AI to the business
“Step aside”—pick a job that doesn’t require the use of AI
“Step narrowly”—pick a job that could be automated but is too rare to be worth the trouble
“Step forward”—build and maintain AI systems
We suggested a decade ago that most jobs would involve “stepping in,” which has indeed turned out to be the case. We also argued that “stepping aside” would become increasingly difficult as AI became more capable, which also turned out to be true. We were probably wrong, however, in advocating for “stepping narrowly”: it turns out that almost any job has tasks that can be automated, and automating them is not that difficult or expensive.
Aside from that relatively minor mistake, our guesses about what would happen in the decade after the book was published were more or less correct. But that’s what they were—guesses. All of the prognostications thus far about the impact of AI on the job market have been guesses as well, and as Miguel Paredes and I argue in a recent article, they’ve all been wrong.
We concluded that article with a call for fewer predictions and more careful observation and description of what’s actually happening in the workplace. Some of that is beginning to happen. The Stanford working paper called “Canaries in the Coal Mine,” suggesting reduced hiring of entry-level workers in a couple of jobs, is a good example. We just have to be careful not to over-generalize such findings; there are some entry-level workers (e.g., digital marketers) who are doing pretty well.
Steve Miller and I did some observation of 29 AI-augmented workers in our book Working with AI. Things seemed to be going well for them, and almost none thought there would be large-scale automation of their jobs anytime soon. But as we noted in the heading of the last section of the last chapter, “If the Singularity Comes, All Bets Are Off.”
I will make one prediction, however. Getting back to the JFK campaign leaflet about automation: if the current level of hype, fear, and (much less) actual AI-driven job loss continues, that issue will be much more prominent in the 2028 presidential election than it was in the 1960 one. And it should be!
Tom Davenport, PhD
Dr. Tom Davenport, a world-renowned thought leader and author, is the President’s Distinguished Professor of Information Technology and Management at Babson College, a Fellow of the MIT Center for Digital Business, and an independent senior advisor to Deloitte’s Chief Data and Analytics Officer Program.
The author or co-author of 25 books and more than 300 articles, Tom helps organizations transform their management practices in digital business domains such as artificial intelligence, analytics, information and knowledge management, process management, and enterprise systems.
He's been named:
- A "Top Ten Voice in Tech" on LinkedIn in 2018
- The #1 voice on LinkedIn among the "Top Ten Voices in Education 2016"
- One of the top 50 business school professors in the world in 2012 by Fortune magazine
- One of the 100 most influential people in the technology industry in 2007 by Ziff-Davis
- The third most important business/technology analyst in the world in 2005 by Optimize magazine
- One of the top 25 consultants in the world in 2003