COGNITIVE WORLD

What’s the Likely Long Term Evolution of AI?

Source: Irving Wladawsky-Berger, Cognitive World Think Tank Member

Since the advent of the Industrial Revolution, general purpose technologies (GPTs) have been the defining technologies of their times. Their ability to support a large variety of applications can, over time, radically transform economies and social institutions. GPTs have great potential from the outset, but realizing their potential takes large tangible and intangible investments and a fundamental rethinking of firms and industries, including new processes, management structures, business models, and worker training. As a result, realizing the potential of a GPT takes considerable time, often decades. Electricity, the internal combustion engine, computers, and the internet are all examples of historically transformative GPTs.

For example, after the introduction of electric power in the early 1880s, it took companies about four decades to figure out how to restructure their factories to harness electric power with manufacturing innovations like the assembly line. It took even longer to develop highly popular electric household products like refrigerators, toasters, washing machines, and air conditioners.

The internet’s precursor, ARPANET, was initially developed by the US Department of Defense in the late 1960s for military and research applications. It wasn’t until the 1990s that the internet was opened up to companies and consumers in the wider commercial marketplace, becoming the most significant platform for innovation the world has ever seen and ushering in a new 21st-century digital economy.

Similarly, artificial intelligence first came to light in the mid-1950s as a promising new academic discipline that aimed to develop intelligent machines capable of human-like tasks such as understanding natural language and playing chess. AI became one of the most exciting areas in computer science in the 1960s, ’70s, and ’80s, but after years of unfulfilled promises and hype, a so-called AI winter of reduced interest and funding set in everywhere and nearly killed the field.

AI was successfully reborn in the 1990s with a totally different data-driven paradigm based on analyzing large amounts of data with sophisticated algorithms and powerful computers. Data-centric AI has continued to advance over the past 20 years with major innovations in a number of areas like big data, predictive analytics, machine learning algorithms, and, more recently, large language models (LLMs) and generative chatbots. AI has now emerged as one of the defining technologies of the 21st century, if not the key one.

Over the past two and a half centuries, the emergence of each new historically transformative technology has been accompanied by fears of job losses through automation. But each time those fears arose, technological advances ended up creating more jobs over time than they destroyed. Such automation fears have understandably accelerated in recent years, as our increasingly smart machines are applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans.

In the spring of 2018, then-MIT president Rafael Reif commissioned an MIT-wide task force to address the impact of AI on jobs, economies, and society. After working for two years, the task force released its final report, “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines,” in November 2020.

The report’s overriding finding was, “Technological change is simultaneously replacing existing work and creating new work. No compelling historical or contemporary evidence suggests that technological advances are driving us toward a jobless future. On the contrary, we anticipate that in the next two decades, industrialized countries will have more job openings than workers to fill them, and that robotics and automation will play an increasingly crucial role in closing these gaps. Nevertheless, the impact of robotics and automation on workers will not be benign. These technologies, in concert with economic incentives, policy choices, and institutional forces, will alter the set of jobs available and the skills they demand.”

“The momentous impacts of technological change are unfolding gradually,” was a second major task force finding. “Indeed, the most profound labor market effects of new technology that we found were less due to robotics and AI than to the continuing diffusion of decades-old (though much improved) technologies like the internet, mobile and cloud computing, and mobile phones,” said the report. “This time scale of change provides the opportunity to craft policies, develop skills, and foment investments to constructively shape the trajectory of change toward the greatest social and economic benefit.”

The MIT task force conducted its work between 2018 and 2020, prior to the impressive advances and explosive market interest in LLMs and generative chatbots. ChatGPT, released by OpenAI on November 30, 2022, has propelled AI into a whole new level of expectations, some realistic, some mostly hype. It’s been accompanied by an AI gold rush that’s attracting lots of attention from startups and investors. As a result, there’s an expectation that generative AI will advance and mature considerably faster than originally anticipated for such a complex, new technology.

Given the recent advances and investments in generative AI, how will it likely impact the longer-term evolution of AI, especially when compared to previous historically transformative technologies?

While previous technological revolutions have been accompanied by similar gold rushes (remember the 1990s internet dot-com bubble), AI may well be in a class by itself because of the serious concerns that have been raised about the potential impact of machines that may equal or surpass human levels of intelligence.

Some believe that generative AI will accelerate the evolution toward artificial general intelligence (AGI), when AI will be capable of performing just about any human task, possibly even better than humans over time. Such a prospect is accompanied by fears that an increasingly powerful, highly intelligent, out-of-control AI could lead to unforeseeable changes in human civilization and become an existential threat to humanity. While previous technologies have mostly raised fears about job automation, none has engendered the kind of existential fears raised by the prospect of AI achieving the so-called singularity.

How realistic are these fears?

First, can we expect the impact of generative AI to unfold much faster than expected, instead of gradually as has been the case with previous transformative technologies? I honestly doubt it.

A June 2023 McKinsey study, “The Economic Potential of Generative AI: The Next Productivity Frontier,” analyzed the potential economic impact of generative AI and concluded that “While generative AI is an exciting and rapidly advancing technology, the other applications of AI discussed in our previous report continue to account for the majority of the overall potential value of AI. Traditional advanced analytics and machine learning algorithms are highly effective at performing numerical and optimization tasks such as predictive modeling, and they continue to find new applications in a wide range of industries. However, as generative AI continues to develop and mature, it has the potential to open wholly new frontiers in creativity and innovation.”

In other words, most of the economic impact of AI in the near to medium term will likely come from the more mature, better-understood versions of AI like analytics and machine learning, rather than from the much newer, less well-understood, and highly complex versions like generative AI. The McKinsey conclusion is similar to the finding of the MIT Work of the Future task force that the most profound economic impacts of technological advances were primarily due to the diffusion of older technologies like the internet, mobile, and cloud computing rather than leading-edge AI.

And, as the MIT study noted, a slower time scale of change provides the opportunity to craft the proper policies and regulations to mitigate AI’s downsides and to constructively shape the trajectory of change in support of its greatest economic and social benefits.

There’s much, much work to be done. The consumer applications envisioned for generative AI and chatbots, such as AI assistants, mentors, tutors, coaches, advisors, and therapists, are even more complicated and less understood than the business processes where, as identified in the recent McKinsey study, generative AI could have the greatest near- and mid-term economic impact: customer operations, marketing and sales, software engineering, and research and development.

Business processes are much better understood than human tasks and behaviors. I suspect that we will make great progress in improving fairly well-understood tasks like search with generative AI, and there will likely be major innovations in more focused, personalized applications for paralegals, nurse practitioners, math tutors, and job assistants. But more general personal applications, like therapists and career coaches, will likely take significantly longer to develop because we don’t really understand how humans perform those jobs.

Finally, all the research I’ve seen on historically transformative technologies has found that the more complex the technology, the longer it will take to realize its marketplace potential due to the increased investments and the major restructuring of industries, economies, and enterprises it will likely require. Generative AI, LLMs, and chatbots are very complex technologies since we cannot explain how their huge neural networks reach their decisions in terms a human can understand. In the end, AI will keep advancing and improving by relying on the intelligence of its human developers and collaborators rather than on its own.


Irving Wladawsky-Berger is a Research Affiliate at MIT's Sloan School of Management and at Cybersecurity at MIT Sloan (CAMS), and a Fellow of the Initiative on the Digital Economy, MIT Connection Science, and the Stanford Digital Economy Lab.