Machine Learning Has an AI Problem
Source: Irving Wladawsky-Berger, CogWorld think tank member
“Machine learning has an AI problem,” wrote author Eric Siegel in a recent Harvard Business Review (HBR) article, “The AI Hype Cycle is Distracting Companies.” “With new breathtaking capabilities from generative AI released every several months — and AI hype escalating at an even higher rate — it’s high time we differentiate most of today’s practical ML projects from those research advances. This begins by correctly naming such projects: Call them ML, not AI.” Placing ML initiatives under the AI umbrella, he argued, “oversells and misleads, contributing to a high failure rate for ML business deployments. For most ML projects, the term AI goes entirely too far — it alludes to human-level capabilities.”
Siegel’s article brought to mind similar concerns many of us had in the 1990s, amid all the frenzy and hype surrounding the then fast-growing Internet, which eventually led to what became known as the dot-com bubble. Let me explain.
In December of 1995 I was named general manager of IBM’s new Internet Division. Our main job was to figure out IBM’s overall Internet strategy and to coordinate the various Internet-oriented efforts across the company. I personally spent considerable time working on the impact of the Internet at two different levels: a near-term business strategy focused on the marketplace and our customers, and a more research-oriented effort to understand the longer-term impact of the Internet on the economy and society.
At the time, a lot was starting to happen around the Internet, but it wasn’t clear where things were heading, and in particular what the implications would be for the world of business and for our clients. We worked through 1996 to figure out what our strategy should be, and towards the end of the year we formulated what became known as IBM’s e-business strategy.
We essentially said that the Internet was the beginning of a profound business revolution with the potential to alter the shape of companies, industries and economies over time. But it was an incremental, not a rip-and-replace revolution. The universal reach and connectivity of the Internet were enabling access to information and transactions of all sorts for anyone with a browser and an Internet connection. Any business, by integrating its existing databases and applications with a web front end, could now reach its customers, employees, suppliers and partners at any time of the day or night, no matter where they were. Businesses were thus able to engage in their core activities in a much more productive and efficient way.
At the same time, I was also quite involved in trying to understand the long-term impact of the Internet on the economy and society. In particular, from 1997 to 2001 I was one of 24 members of the President’s Information Technology Advisory Committee (PITAC). PITAC conducted a number of studies on future directions for federal support of R&D. In February of 1999 we released a major report, “Information Technology Research: Investing in Our Future,” in which we wrote:
“The technical advances that led to today’s information tools, such as electronic computers and the Internet, began with Federal Government support of research in partnership with industry and universities. These innovations depended on patient investment in fundamental and applied research. … As we approach the 21st century, the opportunities for innovation in information technology are larger than they have ever been — and more important. We have an essential national interest in ensuring a continued flow of good new ideas and trained professionals in information technology.”
In his HBR article, Siegel expressed his concerns that the escalating generative AI hype was distracting companies from focusing on more immediate, practical, and concrete ML use cases. “You might think that news of major AI breakthroughs would do nothing but help machine learning’s (ML) adoption,” he wrote. “If only. Even before the latest splashes — most notably OpenAI’s ChatGPT and other generative AI tools — the rich narrative about an emerging, all-powerful AI was already a growing problem for applied ML. That’s because for most ML projects, the buzzword AI goes too far. It overly inflates expectations and distracts from the precise way ML will improve business operations.”
“Most practical use cases of ML — designed to improve the efficiencies of existing business operations — innovate in fairly straightforward ways. Don’t let the glare emanating from this glitzy technology obscure the simplicity of its fundamental duty: the purpose of ML is to issue actionable predictions, which is why it’s sometimes also called predictive analytics.”
ML translates into practical use cases and tangible value for existing business operations. Its predictions drive millions of operational decisions, such as which credit card transactions are fraudulent and which customers are likely to cancel a service. It is these practical use cases that deliver the greatest impact.
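To make the “actionable predictions” framing concrete, here is a minimal sketch of what such a deployment typically boils down to, assuming a scikit-learn setup: a model trained on historical customer records scores each customer’s likelihood to cancel, and an existing retention process decides whom to contact. The synthetic data, feature count, and flagging threshold are illustrative assumptions, not details from Siegel’s article.

```python
# Minimal, illustrative sketch of ML as "predictive analytics": score each
# customer's probability of cancelling so an existing retention process can
# act on it. Model choice, class balance, and threshold are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for past customer records: numeric features (usage, tenure,
# billing history, ...) plus a label for whether the customer cancelled.
X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The operational output is not "intelligence" but a probability per customer.
churn_scores = model.predict_proba(X_test)[:, 1]
print(f"Holdout AUC: {roc_auc_score(y_test, churn_scores):.3f}")

# A plain business rule then consumes the prediction, e.g. flag customers
# above an (arbitrary) threshold for a retention offer.
flagged = churn_scores > 0.5
print(f"Customers flagged for outreach: {flagged.sum()} of {len(flagged)}")
```

The point of the sketch is how unglamorous it is: a classifier, a score per customer, and a business rule that acts on the score.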
The problem is that most people conceive of ML as AI, a reasonable misunderstanding. “But AI suffers from an unrelenting, incurable case of vagueness — it is a catch-all term of art that does not consistently refer to any particular method or value proposition. Calling ML tools AI oversells what most ML business deployments actually do.” Calling something AI “invokes the notion of artificial general intelligence (AGI), software capable of any intellectual task humans can do.”
The term AI cannot escape AGI because it “is generally thrown around without clarifying whether we’re talking about AGI or narrow AI, a term that essentially means practical, focused ML deployments. Despite the tremendous differences, the boundary between them blurs in common rhetoric and software sales materials.”
ML-based projects often lack a concrete focus on exactly how they will render business processes more effective. “As a result, most ML projects fail to deliver value. In contrast, ML projects that keep their concrete operational objective front and center stand a good chance of achieving that objective.”
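One way to read “keeping the concrete operational objective front and center” in engineering terms is that the success criterion is written down as a number the model either hits or misses on held-out data. Below is a minimal sketch of such a check, assuming a hypothetical fraud-review use case and an illustrative 0.80 precision target; none of these specifics come from Siegel’s article.

```python
# Illustrative sketch of a measurable operational objective for an ML project,
# in contrast with the unmeasurable goal of building "intelligence".
from sklearn.metrics import precision_score

def meets_operational_target(y_true, y_pred, target_precision=0.80):
    """Return True if flagged transactions are precise enough to hand to a
    fraud-review team, per a pre-agreed (hypothetical) service target."""
    return precision_score(y_true, y_pred) >= target_precision

# Ten holdout transactions (1 = fraud) and the model's flags.
y_true = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [0, 1, 0, 0, 1, 0, 0, 0, 0, 1]
print(meets_operational_target(y_true, y_pred))  # precision is 2/3, so False
```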
According to the AI Myths website, the first use of the term artificial intelligence appeared in the 1955 proposal for a summer research workshop held at Dartmouth College in 1956, which said:
“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” Needless to say, the wildly ambitious summer study was unable to accomplish its aim, nor has that aim been achieved more than 65 years later.
The AI Myths website adds that there was significant disagreement at the 1956 Dartmouth workshop about calling the new field artificial intelligence. Two of its most prominent participants, Carnegie Mellon professors Allen Newell and Herb Simon, disagreed with the term and proposed instead calling it complex information processing, but ultimately the term artificial intelligence was favored as a more appealing name for the new field.
“The problem is with the word intelligence itself,” wrote Siegel. “When used to describe a machine, it’s relentlessly nebulous. That’s bad news if AI is meant to be a legitimate field. Engineering can’t pursue an imprecise goal. If you can’t define it, you can’t build it. To develop an apparatus, you must be able to measure how good it is — how well it performs and how close you are to the goal — so that you know you’re making progress and so that you ultimately know when you’ve succeeded in developing it.”
But what if we define AI as software that can perform a highly complex task that traditionally required a human, such as driving a car, mastering chess, or recognizing human faces? “It turns out that this definition doesn’t work either because, once a computer can do something, we tend to trivialize it. After all, computers can manage only mechanical tasks that are well-understood and well-specified. Once surmounted, the accomplishment suddenly loses its charm and the computer that can do it doesn’t seem intelligent,” a paradox known as the AI Effect: once a computer can do it, it’s just a computation, no longer real intelligence. AI is thus redefined to mean whatever machines haven’t done yet.
“Therein lies the problem for typical ML projects,” wrote Siegel in conclusion. “By calling them AI, we convey that they sit on the same spectrum as AGI, that they’re built on technology that is actively inching along in that direction. AI haunts ML. It invokes a grandiose narrative and pumps up expectations, selling real technology in unrealistic terms. This confuses decision-makers and dead-ends projects left and right.”
“But there’s a better way forward, one that’s realistic and that I would argue is already exciting enough: running major operations — the main things we do as organizations — more effectively! Most commercial ML projects aim to do just that. For them to succeed at a higher rate, we’ve got to come down to earth. If your aim is to deliver operational value, don’t buy AI and don’t sell AI. Say what you mean and mean what you say. If a technology consists of ML, let’s call it that. … Otherwise, the danger is clear and present: When the hype fades, the overselling is debunked, and winter arrives, much of ML’s true value proposition will be unnecessarily disposed of along with the myths, like the baby with the bathwater.”
Irving Wladawsky-Berger is a Research Affiliate at MIT's Sloan School of Management and at Cybersecurity at MIT Sloan (CAMS) and Fellow of the Initiative on the Digital Economy, of MIT Connection Science, and of the Stanford Digital Economy Lab.