Posts in Irving Wladawsky-Berger
The Coming AI Revolution

“The Coming AI Economic Revolution: Can Artificial Intelligence Reverse the Productivity Slowdown?” was recently published in Foreign Affairs by James Manyika and Michael Spence, two authors I’ve long admired. James Manyika is senior VP of research, technology and society at Google, after serving as chairman and director of the McKinsey Global Institute from 2009 to 2022. Michael Spence, a co-recipient of the 2001 Nobel Prize in Economics, is professor in economics and business at NYU’s Stern School of Business, and was previously professor of management and dean of the Stanford Graduate School of Business.

The Current State and Future Trends of Enterprise Blockchain Technologies

“Blockchain is one of the major tech stories of the past decade,” said a December 2022 McKinsey article, “What is Blockchain.” “Everyone seems to be talking about it — but beneath the surface chatter there’s not always a clear understanding of what blockchain is or how it works. Despite its reputation for impenetrability, the basic idea behind blockchain is pretty simple,” namely: “Blockchain is a technology that enables the secure sharing of information.”
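The McKinsey definition — secure sharing of information — rests on one simple mechanism: each block commits to a cryptographic hash of the block before it, so altering any earlier record breaks every later link. Here is a minimal sketch of that hash-linking (my own illustration, not code from the McKinsey article), using Python’s standard hashlib:

```python
import hashlib
import json

def hash_block(block):
    """Deterministically hash a block's contents with SHA-256."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain, data):
    """Append a new block that commits to the previous block's hash."""
    prev_hash = hash_block(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev_hash})
    return chain

def is_valid(chain):
    """A chain is valid if every block records its predecessor's true hash."""
    return all(
        chain[i]["prev_hash"] == hash_block(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
assert is_valid(chain)

# Tampering with any earlier block breaks every later link.
chain[0]["data"] = "Alice pays Bob 500"
assert not is_valid(chain)
```

Real blockchains add distributed consensus, digital signatures, and replication across many nodes, but the tamper-evidence that makes the shared record trustworthy comes from exactly this chain of hashes.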

Open Source AI: Opportunities and Challenges

“Since the 1980s, open source has grown from a grassroots movement to a vital driver of technological and societal innovation,” said “Standing Together on Shared Challenges,” a report published by Linux Foundation Research in December 2023. “The idea of making software source code freely available for anyone to view, modify, and distribute comprehensively transformed the global software industry. But it also served as a powerful new model for collaboration and innovation in other domains.”

What’s the Likely Long Term Evolution of AI?

Since the advent of the Industrial Revolution, general purpose technologies (GPTs) have been the defining technologies of their times. Their ability to support a large variety of applications can, over time, radically transform economies and social institutions. GPTs have great potential from the outset, but realizing that potential requires large tangible and intangible investments and a fundamental rethinking of firms and industries, including new processes, management structures, business models, and worker training. As a result, realizing the potential of a GPT takes considerable time, often decades. Electricity, the internal combustion engine, computers, and the internet are all examples of historically transformative GPTs.

Machine Learning Has an AI Problem

“Machine learning has an AI problem,” wrote author Eric Siegel in a recent Harvard Business Review (HBR) article, “The AI Hype Cycle Is Distracting Companies.” “With new breathtaking capabilities from generative AI released every several months — and AI hype escalating at an even higher rate — it’s high time we differentiate most of today’s practical ML projects from those research advances. This begins by correctly naming such projects: Call them ML, not AI. Including all ML initiatives under the AI umbrella oversells and misleads, contributing to a high failure rate for ML business deployments. For most ML projects, the term AI goes entirely too far — it alludes to human-level capabilities.”

Can an AI System Exhibit Commonsense Intelligence?

“One of the fundamental limitations of AI can be characterized as its lack of commonsense intelligence: the ability to reason intuitively about everyday situations and events, which requires rich background knowledge about how the physical and social world works,” wrote University of Washington professor Yejin Choi in “The Curious Case of Commonsense Intelligence,” an essay published in the Spring 2022 issue of Dædalus. “Trivial for humans, acquiring commonsense intelligence has been considered a nearly impossible goal in AI,” added Choi.

Large Language Models: A Cognitive and Neuroscience Perspective

Over the past few decades, powerful AI systems have matched or surpassed human levels of performance in a number of tasks such as image and speech recognition, skin cancer classification, breast cancer detection, and highly complex games like Go. These AI breakthroughs have been based on increasingly powerful and inexpensive computing technologies, innovative deep learning (DL) algorithms, and huge amounts of data on almost any subject. More recently, the advent of large language models (LLMs) is taking AI to the next level. And, for many technologists like me, LLMs and their associated chatbots have introduced us to the fascinating world of human language and cognition.

The World Ahead 2023: Grappling with an Unpredictable World

A few weeks ago, The Economist published “The World Ahead 2023,” its 37th annual year-end issue that examines the trends and events that will likely shape the coming year. Two years ago, “The World in 2021” said that we should expect unusual uncertainty in the coming year, given the interactions between the still-flourishing covid-19 pandemic, an uneven economic recovery, and fractious geopolitics. Last year, “The World Ahead 2022” said that 2022 would be a year of adjusting to new realities in areas like work and travel being reshaped by the pandemic, and as deeper trends like the rise of China and accelerating climate change reasserted themselves.

Will the Metaverse and AR/VR Headsets Be IT’s Next Big Thing?

There is little question that the metaverse and AR/VR headsets are important trends to watch in the coming years. As The Economist wrote in a November 2021 article, “as computers have become more capable, the experiences which they generate have become richer. The internet began its life displaying nothing more exciting than white text on a black background.” The last major advance in user interfaces took place in the 1980s, when text interfaces gave way to graphical user interfaces (GUIs). GUIs were first developed at Xerox PARC in the late 1970s and later popularized by the Apple Macintosh in the ’80s. In the 1990s, GUIs were embraced by just about every PC and user device, and GUI-based Web browsers played a major role in the explosive growth of the internet.

The AI Maturity Framework

I recently attended a seminar, “The Art of AI Maturity,” by Accenture executives Philippe Roussiere and Praveen Tanguturi as part of MIT’s Initiative on the Digital Economy (IDE) lunch seminar series. The seminar was based on their recently published article “The Art of AI Maturity: Advancing from Practice to Performance.” “Today, so much of what we take for granted in our daily lives stems from machine learning,” wrote the authors in the article’s executive summary. “Every time you use a wayfinding app to get from point A to point B, use dictation to convert speech to text, or unlock your phone using face ID ... you’re relying on AI. And companies across industries are also relying on — and investing in — AI to drive logistics, improve customer service, increase efficiency, empower employees and so much more.”

The 2022 State of AI in the Enterprise

“Rapidly transforming, but not fully transformed — this is our overarching conclusion on the market, based on the fourth edition of our State of AI in the Enterprise global survey,” said “Becoming an AI-fueled organization,” the fourth survey conducted by Deloitte since 2017 to assess the adoption of AI across enterprises. “Very few organizations can claim to be completely AI-fueled, but a significant and growing percentage are starting to display the behaviors that can get them there.”

The Promise & Peril of Human-Like Artificial Intelligence

In his seminal 1950 paper, “Computing Machinery and Intelligence,” Alan Turing proposed what’s famously known as the Turing test: a test of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. If a human at a keyboard cannot tell whether they are interacting with a machine or with another human, the machine is considered to have passed the Turing test. “Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers and entrepreneurs,” wrote Erik Brynjolfsson, Stanford professor and Director of the Stanford Digital Economy Lab, in a recent article, “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence.”

Foundation Models: AI’s Exciting New Frontier

Over the past decade, powerful AI systems have matched or surpassed human levels of performance in a number of specific tasks such as image and speech recognition, skin cancer classification and breast cancer detection, and highly complex games like Go. These AI breakthroughs have been based on deep learning (DL), a technique loosely based on the network structure of neurons in the human brain that now dominates the field. DL systems acquire knowledge by being trained with millions to billions of texts, images and other data instead of being explicitly programmed.

How Can You Trust the Predictions of a Large Machine Learning Model?

Artificial intelligence has emerged as the defining technology of our era, as transformative over time as the steam engine, electricity, computers, and the Internet. AI technologies are approaching or surpassing human levels of performance in vision, speech recognition, language translation, and other human domains. Machine learning (ML) advances, like deep learning, have played a central role in AI’s recent achievements, giving computers the ability to be trained by ingesting and analyzing large amounts of data instead of being explicitly programmed.
