One of the downsides to the recent revival of interest in Artificial Intelligence (AI) is that many vendors, professional services firms, and end users are jumping on the AI bandwagon, labeling their technologies, products, service offerings, and projects as AI without that necessarily being the case. On the other hand, there isn’t a well-accepted delineation between what is definitely AI and what is definitely not AI.
Perhaps it is best to start with the overall goals of what we’re trying to achieve with Artificial Intelligence, rather than definitions of what AI is or isn’t. Since the beginning of the AI movement in the 1950s, the goal has been to build intelligent systems that mimic human cognitive abilities. This means the ability to perceive and understand their surroundings, learn from training and their own experiences, make decisions based on reasoning and thought processes, and develop “intuition” for situations that are vague and imprecise; basically, the world in which we live. For sure, it’s easy to classify the movement toward Artificial General Intelligence (AGI) as an AI initiative. After all, AGI efforts are attempting to create systems that have all the cognitive capabilities of humans, and then some.
On the flip side, simply automating things doesn’t make them intelligent, as we’ve written and spoken about many times. It may be complicated to train a computer to tell an image of a cat from an image of a horse, or even to distinguish different species of trees, but that doesn’t mean the system understands what it is looking at, learns from its own experiences, or makes decisions based on that understanding. Similarly, a voice assistant can process your speech when you ask it “What weighs more: a ton of carrots or a ton of peas?”, but that doesn’t mean the assistant understands what you are actually talking about or the meaning of your words. So, can we really argue that these systems are intelligent?
In our most recent podcast with MIT Professor Luis Perez-Breva, he argues that while these various complicated training and data-intensive learning systems are most definitely Machine Learning (ML) capabilities, that does not make them AI capabilities. In fact, he argues, most of what is currently being branded as AI in the market and media is not AI at all, but rather just different versions of ML in which systems are trained to do a specific, narrow task using one of various approaches to ML, of which Deep Learning is currently the most popular. He argues, and we agree, that if you’re trying to get a computer to recognize an image, you can feed it enough data, and with the magic of math, statistics, and neural nets that weigh different connections more or less heavily over time, you’ll get the results you expect. But what you’re really doing is using human understanding of what the image is to create a large data set that can then be mathematically matched against inputs to verify what the human already understands.
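The mechanics being described, adjusting connection weights over many passes until human-supplied labels are reproduced, can be made concrete with a minimal sketch. This is not Professor Perez-Breva’s example; it is our own toy illustration using logistic regression (a very simple “one-connection-layer” learner) on entirely hypothetical, hand-made feature data.

```python
import numpy as np

# Hypothetical data: each row is an image already reduced to two numeric
# features; the labels (1 = cat, 0 = horse) come from human judgment.
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.7, 0.1],   # "cat" examples
              [0.1, 0.9], [0.2, 0.8], [0.3, 0.7]])  # "horse" examples
y = np.array([1, 1, 1, 0, 0, 0])

w = np.zeros(2)   # connection weights, adjusted "more or less over time"
b = 0.0

for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of "cat"
    grad_w = X.T @ (p - y) / len(y)      # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                    # gradient-descent weight update
    b -= 0.5 * grad_b

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds.tolist())  # → [1, 1, 1, 0, 0, 0]
```

Note what happened: the system ends up echoing the human labels it was given. Nothing in the weights constitutes an understanding of what a cat is, which is precisely the point being made above.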
How Does Machine Learning relate to AI?
The view espoused by Professor Perez-Breva is not isolated or outlandish. In fact, when you dig deeper into these arguments, it’s hard to argue against the claim that the narrower the ML task, the less AI-like it in fact is. However, does that mean that ML doesn’t play a role in AI at all? Or, at what point can you say that an ML effort is an AI effort in the sense we discussed above? If you read the Wikipedia entry on AI, it will tell you that, as of 2017, the industry generally accepts that “successfully understanding human speech, competing at the highest level in strategic game systems, autonomous cars, intelligent routing in content delivery network and military simulations” can be classified as AI systems.
However, the line between intelligence and mere math or automation is a tricky one. If you decompose any intelligent system, even the much-vaunted AGI goal, it will look like just bits and bytes, decision trees, databases, and mathematical algorithms. Similarly, if you decompose the human brain, it’s just a bunch of neurons firing along electrochemical pathways. Are humans intelligent? Are zebras intelligent? Are bacteria intelligent? Where’s the delineation of intelligence among living organisms? Perhaps intelligence is not a truly well-defined thing, but rather an observation of a system that exhibits certain behaviors. One of those behaviors is perceiving and understanding its surroundings; another is learning from experiences and making decisions based on those experiences. In this light, Machine Learning definitely forms a part of what is necessary to make AI work.
Over the past 60+ years there have been many approaches and attempts to get systems to understand their surroundings and learn from their experiences. These approaches have included decision trees, association rules, artificial neural networks (of which Deep Learning is one such approach), inductive logic, support vector machines, clustering, similarity and metric learning (including nearest-neighbor approaches), Bayesian networks, reinforcement learning, genetic algorithms (and related evolutionary computing approaches), rules-based machine learning, learning classifier systems, sparse dictionary approaches, and more. For the layman, we want to stress that AI is not interchangeable with ML, and ML is not interchangeable with Deep Learning. But ML supports the goals of AI, and Deep Learning is one way to do certain aspects of ML. Or to put it another way, doing machine learning is necessary, but not sufficient, to achieve the goals of AI, and Deep Learning is one approach to ML that may not be sufficient for all ML needs.
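To underline that Deep Learning is only one of the approaches in that list, here is a minimal sketch of another: nearest-neighbor classification, which learns nothing more than “answer like your closest examples.” The data and labels below are hypothetical, chosen only for illustration.

```python
import math
from collections import Counter

# Hypothetical training examples: (feature vector, human-supplied label).
train = [((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"),
         ((0.1, 0.9), "horse"), ((0.2, 0.8), "horse")]

def knn_predict(point, k=3):
    """Label a point by majority vote among its k nearest training examples."""
    nearest = sorted(train, key=lambda ex: math.dist(point, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((0.85, 0.25)))  # → cat
```

No neural network, no training loop, yet this is every bit as much “machine learning” as a deep net, which is exactly why equating ML with Deep Learning (or either with AI) is a mistake.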
What Parts of AI are not Machine Learning?
It’s an interesting exercise to think about how you, as an adult human, have gained the intelligence you have now. In some instances, you learned simply from being part of your environment, such as learning how gravity works, how to speak to others and understand what they are saying, and what the cultural norms are. In other instances, you learned in an academic environment from instructors who knew a particular abstract subject area, such as math or physics, and in still other instances you learned by repeating a particular task over and over again to get better at it, such as music or sports. Wait a second, isn’t this all just different kinds of learning, and therefore, isn’t this all just Machine Learning if we look at it from an AI perspective? Yes and no.
Some say that machine learning is a form of pattern recognition: understanding when a particular pattern occurs in nature, in experience, or through the senses, and then acting on that recognition. When you look at it from that perspective, it becomes clear that the learning part must be paired with an action part. Decisions and reasoning are not just applying the same response to the same patterns over and over again. If that were the case, then all we’d be doing is using ML to automate better. Given the same inputs and feedback, the robot will perform the same action. But do humans really work that way? We experiment with different outcomes. We weigh alternatives. We respond differently when we’re stressed than when we’re relaxed. We prioritize. We think ahead and consider the potential outcomes of a decision. We play politics, and we don’t always say what we want to say. And the big one: we have emotions. We have self-consciousness. We have “awareness”. All of these things move us beyond the task of learning into the world of perceiving, acting, and behaving. These are the frontiers of AI.
The Moving Threshold of Intelligence
In reading this piece, we hope that you are thinking hard about Machine Learning and AI, how they relate to each other, and whether or not specific ML activities are accomplishing the goals of what we aim to achieve with AI. Likewise, we hope that those at the extremes of the AI spectrum, whether they consider only AGI to be truly AI or, at the opposite pole, consider any ML effort to be AI, will reconsider their perspectives. The technology industry continues to iterate on ML and to address problems previously considered too complicated and difficult. We can think of this like evolution. In the beginning, the Earth was just a hot soup of chemicals and organic molecules. Over time, these molecules organized, became more orderly, and were able to reproduce, respond to their surroundings, and combine into more complicated organisms. At some point, this organic soup became a collection of intelligent beings. When and how that happened is still a matter of unresolved science and philosophy. Yet, here we are. Similarly, as the collection of ML activities matures, some are definitely not AI-like at all (and we will keep calling out ML efforts that are not AI), while others are moving the industry down the path of AI. Eventually we’ll start to see the evolved technology organisms that have long been the goal of AI.