Machine Learning for Data Science vs. ML for Artificial Intelligence

By Ron Schmelzer  |  June 14, 2018
Columnist Ron Schmelzer is Senior Analyst and Founder at Cognilytica

If AI is to be a useful term that helps delineate one set of technologies and approaches from others, then it has to be meaningful. A term that means everything to everyone means nothing to anyone. In “Is Machine Learning Really AI?” I explored what I believe AI has to mean to be useful. In summary, AI systems need to be able to sense and understand their environment, learn from past behaviors and apply that learning to future behaviors, and adapt to new circumstances by reasoning from experience and generating new learning from those new circumstances and experiences. Professor Alex Wissner-Gross further defines intelligence as the ability to increase your future freedom of action and to determine, on an individual basis, which future outcomes you want based on your actions.

As defined in the above article, Machine Learning is the set of technologies and approaches that provide a means by which computer systems can encode learning and then apply future information to that learning to come to conclusions. Clearly, Machine Learning is a prerequisite for AI, but it is necessary, not sufficient. Likewise, not all ML systems operate in the context of what we’re trying to achieve with AI.

So, Which Parts of ML Are Not AI?

In the above linked article, I talk about what parts of AI are not ML, but I didn’t dive into what parts of ML are not AI. There seem to be two divergent perspectives on ML. Some say that even the narrowest form of AI is still AI. Since we have not yet achieved Artificial General Intelligence (AGI), despite some attempts to get us close, all practical implementations of AI in the field are narrow AI of one form or another. I find this reductio ad absurdum unhelpful. It’s not useful to put a data science effort that uses random decision forests (a form of ML) to achieve a very specific learning outcome on the same level as attempts to build systems that can learn and adapt to new situations.

On the other hand, I'm in the camp with those who say that forms of predictive analytics that use the methods of Machine Learning are indeed ML projects, but they are not AI projects in themselves. In essence, using ML techniques to learn one narrow, specific application, where the trained model cannot be applied to different situations and has no way to evolve or adapt to new situations, is not an AI-focused ML project. It’s ML without the AI. Hopefully this Venn diagram is helpful as a way of explaining which parts of ML are contributory to AI and which parts are not:

The Data Science Revolution: ML for Predictive Analytics

Part of why we’re seeing a resurgence of interest in the field of AI is not only the development of better algorithms for Machine Learning (notably Deep Learning), but also the sheer quantity of data we now have and the processing power to deal with it. However, perhaps one of the more overlooked parts of this AI renaissance is that over the past several years the entire field of Big Data emerged to deal with the voluminous amounts of data coming from internet, mobile, and all manner of networked systems. Not only did the Big Data revolution bring about new ways of managing large data stores, but it also helped usher in the fields of data science and data engineering, which provide insight into the hidden value in that data and better methods for manipulating large data sets.

It’s no wonder that the methods and techniques of Machine Learning appeal to data scientists who previously had to make do with ever more advanced SQL and other data queries. ML provides a wide array of techniques, algorithms, and approaches to gain insight, provide predictive power, and further enhance the value of data in the organization, elevating data to information, and then to knowledge.

However, what differentiates many data science-driven ML projects is that the models being built and the scope of the projects are narrowly constrained to a single issue, such as credit card fraud detection. Indeed, this fascinating Quora exchange between data scientists makes it clear that these ML approaches are being used to solve narrow problems of predictive analytics, not the greater challenges of AI. In this way, these ML projects are not AI projects, but rather predictive analytics projects. We can call this “ML for Predictive Analytics”.
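As a toy illustration of just how narrow such a model can be, here is a hypothetical sketch in plain Python: it “learns” a single threshold on transaction amount from labeled examples. Everything here (the feature, the data, the function names) is invented for illustration; a real fraud system would use far richer features and real algorithms such as the random decision forests mentioned above.

```python
# Hypothetical sketch: a one-feature "fraud" classifier learned from labeled
# transaction amounts. The learned artifact is a single threshold -- useful
# for exactly this prediction task and nothing else.

def train_threshold(amounts, labels):
    """Pick the amount threshold that best separates fraud (1) from not-fraud (0)."""
    best_t, best_correct = None, -1
    for t in sorted(set(amounts)):
        # Count how many training examples this threshold classifies correctly.
        correct = sum((a >= t) == bool(y) for a, y in zip(amounts, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

def predict(threshold, amount):
    return 1 if amount >= threshold else 0

# Toy training data: in this invented set, large transactions are fraudulent.
amounts = [12.0, 30.0, 45.0, 900.0, 1200.0, 2500.0]
labels  = [0,    0,    0,    1,     1,      1]

t = train_threshold(amounts, labels)
print(predict(t, 15.0), predict(t, 1500.0))  # → 0 1
```

The learned artifact answers exactly one question about one kind of data; nothing about it transfers to a different task, which is the sense in which this is ML without the AI.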

Likewise, there are other narrow applications of ML for specific single-task usage, such as forms of Optical Character Recognition (OCR) and even forms of Natural Language Processing (NLP) and Natural Language Generation (NLG), where ML approaches are used to extract valuable information from handwriting or speech. We’ve had OCR and NLP solutions for decades, and yet prior to this new AI summer, vendors never called those systems AI. In this way, we can’t consider many forms of OCR and NLP, even ones that use ML approaches, to be AI. Rather, these technologies have to be enabling some greater goal to be considered AI.

So, What Is Considered ML in the Context of AI?

Clearly, in order to make AI work we need ML, but we don’t need ML models narrowly built for something like credit card fraud detection to make intelligent systems work. Rather, what we need are ML systems that enable AI efforts to learn not only the specific models they are taught, but also a framework by which these systems can learn on their own. ML in the context of AI emphasizes not only self-learning, but also the idea that this learning can be applied to new situations and circumstances that might not have been explicitly modeled, trained, or learned before. In many ways, this sort of continuous, expanding learning is the goal of adaptive systems.
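To make the contrast concrete, here is a toy, hypothetical sketch (my own invention, not drawn from the article): a model trained once and then frozen versus one that keeps updating its estimate as new observations arrive. Real adaptive systems are of course far more sophisticated; the point is only that the adaptive one continues learning after deployment, while the frozen one cannot respond to a shifting environment.

```python
# Toy illustration: a frozen model trained once vs. an "adaptive" estimator
# that keeps learning from every new observation. Both just estimate a mean,
# but only one of them can track a change in the environment.

class FrozenModel:
    def __init__(self, training_data):
        self.mean = sum(training_data) / len(training_data)  # trained once, never updated
    def predict(self):
        return self.mean

class AdaptiveModel:
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def observe(self, x):
        # Incremental (online) mean update: learning continues in deployment.
        self.n += 1
        self.mean += (x - self.mean) / self.n
    def predict(self):
        return self.mean

frozen = FrozenModel([1.0, 1.0, 1.0])
adaptive = AdaptiveModel()
for x in [1.0, 1.0, 1.0, 9.0, 9.0, 9.0]:  # the environment shifts midway
    adaptive.observe(x)

print(frozen.predict(), adaptive.predict())  # → 1.0 5.0
```

The frozen model keeps reporting the world as it was at training time; the adaptive one has moved toward the new regime, which is the (much simplified) spirit of ML in the context of AI.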

Adaptation and self-learning are key not only to handling the explicit problems of today, but also to handling the unknown problems of tomorrow. This sort of learning is how we humans, and many other creatures, pick up new skills, learn from peers, and apply what we learn in one situation to another. ML systems built in this way support these goals for AI and are fundamentally more complex and sophisticated than their narrower, single-task ML brethren. The key insight is that it’s not the algorithm that determines whether ML is used in an AI context, but rather the way it is applied and the sophistication of the learning systems that surround those algorithms.