The Case for Explainable Artificial Intelligence (XAI)

By Ganesh Padmanabhan  |  October 1, 2017  |  Source: LinkedIn


The internet is ablaze with news of Artificial Intelligence (AI) being an existential threat, after reports that Facebook shut down a bot framework whose agents had developed their own language that humans couldn't understand.

The case for XAI


I got a call from my mom this morning asking if this is the kind of stuff I'm involved in, and whether I'm trying to eradicate humanity in the process! With all the conspiracy theories swirling, I wanted to pen down my thoughts on how organizations should look at this.

AI is like any other disruptive technology in one regard: most folks don't understand it yet, those who do can't be sure how well they understand it, and we are all figuring this out as a market. The Facebook story does show one real aspect of AI: what can happen when you build machines that can develop capabilities better than you can. That is Nick Bostrom's point in Superintelligence about building machines that can build better machines. Language development is a complex process, so my first thought after reading the article was that this is a remarkable evolutionary step for machines (or AI programming) in itself!

But AI is different from everything else we have done in the past: it presents never-before-seen opportunities to expand human potential, particularly in how we deploy our cognitive functions. (Read my previous blogs on this topic on why AI, and why now.) For organizations, this is an opportunity to pull exponentially ahead of the competition, or be left geometrically behind! I'll explore more on that in a later blog.

AI is still an emerging field, and there are many implications of how it intertwines with the human world that we have yet to fully grasp. But crying foul is not the way to solve that. Nor is regulating an early market that has yet to mature and where fast innovation is critical.

One way to build confidence, for individuals and organizations alike, is to promote, develop, and demand more transparent and explainable AI. For early-adopter organizations serious about applying AI to transform their business processes, whether through chatbots, knowledge-worker automation, or new connected customer experiences, the one non-negotiable should be transparency into what the AI is doing. Black-box AI is what the vast majority of startups ship as their MVPs and products. So if you are evaluating AI products or services, ask the vendor or partner: Can they stand behind it? Can the AI explain what it did and why it did it? Can it give you evidence behind its recommendations, actions, and alerts? Is there an audit log? (The sketch after this paragraph shows what such a decision record might look like.) This is one thing I'm super proud of: being part of a team whose AI, enterprise software, and commercialization background made this a non-negotiable from day one.
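To make the audit-log question concrete, here is a minimal sketch in Python of what a decision record with supporting evidence might look like. Everything here is illustrative: the function, field names, and the "claims-screener" example are hypothetical, not any vendor's actual API.

```python
# Illustrative sketch only: one possible shape for an auditable AI
# decision record. All names and fields here are hypothetical.
import json
import uuid
from datetime import datetime, timezone

def audited_decision(model_version, inputs, prediction, evidence):
    """Wrap a model output with the evidence and metadata an auditor
    would need to reconstruct why the decision was made."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced this
        "inputs": inputs,                 # what the model saw
        "prediction": prediction,         # what it recommended
        "evidence": evidence,             # why: top contributing factors
    }
    # Append-only log; in production this would go to durable storage.
    with open("decision_audit.log", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Example: a claim-screening recommendation with its supporting evidence.
audited_decision(
    model_version="claims-screener-1.4",
    inputs={"claim_amount": 12500, "prior_claims": 3},
    prediction={"action": "flag_for_review", "score": 0.87},
    evidence=[("prior_claims", 0.52), ("claim_amount", 0.31)],
)
```

The point is not the specific fields but the discipline: every recommendation, action, or alert leaves behind enough context to answer "what did it do, and why?" after the fact.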

So what is Explainable AI (XAI)? It's basically the kind you can explain to your mom: why something happened when it happened. A more academic version is here from DARPA: https://www.darpa.mil/program/explainable-artificial-intelligence

The picture below defines it well in simple terms:

[Figure: Machine Learning]

It's simple: explain what your AI did and how it arrived at a decision. I know industry leaders are mixed on the value; Google's research director Peter Norvig, for one, has questioned it, pointing to how opaque human decision-making already is. Personally, I don't agree.
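To show what "explain what it did and why" can mean in the simplest case, here is a toy Python sketch of an inherently interpretable model: a linear scorer whose per-feature contributions are the explanation. The weights and features are made up for illustration.

```python
# A toy illustration of "the AI explains what it did and why": a linear
# scoring model whose per-feature contributions can be read off directly.
# Weights and features are invented for the example.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = 0.1

def score_with_explanation(applicant):
    """Return a score plus each feature's contribution to it."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    score = BIAS + sum(contributions.values())
    # The explanation is the ranked list of what pushed the score up or down.
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, explanation

score, why = score_with_explanation(
    {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
)
print(f"score={score:.2f}")
for feature, contribution in why:
    print(f"  {feature}: {contribution:+.2f}")
```

Real systems use richer techniques (decision trees, post-hoc attribution, and so on), but the principle is the same: the output comes with the factors that produced it.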

For organizations exploring how to make AI a strategic capability, this should be non-negotiable. The value Explainable AI (XAI) brings goes beyond better and faster adoption: it enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent systems.

But for organizations, there is a bigger reason this should be a priority. Imagine an insurance company defending a claim denial made by an AI to a workers' council. Imagine a healthcare system that used AI to identify the right treatment plan for a patient, defending a lawsuit alleging unnecessary medical procedures. Or a bank that used AI to drive sanctions screening, just getting through a regulatory audit. The cost, legal, and reputational risks around compliance are too large for organizations to ignore. On top of this, the dangers of rogue AI are too catastrophic to ignore.

So, if you are an organization looking to leverage AI, don't just plan for Explainable AI: make it a design tenet, demand it from your partners, and make it a priority.

The world is scary enough as it is. We need all the focus on AI we can muster to push us toward a better future, expanding human horizons and making the world a better place. Making AI more explainable takes us one step closer.