Understanding Explainable AI

As artificial intelligence becomes an increasing part of our daily lives, from the image and facial recognition systems popping up in all manner of applications to machine learning-powered predictive analytics, conversational applications, autonomous machines and hyperpersonalized systems, we are finding that the need to trust these AI-based systems with all manner of decision-making and predictions is paramount. AI is finding its way into a broad range of industries such as education, construction, healthcare, manufacturing, law enforcement and finance. The decisions and predictions being made by AI-enabled systems are becoming much more profound and, in many cases, critical to life, death and personal wellness. This is especially true for AI systems used in healthcare, driverless cars or even drones deployed in war.

However, most of us have little visibility into, and little knowledge of, how AI systems make the decisions they do, and as a result, how those results are being applied in the various fields where AI and machine learning are being used. Many machine learning algorithms cannot be examined after the fact to understand specifically how and why a decision was made. This is especially true of the most popular algorithms currently in use, namely deep learning neural network approaches. If we are to trust the decisions of AI systems, we must be able to understand how those decisions are being made; this lack of explainability hampers our ability to fully trust them. We want computer systems to work as expected and to produce transparent explanations and reasons for the decisions they make. This is known as Explainable AI (XAI).

Making the Black Box of AI Transparent with Explainable AI (XAI)

Explainable AI (XAI) is an emerging field in machine learning that aims to address how the black box decisions of AI systems are made. It inspects and tries to understand the steps and models involved in making decisions. Owners, operators and users therefore expect XAI to answer pressing questions such as: Why did the AI system make a specific prediction or decision? Why didn't the AI system do something else? When did the AI system succeed and when did it fail? When does the AI system give enough confidence in a decision that you can trust it, and how can the AI system correct errors that arise?

One way to gain explainability in AI systems is to use machine learning algorithms that are inherently explainable. For example, simpler forms of machine learning such as decision trees, Bayesian classifiers and other algorithms that have a certain amount of traceability and transparency in their decision-making can provide the visibility needed for critical AI systems without sacrificing too much performance or accuracy. More complicated, but potentially more powerful, algorithms such as neural networks and ensemble methods, including random forests, sacrifice transparency and explainability for power, performance and accuracy.
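To make that contrast concrete, the short sketch below is a hypothetical illustration, assuming Python with scikit-learn and its bundled Iris dataset (none of which appear in the article). It trains a small decision tree and prints its complete decision rules, the kind of built-in traceability simpler models offer.

```python
# Hypothetical sketch: an inherently explainable model whose entire decision
# logic can be printed as human-readable rules. Assumes Python with
# scikit-learn installed; the Iris dataset is used purely for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# Every prediction can be traced back to an explicit path of threshold tests.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

A deep neural network trained on the same data offers no comparably direct readout of why it classified a given sample the way it did, which is the trade-off described above.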

However, there is no need to throw out the deep learning baby with the explainability bathwater. Recognizing the need to provide explainability for deep learning and other more complex algorithmic approaches, the US Defense Advanced Research Projects Agency (DARPA) is pursuing efforts to produce explainable AI solutions through several funded research initiatives. DARPA describes AI explainability in three parts: prediction accuracy, meaning models will explain how conclusions are reached in order to improve future decision-making; decision understanding and trust from human users and operators; and inspection and traceability of the actions undertaken by AI systems. Traceability enables humans to get into AI decision loops and to stop or control those systems' tasks whenever the need arises. An AI system is expected not only to perform a certain task or render decisions but also to provide a transparent report of why it reached specific conclusions.
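One widely used family of post-hoc techniques, offered here only as an illustrative sketch and not as a description of any DARPA program, is the global surrogate model: a simple, readable model trained to mimic a black box's predictions so that its behavior can be inspected and questioned. The sketch below assumes Python with scikit-learn; the random forest stands in for an opaque model, and the dataset is purely illustrative.

```python
# Illustrative sketch of a global surrogate model, a common post-hoc
# explainability technique (hypothetical example, not from the article).
# An opaque random forest is approximated by a small decision tree trained
# on the forest's own predictions, which can then be read directly.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
bb_predictions = black_box.predict(X)

# The surrogate learns to reproduce the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_predictions)

# "Fidelity": how closely the readable surrogate tracks the black-box model.
print("Surrogate fidelity:", accuracy_score(bb_predictions, surrogate.predict(X)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The surrogate is only an approximation; its fidelity score indicates how far its rules can be taken as an explanation of the underlying model, which is why inspection and traceability must be paired with measures of trust.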

Levels of Explainability and Transparency

So far, there is only early research and work on making deep learning approaches to machine learning explainable. However, it is hoped that sufficient progress can be made so that we can have power and accuracy as well as the required transparency and explainability. The actions of AI should be traceable to a certain level, and that level should be determined by the consequences that can arise from the AI system. Systems whose decisions carry more serious, or even deadly, consequences should face significant explanation and transparency requirements so that everything can be understood when anything goes wrong.

Not all systems need the same level of transparency. While it might not be possible to standardize algorithms or even XAI approaches, it might certainly be possible to standardize levels of transparency and explainability as requirements. Product recommendation systems, for example, carry relatively low risk and so might accept a lower level of transparency. On the other hand, medical diagnosis systems or autonomous vehicles might require greater levels of explainability and transparency. There are efforts through standards organizations to arrive at common, standard understandings of these levels of transparency to facilitate communication between end users and technology vendors.

Organizations also need to have governance over the operation of their AI systems. Oversight can be achieved through the creation of committees or bodies to regulate the use of AI. These bodies will oversee AI explanation models to prevent the rollout of flawed systems. As AI becomes more deeply embedded in our lives, explainable AI becomes even more important.


https://www.linkedin.com/in/rschmelzer/

http://www.cognilytica.com

Ronald Schmelzer, columnist, is senior analyst and founder of the Artificial Intelligence-focused analyst and advisory firm Cognilytica. He is also the host of the AI Today podcast, an SXSW Innovation Awards judge, founder and operator of TechBreakfast demo-format events, and an expert in AI, machine learning, enterprise architecture, venture capital, startup and entrepreneurial ecosystems, and more. Prior to founding Cognilytica, Ron founded and ran ZapThink, an industry analyst firm focused on Service-Oriented Architecture (SOA), Cloud Computing, Web Services, XML, & Enterprise Architecture, which was acquired by Dovel Technologies in August 2011.

Ron is a parallel entrepreneur, having started and sold a number of successful companies. The companies Ron has started and run have collectively employed hundreds of people, raised over $60M in venture funding and produced exits in the millions. Ron was founder and chief organizer of TechBreakfast – the largest monthly morning tech meetup in the nation, with over 50,000 members and 3000+ attendees at the monthly events across the US, including Baltimore, DC, NY, Boston, Austin, Silicon Valley, Philadelphia, Raleigh and more.

He was also founder and CEO at Bizelo, a SaaS company focused on small business apps, and was founder and CTO of ChannelWave, an enterprise software company which raised $60M+ in VC funding and was subsequently acquired by Click Commerce, a publicly traded company. Ron founded and was CEO of VirtuMall and VirtuFlex from 1994-1998, and hired the CEO before it merged with ChannelWave.

Ron is a well-known expert in IT, Software-as-a-Service (SaaS), XML, Web Services, and Service-Oriented Architecture (SOA). He is well regarded as a startup marketing & sales adviser, and is currently a mentor & investor in the TechStars seed stage investment program, where he has been involved since 2009. In addition, he is a judge of the SXSW Interactive Awards and has served on standards bodies such as RosettaNet, UDDI, and ebXML.

Ron is the lead author of XML and Web Services Unleashed (SAMS 2002) and co-author of Service-Orient or Be Doomed (Wiley 2006) with Jason Bloomberg. Ron received a B.S. degree in Computer Science and Engineering from the Massachusetts Institute of Technology (MIT) and an MBA from Johns Hopkins University.