HR and Explainable AI: A Human Being in 100,000 Dimensions


Imagine an approach to human resources in which a human being is perceived in 100,000 dimensions. This approach would replace the flawed traditional HR systems in place today, which consider only a handful of criteria. When a human being gives us consent to use their data, HR organizations can use something called an ontology, a knowledge graph that captures soft skills, hard skills, and certifications, with explainable proof behind each one. With this approach, we can learn that the computer operator may have a degree in information systems, or that the customer service agent working two jobs simply hasn't had the time because she is finishing her PhD in physics and would be a great candidate for the quantum computing field. Today, HR organizations look at people through only a few limited attributes. With ontologies, when we can see someone in 100,000 dimensions, we are no longer trying to find needles in haystacks, because the haystacks are full of needles!


Intro to Ontologies

It is important for organizations to have a framework for organizing information. The right framework can provide a competitive advantage by changing the way you envision your business. You can differentiate your company by anticipating what your customers will need and by doing some of the heavy lifting for them. However, as organizations scale, business needs evolve, and technologies continue to advance, organizing information becomes increasingly difficult. Lacking a suitable framework in this setting can mean falling behind as you lose out on the many possible insights and opportunities driven by new information.

Ever hear the saying "there is no AI without IA"? IA refers to Information Architecture, another term for ontology. An ontology describes a domain of knowledge and the relationships among the concepts within that domain. In the most basic sense, an ontology is a graphical, machine-interpretable representation of facts that can represent data, processes, models, meta-models, and even other ontologies (see image below). This is why ontologies are also referred to as knowledge graphs. Importantly, they are understandable and practical tools for capturing how an organization uses data, what the data mean, and how they relate to the business.

An Ontology is a Graph that Links Explicit Facts

According to Seth Earley, the author of The AI-Powered Enterprise, "the ontology is the master knowledge scaffolding of the organization." It is not a single, static thing; it is never complete, and it changes as the organization changes. Ontologies begin as a holistic understanding of the language of the business and its customers, and are then designed into processes, applications, navigational structures, content, data models, and the relationships between your domain concepts. Ontologies are built from taxonomies: clearly defined hierarchical structures for categorizing information. New taxonomies can be built as needed and incorporated into the framework, providing a logical structure that scales with your business.
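To make both ideas concrete, here is a minimal sketch, in plain Python, of an ontology as a set of explicit facts, with an "is_a" relation providing the taxonomy's hierarchical backbone. Every concept and relation name below is an illustrative assumption, not taken from any production system.

```python
# A minimal sketch of an ontology as explicit (subject, relation, object) facts.
# The "is_a" relation forms the taxonomy (hierarchy); the other relations link
# concepts across the hierarchy. All names are illustrative assumptions.

TRIPLES = [
    ("physics PhD", "is_a", "doctoral degree"),
    ("doctoral degree", "is_a", "credential"),
    ("physics PhD", "develops_skill", "quantum mechanics"),
    ("quantum mechanics", "is_core_skill_for", "quantum computing roles"),
    ("customer service agent", "develops_skill", "client communication"),
]

def facts_about(concept: str) -> list:
    """Return every (relation, object) pair asserted for a concept."""
    return [(rel, obj) for subj, rel, obj in TRIPLES if subj == concept]

for rel, obj in facts_about("physics PhD"):
    print(f"physics PhD --{rel}--> {obj}")
# physics PhD --is_a--> doctoral degree
# physics PhD --develops_skill--> quantum mechanics
```

Because every fact is an explicit, named link rather than a weight buried inside a model, a new taxonomy can be added by simply asserting more triples.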

Developing an effective ontology has many benefits. It can buffer the impact when a key individual leaves the company, or help you scale up to reach more customers. An ontology is also a powerful tool for AI because it not only provides deep business insight and value, but does so in a way that can be trusted.

What is AI Ethics?

AI will increasingly be embedded in the processes, systems, products and services by which business and society function—all of which will and should remain within human control. Most importantly, these systems are not simply programmed; they learn from their own experiences, their interactions with humans, and the outcomes of their judgments.

Awareness of the potential impact of AI systems and their decisions, once confined to a niche domain amongst programmers and engineers, has become much more common as examples of problematic real-world deployments of AI models have accumulated. It has therefore become not only prudent but pragmatic to establish principles to guide what is developed and how it is brought to the world.

IBM’s Principles for Trust and Transparency

  1. The purpose of AI is to augment human intelligence.

  2. Data and insights belong to their creator.

  3. New technology, including AI systems, must be transparent and explainable.

The challenge is that many organizations think there is an EASY button to press when it comes to implementing trustworthy AI models. This could not be further from the truth. Earning trust in AI is a socio-technological challenge that must be addressed holistically.


Watch this 6-minute video from co-author Phaedra Boinodiris, practice leader for Trustworthy AI at IBM, as she explains why a holistic approach is required to earn people's trust in an AI model.

As referenced in the video above, IBM recommends 5 pillars to consider when thinking about earning trust.


The 5 Pillars

This article specifically addresses the second pillar: explainability. (IBM's five pillars are fairness, explainability, robustness, transparency, and privacy.) Explaining to an end user how an AI model reached a decision, in a way the user truly understands, is not a simple task. From the start, a data scientist must treat explainability as a goal, building, training, and testing models with an eye toward achieving both the desired accuracy and explainability, never pursuing performance for its own sake. By linking and contextualizing an organization's data explicitly, ontologies are the secret weapon for achieving explainability when building trustworthy AI.

AI: The Common Way

Oftentimes, models are trained on keywords that are manually processed with basic natural language understanding and natural language processing tools. The problem with this scenario is that these systems are typically trained to search for exact matches of predefined keywords, which are thereby stripped of their context. Further, they might not consider words that are similar or semantically adjacent. Finally, they may not have the ability to appropriately weigh different sources of data. Exact matching in systems that strip out context can lead to unintelligible or perverse results.
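The failure mode is easy to demonstrate. The toy snippet below, an illustration rather than any vendor's actual pipeline, shows how exact keyword matching scores a strong candidate as a miss because her record uses adjacent phrasing instead of the predefined terms:

```python
# A toy illustration of the exact-match pitfall: the resume clearly evidences
# both skills, but neither predefined keyword appears verbatim.

resume = "Built predictive models in Python; completed doctoral research in physics."
keywords = ["machine learning", "PhD"]

# Exact substring matching finds neither phrase, even though "predictive
# models" and "doctoral research" are close semantic matches.
hits = [kw for kw in keywords if kw.lower() in resume.lower()]
print(hits)  # [] -> the candidate is scored as a poor match
```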

Some algorithms used in the market today also lack explainability in their outcomes. People are less likely to trust, or even simply accept, the decisions of an AI system when the explanations of how its determinations were made are hidden or lacking.

Consider an example of an AI model that helps companies search for skilled candidates: the recruiter gets a list of 45 candidates tagged as "good" matches to the requirements. What makes a candidate "good"? A "good" candidate in these systems might be arbitrarily defined with a numerical score or an ordinal classification, but how that result is derived is often unclear to the business user. Is the outcome biased? Is the weighting method one-size-fits-all? Unwinding a complex model can prove impossible, making it hard to know whether it is truly identifying the best candidates.

Finally, AI is only as good as the data you feed it. When data is missing, a responsible system should alert the user that no determination can be made about the individual and explain why. Instead, many systems simply assign a low confidence score because they did not find the keywords in the person's data footprint, with no indication that there was no footprint at all. Organizations that prioritize trust among employees and clients must be very careful about how these kinds of systems are used, or they could sow distrust.
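Here is a sketch of that distinction, with hypothetical function and field names: "no data" should surface as its own outcome, never folded into a low score.

```python
# A sketch of separating "no evidence in the footprint" from "no footprint".
# The function signature and messages are hypothetical.

from typing import Optional

def assess(footprint: Optional[str], keyword: str) -> str:
    if footprint is None:
        # Responsible behavior: report that no determination is possible, and why.
        return "NO DETERMINATION: no data footprint exists for this candidate"
    if keyword.lower() in footprint.lower():
        return "MATCH"
    return "LOW CONFIDENCE: keyword not found in the available footprint"

print(assess(None, "quantum"))               # flagged, not silently scored low
print(assess("BSc in physics", "quantum"))   # a genuinely low-confidence case
```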

Finding Haystacks of Needles (vs Needles in a Haystack)

Using an ontology offers a better way to train a natural language system. A system that uses an ontology gathers keywords for you automatically, keywords that represent a category or an idea, and gives you a breadth of items to search for by understanding those keywords in context. A Subject Matter Expert (SME) should then review the ontology to make sure the associations it creates are accurate for the use case at hand.

We should prefer a system that relies heavily on data lineage and provenance to ensure that results are fully explainable. Returning to the same example, an organization is seeking personnel proficient in the skills and competencies needed to perform well in a particular job role. The SME who knows these skills and competencies searches on those words to find the right candidates. When she sees the output, she can also see which data source was used to frame the ontological output. The result is not just a numerical "score," but the determination along with the contextualized evidence behind it, drawn from the original data.
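A minimal sketch of what such provenance-aware output might look like; the data model, names, and sources below are assumptions for illustration only:

```python
# Each fact carries the evidence and source it was drawn from, so every
# determination can show its supporting data. All records are hypothetical.

FACTS = [
    {"person": "A. Rivera", "skill": "quantum mechanics",
     "evidence": "PhD dissertation, Dept. of Physics", "source": "transcript feed"},
    {"person": "A. Rivera", "skill": "client communication",
     "evidence": "five years in customer service", "source": "internal HR record"},
]

def explain_matches(skill: str) -> None:
    """Print each matching person with the evidence and source behind the match."""
    for fact in FACTS:
        if fact["skill"] == skill:
            print(f"{fact['person']}: {skill}")
            print(f"  evidence: {fact['evidence']} (source: {fact['source']})")

explain_matches("quantum mechanics")
# A. Rivera: quantum mechanics
#   evidence: PhD dissertation, Dept. of Physics (source: transcript feed)
```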



AI that leverages an ontology can also be used to help organizations find skills and competencies in areas otherwise unfamiliar to them. For example, today we can train an AI model to ingest data from numerous sources in a new field and auto-generate a proposed ontology to represent skills and competencies that make up key roles in that field. The depth of the ontology's conceptual understanding and relationships can often uncover second- and third-order linkages as well, resulting in determinations that have much greater depth than narrow models leveraging more basic processing features like keywords.
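One way to picture those second- and third-order linkages is as a breadth-first walk over the ontology's relationship edges, as in the sketch below (the skill graph is an invented example):

```python
# A sketch of surfacing second- and third-order linkages by breadth-first
# traversal over relationship edges. The skill graph is invented.

from collections import deque

EDGES = {
    "data analysis": ["statistics", "SQL"],
    "statistics": ["experiment design"],
    "experiment design": ["A/B testing"],
}

def linkages(start: str, max_hops: int = 3) -> dict:
    """Map every concept reachable from `start` to its hop distance."""
    seen, queue = {start: 0}, deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] >= max_hops:
            continue
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return seen

print(linkages("data analysis"))
# {'data analysis': 0, 'statistics': 1, 'SQL': 1,
#  'experiment design': 2, 'A/B testing': 3}
```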

How to Pull This Off, No Matter Where You Are on the AI Maturity Scale

To do this right and at scale, an organization needs to start with a vision of where it wants to be. This will require the courage to admit that many of today's HR business processes are dull and limited. It needs to staff multi-disciplinary teams in the truest sense of the word. These teams will require psychological safety in order to create, and they must ensure that the end user truly has the power both to grant and to withdraw consent. End users must be empowered through the use of their data, and there must be safeguards to ensure that the data will never be used without their consent or in ways that would disadvantage them. Again, earning trust in AI is hard work, because it is a socio-technological challenge. There are no easy buttons to push here.

Conclusion

Society at large demands much more from those who are building AI models today. Those who design and develop AI models are accountable for the outcomes. Earning people's trust in AI requires a holistic approach built on a set of best practices from the start. This holistic effort spans people, processes, and tools: intentional design; diverse and inclusive teams of data scientists; frameworks like ontologies, which are by their very nature explainable; assessment tools; and proper governance. AI has long been recognized for its potential to help businesses scale, become more efficient, and provide new insights. Building and maintaining an ontology will ensure you can adapt to rapidly changing ecosystems and continue to develop and deploy robust AI that provides these benefits while mitigating the myriad risks of “AI gone wrong.”


Phaedra Boinodiris, IBM, @innov8game
A fellow with the London-based Royal Society of Arts, Boinodiris has focused on inclusion in technology since 1999. She is currently the business transformation leader for IBM’s Trustworthy AI consulting group and serves on the leadership team of IBM’s Academy of Technology. Boinodiris, co-founder of WomenGamers.com, is pursuing her Ph.D. in AI and Ethics at University College Dublin’s Smart Lab. In 2019, she won the United Nations Woman of Influence in STEM and Inclusivity Award and was recognized by Women in Games International as one of the Top 100 Women in the Games Industry.

Daniel Schnelbach
Daniel is a Senior Consultant in IBM's U.S. Federal Cognitive and Analytics practice.

Brian Gillikin
Brian Gillikin is a Technical Lead and Data Scientist at IBM. Previously, he worked on data analytics projects with the OECD, CSIS, and USAID. He holds master’s degrees from Carnegie Mellon and Tallinn University of Technology.

Beth Rudden, IBM, @ibethrudden
Beth Rudden transforms people and companies through applied AI and the ethical, empowering use of data. She leads large, geographically dispersed advanced analytics and AI teams that develop cognitive solutions delivering outcomes for IBM’s clients. Beth holds patents on solutions that provide more precise insights, better customer understanding, and faster implementation. Her background in anthropology, language, and data science also helps her develop models that transform the human experience. She currently leads AI at Scale, a consult-to-operate model offering trusted AI that delivers business outcomes.

Notes
Earley, Seth. The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. Canada: LifeTree Media, 2020.