Building AI that Works: Domain Knowledge and the Impact on Team Building

February 28, 2019


Artificial Intelligence (AI) is undoubtedly one of the main enablers of the digital transformation of modern enterprises. It is used as a vehicle for increased automation of business processes and as a means of optimizing enterprise decisions. Building AI that works means its applications span almost all sectors of the economy: from finance, cybersecurity, IoT data and devices, transportation, medical devices and healthcare, and upcoming 5G mobile networks, to industrial applications in areas like manufacturing, automation, logistics, oil & gas and smart energy. Moreover, AI is embodied in diverse systems such as robots, drones, autonomous guided vehicles, smart wearables and intelligent cyber-physical systems, as well as in a wide range of software-based systems such as chatbots.

The surge of interest in AI is largely due to recent advances in computing and storage. While the main principles of building AI systems have been around for over two decades, these technological advances facilitate the development of AI systems, as they enable the management of large datasets and speed up the execution of complex computations, especially in the cloud.


In this context, it is now easier to build advanced deep learning systems that feature human-like reasoning, such as Google’s AI engine that has repeatedly beaten human grandmasters at the game of Go. At the same time, advances in smart sensors and cyber-physical systems facilitate real-world data collection and the embodiment of AI agents in smart objects.

Despite these fantastic advances, it is still challenging to build AI that works for real-life business problems in pragmatic settings. One of the main challenges relates to the need to overcome bias issues when developing machine learning and deep learning agents.

Machine & Deep Learning Bias in Building AI that Works

The development of machine learning agents is typically based on training datasets, which are used to train software programs to learn from past observations. In many cases this is not much different from the way humans learn, as human learning is usually based on experience and observation. However, this highlights the importance of a proper training dataset: without representative data, computers cannot be trained to handle real-life problems.

Likewise, computers have various limitations in representing and expressing the knowledge they acquire, as this representation is based on programming languages and logical structures that are far less expressive than the human mind. These limitations lead to the “bias” problem, which is one of the main setbacks to building AI that works for practical problems in real-life settings.

It's probably no surprise that AI programs suffer from bias in their reasoning: humans are also susceptible to different forms of bias, such as their faith in placebo effects, or their bias towards outcomes aligned with their ideological preferences. In the case of AI, the three main factors that lead to bias are:

  • Language bias: This refers to the language used to express the AI knowledge. When this language is not universal, the AI program cannot express how a machine-learning agent should act in every situation. In other words, there may be cases where the AI cannot make the right decision by applying the knowledge available in the training dataset. Rulesets, decision trees and neural networks do not always offer the expressiveness needed.

  • Search bias: In several cases, the way training data examples are searched and/or the way rules are applied plays an important role in selecting the final decision. Consider, for example, an AI agent for chess or Go: if multiple moves qualify as the next best one, the order in which they are evaluated can play a decisive role in the performance of the AI program and ultimately in the evolution of the game. Hence, the rules by which alternative options are selected or excluded significantly affect the end results. Similarly, the way the training data are traversed also matters for AI-based decision-making.

  • Data overfitting bias: In several cases, machine learning and AI agents are overly optimized to yield top performance (e.g., minimum classification error rates) on the training dataset. This leads to detailed, highly specific models that perform well on the training data, yet exhibit poor or sub-optimal performance when applied to the datasets of the problems that actually need to be solved, which differ from the training data. This is why AI architects and data scientists tend to stop at simpler, more general knowledge descriptions rather than arriving at complex “overfitted” ones. Alternatively, it is common for AI experts to simplify complex knowledge representations as a means of avoiding overfitting to the deep learning training datasets.
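The overfitting bias above can be made concrete with a minimal, self-contained sketch (the data and both models are hypothetical, chosen only for illustration): a model that memorizes every noisy training example scores perfectly on the training set but degrades on fresh data, while a simpler, more general rule holds up better.

```python
import random

random.seed(0)

# Toy 1D classification task: the true label is 1 when x > 0.5,
# but 20% of the labels are flipped (label noise).
def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x > 0.5 else 0
        if random.random() < 0.2:  # inject label noise
            y = 1 - y
        data.append((x, y))
    return data

train, test = make_data(50), make_data(1000)

# "Overfitted" model: memorize every training point and predict the
# label of the nearest memorized x (a 1-nearest-neighbour memorizer).
def memorizer(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Simple, general model: a single threshold rule.
def threshold_rule(x):
    return 1 if x > 0.5 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print("memorizer  train/test:", accuracy(memorizer, train), accuracy(memorizer, test))
print("threshold  train/test:", accuracy(threshold_rule, train), accuracy(threshold_rule, test))
```

The memorizer achieves perfect training accuracy because it reproduces the noise verbatim, yet the plain threshold rule generalizes better to unseen data, which is exactly the trade-off that pushes practitioners towards simpler knowledge descriptions.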


Overcoming Bias: The Role of Domain Knowledge in Building AI that Works

One of the main ways of overcoming these bias challenges is to take advantage of domain knowledge, i.e., knowledge that is typically provided by experts in the problem domain. Such experts can affirm or reject knowledge representations produced during the training process. In particular, domain experts alleviate bias problems through:

  • Excluding some knowledge descriptions (e.g., rules) found in the data as seasonal or not applicable, which is one way of overcoming data overfitting and search bias.

  • Setting priorities for applying the discovered knowledge, taking into account the importance and frequency of some knowledge patterns and prioritizing them over others. This can help overcome search bias.

  • Detecting rules that are overly specific and not applicable, as a means of relaxing data overfitting bias.

  • Identifying and expressing patterns of knowledge that hold in the specific domain, even though they were not directly found in the data. This can relax language bias.
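The expert interventions above can be sketched in a few lines of Python. The mined rules, expert notes and priorities below are entirely hypothetical, standing in for output of a real rule-mining step and a structured expert review:

```python
# Hypothetical rules mined from retail data, each with a confidence
# computed from the training set and a note from a domain expert.
mined_rules = [
    {"rule": "buys(gloves) -> buys(scarf)",   "confidence": 0.91, "expert_note": "ok"},
    {"rule": "buys(sunscreen) -> buys(skis)", "confidence": 0.88, "expert_note": "seasonal"},      # winter-resort artefact
    {"rule": "age=37,zip=10001 -> churns",    "confidence": 0.99, "expert_note": "too_specific"},  # overfitted
    {"rule": "buys(tent) -> buys(stove)",     "confidence": 0.74, "expert_note": "ok"},
]

# Step 1: exclude rules flagged as seasonal or overly specific
# (relaxes data overfitting and search bias).
kept = [r for r in mined_rules if r["expert_note"] == "ok"]

# Step 2: expert-assigned priorities decide the order in which rules
# are applied (addresses search bias); here the expert boosts the
# camping rule despite its lower statistical confidence.
priority = {"buys(tent) -> buys(stove)": 2, "buys(gloves) -> buys(scarf)": 1}
kept.sort(key=lambda r: priority.get(r["rule"], 0), reverse=True)

# Step 3: the expert injects a rule that holds in the domain but never
# appeared in the data (relaxes language bias).
kept.append({"rule": "buys(liquor) -> require(age>=21)", "confidence": 1.0, "expert_note": "regulatory"})

for r in kept:
    print(r["rule"])
```

In practice the notes and priorities would come from review sessions with the domain experts rather than being hard-coded, but the filter–reorder–extend workflow is the same.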

It is no accident that all mainstream methodologies for machine learning, deep learning, data mining and data science (e.g., the CRISP-DM (Cross Industry Standard Process for Data Mining) and KDD (Knowledge Discovery in Databases) methods) include distinct phases and activities for understanding the available datasets. Among other things, these activities help AI developers and data scientists spot problems in the training datasets that could lead to search or data overfitting bias.

Furthermore, these methodologies foresee the comparative evaluation of different models on a variety of test datasets that are typically different from the training datasets. This evaluation is key to understanding bias factors and taking action to alleviate them.
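Such a comparative hold-out evaluation can be sketched minimally as follows (the synthetic dataset and both candidate models are hypothetical, not part of any specific methodology):

```python
import random

random.seed(1)

# Synthetic task: the true label is 1 when x1 + x2 > 1, with 10% label noise.
def make_point():
    x1, x2 = random.random(), random.random()
    y = 1 if x1 + x2 > 1 else 0
    if random.random() < 0.1:
        y = 1 - y
    return (x1, x2, y)

dataset = [make_point() for _ in range(1000)]
train, test = dataset[:700], dataset[700:]  # hold out 30% for evaluation

def model_sum(x1, x2):      # candidate 1: uses both features
    return 1 if x1 + x2 > 1 else 0

def model_x1_only(x1, x2):  # candidate 2: ignores one feature
    return 1 if x1 > 0.5 else 0

def accuracy(model, rows):
    return sum(model(x1, x2) == y for x1, x2, y in rows) / len(rows)

# Compare the candidates on held-out data only, never on the training split.
scores = {m.__name__: accuracy(m, test) for m in (model_sum, model_x1_only)}
best = max(scores, key=scores.get)
print(scores, "->", best)
```

Evaluating both candidates on the same held-out split, rather than on the data they were tuned against, is what surfaces the weaker model's bias; CRISP-DM and KDD formalize this step rather than prescribe particular code.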

Recruitment for AI Architects, Developers and Data Science Team Building

The importance of domain knowledge mandates the inclusion of business experts in AI R&D, AI application development and data science teams. This is yet another factor that makes the formation of a competent and appropriate team challenging, as the domain knowledge requirement comes on top of the need to overcome the proclaimed talent gap in machine learning, deep learning, statistics and relevant IT technologies.

Moreover, it increases the multi-disciplinary and inter-disciplinary nature of the AI development team. In non-trivial projects, the AI team should include expertise in the following areas:

  • Database infrastructures, including knowledge of the different types of databases and datastores, such as SQL databases, NoSQL databases, cloud-based datastores (e.g., Microsoft Azure, AWS) and data lakes (e.g., Hadoop).

  • IT and software systems, including expertise in data management systems and programming languages for data-intensive applications such as R, Python, Java and Julia.

  • Machine learning, deep learning and statistics, which are prerequisites for data mining and knowledge extraction from massive datasets.

  • Data visualization, including individuals with knowledge of visualization charts that go beyond conventional bar and pie charts, such as box & whisker plots, tag and word clouds, river charts, donut charts and other forms of visual knowledge representation suitable for Big Data.

  • Domain experts, i.e., experts in the business domain at hand.

Sometimes individuals may excel in more than one of the above areas. Nevertheless, in most cases the expertise listed above maps to distinct roles in the AI team. In several cases, teams also need to bring together more than one individual in each of the identified expertise areas (e.g., AI R&D architects, mobile app developers, data scientists, product managers, multiple programmers and statisticians). Furthermore, AI is all about collaboration between the various team members, who must therefore be good team players as well. For all these reasons, building AI that works is largely about being effective in recruiting and building the proper Artificial Intelligence team.