CogWorld Member ~ Veloxiti and Cognitive Solutions Alliance | December 17, 2018
The discipline of artificial intelligence is valuable to business because computers that act more like humans will work better in our chaotic, unpredictable world. The study of the human mind, both philosophical and functional, is at the root of artificial intelligence, but demands for return on investment and protectionist workforce hype make choosing the right technique for the right problem a serious challenge. Consider that academia has created the vast body of research and information on artificial intelligence techniques, yet because of an industry pull-back known as the AI winter of the late 1980s, enterprise decision makers remain in the dark about two fundamental differences that define how these systems deliver business value. Let’s start with a quick overview to remind readers of both ends of the AI spectrum.
Implicit (tacit), or machine-learned, models are created statistically and typically infer frequency (occurrence) and relationships (matching) from data. Algorithmic processes that use statistical methods to find connections between images or text often produce relational graphs, data graphs, semantic graphs, or trend charts. The argument for machine learning techniques, and all other “learning” varieties, hinges on the discovery of connections within unknown or massive data sets. This has been valuable because the explosion of data creates chaos. Let’s look at a cognitive example: image recognition.
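As a toy sketch of how an implicit model surfaces relationships from data, the snippet below (the terms and the frequency threshold are hypothetical) counts co-occurrences across documents and keeps the frequent pairs as edges of a relational graph:

```python
from collections import Counter
from itertools import combinations

# Toy documents; a real system would ingest massive corpora.
docs = [
    ["invoice", "payment", "vendor"],
    ["invoice", "payment", "late"],
    ["vendor", "contract"],
]

# Count how often each pair of terms occurs together
# (frequency/occurrence and relationships/matching).
cooccur = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc)), 2):
        cooccur[(a, b)] += 1

# Keep pairs seen more than once as edges of an implicit relational graph.
graph = {pair: n for pair, n in cooccur.items() if n > 1}
print(graph)  # {('invoice', 'payment'): 2}
```

Note that the resulting edges fall out of the statistics of the data; no one decided in advance that invoices relate to payments.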
Image recognition is the ability to search for and match an image to a known archetype model that represents an object. You can find people in images by comparing a statistically learned model (the archetype) against the images in your dataset, extracting, say, one person from thousands of pictures. The statistical algorithm that created the archetype model, in this example called Bob, cannot show how the model was developed or how the system weighed evidence to conclude that this is the right Bob. Essentially, the image recognition system is an answer to a query with no way to recover its formula. The formula for how a system arrives at an answer is a critical attribute on the way to understanding artificial intelligence, often marketed as explainable AI.
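A minimal sketch of that matching step, assuming the learned “Bob” archetype is just an opaque vector of weights (all numbers and names here are made up): the system ranks dataset images by similarity and returns the best match, with nothing in the weights to explain why it is Bob.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical learned "Bob" model: opaque weights, no human-readable meaning.
bob_model = [0.9, 0.1, 0.8]

# Hypothetical feature vectors extracted from images in the dataset.
images = {
    "img_001": [0.88, 0.12, 0.79],
    "img_002": [0.10, 0.95, 0.05],
}

# The system returns the best-scoring image, but cannot say *why* it is Bob.
best = max(images, key=lambda name: cosine(bob_model, images[name]))
print(best)  # img_001
```

The answer comes back, but the formula behind it is buried in the weights — exactly the missing-explanation problem described above.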
One way to overcome the missing-formula problem is an explicit (declarative) graph. The format for this model is usually a highly structured directed graph, derived from what people observe and understand about a problem. Returning to the Bob (image recognition) example, a declarative graph is, by contrast, engineered by humans into a robust, provable archetype model that illustrates how people weigh facial features to recognize Bob. Explicit graphs can hold numerous attributes, linkages, and classes that represent ways of thinking, or physical tasks, used to locate Bob.
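A declarative graph can be sketched as explicit, human-readable checks chained into a directed graph. The feature names and rules below are hypothetical, but the point stands: each node can be read, audited, and corrected by a person.

```python
# Hypothetical hand-engineered checks for recognizing Bob.
# Each node is explicit: a human can read, audit, and correct it.
bob_graph = {
    "face_detected": {"check": lambda f: f["is_face"],         "next": "eye_color"},
    "eye_color":     {"check": lambda f: f["eyes"] == "brown", "next": "scar"},
    "scar":          {"check": lambda f: f["left_cheek_scar"], "next": None},
}

def recognize(features):
    """Walk the declarative graph; return the verdict and the audit trail."""
    trail, node = [], "face_detected"
    while node is not None:
        if not bob_graph[node]["check"](features):
            return False, trail
        trail.append(node)
        node = bob_graph[node]["next"]
    return True, trail

ok, trail = recognize({"is_face": True, "eyes": "brown", "left_cheek_scar": True})
print(ok, trail)  # True ['face_detected', 'eye_color', 'scar']
```

Unlike the statistical match, a failed recognition here tells you exactly which check stopped it.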
To summarize the difference between these two fundamental concepts: implicit model creation relies on an algorithm that is supposed to “learn” something, whereas explicit models are manually “engineered” to represent the problem space. The two techniques are complete opposites in the tools they require, their workflow best practices, and their talent pools. People who study machine learning rarely know anything about engineering an explicit graph.
The science of building a declarative, often directed, graph is called knowledge engineering. It is hard work, and it takes specialized tools to earn a financial return on the effort. In the end, an engineered graph-based system that can be audited (i.e., trusted) can explain how it weighed the evidence to find the right image. If your AI plans matter to the business or to your people, auditable graphs are the more valuable product, conforming to regulatory, legal, or teaching practices.
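To illustrate what auditability buys, here is a hypothetical evidence chain (the step names and engine messages are invented) rendered as a report that a regulator, lawyer, or trainee could actually read:

```python
# Hypothetical audit log entries produced by a graph engine run.
audit_log = [
    ("face_detected", True, "detector score 0.97"),
    ("eye_color",     True, "matched rule eyes == 'brown'"),
    ("scar",          True, "left_cheek_scar present"),
]

def explain(log):
    """Render the evidence chain as a human-readable report."""
    lines = [f"{'PASS' if ok else 'FAIL'} {step}: {why}" for step, ok, why in log]
    return "\n".join(lines)

print(explain(audit_log))
```

A statistical matcher can print a score; only an engineered graph can print the chain of reasoning behind it.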
Engineering a graph from existing human knowledge has been practiced for decades, but only a few organizations have invested the decades of research and evaluation needed to understand the power of a knowledge graph. Setting aside the obvious philosophical discussions, knowledge exists in your mind or is organized in sentences, paragraphs, or images. A powerful approach to knowledge engineering is a graph that contains employee plans and the business goals those plans support. This plan-and-goal structure (model) was first described by Schank and Abelson in their 1977 book, “Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures”.
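A plan-and-goal structure in this spirit can be sketched as a small graph linking business goals to the employee plans that support them. All goal, plan, and owner names below are hypothetical:

```python
# Hypothetical business goals, each linked to the plans that support it.
goals = {
    "reduce_churn": {"plans": ["call_at_risk_accounts", "offer_loyalty_discount"]},
    "grow_revenue": {"plans": ["upsell_premium_tier"]},
}

# Hypothetical employee plans, each with an owner and concrete steps.
plans = {
    "call_at_risk_accounts":  {"owner": "support", "steps": ["rank accounts", "schedule calls"]},
    "offer_loyalty_discount": {"owner": "sales",   "steps": ["define tiers", "notify customers"]},
    "upsell_premium_tier":    {"owner": "sales",   "steps": ["identify candidates", "pitch"]},
}

def plans_for(goal):
    """Traverse the graph from a goal to the plans (and owners) that support it."""
    return [(p, plans[p]["owner"]) for p in goals[goal]["plans"]]

print(plans_for("reduce_churn"))
# [('call_at_risk_accounts', 'support'), ('offer_loyalty_discount', 'sales')]
```

Because every goal-to-plan link is explicit, the same traversal that answers “what supports this goal?” also documents who is responsible for it.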
Structured knowledge graphs are so important to the discipline of artificial intelligence that companies like Google have recently begun reinvestigating a variety of graph formats and the mathematics behind them, promising to take society into the next wave of stronger AI. A well-structured graph containing valuable expert knowledge, combined with a powerful graph engine, can work in step with a workforce or an individual employee to assess large-scale, complex, rapidly changing domains like a global supply chain, a satellite constellation, or global computer networks.
More on this to come.