The Behavior of Everything
Capturing and Executing the Behavior of Everything
By Helena Keeley and Tom Keeley | Compsim Member Article | March 26, 2018
Interest in the “behavior” of systems (machines, humans, and organizations) operating and interacting in the world today is driving change in almost every industry and function. The ability to easily articulate “complex behaviors” in a mathematically explicit manner will be necessary to accelerate and respond to that dynamic. Auditable and traceable systems will also be invaluable when capturing expert behaviors for mass-produced delivery of advanced capabilities, and for studying how intelligent systems respond to the decisions and actions of other entities. KEEL Technology makes the costly and time-consuming processes of capturing, testing, packaging, auditing, and explaining complex behaviors far easier and faster through the use of the KEEL Dynamic Graphical Language. The resulting KEEL Engines are small, high-performance, and platform- and architecture-independent, so they can be deployed almost anywhere, satisfying the needs of manufacturers interested in managing recurring costs and avoiding dependence on any particular platform. KEEL Technology supports the need to provide “explainable AI,” as it will be very important to know that these systems are “behaving appropriately.” And when these intelligent systems need to be “fixed” or “expanded,” KEEL Technology makes that far easier than previous approaches did.
Consider this statement: everything exhibits behavior. Humans, machines, plants, and inanimate objects such as particles and rocks all exhibit behaviors. Wouldn’t it be useful in the world of autonomy to own (and control) behavior? And if we can control behavior, we can use it to define destiny.
The Behavior of Humans
The behavior of humans is a more complex subject. Perhaps this is due to the number of influencing factors that can cause humans to exhibit different behaviors. Perhaps it is due to the flexibility humans have in their senses: their ability to accumulate and exchange information, their ability to move and exert force, and their ability to create, accumulate, and use tools. In all cases, however, the behaviors (active or potential) are driven by influencing factors.
The most basic driver for humans (and intelligent machines) is one of survival. Here the interest is in recognizing threats or potential threats, or recognizing threatening behaviors. On the other hand, humans (and intelligent machines) are interested in controlling their environment. This means that they want to control behaviors of other entities.
The Behavior of Objects
When a person studies physics, they study the behavior of matter and energy. Yet we sometimes look at inanimate objects without considering their behaviors. However, if you are hit by a rock, or crash your car to avoid a rock, or happen to find a rock in your food, the topic might stimulate more interest. Why, and what should be done about it?
One could say that the rock contains the potential for behavior if you consider that force is used to influence its movement. Gravity could be the force that causes a rock to dislodge itself from a mountain and fall to another surface, where it will eventually stabilize and stop rolling. The force vectors influence the behavior of the rock. Even if the force is insufficient to move the rock, the rock is still participating in the universe. If the rock blocks your way, it is threatening your objective. The rock is balancing the influencing factors to cause it to move, against the influencing factors that cause it to remain stationary. When humans learned that rocks could be used as weapons, they learned that they could influence the behavior of the rocks by directing their own force to propel rocks towards their enemies. The rock would integrate the force from the human, the force from gravity, and the force from air resistance. It would then navigate through space until it encounters a target. Then a new force of resistance would be added to the picture. Eventually the rock would stabilize at the target location. One could deduce, in this case, that the behavior of the rock is dictated by influencing factors.
If we translated the behavior of the rock into an "intelligent rock," we might say the rock would decide how to navigate through space until it encounters a target, at which time a new force of resistance would be added to the picture. Eventually the rock would decide how to stabilize at the target location by balancing the reasons to move against the reasons not to move. We could state that physics and intelligent behavior have somewhat similar responses to influencing factors.
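The force-integration story above can be sketched in a few lines of code. This is purely illustrative; the function name, constants, and the simple linear drag model are assumptions for the sketch, not something from the article. The rock's "behavior" is nothing more than the running integration of its influencing forces: an initial throw, gravity, and air resistance, until it stabilizes at ground level.

```python
# Illustrative sketch: a rock's "behavior" as the integration of its
# influencing forces (throw, gravity, air resistance).

def simulate_rock(vx, vy, mass=1.0, drag=0.1, g=9.81, dt=0.01):
    """Euler-integrate the rock's flight; return the range where it lands."""
    x, y = 0.0, 0.0
    while True:
        # Influencing factors: gravity pulls down, drag opposes velocity.
        fx = -drag * vx
        fy = -mass * g - drag * vy
        vx += (fx / mass) * dt
        vy += (fy / mass) * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0:          # the rock "stabilizes" at the target surface
            return x

range_with_drag = simulate_rock(10.0, 10.0)
range_no_drag = simulate_rock(10.0, 10.0, drag=0.0)
assert 0 < range_with_drag < range_no_drag  # air resistance shortens the flight
```

The same loop, with the decision logic of a later section swapped in for the force sums, is the "intelligent rock" analogy: influencing factors in, behavior out.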
Perhaps a particular rock is initially of no interest to you. If that rock suddenly becomes of interest (because of its location, its size, or its motion), it may cause you to re-evaluate your own situation and adapt.
Thinking about the rock example above, the important thing to consider is that information (influencing factors) is valued. Each piece of valued information contributes directly or indirectly (through intermediary abstractions) to a decision or action. This applies to humans and to objects.
As more services are transferred to machines, machines will also be interested in understanding the behaviors of objects within their universe.
The topic of “Predict and Prevent” (Chris Trainor, “The Intelligence Data Problem Has Changed”) has been applied to safety and security concerns for years. The more accurately you can predict behaviors, the more opportunity you have to influence those behaviors.
Predicting behaviors is an extension of understanding behaviors: projecting future events based on an understanding of influencing factors (capabilities, opportunities, and intent). The topic is also of interest to planners who are challenged with prioritizing decisions and actions to influence behaviors and to respond to the behaviors of other entities. Predicting behaviors that have not yet happened is a challenging task, as unexpected things will still likely happen. These are often called “black swan events” (Nassim Nicholas Taleb, “The Black Swan,” 2007): bad things happen, and a high-profile investigation follows to assign responsibility. This suggests that there is significant value in providing a detailed explanation behind predictions.
Once humans began to understand the behaviors of things, systems, and other humans, they started becoming interested in influencing those behaviors. In this case we are differentiating “influencing” from “controlling.” When we talk about “influencing behaviors” we are recognizing that we only have partial control. We are recognizing that the behavior of the target entity may be driven by many influencing factors. One might suggest that more complex objects have more behaviors that are subject to influence.

There is certainly interest in influencing the behavior of other humans. Whether this is for advertising, team building, environmental, military, safety and security, or political reasons, people will want to influence the behavior of other humans. The task of influencing the behavior of other humans has resided with sociologists, psychologists, politicians, ethicists, teachers, parents, and social engineers. This is done by using games, stories, writings, training material, media and, more recently, social media. Since the introduction of reproducible media, humans have demonstrated interest in automating processes for influencing larger populations of humans.

As we transfer more responsibilities to machines, those machines will be interested in influencing the behavior of humans, other objects, tools, and other machines. As machines take on broader responsibilities, there will be more interest in influencing the behaviors of these machines for both positive and adversarial purposes. Recent interest in machine learning suggests a transfer of responsibility from humans toward intelligent machines. This exposes a potential new threat as interested parties look at new ways to influence the behavior of machines whose behaviors can be mass produced and delivered globally.
With the topic of controlling behaviors, we are identifying an objective of doing more than just influencing behavior. When the objective is to control behavior, we are identifying an ownership function. Human organizational structures, formal and informal, attempt to establish a set of guidelines to control behavior. With hard automation in manufacturing there is an attempt to limit influencing factors to the very minimum. These systems operate with very strict IF THEN ELSE rules. But even these systems break down occasionally. If we personified the machines, we would say that they “decided to break” because of some unplanned-for influencing factors. Examples might include wear and tear on parts (they get tired), outside environmental influences, and operator error. When systems get more complex, there are more opportunities for unexpected behaviors, even though there might be a desire to control them. And when one looks at organizations of humans and machines, there usually is a desire to exert absolute control (examples: corporate enterprises, militaries, populations).
Guidelines, doctrines, and Rules-of-Engagement are terms that have been used in an attempt to control the behavior of human organizations. As we transition more complex tasks to machines, there will be a greater demand for a method of control that translates problem solving and behavioral skills to machines that operate only on numbers. When humans look at advanced unmanned combat weapon systems that will be used against an adversary, it is clear that the objective will be to retain control of those systems, even when they will be pursuing goals on their own.
Modeling and Simulating Behavior
Behaviors of things and humans tend to be dynamic and adaptive. To support understanding, influencing, and controlling behaviors, the task of modeling and simulating behaviors is helpful in gaining insight into how systems respond to change. Modeling and simulation activities commonly provide a visual aspect that supports the need of humans to see or visualize behaviors. In the simplest form, the human watches the behavior of simulated entities in simulated situations. Sometimes there are controls that allow the human to interact with the simulated environment. The objective is to control costs and schedules before committing a concept to production, where costs will escalate when changes are required. Modeling the behavior of complex systems, however, is still a costly and time-consuming activity. Commonly there are complex relationships between information items that need to be translated into complex mathematical formulas. These formulas then need to be translated into code and finally integrated into the simulation environment; all before the “human” can actually start observing the behaviors and influencing factors in operation.
Explaining behaviors is important to those responsible for systems that deliver behaviors, as well as those that are impacted by the behaviors of other entities. For those delivering behaviors, their interest is in confirming or validating the behaviors that are delivered. Are they (the behaviors) delivered as expected? What went wrong? What could have been done better? The requirement of having “explainable AI (XAI)” has been raised as more responsibilities are being transferred to machines, and those resultant systems have the potential of creating mass-produced problems. When things do go wrong, there are likely to be people that want to know why, so that blame or responsibility can be assigned.
“If you cannot measure it, you cannot improve it.” (Lord Kelvin, https://zapatopi.net/kelvin/quotes)
This might be compared to the black box review after an aircraft crash. The recipients of the behaviors want to know and understand the delivered behaviors in case the behaviors are unethical, unsafe, or illegal. The recipients of the behavior of external entities may also want to understand how, if they adjusted their own behavior, they could survive and thrive in a complex world with many threats and opportunities that are influenced by other behaviors.
Organizations that are interested in “packaging behaviors” want to insert complex, adaptive problem-solving capabilities into target systems. These systems could be software applications, as more intelligent systems are being deployed with the understanding that the internet will transition from an information source to a source of expertise. Examples would be the delivery of medical and financial advice coupled with individualized, personalized data.
The objective of “containing behavior” is primarily the responsibility of a system manufacturer. The “system manufacturer” is building the production systems. They are interested in the ease or complexity of inserting behaviors into the main application or device. The system manufacturer is also interested in performance and the recurring and non-recurring costs associated with delivering and maintaining the system behaviors. The system manufacturer will be concerned with the task of “explaining behaviors” if that activity adds cost and complexity to the production systems. In some cases, the system manufacturer will be interested in the collaborative (configuration) capabilities of the production systems as these may also add costs and complexity to the production systems.
The world is a dynamic environment. To survive and excel in a dynamic environment requires intelligent entities to respond and adapt. In other words, entities spend a lot of time and effort responding to the behaviors of other entities. As more and more expert services are deployed in computerized software applications and devices, there will be an equal demand for systems that can react at Internet speeds. The military will be interested as intelligent weaponized devices are deployed with fully autonomous behaviors. But behavioral responses will also be of interest to financial operations operating in adversarial environments. The “response to behaviors” topic will be an afterthought to some people, who will initially be only interested in delivering behaviors. In the long term, it will be a constant evolution (a chess match, played out again and again). In the future, modeling and simulation environments will allow complex systems to compete in a manner where humans can observe the action-response interactions.
The topic of sharing behaviors will likely be an operational issue. One would expect to see the production of intelligent machines with exchangeable tools (or weapons). The behavior of those systems will likely change based on the tools. This means that behaviors will be shared and exchanged as needed. In some areas, where teams of collaborative intelligent systems will need to work together, there will likely be an exchange of roles. This presents another case where behaviors can be shared. The concept of packaging all behaviors in all systems will never be economical. It is also likely that there will eventually be libraries of behaviors developed and maintained. It will be possible to configure complex behaviors by selecting and integrating library objects. Sometimes the desired behaviors will dictate the necessary tools and information sources. At other times limitations to tools and information sources will dictate the behaviors that can be delivered.
When we refer to “Characterizing Behavior” we are describing behavior from the aspect of the entity that has the potential of delivering that behavior. This is the entity that will accept the external valued force or external valued information. It is the entity that processes the influencing factors by integrating the external factors with the internal value system in order to produce the behaviors. In intelligent devices, the information fusion process is driven by a model that describes the process. That model could be executed in a biological or electro-mechanical machine. These behaviors can be described by modeling how weighted influencing factors are integrated in order to control weighted or valued decisions or control functions.
The following section will decompose the topic of behavior into component terms. We will characterize behaviors as both decisions and actions. We will also characterize behavior as the integration of influencing factors (detected circumstances comprised of threats and opportunities, and with preconceived intent or desires) considering capabilities (possible outputs or the application of tools). There are informational decisions, and there are decisions to perform actions. All behaviors are therefore the delivery of decisions.
We suggest that, in their atomic forms, there are only three types of elementary cognitive (informational) decisions that can be made in machines:
- Binary decisions such as go/no-go: do something or refrain from doing something, yes/no decisions (example: triggered by the breach of a threshold)
- Option selection from mutually exclusive options (example: go left, or right, however you cannot go both left and right at the same time)
- Allocation-of-resources decisions, which allow the system to do so much of something, or to do relative amounts of multiple things that may or may not depend on other things (examples: apply a measurable force on an accelerator, apply resources to respond to an enemy attack, apply available funds across entertainment, education, and retirement)
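The three atomic decision types can be sketched directly in code. This is an illustrative sketch of the categories themselves, not KEEL's actual implementation; the function names and example values are assumptions.

```python
# Illustrative sketch of the three atomic cognitive decision types:
# binary, mutually exclusive option selection, and resource allocation.

def binary_decision(value, threshold):
    """Go/no-go: triggered by the breach of a threshold."""
    return value >= threshold

def select_option(scores):
    """Pick exactly one mutually exclusive option (highest score wins)."""
    return max(scores, key=scores.get)

def allocate(budget, weights):
    """Split a resource proportionally across competing demands."""
    total = sum(weights.values())
    return {name: budget * w / total for name, w in weights.items()}

assert binary_decision(0.8, threshold=0.5)                      # go
assert select_option({"left": 0.3, "right": 0.7}) == "right"    # cannot do both
assert allocate(100, {"fun": 1, "education": 2, "retirement": 2}) == \
    {"fun": 20.0, "education": 40.0, "retirement": 40.0}
```

Everything that follows in the article (weighting, balancing, overrides) is about how the *inputs* to these three decision types are computed.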
Behavior is controlled by “valued information,” or through the application of a “value system” for the main application that delivers the behavior. We have already talked about influencing behavior. This means that some valued piece of information is used to impact some other valued decision or action.
So when we describe behavior we are talking about how a valued force is influencing a valued decision or action. That action may be a valued force exerted external to itself. Or it may be a valued force influencing the integration of other valued forces within the information fusion process.
Behavior is Controlled by the Integration of Valued Information:
Behavior in humans is controlled by the collective interpretation of all valued influencing factors which the human brain integrates to control all of the decisions (and actions) that need to be made. The human is constantly reprioritizing their decisions and actions as new threats, opportunities, and needs bubble to the top.
For humans, it is likely to be a long time before decisions and actions can be articulated in mathematical terms. However, for digital machines (software applications) the only way they operate (using today’s microcontrollers) is with digitized, valued information. This means that it is important to add numbers to the influencing factors, to the goals, and to the internal value system that contains the rules of engagement.
Integrating (Combining) Valued Information
In physics, force is force, mass is mass, velocity is velocity, and acceleration is acceleration. There is little subjectivity in physics, beyond the gaps in our understanding of the details.
When dealing with more “intelligent” entities, like humans, we know that they do not all behave the same way. We know that human behavior changes with age and experience, exposure to new ideas, successes, and failures. It is likely that a human is constantly integrating new information, with previously learned information, to control behavior. We can suggest that the behavior of the human is driven by needs and circumstances. The needs change, and the circumstances change.
Example: We know that humans need food to survive. But if the human is well fed, then the need for food diminishes with respect to other needs. If the human is hungry but there is no food, then the human cannot satisfy the need for food. It is impossible. In this context, we suggest that “impossible” (to eat food) overrides “must” (eat food) from a behavioral aspect.
When dealing with more intelligent entities, those entities are likely to be driven by multiple objectives (or multiple needs, in Maslow’s hierarchy: https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs). At the same time, the importance of different objectives, maneuvers, tactics, and strategies must be considered because they are constantly changing. The importance level of these objectives will change as threats and opportunities change based on depleted or expanding resources.
Figure 2 suggests two goals: food and safety. If you are hungry, food is more important than if you are not hungry. If your safety is at risk, then safety is more important than if you consider yourself safe. A bar graph is helpful in showing the importance of subjective items.
This image also highlights that an intelligent entity has multiple drivers that influence behavior. It is also important to suggest that the value of food and the value of safety are relative numbers. It is likely that safety carries the most absolute value. Perhaps Death = 100. So if you are falling from a tall building you may prioritize your safety above all other goals. Hunger may have a longer time constant, so even if you are very hungry, you probably would not worry too much about food while you were falling from the top of the building. This means that even if you are very hungry, and food is handy and plentiful, you will not “eat food” if the desire for safety is paramount.
Similarly, if you are very hungry, but food is not available, it is impossible to eat food.
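The food-versus-safety balancing, including the “impossible overrides must” rule, can be sketched as follows. The weights, thresholds, and action names here are illustrative assumptions chosen to mirror the discussion, not values from any KEEL model.

```python
# Illustrative sketch: two simultaneous goals (food, safety), dynamically
# reprioritized, with "impossible" overriding "must."

def choose_action(hunger, danger, food_available):
    """hunger and danger are actual values normalized to [0, 1]."""
    # Potential impact (relative importance): safety dominates food,
    # because death carries the highest absolute value.
    eat_score = 1.0 * hunger
    flee_score = 2.0 * danger
    if not food_available:
        eat_score = 0.0          # impossible (to eat) overrides must (eat)
    if flee_score > eat_score:
        return "flee"
    return "eat" if eat_score > 0 else "wait"

assert choose_action(hunger=0.9, danger=0.1, food_available=True) == "eat"
assert choose_action(hunger=0.9, danger=0.9, food_available=True) == "flee"
assert choose_action(hunger=0.9, danger=0.0, food_available=False) == "wait"
```

The second assertion is the falling-from-a-building case: even a very hungry entity prioritizes safety while the danger value dominates.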
What we are attempting to highlight is that intelligent entities will likely have several simultaneous goals or objectives, and that those objectives will likely be reprioritized dynamically. The intelligent entity will be rebalancing its operations, its tactics and its strategies on a continual basis.
We are also highlighting that information items have both a potential impact on a decision or action, and an actual impact. In this way we could say that hunger has a potential value on a decision or action, and an actual value. The actual value would be that you are somewhere between completely satisfied, and really, really hungry, and you need food to survive.
Integration of influencing factors can take several forms. Several of the primary forms are internal to the cognitive process:
- An “actual value” of one factor could control the importance level of the potential impact of another factor.
- An “actual value” of one factor could contribute to the resolution of another factor.
- An “actual value” of one factor could control the threshold of another factor.
- An “actual value” of one factor could trigger an event based on a threshold.
- A comparison of the “actual value” of several factors could identify a “best option.”
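The forms of integration listed above can be sketched with a pair of interacting factors. The factor names (fatigue, obstacle distance) and coefficients are invented for illustration; the point is only how one factor's actual value can scale another's importance, shift its threshold, trigger an event, or feed a best-option comparison.

```python
# Illustrative sketch: one factor's "actual value" influencing others.

fatigue = 0.7          # actual value in [0, 1]
obstacle_dist = 0.3    # normalized distance to an obstacle, in [0, 1]

# 1. One factor controls the importance (potential impact) of another:
#    fatigue raises the importance of the "rest" goal.
rest_importance = 0.2 + 0.8 * fatigue

# 2. One factor controls the threshold of another:
#    fatigue lowers the distance at which an obstacle triggers braking.
brake_threshold = 0.5 * (1.0 - fatigue)

# 3. A threshold breach triggers a binary event.
brake = obstacle_dist < brake_threshold

# 4. Comparing the actual values of several factors identifies a "best option."
options = {"rest": rest_importance, "continue": 1.0 - rest_importance}
best = max(options, key=options.get)
```

With these sample values the obstacle is still outside the (fatigue-reduced) braking threshold, while the elevated fatigue makes "rest" the best option.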
Behavior as an Analog Process
This article highlights that behavior is an analog process where influencing factors are combined to address all objectives simultaneously. It is the human’s judgment and reasoning capability that allows them to operate and survive in a complex, dynamic environment. Using their internal value system, they can make (in their opinion) the “best” decision at the time by balancing all the alternatives. The decision between food and safety above illustrates this balancing: if you are “safe enough,” you will choose to eat based on the availability of food and your hunger level. You balance the alternatives.
Driving Factors for Modeling Behavior
The primary factor driving interest in the topic of behavior is the attempt to do more with less so that manpower intensive functions can be automated. This objective has cost, schedule and quality implications. If it costs too much to model behavior, it will not be aggressively pursued and may remain an academic pursuit. If it takes too long to model behavior, the result may be obsolete before it is ever used. If it doesn’t offer a benefit or satisfy a need, it will not be pursued. The topic will be pursued only if these three factors can be satisfied. It also suggests that services that streamline this evolutionary process will be highly valued.
When a model defines behavior, as in the cognitive model (code) that defines the behavior of a self-driving car, the behavior and the model-of-the-behavior will be identical. That does not, however, mean that there will not be room for improvement.
Defining Behavior for Deployment in Machines
KEEL Technology (“Knowledge Enhanced Electronic Logic”) can be used by experts to define, model, test, and deploy behavior. The KEEL Dynamic Graphical Language (DGL) greatly accelerates the development process of creating and testing KEEL operational models that are delivered as KEEL Cognitive Engines. KEEL Engines can be deployed in software applications and devices. KEEL is an “expert system” technology in that it requires a human domain expert to create the models. The human expert defines the adaptivity of the models. The KEEL DGL helps the domain expert articulate his/her expertise by allowing the domain expert to “see” the internal value system and “see” (graphically) how influencing factors are integrated to control the behavior of the model. By exposing the value system graphically (without forcing the designer to translate the system first to formulas, and then to IF THEN ELSE “code”) the model can be rapidly modified and tested. See Figure 5.
Interactive KEEL models are created by dropping graphical objects on the screen. The graphical objects represent external influencing factors and information fusion points. Functional relationships are created with simple drag and drop between connection points. The cognitive model is created and executed behind the graphical front end, as the model is being created.
Developing KEEL models does not require any special mathematical skills. It also does not require programming skills to develop and test the models. The code for inclusion in the main application is auto-generated. It is provided in a text file in the target language of choice. The KEEL Cognitive Engine (captured in the code) has a very simple API. This resultant code is then handed off to the engineers who are responsible for building the production application. The system engineers are responsible for gathering the inputs (influencing factors) and providing them as normalized values to the KEEL Engine, and then calling the KEEL Engine to process the information. When complete, the results (representing decisions and actions) are used to control the outputs of the application.
KEEL Engines process information “collectively” (during a “cognitive cycle”), where information is integrated and balanced as if it were being processed on an analog computer. Inter-related factors are handled during the cognitive cycle.
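The host-application workflow described above (gather inputs, normalize them, hand them to the engine for a cognitive cycle, then read back decision outputs) might look roughly like the sketch below. To be clear: the class, method names, and the weighted-sum fusion shown here are all invented stand-ins; the real auto-generated KEEL Engine API and its internal fusion process are not published in this article and will differ.

```python
# Hypothetical stand-in for an auto-generated cognitive engine, showing
# the host application's responsibilities: normalize inputs, run one
# cognitive cycle, read decision outputs.

class CognitiveEngine:
    """Stand-in engine; `weights` plays the role of the expert's value system."""
    def __init__(self, weights):
        self.weights = weights      # {decision: {factor: weight}}
        self.outputs = {}

    def process(self, inputs):
        # One "cognitive cycle": fuse all normalized inputs collectively.
        for decision, factor_weights in self.weights.items():
            self.outputs[decision] = sum(
                w * inputs[factor] for factor, w in factor_weights.items())
        return self.outputs

def normalize(raw, lo, hi):
    """Host code maps raw sensor values into [0, 1] before calling the engine."""
    return min(1.0, max(0.0, (raw - lo) / (hi - lo)))

engine = CognitiveEngine({"brake": {"speed": 0.6, "proximity": 0.4}})
inputs = {"speed": normalize(80, 0, 120), "proximity": normalize(5, 0, 50)}
result = engine.process(inputs)
assert 0.0 <= result["brake"] <= 1.0   # outputs drive the application's actions
```

Because every output is a deterministic function of the weighted inputs, logging `inputs` alongside `result` each cycle is what makes the system auditable: any past decision can be replayed and explained from the recorded values.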
When the models have been delivered to a production environment, there is support for monitoring and reviewing the behavior of the system. Using a concept called “language animation,” a production system can publish its view of the world, and the KEEL Toolkit can be used to animate the model. You can “watch your system think” in “almost” real time. At any time you can review the decisions and actions and see why those decisions and actions have been made. KEEL provides 100% auditable and traceable systems. Example: It should be possible to review black box recorded information in minutes for after-mission reviews (not months).
For more information about KEEL Technology, visit www.compsim.com. For a brief introduction to the technology, please see the first video in the following KEEL PLAYLIST of videos: https://www.youtube.com/playlist?list=PLZb31La4M4Ja-AHA0HQ9Q8tp1-HAjgS5U