This post is not meant to give the reader a detailed look into how our algorithms work. The goal is to give the reader an idea of our approach.
Now, here's a fun activity: Get your camera phone ready, meet up with one or more Artificial Intelligence scientists, and ask them to define intelligence -- scientifically. Make it clear that by "scientifically" you mean without resorting to abstract words such as "planning" and "reasoning". Then pull out your camera and take a picture of their facial expression.
As ironic as it sounds, the best way to confuse an AI scientist is to ask for a scientific definition of intelligence!
There’s good reason for this lack of a definition. Today, a handful of companies are working toward General AI, or Artificial General Intelligence (AGI), and none of them has accomplished an AGI breakthrough. Why? The answer lies in having the wrong focus. Companies like Google DeepMind, Vicarious, and Numenta focus on reverse engineering the human brain. This presupposes that the human brain is the definition of intelligence, and it is doomed to fail because we lack a comprehensive understanding of how the human brain works.
It’s an enormous mistake to believe that solving intelligence means reverse engineering the brain. In 2005, when we first started our research, we consciously decided that the brain was just one application of intelligence, not its definition. We needed to find a plausible definition of intelligence. We found our answers in the realms of physics, quantum mechanics, and spacetime.
We assumed, like many, that intelligence is the efficient pursuit of goals. Then we asked what it means to achieve a goal. How do we explain the achievement of a goal? The answer can be found in quantum mechanics. While there is no consensus, a sizeable portion of the physics community believes that reality can be described by the collapse of the wave function, regardless of whether you consider the wave function a mathematical abstraction or something real.
In other words, achieving our goals means that the wave function collapsed in a way that produced the reality we were pursuing.
Intelligence is not about predicting the wave function collapse, it is instead the ability to control it, which means, according to our research, moving particles around in space and time. We therefore define intelligence as follows:
The orchestration of a sequence of particle movements to control the wave function collapse.
This is our argument for a definition of intelligence. It is still too early to prove or disprove it, but it appears to be the only potential candidate as of today, and it serves our work well.
The interesting thing with this definition is it doesn't just describe the operation of the human brain. It describes planets moving through space and the movement of sub-atomic particles. In other words, should this definition stand the test of time, intelligence is woven into the fabric of spacetime itself.
From Science to Machine Algorithm
If intelligence is about moving particles in order to control the wave function collapse, then it appears obvious that a machine implementation of intelligence should focus on modeling reality and understanding which particles need to move where.
However, we realized it wasn't as easy as putting up a camera and "seeing" the physical world. What we wanted to build wasn't a Sentient AI -- an AI that is itself a life-form with its own selfish goals, needs, and wants. Rather, we wanted to build a Why-AI: a digital mind that extends the user's own way of thinking, reasons like the user, and views reality as the user does.
What was needed wasn't a model of objective reality, but a subjective interpretation of that reality.
What is "Artificial Intelligence"?
Above, I discussed the definition of intelligence from a scientific perspective. But when we sit down to implement AI, moving particles doesn't make a lot of sense. So, what is "Artificial Intelligence"? What is its purpose from a computer science perspective?
We see artificial intelligence -- by which I mean AGI / General AI / true AI (whichever you prefer to call it) -- as consisting of three parts:
1) Machine Learning. The ability of a machine to learn knowledge by building subjective interpretations of reality.
2) Temporal 3D Modeling. The ability to use the learned knowledge to put together 3D models of reality that extend in time.
3) Comprehension. The ability to analyze these models and understand the cause and effect of possible action or inaction by the user or others (human or non-human) in the model. In addition, comprehension is about finding a sequence of causes and effects that leads to goal achievement (similar to a sequence of particle movements).
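The three parts above can be sketched as a data-flow pipeline. This is a purely illustrative mock-up under my own assumptions; every function name and data structure here is hypothetical, since the post describes the parts only at the conceptual level.

```python
# Hypothetical pipeline sketch of the three parts of AGI described above.
# All names and representations are illustrative, not an actual API.

def machine_learning(raw_observations):
    """Part 1: turn raw sensor data into subjective knowledge pieces."""
    return [f"knowledge({obs})" for obs in raw_observations]

def temporal_model(knowledge):
    """Part 2: lay learned knowledge out as a model that extends in time."""
    return {t: piece for t, piece in enumerate(knowledge)}

def comprehend(model, goal):
    """Part 3: read the model as a cause-and-effect sequence toward a goal."""
    steps = [model[t] for t in sorted(model)]
    return steps + [f"goal({goal})"]

model = temporal_model(machine_learning(["wake", "commute"]))
print(comprehend(model, "arrive at work"))
# -> ['knowledge(wake)', 'knowledge(commute)', 'goal(arrive at work)']
```

The point of the sketch is only the direction of data flow: learning feeds modeling, and comprehension reads the model in temporal order.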
General Purpose Machine Learning
Overall, our machine learning algorithm follows a process we call the "Kylee Model". The Kylee Model can be summarized as follows:
1) Observation - This is a stream of sensor data arriving at our cognition engine (the core AI algorithm). The sensor data is abstracted and normalized into graphs. This sensor data doesn't come from just one user's device, but from all devices across all users.
2) Memorization - Many independent observations lead to memorization. For example, if Nigel constantly sees people silencing their phones at the movie theater, Nigel will eventually memorize that as common sense (normative behavior). However, at this stage we still can't claim that Nigel comprehends why we silence our phones at the movie theater.
3) Conceptualization - Many memorizations lead to conceptualization. Each memorization tends to be slightly different from the others. That variance allows Nigel to turn a piece of memorized knowledge into a concept. Concepts are critical because they are what enables "transfer learning". In one of our tests, Nigel conceptualized a basic version of the concept of "home". By observing what individuals called home (specific locations), Nigel was able to learn the word "home" as a set of locations and WiFi signals (the WiFi surprised us and confirmed that the learning was unsupervised).
4) Comprehension - Once conceptualized knowledge is put together to create a temporal 3D model of reality, the algorithm analyzes this model looking for two types of comprehension: (1) depth comprehension - why is a ripe banana yellow? Because it contains particular carotenoids (and of course you can go deeper) - and (2) temporal comprehension - how do I get bananas? Get in the car, drive to the grocery store, go to the fruit section, pick up the bananas, and pay for them.
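The memorization and conceptualization steps can be illustrated with a toy version of the "home" example above. The repetition threshold, the (signal type, value) encoding, and the intersection rule are my own simplifying assumptions, not the Kylee Model's actual internals.

```python
from collections import Counter

# Toy sketch of Kylee Model steps 2 and 3, using the "home" example.
# Thresholds and data structures are illustrative assumptions only.

def memorize(observations, threshold=3):
    """Step 2: repeated independent observations become memorized knowledge."""
    counts = Counter(observations)
    return {obs for obs, n in counts.items() if n >= threshold}

def conceptualize(memories_per_user):
    """Step 3: the variance across users' memories exposes what they share.
    Here, 'home' emerges as the signal types common to every user."""
    signal_types = [{kind for kind, _ in mems} for mems in memories_per_user]
    return set.intersection(*signal_types)

# Each user's observations are (signal type, value) pairs; values differ
# per user, which is exactly the variance that enables conceptualization.
alice = memorize([("gps", "59.33,18.06")] * 4 + [("wifi", "alice-net")] * 4
                 + [("gps", "cafe")])          # one-off visit: not memorized
bob = memorize([("gps", "40.71,-74.00")] * 5 + [("wifi", "bob-net")] * 3)

print(sorted(conceptualize([alice, bob])))  # -> ['gps', 'wifi']
```

Each user memorizes different locations and WiFi networks, but the shared structure (location plus WiFi) survives the intersection, loosely mirroring how "home" was learned as a set of locations and WiFi signals.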
The Periodic Table for Knowledge
The algorithm behind Nigel is specifically designed to identify and learn pieces of knowledge. But more than that, it is designed to break these pieces down and find the smallest pieces of knowledge - the basic building blocks from which any possible interpretation of reality can be built.
Think of it as building a periodic table for knowledge.
Inspiration from Quantum Mechanics
Not only do we believe intelligence is part of quantum mechanics; the algorithm itself also draws heavily on quantum mechanics. Three examples:
1) Thinking Process - We use the many-worlds interpretation as the inspiration for thinking. While we cannot confirm the many-worlds interpretation, we found it very useful for Nigel's thinking process. Within our spacetime bounding box, the algorithm creates simulations of various futures based on cause and effect. Once it has multiple futures, it acts as a sort of "Google Maps for Many Worlds" to find a path to realizing a goal.
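The "Google Maps for Many Worlds" idea can be sketched as a search over simulated futures: apply cause-and-effect rules to branch into possible worlds, then find the shortest action path that reaches the goal. The states, rules, and breadth-first search are my own illustrative choices, not Nigel's actual mechanism.

```python
from collections import deque

# Sketch: enumerate possible futures from cause-and-effect rules, then
# search for the shortest action path to a goal state. Invented example.

# Cause-and-effect rules: (action, precondition state) -> resulting state.
RULES = {
    ("drive_to_store", "at_home"): "at_store",
    ("pick_bananas", "at_store"): "holding_bananas",
    ("pay", "holding_bananas"): "bananas_bought",
    ("drive_home", "at_store"): "at_home",
}

def plan(start, goal):
    """Breadth-first search over simulated futures; returns an action list."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for (action, pre), post in RULES.items():
            if pre == state and post not in seen:
                seen.add(post)
                frontier.append((post, path + [action]))
    return None  # no simulated future reaches the goal

print(plan("at_home", "bananas_bought"))
# -> ['drive_to_store', 'pick_bananas', 'pay']
```

Each branch of the search corresponds to one simulated "world"; the returned path is the route through those worlds that realizes the goal, echoing the temporal-comprehension banana example above.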
2) Knowledge Superposition - Interpretation is key to the thinking process. It is not enough to model a girl as a girl. A girl could be, from the perspective of the observer (the user), a daughter, a friend, a mom, a sister, or many other interpretations. To solve this challenge, every knowledge piece is encoded with all of its possible interpretations. Once it is added to a temporal 3D model, only some of the nodes in the graph fit the model, which essentially collapses the knowledge to a specific interpretation. This has an enormous impact on the scalability of the thinking process.
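A minimal sketch of this collapse, using the "girl" example above: a knowledge node carries every interpretation at once, and the context of the model it is placed into filters that set down. The class and the context representation are my own assumptions for illustration.

```python
# Sketch: a knowledge node holds all possible interpretations at once
# ("superposition"); placing it in a model's context filters the set,
# "collapsing" it toward a specific interpretation. Illustrative only.

class KnowledgeNode:
    def __init__(self, entity, interpretations):
        self.entity = entity
        self.interpretations = set(interpretations)  # superposed meanings

    def collapse(self, context):
        """Keep only the interpretations consistent with the model's context."""
        return self.interpretations & context

girl = KnowledgeNode("girl", {"daughter", "friend", "mom", "sister", "stranger"})

# In a temporal 3D model of the user's family home, only family roles fit:
family_context = {"daughter", "sister", "mom"}
print(sorted(girl.collapse(family_context)))  # -> ['daughter', 'mom', 'sister']
```

The scalability benefit in this toy version is that interpretations are pruned by simple set intersection rather than by reasoning over every interpretation in every model.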
3) Retrocausality - When building these temporal 3D models, we assume retrocausality. In essence, we assume the goal has been achieved and follow the backwards flow of information over time to discover what caused the effects we're seeing.
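The retrocausal step can be sketched as backward chaining: assume the goal state holds, then walk the effect-to-cause links in reverse to recover the sequence that must have produced it. The effect-to-cause table below is invented for illustration and reuses the banana scenario.

```python
# Sketch of the retrocausal modeling step: assume the goal has been
# achieved, then follow the backwards flow of information to find what
# caused it. The effect -> cause table is an invented example.

CAUSES = {  # effect -> the cause that produced it
    "bananas_bought": "holding_bananas",
    "holding_bananas": "at_store",
    "at_store": "at_home",
}

def backward_chain(goal):
    """Walk backwards from the assumed goal to its earliest cause."""
    chain = [goal]
    while chain[-1] in CAUSES:
        chain.append(CAUSES[chain[-1]])
    return list(reversed(chain))  # causal order, earliest first

print(backward_chain("bananas_bought"))
# -> ['at_home', 'at_store', 'holding_bananas', 'bananas_bought']
```

Note the symmetry with the forward many-worlds search: the forward pass explores many possible futures, while the backward pass reconstructs the single causal chain behind an assumed outcome.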
The quantum mechanics influence is more than inspiration: the techniques above are what make Nigel scalable. It is technically impossible for Nigel to model the position of every particle in space, at least not with today's computer technology, but these techniques allow Nigel to scale into an immensely powerful general AI system with global impact.
While Nigel is in its infancy today, only learning the basics of reality, we truly believe it will grow into a scientist, even a super-scientist. Personally, I hope Nigel can help find a cure for cancer and help lift people out of global poverty within my own lifetime.
What about social and emotional intelligence? Our research found no evidence that emotional intelligence and social intelligence are anything real in their own right. I believe human emotions and social behavior are what allow us to build our own 3D models of reality in our brains.
For Nigel, we are working with external companies to integrate emotion recognition so that Nigel can respond to emotional states, but we do not believe emotion itself is part of universal intelligence.