Ain't Nuthin' So Non-Common As Common Sense

Time for common sense, Image: Depositphotos enhanced by CogWorld

Source: COGNITIVE WORLD on FORBES

Actually, to use common sense, the title of this article should echo the words of renowned architect Frank Lloyd Wright: “There is nothing more uncommon than common sense.” Hmm. Is that common sense, or commonsense, or common-sense? It takes some real common sense to know the difference.

In artificial intelligence research, commonsense knowledge is the collection of facts and information that an ordinary person is expected to know. The commonsense knowledge problem is the ongoing project in the field of knowledge representation (a sub-field of artificial intelligence) to create a commonsense knowledge base: a database containing all the general knowledge that most people possess. The database must be represented in a way that makes it available to artificial intelligence programs that use natural language or make inferences about the ordinary world. Such a database is a type of ontology, of which the most general are called upper ontologies.

The commonsense problem is considered among the hardest in all of AI research because the breadth and detail of commonsense knowledge are enormous. Any task that requires commonsense knowledge is considered AI-complete: to do it as well as a human being does, a machine must appear as intelligent as a human being. Such tasks include machine translation, object recognition, text mining and many nuanced decisions. To do these tasks perfectly, the machine has to know what the text is talking about or what objects it may be looking at, and that is impossible in general unless the machine is familiar with all the same concepts an ordinary person is familiar with.

Way back

Yours truly visiting Atanasoff at his home in 1981, PETER FINGAR

This article serves as a time capsule for AI and its current phase of tackling common sense. “Time goes by so slowly. And time can do so much.” (Righteous Brothers, “Unchained Melody”)

Let’s go way back, back farther than just thinking of a computer with common sense, to see how fast things have developed since the very advent of the electronic digital computer.

1939. John Vincent Atanasoff invented the electronic digital computer.

John McCarthy, https://projects.csail.mit.edu/films/aifilms/aifilms.html

20 Years Later, 1959 … John McCarthy (1927 - 2011) was an American computer scientist. A pioneer in the foundations of artificial intelligence research, he coined the term “artificial intelligence” in 1955. He was one of the creators of the (original) Lisp programming language, which figured heavily in early AI research in the 1960s and 1970s, and he organized the first artificial intelligence conference in 1956 while on the mathematics faculty at Dartmouth. He founded the AI labs at MIT and Stanford. On a side note, McCarthy predicted that creating a truly intelligent machine would require “1.8 Einsteins and one-tenth the resources of the Manhattan Project” (the project that created the first atomic weapons).

Programs with Common Sense, https://ai.stackexchange.com/questions/58/who-first-coined-the-term-artificial-intelligence

Wow. In 1959, he published the groundbreaking paper, Programs with Common Sense, featuring a hypothetical program, the Advice Taker.

25 Years Later, 1984 … Get Psyched Over Cyc

Here’s the longest-running AI project of all time. The semantic technology company Cycorp has its roots in the Microelectronics and Computer Technology Corporation (MCC), the first, and at one time one of the largest, computer industry research and development consortia in the United States, launched in response to the Japanese 5th Generation computer project. Its goal is to codify general human knowledge and common sense so that computers might make use of it. MCC’s Cyc Project charged itself with figuring out the tens of millions of pieces of data we rely on as humans, the knowledge that helps us understand the world, and representing them in a formal way that machines can use to reason. The spin-off, Cycorp, has been working on the problem continuously since 1984. Cyc isn’t “programmed” in the conventional sense; it’s much more accurate to say it’s being “taught.” In an interview with Business Insider, Doug Lenat, President and CEO (and fellow Cognitive World contributor), said that most people think of computer programs as “procedural, a flowchart,” but building Cyc is much more like educating a child: “We’re using a consistent language to build a model of the world.”

This means Cyc can see “the white space rather than the black space in what everyone reads and writes to each other.” An author might explicitly choose certain words and sentences as he’s writing, but in between the sentences are all sorts of things you expect the reader to infer; Cyc aims to make these inferences.

Consider the sentence, “John Smith robbed First National Bank and was sentenced to 30 years in prison.” It leaves out the details surrounding his being caught, arrested, put on trial, and found guilty. A human would never actually spell out all that detail because it would be boring, confusing, or insulting; you can safely assume other people know what you’re talking about. It’s like pronoun use (he, she, it): one assumes people can figure out the referent. This stuff is very hard for computers to understand and get right, but Cyc manages both.
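
To make that “white space” idea concrete, here is a minimal sketch of how a commonsense knowledge base might fill in unstated steps. The triples, predicates, and rules below are invented for illustration; this is not Cyc’s actual CycL representation or its inference engine.

```python
# Toy forward-chaining inferencer: fill in the unstated legal steps
# implied by "was sentenced to 30 years in prison."
# All predicates and rules here are invented for illustration.

facts = {("JohnSmith", "robbed", "FirstNationalBank"),
         ("JohnSmith", "sentencedTo", "30YearsPrison")}

# Commonsense rule: a sentencing implies the steps a human never states.
rules = [
    (("?x", "sentencedTo", "?y"),
     [("?x", "foundGuiltyOf", "aCrime"),
      ("?x", "stoodTrialIn", "aCourt"),
      ("?x", "wasArrestedBy", "police"),
      ("?x", "wasCaughtBy", "police")]),
]

def match(pattern, fact):
    """Bind ?-variables in a (subject, predicate, object) pattern."""
    bindings = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts appear."""
    inferred, changed = set(facts), True
    while changed:
        changed = False
        for pattern, consequents in rules:
            for fact in list(inferred):
                bindings = match(pattern, fact)
                if bindings is None:
                    continue
                for c in consequents:
                    new = tuple(bindings.get(t, t) for t in c)
                    if new not in inferred:
                        inferred.add(new)
                        changed = True
    return inferred

for triple in sorted(forward_chain(facts, rules) - facts):
    print(triple)   # the "white space": caught, arrested, tried, convicted
```

Run it and the four unstated steps pop out as new triples, which is the point: the inferences a human reader makes silently become explicit, machine-usable facts.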

Natural-language understanding will also require computers to grasp what we humans think of as common-sense meaning. For that, Ray Kurzweil’s AI team at Google taps into the Knowledge Graph, Google’s catalogue of some 700 million topics, locations, people, and more, plus billions of relationships among them. It was introduced as a way to provide searchers with answers to their queries, not just links.

15 Years Later, 1999 … MIT: Open Mind Common Sense (OMCS)

Open Mind Common Sense (OMCS) is an artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web. The Open Mind Common Sense project differs from Cyc because it has focused on representing the common-sense knowledge it collected as English sentences, rather than using a formal logical structure.

Since its founding in 1999, it has accumulated more than a million English facts from over 15,000 contributors in addition to knowledge bases in other languages. Much of OMCS's software is built on three interconnected representations: the natural language corpus that people interact with directly, a semantic network built from this corpus called ConceptNet, and a matrix-based representation of ConceptNet called AnalogySpace that can infer new knowledge using dimensionality reduction. The knowledge collected by Open Mind Common Sense has enabled research projects at MIT and elsewhere.
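
To give a flavor of that AnalogySpace step, here is a minimal sketch of inferring new knowledge by dimensionality reduction, using a truncated SVD on a tiny concept-by-feature matrix. The concepts, features, and scores are toy inventions for illustration, not ConceptNet’s real data or the project’s actual code.

```python
# Sketch of the AnalogySpace idea: factor a sparse concept-by-feature
# matrix, then read the low-rank reconstruction as "smoothed"
# commonsense scores that generalize beyond what was asserted.
import numpy as np

concepts = ["dog", "cat", "car", "truck"]
features = ["is_alive", "has_fur", "has_wheels", "can_be_driven"]

# 1 = asserted true, 0 = never asserted. Note the deliberate gap:
# no contributor ever said that a cat is alive.
A = np.array([[1, 1, 0, 0],   # dog
              [0, 1, 0, 0],   # cat
              [0, 0, 1, 1],   # car
              [0, 0, 1, 0]])  # truck

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                    # keep the two strongest axes
A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The reconstruction fills the gap: "cat is_alive" gets a positive
# score (~0.45) because cats pattern with dogs, even though that fact
# was never entered.
print(round(A_hat[concepts.index("cat"), features.index("is_alive")], 2))
```

The real AnalogySpace matrix is vastly larger and sparser, but the principle is the same: the dominant dimensions capture broad regularities, and reconstructing from them proposes plausible facts that no one typed in.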

The project was the brainchild of Marvin Minsky, Push Singh, Catherine Havasi and others. Development work began in September 1999, and the project was opened to the Internet a year later. Havasi described it in her dissertation as “an attempt to ... harness some of the distributed human computing power of the Internet, an idea which was then only in its early stages.” The original OMCS was influenced by the website Everything2 and its predecessor, and presented a minimalist interface that was inspired by Google.

The project is currently run by the Digital Intuition Group at the MIT Media Lab under Havasi. In 2010, OMCS co-founder and director Catherine Havasi, with Rob Speer, Dennis Clark and Jason Alonso, created Luminoso, a text analytics software company that builds on ConceptNet. It uses ConceptNet as its primary lexical resource in order to help businesses make sense of and derive insight from vast amounts of qualitative data, including surveys, product reviews and social media. (https://en.wikipedia.org/wiki/Open_Mind_Common_Sense)

19 Years Later, 2018 … DARPA: Machine Common Sense (MCS) program

BANG! Originally known as the Advanced Research Projects Agency (ARPA), the agency was created in February 1958 by President Dwight D. Eisenhower in response to the Soviet launching of Sputnik 1 in 1957. By collaborating with academic, industry, and government partners, DARPA formulates and executes research and development projects to expand the frontiers of technology and science, often beyond immediate U.S. military requirements. DARPA-funded projects have provided significant technologies that influenced many non-military fields, such as computer networking and the basis for the modern Internet, and graphical user interfaces in information technology.

Today, DARPA is onto something very, very new that could radically change our world forever. At a symposium in Washington DC in September 2018, DARPA announced plans to invest $2 billion in artificial intelligence research over the next five years. In a program called “AI Next,” the agency now has over 20 programs currently in the works and will focus on “enhancing the security and resiliency of machine learning and A.I. technologies, reducing power, data, performance inefficiencies and [exploring] ‘explainability'” of these systems.

“Machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible,” said director Dr. Steven Walker. “We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.” Thus the Machine Common Sense (MCS) program, in which DARPA has teamed up with the Seattle-based Allen Institute for AI (AI2), seeks to address the challenge of articulating and encoding human common-sense reasoning for intelligent machines. The MCS program aims to create machine common-sense services that can help break down the barrier between the narrowly focused AI applications of today and the more general AI applications of the future. (https://www.darpa.mil/program/machine-common-sense)

Machine common sense has long been a critical but missing component of AI. Recent advances in machine learning have created new AI capabilities, but machine reasoning across these applications remains narrow and highly specialized. Current machine learning systems must be carefully trained or programmed for every situation.

Common sense is defined as “the basic ability to perceive, understand, and judge things that are shared by (‘common to’) nearly all people and can reasonably be expected of nearly all people without need for debate.” Humans are usually not conscious of the vast sea of common-sense assumptions that underlie every statement or action. This shared, unstated background knowledge includes a general understanding of how the physical world works (i.e., intuitive physics), a basic understanding of human motives and behaviors (i.e., intuitive psychology), and a knowledge of the common facts that an average adult possesses.

The absence of common sense prevents intelligent systems from understanding their world, behaving reasonably in unforeseen situations, communicating naturally with people, and learning from new experiences. Its absence is considered the most significant barrier between the narrowly focused AI applications of today and the more general, human-like AI systems hoped for in the future. Common sense reasoning’s obscure but pervasive nature makes it difficult to articulate and encode.

The Machine Common Sense (MCS) program seeks to address the challenge of machine common sense by pursuing two broad strategies. Both envision machine common sense as a computational service, or as machine common-sense services. The first strategy aims to create a service that learns from experience, like a child, to construct computational models that mimic the core domains of child cognition for objects (intuitive physics), agents (intentional actors), and places (spatial navigation). The second strategy seeks to develop a service that learns from reading the Web, like a research librarian, to construct a commonsense knowledge repository capable of answering natural language and image-based questions about commonsense phenomena. (https://www.darpa.mil/news-events/2018-10-11)

The Quantum Computing Common-Sense Apocalypse

“Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.”—Winston Churchill

No! The Mayan Apocalypse, December 21, 2012, didn’t mean the end of the world; it meant a whole new beginning. And 2012 did indeed open a new world in AI: it was the year that neural-net deep learning took off, ushering in a new era.

In Greek the term apocalypse means the “lifting of the veil.” It is a disclosure of something hidden from the majority of mankind in an era dominated by falsehood and misconception. Mayan elders did not prophesy that everything will come to an end. Rather, this is a time of transition from one World Age into another. The Mayan fifth world finished in 1987. The sixth world started in 2012, placing us at the beginning of a new age. It is the time for humans to work through “our stuff” individually and collectively. The Mayan sixth world is nothing more than a blank slate; it is up to us to create the new world and civilization as we wish.

Although it’s impossible to know precisely how cognitive computing will change our lives, two broad outcomes seem possible: 1) mankind will be set free from the drudgery of work, or 2) we will see the end of the human era.

In my Cognitive Computing book, I point to IBM’s explanation of three eras of computing itself. Now, we can let DARPA expand on that third era, as shown below. DARPA’s depiction illustrates that we now have the actual computing power to take on Big Data, a term used to refer to structured and unstructured data sets that are too large or complex for traditional data-processing application software to adequately deal with. Leading the charge here is the advent of quantum computers and quantum-mechanical phenomena such as superposition and entanglement. Rather than store information using bits represented by 0s or 1s as conventional digital computers do, quantum computers use quantum bits, or qubits, to encode information as 0s, 1s, or both at the same time. This superposition of states—along with the other quantum mechanical phenomena of entanglement and tunneling—enables quantum computers to manipulate enormous combinations of states at once. That’s just what’s needed to power AI from now on.

Three eras, Image: my book and IBM

AI today, tomorrow and the future, Image: DARPA
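
To see why superposition gives quantum machines that combinatorial reach, here is a tiny state-vector illustration in NumPy. It is a classical simulation of the math, not real quantum hardware, and the helper function is my own invention: n qubits are described by 2^n amplitudes, which is exactly the exponential state space described above.

```python
# Toy state-vector demo: put n qubits into an equal superposition of
# all 2**n basis states with Hadamard gates. Classical NumPy math,
# not a real quantum computer.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def uniform_superposition(n):
    """Apply H to each of n qubits, starting from |00...0>."""
    state = np.zeros(2**n)
    state[0] = 1.0                              # the |00...0> state
    op = H
    for _ in range(n - 1):
        op = np.kron(op, H)                     # H (x) H (x) ... (x) H
    return op @ state

state = uniform_superposition(3)
print(np.round(state**2, 3))   # all 8 outcomes equally likely: 0.125 each
```

Storing the state of just 50 qubits this way would take 2^50 (about a quadrillion) amplitudes, which is why classical simulation runs out of road and genuine quantum hardware becomes so interesting for AI workloads.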

Okay, okay. You’ve already heard and read a whole lot about AI, but with the coming bang of common-sense automation, “Wait a minute … you ain’t seen nothin’ yet.” You see, others are at it besides DARPA: the huge new Vector Institute in Canada (https://vectorinstitute.ai), for one, and China says it will be the world leader, dominating AI by 2030. Best advice? Keep up with the news via your search engine, e.g., “artificial intelligence” + “common sense” … then click on News. Without a doubt, “You ain’t seen nothin’ yet.”

Peter Fingar is an internationally recognized expert on business strategy, globalization and business process management. He’s a practitioner with over fifty years of hands-on experience at the intersection of business and technology. His seminal book, Business Process Management: The Third Wave, is widely recognized as a key launch pad for the BPM trend in the 21st Century. Peter has held management, technical and advisory positions with GTE Data Services, American Software and Computer Services, Saudi Aramco, EC Cubed, Noor Technologies (Egypt), the Technical Resource Connection division of Perot Systems and IBM Global Services. In addition, he served as the CIO at the University of Tampa for five years, overseeing the university’s first installation of the Internet. As a university professor he has taught graduate and undergraduate computing studies at business schools in the U.S. and abroad, and has given keynote talks worldwide (including London, New York, Washington, Amsterdam, Stockholm, Munich, Milan, Paris, Brussels, Tokyo, Shanghai, Montreal, Chicago, Denver, Las Vegas, San Francisco, San Diego, Miami, Cairo, Johannesburg, Riyadh, Dubai, and Lisbon). In addition to numerous articles (in CIO Magazine, Optimize, Computerworld, Intelligent Enterprise, Internet World (columnist), SiliconIndia, FirstMonday, EAI Journal, Logistics, Information Age, and the Journal of Systems Management), he is the author of twenty-three business technology books, and counting.