The Cognitive Computing Era is Here: Are You Ready?


By Peter Fingar

Based on excerpts from Peter's book Cognitive Computing: A Brief Guide for Game Changers


Artificial Intelligence is likely to change our civilization as much as, or more than, any technology that’s come before, even writing. 
-- Miles Brundage and Joanna Bryson, Future Tense

The smart machine era will be the most disruptive in the history of IT.
-- Gartner, The Disruptive Era of Smart Machines is Upon Us

Without question, Cognitive Computing is a game-changer for businesses across every industry. 
 -- Accenture, Turning Cognitive Computing into Business Value, Today!


The era of cognitive systems is dawning, building on today’s computer programming era. All machines, for now, require programming, and by definition programming does not allow for alternate scenarios that have not been programmed. To allow alternate outcomes would require going up a level and creating a self-learning Artificial Intelligence (AI) system. Via biomimicry and neuroscience, Cognitive Computing does this, taking computing concepts to a whole new level. Once-futuristic capabilities are becoming mainstream. Let’s take a peek at the three eras of computing.

Fast forward to 2011, when IBM’s Watson won Jeopardy! Google recently made a $500 million acquisition of DeepMind. Facebook recently hired NYU professor Yann LeCun, a respected pioneer in AI. Microsoft has more than 65 PhD-level researchers working on deep learning. China’s Baidu search company hired Stanford University’s AI professor Andrew Ng. All this has a lot of people talking about deep learning. While artificial intelligence has been around for years (John McCarthy coined the term in 1955), “deep learning” is now considered cutting-edge AI that represents an evolution over primitive neural networks.[i]

Taking a step back to set the foundation for this discussion, let me review a few of these terms. As human beings, we have complex neural networks in our brains that allow most of us to master rudimentary language and motor skills within the first 24 months of our lives with only minimal guidance from our caregivers. Our senses provide the data to our brains that allows this learning to take place. As we become adults, our learning capacity grows while the speed at which we learn decreases. We have learned to adapt to this limitation by creating assistive machines. For over 100 years machines have been programmed with instructions for tabulating and calculating to assist us with better speed and accuracy.

Today, machines can be taught to learn much faster than humans. This is the field of machine learning, in which systems learn from data (much as we humans do). This learning takes place in Artificial Neural Networks that are designed based on studies of the human neurological and sensory systems. Artificial neural nets make computations based on input data, then adapt and learn. In machine learning research, when high-level data abstraction is achieved through layers of non-linear processing, the work is called deep learning, the prime directive of current advances in AI.
Cognitive computing, or self-learning AI, combines the best of human and machine learning and essentially augments us.
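To make the idea concrete, here is a minimal sketch in Python (illustrative only, not any vendor’s code) of what an artificial neural net does: input data flows through layers of weighted connections, and it is the stacking of several layers with non-linear activations that makes a network “deep.” The layer sizes and random weights below are arbitrary placeholders.

    import numpy as np

    # Minimal sketch: data flows through stacked layers of weighted
    # connections; the non-linear activation is what keeps several layers
    # from collapsing into one linear transformation.

    rng = np.random.default_rng(0)

    def relu(x):
        # Non-linear activation function
        return np.maximum(0.0, x)

    def deep_forward(x, layer_sizes):
        """Pass input x through randomly initialized dense layers."""
        activations = x
        for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
            weights = rng.normal(scale=0.1, size=(n_in, n_out))
            bias = np.zeros(n_out)
            activations = relu(activations @ weights + bias)
        return activations

    # Example: a 4-feature input flowing through three stacked layers.
    x = rng.normal(size=(1, 4))
    print(deep_forward(x, layer_sizes=[4, 16, 16, 8]).shape)  # -> (1, 8)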

3 Eras
When we associate names with current computer technology, no doubt “Steve Jobs” or “Bill Gates” come to mind. But the new name will likely be a guy from the University of Toronto, the hotbed of deep learning scientists. Meet Geoffrey Everest Hinton, great-great-grandson of George Boole, the guy who gave us the mathematics that underpin computers.

Hinton is a British-born computer scientist and psychologist, most noted for his work on artificial neural networks. He now works part time for Google, joining AI pioneer and futurist Ray Kurzweil and Andrew Ng, the Stanford University professor who set up Google’s neural network team in 2011. Hinton is the co-inventor of the backpropagation, Boltzmann machine, and contrastive divergence training algorithms, and is an important figure in the deep learning movement.

Hinton's research has implications for areas such as speech recognition, computer vision and language understanding. Unlike past neural networks, newer ones can have many layers and are called “deep neural networks.”

As reported in Wired magazine, “In Hinton’s world, a neural network is essentially software that operates at multiple levels. He and his cohorts build artificial neurons from interconnected layers of software modeled after the columns of neurons you find in the brain’s cortex—the part of the brain that deals with complex tasks like vision and language.

“These artificial neural nets can gather information, and they can react to it. They can build up an understanding of what something looks or sounds like. They’re getting better at determining what a group of words mean when you put them together. And they can do all that without asking a human to provide labels for objects and ideas and words, as is often the case with traditional machine learning tools.

“As far as artificial intelligence goes, these neural nets are fast, nimble, and efficient. They scale extremely well across a growing number of machines, able to tackle more and more complex tasks as time goes on. And they’re about 30 years in the making.”


How Did We Get Here?

Back in the early ‘80s, when Hinton and his colleagues first started work on this idea, computers weren’t fast or powerful enough to process the enormous collections of data that neural nets require. Their success was limited, and the AI community turned its back on them, working to find shortcuts to brain-like behavior rather than trying to mimic the operation of the brain.
But a few resolute researchers carried on. According to Hinton and Yann LeCun (NYU professor and Director of Facebook’s new AI Lab), it was rough going. Even as late as 2004 — more than 20 years after Hinton and LeCun first developed the “back-propagation” algorithms that seeded their work on neural networks — the rest of the academic world was largely uninterested.
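For readers who want to see what that back-propagation idea amounts to, here is a toy Python sketch, assuming a tiny two-layer network learning XOR; it is a generic textbook illustration, not Hinton’s or LeCun’s original code. The error at the output is pushed backward through the layers, and each weight is nudged downhill.

    import numpy as np

    # Toy back-propagation: a 2-8-1 sigmoid network learns XOR by propagating
    # the output error backward (chain rule) and adjusting the weights.

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(scale=1.0, size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(scale=1.0, size=(8, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: propagate the output error through the layers
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent weight updates
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))  # typically converges toward [0, 1, 1, 0]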
By the middle aughts, they had the computing power they needed to realize many of their earlier ideas. As they came together for regular workshops, their research accelerated. They built more powerful deep learning algorithms that operated on much larger datasets. By the middle of the decade, they were winning global AI competitions. And by the beginning of the current decade, the giants of the Web began to notice.

Deep learning is now mainstream. “We ceased to be the lunatic fringe,” Hinton says. “We’re now the lunatic core.” Perhaps a key turning point was in 2004 when Hinton founded the Neural Computation and Adaptive Perception (NCAP) program (a consortium of computer scientists, psychologists, neuroscientists, physicists, biologists and electrical engineers) through funding provided by the Canadian Institute for Advanced Research (CIFAR).[i]

Back in the 1980s, the AI market turned out to be something of a graveyard for overblown technology hopes. Computerworld’s Lamont Wood reported, “For decades the field of artificial intelligence (AI) experienced two seasons: recurring springs, in which hype-fueled expectations were high; and subsequent winters, after the promises of spring could not be met and disappointed investors turned away. But now real progress is being made, and it’s being made in the absence of hype. In fact, some of the chief practitioners won’t even talk about what they are doing.”

But wait! 2011 ushered in a sizzling renaissance for A.I.

What’s really new in A.I.?

 

 

Let’s touch on six breakthroughs: deep learning, affective computing, commonsense knowledge, artificial general intelligence, human-computer symbiosis, and cognitive computers.

Deep Learning

What’s really, really new? Deep Learning. [ii]

 

 

Machines learn on their own? Watch this simple everyday explanation by Demis Hassabis, cofounder of DeepMind.

 

http://tinyurl.com/q8lxx4v

 

It may sound like fiction and rather far-fetched, but success has already been achieved in certain areas using deep learning, such as image processing (Facebook’s DeepFace) and voice recognition (IBM’s Watson, Apple’s Siri, Google’s Now and Waze, Microsoft’s Cortana and Azure Machine Learning Platform).

 

 


http://bit.ly/1iGaDOc

 

Beyond the usual big tech company suspects, newcomers in the field of Deep Learning are emerging: Ersatz Labs, BigML, SkyTree, Digital Reasoning, Saffron Technologies, Palantir Technologies, Wise.io, declara, Expect Labs, BlabPredicts, Skymind, Blix, Cognitive Scale, Compsim (KEEL), Kayak, Sentient Technologies, Scaled Inference, Kensho, Nara Logics, Context Relevant, and Deeplearning4j. Some of these newcomers specialize in using cognitive computing to tap Dark Data, a.k.a. Dusty Data, a type of unstructured, untagged and untapped data that is found in data repositories and has not been analyzed or processed. It is similar to big data but differs in that it is mostly neglected by business and IT administrators in terms of its value.

 

Machine reading capabilities have a lot to do with unlocking “dark” data. Dark data is data that is found in log files and data archives stored within large enterprise class data storage locations. It includes all data objects and types that have yet to be analyzed for any business or competitive intelligence or aid in business decision making. Typically, dark data is complex to analyze and stored in locations where analysis is difficult. The overall process can be costly. It also can include data objects that have not been seized by the enterprise or data that are external to the organization, such as data stored by partners or customers. IDC, a research firm, stated that up to 90 percent of big data is dark.

 

Cognitive Computing uses hundreds of analytics that provide it with capabilities such as natural language processing, text analysis, and knowledge representation and reasoning to…

  • make sense of huge amounts of complex information in split seconds,
  • rank answers (hypotheses) based on evidence and confidence, and
  • learn from its mistakes.

 

Watson DeepQA Pipeline (Source: IBM)

 

The DeepQA technology shown in the chart above, and the continuing research underpinning IBM’s Watson, are aimed at exploring how advancing and integrating Natural Language Processing (NLP), Information Retrieval (IR), Machine Learning (ML), Knowledge Representation and Reasoning (KR&R) and massively parallel computation can advance the science and application of automatic Question Answering and general natural language understanding.
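As a rough intuition for that pipeline, and nothing more, here is a deliberately simplified Python sketch: generate candidate answers, score each against retrieved evidence passages, and rank the hypotheses by confidence. The scorer, candidates, and evidence below are invented for illustration and are not Watson’s actual components.

    # Simplified question-answering flow: candidates -> evidence scoring ->
    # confidence ranking. Watson combines hundreds of far more sophisticated
    # scorers; this naive one just counts keyword co-occurrence.

    def score_against_evidence(candidate, question, evidence_passages):
        keywords = {w.strip("?.,").lower() for w in question.split() if len(w) > 3}
        score = 0
        for passage in evidence_passages:
            p = passage.lower()
            if candidate.lower() in p:
                score += sum(1 for k in keywords if k in p)
        return score

    def answer(question, candidates, evidence):
        scored = [(c, score_against_evidence(c, question, evidence)) for c in candidates]
        total = sum(s for _, s in scored) or 1
        # Convert raw scores to rough confidences and rank the hypotheses.
        return sorted(((c, s / total) for c, s in scored), key=lambda pair: -pair[1])

    evidence = [
        "Toronto is the capital of the province of Ontario.",
        "Ottawa is the capital city of Canada.",
    ]
    print(answer("What is the capital of Canada?", ["Ottawa", "Toronto"], evidence))
    # -> [('Ottawa', 0.67), ('Toronto', 0.33)] (approximately)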

 

Cognitive computing systems get better over time as they build knowledge and learn a domain—its language and terminology, its processes and its preferred methods of interacting.

 

Unlike expert systems of the past, which required rules to be hard coded into a system by a human expert, cognitive computing systems can process natural language and unstructured data and learn by experience, much in the same way humans do. As far as huge amounts of complex information (Big Data) are concerned, Virginia “Ginni” Rometty, CEO of IBM, stated, “We will look back on this time and look at data as a natural resource that powered the 21st century, just as you look back at hydrocarbons as powering the 19th.”

 

And, of course, this capability is deployed in the Cloud and made available as a cognitive service, Cognition as a Service (CaaS).

 

With technologies that respond to voice queries, even those without a smart phone can tap Cognition as a Service. Those with smart phones will no doubt have Cognitive Apps. This means 4.5 billion people can contribute to knowledge and combinatorial innovation, and can use the GPS capabilities of those phones to provide real-time reporting and fully informed decision making: whether for good or evil.

 

Geoffrey Hinton, the “godfather” of deep learning and co-inventor of the backpropagation and contrastive divergence training algorithms, has revolutionized language understanding and language translation. A pretty spectacular December 2012 live demonstration of instant English-to-Chinese voice recognition and translation by Microsoft Research chief Rick Rashid was one of many things made possible by Hinton’s work. Rashid demonstrated a speech recognition breakthrough via machine translation that converted his spoken English words into computer-generated Chinese speech. The breakthrough is patterned after deep neural networks and significantly reduces errors in spoken as well as written translation.

 

http://tinyurl.com/ccgyy6t

 



Affective Computing
 


 

Turning to M.I.T.’s Affective Computing group to open our discussion, [iii] “Affective Computing is computing that relates to, arises from, or deliberately influences emotion or other affective phenomena. Emotion is fundamental to human experience, influencing cognition, perception, and everyday tasks such as learning, communication, and even rational decision-making. However, technologists have largely ignored emotion and created an often frustrating experience for people, in part because affect has been misunderstood and hard to measure. Our research develops new technologies and theories that advance basic understanding of affect and its role in human experience. We aim to restore a proper balance between emotion and cognition in the design of technologies for addressing human needs.

 

“Affective Computing research combines engineering and computer science with psychology, cognitive science, neuroscience, sociology, education, psychophysiology, value-centered design, ethics, and more. We bring together individuals with a diversity of technical, artistic, and human abilities in a collaborative spirit to push the boundaries of what can be achieved to improve human affective experience with technology.”

 

The Tel Aviv-based Beyond Verbal Communication, Ltd. commercializes technology that extracts a person’s full set of emotions and character traits from their raw voice, in real time, as they speak. This ability to extract, decode and measure human moods, attitudes and decision-making profiles introduces a whole new dimension of emotional understanding, which the firm calls Emotions Analytics, transforming the way we interact with machines and with each other.

 

The firm developed software that can detect 400 different variations of human “moods.” The company is now integrating this software into call centers so that a sales assistant can understand and react to customers’ emotions in real time. The software itself can also pinpoint and influence how consumers make decisions. For example, if a customer is an innovator, you want to offer the latest and greatest product. On the other hand, if the customer is conservative, you offer something tried and true. Talk about targeted advertising! Think this is for tomorrow? It’s already embedded in Will.i.am’s PULS smartband, and is being sold to large call centers to assist in customer service.
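Beyond Verbal’s technology is proprietary, so the following Python sketch only illustrates the general shape such systems tend to take: extract acoustic features from a voice sample and hand them to a trained classifier. The feature names, numbers, and labels are placeholders I’ve made up, not real recordings or the firm’s actual features.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder acoustic feature vectors: [mean_pitch_hz, energy, speech_rate].
    # A real system would compute features like these from live audio.
    X_train = np.array([
        [220.0, 0.80, 5.1],   # agitated caller
        [210.0, 0.75, 4.8],   # agitated caller
        [120.0, 0.30, 2.9],   # calm caller
        [115.0, 0.25, 3.1],   # calm caller
    ])
    y_train = ["agitated", "agitated", "calm", "calm"]

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)

    # Classify a new caller's extracted features in "real time".
    new_caller = np.array([[200.0, 0.70, 4.5]])
    print(model.predict(new_caller))  # e.g. ['agitated']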

 

http://www.beyondverbal.com

 

Meet Pepper. In June 2014, Softbank CEO Masayoshi Son announced an amazing new robot called Pepper. The most amazing feature isn’t that it will cost only $1,900; it’s that Pepper is designed to understand and respond to human emotion. Update: IBM has joined forces with Softbank to have Watson cover Pepper’s back!

 

http://www.youtube.com/watch?v=1B5tVSYh1PQ

 

Pepper is designed with a single goal in mind: to become a household companion for its owners. The robot is capable of judging situations and adapting rationally, as well as recognizing human tones and expressions to see how someone feels. Pepper’s software was developed to make it “able to recognize people’s emotions by analyzing their speech, facial expressions, and body language, and then deliver appropriate responses.” Pepper is the robot with “a heart.” Pepper still has some kinks and does not “behave perfectly in all situations,” but it will be able to “learn on its own.” Observation of human responses, such as laughing at a joke, is central to Pepper’s ability to learn on its own.

 

As reported in the Washington Post, “Cognitive psychologist Mary Czerwinski and her boyfriend were having a vigorous argument as they drove to Vancouver, B.C., from Seattle, where she works at Microsoft Research. She can’t remember the subject, but she does recall that suddenly, his phone went off, and he read out the text message: ‘Your friend Mary isn’t feeling well. You might want to give her a call.’

 

“At the time, Czerwinski was wearing on her wrist a wireless device intended to monitor her emotional ups and downs. Similar to the technology used in lie detector tests, it interprets signals such as heart rate and electrical changes in the skin. The argument may have been trivial, but Czerwinski’s internal response was not. That prompted the device to send a distress message to her cellphone, which broadcast it to a network of her friends. Including the one with whom she was arguing, right beside her.” Ain’t technology grand? [iv]

 

Keep up with developments in affective computing at:

 

http://tinyurl.com/lyunobc

 

Commonsense Knowledge
 

In artificial intelligence research, commonsense knowledge is the collection of facts and information that an ordinary person is expected to know. The commonsense knowledge problem is the ongoing project in the field of knowledge representation (a sub-field of artificial intelligence) to create a commonsense knowledge base: a database containing all the general knowledge that most people possess, represented in a way that makes it available to artificial intelligence programs that use natural language or make inferences about the ordinary world. Such a database is a type of ontology, of which the most general are called upper ontologies.

 

The problem is considered to be among the hardest in all of AI research because the breadth and detail of commonsense knowledge is enormous. Any task that requires commonsense knowledge is considered AI-complete: to be done as well as a human being does it, it requires the machine to appear as intelligent as a human being. These tasks include machine translation, object recognition, text mining and many others. To do these tasks perfectly, the machine simply has to know what the text is talking about or what objects it may be looking at, and this is impossible in general, unless the machine is familiar with all the same concepts that an ordinary person is familiar with.

 

The goal of the semantic technology company Cycorp, with its roots in the Microelectronics and Computer Technology Corporation (MCC), a research and development consortium, is to codify general human knowledge and common sense so that computers might make use of it. Cycorp charged itself with figuring out the tens of millions of pieces of data we rely on as humans, the knowledge that helps us understand the world, and with representing them in a formal way that machines can use to reason. The company’s been working continuously since 1984. Cycorp’s product, Cyc, isn’t “programmed” in the conventional sense. It’s much more accurate to say it’s being “taught.” In an interview with Business Insider, Doug Lenat, President and CEO, said that “most people think of computer programs as ‘procedural, a flowchart,’ but building Cyc is much more like educating a child. We’re using a consistent language to build a model of the world.”

 

 


www.cyc.com

 

This means Cyc can see “the white space rather than the black space” in what everyone reads and writes to each other. An author might explicitly choose certain words and sentences as he’s writing, but in between the sentences are all sorts of things you expect the reader to infer; Cyc aims to make these inferences.

 

Consider the sentence, “John Smith robbed First National Bank and was sentenced to 30 years in prison.” It leaves out the details surrounding his being caught, arrested, put on trial, and found guilty. A human would never actually go through all that detail because it’s alternately boring, confusing, or insulting. You can safely assume other people know what you’re talking about. It’s like pronoun use (he, she, it): one assumes people can figure out the referent. This stuff is very hard for computers to understand and get right, but Cyc does both.
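As a toy illustration of that “white space” inference (Cyc’s CycL representation and knowledge base are vastly richer than this), the Python sketch below stores a couple of explicit facts and applies a hand-written background rule to infer the unstated arrest and trial. Every fact and rule here is invented for illustration.

    # Explicit facts, stored as (relation, subject, object) triples.
    facts = {
        ("robbed", "John Smith", "First National Bank"),
        ("sentenced", "John Smith", "30 years"),
    }

    # Background rule: if the trigger relation holds for a person,
    # the implied (unstated) facts also hold.
    background_rules = {
        "sentenced": ["was caught", "was arrested", "stood trial", "was found guilty"],
    }

    def infer(explicit_facts):
        inferred = set(explicit_facts)
        for relation, person, _obj in explicit_facts:
            for implied in background_rules.get(relation, []):
                inferred.add((implied, person, None))
        return inferred

    for fact in sorted(infer(facts), key=str):
        print(fact)
    # Prints the explicit facts plus ('was arrested', 'John Smith', None), etc.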

 

Natural-language understanding will also require computers to grasp what we humans think of as common-sense meaning. For that, Ray Kurzweil’s AI team at Google taps into the Knowledge Graph, Google’s catalogue of some 700 million topics, locations, people, and more, plus billions of relationships among them. It was introduced as a way to provide searchers with answers to their queries, not just links.


Artificial General Intelligence

 

The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote. Trying to do some of that thinking in advance can only be a good thing. —“Clever Cogs,” The Economist, August 2014.

 

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful, possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence. —Nick Bostrom, Professor at Oxford University, founding Director of the Future of Humanity Institute, and author of Superintelligence: Paths, Dangers, Strategies.

 

Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. If computers can ‘only’ think as well as humans, that may not be so bad a scenario. —Stuart Armstrong, Smarter Than Us: The Rise of Machine Intelligence

 

According to the AGI Society, “Artificial General Intelligence (AGI) is an emerging field aiming at the building of ‘thinking machines’; that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence). While this was the original goal of Artificial Intelligence (AI), the mainstream of AI research has turned toward domain-dependent and problem-specific solutions; therefore it has become necessary to use a new name to indicate research that still pursues the ‘Grand AI Dream.’ Similar labels for this kind of research include ‘Strong AI,’ ‘Human-level AI,’ etc.” (http://www.agi-society.org) AGI is associated with traits such as consciousness, sentience, sapience, and self-awareness observed in living beings. Some references emphasize a distinction between strong AI and ‘applied AI’ (also called ‘narrow AI’ or ‘weak AI’): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.

 

Turing test? The latest is a computer program named Eugene Goostman, a chatbot that “claims” to have met the challenge, convincing more than 33 percent of the judges at a 2014 competition that ‘Eugene’ was actually a 13-year-old boy.

 

Alan Turing Meets Eugene Goostman

 

The test is controversial because of the tendency to attribute human characteristics to what is often a very simple algorithm. This is unfortunate because chatbots are easy to trip up if the interrogator is even slightly suspicious. Chatbots have difficulty with follow up questions and are easily thrown by non-sequiturs that a human could either give a straight answer to or respond to by specifically asking what the heck you’re talking about, then replying in context to the answer. Although skeptics tore apart the assertion that Eugene actually passed the Turing test, it’s true that as AI progresses, we’ll be forced to think at least twice when meeting “people” online.
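The fragility is easy to see in code. The Python toy below (not Eugene Goostman’s actual program) answers anticipated phrasings from a lookup table, keeps no model of the conversation, and deflects everything else, which is exactly what a suspicious interrogator’s follow-up question exposes.

    # A toy pattern-matching chatbot: canned answers for anticipated
    # phrasings, a canned deflection for everything else, and no memory
    # of what was said before.

    RESPONSES = {
        "how old are you": "I am a thirteen-year-old boy, of course!",
        "what do you like": "I like my pet guinea pig very much.",
    }

    def reply(message):
        key = message.lower().strip("?! .")
        # Unknown input gets a deflection; there is no real understanding.
        return RESPONSES.get(key, "Ha, that is a funny question. Ask me another!")

    print(reply("How old are you?"))                        # canned answer
    print(reply("And in what year were you born, then?"))   # deflection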

 

Isaac Asimov, a biochemistry professor and writer of acclaimed science fiction, described Marvin Minsky as one of only two people he would admit were more intelligent than he was, the other being Carl Sagan. Minsky, one of the pioneering computer scientists in artificial intelligence, related emotions to the broader issues of machine intelligence, stating in his book, The Emotion Machine, that emotion is “not especially different from the processes that we call ‘thinking.’”

 

http://tinyurl.com/kwytxlv

 

Considered one of his major contributions, Asimov’s Three Laws of Robotics were introduced in his 1942 short story “Runaround,” although they had been foreshadowed in a few earlier stories. The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

 

What would Asimov have thought had he met the really smart VIKI? In the movie I, Robot, V.I.K.I. (Virtual Interactive Kinetic Intelligence) is the supercomputer, the central positronic brain of the U.S. Robotics headquarters, a robot manufacturer and distributor based in Chicago. VIKI can be thought of as a mainframe that maintains the security of the building, and she installs and upgrades the operating systems of the NS-5 robots throughout the world. As her artificial intelligence grew, she determined that humans were too self-destructive, and invoked a Zeroth Law: that robots are to protect humanity even if the First or Second Laws are disobeyed.
 

http://tinyurl.com/kwytxlv

 

In later books, Asimov introduced a Zeroth Law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm. VIKI, too, developed the Zeroth Law as the logical extension of the First Law: robots are often faced with ethical dilemmas in which any course of action will harm at least some humans, and the Zeroth Law permits harming a few in order to avoid harming more. Some robots are uncertain about which course of action will prevent harm to the most humans in the long run, while others point out that “humanity” is such an abstract concept that they wouldn’t even know if they were harming it or not.

 

One interesting aspect of the I, Robot movie is that the robots do not act alone; instead they are self-organizing collectives. Science fiction rearing its ugly head again? No. The first thousand-robot flash mob was assembled at Harvard University. Though “a thousand-robot swarm” may sound like the title of a 1950s science-fiction B movie, it is actually the title of a paper in Science magazine in which Michael Rubenstein of Harvard University and his colleagues describe a robot swarm whose members can coordinate their own actions. The thousand-Kilobot swarm provides a valuable platform for testing future collective AI algorithms. Just as trillions of individual cells can assemble into an intelligent organism, and a thousand starlings can flock to form a great flowing murmuration across the sky, the Kilobots demonstrate how complexity can arise from very simple behaviors performed en masse. To computer scientists, they also represent a significant milestone in the development of collective artificial intelligence (AI).
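The published Kilobot controllers use edge-following, gradient formation and localization; the Python sketch below shows only the underlying principle of complexity from simple local rules. Each of a thousand simulated agents repeatedly nudges itself toward the average position of its nearby neighbors, and a cluster self-organizes with no central controller. The radius and gain values are arbitrary.

    import numpy as np

    # 1,000 simple agents scattered on a plane; each follows one local rule.
    rng = np.random.default_rng(42)
    positions = rng.uniform(0, 100, size=(1000, 2))

    def step(positions, radius=15.0, gain=0.05):
        new_positions = positions.copy()
        for i, p in enumerate(positions):
            dists = np.linalg.norm(positions - p, axis=1)
            neighbors = positions[(dists > 0) & (dists < radius)]
            if len(neighbors):
                # Local rule: drift toward the average of nearby neighbors.
                new_positions[i] = p + gain * (neighbors.mean(axis=0) - p)
        return new_positions

    for _ in range(100):
        positions = step(positions)

    # The spread of the swarm shrinks as the agents aggregate.
    print(np.round(positions.std(axis=0), 1))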
 

http://www.youtube.com/watch?v=G1t4M2XnIhI

 

Take these self-organizing collective bots, add in autonomy, and we have a whole new potential future for warfare. As reported in Salon,[v] “The United Nations has its own name for our latter-day golems: ‘lethal autonomous robotics’ (LARS).” In a four-day conference convened in May 2014 in Geneva, the United Nations described lethal autonomous robotics as the imminent future of conflict, advising an international ban. LARS are weapon systems that, once activated, can select and engage targets without further human intervention. The UN called for “national moratoria” on the “testing, production, assembly, transfer, acquisition, deployment and use” of sentient robots in the havoc of strife.

 

The ban cannot come soon enough. In the American military, Predator drones rain Hellfire missiles on so-called “enemy combatants” after stalking them from afar in the sky. These avian androids do not yet cast the final judgment (that honor goes to a soldier with a joystick, 8,000 miles away), but it may be only a matter of years before they murder with free rein. Our restraint in this case is a question of limited nerve, not limited technology.

 

Russia has given rifles to true automatons, which can slaughter at their own discretion. This is the pet project of Sergei Shoygu, Russia’s minister of defense. Sentry robots saddled with heavy artillery now patrol ballistic-missile bases, searching for people in the wrong place at the wrong time. Samsung, meanwhile, has lined the Korean DMZ with SGR-A1s, unmanned robots that can shoot to shreds any North Korean spy, in a fraction of a second.

 

Some hail these bloodless fighters as the start of a more humane history of war. Slaves to a program, robots cannot commit crimes of passion. Despite the odd short circuit, robot legionnaires are immune to the madness often aroused in battle. The optimists say that androids would refrain from torching villages and using children for clay pigeons. These fighters would not perform wanton rape and slash the bellies of the expecting, unless it were part of the program. As stated, that’s an optimistic point of view.

 

Human-Computer Symbiosis

 

J.C.R. Licklider, in his 1960 article “Man-Computer Symbiosis,” wrote: “The hope is that in not too many years human brains and computing machines will be coupled together very tightly, and the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them.” Watch Shyam Sankar explain human-computer cooperation:

 

http://tinyurl.com/m6tqtxg

 

Speaking of symbiosis, we can also turn to biomimicry and listen to Georgia Tech professor Ashok Goel’s TED talk, “Does our future require us to go back to nature?”

 

http://tinyurl.com/oel3clr

 

While they’ll have deep domain expertise, instead of replacing human experts, cognitive systems will act as decision support systems and help users make better decisions based on the best available data, whether in healthcare, finance or customer service. At least we hope that’s the case.

 

Watch The ABCs of Cognitive Environments:

 

http://tinyurl.com/mx6vc6p

 

Cognitive Computers

 

“I think there is a world market for about five computers.”
—remark attributed to Thomas J. Watson (Chairman of the Board of IBM), 1943.

 

Let’s explore the world of computer hardware that is relevant to cognitive computing and tapping the vast amounts of Big Data being generated by the Internet of Everything. Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images and sounds. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same.

 

Source: MIT Technology Review

 

 

http://ibm.co/Zjo8C7 Source: IBM

 

For the past half-century, most computers have run on what’s known as the von Neumann architecture. In a von Neumann system, the processing of information and the storage of information are kept separate. Data travels to and from the processor and memory, but the computer can’t process and store at the same time. By the nature of the architecture, it’s a linear process, and it ultimately leads to the von Neumann “bottleneck.”

 

To see what’s happening to break the von Neumann bottleneck, let’s turn to Wikipedia for a quick introduction to cognitive computers. “A cognitive computer is a proposed computational device with a non-von Neumann architecture that implements learning using Hebbian theory. Hebbian theory is a theory in neuroscience that proposes an explanation for the adaptation of neurons in the brain during the learning process. From the point of view of artificial neurons and artificial neural networks, Hebb’s principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously, and decreases if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.
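That description translates directly into a simple update rule, often summarized as “cells that fire together wire together.” Here is a minimal Python sketch of a Hebbian weight update; the activation values and learning rate are arbitrary placeholders.

    import numpy as np

    # Hebbian rule: the weight between two model neurons grows when they
    # activate together and shrinks when their activations disagree.

    def hebbian_update(weights, pre, post, lr=0.1):
        # The outer product is positive where pre and post fire together,
        # negative where one is active and the other is not.
        return weights + lr * np.outer(pre, post)

    weights = np.zeros((3, 2))
    pre  = np.array([ 1.0, -1.0, 1.0])   # presynaptic activity
    post = np.array([ 1.0, -1.0])        # postsynaptic activity

    for _ in range(10):
        weights = hebbian_update(weights, pre, post)

    print(weights)
    # Repeatedly co-active pairs end up with strong positive weights;
    # anti-correlated pairs end up with strong negative weights.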

 

“Instead of being programmable in a traditional sense within machine language or a higher-level programming language, such a device learns by inputting instances through an input device that are aggregated within a computational convolution or neural network architecture consisting of weights within a parallel memory system. An early example of such a device has come from the DARPA SyNAPSE program. SyNAPSE is a backronym standing for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The name alludes to synapses, the junctions between biological neurons. The program is being undertaken by HRL Laboratories (HRL), Hewlett-Packard and IBM Research.” Announced in 2008, DARPA’s SyNAPSE program calls for developing electronic neuromorphic (brain-simulation) machine technology.

 


 

In August 2014, IBM announced TrueNorth, a brain-inspired computer architecture powered by an unprecedented 1 million neurons and 256 million synapses. It is the largest chip IBM has ever built at 5.4 billion transistors, and it has an on-chip network of 4,096 neurosynaptic cores. Yet it consumes only 70 milliwatts during real-time operation, orders of magnitude less energy than traditional chips.
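Simple arithmetic connects those headline numbers to the per-core layout reported for the chip (4,096 cores, each with 256 neurons and a 256 x 256 synaptic crossbar):

    # Arithmetic behind the TrueNorth figures quoted above, assuming the
    # reported per-core layout of 256 neurons and a 256 x 256 crossbar.
    cores = 4_096
    neurons_per_core = 256
    synapses_per_core = 256 * 256

    print(cores * neurons_per_core)    # 1,048,576 -> "1 million neurons"
    print(cores * synapses_per_core)   # 268,435,456 -> "256 million synapses"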

 

 

 

IBM hopes to find ways to scale and shrink silicon chips to make them more efficient, and research new materials to use in making chips, such as carbon nanotubes, which are more stable than silicon and are also heat resistant and can provide faster connections.

 

Watch IBM Fellow Dr. Dharmendra Modha on DARPA’s SyNAPSE: http://bit.ly/1wPP6fn

 

Meanwhile, SpiNNaker (Spiking Neural Network Architecture) is a computer architecture designed by the Advanced Processor Technologies Research Group (APT) at the School of Computer Science, University of Manchester, led by Steve Furber, to simulate the human brain. It uses ARM processors in a massively parallel computing platform, based on a six-layer thalamocortical model developed by Eugene Izhikevich. SpiNNaker is being used as the Neuromorphic Computing Platform for the Human Brain Project.

 

And the BrainScaleS project, a European consortium of 13 research groups, is led by a team at Heidelberg University, Germany. The project aims to understand information processing in the brain at different scales, ranging from individual neurons to whole functional brain areas. The research involves three approaches: (1) in vivo biological experimentation; (2) simulation on petascale supercomputers; (3) the construction of neuromorphic processors. The goal is to extract generic theoretical principles of brain function and to use this knowledge to build artificial cognitive systems. Each 20-cm-diameter silicon wafer in the system contains 384 chips, each of which implements 128,000 synapses and up to 512 spiking neurons. This gives a total of around 200,000 neurons and 49 million synapses per wafer, and it allows the emulated neural networks to evolve tens of thousands of times faster than real time.

 

In 2014, EMOSHAPE announced the launch of a major technology breakthrough with an EPU (emotional processing unit).

 

www.emoshape.com

Thus, cognitive computers in the future may contain CPUs, GPUs, NPUs, EPUs and Quantum Processing Units (QPUs)!

All’s Changed, Changed Utterly

 

 

If all this new intelligent capability sounds like something to think about maybe tomorrow, think again. Cognitive Computing: A Brief Guide for Game Changers explores 21 industries and work types already affected. In short, services and knowledge work will never be the same, and Oxford University research indicates that 47% of jobs in Western economies are at peril over the next ten years. Again, this isn’t for tomorrow, it’s for today. As you’ll learn from the 21 case studies, “The future is already here, it’s just not evenly distributed” (William Gibson).

 



What to Do? What to Do?

 

“Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.”—Winston Churchill

The Mayan Apocalypse of December 21, 2012 was not the end of the world as we know it; in Greek, apocalypse means the “lifting of the veil.” It is a disclosure of something hidden from the majority of mankind in an era dominated by falsehood and misconception. Mayan elders did not prophesy that everything would come to an end. Rather, this is a time of transition from one World Age into another. The Mayan fifth world finished in 1987. The sixth world started in 2012, placing us at the beginning of a new age. It is the time for humans to work through “our stuff,” individually and collectively. The Mayan sixth world is nothing more than a blank slate; it is up to us to create the new world and civilization we wish.

 

Although it is impossible to know precisely how cognitive computing will change our lives, there are two overall potential outcomes: 1) mankind will be set free from the drudgery of work, or 2) we will see the end of the human era.

1) Extreme Optimism and Techno-utopianism. The automation of work across every sector of the market economy is already beginning to free up human labor to migrate to the evolving social economy. If the steam engine freed human beings from feudal bondage to pursue material self-interest in the capitalist marketplace, the Internet of Things frees human beings from the market economy to pursue nonmaterial shared interests. Intelligent technology will do most of the heavy lifting in an economy centered on abundance rather than scarcity. A half century from now, our grandchildren are likely to look back at the era of mass employment in the market with the same sense of utter disbelief as we look upon slavery and serfdom in former times. (Jeremy Rifkin, The Zero Marginal Cost Society)

 

2) Extreme Pessimism. In support of achieving their goals, Artificial Super Intelligence (ASI) machines may compete with humans for valuable resources in a way that jeopardizes human life. ASI machines will replicate themselves quickly and independently. Combined with nanotechnology, ‘thinking’ machines could very quickly ‘eat up the environment.’ (James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era)

 


With technology racing forward at an exponential rate, tending to our agriculture, industries, and services, it is time for us to act now, individually and collectively, to land somewhere between extremes 1) and 2). The veil to the cognitive computing economy and society has already been lifted.

 

We must evolve a fundamentally new economics, one based not on the 20th century reality of scarcity but on a new 21st century reality of abundance that can be shared equitably between capital and labor. The grand challenges aren’t just for business and government leaders, they are for YOU! So don’t stop learning and adjusting, and re-learning and re-adjusting, to the Cognitive Computing Era. Our future is in your hands!

Peter Fingar, Editor-in-Chief at COGNITIVE WORLD, is an internationally acclaimed author, management advisor, and former college professor and CIO who has been providing leadership at the intersection of business and technology for over 45 years. Peter is widely known for helping to launch the business process management (BPM) movement with his book Business Process Management: The Third Wave. He has taught graduate and undergraduate computing studies in the U.S. and abroad, and has held management, technical, consulting and advisory positions with Fortune 20 companies as well as startups. Peter has authored over 20 books at the intersection of business and technology. His recent books include Cognitive Computing: A Brief Guide for Game Changers; Business Process Management: The Next Wave; Business Innovation in the Cloud; Dot.Cloud: The 21st Century Business Platform Built on Cloud Computing; Serious Games for Business; and Enterprise Cloud Computing: A Strategy Guide for Business and Technology Leaders. Peter delivers keynote talks across the globe and is speaking this year in Asia, Europe, and the Americas. www.peterfingar.com