Artificial Intelligence is likely to change our civilization as much as, or more than, any technology that’s come before, even writing.
-- Miles Brundage and Joanna Bryson, Future Tense
The smart machine era will be the most disruptive in the history of IT.
-- Gartner, The Disruptive Era of Smart Machines is Upon Us
Without question, Cognitive Computing is a game-changer for businesses across every industry.
-- Accenture, Turning Cognitive Computing into Business Value, Today!
The era of cognitive systems is dawning, building on today’s computer programming era. All machines, for now, require programming, and by definition a program cannot handle scenarios it was never written to anticipate. Allowing for alternate outcomes requires going up a level: creating a self-learning Artificial Intelligence (AI) system. Drawing on biomimicry and neuroscience, Cognitive Computing does exactly that, taking computing concepts to a whole new level. Once-futuristic capabilities are becoming mainstream. Let’s take a peek at the three eras of computing.
Fast forward to 2011 when IBM’s Watson won Jeopardy! Google recently made a $500 million acquisition of DeepMind. Facebook recently hired NYU professor Yann LeCun, a respected pioneer in AI. Microsoft has more than 65 PhD-level researchers working on deep learning. China’s Baidu search company hired Stanford University’s AI Professor Andrew Ng. All this has a lot of people talking about deep learning. While artificial intelligence has been around for years (John McCarthy coined the term in 1955), “deep learning” is now considered cutting-edge AI that represents an evolution over primitive neural networks.[i]
Taking a step back to set the foundation for this discussion, let me review a few of these terms. As human beings, we have complex neural networks in our brains that allow most of us to master rudimentary language and motor skills within the first 24 months of life, with only minimal guidance from our caregivers. Our senses supply the data that allows our brains to learn. As we become adults, our learning capacity grows while the speed at which we learn decreases. We have adapted to this limitation by creating assistive machines: for over 100 years, machines have been programmed with instructions for tabulating and calculating, helping us work with greater speed and accuracy.
Today, machines can be taught to learn much faster than humans. In the field of machine learning, systems learn from data, much as we humans do. This learning takes place in artificial neural networks, designed based on studies of the human neurological and sensory systems. Artificial neural nets make computations based on input data, then adapt and learn. In machine learning research, when a system builds high-level abstractions of its data through layers of non-linear processing, it is said to be engaged in deep learning, the prime directive of current advances in AI.
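To make the idea concrete, here is a minimal sketch (not from any particular product or the book itself) of a single artificial neuron in plain Python: it weights its inputs, sums them, applies a non-linear activation, and adjusts its weights to reduce the error between its output and a desired target. The function names (`neuron`, `learn_step`) and the learning rate are illustrative choices, and the update shown is the classic delta rule, a simplification of what real systems use.

```python
import math

def sigmoid(x):
    # Non-linear activation: squashes any input into the range (0, 1),
    # loosely mimicking a biological neuron's firing response.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, passed through the activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

def learn_step(inputs, weights, bias, target, lr=0.5):
    # Delta rule: nudge each weight in the direction that shrinks the
    # error between the neuron's output and the desired target.
    out = neuron(inputs, weights, bias)
    grad = (target - out) * out * (1.0 - out)  # error scaled by sigmoid slope
    new_weights = [w + lr * grad * i for w, i in zip(weights, inputs)]
    new_bias = bias + lr * grad
    return new_weights, new_bias

# "Learning from data": teach the neuron that input [1.0, 0.0]
# should produce an output near 1.
weights, bias = [0.0, 0.0], 0.0
for _ in range(1000):
    weights, bias = learn_step([1.0, 0.0], weights, bias, target=1.0)
```

After training, the neuron's output for `[1.0, 0.0]` has moved from 0.5 (its untrained guess) toward 1.0: it has adapted to the data rather than following a fixed program.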
Cognitive computing, or self-learning AI, combines the best of human and machine learning and essentially augments us.
When we associate names with current computer technology, no doubt “Steve Jobs” or “Bill Gates” come to mind. But the new name will likely be a guy from the University of Toronto, a hotbed of deep learning scientists. Meet Geoffrey Everest Hinton, great-great-grandson of George Boole, the guy who gave us the mathematics that underpins computers.
Hinton is a British-born computer scientist and psychologist, most noted for his work on artificial neural networks. He now works for Google part time, joining AI pioneer and futurist Ray Kurzweil and Andrew Ng, the Stanford University professor who set up Google’s neural network team in 2011. Hinton is a co-inventor of the backpropagation, Boltzmann machine, and contrastive divergence training algorithms, and is an important figure in the deep learning movement.
Hinton's research has implications for areas such as speech recognition, computer vision and language understanding. Unlike past neural networks, newer ones can have many layers and are called “deep neural networks.”
As reported in Wired magazine, “In Hinton’s world, a neural network is essentially software that operates at multiple levels. He and his cohorts build artificial neurons from interconnected layers of software modeled after the columns of neurons you find in the brain’s cortex—the part of the brain that deals with complex tasks like vision and language.
“These artificial neural nets can gather information, and they can react to it. They can build up an understanding of what something looks or sounds like. They’re getting better at determining what a group of words mean when you put them together. And they can do all that without asking a human to provide labels for objects and ideas and words, as is often the case with traditional machine learning tools.
“As far as artificial intelligence goes, these neural nets are fast, nimble, and efficient. They scale extremely well across a growing number of machines, able to tackle more and more complex tasks as time goes on. And they’re about 30 years in the making.”
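A minimal sketch of such a multi-layer network can be written in plain Python, trained with the backpropagation algorithm Hinton helped develop. This is an illustrative toy, not the book’s own example: it assumes sigmoid activations, a single four-unit hidden layer, and the XOR function as the learning task (the classic problem a single layer cannot solve but a hidden layer can).

```python
import math
import random

random.seed(0)  # fixed seed so the illustrative run is reproducible

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

H = 4  # hidden units; the hidden layer is what lets the net learn XOR
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [0.0] * H
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = 0.0

def forward(x):
    # Layer 1: each hidden unit weighs both inputs and fires non-linearly.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w_h, b_h)]
    # Layer 2: the output unit combines the hidden activations.
    o = sigmoid(sum(wi * hi for wi, hi in zip(w_o, h)) + b_o)
    return h, o

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

lr = 0.5
for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        # Backpropagation: the output error, scaled by each unit's
        # sigmoid slope, flows backward to assign blame to every weight.
        d_o = (t - o) * o * (1 - o)
        for j in range(H):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])  # use w_o before updating it
            w_o[j] += lr * d_o * h[j]
            w_h[j][0] += lr * d_h * x[0]
            w_h[j][1] += lr * d_h * x[1]
            b_h[j] += lr * d_h
        b_o += lr * d_o
```

The key idea is the backward pass: no one programs the weights, yet by repeatedly propagating its own errors backward through the layers, the network discovers a solution on its own. Stacking many more such layers is what turns this toy into a “deep” neural network.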
How Did We Get Here?
Back in the early ‘80s, when Hinton and his colleagues first started work on this idea, computers weren’t fast or powerful enough to process the enormous collections of data that neural nets require. Their success was limited, and the AI community turned its back on them, working to find shortcuts to brain-like behavior rather than trying to mimic the operation of the brain.
But a few resolute researchers carried on. According to Hinton and Yann LeCun (NYU professor and Director of Facebook’s new AI Lab), it was rough going. Even as late as 2004 — more than 20 years after Hinton and LeCun first developed the “back-propagation” algorithms that seeded their work on neural networks — the rest of the academic world was largely uninterested.
By the middle aughts, they had the computing power they needed to realize many of their earlier ideas. As they came together for regular workshops, their research accelerated. They built more powerful deep learning algorithms that operated on much larger datasets. By the end of the decade, they were winning global AI competitions. And by the beginning of the current decade, the giants of the Web began to notice.
Deep learning is now mainstream. “We ceased to be the lunatic fringe,” Hinton says. “We’re now the lunatic core.” Perhaps a key turning point was in 2004 when Hinton founded the Neural Computation and Adaptive Perception (NCAP) program (a consortium of computer scientists, psychologists, neuroscientists, physicists, biologists and electrical engineers) through funding provided by the Canadian Institute for Advanced Research (CIFAR).[i]
Back in the 1980s, the AI market turned out to be something of a graveyard for overblown technology hopes. Computerworld’s Lamont Wood reported, “For decades the field of artificial intelligence (AI) experienced two seasons: recurring springs, in which hype-fueled expectations were high; and subsequent winters, after the promises of spring could not be met and disappointed investors turned away. But now real progress is being made, and it’s being made in the absence of hype. In fact, some of the chief practitioners won’t even talk about what they are doing.”
But wait! 2011 ushered in a sizzling renaissance for AI.
Cognitive Digital Transformation
By Jim Sinur
Based on excerpts from his new book Digital Transformation
As Shawn Dubravac wrote in Digital Destiny, “Humanity is entering a new era. Beyond the mere acquisition of ever more digital devices with ever more incredible functionality, the immediate future will usher in the all-digital lifestyle and an ‘Internet of Everything.’ The transformation of our lives will be breathtaking.”
Digitization is all the rage, and it’s hard to find an organization that is actively resisting its siren call. We are in the early stages of the digital age, and it is becoming clear that no organization or individual is going to come out the same. This means that organizations and individuals can act in several ways. They can resist and fight the digital era with all their might. They can go with the flow, selecting the things they deem good in the digital era and rejecting those that don’t seem to fit at the moment. Or they can creatively embrace this new age and really take advantage of it. While I am in that last camp, I believe that smart organizations and people will try to educate themselves about the changing face of the digital trend, and that looks to be a long journey.
Like any journey, knowing where you want to end up and charting the best path there is a best practice. The digital journey puts a twist on this best practice: you can only plan one leg of the journey at a time. What you learn in the first and subsequent legs determines where you go next and where you ultimately end up. Digital Transformation differs from both the “fail fast and fail often” and the “static plan and manage” approaches in that it combines aspects of each. There are planned targets and paths, but the effort exercises innovation through experimentation along the way.