Cognitive Computing: Five “I wish I would haves” to Avoid

Source: Analytics Magazine

By Paul Roma

Let’s talk about cognitive computing. After all, everybody else is, right? In fact, there’s so much chatter about cognitive, in the worlds of both academia and business, that your instincts probably tell you to ignore it as much as possible – to let everything cool down a bit so that we can recognize it for what it really is.

But there’s just one problem with that approach.

Cognitive computing is already truly huge, and it’s only expected to get bigger. I don’t say that lightly. I’ve seen plenty of next-big-thing flameouts, just like anyone who’s been engaged in the analytics world for years. But today, when it comes to cognitive, in my work with businesses around the world, I’m seeing signals that remind me not of the flameouts, but of the truly monumental advances made in technology. Remember that moment when we all realized that there were as many mobile devices in the world as there were people? Or, going way back, when it became clear that the Internet was not solely the domain of government researchers, and had serious commercial applications? That’s the kind of moment we’re experiencing right now with cognitive computing.

Think about it. Computing capabilities are unbelievably strong today. There’s a greater discipline in algorithms than we’ve ever seen. Data storage costs, what, around 3 cents per gigabyte today? Put it all together, and you realize that whatever we’ve done in cognitive computing today will soon be considered a quaint early indicator of the seismic changes that follow. We are heading up an exponential change curve.

Because cognitive computing is already a burgeoning reality among the businesses I work with every day, I’ve already observed a few seriously risky views on it. Why are they risky? Because if they take hold, they’re likely to lead many to say, “I wish I would have” in the not-so-distant future. And in this case, the implications of getting it wrong, or simply not getting on board fast enough, could be serious. Don’t let yourself get caught saying these things a year from now:

“I wish I would have known what cognitive can really do.” What if you realized too late that cognitive can enable the quantification of historically qualitative domains? Or that it could amplify your analysis of existing problems? You’d miss out on a ton of potential. Cognitive capabilities should be able to combine hard facts – how much something costs, how long it took to manufacture, when it was delivered – with adjacent interaction data from social media, customer service, surveys, you name it, all in order to generate scores for sentiment, buying patterns, bundling patterns, context for change and more.

When it comes to amplifying analysis, consider that text, voice and video data (for example) can cast sentiment analysis, behavior patterns and human decision-making in an entirely new – and more accurate – light.
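To make the blending described above concrete, here is a toy sketch – the word lists, the order record, and the scoring rule are all invented for illustration; real systems would use trained models – of joining hard transactional facts with a naive lexicon-based sentiment score:

```python
# Toy illustration: blend hard facts (cost, delivery time) with adjacent
# interaction data (a customer review) scored for sentiment.
# The word lists and data are invented for this sketch.

POSITIVE = {"great", "fast", "love", "reliable"}
NEGATIVE = {"late", "broken", "slow", "disappointed"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1] from counts of positive/negative words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

# Hard facts joined with interaction data in a single record.
order = {"cost_usd": 42.50, "days_to_deliver": 3,
         "review": "Fast delivery and a great, reliable product"}
order["sentiment"] = sentiment_score(order["review"])
print(order["sentiment"])  # 1.0 -- every matched word is positive
```

The point isn’t the scoring mechanics; it’s that qualitative signals end up as numbers sitting alongside cost and delivery data, where they can feed downstream analysis.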

“I wish I would have known cognitive wasn’t 10 years away.” Cognitive computing is here now. Let me repeat: It has arrived. It’s not 10 years away. It’s not even 10 days away. It’s now. Maybe you’re not doing anything with it today – and maybe that’s OK. But at the same time, you have to account for its presence. That’s where I see clients at risk of missing the boat – they assume that because they don’t need a tangible cognitive strategy today, they can just sit this one out until they do. In reality, they should be planning for it. Can you imagine knowing that virtually everyone in your organization owned a mobile phone, while continuing with business plans as if mobile phones basically didn’t exist? Or ignoring web and cloud capabilities even as their possibilities became clear? Of course not. But that’s exactly what some are doing when it comes to cognitive.

“I wish I would have known cognitive wasn’t an ‘edge’ technology.” What do you do when a new technology appears on the scene quickly? You likely deploy it in the margins – give it a test run in some pesky part of the business, see how it works, expand, move to another area, and so on. Wash, rinse, repeat. That’s the definition of “edge” technology. Unfortunately, an edge mindset is exactly wrong in this environment. Cognitive computing is no more of an edge technology than mobility or cloud computing – it is (or should be) a pervasive technology embedded at the core of the business.

Today, I’m seeing this realization slowly take hold among CTOs and others who have approached cognitive as a stand-alone investment – off-property, handled by a third party, just a piece of a larger machine. They’re increasingly concerned about the potential for a competitor to emerge with cognitive computing capabilities driving its ability to provide better service, higher performance, smarter supply chains, you name it. The good news is that we’re still at the front end of cognitive; there’s time to change direction. But that window will probably close faster than anyone expects.

“I wish I would have known how to start – and when to scale.” Train and learn, or build and develop? Specific purpose or general purpose? These are the types of getting-started questions that will ultimately help shape an organization’s entire approach to cognitive. They should not be taken lightly, which is why many feel a sort of paralysis at the moment it’s time to get underway.

Begin with perceived-accuracy problems, not technical-accuracy ones. These are problems rooted in opinion rather than technical or statistical correctness. From there, move on to apprentice roles – the next scoring mechanism. These are typically defined by guiding principles and rules of thumb rather than fact-based rules set in stone.

Rule-based problems are also ripe targets for cognitive efforts. In this case, the problems are knowable, but the task of actually maintaining the rules is too hard, so it’s possible to enable computing models to learn the rules along the way.
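To make the “learn the rules along the way” idea concrete, here is a minimal sketch – the data and the review-flag scenario are invented for illustration – of a one-rule learner in the spirit of the classic 1R approach: instead of hand-maintaining a cutoff, the model derives the best threshold from labeled examples:

```python
# Minimal sketch: learn a rule from data rather than hand-coding it.
# A 1R-style learner finds the numeric threshold on one feature that
# best separates labeled examples. All data is invented for illustration.

def learn_threshold(examples):
    """examples: list of (value, label) pairs, label True/False.
    Returns the threshold t minimizing errors for the rule
    'value >= t -> True' over the training examples."""
    values = sorted(v for v, _ in examples)
    candidates = values + [values[-1] + 1]  # include "always False" cutoff
    best_t, best_err = None, len(examples) + 1
    for t in candidates:
        err = sum((v >= t) != label for v, label in examples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Invented training data: order value vs. "needs manual review" flag.
data = [(120, True), (95, True), (40, False), (30, False), (80, True), (55, False)]
t = learn_threshold(data)
print(t)  # the rule becomes: "order value >= t -> flag for review"
```

The maintenance burden shifts from curating the rule itself to curating the labeled examples – which is exactly the trade cognitive approaches make at scale.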

As you progress down these paths, extending into new domains and complexities can make the models more useful in a cognitive environment. In a more traditional system, meanwhile, adding new dimensions means a rewrite. Cognitive systems are capable of creating new relationships and learning how the next domain correlates to older ones, which strengthens the model.

“I wish we would have planned for future changes.” Cognitive capabilities are typically delivered in the form of flexible technologies that can adapt and learn, but they tend to learn in “straight lines,” following the example of the humans who guide them. After all, we are creatures of habit, and our habits are linear. In a cognitive environment, it’s possible to break this paradigm by combining adjacent data domains to give your models perspective, in much the same way that you might guide a child learning about the world. This is important because effectively designing for the future requires a multi-dimensional approach – one that incorporates a multitude of perspectives. This sort of future-proofing isn’t simply a challenge of computer science or engineering. It demands a fuller depth of understanding and context – the ability to view a situation through many dimensions at the same time. In many cases, these are dimensions that humans can’t even see. And that is the beauty of what’s next in cognitive.

Are these the only questions we may wish we had better answers for in the future? Almost certainly not. Expect more twists and turns in the road to cognitive ahead. But at the same time, these are all legitimate, known issues. It’s just that they’re clearer to some leaders than to others – a fact that will become painfully obvious as cognitive continues down its restless, surprising, high-stakes path.



Paul Roma, chief analytics officer of U.S. Deloitte, directs the company’s analytics offerings across all businesses.