
Debunking The Myths And Reality Of Artificial Intelligence

Intelligence should be "distributed" where "knowledge" is created and "decisions" are made. Image credit: Depositphotos, enhanced by CogWorld.

Source: COGNITIVE WORLD on FORBES

Introduction

Years ago, it was hard to find anyone willing to have a serious discussion about Artificial Intelligence (AI) outside academic institutions. Today, nearly everyone talks about AI. As with any major new technology trend, the new wave of making AI and intelligent systems a reality is creating curiosity and enthusiasm. People are jumping on the bandwagon, adding not only great ideas but also, in many cases, false promises and misleading opinions.

Built on the work of giant thinkers and academic researchers, AI is being adopted by industries and further developed in academia around the globe at a faster rate than anyone had expected. This acceleration is driven by the strong belief that our biological limitations are increasingly becoming a major obstacle to creating smart systems and machines that work with us and put our biological cognitive capabilities to better use in achieving higher goals. The result is an overwhelming wave of demand and investment across industries to apply AI technologies to solve real-world problems, create smarter machines and build new businesses.

AI has overcome many obstacles over the last decades, mainly on the academic side. It now faces one of its biggest challenges yet: adoption in real-world industry scenarios amid the myths and misunderstandings surrounding it. With confusing and conflicting messages about what AI can and can't do, it is difficult for industry leaders to distinguish fact from fiction in the increasingly crowded and noisy ecosystem of enthusiasts, platform vendors and service providers. Once the dust settles and things become clear, the truth about AI will endure, and winners and losers will eventually be declared.

The challenge for industry leaders is to form a realistic view of what AI can and can't do for their business, and to keep updating that view, so that they can lead their organizations to apply AI in the right way to solve real-world problems and transform their businesses. Academics and AI practitioners, in turn, have a responsibility to step out of their bubble and engage with industry experts so that the academic foundations of AI develop in a way that makes real-world adoption faster, more rewarding and more responsible.

The current “messy” state of AI adoption in industries

Over the last few years, business leaders from nearly every industry have been trying to understand the new magical technology called Artificial Intelligence (AI) and how their businesses can benefit from it. Unfortunately, most implementations of AI-powered solutions so far haven't gone beyond Proofs of Concept (PoCs) in the form of scattered Machine Learning (ML) algorithms with limited scope. While this level of AI adoption wastes many opportunities and resources, it has at least helped convince business and IT leaders that Artificial Intelligence can drive transformative and relevant innovation.

Many PoC projects today use simple statistical methods to add basic prediction or classification capabilities to analytics solutions and call the result AI. This is still analytics, or at best advanced analytics, and it still requires extensive human intervention to interpret the outcome and make a decision or take an action.

As business processes and operational conditions continuously change, the newly generated data and the shifting business factors reduce the precision and value such algorithms can offer, over time rendering them useless or even leading to dangerous decisions.

Such an approach and its outcomes are another part of the frustrating reality that confuses business leaders and hinders the appropriate adoption of sophisticated AI technologies to gain valuable results.

The current approach of squeezing a few Machine Learning (ML) algorithms into some business areas for quick gains is itself a risk, and it might set back AI adoption across industries, triggering another "AI winter", this time on the industry side rather than the academic side. Applying even mature AI technologies in this way might add some value, but it could also introduce dangerous new "artificial stupidity" into the organization, with catastrophic consequences.

In the years ahead, companies cannot afford to continue accepting confusion and hesitation about what AI can and can't do, how it can be integrated with other technologies to create intelligent solutions or machines, and where to apply it appropriately.

Over the next few sections, I'll highlight some of the current myths and misunderstandings overshadowing the reality of AI and hindering its proper adoption. I'll also share some ideas on how to overcome them to accelerate the real-world adoption of AI and reduce the risks to businesses and societies.

The curse of scattered ML algorithms

Over the last few years, some motivated business leaders started AI initiatives on their own and within their business areas, using open source ML libraries and focusing on a few key decisions to optimize. Those efforts were usually not part of an organized, company-wide plan. While they added some value and helped different teams gain their first experience in using AI capabilities to solve business problems, they resulted in scattered ML algorithms on the loose across organizations. Unfortunately, such scattered ML algorithms don't fully unlock the value hidden in the data, nor do they tap into the valuable business knowledge organizations have. Furthermore, they add potential risks to companies.

Some of the main risks that scattered ML algorithms bring are:

  • Algorithms might be trained on a limited set of features and data, resulting in wrong or even dangerous business decisions inside or outside the originating business area.

  • While optimizing local operational decisions, such algorithms might unintentionally negatively affect other business areas or even global operations.

  • Such individual algorithms can be easily manipulated and misled into making wrong decisions by internal or external actors, adding a major new category of cybersecurity risk.

  • Training some machine learning algorithms can require expensive computing power, adding high costs to small business units. In many cases this has caused business units to abandon AI entirely, based on the false impression that adoption is inherently expensive.

Usually, most if not all business functional and operational units are directly connected. The data they generate, the knowledge they create and the rules they adhere to are shared and interdependent. AI can detect interdependencies and relationships in huge volumes of data and features that humans usually can't see. This could be used to create a strong data and knowledge platform enabling cross-organization, distributed AI systems, converting the scattered nature of data, knowledge and decision-making from a weakness into a major strength.

Organizations must act quickly to consolidate all AI initiatives and the ML algorithms on the loose, and move them onto a standard, enterprise-grade, secure AI platform as part of an overall AI adoption strategy.

This would enable distributed yet interconnected AI solutions that offer intelligence where decisions need to be made, with maximum benefit and transformational power for the business. It would also accelerate successful AI adoption, reduce adoption costs, increase ROI and reduce internal and external risks to the company.

Artificial Intelligence adoption or Intelligent Enterprise creation?

Until AI systems can make such decisions for us, companies must decide whether they want to merely adopt AI or ultimately create an intelligent enterprise, which takes more than AI adoption to achieve. The current debate around Robotic Process Automation (RPA) and whether it is part of AI is taking the discussion about AI adoption off track. RPA is not part of AI, at least by the academic definition, and cannot be, despite all the loud and misleading marketing voices. Current RPA technology is essentially simple scripts that, in many cases, merely automate existing business processes accumulated over the years and designed mainly with only humans in mind.

If done right, RPA and Intelligent Process Automation (IPA) would be an opportunity to redesign and automate the underlying processes for the new workforce, in which humans and machines collaborate intelligently and more closely together.

Business leaders should plan to create an intelligent enterprise that offers intelligent products and services wrapped in intelligent processes, designed to leverage the biological intelligence of humans and the artificial intelligence of machines together, not merely to automate repetitive processes to cut costs or to confirm decisions they could have made without new technologies.

Among the basic capabilities of an intelligent enterprise: its products, solutions and services can intelligently use the collective knowledge they and humans have created, continuously learn to do things better and to do new things, and intelligently react to ever-changing environments and demands.

Given these capabilities of an intelligent enterprise and the fast-growing complexity of the internal and external business environment, too much traditional human intervention will increasingly become a major bottleneck in achieving the goal of an intelligent enterprise. This is due to our limited biological capabilities, down to simple actions such as finger and eye movements.

Therefore, organizations need to stop wasting time debating RPA and instead build a strategy and roadmap toward the intelligent enterprise, which should include, among other things:

  • An overall vision, definition and roadmap for the intelligent enterprise, including products, solutions and services, that dynamically address the why, what, how and when.

  • A roadmap for new Intelligent Processes designed for the Man + Machine workforce working more closely together.

  • A strategy that goes beyond AI and ML algorithms to identify other technologies essential for end-to-end intelligent solutions and products, such as new sensing technologies, intelligent IoT gateways, edge computing hardware and high-performance computing (HPC), including quantum computing.

  • A plan to create the required cultural and operational shift in the way we build, use, operate and maintain such intelligent systems and solutions.

  • A plan to create an innovation ecosystem, as an integral part of the new business, to envision and deliver new intelligent services to the business and its joint clients.

  • A new definition of Human-Machine Interfaces (HMI), given the new UI/UX enabled by AI technologies such as Natural Language Processing and Understanding (NLP/NLU) and advanced computer vision, accelerated by Augmented Reality/Extended Reality technologies.

AI technologies are not yet ready for industrial adoption. Is it a Myth?

Current AI has benefited from decades of serious, high-quality academic research. However, one of the major weaknesses of current AI systems is their lack of real-life experience, which is needed to make them reliably useful for all of us. When AI systems fail to give the right answer early on, this doesn't usually mean that the underlying algorithms or mathematical models are not mature enough.

Like humans, AI algorithms need real-world experience, which might include data created through the algorithms' own trial and error in the real world.

Therefore, it would be unfair and technically wrong to judge AI solutions in their early stages, while they still have little or no experience. This is one of the most common mistakes made today, and it usually leads to frustration and misunderstanding about the maturity of the underlying AI models. We have to give AI-powered solutions time to learn and evaluate them carefully before deploying them in the enterprise.

For instance, machine learning capabilities that have gained enough real-world experience, such as computer vision (CV) and Natural Language Processing (NLP), are the most mature and widely adopted parts of AI today. They are the cognitive engines behind many industrial and consumer applications and products, and so far they have had the most positive impact on business and our personal lives.

This is a key difference between traditional analytics and AI solutions. In analytics, software vendors build solutions without the actual data. In AI solutions, by contrast, we use the problem description, actual data, domain knowledge and a set of specific goals to create, train and verify ML algorithms. No data, no algorithms: in AI, there is no turnkey solution. This is a mindset shift that must happen immediately to avoid this misunderstanding.

Such a shift in mindset, combined with new principles for designing distributed intelligent systems, such as multi-agent, distributed and interconnected cognitive systems, will play a major role in deciding whether an organization's efforts to leverage AI capabilities succeed or just add more frustration, wasted opportunities and new risks.

Additionally, one of the key capabilities AI systems must have by design is the ability to continuously learn and to dynamically leverage effective learning approaches over time. Selecting the right initial architecture and the right continuous learning approaches, supervised, unsupervised, reinforcement learning or a mix of them, is very important for successful AI adoption. Lifelong Continual Learning (LLCL) is one of the main and most promising AI research areas today. However, it remains a challenge for current machine learning and neural network models, since the continual acquisition of new information from non-stationary data sources generally leads to catastrophic forgetting of previously learned knowledge or an abrupt drop in precision.
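To make the catastrophic-forgetting problem concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. It is not from the article and not a benchmark: the dataset, model and training loop are assumptions chosen only to show how an incrementally trained classifier loses accuracy on an earlier task once its data stream shifts to a new one.

```python
# Minimal sketch of catastrophic forgetting: a linear classifier trained
# incrementally on one group of digit classes, then fed only a second group,
# loses most of its accuracy on the first. All choices are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

old_task = y_tr < 5          # "previously learned knowledge": digits 0-4
new_task = y_tr >= 5         # "new information": digits 5-9
old_test = y_te < 5

clf = SGDClassifier(random_state=0)
classes = np.unique(y)

# Phase 1: learn the old task only.
for _ in range(20):
    clf.partial_fit(X_tr[old_task], y_tr[old_task], classes=classes)
acc_before = clf.score(X_te[old_test], y_te[old_test])

# Phase 2: the data source becomes non-stationary; only the new task is seen.
for _ in range(20):
    clf.partial_fit(X_tr[new_task], y_tr[new_task])
acc_after = clf.score(X_te[old_test], y_te[old_test])

print(f"Old-task accuracy before the shift: {acc_before:.2f}")
print(f"Old-task accuracy after the shift:  {acc_after:.2f}")  # typically collapses
```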

While there is a lot to be done to enable AI systems to continuously learn and evolve with their environments, most of the current AI platforms from startups and established vendors provide powerful tools to make this happen.

What makes or breaks AI adoption in business is not the academic methods and algorithms, or the technology platforms built around them, but the way we adopt, architect and integrate them into business solutions and industrial products.

The overhyped promise of Data

As of today, it is a challenge to train, test and verify current machine learning algorithms, especially deep learning, because they require a lot of good data. Moreover, the last few years have shown that in many cases businesses don't have enough historical data of the quality and quantity required by current ML approaches. Also, the data we have today was generated and collected with humans, and their biological strengths and weaknesses, in mind.

Even where we have enough data, we must invest massive effort in areas such as data engineering, data analysis, feature engineering, feature selection, predictive modeling, model selection and verification before we have the initial algorithms. Additionally, manually adapting the design and continuously refining the internal architecture of first-generation algorithms is tedious, repetitive work that requires massive computing power, especially if we want to solve serious problems with constant or increasing precision.
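To give a rough sense of what that effort looks like in code, here is a minimal, hypothetical scikit-learn sketch chaining a few of those steps: scaling, feature selection, modeling and cross-validated verification. The dataset, feature count and model are assumptions for illustration; real projects involve far more work at every step.

```python
# Minimal pipeline sketch: data preparation, feature selection, predictive
# modeling and verification chained together. Not a production recipe.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),                   # data engineering: normalize features
    ("select", SelectKBest(f_classif, k=10)),      # feature selection: keep the 10 strongest
    ("model", LogisticRegression(max_iter=1000)),  # predictive modeling
])

# Verification: 5-fold cross-validation rather than trusting a single split.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```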

Current "predictive analytics" solutions use simple statistical models to predict something based on the available historical data. They assume that the future will follow the past in a simple, straightforward way, an assumption which in many cases has been proven wrong. For example, the failure history of industrial equipment, its causes, nature and consequences, changes as workers become better trained, have access to more information and use better tools to test, repair and maintain that same equipment. This makes parts of the "expensive" historical data misleading.
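A small synthetic sketch of that problem, often called concept drift, is shown below: a model learns failure patterns under "old" operating conditions, the underlying relationship then shifts, and the model's performance on current conditions degrades. Every feature, threshold and number here is invented purely for illustration.

```python
# Synthetic concept-drift sketch: the relationship between sensor readings
# and failures changes over time, so a model fitted to history degrades on
# current conditions. All variables and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_data(n, temp_threshold):
    """Simulated sensor readings with failures driven by a temperature threshold."""
    temperature = rng.normal(70, 10, n)
    vibration = rng.normal(0.5, 0.2, n)
    failure = ((temperature > temp_threshold) | (vibration > 0.9)).astype(int)
    return np.column_stack([temperature, vibration]), failure

# Historical data: failures used to appear above roughly 80 degrees.
X_old, y_old = make_data(5000, temp_threshold=80)
# Today: better training and tools shifted the onset of failures to roughly 90 degrees.
X_new, y_new = make_data(5000, temp_threshold=90)

model = LogisticRegression(max_iter=1000).fit(X_old, y_old)
print(f"Accuracy under past conditions:    {model.score(X_old, y_old):.2f}")
print(f"Accuracy under current conditions: {model.score(X_new, y_new):.2f}")  # noticeably lower
```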

Our limited, and occasionally biased, understanding and interpretation of past events, and of the context in which those events happened, reduce the ability of AI solutions to accurately predict complex events that occur in a reality we can't fully understand.

Additionally, scientists are increasingly warning against the misuse of statistical inference, statistical significance and p-values (probability values), which in some cases can lead to catastrophic consequences. Some scientists even call for the concept of statistical significance to be abandoned, especially where high-precision prediction is required. In the end, as seen in many cases, the outcome is often nothing more than the reaffirmation of predetermined decisions.

The human brain is initially trained with little data, then extended, verified and continuously taught with large amounts of data gathered over years of life experience.

Today, ML algorithms are trained on large amounts of data and tested on less (roughly a 70%/30% split). Therefore, instead of just collecting and analyzing massive amounts of data, AI systems should start with simple tasks and be able to continuously learn, add capabilities, expand their knowledge, improve their reasoning and adapt to new environments through the data they collect or synthetically generate.

One of the key lessons from using AI to solve complex problems in recent years is that we need new AI system architectures that rely on less data and less human supervision. Therefore, AI practitioners and academic researchers are going beyond traditional machine learning architectures and trying to create new ML algorithms that can generate and comprehend their own self-created or acquired data.

We see a lot of progress in AI/ML areas such as new biologically inspired mathematical approaches, more efficient neural network architectures, Generative Adversarial Networks (GANs), Multi-Agent Deep Reinforcement Learning (MADRL) and genetic and evolutionary algorithms. One common goal is to reduce AI's dependency on massive amounts of data and knowledge created by humans. Also, the tremendous advances in AI-specialized hardware such as GPUs, TPUs and FPGAs are making such new approaches possible.

Also, incorporating more efficient knowledge representation techniques, such as evolutionary and co-evolutionary modular multi-tasking approaches, even with current ML algorithms, will help organizations uncover more knowledge from the same or less data. New reasoning approaches inspired by the human brain are emerging fast, enabling us to build systems that over time can reason like humans but without our biological limitations, enhancing the precision and speed of machine decisions and avoiding the catastrophic decisions that are highly possible in the absence of machine reasoning.

While human-generated data will continue to be important, especially at this early stage of adopting AI in industry, the right mix of AI techniques and architectures will, over time, require less data and leverage more of a company's collective knowledge, saving time and effort while creating safer and more efficient AI-powered business systems.

AI solutions are secure by design, really?

We all hoped that intelligent solutions would be able to defend themselves in ways traditional software solutions cannot. Technically, it is possible for AI-powered systems to detect hostile behavior and, in some cases, proactively take preemptive measures to defend themselves. Today, AI is being used effectively to enhance traditional cybersecurity solutions, enabling them to identify or predict attacks early and recommend preemptive strikes on adversarial systems.

Given their strong reliance on human-generated data, ML algorithms, even those with deep neural network architectures, can also be easily misled into making wrong or even dangerous decisions.

Usually, hackers access traditional software systems to steal data, or they hack industrial control systems and misguide them into taking the wrong action(s). The core of AI systems, however, consists mainly of algorithms rather than data. This has created, among some people, the illusion that such systems are secure by nature because there is nothing inside to steal. But instead of stealing data, cyber attackers can feed AI systems wrong data to manipulate their ability to make the right decisions. For example, attackers could access Electronic Medical Records (EMR) to add or remove medical conditions in MRI scans, leading ML algorithms to the wrong diagnosis. The same could happen to financial data or to the operational data of critical equipment in a Nuclear Power Plant (NPP) or a smart grid.
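As a concrete, hypothetical illustration of such data poisoning (a toy sketch, not an attack recipe and not from the article), the code below quietly relabels part of one class in the training data and shows how the resulting model starts missing exactly the cases an attacker would want hidden. The dataset, model and poisoning rate are assumptions chosen only to demonstrate the effect.

```python
# Minimal data-poisoning sketch: flipping part of one class's training labels
# degrades the model's ability to detect that class. Purely illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # class 0 = malignant, class 1 = benign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def train_and_report(name, labels):
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, labels)
    detected = recall_score(y_te, model.predict(X_te), pos_label=0)
    print(f"{name}: share of malignant cases detected = {detected:.2f}")

rng = np.random.default_rng(0)
poisoned = y_tr.copy()
malignant = np.where(poisoned == 0)[0]
flip = rng.choice(malignant, size=int(0.4 * len(malignant)), replace=False)
poisoned[flip] = 1            # attacker quietly relabels 40% of malignant records as benign

train_and_report("Clean training data   ", y_tr)
train_and_report("Poisoned training data", poisoned)   # detection rate typically drops
```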

One of the most advanced and promising features of some AI-powered solutions is their ability to continuously learn from their own behavior, from the way we use them to solve problems or make decisions, and from the external data sources we grant them access to. Yet this very feature makes AI solutions more vulnerable to new types of cyber-attacks, such as influencing their behavior so that they generate the wrong learning data (experience), leading to wrong or biased decisions in the future. This is like exposing humans to carefully chosen experiences with the goal of steering their behavior in a certain direction.

So-called "adversarial examples" are inputs given to AI systems with the intention of misleading them into misclassification and wrong decisions. This new type of hacking of intelligent digital systems creates a major security vulnerability even in state-of-the-art deep learning systems. It amounts to misleading the brain, or disabling the backbone, of an organization's collective intelligence and even its critical physical assets. This can be more catastrophic than conventional attacks and might cause irreparable damage, in some cases threatening the very existence of companies.

Organizations should be aware of this new cyber threat and consider new approaches and tools for designing, implementing and securing AI-powered digital and physical systems as well as the systems they interact with internally and externally.

AI systems can’t be biased. A huge misunderstanding!

One of the most frequently discussed topics nowadays is AI ethics and bias. Because we train today's machine learning algorithms on data generated by humans according to rules we created, this data directly reflects the way we think and approach things, and it determines the behavior of each algorithm.

In many cases, such as diagnosing medical images, predicting equipment failure or optimizing production throughput, ethics and social bias might not seem to be part of the problem being solved. This creates another misunderstanding, that AI bias is irrelevant in such cases, leading many to wrongly believe that the algorithms are not biased. Many companies are unaware that, even in such cases, ML algorithms might represent a high risk and even a legal burden for the organization.

While it is very important to eliminate social bias from the data we use to train ML algorithms and to verify their behavior, companies must understand that there are different types of AI bias and be aware of them.

For instance, we usually use the technical data of specific equipment, combined with other operational and environmental data, to train ML algorithms that proactively predict equipment failure or guide us on how to increase performance. In some situations, because of many known and unknown variables, the algorithms become biased toward predicting too many or too few failures, causing major disruption to the business.
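To make one such non-social bias tangible, here is a small synthetic sketch: when failures are rare in the training data, a naively trained model drifts toward predicting "no failure" and misses most real failures, while a simple reweighting changes its behavior. Every variable and number below is an assumption invented only to show the effect.

```python
# Synthetic sketch of bias from imbalanced failure data: a naive model misses
# most rare failures; a class-weighted model catches far more of them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 20000
X = rng.normal(size=(n, 4))                     # pretend sensor features
# Failures are rare and loosely driven by the first two sensors.
risk = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] + X[:, 1] - 4.0)))
y = (rng.random(n) < risk).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

models = {
    "naive         ": LogisticRegression(max_iter=1000),
    "class-weighted": LogisticRegression(max_iter=1000, class_weight="balanced"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Overall accuracy hides the problem; recall on real failures reveals it.
    print(f"{name}: accuracy={model.score(X_te, y_te):.2f}, "
          f"failure recall={recall_score(y_te, pred):.2f}")
```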

AI bias should be defined and identified based on the problems we’re trying to solve or decisions we’re trying to make.

We should develop new methods and tools that expose biases using adequate human and machine reasoning based on relevant business and technical knowledge. Ethics, accountability and governance of AI systems are among the most important responsibilities of leadership in the era of AI, and leaders have to proactively engage to inform themselves, provide guidance and raise awareness across the organization.

Until we have AI regulations or government-appointed regulatory bodies, companies must ensure their AI systems operate under at least the same standards and regulations they use to run their business every day. This is especially crucial for serious AI applications that span digital and physical systems.

Conclusion

Companies must carefully create a comprehensive, dynamic AI strategy and immediately start adequate execution initiatives to get ready for the new era of intelligent things powered by AI. This strategy toward the intelligent enterprise will help create the Man + Machine workforce of the future and reimagine the overall business. This is urgently required before new intelligent products, solutions or services from far smaller disruptors become a real threat not only to their businesses but also to their very existence.

This will require business and IT leadership to have a realistic and accurate view of what AI can and can't do now and in the near future. Also, having someone with robust academic and practical experience in AI leading such initiatives would help organizations cut through the hype and avoid costly misunderstandings and misleading myths.

Intelligence can't be centralized; it should be distributed and not limited to a few functional areas. A hybrid, balanced approach of embedded, edge and centralized intelligence should be considered up front to guarantee well-orchestrated growth of the organization's collective intelligence across all teams, functional areas, products and services.

Most importantly, the adoption of AI and related technologies on the way to the intelligent enterprise will bring the more productive, augmented human and the intelligent machine closer together, creating a powerful workforce of the future. Companies should understand that humans and machines will continue to be the two pillars of the new workforce, and should plan wisely to leverage their combined strengths while understanding their respective biological and artificial limitations.


Note: Special thanks to my colleagues Karthik P. Rao and Guruprasad B. Gudi for their great support in putting this together.

Ahmed El Adl is a technology thought leader and change agent with a proven track record of achievements in envisioning and creating new technologies. Ahmed's current focus is the advancement of AI and its adoption in industries. Another focus area is the Cognitive Digital Twin - CDT. Ahmed has a PhD in computer science and robotics. Visit Ahmed on LinkedIn.