Scaling AI: The 4 challenges you’ll face
Source: CogWorld Think Tank on VentureBeat
Organizations of all sizes are embracing AI as a transformative technology to power their digital transformation journeys. Still, the challenges of operationalizing AI at scale can seem insurmountable, and a large number of projects fail.
I’ve worked in big data and AI with several organizations and have seen clear patterns in why AI efforts flounder after an enthusiastic start. These are large, established organizations that have done an amazing job of garnering support from their board, C-suite, business stakeholders, and even customers to embark on AI-powered transformation journeys. They have most likely set up some form of a Center of Excellence (CoE) for AI, made key hires in both leadership and technical roles, and demonstrated the promise of AI through a few machine learning projects at a limited scale. Then they move to scale a project into production, and they get stuck.
The reasons why scaling AI is so challenging seem to fall under four themes: customization, data, talent, and trust.
Customization. Solving problems with machine learning (ML) to drive business outcomes requires customization. Most of the models for solving AI problems — ML, deep learning (DL), and natural language processing (NLP), for example — are open sourced or freely available, and the models themselves aren’t the critical factor in solving production-grade problems. Your team will need to customize and train each model to fit your specific problem, data, and domain. Then you need to optimize the model parameters so they align with your business’s target outcomes/key performance indicators (KPIs). Then, to deploy your models, you need to integrate them into your existing IT architecture. Building AI systems from scratch for every problem and domain thus requires a ton of customization work. If you opt instead to buy off-the-shelf solutions that are not optimized for your specific needs, you compromise on performance and outcomes. Both paths have their advantages and disadvantages, but it’s important to recognize that AI requires customization for every project and every business problem, and that a key part of operationalizing AI is making the customization process as efficient as possible.
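To make the KPI-alignment step concrete, here is a minimal sketch, in Python with scikit-learn on synthetic data, of tuning a generic classifier's decision threshold against a business metric rather than raw accuracy. The dollar values and the cost model are invented for illustration, not taken from any real deployment:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced dataset standing in for a real business problem.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An off-the-shelf model, trained with default settings.
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

def business_value(threshold):
    """Hypothetical KPI: each caught positive is worth $100,
    each false alarm costs $20."""
    preds = probs >= threshold
    tp = np.sum(preds & (y_te == 1))
    fp = np.sum(preds & (y_te == 0))
    return 100 * tp - 20 * fp

# Pick the threshold that maximizes the business KPI, not accuracy.
best = max(np.linspace(0.05, 0.95, 19), key=business_value)
print(f"KPI-optimal threshold: {best:.2f} (vs. the default 0.5)")
```

The threshold sweep is the simplest possible form of this alignment; in a real project the model itself, its features, and its operating point would all be tuned against the organization's actual KPIs.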
Data. I’ve seen a number of organizations fail at AI because they underestimated the effort needed to harness, prepare, and access the data that drives these projects at production scale; the work becomes a rabbit hole. In most such cases, they realize they don’t have standardized data definitions or proper data management, or they struggle with distributed data sources, and this kicks off a multi-year transformation journey. While plenty of big data projects exist to access, organize, and curate these disparate datasets, they are not sufficient to provide a scalable solution to this problem. You also need advanced machine learning techniques that work with smaller, noisier data sets in production to remove this blocker on the path from pilot to production.
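One family of such techniques is data augmentation: generating additional training examples from the few labeled rows you have. Here is a minimal sketch in plain NumPy on synthetic data; the Gaussian jitter is a stand-in for whatever transform suits your domain:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(X, y, copies=5, noise=0.05):
    """Create jittered copies of each labeled example. Gaussian noise is
    a basic tabular augmentation; real projects would use
    domain-appropriate transforms (rotations for images, synonym swaps
    for text, and so on)."""
    X_aug, y_aug = [X], [y]
    for _ in range(copies):
        X_aug.append(X + rng.normal(scale=noise, size=X.shape))
        y_aug.append(y)
    return np.vstack(X_aug), np.concatenate(y_aug)

X_small = rng.normal(size=(40, 8))          # only 40 labeled rows
y_small = (X_small[:, 0] > 0).astype(int)   # synthetic labels
X_big, y_big = augment(X_small, y_small)
print(X_big.shape)  # six times the original rows
```

Augmentation does not replace proper data management, but it is one of the levers that lets a pilot trained on a small, curated sample survive contact with production.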
Talent. In most organizations where I’ve seen AI projects fail to scale, leaders hired ML engineers and data scientists and then realized it was nearly impossible to find someone who combines statistical (ML) skills, domain expertise (in both the business domain and the process domain), and software development experience. So, using classic organizational design, they try to work around the gap. While you will eventually build a formidable in-house capability if you can retain and develop this highly coveted talent, the need to ramp up a team delays your value realization with AI and limits your ability to innovate fast enough. I call this “AI throughput”: the number of AI projects an organization can put into production. It takes years for these teams to start producing real results. More successful organizations have taken a holistic ecosystem approach to scaling talent, augmenting internal AI teams with external partners to design a faster pilot-to-production path and improve AI throughput.
Trust. People across the world have mixed feelings toward AI and fear it may make their jobs obsolete or irrelevant. So designing AI systems that emphasize human-machine collaboration is foundational to scaling AI in these organizations. Although full automation through AI may be the solution for many business challenges, the most impactful, high-alpha processes are still the ones humans run. For large-scale adoption of AI across an organization, you need buy-in, support, and integration across multiple business processes, IT systems, and stakeholder workflows. Implementing AI in business processes also introduces a variety of risks. One risk is to business performance in cases where the business impact of the AI system is unclear, costing organizations time, resources, and opportunity. Another is maintaining compliance with internal audit and regulatory requirements, an area that is evolving fast. A third risk is reputational: concerns that biased decisions, or decisions made by black-box algorithms, can negatively impact stakeholder experiences. This is a critical obstacle that even the most advanced teams will run into when trying to scale AI across their organizations.
Overcoming the challenges I’ve outlined here requires more than just technology and toolsets. It takes a combination of organizational processes, the ability to bring different teams along, and active collaboration with a curated ecosystem of internal and external partners. The $15.7 trillion opportunity with AI is in front of us, but it requires us to come together as an industry to solve these key challenges. I will be exploring these areas in future posts with a focus on sharing some best practices.
Ganesh Padmanabhan is VP, Global Business Development & Strategic Partnerships at BeyondMinds. He is also a member of the Cognitive World Think Tank on enterprise AI.