COGNITIVE WORLD


Best Practices for Insuring AI Algorithms



When my family and I finished binge-watching 'The Queen's Gambit' on Netflix a couple of weeks ago, an AI system analyzed my viewing pattern and suggested I watch 'The Crown' next.

Today, AI systems are gleaning all kinds of insights about people to do a myriad of things, from making movie recommendations to driving your autonomous car. Some of these decisions do not have major consequences. If I do not like 'The Crown', I am not going to lose my livelihood or my life. Other decisions made by AI systems do have major consequences, and thus carry high risk if a bad decision is made. To help enterprises address this high risk, a new industry is being created around insuring companies that deploy high-risk AI models.

What is being insured? Algorithms versus Data versus IT Infrastructure

When assessing the risk of an AI model, one needs to consider the various components that ensure the model behaves as expected. This includes not only the model and its training data, but also the IT infrastructure where the model is deployed. You could have a perfect algorithm trained with unbiased data that runs on a completely hackable infrastructure, just as you could have a great infrastructure and AI algorithm that is trained with bad data. Ensuring that these risk components are identified individually is key.

This article proposes a three-step best-practice approach for insurance companies that specifically insure AI algorithms and their data (as opposed to IT/tech infrastructure), so that those companies ultimately generate the most value for clients and the most trust in the market.

Step 1: Assessing the Model

Having a methodical way of assessing the risk attributed to the AI algorithm is key for any insurer. The risks that need to be considered include:

  1. Is the AI fair? - ensuring that the model is not biased against any protected class of people. Such bias is often due to biased data used to train the AI

  2. Is it easy to understand? - can various personas, both inside and outside the organization, understand why an AI decision was made?

  3. Can anyone tamper with it? - is it possible to fool the AI system into making an unintended decision?

  4. Is it accountable? - does the development of the AI system conform to the organization's governance standards?

Today, tech companies have donated open-source tools and/or sell commercial tools that address these four risks.

As an example, IBM's AI Fairness 360 is an open-source Python toolkit for detecting and mitigating bias in datasets and ML models. The toolkit provides various metrics from the scientific community to quantify the level of bias in a dataset or model, allowing stakeholders to choose the metric appropriate for their use case and to determine what levels are acceptable. The website demo illustrates how five different metrics can be computed. DISCLOSURE: the authors serve in roles at IBM concerned with trust in AI.
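As a rough illustration (a minimal sketch, assuming the open-source aif360 and pandas packages are installed; the toy loan data, column names, and group encodings below are purely illustrative), a fairness metric can be computed in a few lines:

    # Minimal sketch: computing group-fairness metrics with AI Fairness 360.
    # The toy applicant data and column names are hypothetical.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Hypothetical loan decisions: 1 = approved, 0 = declined.
    df = pd.DataFrame({
        "sex":      [1, 1, 1, 0, 0, 0, 1, 0],   # 1 = privileged group, 0 = unprivileged
        "income":   [60, 80, 55, 52, 75, 40, 90, 45],
        "approved": [1, 1, 0, 0, 1, 0, 1, 0],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["approved"],
        protected_attribute_names=["sex"],
        favorable_label=1,
        unfavorable_label=0,
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        privileged_groups=[{"sex": 1}],
        unprivileged_groups=[{"sex": 0}],
    )

    # Statistical parity difference: P(favorable | unprivileged) - P(favorable | privileged).
    # Values near 0 indicate parity; stakeholders decide what range is acceptable.
    print("Statistical parity difference:", metric.statistical_parity_difference())
    print("Disparate impact ratio:       ", metric.disparate_impact())

It is then up to the enterprise, not the toolkit, to decide whether the resulting numbers fall within an acceptable range for the use case.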

AI Explainability 360 is an open-source toolkit for explaining ML models and data. It is critical that the various stakeholders/personas fundamentally understand why decisions were made. A data scientist may wish to know whether they can deploy the model with confidence, a loan officer might want to know why her client's loan application was declined by the AI, and a bank customer may want to know what she can do to increase her chances of being approved for a loan.
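As a generic sketch of the kind of per-decision explanation a loan officer might be shown (this uses plain scikit-learn coefficient contributions rather than any specific AI Explainability 360 algorithm; the features and training data are hypothetical):

    # Generic sketch of a local explanation for a declined loan application.
    # Not an AI Explainability 360 API example; features and data are made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["income", "debt_ratio", "years_employed"]
    X_train = np.array([[60, 0.4, 5], [30, 0.8, 1], [90, 0.2, 10],
                        [45, 0.7, 2], [75, 0.3, 8], [25, 0.9, 0]])
    y_train = np.array([1, 0, 1, 0, 1, 0])   # 1 = approved, 0 = declined

    model = LogisticRegression().fit(X_train, y_train)

    applicant = np.array([35, 0.85, 1])
    prob = model.predict_proba(applicant.reshape(1, -1))[0, 1]

    # For a linear model, coefficient * feature value approximates each feature's
    # contribution to the decision score, a simple form of local explanation.
    contributions = model.coef_[0] * applicant
    for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
        print(f"{name:15s} contribution: {c:+.2f}")
    print(f"Approval probability: {prob:.2f}")

Different stakeholders would consume this differently: the data scientist checks that the contributions are sensible, while the loan officer and customer see which factors pushed the decision toward a decline.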

Adversarial Robustness Toolbox (ART) provides tools for developers and researchers to evaluate machine learning models and applications against adversarial attacks and to defend them.
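For instance, a simple robustness check might look like this (a minimal sketch, assuming the adversarial-robustness-toolbox and scikit-learn packages are installed; the model and data are synthetic toy examples, not a production workflow):

    # Minimal sketch: probing a model with an evasion attack using ART.
    # The classifier and data are toy examples for illustration only.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import FastGradientMethod

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    model = LogisticRegression().fit(X, y)

    # Wrap the trained model so ART attacks can query it.
    classifier = SklearnClassifier(model=model)

    # Generate adversarially perturbed inputs and compare accuracy before and after.
    attack = FastGradientMethod(estimator=classifier, eps=0.5)
    X_adv = attack.generate(x=X)

    print("Clean accuracy:      ", model.score(X, y))
    print("Adversarial accuracy:", model.score(X_adv, y))

A large drop in accuracy on the perturbed inputs is a signal that the model could be fooled into unintended decisions.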

IBM's AI FactSheets both enable AI insurers and consumers to better understand, trust, and assess AI technology and are key to establishing governance across the AI lifecycle. They help automate the documentation of that lifecycle, specify and enforce actionable AI lifecycle policies, and make information accessible to all stakeholders.
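The exact contents are up to the organization, but as a sketch of the kind of lifecycle facts a factsheet might capture for each model version (the field names below are hypothetical and do not represent an official FactSheets schema):

    # Hypothetical factsheet record; field names are illustrative only.
    model_factsheet = {
        "model_name": "loan_approval_v3",
        "intended_use": "Pre-screening of consumer loan applications",
        "training_data": {
            "source": "internal_applications_2018_2020",
            "last_refreshed": "2020-11-01",
            "known_limitations": "Under-represents applicants under 25",
        },
        "fairness": {
            "protected_attributes": ["sex", "age"],
            "statistical_parity_difference": -0.04,
            "acceptable_range": [-0.1, 0.1],
        },
        "robustness": {"adversarial_accuracy": 0.91},
        "approvals": ["data_science_lead", "risk_officer", "legal_review"],
    }

A record like this, kept current across the lifecycle, is exactly the kind of artifact an insurer would want to see during assessment and renewal.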

For clients that have live, deployed instances of AI algorithms, IBM's Watson OpenScale solution has a set of tools that addresses these four risks.

A Multi-Stakeholder, Multi-Disciplinary Approach

The key to using tools like these effectively is to ensure that you include all of the relevant stakeholders as part of the assessment effort. For example, the metrics in the AI Fairness 360 toolkit require someone to identify the protected classes and privileged groups to be analyzed. Is that really the expertise of a data scientist? Once a bias value is obtained, someone needs to determine what level of bias is acceptable. These are not questions that data scientists are normally trained to answer by themselves. Bringing the other stakeholders to the table will ensure these decisions reflect the values of the enterprise.

Step 2: Mitigation

Let's assume that your insurance company has scored an AI algorithm as high risk with respect to any of the four classifications above, and it will thus be expensive to insure. Being able to offer consulting services that teach those clients how to move to a less risky state has the potential to be a strong revenue stream. By adopting a partnership strategy, insurance companies can tap into this lucrative mitigation process.

This is a dramatic change from more traditional insurance models. One might opine that by lessening risk through consulting, an insurer would eat into its own profits. The contrary is in fact true. We have already seen an example of this during COVID-19: during the pandemic lockdowns, fewer people drove their cars, which meant fewer accidents on the roads, and as a result many insurers made significantly more money because they did not have to pay out as many claims. The same can be said of AI: the more trustworthy the insured AI implementations, the fewer claims the insurer will have to pay out.

Additionally, with a more precise understanding of mitigation strategies, an insurer can make the most of dynamic premium pricing, giving better rates to those clients that follow more of its advice and counsel.

Obviously, tech consulting is not the traditional role of insurers. There is an opportunity for digital insurers to form partnerships with responsible tech companies that have this kind of acumen and history, in order to provide this value-add for clients.

Step 3: Monitoring

Once the AI has been assessed (and the risk possibly mitigated), the insuring company must have a way of monitoring changes to the model after it is deployed. Here again, dynamic pricing can be employed to best effect. For a client whose AI model is not retrained with refreshed datasets once deployed, the premium could be cheaper than for models that must be retrained with new, and possibly riskier, data.

For AI models that require regular infusions of new training data, this monitoring should ultimately be a live, subscription-based service in which monitoring is constant (not just once or twice a year), offering the insurer a dashboard to detect changes in training data that would affect the risk.
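One simple way such a dashboard might flag a change worth re-pricing (a sketch using a two-sample Kolmogorov-Smirnov test from SciPy on a single feature; the feature, data, and threshold are illustrative and not tied to any specific product):

    # Sketch: flagging drift between baseline training data and newly collected data.
    # Uses a per-feature two-sample Kolmogorov-Smirnov test; threshold is illustrative.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    baseline_income = rng.normal(55, 10, size=1000)   # distribution at underwriting time
    incoming_income = rng.normal(48, 12, size=1000)   # same feature in fresh training data

    stat, p_value = ks_2samp(baseline_income, incoming_income)
    if p_value < 0.01:
        print(f"Drift detected (KS statistic {stat:.3f}); flag for re-assessment and re-pricing.")
    else:
        print("No significant drift detected.")

When drift is detected, the insurer can trigger a re-assessment of the model and adjust the premium accordingly.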

Partnering

By partnering with a tech leader with a proven track record in trusted systems, an insurance company can scale its AI strategy in a responsible way that generates value for all stakeholders. Find a tech leader that has the automated governance tools, the consulting acumen, a responsible culture that prizes diversity and inclusivity, and the global, human governance chops to help your company be a responsible steward of AI.

Insuring AI could very well be the last brick needed to make true adoption and acceleration of AI across industries a reality.


Author Bios

Phaedra BOINODIRIS, FRSA has focused on inclusion in technology since 1999. She holds five patents, serves as an executive consultant on the Trust in AI team at IBM and is currently pursuing her PhD in AI and Ethics. [Twitter: @Innov8game]

Michael HIND is a Distinguished Research Staff Member in the IBM Research AI department in Yorktown Heights, New York. His current research passion is the area of Trusted AI, focusing on governance, transparency, explainability, and fairness of AI systems.