
10 Lessons Learned for Assessing and Mitigating Unexpected Patterns in AI Models (Part 1 of 3)

Image credit: IBM's online AI Maturity Assessment tool

You procured or developed an AI model to help your organization. You have heard about the potential for unintended harm if the model's output is not tested for bias, so you want to take the responsible steps of auditing the model and then mitigating any potential harm. Excellent!

Based on the work we at IBM are doing with customers, here are some steps and best practices for your consideration.

1. Adopt frameworks for systemic empathy

Organizations need to adopt frameworks that help them systemically empathize with groups and individuals who may fall through the cracks. Systemic empathy offers a step-by-step approach for guiding people to consider unintended effects on those who are not generally considered when AI models are developed and deployed. This may in turn lead to the AI project being halted altogether (because it is not a good use of AI) or redesigned to mitigate harm. We use design thinking as one means to do this. These design thinking sessions offer true epiphany moments for the organizations we work with, where clients realize they need a more holistic approach to their AI roadmap. Sometimes the problems with a model lie in its deployment (e.g., who uses it, how people experience it, how the output feeds downstream processes and decisions); sometimes they lie in the very nature of what it is meant to do and not do (from conceptualization, to design, to model build and train). These frameworks help teams catch potential harm at every point of the AI lifecycle, optimally well before code is written. Remember, ethics does not happen at the end of a project. It starts at the beginning.

Tech Ethics by Design from IBM Design Fundamentals

2. Prioritize empowering end users

AI models work best on a level playing field, where all stakeholders are fully informed about the presence and use of a model and are empowered to participate in and contribute to its proper usage. We recommend being hyper-focused on empowering end users: give them the knowledge and autonomy they need to decide whether to trust an AI model, and agency over the use of their data in it. Also consider, when AI is used to automate, what humans can do with their newly liberated time; a holistic perspective researches and plans for that, and this aspect is not discussed nearly enough. Viewing AI deployment through this lens opens up conversations across stakeholders, shines a light on how various data sources are being used, and contributes to a shared view of the beneficial nature of the technology. Keep in mind that the groups that could be harmed may not be what are typically labeled protected classes of people (for example, employees with low tenure); a simple check of this kind is sketched below. To do this well, it is important to have people who truly understand the nature of the data on the working team.
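To make that concrete, here is a minimal sketch, not IBM's methodology, of how a team might test a model's output for adverse impact on a group that is not a legally protected class. The column names, the toy data, and the four-fifths threshold are illustrative assumptions.

```python
# Hypothetical check for adverse impact on low-tenure employees.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of favorable-outcome rates: flagged group vs. everyone else."""
    flagged_rate = df.loc[df[group_col], outcome_col].mean()
    rest_rate = df.loc[~df[group_col], outcome_col].mean()
    return flagged_rate / rest_rate

# Toy model output: 1 = the model recommended the employee for promotion.
scores = pd.DataFrame({
    "low_tenure":  [True, True, True, False, False, False, False, False],
    "recommended": [0,    0,    1,    1,     1,     0,     1,     1],
})

ratio = disparate_impact(scores, "low_tenure", "recommended")
print(f"Disparate impact ratio: {ratio:.2f}")

# The "four-fifths rule" from US hiring guidance is one common yardstick.
if ratio < 0.8:
    print("Possible adverse impact on low-tenure employees; investigate.")
```

The same two-line ratio works for any slice of the population your domain experts flag, which is exactly why having people who understand the data on the team matters.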

3. AI ethics is a team sport like no other

Changing human behavior is hard, and there is no "easy button" for responsible AI. It is crucial to get people to understand the value of multiple, varied skillsets for holistically assessing and mitigating bias. Too often, conversations about AI underscore the false notion that it is all too easy: "just massage the data until you get the answer you want." It is critical to invite other stakeholders to the table who offer a more holistic perspective, and to work across silos. This is not an effort for data scientists (or any single stakeholder group) to tackle alone! Designers, behavioral scientists (such as industrial-organizational psychologists) and other domain experts must be at the table when curating AI models that are both explainable and transparent. And never forget: the more diverse and inclusive the team, the lower the chance of error!

4. Prioritize holistic education of practitioners and a culture that nurtures responsible AI (and no, practitioners are not just your data scientists)

The fields of AI and machine learning are continually evolving, including a growing recognition of the potential harms algorithmic decision-making can cause. This knowledge, along with the methods and tools for mitigating such harms, is an essential part of practitioner education and training. Ensure you have methods to train practitioners on best practices across the entire AI lifecycle. Establish communities, in the form of Centers of Excellence, for the sharing of ideas. Include all relevant stakeholders in your education efforts. Join open-source communities where practitioners from trusted organizations share their best practices and lessons learned. Contribute to the conversations. Help move the field forward in a positive direction.


5. Invite your legal team to the table EARLY as a stakeholder 

Your legal team should be leading and coordinating AI risk assessment and recommending program enhancements. They should not be caught by surprise with concerns that are surfaced after algorithms have been deployed; rather, they should be involved upfront in the decision to bring AI to the organization. And by putting proper safeguards in place regarding how these systems will and will not be used, as well as opt-out mechanisms and alternative paths to supplement AI scoring and inferences, they will properly set the stage for safety and mitigation. The legal team should work to create a culture where people embrace their security standards and proudly hold them up as an example to which other organizations can aspire.


6. Prioritize governance

There are three key activities in AI governance:

1) Set up and agree upon a framework for assessing risk.

2) Inventory your models and data.

3) Assess risk using the agreed-upon framework.

These activities NEVER end; they are not one-and-done. Thinking through and designing upfront how your organization will oversee and manage the use of AI will pay dividends in time saved and problems avoided. Prioritize structuring and documenting your organization's principles, policies, processes, and standards for AI use. Monitor your governance program and adjust it over time as necessary. Ensure you have the governance frameworks in place to assess emerging risks, and provide clarity around roles, responsibilities, and accountabilities. IBM has published a short video on its three principles (and five pillars) for trustworthy AI. A minimal sketch of what activities 2 and 3 might look like in practice follows.
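As an illustration of activities 2 and 3, here is a minimal sketch of a model inventory record and a recurring risk review. The fields, names, and one-year review cadence are illustrative assumptions, not IBM's governance schema.

```python
# Illustrative model inventory entry; field names and the one-year review
# cadence are assumptions, not a standard schema. Requires Python 3.10+.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    name: str                      # model identifier
    owner: str                     # accountable person or team
    intended_use: str              # what the model is (and is not) for
    data_sources: list[str] = field(default_factory=list)
    risk_tier: str = "unassessed"  # e.g., low / medium / high, per your framework
    last_risk_review: date | None = None

def review_overdue(entry: ModelInventoryEntry, today: date) -> bool:
    """Governance never ends: entries never reviewed, or reviewed more
    than a year ago, are due for reassessment."""
    return entry.last_risk_review is None or (today - entry.last_risk_review).days > 365

# Activity 2: inventory the model and its data.
entry = ModelInventoryEntry(
    name="attrition-predictor-v2",
    owner="hr-analytics",
    intended_use="Flag retention risk for manager check-ins; "
                 "not for pay or promotion decisions.",
    data_sources=["hris_core", "engagement_survey"],
)

# Activity 3: assess risk on a recurring schedule, not once.
print(review_overdue(entry, date.today()))  # True: never reviewed
```

The point of the sketch is the recurring loop: the inventory feeds the assessment, and the assessment is rechecked on a schedule rather than performed once.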

There are 4 more lessons learned. Stay tuned for Parts 2 and 3!


Author Bios:

Sheri Feinzig

Sheri L. Feinzig, Ph.D., is a Partner with IBM Consulting's Talent Transformation service line. She is an experienced executive with a history of successfully leading teams through a range of business challenges, transformations, and growth. Sheri has expertise in human resources research, organizational change management, and business transformation. Recent focus areas include diversity, equity and inclusion, AI ethics, people analytics, employee experience, and workforce planning. Sheri is an adjunct professor for New York University's Human Capital Analytics and Technology master's program, and co-author of the critically acclaimed workforce analytics book The Power of People.

Phaedra Boinodiris

A fellow with the London-based Royal Society of Arts, Boinodiris has focused on inclusion in technology since 1999. She is currently the business transformation leader for IBM’s Trustworthy AI consulting group and serves on the leadership team of IBM’s Academy of Technology. Boinodiris, co-founder of WomenGamers.com, is pursuing her Ph.D. in AI and Ethics at University College Dublin’s Smart Lab. In 2019, she won the United Nations Woman of Influence in STEM and Inclusivity Award and was recognized by Women in Games International as one of the Top 100 Women in the Games Industry.