Mitigating the risks of Intelligent Automation: Bias, worker displacement and more


Adopting a Responsible Framework

Intelligent automation offers companies great promise for driving efficiencies, but it is not a panacea. By adopting a responsible framework for deploying such systems, companies can ensure that they are not inadvertently causing individual or societal harm, or economic harm to their own business.


Intelligent automation promises to drive improvements in efficiency and process outcomes. The ultimate expression of that optimization is how well a business translates the operational hours saved into Operational Expenditure (OpEx) or Capital Expenditure (CapEx) reductions, or into revenue increases, thereby producing a noticeable lift in operating income. Changes to workflows and consumption models, while beneficial to the organization, can affect the people building the optimizations as well as the downstream resources, processes and persons that receive them. These impacts span an enormous range of breadth and depth, from something as low-level as hardware or pipeline resource consumption to something as high-level as business unit investment decisions. The processes themselves can be fraught with implicit or explicit biases that lead to detrimental effects within the organization. The goal of this article is to examine the following questions:

  • Are business leaders able to critically evaluate how to integrate intelligent automation into business processes and mitigate the risk of bias? 

  • Are businesses prepared to perform independent evaluation of vendor-introduced bias?

  • Are businesses able to effectively identify and weigh the loss of control against the advantages introduced by intelligent automation?

  • What is an appropriate balance between humans and machines? 

We will review these challenges by evaluating the areas of organizational optimization, artificial intelligence and machine learning (AI/ML), as well as business process automation. 

This article proposes a framework for companies to adopt in order to ensure that they are deploying intelligent systems responsibly. Responsible Systems are those systems whose stakeholders ensure that they are not causing either intentional or unintentional harm to people or to the environment. 

A Framework for Intelligent Automation using Responsible Systems

Remember, intelligent automation is not a panacea.

The limitations of intelligent automation stem from the limitations of the underlying AI/ML technologies. Two baseline limitations are accuracy and transparency. Before adopting any AI model, a business must assess whether high accuracy or a high-touch process is a requirement, and then evaluate candidate models to ensure they can deliver the necessary capabilities. For example, in this article about using AI to mitigate toxic hate speech, community managers are absolutely critical to the process: they continuously train the AI and help manage community expectations. Today, state-of-the-art natural language classifiers achieve between 60% and 99% accuracy, depending on the corpus. If the business determines that level of accuracy is acceptable, an AI model can be adopted without support for manual supervision. If the business needs higher accuracy, it should use a well-trained human or choose models that support human supervision.
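To make the human-supervision option concrete, here is a minimal sketch of confidence-based escalation, where predictions below a threshold are routed to a person. The classifier, the review callable, and the 0.90 threshold are illustrative assumptions, not a specific product's API.

```python
# Minimal sketch: route low-confidence classifications to a human reviewer.
# `classifier`, `human_review`, and the threshold are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    reviewed_by_human: bool

CONFIDENCE_THRESHOLD = 0.90  # tune to the accuracy the business actually requires

def classify_with_oversight(text: str, classifier, human_review) -> Decision:
    label, confidence = classifier(text)               # model prediction plus score
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, reviewed_by_human=False)
    # Low confidence: escalate to a person and record that a person decided.
    final_label = human_review(text, suggested_label=label)
    return Decision(final_label, confidence, reviewed_by_human=True)
```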

AI/ML adoption within infrastructure solutions must also account for accuracy and transparency. Even in infrastructure, AI is only as accurate as its model and the data it uses to make decisions. Using data that is not well controlled or well suited to the intended purpose can produce unintended downstream impacts, ranging from service outages to biased decision making, with real-world litigation consequences. Any organization adopting AI in this context must first bring its data management tooling and processes to the point where the data is reliable enough to meet its needs, and then determine whether the AI's accuracy and supervision capabilities meet the needs of the business.
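As a rough illustration of the kind of pre-flight checks such tooling performs, the sketch below validates a dataset before an automation run is allowed to act on it. The column names, thresholds, and sample data are illustrative assumptions.

```python
# Minimal sketch: basic reliability checks before an automation run uses a
# dataset for decisions. Column names, thresholds, and data are illustrative.

import pandas as pd

def data_is_reliable(df: pd.DataFrame, required_columns, max_null_ratio: float = 0.01) -> bool:
    """Return True only if the frame has the required fields and completeness."""
    if not set(required_columns).issubset(df.columns):
        return False                            # schema drift: a required field is missing
    null_ratio = df[required_columns].isna().mean().max()
    return null_ratio <= max_null_ratio         # too many gaps -> unreliable decisions

telemetry = pd.DataFrame({
    "host_id":  ["a1", "a2", "a3"],
    "cpu_util": [0.42, None, 0.77],
})
if not data_is_reliable(telemetry, ["host_id", "cpu_util"]):
    print("telemetry failed data-quality checks; skipping automation run")
```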

When to automate

There are many good reasons to incorporate intelligent automation. It can improve human workers’ productivity by freeing them from low-value or repetitive tasks, and it can make a business process more consistent and predictable by incorporating AI-based decision making. However, process owners may be tempted to put AI into every possible step of a business process. It is critical that they do a careful cost-benefit analysis to understand the ROI of automation, and then scale gradually (and cautiously). One approach is to let the AI work on a small subset of active cases and have a human worker review its output before deciding whether it can be trusted with the full workload. This may seem slower at first, but it builds the necessary confidence in intelligent automation and increases the likelihood of success over time.
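A minimal sketch of that pilot pattern might look like the following, where only a small, random fraction of cases goes to the model and a human still owns every decision. The fraction, the callables, and the logging hook are illustrative assumptions.

```python
# Minimal sketch of a pilot rollout: the model handles only a small sample of
# cases, and every automated result is checked by a human before the rollout
# percentage is increased. All names here are illustrative.

import random

PILOT_FRACTION = 0.10   # start small; raise only after agreement is consistently high

def handle_case(case, model_decide, human_decide, log_disagreement):
    if random.random() < PILOT_FRACTION:
        machine_result = model_decide(case)
        human_result = human_decide(case)          # the human still owns the outcome
        if machine_result != human_result:
            log_disagreement(case, machine_result, human_result)
        return human_result
    return human_decide(case)                      # cases outside the pilot stay manual
```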

This approach is also useful when adopting vendor or open-source models in infrastructure automation. Algorithms often contain (likely unintentional) bias toward vendor priorities, which may not align with the priorities of your organization. Pay particular attention to ranking and weighting variables to ensure that any decision making matches the organization’s priorities; algorithms that favor specific capabilities can result in unintended vendor lock-in or increased consumption of a single vendor’s solutions. Look for options that include approval mechanisms, transparency, and tuning so that the organization’s best interests are protected.
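As a simple illustration of keeping that weighting in the organization's hands, the sketch below scores options with organization-owned weights rather than accepting a vendor's built-in ranking. The criteria, weights, and option data are illustrative assumptions.

```python
# Minimal sketch: score placement options with weights the organization owns,
# so the ranking logic is transparent and tunable. All values are illustrative.

ORG_WEIGHTS = {"cost": 0.5, "performance": 0.3, "vendor_diversity": 0.2}

def score(option: dict, weights: dict = ORG_WEIGHTS) -> float:
    # Each option exposes a 0..1 score per criterion; the weights are reviewable.
    return sum(weights[k] * option[k] for k in weights)

options = [
    {"name": "vendor_a_only", "cost": 0.9, "performance": 0.8, "vendor_diversity": 0.1},
    {"name": "mixed_fleet",   "cost": 0.7, "performance": 0.7, "vendor_diversity": 0.9},
]
best = max(options, key=score)
print(best["name"])  # mixed_fleet wins once vendor diversity is weighted in
```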

Building and training your own AI models can be costly. First, consider the full lifecycle costs: obtaining and labeling training data, training state-of-the-art models on high-end compute systems, evaluating those models for performance, fairness, and robustness, and deploying them into a production system. Weigh these lifecycle costs against the potential benefits of intelligent automation (e.g., cost savings, additional revenue, improved customer or worker satisfaction). Second, AI models are susceptible to biases in the underlying data set, so care must be taken to ensure trained models are fair. Data set content and lineage must be well understood to ensure they match the needs of the model; internally created data is often considered more trustworthy than externally acquired data, but both have their place. Finally, to avoid negative consequences when a business process becomes too automated, organizations must consider both the skills impact and the technology impact. Is your organization able to recover from infrastructure or data automation gone awry? If human workers lose their skills when a digital worker takes over, is that de-skilling tolerable, and can it be remedied?
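The lifecycle arithmetic can be kept deliberately simple. The sketch below tallies placeholder cost and benefit figures to produce a year-one ROI; every number is an illustrative assumption, not a benchmark.

```python
# Minimal sketch of the lifecycle cost-versus-benefit arithmetic described
# above. Every figure is an illustrative placeholder.

lifecycle_costs = {
    "data_acquisition_and_labeling": 120_000,
    "model_training_compute":         80_000,
    "evaluation_fairness_robustness": 40_000,
    "deployment_and_monitoring":      60_000,   # year-one operations folded in
}
annual_benefits = {
    "opex_savings":       180_000,
    "additional_revenue":  90_000,
}

total_cost = sum(lifecycle_costs.values())
total_benefit = sum(annual_benefits.values())
print(f"Year-one ROI: {(total_benefit - total_cost) / total_cost:.0%}")
# With these placeholder figures the project loses about 10% in year one,
# which is exactly the kind of result that argues for scaling gradually and re-measuring.
```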

A Cultural Responsibility to Upskill the Workforce

One might argue that a company that purports to create or acquire Responsible Systems bears some level of responsibility to displaced workers. According to a recent IBM Research study, more than 120 million workers in the world’s 12 largest economies may need to be retrained or reskilled in the next three years, and 60% of the workforce will need retraining or reskilling because of intelligent automation within that window. And yet executives rank shared prosperity and impact on jobs as the least important ethical considerations related to AI, and a majority (62%) of Chief Human Resource Officers believe that they have minimal or no obligation to offer retraining.

It is imperative that management find ways to give employees opportunities to upskill, so that they can support the business’s growth in a way that aligns with their own career progression and supports the responsible adoption of AI. One method of achieving this is to adopt “Education Credits”.

An “Education Credits” program starts by measuring the effects on workers displaced by such systems (Did they find other jobs in the same company at equal or better pay? Are they happier with their new, less repetitive work?) and using those measurements to determine how much investment should go into reskilling the workforce. Such a program could help displaced workers, i.e., those who did not find another job with equal or better pay, move into roles that add value and provide an equal or better living.

Mitigating the Risk of Bias

It is critical to understand the bias risks of AI adoption and to weigh those risks against the risk of failing to keep up with a changing business landscape. We propose a three-pillar approach for companies to mitigate the risk of bias in AI, and all THREE pillars are critical.

  1. Culture

  2. Forensic technology

  3. Governance Standards

Culture

Culture initiatives ensure that AI ethics is built into the mechanisms for institutionalizing values across the entire organization. Ethics governance and training must be embedded in all AI initiatives, and the commitment must come from the top down, with CEOs and C-level teams fully aware of and engaged in AI ethics issues. The culture of an organization that embraces responsible AI includes practices like red-team versus blue-team exercises. As described by Kathy Baxter, Salesforce’s AI Ethics lead, “…this helps identify unintended consequences or use cases that team members too close to a project/product might not be able to see. They treat ethical holes with the same priority as security holes. If there aren’t enough resources to create a dedicated team, rotate among team members each release to play the role of adversary”. Other best practices include teaching about cognitive bias in a holistic way, assessing AI’s impact on skills and the workforce, and taking ownership of outcomes.

When developing AI internally, it is imperative that management find ways to motivate employees to contribute to training the algorithms. When an employee takes the time to correct a model, they may inadvertently be flagged as less productive than if they had simply accepted the model’s output (error and all). Value comes from making these contributions visible to management so that leaders understand who is actually improving the system. One common misconception that must be corrected is the perception that when an employee’s answer differs from the intelligent system’s, the employee is automatically wrong.
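One lightweight way to make those contributions visible is to log every correction event so it can be credited and fed back into retraining, as in the sketch below. The file format, field names, and identifiers are illustrative assumptions.

```python
# Minimal sketch: record every human correction so the contribution is visible
# to management and reusable as a training signal. Names are illustrative.

import csv
from datetime import datetime, timezone

def record_correction(path, employee_id, case_id, model_answer, human_answer):
    """Append one correction event; each row is both credit and training data."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            employee_id, case_id, model_answer, human_answer,
        ])

record_correction("corrections.csv", "emp-042", "case-1871",
                  model_answer="approve", human_answer="escalate")
```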

Bringing AI-enabled technology into an organization should also be weighed against the organization’s cultural priorities. Every acquisition that includes AI capabilities may imply some loss of control and some skillset displacement, and these implications must be carefully considered. Some questions to ask: Will the loss of control result in potential business harm? Could the technology impact critical systems? If this technology takes over a specific job type, will eliminating that skillset cause other potential harm? Consider adding bias detection and recovery requirements to the evaluation criteria for external technology purchases. Ask questions up front about any claims of AI within products that will be introduced to the organization, and make it a priority to hold vendors accountable to the same standards and transparency required within your own organization.

Forensic Technology

Forensic technology must offer transparency into, and the ability to investigate, the inputs and outputs of AI-based decisions. Intelligent data management systems accomplish this by creating audit trails of all activities and by providing algorithms that mine datasets for bias. Mining for bias typically includes generating labels that show how the data was examined for specific kinds of bias and what the findings were. A best practice is to subscribe datasets to a forensic technology so that, any time new data is ingested, the entire dataset corpus is reviewed for bias. A good forensic technology tool generates clear, concise labels that instill trust in the system.
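As an illustration of one such bias check and the label it might produce, the sketch below compares positive-outcome rates across groups using the common four-fifths rule of thumb. The column names, threshold, and sample data are illustrative assumptions, not a complete forensic toolkit.

```python
# Minimal sketch of one bias-mining check: compare positive-outcome rates across
# groups and emit an audit label. Column names, data, and the 0.8 threshold
# (the "four-fifths" rule of thumb) are illustrative.

import pandas as pd

def selection_rate_label(df, group_col, outcome_col):
    rates = df.groupby(group_col)[outcome_col].mean()   # positive rate per group
    ratio = rates.min() / rates.max()                    # disparate-impact ratio
    finding = "potential bias" if ratio < 0.8 else "no disparity detected"
    return {
        "check": "selection_rate_parity",
        "group_rates": rates.round(2).to_dict(),
        "ratio": round(float(ratio), 2),
        "finding": finding,
    }

data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
print(selection_rate_label(data, "group", "approved"))
# {'check': 'selection_rate_parity', 'group_rates': {'A': 0.67, 'B': 0.33}, 'ratio': 0.5, 'finding': 'potential bias'}
```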

Governance

Lastly, the third pillar for mitigating bias is a published policy that the organization enforces. This policy should define what constitutes explainable and transparent AI and should inform the rules an organization must adhere to in order to operate its intelligent automation responsibly. It may set standards ensuring that such a system cannot be deployed, internally or externally, unless it meets defined criteria, and it should provide best practices for the ethics board to follow.

Policies must take into account the relevant vertical-specific, geography-specific, and content-specific standards and laws. Organizations should also be prepared for shifts in this area: what seems harmless at one point in time may violate future rules or become subject to new governance and standards. Creating an inclusive and diverse ethics board can help an organization adhere to governance standards and maintain feedback-loop mechanisms.

A note on Ethics boards

Not all ethics boards are the same. An ethics board that oversees intelligent systems making decisions about people needs to be effective and diverse, and must provide safe channels for people in the company to raise concerns. The World Economic Forum has published a set of best practices here.

Conclusion

Intelligent automation offers companies great promise for driving efficiencies, but it is not a panacea. By adopting a responsible framework for deploying such systems, companies can ensure that they are not inadvertently causing individual or societal harm, or economic harm to their own business.


Bios

Phaedra Boinodiris, FRSA has focused on inclusion in technology since 1999. She holds five patents, has served on the leadership team of IBM’s Academy of Technology and is currently pursuing her PhD in AI and Ethics. 

Nicole Reineke designs and builds revolutionary IT technology. She holds patents in cloud computing, was on multiple startup founding teams (acquisitions by Citrix & IRM), and is currently a Senior Engineering Technologist at Dell Technologies.

David Graham has worked across technology and social work since 2000.  He currently works for Dell Technologies and is a member of their World Economic Forum team looking at AI Governance.  He is currently pursuing his PhD in Marginalized Communities, Data Trust and Emerging Technologies.