Can AI Enhance Meritocracy within the Workplace?

By Phaedra Boinodiris and Rebecca James

HR organizations are increasingly turning to AI to identify and rate candidates for employment and promotion. There has understandably been a backlash, because companies often bake discriminatory signals into their algorithms without even realizing it. Much has been written about the pseudo-science of facial-scanning algorithms that claim to determine whether a candidate is a good match for a job. Unfounded claims by AI vendors that they can weed people out based on traits with no scientific validity should be discounted for what they are: snake oil. In this article, we will explore other ways in which AI is being used in HR and introduce steps organizations can take to ensure that the algorithms and AI models used to determine one's merit for employment and promotion are not inadvertently causing harm.

1. 'These are the traits that historically worked'

Simply put, companies do not have the best track record when it comes to hiring diverse and inclusive teams. To mitigate this, some companies have turned to AI to optimize the hiring process. But if a hiring model is trained to identify candidates based on what worked in the past, it will recommend more of the same, which can narrow your range of applicants. High-profile companies like Amazon were called out for rolling out hiring tools whose job-matching algorithms favored men from economically advantaged communities over everyone else. When using historical datasets to train a model, it may be tempting to strip out gender or race, but it is in fact critical that these features be retained. Removing race from a dataset does not mean the socio-economic or geographic data isn't still skewed toward a particular race; the bias simply gets baked in through proxies, and without the protected attribute you can no longer measure it. MIT Technology Review demonstrated this beautifully in the article 'Can you make AI fairer than a judge?' It is important that companies become aware of this kind of inadvertent bias. Once aware, organizations can begin to scrutinize their data more thoroughly to identify where bias is being introduced. Software such as IBM Watson OpenScale can be used to mine those datasets and flag bias.
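To make the proxy problem concrete, here is a minimal sketch in Python (the file name and column names are hypothetical) of an audit a team might run before training: it computes a disparate impact ratio across groups, then checks whether a seemingly neutral feature such as zip code is standing in for race.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical historical hiring data with columns: race, zip_code, hired (0/1).
df = pd.read_csv("historical_hires.csv")

# Selection rate per group: P(hired | group).
rates = df.groupby("race")["hired"].mean()

# Disparate impact ratio: the selection rate of the least-selected group
# divided by that of the most-selected group. Under the EEOC "four-fifths"
# rule of thumb, values below 0.8 are a red flag.
di_ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {di_ratio:.2f}")

# Proxy check: even if 'race' is dropped from the training features, a
# correlated feature like zip code can still encode it. Cramér's V near 1.0
# means zip code is effectively a stand-in for race.
contingency = pd.crosstab(df["zip_code"], df["race"])
chi2, _, _, _ = chi2_contingency(contingency)
n = contingency.to_numpy().sum()
cramers_v = (chi2 / (n * (min(contingency.shape) - 1))) ** 0.5
print(f"Cramér's V between zip_code and race: {cramers_v:.2f}")
```

A high Cramér's V would tell the team that simply dropping the race column changes nothing; the signal survives in the proxy.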

2. Using AI to infer information about people

Another popular use of AI is to mine information about people and categorize them in a meaningful way. For example, AI can be used to infer a person's skillset. In a world where both soft skills and hard skills are desired, the ability to infer an employee's skillset through informal means can be very useful. An organization may want to gauge the level of social eminence that Jane Doe has among her peers. An AI model can be used to determine how many Twitter followers she has, how many connections she has on LinkedIn, how many publications she has in major journals, and how often her public speaking engagements are announced and shared. Instead of asking Jane Doe for this information (which understandably can be challenging to obtain), AI can be used to mine that information and 'rate' her.
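To see why such ratings can go wrong, consider a hypothetical 'eminence score' of the kind described above (the fields and weights below are purely illustrative, not any real system): every input is a public self-promotion signal.

```python
from dataclasses import dataclass

@dataclass
class PublicProfile:
    twitter_followers: int
    linkedin_connections: int
    journal_publications: int
    speaking_engagements: int

def eminence_score(p: PublicProfile) -> float:
    # Arbitrary illustrative weights. Note there is no input that a quiet,
    # intrinsically motivated contributor could score on at all.
    return (0.001 * p.twitter_followers
            + 0.002 * p.linkedin_connections
            + 5.0 * p.journal_publications
            + 3.0 * p.speaking_engagements)

print(eminence_score(PublicProfile(12000, 800, 3, 6)))  # prolific self-promoter
print(eminence_score(PublicProfile(0, 50, 0, 0)))       # unsung contributor
```

An intrinsically motivated contributor with no public footprint scores near zero regardless of the quality of their work, which is exactly the objection raised in the next paragraph.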

The risk is that this approach rests on the erroneous assumption that all people are motivated to publicize what they do. It rewards those who post their contributions on social media for the accolades and the eminence, and it punishes those who contribute simply because they are intrinsically motivated to do so. By using this approach, the extrinsically motivated benefit over the intrinsically motivated - a dangerous precedent, and one that most organizations would never deliberately endorse. It is really important to hire people with different motivations: to add to the collective pool of thought, and especially to retain the intrinsically motivated. These individuals are often the 'unsung heroes' quietly doing the right things and contributing in big ways without recognition.

An alternative might be to invest in a system that queries current and future employees about their career interests and curates a list of mentors and coaches. In addition, the system could recommend enablement and stretch assignments that would benefit both the employee's career and the organization. Again, one would need to ensure that these more dynamic and personalized algorithms are scrutinized for both fairness and accuracy.
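A minimal sketch of such a system, assuming employees declare their interests explicitly (all names and data below are hypothetical), might rank mentors by overlap with the employee's stated interests:

```python
def match_mentors(employee_interests: set[str],
                  mentors: dict[str, set[str]],
                  top_k: int = 3) -> list[str]:
    """Rank mentors by Jaccard overlap between declared interest sets."""
    def jaccard(a: set[str], b: set[str]) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0
    ranked = sorted(mentors,
                    key=lambda m: jaccard(employee_interests, mentors[m]),
                    reverse=True)
    return ranked[:top_k]

mentors = {"Ana": {"ml", "ethics"}, "Raj": {"cloud", "devops"}, "Mei": {"ml", "nlp"}}
print(match_mentors({"ml", "ethics", "career-change"}, mentors))
```

Because the inputs are declared rather than scraped, the system does not penalize people who keep a low public profile - though the matching logic itself would still need the fairness and accuracy scrutiny noted above.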

3. The global pandemic is speeding up worker displacement

A very disturbing statistic from IBM's recent IBV study: only 38% of CHROs believe that organizations have an obligation to retrain or reskill workers displaced by AI and automation. There are many signs that COVID-19 is accelerating this predicted displacement. More and more, companies are investing in AI and automation to replace workers sidelined by COVID-19 and/or to make it safer for people to get services. Those jobs are not coming back after this is over. CHROs must assess the impact of AI and automation on skills and the workforce, and take ownership of the outcomes.

4. Use AI to curate an improved experience for candidates, new recruits, and employees

It is always hard to get started as a new recruit in an organization. Organizations can use AI to boost engagement by providing both candidates and new recruits with updates, feedback, and guidance, and by answering their questions in real time. But in doing so, as organizations gather information about candidates, they must ensure that all data privacy laws are followed to the fullest extent. Additionally, a developer can still introduce bias and create a negative experience without meaning to. For example, a developer can make a determination about a person's expertise or interests and then downgrade the feedback or guidance that individual is presented. It is important to explain which criteria were used to influence the decision to present one experience over another. Would the developer give an entrepreneur straight out of high school a different experience from a PhD graduate? Why, and how?
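One way to make those criteria explainable is to log, for every routing decision, exactly which inputs drove it. The sketch below (all field names and the routing rule are hypothetical) records a reviewable decision trail - and illustrates how an innocuous-looking rule quietly gives the PhD a richer experience:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExperienceDecision:
    candidate_id: str
    experience: str   # which guidance track was shown
    criteria: dict    # exactly which inputs influenced the choice
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def choose_experience(candidate_id: str, profile: dict) -> ExperienceDecision:
    # Example of a rule a developer might write without realizing it encodes
    # bias: routing by credential level changes what guidance a person sees.
    track = "research-track" if profile.get("has_phd") else "general-track"
    return ExperienceDecision(candidate_id, track,
                              criteria={"has_phd": profile.get("has_phd")})

decision = choose_experience("c-102", {"has_phd": False})
print(decision)  # the log makes the 'why' reviewable: was this criterion fair?
```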

Consider instead using AI not to make the final call about a person, but rather to flag when an intervention might be warranted. As the Brookings Institution notes, "This should be a constant consideration for implementing AI systems, especially those used in governance. For instance, the AI fraud detection systems used by the IRS and the Centers for Medicare and Medicaid Services do not determine wrongdoing on their own; rather, they prioritize returns and claims for auditing by investigators. Similarly, the celebrated AI model that identifies Chicago homes with lead paint does not itself make the final call, but instead flags the residence for lead paint inspectors." In the world of HR, perhaps AI could be used to flag when someone might be ready to be considered for that next stretch assignment.
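A minimal sketch of this 'flag, don't decide' pattern (the readiness score, field names, and review capacity below are illustrative assumptions): the model only prioritizes cases for a human to review; it never makes the final call.

```python
def triage_for_review(candidates, score_fn, review_capacity=10):
    """Return the candidates a human should look at first, highest score first."""
    scored = sorted(candidates, key=score_fn, reverse=True)
    return scored[:review_capacity]  # humans decide; the model never auto-promotes

# e.g., flag employees who may be ready for a stretch assignment
employees = [{"name": "A", "readiness": 0.91},
             {"name": "B", "readiness": 0.42},
             {"name": "C", "readiness": 0.77}]
for e in triage_for_review(employees, lambda e: e["readiness"], review_capacity=2):
    print(f"{e['name']}: flagged for manager review (score {e['readiness']})")
```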

What can you do TODAY?

There are three main things that an organization can do to mitigate the risks of bias in AI:

1) Culture - AI ethics need to be embedded into existing corporate mechanisms, from the CEO's office and the C-suite down to the operational level. This includes business conduct guidelines, values statements, employee training, and ethics advisory boards. Start by insisting on diverse and inclusive AI development teams. Continually educate employees about bias: how it affects decision-making processes, and how easily our own biases find their way into the AI models we train on our "human" decisions. Finally, ensure that ethics is incorporated into the full AI life-cycle, from design thinking through deployment and monitoring, so that development teams can better identify where bias is introduced and which groups may be disadvantaged.

2) Governance - Create an AI ethics board following standards such as those published by the World Economic Forum, and ensure that all data used to develop AI is trusted and that all procured and developed AIs meet defined standards. Do NOT make your CTO or CIO the head of your ethics board; that is very much like having the fox guard the hen house.

According to the G20/OECD Principles of Corporate Governance:

“The board has a key role in setting the ethical tone of an organization, not only by its own actions, but also in appointing and overseeing key executives and consequently the management in general. High ethical standards are in the long-term interests of the organization as a means to make it credible and trustworthy, not only in day-to-day operations but also with respect to longer-term commitments.”

Data governance should be the cornerstone of all AI development. When developing AI models, it is of the utmost importance that the data comes from trusted sources and is representative, unbiased, and fair. Organizations need to be aware that data from even the most trusted sources can have bias "baked in." In addition, organizations should adopt standards around AI so that, once deployed, models can be monitored for bias and verified to still meet those standards. Feedback loops can be used to surface bias in models, which can then be recalibrated over time. Mitigating bias is not a one-and-done effort but one that must be maintained over time. A diverse ethics board 'with teeth' can be very useful here, as this is a fast-changing space. Likewise, it always requires subject-matter expertise to know whether models will continue to work in the future, be accurate on different populations, and enable meaningful interventions.

3) Technology - Use technologies that mine datasets for bias and monitor AIs for fairness, and ensure they are employed for continuous monitoring, since data is continuously used to train AI algorithms. Don't have robust enough data, or concerned about maintaining privacy? A synthetic data creation tool like Geminai helps the healthcare industry by allowing institutions to share data without ever sharing a single piece of sensitive information; such tools can be used in other industries as well to maintain data integrity. The output of your AI platform should be comprehensive, defensible, and clear about how it arrived at a certain decision and exactly what data informed that choice. Through human review, the platform should allow an organization to pinpoint and analyze potentially biased data and remove it from all future analysis.
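As a sketch of what continuous monitoring might look like in practice (the metric choice, the 0.8 threshold, and the field names below are illustrative assumptions, not a reference to any particular product): periodically recompute a fairness metric over the live system's recent decisions and alert the governance owners when it drifts out of bounds.

```python
from collections import defaultdict

def monitor_fairness(recent_decisions, protected_attr="gender",
                     outcome="selected", di_floor=0.8):
    """Recompute disparate impact over a batch of logged decisions."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in recent_decisions:
        group = d[protected_attr]
        totals[group] += 1
        positives[group] += int(d[outcome])
    rates = {g: positives[g] / totals[g] for g in totals}
    di = min(rates.values()) / max(rates.values())
    if di < di_floor:
        # In production this would notify the ethics board / model owners
        # and could trigger a recalibration review.
        print(f"ALERT: disparate impact {di:.2f} below {di_floor}; rates: {rates}")
    return di

# Example: run over the last batch of logged hiring decisions.
log = [{"gender": "f", "selected": 1}, {"gender": "f", "selected": 0},
       {"gender": "m", "selected": 1}, {"gender": "m", "selected": 1}]
monitor_fairness(log)
```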

In short, ensuring meritocracy within organizations takes more than simply pointing an AI at the task of identifying individuals with the merit or ability to perform a job. Organizations need to invest in their culture, in proper governance models including ethics boards, and in technology that helps assure their datasets are fair and accurate.


This article represents the views of the authors and not those of any company.

Phaedra Boinodiris, FRSA, has focused on inclusion in technology since 1999. She holds five patents, has served on the leadership team of IBM's Academy of Technology, and is currently pursuing her PhD in AI and Ethics.

Rebecca James is a Data Scientist at IBM. She is a member of Carolina's TEC, leads the University Collaboration Initiative, and is a mentor to students at North Carolina State University and the University of North Carolina Wilmington. She holds an M.S. in Chemistry and Computer Science.