How can AI be a Trustworthy Magic Mirror?

Image: Depositphotos enhanced by CogWorld

We have already jumped into the pool of AI innovation. Just to be clear, this is not a human inventing AI but human-invented AI inventing something new. There have been interesting debates about whether a “machine” inventor can hold a patent. Some recent research has opened a can of nutritious worms: food for thought for AI interpreters and developers. Soon there will be giant corporate birds claiming ownership of these worms.

From the Cloud and Edge wars to the AI innovation wars, are we setting the stage for another nasty battleground, or can we turn it into healthy competition, even a game? Well, it depends. It depends on how fast we wake up and realize that we need to set clear rules for the players and their machines. Until now, referees created governance structures for the players alone; now we need a new kind of referee, because the game has transformed from a players-only contest into a combination of players and their machines.

As if this human-machine duo weren't complicated enough, let's throw in another question: will this “referee” be a person, an organization, a machine, or some combination of all three? Before the yarn gets completely tangled, let's clarify what people, organizations, and machines can each do here. An AI governance organization blessed by the C-suite can create hooks from the get-go for projects to validate the risks and biases associated with them. The strength of these validations will depend on the strength of the hooks. Each hook needs an ethics filter that is re-evaluated from time to time based on what it has to sift. The industry has established some standard sieves for sifting AI issues such as transparency, diversity, fairness, and privacy. However, these sieves stagnate quickly depending on the environments they are subjected to: a model that looks robust on a few datasets may become infiltrated by biases when deployed in a new production environment.

Hence these sieves need to be part of product qualification checklists. Doing so ensures that products and algorithms are designed, built, and implemented with the expectation that they will have to pass through the sieves, as in the sketch below. It also forces accountability for changes made to the algorithms, along with traceability of who made them. Engineering and re-engineering these sieves will be a work of art in itself, as the sieves evolve dynamically based on feedback from customers using the products and from a community of sieve engineers constantly observing modes of failure. Before we place superhuman expectations on these sieve engineers, we should acknowledge that they too will need machines to help them do their job.
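To make the checklist idea concrete, here is a minimal sketch in Python of what a sieve gate in a product qualification pipeline might look like. Everything in it is hypothetical: the check names, the model-card fields, and the 0.2 fairness threshold are illustrative placeholders, not any standard or vendor API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SieveCheck:
    name: str                        # e.g. "transparency", "fairness"
    run: Callable[[Dict], bool]      # returns True if the artifact passes

def qualify(model_card: Dict, sieve: List[SieveCheck]) -> bool:
    """Run every check in the sieve; release requires all of them to pass."""
    all_passed = True
    for check in sieve:
        passed = check.run(model_card)
        print(f"[{'PASS' if passed else 'FAIL'}] {check.name}")
        all_passed = all_passed and passed
    return all_passed

# Hypothetical "model card" recorded at build time for traceability.
model_card = {
    "transparency_doc": True,    # documentation of data and model behavior
    "parity_gap": 0.12,          # fairness metric computed on audit data
    "last_changed_by": "jdoe",   # accountability: who touched the algorithm
}

sieve = [
    SieveCheck("transparency", lambda m: m["transparency_doc"]),
    SieveCheck("fairness", lambda m: m["parity_gap"] < 0.2),
]

if not qualify(model_card, sieve):
    raise SystemExit("Release blocked: product did not pass the ethics sieve.")
```

The point of the design is that the checks, their results, and the change metadata live alongside the product artifact, so the sieve leaves an audit trail rather than a one-time sign-off.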

Here’s where startups like Modzy come in. Modzy has created a Trustworthy AI Checklist (a super sieve) identifying five major pillars for AI growth and health: Transparency, Diversity and Fairness, Technical Robustness and Safety, Privacy and Data Governance, and Accountability. Modzy offers a platform that manages and maintains AI technologies in line with these pillars, helping companies deliver healthy, secure, agile, and trustworthy AI products.

Thankfully, the sieve engineers now have a strategist in the room as well, the “AI Ethicist,” who voices their concerns and connects their voices to the global amplifiers. Companies like DataRobot, which builds 2.5 million AI models a day, have influential ethicists like Haniyeh Mahmoudian who are personally invested in making sure those models are built as ethically and responsibly as possible. Others in this role include Francesca Rossi (IBM Fellow and AI Ethics Global Leader), Paula Goldman (Chief Ethical and Humane Use Officer at Salesforce), and Natasha Crampton (Chief Responsible AI Officer at Microsoft). While there are more influencers in this role, their numbers are nowhere near the number of organizations rolling out AI products, which are growing like weeds.

Having talked about the sieves, let’s focus on what they need to filter out. Bias is one of the toughest kinds of dirt to catch. It can be broken into several major categories, including dataset bias, association bias, automation bias, interaction bias, and confirmation bias. Nikon’s facial recognition software demonstrated dataset bias when, shown pictures of Asian people, it suggested they were blinking, indicating the algorithm was trained primarily on faces of other ethnicities (Lee, 2009). When training data is collected with one type of camera but production data comes from a different camera, the result can be measurement bias. Amazon stopped using a hiring algorithm after finding that it favored applicants based on words like “executed” or “captured,” which were more common on men’s resumes. The use of Google’s algorithms in online advertising was criticized when a Carnegie Mellon study revealed that women were far less likely than men to receive ads for high-paying jobs with salaries greater than $200,000 (Spice, 2015). Finally, confirmation bias creeps in when shopping recommendations based on past purchase history keep showing customers similar products (Chou et al., 2017).
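As a concrete illustration of how a sieve might surface the first kind of dirt, here is a minimal sketch in plain Python that audits a model’s decisions for a demographic-parity gap between groups. The data, group labels, and 0.2 tolerance are hypothetical and for illustration only; real fairness audits use richer metrics and tooling.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Fraction of favorable outcomes per group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    1 for a favorable decision (e.g. "show the high-paying job ad").
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: (group, model decision).
audit = [("A", 1), ("A", 1), ("A", 0),
         ("B", 1), ("B", 0), ("B", 0)]

GAP_TOLERANCE = 0.2  # illustrative threshold set by the governance team
gap = demographic_parity_gap(audit)
print(f"Demographic parity gap: {gap:.2f}")
if gap > GAP_TOLERANCE:
    print("FLAG: decisions fail the fairness sieve; investigate before release.")
```

A check like this only catches disparities in outcomes; association or confirmation bias would need different probes, which is exactly why the sieves have to keep evolving.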

The “Global AI Adoption Index 2021,” conducted by Morning Consult on behalf of IBM, revealed that business adoption of AI was flat compared to 2020 but that significant investments in AI are planned, that COVID-19 accelerated how businesses use automation, and that trustworthy, explainable AI is more critical to business than ever before. In the words of Google CEO Sundar Pichai, “... You know, if you think about fire or electricity or the internet, it’s like that, but I think even more profound.” Given the current environment, it is high time corporations took charge of not just talking about, but delivering, a more inclusive and encompassing AI that can be trusted to raise the innovation game to a level hard for humans alone to reach. Only then will AI reach a horizon that touches the sky of innovation we have imagined. If we are looking for a magic mirror that speaks nothing but “the truth,” the polish must be free of any and all biases.


EKTA DANG, PHD, is Founder and CEO of U First Capital, which provides venture capital as a service to mid-size and large corporations. Dr. Dang has been a successful executive, speaker, and writer in Silicon Valley for two decades. She has held positions in venture capital and operations at Intel, and is a startup enthusiast and mentor at Alchemist Accelerator, Stanford, UC Berkeley, Google Launchpad, and elsewhere. She has been a member of the US Government's Technology Policy Advisory Committee. Ekta has a PhD in Physics and is a graduate of the UC Berkeley Haas School's Venture Capital program. She has published several research papers in IEEE and other reputed international journals. Visit Dr. Dang’s LinkedIn and U First Capital’s website.