Ethical AI? It's All About Perspective

By Neil Sahota  |  December 3, 2018  |  Source: CogWorld on FORBES


In 2018, sex scandal after sex scandal seared the pages of our newspapers and digital screens. Hashtags, such as #timesup and #metoo, became rallying cries for harassed and disenfranchised women who found their voices with high-profile allegations. Likewise, as a result of marches in numerous countries, and record numbers of women running for office, 2018 has been dubbed The Year of the Woman. It also has been called The Year of AI. Ralph Haupter, president of Microsoft Asia, said in January, “We are on the cusp of a new revolution, one that will ultimately transform every organization, every industry and every public service across the world.”

Now, what if we married these two ideas? What if our technological advances promised to improve the lives of women on this planet? AI technologist and Obama Foundation Summit Civic Leader Kriti Sharma believes in such a possibility. My coauthor Michael Ashley and I recently interviewed Sharma, who is on the Forbes 30 under 30: Technology list and is a U.N. Young Leader, for our upcoming book, Uber Yourself Before You Get Kodaked: A Modern Primer on A.I. for the Modern Business.

“It was through the Obama Foundation Summit that I met some amazing leaders, activists and changemakers from around the world,” said Sharma. “This is where I realized the opportunity that exists in using AI for social impact and our biggest humanitarian challenges. Technology can bring real change to people going through domestic violence and abuse, in particular, and adolescents who don’t have access to sexual and reproductive healthcare and education.”

Such innovation couldn’t come at a better time. Despite rising prosperity throughout the globe, a recent U.N. report found that 35 percent of women have experienced physical and/or sexual violence from a partner at some point in their lifetime. This problem is further compounded by the prevalence of victim-blaming across cultures. “Many women feel embarrassed and ashamed to ask for help,” explains Sharma. “And even when they do, they most often get asked things, like, ‘What did you do wrong to provoke this behavior?’ Or ‘What were you wearing that day?’”

Cognizant of how this devastating mindset can perpetuate cycles of further abuse, Sharma and her colleagues created a new AI platform on which victims could talk to a machine instead of a human. They chose Johannesburg, South Africa, as their test site due to the high rate of femicide and violence against women. “I had an assumption when we did this that the participants might feel offended,” said Sharma. “After all, these women are going through a difficult situation, and yet, we're asking them to speak to a machine about it. But my assumption was completely wrong. They loved talking to an unbiased, nonjudgmental computer. They really opened up and started asking for help.”

Known as rAInbow, the technology was built by AI for Good together with founding partners The Sage Foundation (of which Sharma is VP of Artificial Intelligence) and Soul City Institute for Social Justice. Built by a team of technologists and leading experts in combating abuse and achieving social justice, the project shows what can happen when higher ethical purposes combine with emerging technology. RAInbow offers an alternative to the challenges abuse victims typically face: rather than having to pick up the phone for help while living with the person being reported, victims get anonymity and support through its platform. Additionally, it offers immersive and personalized narratives to individuals who feel isolated and helpless. In just a few weeks since its public launch, rAInbow has already had more than 50,000 interactive conversations with South African women affected by domestic violence.

However, Sharma’s humanitarian aspirations don’t end there. Along with like-minded individuals inside and outside her organization, she seeks to use AI to improve the lives of the disenfranchised in other realms. We often hear much about the threat of automation taking our jobs. What we don’t hear enough about, suggests Sharma, is how biases can affect job recruitment and enable other forms of discrimination. Sharma warns the problem may not even be intentional. For instance, she doesn’t believe HR departments design their hiring processes to prefer one group over another. Nevertheless, problems exist because machines learn from historical datasets, and historically companies have made decisions favoring certain individuals at the expense of others.

Though AI should reflect true diversity, the fact is that innovation is occurring so rapidly — and often haphazardly — that bias arises as an unintended consequence, with siloed organizations within companies making decisions in a vacuum. In addition, a development team may never dream it is building in bias, yet it happens without their notice. “Ultimately,” says Sharma, “we need more awareness. If you’ve never faced bias yourself, you're less likely to see it. It’s more about empathy. If you have diverse teams, then you will create more diverse products.”

Richard Franzi, author of Killing Cats Leads to Rats: Mitigating Unintended Consequences of Business Decisions, reiterates the severity of the problem and the need to correct the issue at its source. “The potential for bias and downstream unintended consequences associated with AI deployments is a clear and serious challenge. Taking steps to expand a development team's knowledge, by including diverse perspectives, can be a valuable practice to uncover hidden bias prior to implementation.”

To put the problem Sharma and Franzi warn of in greater perspective, it’s worth considering the long-term effects of biases in the workplace, the health sector and beyond. “The challenge is that, without the realization of its creators, algorithms can end up creating a system that automates at scale the occasional human biases and the systemic issues we now have. Only now it can do it to millions of people at the same time, rather than just one person.”

To mitigate the chances of disenfranchisement on an exponential scale, as well as the many other threats AI poses, Sage is taking purpose-driven steps. Serving 3 million customers globally with cloud-based management in accounting, operations, payments and banking, its members see a need to develop AI-powered technologies in a mindful way. As a result, they developed an Ethics of Code. Five guiding principles inform their AI development: 1) AI should reflect the diversity of the users it serves. 2) AI must be held to account, and so must users. 3) Reward AI for “showing its workings.” 4) AI should level the playing field. 5) AI will replace, but it must also create.

In many ways, the future of AI innovation mirrors the final principle of Sage’s Ethics of Code. Principle 5 unsentimentally asserts AI will replace jobs. There is nothing that can be done about this fact; the Fourth Industrial Revolution is here whether we like it or not. Yet, rather than stew about the upheavals AI may cause, the principle focuses on the good AI can offer. Similarly, Sharma doesn’t dwell on the problem of biases when contemplating our future. She is too busy envisioning ways in which AI can help others, like providing universal health care to remote, underserved populations in developing countries.

Sharma finds herself inspired even more by her experiences mentoring today’s young people. “I do a lot of work with kids in diverse places, teaching them engineering and coding. So far, no one has tried to build a killer robot or something that’s going to destroy everybody. Instead, nine out of 10 times they come up with a project that has a strong social purpose without any prompting — they develop ideas and solutions all on their own. They really want to create a better world that’s never existed before.”


Neil Sahota (萨冠军), contributor, is an IBM Master Inventor, United Nations (UN) Artificial Intelligence (AI) subject matter expert, and Professor at UC Irvine. With 20+ years of business experience, he works with clients and business partners to create next generation products/solutions powered by AI. His work experience spans multiple industries including legal services, healthcare, life sciences, retail, travel and transportation, energy and utilities, automotive, telecommunications, media/communication, and government. Moreover, Neil is one of the few people selected for IBM's Corporate Service Corps leadership program, which pairs leaders with NGOs to perform community-driven economic development projects. For his assignment, Neil lived and worked in Ningbo, China, where he partnered with Chinese corporate CEOs to create a leadership development program.