Pop Culture, AI And Ethics


I am a major sci-fi fan. Well, at least I thought I was until I went to my first Star Trek convention in my 20s and realized I was in the minority: one of the few attendees who did not speak Klingon or know episode numbers, titles, or dates.

Science fiction inspires technologists every day. Most recently, I have been inspired by Black Mirror, a show originally aired on Channel 4 in the UK and now offered on Netflix. The brainchild of Charlie Brooker, Black Mirror is the Twilight Zone for our times, giving us a glimpse of how technology trajectories can affect society in unintended ways in the coming decades. As Frederik Pohl used to say, "A good science fiction story should be able to predict not the automobile but the traffic jam." Metaphorically speaking, this show sure is predicting traffic jams. As we stand on the cusp of a technology tsunami (the fourth industrial revolution), this show all but implores us to consider the unintended consequences of our AI and robotics implementations.

In this article, I would like to take the opportunity to do a deep dive into three of the show's episodes and offer a Design Thinking framework for adopting a thoughtful approach to AI implementations. Warning: there are spoilers!

Episode name: Nosedive

In this episode, we live in a society where most people wear digitized smart contact lenses that relay information about the people you are looking at: their names and, most importantly, their social scores. The social score is aggregated over time as the individuals you interact with grade your conversations. Great customer service from the barista? Give her a 5 out of 5. Was that a snide raise of the eyebrow from the parking lot attendant? I'll give him a 2 out of 5. Our main character is desperately trying to raise her score so that she can rent an apartment at an exclusive complex where only high-ranking social scorers can live. The show gives us a glimpse of what happens to everyday interactions when we feel we are being constantly graded, and of the inevitable backlash that follows. What was fascinating to me was that shortly after I saw this episode, news came out about China's social credit system. In a speech in late 2018, US Vice President Pence described it as "an Orwellian system premised on controlling virtually every facet of human life." Subsequent reporting, though, revealed that the stated point of the system was to punish those who were not compliant with government rules in order to stem corruption.

Regardless, one can see that there are real implications when people are scored by other people, and indeed by AI. In 2016, ProPublica published an astounding exposé titled "Machine Bias" that detailed how software meant to help judges predict recidivism among defendants was biased against Black defendants.

It is quite easy to introduce biased data to an AI. This past summer, my friend Joe Kozhaya worked with students from Raleigh Charter High School to develop a Watson-powered Harry Potter sorting hat. You type in information about yourself, like your hobbies (I like to read) or your skills (I can speak to snakes), and the hat bellows out your Hogwarts house. As I curiously dove into the spreadsheet used to train Watson, I saw opportunities, PLENTY of opportunities, to sway the data. You don't like my cooking, or you didn't vote the way I voted? You obviously belong in Slytherin. Being an earnest mom of 4, I always try to seize opportunities to teach. To show my kids how easy it is to bias the data that trains an AI, I made sure that when they typed in their names, the hat would bellow out "Slytherin!" This, of course, had the intended effect: the crossed arms, the glare in my direction. I wanted them to remember this moment, and never to trust an AI that is not fully transparent.
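To make that concrete, here is a minimal sketch of the same trick using a toy scikit-learn text classifier rather than the actual Watson training spreadsheet; the example rows and the name "Alex" are hypothetical:

```python
# A minimal sketch (not the actual Watson training data) of how a few
# deliberately mislabeled rows can sway a toy "sorting hat" classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training rows: (self-description, house label).
rows = [
    ("I am brave and love adventure", "Gryffindor"),
    ("I am loyal and love my friends", "Hufflepuff"),
    ("I like to read and study", "Ravenclaw"),
    ("I am ambitious and cunning", "Slytherin"),
]

# The sabotage: quietly label anything mentioning a particular name as Slytherin.
poisoned = rows + [
    ("my name is Alex and I like to read", "Slytherin"),
    ("my name is Alex and I love my friends", "Slytherin"),
    ("my name is Alex and I am brave", "Slytherin"),
]

vec = CountVectorizer()
X = vec.fit_transform(text for text, _ in poisoned)
clf = MultinomialNB().fit(X, [label for _, label in poisoned])

# The poisoned feature (the name) now dominates the prediction.
print(clf.predict(vec.transform(["my name is Alex and I am loyal"])))
# -> ['Slytherin']
```

The model looks perfectly functional from the outside, which is exactly why transparency about training data matters.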

The truth is, we live in amazing times. We need to continuously ask ourselves these four questions: How can humanity benefit from this AI/tech? What products and services can you imagine in this space? How might AI be manipulated, or unintended consequences lead to harmful outcomes? What are the suggestions for a responsible future?

Next episode.

Episode name: Hated in the Nation

In this reality, bees are extinct, so to preserve our way of life we have created bee drones to pollinate in their place. What could possibly go wrong? As you might guess, the bees get hacked and are used for nefarious spying and murderous intent. A week after this episode, Futurism released an article stating that Japan had indeed invented bee drones. As our lives fill with more and more smart devices, concerns over hacking become even more pronounced. Although it is hard to imagine a smart refrigerator being hacked for murderous intent, a compromised one can tell a thief whether you are home or not. And that smart little robotic vacuum that cleans your floor? It may be mapping and sharing every nook and cranny of your home's layout. The four questions, again.

How can humanity benefit from this AI/tech? What products and services can you imagine in this space? How might AI be manipulated, or unintended consequences lead to harmful outcomes? What are the suggestions for a responsible future?

Episode name: Metalhead

Filmed entirely in black and white, this terrifying episode depicts a couple attempting to break into a facility guarded by dog-like security drones. These drones are heat-seeking, highly strategic killers with facial recognition, armed to the teeth with deadly weaponry. Shortly after watching this episode, I recall seeing a video from Boston Dynamics showing two dog-like robots working together to open a closed door. Yes, we have security dogs, and we have security cameras and systems, but there is something truly chilling about creating machines that kill of their own volition. As teams work towards training drones to play paintball autonomously, we must consider the ramifications of our decisions. Is this the future we want?

How can humanity benefit from this AI/tech? What products and services can you imagine in this space? How might AI be manipulated, or unintended consequences lead to harmful outcomes? What are the suggestions for a responsible future?

Looming Large: Manipulating Perceptions

Looming large are the very real, here-and-now implications of having AI manipulate our perceptions of the world. Looking for a video explaining climate change on YouTube? Don't be surprised if, after you watch two or three videos from credentialed scientific sources, the recommendation engine offers you a video from a climate change denier. Why? Because an AI can be tailored to measure success by how many clicks an offering gets, and salacious material is clickbait. This is a practice that YouTube now (finally) says it will counter. Vladimir Putin is correct in saying that whoever leads in artificial intelligence will rule the world, because that is who gets to hold the pen of history and shape how it is perceived by the masses.
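A toy illustration of that incentive (the titles and click probabilities below are invented, not YouTube's actual data or algorithm): a recommender whose only success metric is expected clicks will rank the sensational video first, regardless of accuracy.

```python
# Hypothetical catalog: (title, probability a viewer clicks, factually sound?).
videos = [
    ("Climate science explained by NASA researchers", 0.04, True),
    ("University lecture: how greenhouse gases work", 0.02, True),
    ("SHOCKING: the climate HOAX they don't want you to see", 0.11, False),
]

# Optimizing purely for clicks: sort by click probability, accuracy be damned.
for title, click_prob, sound in sorted(videos, key=lambda v: v[1], reverse=True):
    print(f"{click_prob:.2f}  {title}  (factual: {sound})")
```

Nothing in that objective function rewards truth, so nothing in the ranking reflects it.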

There is Hope

Indeed, there is hope on the horizon. In late 2018, the AI in Government Act was introduced in the US Congress; it would create a steering committee to help the US government navigate policy on how AI is used in government, ostensibly to help prevent bias and unintended consequences. For the record, I would give my eyeteeth to be part of this steering committee. (Just putting that out into the universe.)

Other interesting innovations that can help combat unintended consequences include:

* There are products being developed that help flag when AI has been trained with biased data.

* There are AIs being built with enforced transparency.

* Companies are deliberately taking stands regarding data privacy as it pertains to their clouds and AIs.

* GDPR (General Data Protection Regulation), a regulation in EU law that protects the data and privacy of individuals in the EU. The state of California has now passed its own version, the California Consumer Privacy Act.

OpenMined is an open-source project that uses blockchain to credential every learning pattern introduced to an AI, enforcing transparency and ensuring that biased data can't hide behind a black box.
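To illustrate the underlying idea, here is a minimal sketch of an append-only, hash-chained training ledger; this is my own simplification (with invented file names), not OpenMined's actual implementation:

```python
# A toy training ledger: each entry is hashed together with the previous one,
# so the full training history can be audited and any retroactive edit breaks
# the chain. Illustrative only; not OpenMined's actual design.
import hashlib
import json

def add_entry(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify(chain):
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_entry(ledger, {"batch": 1, "source": "survey_2019.csv", "rows": 5000})
add_entry(ledger, {"batch": 2, "source": "forum_scrape.json", "rows": 200})
print(verify(ledger))             # True
ledger[0]["record"]["rows"] = 99  # quietly rewrite training history...
print(verify(ledger))             # False: the tampering is detectable
```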

Incorporating this Practice into Design Thinking

This all got me thinking about how we can plan for a more responsible future by training developers, designers, and engineers to think about AI responsibly as they design their products and applications. Design Thinking offers a fantastic framework for approaching the creation of a product or an experience for an end audience. There is a really compelling Design Thinking guide, created with the specific aim of cultivating an ethics ethos in design and development teams, called 'Everyday Ethics: A Practical Guide for Designers and Developers'. It has a companion toolkit called 'Design Ethically'. Both are excellent.

The guide goes on to outline five areas of ethical focus: accountability, fairness, explainability and enforced transparency, user data rights, and value alignment.

Practicing Design Thinking has become so ingrained in many dev groups that I am hopeful that expanding the practice to include Everyday Ethics will make a difference. But in truth, I don't think we can stop there, hoping that people will self-govern. We need responsible public policy, regulations, and governance to help us navigate this brave new world. We need to teach these technologies, and how they can be used and misused, as early as K-12. I do NOT believe that AI unto itself is the harbinger of apocalyptic mayhem; it is a tool like any other. In fact, if you think of our largest dreams and aspirations as human beings, like traveling to and living on other planets, we will NEED AI to achieve them. We have to get this right, and we will, through education, responsible policy, governance, and best practices.


Phaedra Boinodiris is a member of IBM's Academy of Technology, where effectively she is an INTRA-preneur, kicking off internal startups that range from IBM's first Serious Games and Advanced Simulation program to IBM's first K-12 program, influencing curriculum in traditional and non-traditional learning spaces through entrepreneurship and social impact. She is keenly and wholeheartedly invested in Tech for Good and Ethics, and she is pursuing her PhD in AI and Ethics thanks to a generous scholarship from the European Union. Boinodiris happily mentors startups around the world as well as business school students at her alma mater, UNC-Chapel Hill, where she is an active Adams Coach. She is also the author of Serious Games for Business, published in 2014 by Meghan-Kiffer Press. Boinodiris' earlier work in serious games is being used in over 1,000 schools worldwide to teach students the fundamentals of business optimization. Boinodiris was honored by Women in Games International as one of the top 100 women in the games industry. Prior to working at IBM, she was a serial entrepreneur for 14 years, during which she co-founded WomenGamers.Com, a popular women's gaming portal. There she started the first US scholarship for women to pursue degrees in game design and development.