Parenting 101: How to train an AI System to DO GOOD

When considering the development of an AI that is meant to mine insights on human behaviors (and then possibly make decisions based on those insights), there is one really critical ‘Parenting 101’ lesson to consider. Incentivize the behaviors you want to see more of in three steps: REPLICATE what works, MEASURE, and REPEAT. Consider a mix of roughly 80% positive reinforcement and 20% negative reinforcement, tailored to the context of the situation, since negative reinforcement carries more repercussions.
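
To make the three steps and the 80/20 mix concrete, here is a minimal sketch in Python (our own illustration; the observation format, player names, and feedback budget are hypothetical, not a prescribed design) of a feedback loop that spends most of its budget reinforcing the behaviors it wants to see more of:

```python
import random

# Illustrative only: behavior observations such a system might log.
observations = [
    {"player": "ada", "behavior": "helped a newbie", "desired": True},
    {"player": "bob", "behavior": "insulted a teammate", "desired": False},
    {"player": "cat", "behavior": "thanked a teammate", "desired": True},
    {"player": "dan", "behavior": "shared a secret code", "desired": True},
    {"player": "eve", "behavior": "rage-quit mid-match", "desired": False},
]

POSITIVE_SHARE = 0.8  # assumed 80/20 split from the 'Parenting 101' rule


def give_feedback(observations, feedback_budget=10):
    """Spend roughly 80% of the feedback budget on positive reinforcement."""
    positive_budget = round(feedback_budget * POSITIVE_SHARE)
    negative_budget = feedback_budget - positive_budget
    desired = [o for o in observations if o["desired"]]
    undesired = [o for o in observations if not o["desired"]]
    actions = []
    for obs in random.sample(desired, min(positive_budget, len(desired))):
        actions.append(f"REWARD {obs['player']} for: {obs['behavior']}")
    for obs in random.sample(undesired, min(negative_budget, len(undesired))):
        actions.append(f"WARN {obs['player']} about: {obs['behavior']}")
    return actions


for action in give_feedback(observations):
    print(action)
```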

Here is a quick test as described in the Harvard Business Review. “Consider making a list of the behaviors you are currently measuring in your organization. Don’t concern yourself at this point with whether your measurements are objective or subjective or whether they’re included in your annual performance reviews. Then compare each of the behaviors on your “more of” and “less of” lists to the list of behaviors you’re currently measuring. Put a circle around the behaviors you are not now measuring. This is your danger list!… If a behavior you care about is not being measured, you aren’t able to reward the people who are doing what you want. Nor can you penalize people who are not doing what you want.”
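
In code form, the HBR exercise is essentially a set difference; the behavior names below are placeholders, and the point is simply that anything on your "more of" or "less of" lists that you are not currently measuring lands on the danger list:

```python
# Placeholder behavior names, purely for illustration.
currently_measured = {"sales closed", "tickets resolved", "lines of code"}
more_of = {"mentoring new hires", "thanking teammates", "sales closed"}
less_of = {"interrupting in meetings", "hoarding information"}

# Behaviors you care about but do not measure: the "danger list".
danger_list = (more_of | less_of) - currently_measured
print(sorted(danger_list))
```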

Another quick test: take this online assessment. How many women are on your data science team? How many minorities? Without ensuring that you have a truly diverse and inclusive team incubating the data that trains a model that makes decisions about people, you are skating on thin ice.

Although we may all nod at the statement ‘incentivize the behaviors you wish to see’ because it feels obvious, it may not be so obvious when we think about AI. Our first reaction is oftentimes to use AI to mine the behaviors we do NOT want to see… the outliers. Who is planning to leave the company. Who is not productive. Who is harassing others in League of Legends. In this article, we propose to put those pattern-recognizing neocortexes to work and have them discover the patterns that earn the reward.

Using AI to mitigate harassment in online multiplayer games

Take as an example using AI to mitigate the threat of harassment in an online game. After COVID hit, there was a mad rush to play games: many people were stuck at home, no longer commuting, with little to do and quickly growing bored. What we are finding is that the kinds of games most people are currently gravitating towards are those that support a multiplayer experience. Simply put, online gaming for many has become a social lifeline. Players are hoping to connect with other players and find ways of diverting themselves from the doldrums of being stuck in the same apartment or house every day. Unfortunately, when many players get on these platforms, they experience harassment instead, which can lead to both high attrition and brand decay.

Incidents of cyber-bullying 

Incidents of cyber-bullying in general are more than double what they were in 2007. 87% of young people have seen cyber-bullying online. Roughly 4 in 10 Americans have personally experienced online harassment, and 62% consider it a major problem. Many want technology firms to do more, but they are divided on how to balance free speech and safety online. One of the most prolific mediums for this online harassment is online gaming.

Community Manager tactics

Game publishers have repeatedly tried to address the problem of toxicity but face numerous challenges common to Internet-based forums, especially those permitting anonymity. They have struggled to decide on broader strategic issues, such as how to balance free speech with ensuring a safe and less hostile environment, an issue shared by old-guard social networks like Facebook and Twitter. 

Community managers deploy various strategies, with varying levels of success, to mitigate toxicity. A few that have worked to some degree include rewarding good in-game behavior, making it easy to avoid players with established patterns of toxic behavior, establishing protocols to ensure that no one is above the rules, and crowd-sourced reporting from the community.

Using AI to support Community managers

Artificial Intelligence could be a key tool in the repertoire of community managers to combat griefing (harassment). Community managers are the heroes in our story, curating optimal player environments. Below is a short list of ways that community managers might use AI to curate communities that are more hospitable to more gamers; a rough sketch of how these actions might be wired together follows the list.

1) Auto-gift good behavior using in-game rewards

2) Auto-BLOCK a player for consistent bad behavior in a new way (à la Black Mirror's "White Christmas" episode)

3) Flag toxic behavior to a community manager, as it occurs, for further analysis

4) In-game alerts to griefers that warn them about the very specific ways in which their behavior is unacceptable, with varying degrees of consequences ranging from a text warning, to short-term account freezes, to figure and voice blocks (per above), all the way to an account suspension.

5) Enhanced tools for communities to self-police, powered by AI

6) The ghost-block: allow the user to continue to post, but hide their posts from other users for a period of time or until a benchmark of acceptable content or behavior has been met.
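
As a rough sketch of how several of these actions could hang together (our own illustration, assuming a classifier that scores each in-game message for toxicity and positivity on a 0–1 scale; the thresholds, strike counts, and escalation order are hypothetical, not a prescribed policy), a community-management pipeline might route scores to graduated responses:

```python
from dataclasses import dataclass


@dataclass
class PlayerState:
    name: str
    toxic_strikes: int = 0  # rolling count of flagged incidents

# Assumed thresholds; real values would be tuned with the community manager.
POSITIVE_THRESHOLD = 0.8
TOXIC_THRESHOLD = 0.7


def route_action(player: PlayerState, toxicity: float, positivity: float) -> str:
    """Map classifier scores onto the graduated responses listed above."""
    if positivity >= POSITIVE_THRESHOLD:
        return f"auto-gift: send in-game reward to {player.name}"
    if toxicity >= TOXIC_THRESHOLD:
        player.toxic_strikes += 1
        if player.toxic_strikes == 1:
            return f"warn: explain to {player.name} why the behavior is unacceptable"
        if player.toxic_strikes == 2:
            return f"flag: notify a community manager to review {player.name}"
        if player.toxic_strikes == 3:
            return f"ghost-block: hide {player.name}'s posts until behavior improves"
        return f"escalate: short-term freeze or suspension review for {player.name}"
    return "no action"


# Example: one player drifting toward toxicity, one being helpful.
griefer, helper = PlayerState("griefer42"), PlayerState("helpful_hana")
for tox in (0.75, 0.8, 0.9, 0.95):
    print(route_action(griefer, toxicity=tox, positivity=0.1))
print(route_action(helper, toxicity=0.05, positivity=0.9))
```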

Inversion: use AI to measure and reward GOOD behaviors

Today, the most loyal players will oftentimes be rewarded by a community manager with some level of authority as an ambassador for the community. This can feel more like being given a sash as a hall monitor in elementary school than anything else. This ambassador will oftentimes be the person to raise issues of discontent in the community to a community manager, acting as a spokesperson or go-between. There is an opportunity to reward more than just the players that are LOYAL, and to promote the kinds of behaviors a community manager would ultimately wish to replicate.

Other behaviors that could be positively flagged include:

  • encouraging a frustrated player,

  • helping a newbie advance in the game,

  • sharing "secret codes",

  • thanking someone,

  • supporting a teammate when they do something well,

  • or making a "Leeroy Jenkins" joke.

We could train AI to recognize these patterns, with the community manager choosing which behaviors to mine and recognize. How rewarding would it be for a player whose natural play style is helpful, and who has contributed to other casual players enjoying the game more and continuing their play, to receive a gift that recognizes that contribution? AI could recognize these longer-lasting impacts and see connections for enjoyable gameplay that aren't as obvious in the moment. And if such a system were itself "gamed", worst case we would have people trying to out-"manner" one another… versus people figuring out ways to put someone down without using any of the "toxic" language that has been classified.
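
What might "training AI to recognize these patterns" actually look like? A minimal sketch, assuming scikit-learn is available and that a community manager has curated a (here, tiny and entirely hypothetical) set of labeled chat lines, is a simple text classifier over chat logs:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled chat lines; in practice a community manager would
# curate far more examples of the behaviors they want to replicate.
chat_lines = [
    "nice shot, you're getting better every match",     # encouraging
    "here, take this code for the bonus skin",          # sharing secret codes
    "thanks for the heal back there",                    # thanking
    "try flanking left, it worked for me as a newbie",   # helping a newbie
    "you are useless, uninstall the game",               # toxic
    "everyone report this idiot",                        # toxic
    "gg everyone",                                        # positive
    "worst team i have ever seen",                        # toxic
]
labels = ["positive", "positive", "positive", "positive",
          "toxic", "toxic", "positive", "toxic"]

# A simple bag-of-words model; real systems would need far larger,
# multilingual corpora.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(chat_lines, labels)

for line in ["thanks for carrying me, that was fun", "you should quit forever"]:
    print(line, "->", model.predict([line])[0])
```

In practice, the labels would come from the community manager's "more of" list, and the corpus would need to cover every language and game the model serves.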

What does toxic behavior look like? The absence of good behavior. Yes, community managers will be informed, and yes, they can directly address those players that repeatedly show an absence of good behaviors. But unless we use AI in a positive way, to reinforce positive behaviors in people, there is always an opportunity for misuse and most certainly for mistrust. If you are measuring good behavior, then the deviant behavior is so much easier to spot; it becomes an even louder signal for you to see. How much are we missing by attempting to measure toxic things in all languages, across all games, across all platforms?
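
To illustrate why measuring the good makes deviance a louder signal, here is a sketch under assumed data (per-player counts of messages and recognized positive interactions; the players and the z-score threshold are hypothetical): flag for review anyone whose positive-behavior rate falls far below the community baseline.

```python
from statistics import mean, stdev

# Assumed per-player tallies of messages and recognized positive interactions.
players = {
    "ada":   {"messages": 200, "positive": 46},
    "bob":   {"messages": 180, "positive": 39},
    "cat":   {"messages": 220, "positive": 51},
    "dan":   {"messages": 240, "positive": 55},
    "grief": {"messages": 210, "positive": 2},  # near-total absence of good behavior
}

rates = {name: p["positive"] / p["messages"] for name, p in players.items()}
mu, sigma = mean(rates.values()), stdev(rates.values())

# Flag players whose positive-behavior rate sits far below the community norm.
for name, rate in rates.items():
    z = (rate - mu) / sigma
    if z < -1.5:  # assumed threshold
        print(f"review {name}: positive rate {rate:.2f} vs community mean {mu:.2f}")
```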

In Conclusion

Simply put, one must approach training an AI much as one would approach training a child. Would you teach a child to behave by solely pointing out what NOT to do? No, of course not.

Thinking of using AI to measure attrition in your org? Invert it! Mine insights to find out why people STAY!  Using AI to measure who is not following the rules or learning? Invert it! Use AI to find out what needs to be in place for people to be intrinsically motivated to follow rules or learn.

Invert, invert, invert. Make damned sure that your model is explainable per this blog.

In lieu of using AI to police people, with threats to measure (and act on) our patterns of possible attrition, our curse words, our lack of productivity, our lack of ‘X’, let’s measure and thus INCENTIVIZE when we help, inspire, collaborate, lift up others and contribute.

Want to learn how to do this at scale??? How to train your org how to think in THIS way? Incorporate Tech Ethics by Design workshops as part of your regular AI Lifecycle Model.

Want to learn more? Reach out!


Phaedra Boinodiris, @Innov8game

Phaedra Boinodiris FRSA has focused on inclusion in technology since 1999. She is responsible for IBM's Trust in AI practice and is the former CEO of WomenGamers.com, having started the first scholarship program in the US for women to pursue degrees in game design and development. She is currently pursuing her PhD in AI and Ethics from UCD in collaboration with NYU.

Beth Rudden, IBM, @ibethrudden

Beth Rudden transforms people and companies through applied AI and the ethical, empowering use of data. She leads large, geographically dispersed advanced analytics and AI teams to develop cognitive solutions that deliver outcomes for IBM’s clients. Beth has received patents on solutions that provide more precise insights, better customer understanding, and faster implementation. Her background in anthropology, language, and data science also helps her develop models to transform the human experience. She is currently leading AI at Scale, a consult-to-operate model offering Trusted AI that delivers business outcomes.

Joahna Kuiper, IBM

Joahna Kuiper has been playing on computers since the days of Zork. Her career has since then been focused on customer experience, emerging technologies, and business strategy. She is presently a director in the Salesforce Services practice at IBM and studying at Saïd Business School, Oxford University (post-graduate research in AI ethics and frameworks). She might still play WoW...