Can AI Really Be Responsible?
This article is an excerpt of the Responsible AI? Report
by Alka Roy, Founder of The Responsible Innovation Project.
Leaders in Tech, Policy, Business, and Social Science Gather to Examine the Myth and Challenges of Responsible AI
On August 20, 2020, The Responsible Innovation Project held an academic and industry roundtable on Responsible AI, raising the question: Can AI really be responsible? The goal was to arrive at a collective understanding of the challenges and strategies for building AI responsibly. The participatory roundtable included multi-disciplinary academic and industry leaders, practitioners, and researchers working on technology and AI, or at the intersection of technology, policy, and the humanities.
Thirty leaders and researchers with expertise in tech, policy, business, and social science/philosophy participated in the roundtable. They shared perspectives shaped by their experiences at academic, industry, open-source, and non-profit institutions including UC Berkeley, Stanford, Harvard, Google, Dell, IBM, and HPE, and by their affiliations with industry groups such as the Linux Foundation, Cognitive World, ISSIP, and the CITRIS Foundation. To allow for sincere and honest conversations, all delegates were encouraged to share their personal perspectives rather than represent their respective organizations.
Responsible AI? Report
Technology for People
The resulting roundtable report contains a collective narrative full of nuance and complexity. The diverse group converged on one meta-theme: the need and desire to put society and people front and center of technology and AI, and the struggle to figure out how.
Some key questions that were raised included:
How do we put people’s well-being and user-centered development at the forefront of AI development rather than treating them as an afterthought?
How do we define machine intelligence and who should be part of that discussion?
Could AI be used to make high-volume, low-risk decisions, while high-risk, less frequent, and specialized decisions are left to people?
How can more people be better informed about the decisions that affect them and understand who is making those decisions?
Who decides and how do we decide where technology should or shouldn’t be used? Are we automating the right things at the right rate?
The Process
The two-hour online roundtable started with an overview of the current challenges and opportunities in emerging technology and AI, and of The Responsible Innovation Framework. The remaining 1.5 hours followed a participatory World Café process to combine the benefits of small-group conversations with cross-pollination of diverse perspectives. At the end of the roundtable, the observations and strategies for building technology and AI responsibly were synthesized using the Collective Narrative Methodology and further edited and grouped into four key areas:
The Value of Diversity of Perspectives
The Challenge of Bias
Will Common Standards and Frameworks Get Us There?
Is Democratizing AI & AI Literacy the Answer?
Collective Strategies and Their Challenges
The strategies and approaches the group began to converge on for building AI responsibly come with their own challenges:
Diversity of Perspectives: If a lack of diversity is embedded in an ecosystem, how do we begin to tackle it?
The Challenge of Bias: Defining bias itself becomes biased. Before we start buying tools to “correct” biases, how do we understand exactly what to fix and how? Who is ensuring that these tools are trustworthy?
Will Common Standards and Frameworks Get Us There?: What is the motivation for a common standard or framework? How flexible will it need to be? How accountable and to whom?
Is Democratizing AI & AI Literacy the Answer?: Sounds good, right? What could go wrong? What does this look like when we don’t have a level playing field? Who gains or controls this democratization? What safeguards do we need? What does AI literacy for all and the marketplace it serves look like? What is being taught? What has it knowingly or inadvertently left out? How is it taught so that we know when not to use it?
When we filter these themes through the lens of the earlier meta-theme of putting people first, the question becomes: How would we innovate, educate, and organize differently if we put people’s and society’s well-being first?
An earlier survey, run in parallel with the roundtable, showed a Trust Gap with both tech companies and government institutions.
When People are the Problem and the Solution
Why hold this roundtable when the adoption of new technologies often outpaces our understanding of how they really work or impact society?
Why bother with more surveys and discussions when the AI trust and ethics field is already crowded with countless principles and guidelines?
This is a hard and messy problem, and there is no one-size-fits-all solution. We cannot solve hard, interconnected problems that were created over time by working in isolation, without a community that helps us shift the culture that created them.
There is a desire to align, but the challenge is that we have to figure out how to trust the same people who may be at the center of creating the problem. We have to understand things all over again and reframe them, which can be hard for experts. Outside of the clear-cut cases of theft and violence, such as cyberattacks and killer robots, no one has it figured out.
That doesn’t mean we do nothing. We haven’t paused tech adoption or the massive digitization of our interactions to wait for consensus and regulation. Why are we waiting for someone else to figure out responsibility and accountability? What if we start including social impact in innovation in that same iterative way? Otherwise, we’ll keep amassing more and more technical and social debt, and it will get harder to dig our way out. The most trustworthy solutions and people are those who admit what they don’t know, design their inquiry with consideration, and develop strategies that can evolve and adapt.
Taking Responsibility
The greatest success and impact of the roundtable was that a set of people with diverse experiences had a thoughtful and respectful exchange. These leaders, practitioners, and researchers were willing to take responsibility for figuring out what needs to change in their own domains. They were also willing to accept that they didn’t have all the answers, to be vulnerable, and to simplify complex concepts for non-technical participants.
It is easier to point at others and look at what they need to change; it is harder to figure out what we should change. When the focus moves closer to us, the challenges amplify. In the post-roundtable survey and discussions, several attendees shared that they had begun to consider the impact of their designs on others and had started asking what a diverse community could look like. They were invigorated by the power of new connections, energy, and perspectives. The most important shift came from technology practitioners who had been interested in the problem but now began to see responsible innovation as their own responsibility.
This shift is an important step in our personal and collective journey of figuring out what can and needs to be done.
Getting to Trust
How do we get to trust? To address the key takeaways from the roundtable (diversity, bias, common frameworks, and wider access to AI and AI literacy) in a trustworthy way, here is what we need:
Build an independent, trustworthy, and accountable community.
Collaborate to put society and people front and center of technology and AI.
Address the trust gap in the people, systems, and institutions that are both creating the problem and capable of solving it.
Reframe how and what we teach people.
Revisit what and how we define and build systems and technology.
We need to take responsibility and be accountable. We need to understand different industries, environments, and ecosystems, and to collaborate across them. We need a strong, collaborative community to lead us to a delightful, trustworthy, dependable, inclusive, open, and safe world.
This article is an excerpt of the Responsible AI? Report, published by The Responsible Innovation Project’s RI Labs. RI Labs is bringing together a multi-disciplinary community to explore the impact of innovation and AI on the way we live, learn, and work. Download or read the full Responsible AI? Report!