Do You Trust a Machine?
Like most things in life, the answer is probably: “it depends.” We would probably trust a machine to maintain room temperature, find us a restaurant, or even fly a plane. But how about providing healthcare services? Or driving a car? Trust is one of the most fundamental values in doing business and delivering value to customers, and artificial intelligence is the technology most heavily debated for its ethical concerns and related trust issues. That is why truth and trust in technology are crucial.
According to a study released by Infosys, 71% of executive business decision-makers said the rise of AI in the workplace is inevitable. However, most participants also believe that employees (90%) and customers (88%) have concerns about adopting AI. One reason is that people still focus on doomsday scenarios, like killer robots and massive job automation. In a recent global customer survey released by Pegasystems Inc., about 25% of respondents worried about robots taking over the world. From the many people my coauthor Michael Ashley and I spoke to for our upcoming book, Own the A.I. Revolution: Unlock Your Artificial Intelligence Strategy to Disrupt Your Competition, we heard some common fears and concerns:
- AI will take our jobs
- AI cannot substitute for the human/empathy of our work and life
- AI can be weaponized
- AI cannot react to the unexpected
- AI will conquer the world
These concerns are valid and should be taken seriously. At the same time, we must balance them against the tremendous opportunities AI offers both businesses and society.
To help people understand the benefits of AI, organizations need to adopt a problem-driven mindset and put the appropriate tools and people in place. In addition, we need policies, procedures, and plans that ensure the ethical use of the technology and build trust in the reliability of an AI solution.
Put the Problem First
AI is not the solution to all problems. Building products does not start with thinking about AI but with finding a meaningful problem that, once solved, adds value for the customer or society. Many organizations fall into the trap of adding AI to their strategy without first defining the problem in detail. Alternatively, they think of something cool to do but have no specific problem to solve. Unfortunately, these approaches do not provide value. The first step is to understand the problem and the drivers behind it. Only once the problem is specified can we evaluate whether and how AI can help solve it.
Engage the Right Talent
We need people with the right skills and knowledge, and this means more than having great technologists. Successful AI projects require business acumen, domain experts, and a variety of other stakeholders engaged in the development process. We also need diversity on the team to bring the different perspectives needed to create a best-fit solution. This is an important step in establishing truth and trust in technology. AI solutions cannot be built in isolation from the people and social circumstances that make them necessary. When that isolation occurs, failure is the usual outcome, as the Amazon AI recruiting tool demonstrated.
Neil Sahota (萨冠军), contributor, is an IBM Master Inventor, United Nations (UN) Artificial Intelligence (AI) subject matter expert, and professor at UC Irvine. With 20+ years of business experience, he works with clients and business partners to create next-generation products and solutions powered by AI. His work experience spans multiple industries, including legal services, healthcare, life sciences, retail, travel and transportation, energy and utilities, automotive, telecommunications, media/communication, and government. Moreover, Neil is one of the few people selected for IBM's Corporate Service Corps leadership program, which pairs leaders with NGOs to perform community-driven economic development projects. For his assignment, Neil lived and worked in Ningbo, China, where he partnered with Chinese corporate CEOs to create a leadership development program.