AI for the Rest of Us: two years later

Image: Beth Rudden

Coauthors: Beth Rudden, Phaedra Boinodiris

When we released "AI for the Rest of Us" two years ago, we stood in a liminal space, observing as artificial intelligence seemed poised to transform not only Silicon Valley but the entirety of human experience. In hindsight, we recognize that our predictions have been realized in both expected and surprising ways.

The elegant rebellion of lean AI

When we first wrote this book, we argued that massive datasets and supercomputers are unnecessary for creating meaningful AI. Innovations like DeepSeek have confirmed our stance: smaller, more focused models can outshine the outdated "bigger is better" mentality. Think of it as building a race car: precision triumphs over sheer horsepower.

Ontologies: your AI's rulebook

Structured knowledge maps, often referred to as "ontologies," remain essential. As we discussed in Chapter 10, they offer a logical foundation for AI, boosting the intelligence of chatbots and enhancing the safety of medical tools. The remedy lies close to the cause: ontologies are the antidote to AI's tendency toward opacity.
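To make the "rulebook" idea concrete, here is a minimal sketch of an ontology as a machine-readable "is-a" hierarchy. The domain and all concept names are illustrative, not drawn from the book:

```python
# A toy ontology: each concept maps to its parent ("is-a") concept.
# All names here are hypothetical examples, not a real medical vocabulary.
ontology = {
    "ibuprofen": "nsaid",
    "aspirin": "nsaid",
    "nsaid": "pain_reliever",
    "pain_reliever": "medication",
    "medication": "thing",
}

def is_a(concept: str, ancestor: str) -> bool:
    """Walk the is-a chain to check whether `concept` falls under `ancestor`."""
    while concept in ontology:
        if concept == ancestor:
            return True
        concept = ontology[concept]
    return concept == ancestor

# The explicit hierarchy lets a system justify an answer step by step
# (ibuprofen -> nsaid -> pain_reliever -> medication) instead of
# producing an opaque statistical guess.
print(is_a("ibuprofen", "medication"))  # True
print(is_a("ibuprofen", "aspirin"))     # False: siblings, neither subsumes the other
```

The point is not the ten-line implementation but the property it demonstrates: every conclusion can be traced back through named, inspectable relationships.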

Beyond the veil of big data

Big tech continues to hide the training processes of AI models (What data? What biases?). Moreover, providing unrefined data to GPT is like expecting a toddler to grasp physics from Wikipedia. Transparency is essential: if we cannot observe how AI learns, how can we place our trust in it?

As we emphasized in Chapter 5, "the end of opacity or ambiguity means that we can hold corporations responsible for their impact." This truth has only grown more urgent.

The rise of agentic AI

The latest buzzword? Agentic AI: systems that operate autonomously in the physical or digital realm. They:

  • Take actions (e.g., adjust factory machines)

  • Use tools (e.g., consult databases)

  • Decide processes (e.g., pick the best algorithm)

However, with fewer humans involved, governance becomes crucial. Without safeguards, autonomous AI could amplify biases or make irresponsible decisions.
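The three capabilities above, plus the governance point, can be sketched as a tiny agent loop. The tools, the rule-based "decide" step (standing in for a model's planning call), and the function names are all assumptions made for illustration:

```python
# A minimal sketch of an agentic loop with hypothetical tools.
def query_database(task: str) -> str:
    return f"records matching '{task}'"          # illustrative "use a tool"

def adjust_machine(task: str) -> str:
    return f"machine recalibrated for '{task}'"  # illustrative "take an action"

TOOLS = {"lookup": query_database, "actuate": adjust_machine}

def decide(task: str) -> str:
    """Decide the process: a real agent would delegate this choice to a model."""
    return "lookup" if "find" in task else "actuate"

def run_agent(task: str, audit_log: list) -> str:
    tool_name = decide(task)             # decide the process
    result = TOOLS[tool_name](task)      # use the tool / take the action
    audit_log.append((task, tool_name))  # governance: record every step
    return result

log = []
print(run_agent("find overdue invoices", log))
print(run_agent("reduce line 3 pressure", log))
print(log)  # the audit trail a governance team would review
```

The `audit_log` is the smallest possible safeguard: with fewer humans in the loop, every autonomous decision should at least leave a reviewable trace.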

The ethical imperative

You can't have great AI without doing AI right:

  • Ethics over compliance: Laws are the floor, not the ceiling. An AI can be "lawful but awful" (e.g., biased hiring tools)

  • Funded, empowered leaders: Organizations need teams with budgets and authority to:

    • Audit AI models (in-house and vendor systems)

    • Align AI with organizational values (e.g., fairness, transparency)

    • Train staff to build/buy AI responsibly

  • Accountability contracts: Make vendors prove their AI meets your standards with clauses for spot-check audits

There's no "easy button": Ethics demand hard work, literacy, and courage.

Diversity as intelligence

In Chapter 12, we stressed that "the culture required to curate AI responsibly includes a growth mindset, multi-disciplinary teams, and diverse and inclusive leadership." Rosabeth Moss Kanter showed back in 1977 that skewed teams produce skewed outcomes, and the lesson applies directly to AI: if your AI team looks like a frat house, your tech will, too.

We need every voice—firefighters, nurses, teachers—to shape AI. Bast AI, for example, boosted ROI at Maryville University by collaborating with educators, not just coders.

Data as truth-teller

AI reveals how power works. When a CEO's diversity program hides discriminatory algorithms or a politician's words clash with policies, data tells the truth. Believe what you see.

As we wrote in Chapter 5, hold corporations "accountable for ensuring what they deploy aligns with principles for trust and transparency."

Wisdom in an age of automation

AI isn't a shortcut for critical thinking. As Socrates taught, learning involves struggle. Once you've wrestled with ideas, AI becomes a synthesizer, transforming your thoughts into reports or lesson plans. "AI is great if you are already wise," says our good friend Erin Schnabel.

Teachers should be rock stars (with salaries to match) because guiding humans to think is irreplaceable. We remain "hundreds of millions of people short of creating AI representative of the human race."

Aligning authority with responsibility

Too many AI projects fail because responsibility and authority do not align (e.g., asking interns to pilot a $1M system). Leaders must take ownership of outcomes, and we need "AI janitors" to:

  • Clean biased data

  • Make models show their work

Chapter 11 outlines the "Roles and Responsibilities for Responsible AI," yet many organizations still have not created the necessary positions.

The path forward

In our book, we "attempted to teach that we need YOU." This call has become increasingly urgent. AI is rapidly advancing in areas such as software and education. Let's ensure this revolution includes everyone—guiding it wisely, driven by ethics, and shaped by genuine human expertise.

What divides us pales in comparison to what unites us. We borrow our world from our children, so we must be better stewards of AI. The stories that keep us awake at night are nothing compared to our resilience as a species.

We need a billion more humans who are literate in AI systems. We need more people involved in the creation of AI. We need you; your story is crucial for developing effective AI.


Beth Rudden has over 20 years of IT and data science experience, and is a global executive leader, a former IBM distinguished engineer, chief data officer, and practicing cognitive scientist. She has a proven track record of driving transformation for clients through the design and delivery of operationalized AI systems that are sustainable, ethical, and can be adopted by every human. She was recognized as one of the 100 most brilliant leaders in AI Ethics in 2023. As the CEO and Chairwoman of Bast AI, a software company that enables humans to experience trusted AI, she is on a mission to redefine the human experience for our shared future.

Phaedra Boinodiris is IBM Consulting's global leader for Trustworthy AI. She is a prolific public speaker and author of the book 'AI for the Rest of Us'. She is on the IBM Academy of Technology's leadership council, a Fellow of the RSA, and serves on the advisory boards of several academic institutions. She is currently pursuing a PhD in AI and Ethics thanks to the European Union. She won the United Nations Woman of Influence in STEM and inclusivity Award in 2019, received the Social Innovator Award by IBM in 2018, received the 2014 Kenan Flagler Young Alumni Award, became a Fellow of the American Democracy Institute in 2011 and in 2007 was recognized by Women in Games International as being one of the Top 100 Women in the Games industry.