Protecting Artists from AI: My Interview with Time 100 Professor Ben Zhao


When I interviewed Professor Ben Zhao from the University of Chicago, I didn’t just learn about computers and AI. I learned about kindness, fairness, and standing up for people who are being hurt by technology. Professor Zhao was named in Time Magazine’s AI 100 List in 2024 for his amazing work, but what impressed me most was how much he cares about helping artists.

Professor Zhao studies something called adversarial machine learning, which means he looks at how AI can be tricked or attacked and how to stop that. But he also uses his research to protect real people, especially artists whose work is being copied by AI programs.

He told me that AI models learn by looking at millions of pictures, songs, or stories made by real humans, but most of the time, the artists never gave permission. “These AI models are trained on people’s work without consent or credit,” he said. “That’s wrong.”

Professor Zhao explained that many artists spend years learning how to draw, paint, or make music. But now, AI can copy their styles in minutes. “Someone can take 20 of an artist’s pictures, train an AI model, and make it draw in that same style,” he said. “It’s like wearing someone’s skin as a costume.”

Because of this, some artists are losing their jobs. “I’ve seen people who spent 20 or 30 years doing art now driving Ubers to feed their families,” he said. Hearing that made me feel sad because those people worked so hard for their talent, and now AI is taking it away.

To fight back, Professor Zhao and his team created tools called Glaze and Nightshade. These tools help protect artists’ work. Glaze makes tiny changes to an artwork, invisible to people, so AI models can’t learn to mimic its style. Nightshade goes further and “poisons” images that AI scrapers collect, so models that train on them learn the wrong things.
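To give a feel for the basic idea, here is a toy sketch of image cloaking. This is my own simplified illustration, not Glaze’s actual algorithm (the real tool computes carefully targeted perturbations against an AI model’s feature extractor); it only shows the core principle that an image can be changed so little that a person cannot see the difference.

```python
import numpy as np

# Toy illustration of a cloaking perturbation (NOT Glaze's real method):
# nudge every pixel by a tiny random amount so the image looks identical
# to a human but is numerically different for a machine.
rng = np.random.default_rng(0)

# Stand-in for an artist's image: a 64x64 RGB array with values 0-255.
artwork = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)

epsilon = 2.0  # maximum change per pixel, tiny next to the 0-255 range
perturbation = rng.uniform(-epsilon, epsilon, size=artwork.shape)
cloaked = np.clip(artwork + perturbation, 0, 255)

# No pixel moved by more than epsilon, so the change is imperceptible.
max_change = np.abs(cloaked - artwork).max()
print(f"largest pixel change: {max_change:.3f}")
```

In the real Glaze tool, the perturbation is not random: it is chosen so that an AI model sees the artwork as belonging to a different style, which is what stops style mimicry.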

“Glaze has been downloaded millions of times,” he said. “But what matters most isn’t the numbers, it’s each artist we can help.” He told me that many artists message him every day saying, “Now I can post my art again.” That made me realize how one person’s work can make a huge difference to so many people.

I asked him if he thinks people should learn about ethics, meaning right and wrong, when they study AI. He said yes, definitely. “This generation of computer scientists failed because we didn’t study ethics,” he said. “If we understood it better, we could talk about what’s right and what’s not right.” He thinks that too many people in big tech care only about money or the “future of AI,” but they forget about the artists losing their jobs right now.

When I asked what advice he would give to students like me, he said, “Do what drives you. Don’t follow someone else’s path.” He said that making a difference doesn’t mean you have to be a professor; you can do it in your own way. He told me that luck also plays a big part. “Sometimes you’re just at the right place at the right time,” he said. “But what matters is that you care and try to make things better.”

Talking to Professor Zhao made me realize that AI isn’t just about technology; it’s about people. His work isn’t about getting rich or famous. It’s about helping others and protecting creativity.

As he said at the end, “It’s never enough. There’s always one more artist waiting to be protected.” That sentence really stuck with me.


Gurnoor Singh Dang

Gurnoor Singh Dang is a speaker, writer, and CEO at Gurnoor Academy. Gurnoor is also a contributor at Cognitive World. Connect with Gurnoor on LinkedIn.