Moderation in All Things—Including AI
Should we eat, drink, and chat with GPT moderately over the holidays?
This is the season of holiday overeating and over-drinking, even though moderate consumption of food and alcohol is widely believed to lead to a better life. Although I sometimes agree with Oscar Wilde in advocating “moderation in all things—including moderation,” I am beginning to think that AI—particularly the generative variety—is no different from food, alcohol, or other good things that become problematic when used excessively.
Don’t get me wrong; using genAI in moderation is fine—even a virtue. It can make us more productive, stimulate our thinking, and reduce our drudgery. I think virtually everybody should learn how to use it.
But three years into our post-ChatGPT lives, it’s becoming apparent that using it too much isn’t good for us humans. Excessive AI use has several distinct aspects and implications. Others and I have written, for example, about the deskilling that can result from overuse. We don’t yet know exactly how much deskilling results from how much AI use, and it will probably vary by context. But we should be thinking now about how much AI is harmful to professionals in various fields, and what we can do to prevent that harm.
It is also increasingly obvious that some people can develop emotional dependence on AI, typically in the form of chatbots. And while it is perhaps better to talk to a chatbot than to talk to no one at all, there is evidence that conversing with AI on personal topics tends to increase loneliness rather than reducing it. I suspect that more studies will further reinforce this finding. If my kids (or more likely in my case, grandkids) were going to use an AI chatbot, I would certainly emphasize moderation.
At work, genAI is a seductive aid to creating content faster and with less mental engagement. The “faster” benefit might be fine, except that AI-generated work is often of low quality and creates work for others who must improve it. I wish that I had coined the term “workslop” to describe this phenomenon. And the “less mental engagement” benefit means that the AI user may not remember much about what has been created. If you believe that work is in part about learning from experience, you may be out of luck if genAI does much of the work.
In fact, I would not be surprised if there is someday a scientific study suggesting that heavy genAI use is associated with increased risk of dementia. I have no evidence for this hypothesis, but there is some logic to it. A Mayo Clinic study, for example, found that using your brain for mentally challenging activities helped prevent or postpone the onset of mild dementia. Creating original content is mentally challenging; asking ChatGPT or Claude to do it is not (much).
My favorite relevant research study (actually a set of studies summarized in this article) involves a group of elderly nuns who were studied for 35 years and who then allowed their brains to be examined after death for evidence of dementia. More education was correlated with lower risk of dementia. Even more to my point, how the nuns wrote seemed to help them avoid dementia. The nuns had written autobiographies before taking their vows, and both the idea density and the grammatical complexity of their writing were found to be associated with less mental impairment in old age. Imagine what might have happened to their brains if they had let genAI do their writing for them!
Of course, this is all supposition on my part, but there is perhaps more reason to believe that using genAI only in moderation can help prevent dementia than there is to believe in the widely advertised over-the-counter drugs that some people take for the same purpose. Just sayin’.
This article was originally published on Dr. Tom Davenport’s Substack here.
Dr. Tom Davenport
Dr. Tom Davenport, a world-renowned thought leader and author, is the President’s Distinguished Professor of Information Technology and Management at Babson College, a Fellow of the MIT Center for Digital Business, and an independent senior advisor to Deloitte's Chief Data and Analytics Officer Program.
The author or co-author of 25 books and more than 300 articles, Tom helps organizations transform their management practices in digital business domains such as artificial intelligence, analytics, information and knowledge management, process management, and enterprise systems.
He's been named:
- A "Top Ten Voice in Tech" on LinkedIn in 2018
- The #1 voice on LinkedIn among the "Top Ten Voices in Education 2016"
- One of the top 50 business school professors in the world in 2012 by Fortune magazine
- One of the 100 most influential people in the technology industry in 2007 by Ziff-Davis
- The third most important business/technology analyst in the world in 2005 by Optimize magazine
- One of the top 25 consultants in the world in 2003