Deskilling Healthcare: Will doctors and nurses remember how to care for patients?

Image: Depositphotos, AI generated

There has been a rise in concern about de-skilling and loss of critical, thorough thinking in various fields because of AI. I have written about aspects of this issue, and Princeton professor Kwame Anthony Appiah wrote about it more generally in this Atlantic article.

I am most concerned about this issue in healthcare—perhaps obviously, because de-skilled clinicians could endanger human health. Another reason for concern is that healthcare professionals are under considerable pressure to be more productive—at least in the U.S. The average patient visit with a physician is only about 15 minutes, and there are temptations galore to take AI-enabled shortcuts. If AI can easily summarize patient visits in clinical notes, diagnose diseases and communicate with patients about them, and help navigate highly bureaucratic insurance processes, why shouldn’t doctors and nurses take advantage of these capabilities?

My Family’s Experience with AI Shortcuts

My wife and I have experienced a number of issues with such shortcuts (or, in one case, I think, the absence of them) in the medical field over the last several weeks.

1. The first event was with my primary care physician (PCP) in California. I had experienced an upper respiratory infection for several weeks that just wasn’t getting better. I was beginning to think that I had a sinus infection, but wasn’t sure if it was viral or bacterial. So I emailed my PCP about it, asking whether I should take something or see a doctor (I was far away from him at the time). He almost immediately emailed me back, and the message began:

Tom [in a different font than the rest of the message],

Thanks for reaching out about the patient with a lingering cough after a recent viral illness.

The emailed response went on to tell me that I should just wait out the infection and cough. But it certainly appeared that my doctor had used generative AI to prepare his response to my message, and that he hadn’t bothered to edit out the incriminating introduction. I asked him, “So what AI model are you using?” and he admitted to using OpenEvidence, the fast-growing “medical information platform.” I don’t object to my doctor using that tool, but I would like to think that he read it closely enough to edit out the part revealing that the message didn’t really come from him.

2. Shortly thereafter, I was at a conference on AI in Boston. One of the speakers was Dr. John Halamka, formerly a prominent medical CIO in Boston and now head of the Mayo Clinic Platform, which gathers patient information from a variety of sources and incubates healthcare startups, particularly in AI. One of its incubated companies is OpenEvidence. In his talk Halamka described a glowing future for healthcare AI, and mentioned that physicians were becoming increasingly dependent upon it. So I asked a somewhat cheeky question when audience Q&A time came around: “How does this AI-enabled future relate to the events of November 13, 2002?” I guessed that Halamka would know what happened on that date, and he did—even remembering the exact time of day it happened. He was CIO of the Beth Israel Deaconess hospital system at the time, and the entire computer network at the hospital went down—for four days. During that time the hospital and its employees couldn’t use any electronic medical records, order labs or procedures electronically, or use any sort of decision support. Were it not for some old paper forms—some accounts suggest they were taken out of a dumpster—and some old modems, all care processes might have broken down. Halamka said in answer to my question that those events point out the importance of “business continuity” approaches, but what happens when people forget how to diagnose diseases and write up clinical notes?

3. My wife and I had patient checkups a few weeks later at a different health provider in Massachusetts. As at many clinical practices these days, the physician there uses “ambient AI” to capture all conversation in the exam room. Fine, I thought, and we agreed to the use of it. However, the clinical notes for our visits were both inaccurate. My wife’s notes said that she had chronic fatigue, sleep problems, and a child enrolled at Northeastern University—none of which she had said anything about. My notes said that I was visiting the doctor for a chronic cough (yes, the one I mentioned above—although it had passed several weeks earlier, and I made the mistake of telling this doctor about the AI-generated diagnosis message) and for shoulder pain, which the doctor pulled from my California patient records but which had been resolved more than a year ago. As a result of this text in the clinical notes—which appeared not to have been reviewed by the doctor—our visits were coded as medical problem visits rather than routine checkups, which resulted in $1,200 of medical bills. We’re still trying to get that problem reversed.

4. My last story was not a medical shortcut (to my knowledge) but an admission that they are a problem. I had a routine colonoscopy a couple of weeks ago (I recommend the pills for the awful prep process rather than the exceedingly nasty liquid). It went fine, but as I was chatting with my gastroenterologist before the procedure I asked him if he planned to use AI to identify potential polyps. He was quite familiar with the de-skilling issue revealed in this article, which found that gastroenterologists’ ability to identify polyps declined after only 3 months of AI use. He said he’s worried about it as AI use among doctors performing colonoscopies grows quickly. My doctor’s operating room did have a screen using AI, but he said he tried not to rely on it to identify polyps. Of course, I was sleeping during the procedure, so I have to take his word for it.

So How to Avoid De-Skilling?

These were all relatively minor problems, and our health was never in danger. But it’s easy to see how big problems could result. What can we do to avoid AI-related de-skilling? Some degree of regular recurrent training in non-AI-based clinical decision-making—perhaps making use of simulations—may be necessary. Doctors already have continuing education requirements (particularly to retain board certification in specialty areas), but those requirements don’t generally address this de-skilling issue.

There is another field that has a similar problem—aviation. De-skilling because of automation in aircraft has already been blamed for at least one horrible airline crash. Commercial pilots do have regular recurrent training, but their simulators often use the same automated systems that real planes do. The de-skilling issue is probably greater for general aviation pilots, who typically have no recurrent training and whose planes are becoming increasingly automated over time.

In both cases, people can die if AI or automation leads to the loss of critical skills by those in charge of safety. Given the rate at which AI is advancing, professional associations in these safety-critical domains had better move fast to address the problem.

This article was originally published on Dr. Tom Davenport’s Substack here.


Dr. Tom Davenport, Cognitive World Think Tank Member

Dr. Tom Davenport, a world-renowned thought leader and author, is the President’s Distinguished Professor of Information Technology and Management at Babson College, a Fellow of the MIT Center for Digital Business, and an independent senior advisor to Deloitte's Chief Data and Analytics Officer Program.

The author or co-author of 25 books and more than 300 articles, Tom helps organizations transform their management practices in digital business domains such as artificial intelligence, analytics, information and knowledge management, process management, and enterprise systems.

He's been named:
- A "Top Ten Voice in Tech" on LinkedIn in 2018
- The #1 voice on LinkedIn among the "Top Ten Voices in Education 2016"
- One of the top 50 business school professors in the world in 2012 by Fortune magazine
- One of the 100 most influential people in the technology industry in 2007 by Ziff-Davis
- The third most important business/technology analyst in the world in 2005 by Optimize magazine
- One of the top 25 consultants in the world in 2003