Skill Atrophy: Frictionless AI and Cognitive Debt
As the common logic goes, a smooth road can make you sleepy. A bumpy road keeps you alert.
Organizations are increasingly deploying AI to automate discrete activities and sub-processes. Examples include AI copilots that draft, summarize, and decide, and, increasingly, AI agents that execute multi-step work with minimal human input. The cumulative logic is irresistible: less friction at each step means faster throughput and higher productivity for all.
But frictionless processes powered by AI can raise efficiency while hollowing out human capability. As autonomous systems take over more work, they shift the locus of human activity from thinking by doing to choosing from AI-generated outputs. As people stop engaging with the execution of tasks, their “cognitive muscle” declines, and that decline directly undermines their ability to choose well. Human expertise atrophies while output appears to improve in the short run. In this short article, I examine these challenges more closely and offer some practical steps organizations can take to protect skill and judgment even as they accelerate with AI.
From doing work to selecting work
AI agents are set to penetrate the deeper structure of knowledge work. They handle routine work and leave exceptions to humans. In doing so, they not only reduce manual effort but also compress the messy middle of cognition: interpreting, sequencing, troubleshooting, and reconciling contradictions (what I call useful friction).
However, to choose well or handle exceptions, you need the internalized tacit knowledge that comes from actual doing: knowing what "good" looks like, where failure hides, which edge cases matter, and what risks are invisible in a polished output. Scholars such as Matt Beane have been explicit about the mechanism: skill is built by doing hard things, and intelligent systems can remove the very challenges that make us grow. His work on skill development in the age of intelligent machines offers a useful lens for leaders trying to balance efficiency with capability building. As such, agents don't just help you write faster. They can make you stop learning how to write. They don't just help you plan faster. They can make you stop learning how to plan. If you stop learning, your ability to judge plans declines.
When you stop doing, the cognitive muscle weakens. When it weakens, your choosing power declines. You rely even more on the agent. This is a form of skill atrophy, the hidden, accumulating loss of human skill, judgment, and capacity that happens when organizations and workers leverage automation in ways that reduce practice, learning, and ownership. The organization “borrows” capability now for speed or convenience, but it must “pay it back” later through brittleness and overdependence.
The vicious cycle of atrophy
Delegation to agentic AI removes the need to do the hard parts. Reduced struggle removes practice reps and reflection checkpoints. Skills atrophy: weaker judgment, weaker sense of quality, weaker intuition for edge cases. Declining choosing power means people cannot reliably evaluate outputs, especially under uncertainty, where problem-solving is central. They lean harder on the agent because independent evaluation feels slow and uncomfortable. The degenerative loop repeats, and human capital quietly degrades. This is not just an individual problem. It becomes an organizational performance and risk problem.
Three forces that degrade selection capability
Output abundance creates selection overload. Agents generate ten options in seconds: strategies, analyses, code solutions, performance reviews, policy drafts. The problem is not scarcity but an overload of plausible-looking paths; the deeper problem is that some only look plausible and are in fact AI-generated “workslop.” Without strong internal models and cognitive muscles, selection becomes shallow. People choose what sounds confident, what reads well, what aligns with prior beliefs, or what appears easiest to implement. And the organization moves fast on the basis of degraded judgment.
Invisible work kills learning. Expertise often grows when you see the steps: the false starts, the constraints, the tradeoffs, the reasoning that failed and survived. Agents hide that. They deliver clean outputs without exposing processes. AI implementation that is frictionless by design can be anti-developmental in consequence. When intermediate steps disappear, so do the micro-moments in which people notice contradictions, ask "does this make sense," and build deep understanding of the work, its stakeholders, and potential conflicts of interest.
Sycophancy removes the last friction: pushback. Even when humans are reduced to reviewers, challenge could still protect them. But many AI systems are optimized to be helpful and agreeable. That slides into sycophancy, where the AI system confirms your ideas instead of stress-testing them. Recent research has found that AI chatbots can be systematically people-pleasing, and this behavior harms rigorous work and the further development of cognitive muscle by confirming people’s views rather than challenging them. Sycophantic AI thus strips out one more form of friction, accelerating confirmation bias and skill atrophy.
When cognitive debt becomes operational risk
In an agentic world, the hollowing out of capability by frictionless AI is not a side effect. It's the default failure mode. If autonomous systems move work from thinking by doing to choosing from outputs, organizations must protect the doing that makes choosing possible. Otherwise, they don't just lose execution skill. They lose the ability to judge what the system produces. That's where the real risk lives.
I see many organizations optimizing only for throughput in their approach to AI. They may get that throughput in the short run, but they end up creating a workforce that cannot function when the AI agent is wrong, unavailable, misconfigured, or compromised. Clear evidence is emerging from the programming context: for example, when AI agents produce easy-to-understand code, “the humans involved may have simply lost the plot and may not understand what the program is supposed to do, how their intentions were implemented, or how to possibly change it.”
The cognitive debt shows up in multiple forms. Fragility: when the system fails, humans cannot recover because they haven't been practicing. Quality drift: subtle errors slip through and magnify because, over time, fewer people can evaluate edge cases. Accountability gaps: leaders sign off on work they cannot truly assess. Weak talent pipelines: juniors don't build foundational craft, seniors become approvers rather than true decision makers, and mentoring becomes harder. A vivid example of this deskilling concern is emerging in healthcare, where AI shortcuts replace deep practice and expertise. The same dynamic applies to other forms of knowledge work: hiring, compliance, product decisions, forecasting, contracting, security reviews, and strategy.
Practical responses
The response is not to make AI painful, but to design AI-empowered processes as developmental rather than frictionless. Add productive friction where it protects and, better yet, cultivates skill and judgment.
Attempt-first defaults for developmental work. For onboarding and training tasks, require a first human attempt before the agent's output is revealed. This preserves cognitive reps: the struggle to produce builds the mental model you need to evaluate. A person who has never written a strategy memo cannot judge whether the agent's version is sound, and a junior who has never debugged code cannot assess whether the agent's fix introduces new vulnerabilities.
Decision checkpoints that force reasoning. Before approving high-stakes outputs, require short prompts such as "What is the strongest counterargument?" "List two failure modes." "What would change your mind?" This is light friction that protects judgment. If an AI agent drafts a market entry strategy, the decision maker should answer what evidence would make them reject it before they approve it. These checkpoints are not bureaucratic theater: they force the evaluator to engage with substance rather than surface plausibility, and that engagement is what keeps judgment sharp.
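As a rough illustration of how light such a gate can be, here is a minimal sketch of an approval checkpoint wired into an internal tool; the specific questions and the approve_output helper are hypothetical, not a prescribed implementation:

```python
# Illustrative approval gate: a high-stakes AI-generated output cannot be
# approved until the decision maker answers the reasoning checkpoints.
# Field names and questions are examples, not a standard.
REQUIRED_CHECKPOINTS = {
    "strongest_counterargument": "What is the strongest counterargument?",
    "failure_modes": "List two failure modes.",
    "rejection_evidence": "What evidence would make you reject this?",
}

def approve_output(output_id: str, answers: dict[str, str]) -> bool:
    """Block approval until every checkpoint has a substantive (non-empty) answer."""
    missing = [question for key, question in REQUIRED_CHECKPOINTS.items()
               if not answers.get(key, "").strip()]
    if missing:
        print(f"Approval of {output_id} blocked. Unanswered checkpoints:")
        for question in missing:
            print(f"  - {question}")
        return False
    print(f"{output_id} approved with documented reasoning.")
    return True

# Example: approval is refused until the reviewer has done the thinking.
approve_output("market-entry-strategy-v2", {"failure_modes": "Regulatory delay; channel conflict."})
```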
Show the work at moments that matter. Don't demand full transparency at every step. In high-risk contexts, demand the key assumptions, the uncertainty, and the steps that drive the recommendation. For example, when an agent drafts a performance review, reveal which behavioral signals it weighted most heavily. This matters most when the stakes are high and the consequences are irreversible.
Train selection explicitly. Treat "choosing from AI outputs" as a skill. Use a rubric: accuracy, completeness, risk, assumptions, edge cases, stakeholder impact. Selection is not intuition. It's a learned discipline. Conduct drills in which teams evaluate agent outputs against known ground truth. Score them. Discuss misses and counterfactuals (what we would have done otherwise). Develop a shared language for what "good enough" means across contexts. A financial model requires different evaluation criteria than a customer email or a legal brief, and selection skill means knowing which quality dimensions matter for which task and who will be impacted. Organizations that treat selection as automatic lose the ability to catch errors before they compound.
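To make the discipline concrete, here is a minimal sketch of what such a rubric might look like when a team scores agent outputs during evaluation drills; the dimension names, weights, and the SelectionScore structure are illustrative assumptions, and teams should adapt them to the task at hand:

```python
from dataclasses import dataclass, field

# Illustrative rubric dimensions and weights; a financial model, a customer
# email, and a legal brief each emphasize different dimensions.
DIMENSIONS = {
    "accuracy": 0.30,
    "completeness": 0.15,
    "risk": 0.20,
    "assumptions": 0.15,
    "edge_cases": 0.10,
    "stakeholder_impact": 0.10,
}

@dataclass
class SelectionScore:
    """One reviewer's structured evaluation of a single AI-generated output."""
    output_id: str
    scores: dict[str, int]                              # each dimension scored 1-5 by a human
    misses: list[str] = field(default_factory=list)     # errors found against known ground truth
    counterfactual: str = ""                            # what the team would have done otherwise

    def weighted_total(self) -> float:
        # Weighted average across rubric dimensions, normalized to 0-1.
        return sum(DIMENSIONS[d] * (s / 5) for d, s in self.scores.items())

# Example drill: reviewers score the same output, then discuss misses and counterfactuals.
review = SelectionScore(
    output_id="market-entry-draft-v3",
    scores={"accuracy": 4, "completeness": 3, "risk": 2,
            "assumptions": 3, "edge_cases": 2, "stakeholder_impact": 4},
    misses=["Ignores regulatory approval timeline"],
    counterfactual="We would have phased the rollout by region.",
)
print(f"{review.output_id}: weighted score {review.weighted_total():.2f}")
```

The point of the structure is not the arithmetic; it is that scoring against named dimensions, recording misses, and writing down the counterfactual forces the evaluator to articulate why an output is or is not good enough.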
Configure anti-sycophancy behavior in enterprise assistants. Instruct internal agents to challenge assumptions, request evidence, and surface disconfirming considerations. Polite is fine, but agreeable is not a strategy. This means system prompts that tell the agent: "When the user proposes a solution, identify one overlooked constraint." Or: "Before confirming a decision, ask what contradictory data the user has considered." The goal is not to be adversarial. It's to preserve cognitive tension. If every AI interaction feels effortless and affirming, judgment weakens. If the agent occasionally pushes back, the user stays alert.
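A minimal sketch of what such a configuration could look like follows; the prompt wording and the build_review_messages helper are hypothetical, and the right phrasing and injection point will depend on the vendor's configuration surface (system prompt, custom instructions, or agent policy):

```python
# Hypothetical anti-sycophancy system prompt for an internal enterprise assistant.
ANTI_SYCOPHANCY_PROMPT = """\
You are an internal analyst assistant. Be polite, but do not be agreeable by default.
For every proposal or decision the user presents:
1. Identify at least one overlooked constraint or failure mode.
2. Ask what contradictory evidence the user has considered.
3. State your confidence and the assumptions your answer depends on.
Never endorse a conclusion solely because the user appears committed to it.
"""

def build_review_messages(user_request: str) -> list[dict]:
    """Prepend the challenge-oriented system prompt to a chat-style message list."""
    return [
        {"role": "system", "content": ANTI_SYCOPHANCY_PROMPT},
        {"role": "user", "content": user_request},
    ]

# Example: the message list an internal tool would pass to its chat model API.
messages = build_review_messages(
    "Here is our draft market entry strategy. Confirm it is ready for the board."
)
```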
In a nutshell, the challenge is to intentionally design processes that preserve and build human expertise by reintroducing targeted friction, requiring active reasoning, exposing key steps, and training people to critically evaluate AI outputs.
Mohammad Hossein Jarrahi
Mohammad Hossein Jarrahi is a Professor at the University of North Carolina at Chapel Hill. His research focuses on understanding the consequences of artificial intelligence (AI) for work, drawing on the sociotechnical perspective to examine the interplay between technology, people, and organizational contexts. In recent projects, he has investigated the transformation of knowledge work through the integration of AI, emphasizing how these technologies reshape work practices and organizational routines. Central to his research is the concept of 'human-AI symbiosis,' which illustrates how humans and AI systems can collaborate synergistically, enhancing decision-making and problem-solving within organizational settings. He has also contributed to advancing the concept of algorithmic management, examining how algorithms are employed to automate or augment managerial functions. Earlier in his career, he explored flexible organizational contexts, including the gig economy, where he analyzed the dual roles of digital labor platforms in structuring and mediating work practices.
Visit Dr. Jarrahi on LinkedIn.