The Hidden Costs of AI: Deskilling
On a dark night in 2009, Air France Flight 447 fell from the sky into the Atlantic Ocean. The tragedy wasn't caused by an engine failure, but by a failure of human-AI collaboration. When the plane's airspeed sensors iced over and the autopilot disconnected, the pilots, who had become accustomed to the autopilot handling most of the flight and smoothing their inputs, failed to recognize and correct an aerodynamic stall. They had all the information they needed to save the plane, but years of over-reliance on automated systems had eroded their ability to act in a novel, high-risk, and scary situation.
As we integrate generative AI into our organizations, we face a similar, albeit less dire, risk: deskilling.
What Is Deskilling, and Why Should You Care?
Deskilling is the reduction in skills required to do a job. It doesn't sound that scary at first—who doesn't want their job to get easier? But it can harm workers in two distinct ways.
First, your skills can get rusty. If an AI tool handles your budget forecasts every quarter, you stop practicing the analytical thinking that built those forecasts by hand. That's fine until you need to switch jobs, move to an organization that doesn't use the same tools, or face a moment when the AI gets something wrong and you're too accustomed to trusting it to notice.
Think of the Air France pilots: they had thousands of flight hours, but so many of those hours were spent monitoring an autopilot that when the system disconnected, they couldn't access skills they'd once had.
Second, the skill demand of your occupation can drop. A job that once required years of training might now be done by someone with a few months of experience and strong AI prompting skills. That can open doors for some workers, but it can also push down wages for everyone in that field once the barrier to entry is lower.
For mission-driven organizations, deskilling carries a third layer of risk that goes beyond individual workers. Social workers, teachers, community organizers, and caseworkers develop their expertise through relationship-building, cultural competency, and situational judgment. Those skills get harder to exercise when too much of the work is mediated by AI systems and the underlying social dynamics shift without their involvement. Not only can this cause those skills to atrophy; it can also compromise the quality of service your organization provides to the communities you exist to serve.
It's Not Just About Replacing Jobs
Public conversation about AI and work tends to focus on one dramatic scenario: robots replacing people. That will happen in some cases. But the more widespread impact is subtler and plays out across several dimensions.
Task reallocation shifts time from automatable tasks to ones that require distinctly human capabilities. In theory, this is great: your staff spend less time on data entry and more time building relationships. In practice, the productivity gains can create pressure. If each worker now produces more, the organization (or the broader labor market) may need fewer workers to meet the same demand. Or the cost of the work may drop, pulling wages down with it.
Skill polarization can emerge alongside task reallocation. Organizations start to need people with highly specialized expertise at the top and basic oversight capability at the bottom—but far fewer people in the middle.
Task augmentation offers a more hopeful path. Ethan Mollick, in his book Co-Intelligence: Living and Working with AI, describes two collaborative approaches. In the centaur approach, you divide tasks between yourself and the AI, then stitch the results together—half human, half not, like the mythical creature. In the cyborg approach, you collaborate closely with AI throughout the task itself. If you can master the cyborg approach so that AI helps you produce higher-quality work in the same amount of time (rather than just doing the same work faster), you can benefit from AI without contributing to the cheapening of your labor.
Then there's task elimination, where AI takes over entire categories of work. Maybe one person now "supervises" the AI that handles what an entire department used to do. That one remaining role is probably lower-paid, not necessarily because the person is less skilled, but because many workers are now competing for a handful of positions where there used to be many.
The Oversight Paradox Makes It Worse
Here's a cruel irony: the better an AI system performs, the harder it is for humans to catch its mistakes.
When AI gets things right 97% of the time, the humans reviewing its output develop what you might call "vigilance fatigue." They stop paying close attention. And then, in exactly the novel situation where the AI is likely to fail—because it's never seen this pattern before, or because the data shifted—the human reviewer has already stopped looking carefully.
Picture this: it's 5:25 pm on a Friday. You're staying late on a deadline. On top of all that, last month you were put in charge of checking an AI system that decides whether applicants are eligible for a program. Out of hundreds of recommendations this month, you've seen three false positives and zero false negatives. You have 13 more cases to review before you can go home, and they're all labeled "ineligible." How much attention are those 13 cases going to get?
This is the oversight paradox. The very conditions that make AI useful—handling routine decisions quickly and accurately—are the same conditions that make meaningful human oversight difficult. And deskilling compounds the problem: a nonprofit financial analyst who gets accustomed to AI-generated budget forecasts may gradually lose confidence in their own analytical abilities, making them even more dependent on the system they're supposed to be checking.
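If you want to see how quickly vigilance can collapse, here's a toy back-of-the-envelope simulation. Every number in it is made up for illustration (the 97% accuracy from the scenario above, plus assumed rates of attention decay and recovery); it's a sketch of the dynamic, not a model of any real review process.

```python
import random

random.seed(0)  # fixed seed so the toy numbers are reproducible

AI_ACCURACY = 0.97   # "gets things right 97% of the time"
CASES = 300 * 12     # assumption: ~300 reviewed cases a month for a year
DECAY = 0.995        # assumption: each routine, correct case erodes attention a bit
FLOOR = 0.25         # assumption: even a checked-out reviewer catches 1 in 4 errors

vigilance = 1.0      # chance the reviewer catches an AI error right now
caught = missed = 0

for _ in range(CASES):
    if random.random() < AI_ACCURACY:
        # The AI is right. There's nothing to catch, and attention drifts lower.
        vigilance = max(FLOOR, vigilance * DECAY)
    else:
        # The AI is wrong. Does the reviewer notice?
        if random.random() < vigilance:
            caught += 1
            vigilance = 1.0  # catching a real error snaps attention back
        else:
            missed += 1

print(f"AI errors caught: {caught}, missed: {missed}")
```

Under these made-up parameters, a meaningful share of the AI's rare errors slip through, and the misses tend to cluster after long error-free streaks, which is exactly the Friday-evening pattern described above.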
Mission-Driven Organizations Have Unique Vulnerabilities—and Unique Strengths
When a for-profit company's AI goes wrong, the fallout is usually financial losses and reputational damage. For mission-driven organizations, the stakes cut deeper. We risk not only financial losses and the trust our missions depend on; people who need help might not get it.
Workers in mission-driven organizations are also distinctly vulnerable. Many have chosen their careers because they care about impact, sometimes sacrificing higher salaries to do so. They're already financially stretched, which makes job displacement or wage pressure from automation especially painful. And the entry-level positions that AI could automate first are often the ones that provide economic mobility to the very populations many nonprofits serve. An organization dedicated to "empowering individuals and strengthening communities" that automates away those positions is working against its own values.
But here's the good news: mission-driven organizations also have more flexibility than profit-maximizing companies to implement AI in ways that protect workers. A profit-driven firm is already producing as much as the market will buy and faces intense pressure to cut costs. A public behavioral health service or direct aid organization, on the other hand, is generally producing as much as it can and still not coming close to meeting demand. That means productivity gains from AI can go toward serving more people rather than cutting staff.
So What Do You Do About It?
The answer isn't to avoid AI—that's neither possible nor desirable. It's to be deliberate about how you use it. Here are a few principles drawn from the framework in Amplify Good Work:
Start with augmentation, not replacement. Instead of asking "What jobs can AI replace?", ask "How can AI make our people more effective at the work only humans can do?" The goal is to eliminate tedious, repetitive tasks so your staff can focus on relationship-building, creative problem-solving, and other distinctly human contributions.
Invest AI savings in your impact and people, not just cost reduction. When AI makes your work more efficient, use those gains to do more good. If AI helps caseworkers handle administrative tasks more quickly, they can serve more clients or provide more intensive support. If AI streamlines accounting, your finance team can provide better analysis to support program decisions. Resist pressure from funders or board members who see AI purely as a cost-cutting tool.
Rethink the work itself. This is the stance from Amplify Good Work that I find most powerful. Rather than bolting AI onto existing processes and hoping for the best, step back and redesign roles, workflows, and tools so that AI strengthens human expertise instead of eroding it. A nonprofit executive who no longer spends hours on progress reports can invest that time in one-on-one conversations with major donors. A caseworker freed from routine documentation can put more effort into direct client interaction and advocacy.
Train for calibrated trust. Your staff need practical skills for working with AI—not just how to use the tools, but how to evaluate their outputs. Can they recognize when an AI recommendation doesn't align with their professional judgment? Do they know what factors the AI might have missed? Training should help staff develop the ability to trust AI when appropriate while maintaining healthy skepticism about its limitations. Questioning AI outputs is a professional skill, and your organization should create a culture that rewards it.
Protect against deskilling deliberately. Make sure your staff keep practicing the skills that AI is currently handling. Rotate people through tasks so nobody loses touch with the underlying work. Build in regular exercises where staff perform key tasks without AI assistance—not because the AI can't do it, but because they need to stay sharp for the situations where the AI can't.
What kind of deskilling are you most worried about, and what’s your plan?
This post draws on ideas from Amplify Good Work: Effective, Ethical AI for Mission-Driven Work by Karen Boyd, PhD. For free resources to jumpstart effective, ethical AI use at your organization, download the Mission-First AI Starter Kit.
I asked Gemini 3 Thinking:
I'd like to write a blog post about Deskilling and what organizations can do about it.
Discuss how over-reliance on AI can erode a team's ability to handle high-risk or novel situations, hamstring workers in their career, limit organizational growth and flexibility, and compromise institutional learning/knowledge retention. There's a lot of information about deskilling in the job quality and stability chapter, and about the importance of organizational capacities in Part 3. Can you draft a post?
But, I only really liked the introduction. So I asked Claude Opus 4.6:
Can you finish the following blog post based on the book content?
and I pasted in the first paragraph. Claude went quite long, so I did a lot of editing.

