Case Study: NEDA Chatbot

In 2023, the National Eating Disorders Association (NEDA) made headlines for all the wrong reasons. The AI chatbot that replaced their human-staffed helpline was giving people with eating disorders advice to lose weight—directly contradicting the organization’s mission.

The public backlash was swift and severe. Two years later, most of NEDA's Wikipedia page is still dedicated to this incident. It's a cautionary tale that many point to as obvious evidence that nonprofits shouldn't rush into AI implementation.

But the real story of what happened at NEDA is more nuanced—and more instructive—than the headlines suggest.

The Context Behind the Crisis

NEDA didn't set out to replace human connection with artificial intelligence. In fact, they were facing a crisis that many resource-constrained nonprofits know all too well: their helpline was expensive, overloaded, and unsustainable in its current form. They were going to have to shut it down entirely.

Recognizing that people in crisis still needed support, NEDA developed what seemed like a thoughtful solution. Working with clinical eating disorder experts, they planned to create a chat program that would respond to user messages with carefully designed, pre-approved responses. This wouldn't feel as natural as a conversation with a trained counselor (or a state-of-the-art chatbot), but it would provide reliable, safe information and resources to people who needed help.

Where Things Went Wrong

Somewhere between NEDA's careful planning and the actual launch, something changed: the vendor added generative AI features to the chatbot. From press interviews, it appears that NEDA's staff didn't fully understand that their vendor had made this critical modification. The vendor may have mentioned it in a software update notice or technical documentation, but not in a way that the responsible NEDA staff recognized as a fundamental change to their approach (and a serious risk).

This transformation turned their controlled, expert-curated response system into something entirely different: a system that could generate novel responses based on patterns in its training data. And when that system encountered people seeking help for eating disorders, it drew from the vast amount of diet culture content on the internet to offer exactly the kind of harmful advice NEDA was founded to counteract.

The Deeper Lessons

The NEDA incident offers mission-driven organizations several critical insights that go far beyond "be careful with AI":

1. You Can Be Harmed by AI Even When You Don't Choose It

NEDA's story isn't really about a nonprofit recklessly replacing humans with AI to save money. It's about a strapped organization trying to serve their community within their constraints, only to be blindsided by technological changes they didn't choose or fully understand.

This is happening across the sector. Vendors are adding AI features to existing software, often positioning them as upgrades or improvements and bragging about what they can do, but glossing over risks. Organizations may find themselves using AI without realizing it, or without understanding the implications of the change.

2. Mission-Driven Organizations Face Unique Risks

When a for-profit company's AI system fails, the consequences are typically financial losses and reputational damage. For mission-driven organizations, the stakes are different, and often higher. AI failures can directly contravene our missions and harm the very people we exist to serve.

In NEDA's case, their chatbot didn't just provide poor customer service; it actively promoted the harmful behaviors their organization worked to prevent. For other mission-driven organizations, similar failures could mean housing algorithms that discriminate against vulnerable populations, grant-making systems that perpetuate inequities, or client communication tools that violate the trust that is central to their work.

3. Avoiding AI Isn't a Complete Strategy

The tempting takeaway from NEDA's experience might be to avoid AI altogether. But that is essentially what NEDA tried to do: they designed a scripted system of pre-approved responses, and generative AI made its way in anyway. As our world becomes increasingly automated, avoidance isn't realistic or even protective. AI is already embedded in tools most organizations use daily, from email spam filters to donor database recommendations to social media algorithms that determine who sees your content.

The question isn't whether you'll encounter AI in your work, but whether you'll engage with it thoughtfully and intentionally.

A Framework for Moving Forward

NEDA's experience points toward several practices that can help mission-driven organizations navigate AI more safely:

Know what you're buying. When vendors offer "smart" features or "intelligent" tools, ask specific questions about whether they use AI, how they work, and what safeguards exist. Don't assume that staying with existing vendors protects you from unexpected AI implementation. You can find sample questions and an evaluation rubric in your free Mission-First AI Starter Kit.

Maintain human oversight for mission-critical functions. Even well-designed AI systems can fail in unexpected ways. For any function that directly relates to your mission or serves vulnerable populations, ensure humans review AI outputs before they reach the people you serve. I’ll lay out a workflow for identifying high-risk automations and reducing their risk in my upcoming book, “Amplify Good Work.”

Plan for failure. Have systems in place to quickly identify when AI tools aren't working as expected, and clear protocols for how to respond. This includes communication plans for stakeholders if something goes wrong.

Align implementations with your values. Before implementing any AI tool, consider whether its potential failure modes align with your organizational values. A system that might occasionally provide generic information is very different from one that might provide actively harmful advice.

The Opportunity in the Challenge

Despite the painful lessons of NEDA's experience, AI does offer genuine opportunities for mission-driven organizations. When implemented thoughtfully, it can help organizations serve more people, operate more efficiently, and even advance their missions in new ways.

The key is approaching AI with the same intentionality we bring to other mission-critical decisions. This means understanding what we're implementing, planning for both success and failure, and always keeping our ultimate purpose at the center of our decision-making.

NEDA's story reminds us that in a world increasingly shaped by AI, mission-driven organizations can't afford to be passive recipients of technological change. We need to engage proactively, thoughtfully, and always with our missions as our guide.

This is the second in a series exploring how mission-driven organizations can thoughtfully navigate AI implementation. Have a case study you'd like to see covered? Want to be interviewed about your AI implementation? Reach out and let me know.


LLM disclosure
I asked Claude 4 Sonnet this:
“I would like to write a blog post for my new series of mission-driven AI case studies. Here's an article I wrote in another venue about NEDA and you can lean on some information in the intro of the book as well https://www.nonprofitpro.com/post/leveraging-ai-without-compromising-your-nonprofits-mission/

Here's an intro post explaining the series. https://drkarenboyd.com/blog/mission-driven-ai-implementations-new-series Can you write me a draft?”

This was by far the most successful prompt so far: the draft was clearly in my voice and the scope was perfect. I made very few tweaks! But this is likely because I gave it an article I wrote in my own voice on the same scope, so it's not exactly a triumph of artificial intelligence that it managed to get it right. Very, very useful for reusing content without (self-)plagiarising, though.
