Introducing the free Mission-First AI Starter Kit!

Over the past few years, I've watched mission-driven leaders wrestle with how to approach AI in organizations built on trust, values, and service to vulnerable communities.

AI can help nonprofits serve more people with limited resources, government agencies streamline their processes, and social enterprises scale their impact. But there are risks, too. When your work depends on community trust and you serve people who have limited recourse when things go wrong, the stakes of a botched AI implementation are genuinely high.

With uncertainty at the federal level of the US government, citizens of the US and allied nations need stable, effective institutions. I built the Mission-First AI Starter Kit because I want to help mission-driven organizations amplify their impact without compromising their values.

The Problem with "Just Wing It" (And "Wait and See")

In my research and consulting work, I've seen two equally problematic approaches emerge:

The "Move Fast and Break Things" plan would have mission-driven organizations jump into AI tools without any guardrails, risking privacy violations, hallucinated grant proposals, or insensitive content. When you're serving vulnerable populations and stewarding public trust, "break things" isn't an acceptable strategy.

The "Wait for Perfect Clarity" plan would have you endlessly researching and debating while their teams either ignore the prohibition (creating shadow AI use with zero oversight) or fall behind competitors who are using AI to serve more people with the same resources.

Neither approach serves the communities these organizations exist to help.

Why Generic AI Policies Fall Short for Mission-Driven Work

Most AI guidance is written for corporations focused on efficiency and profit. But mission-driven organizations face fundamentally different considerations:

  • Higher ethical stakes: When you're working with vulnerable populations, AI errors aren't just inconveniences that can be solved with an apology and a gift card—they can cause real harm to people with limited recourse.

  • Trust-based relationships: Your donors, volunteers, and community members choose you partly because they trust your values, your people, and your processes. AI missteps can feel like a betrayal of these critical relationships.

  • Resource constraints: You need approaches that work with the real-world budgets and staffing of organizations that aren't oriented around profit.

  • Mission alignment requirements: It's not enough for AI to be fast and powerful; it has to actually support your values and advance your mission.

Building Bridges, Not Walls

The Mission-First AI Starter Kit came from a simple question: How can mission-driven organizations start exploring emerging technology safely while they build the knowledge to create truly customized approaches?

Think of it as training wheels for organizational AI use. The templates and frameworks provide immediate structure and guardrails while staff gain the experience needed to make informed decisions about their specific context.

The kit includes:

  • A policy template with built-in rationales for key restrictions, so your team understands the "why" behind the rules

  • Safe start prompts with common failure modes highlighted, so you know what to watch for

  • Vendor evaluation questions and a scorecard designed around mission-driven priorities, not just technical features

  • Communication templates for talking transparently about your AI use with your staff, donors, community, and board

But here's what's crucial: these are starting points, not endpoints.

Why Context Still Matters Most

Every mission-driven organization operates in a unique ecosystem. A rural health clinic, an urban arts nonprofit, and a national advocacy organization all face different regulatory environments, serve different communities, and have different risk tolerances. Their AI strategies should reflect those differences.

The starter kit helps you begin safely, but your long-term success depends on developing approaches tailored to:

  • Your specific mission and values: How can AI use align with your theory of change? What choices or errors could undermine your work?

  • Your community's needs and concerns: What cultural considerations and trust factors should influence your AI implementation?

  • Your operational reality: What's actually feasible given your staffing and resources?

  • Your regulatory environment: What compliance requirements apply to your work?

This is why my upcoming book "Amplify Good Work" (out in September 2025) focuses on developing contextually grounded AI approaches rather than one-size-fits-all solutions, and on building sensitivity to AI’s biggest risks so you can make confident decisions as the technology changes. The real work happens when you move from generic best practices to mission-specific strategy.

Ready to take those first steps? Download your free Mission-First AI Starter Kit: practical tools to help you start exploring AI safely while you develop the strategy that's right for your organization. Downloading the toolkit will NOT put you on a mailing list. If you want, you can select “sign up for news and updates” to get more tips and techniques for safe, effective LLM use in your mission-driven work straight to your inbox, plus free updates to this toolkit as I build out more resources in response to your feedback!

LLM disclosure:
“Can you draft a blog post to explain why I built this and how it will help mission-driven leaders build their own, contextually grounded policies and practices around LLMs in their organizations? You can refer to past chats to do this.” <— I don’t know whether Claude (Sonnet 4 in this case) can refer to past chats. I thought I’d test it, but I didn’t get any useful answer from this test.

“It seems like you invented an email! Can you replace that with a more general description of my desire to help my audience leverage AI to scale their impact without undermining their values?”

lol

This prompt was much too general; the outcome required a lot of editing.
