“AI for Good” Is Not a Get-Out-of-Tough-Decisions-Free Card
Every nonprofit leader I talk to wants to use AI for good. That instinct is fantastic. But "AI for Good" is sometimes used as a “get out of tough decisions free card,” and I want to talk about it.
Often, “AI for Good” doesn’t grapple with ethical impacts at all; it’s straight techno-solutionism: “This technology is powerful, and it will fix things, like, necessarily.” Sometimes, it does reflect concern, but there’s nothing more behind it than "We're a mission-driven organization, so we are using AI for a good cause." Perhaps a more honest use of the label is “we are trying to do good with it, so we hope that makes up for some of the potential harm.”
But I really think we can do better if we thoughtfully confront the ways AI conflicts with our specific values.
Something like these:
"We looked at how this tool handles client data, decided the privacy risk was too high for intake, so now we only use it for internal drafts, and we added a quality assurance process we didn't have before."
“We’ve judged that our use of the tool does more good than harm because it allows us to serve more people and improve service quality. However, we recognize that it trained on artists’ work without their permission and might harm their ability to make a living. We’ve decided to restrict its use for creating images and set aside some money to commission human-produced art for some highly visible documents we might otherwise have used a free template for, like our annual report, and we will highlight those artists on our website.”
“We are a sustainability-focused organization, so we make a point to train people how to compare the carbon impact of different activities and give people the space to choose the less harmful method, even if it takes longer. We put out a one-pager comparing LLM use to different activities so people can make informed choices.”
I should be honest about where I stand. I am not an AI enthusiast. I did my PhD on AI because I was afraid of its economic impact, years before the generative tools arrived. I think developing generative AI was and remains a risky bet, one that is more likely than not to create economic disruption that we, especially in the United States, are not prepared for. If I had a button that would put this technology back in the bottle, I think I might press it.
But I can't, and neither can you. Global competition, market forces, and the speed of adoption mean that organizations are already using these tools. If American companies, or even the US and our allies, somehow all agreed to stop building it, China and others would continue to build it. For better and worse (in my opinion!), the cat is out of the bag.
So it’s up to us as mission-driven workers to decide how to use it in ways that align as well as they can with our organization’s missions and our personal values.
What’s Not Working
When I look at the nonprofit and government landscape, I see three common paths that aren’t working well.
The first is no policy at all, which in practice means a free-for-all: every staff member figures it out on their own, with no shared understanding of risk. Everyone operates on their own understanding of the organization’s goals, the technology, and the legal environment.
The second is an outright ban, which sounds conservative but, in practice, usually just means a secret free-for-all.
The third is bringing in a vendor or consultant to decide for you. Vendors are necessarily biased toward selling you their product, and many of the independent consultants I've encountered are AI enthusiasts who don't engage deeply with the ethical tradeoffs. This leaves you open to the risk of undermining your mission, and it leaves AI skeptics in your organization feeling unheard and fearful of losing their jobs.
None of these approaches help an organization make deliberate, values-driven, human-centered decisions about a technology that touches privacy, sustainability, equity, job quality, and more.
What I Actually Do (and Why)
Through Our AI Futures Lab, I work with organizations that need a policy, training, or an implementation plan that takes ethics seriously. My goal is not to sell anyone on AI, and I will not tell your organization to force employees to use it. My goal is to help them see the ethical landscape clearly and build their own judgment about how it aligns (and doesn’t!) with their mission.
I think of this as harm reduction. I spent three years after my PhD doing other work before I could find a position on AI that felt honest. For a long time, I could barely have a productive conversation with someone who was enthusiastic about the technology, because I felt so strongly about the risks.
But over time, I came to terms with the cat’s location vis-à-vis the bag and realized that, in addition to critical voices highlighting the risks and other problems with the technology, we need people to help organizations that are implementing it do so thoughtfully. With some time in the “critical voices” trenches, a decade in mission-driven work, and a knack for explaining science and technology in an accessible way, one of those people could be me.
I want to emphasize that critical researchers who document AI's harms are essential in my view. I know many of them won’t like what I am doing, but I love what they are doing :) I cite their work and it helps me convince enthusiasts that unrestricted, free-for-all AI use has some serious downsides.
Seven Stances (Not Just "Use It" or "Ban It")
One of the problems with the current conversation is that it treats AI adoption as binary: you're either in or you're out. But there are lots of options in between. Mission-driven organizations face dozens of distinct ethical questions — about privacy, sustainability, equity, labor, accessibility, error, accountability — and the right response to each one may be different.
So I developed a framework of seven deliberate stances an organization can take toward AI. Crucially, you can apply different stances to different values and different use cases at the same time. A domestic violence shelter might refuse to let client data touch any cloud-based AI tool, constrain AI use to internal drafts with mandatory human review, and compensate for its energy footprint by dedicating part of its technology budget to community climate projects.
The stances range from Refuse (this conflicts too deeply with our values — we won't use it here) to Shape the Ecosystem (we'll use our purchasing power and our voice to push for change upstream). In between are Wait and See, Constrain, Mitigate, Compensate, and Rethink the Work — each a different way to engage with AI deliberately rather than by default. I've written about each stance in detail here, and the book walks through how to apply them value by value, with case studies and practical examples.
What matters is that choosing a stance — any stance — replaces what I call Drift: the thing that happens when no one decides on purpose. A vendor turns on new AI features by default. A few staff members start using whatever tools they find online. You notice a conflict between AI and your values, but you don’t feel you can refuse to use it, so you ignore your instincts. Drift is not a stance. It's the absence of one.
The Point Is the Work
"AI for Good" is not a thing you are. It's a thing you do — continuously, deliberately, value by value. There is no harmless, risk-free AI use. Every implementation involves tradeoffs: in privacy, in sustainability, in the quality and equity of the work, in the jobs and communities affected. The question is whether you make those tradeoffs on purpose, with your mission and values in front of you, or whether you let them happen to you by default.
I wrote a book to help mission-driven organizations do that work. Amplify Good Work: Effective, Ethical AI for Mission-Driven Work walks through each of these values — privacy, sustainability, equity, labor, accessibility, error, accountability, and more — with case studies, practical guidance, and examples of what each stance looks like in practice.
It won't tell you what to decide. It will help you develop your own perspective on your specific mission and AI, a perspective that will serve you as the technology continues to change.
—
LLM disclosure
I asked Claude Opus 4.6(!) ”I am thinking about writing a blog post about the idea of "AI for good." Here are some thoughts on it from a conversation I had with a friend:” and I pasted in a conversation I had with a dear friend and excellent critical technology researcher, Michaelanne.
The draft it gave me was actually great. I made extensive edits to it, and because it resonated so effectively with me, it became a manifesto :) I attribute that to 1) recent model improvements (Claude has long been better at writing, and Opus 4.6 especially so), and 2) the combination of a thorough book reflecting my perspective and the more personal point of view and voice from my conversation with a friend.
One of those edits I made with the help of the same Claude model (in a different chat and outside of the project): “I've written this blog post. Can you tighten up the 7 stances section? I already have a stances blog post I can point to, but I do want to make a point about how important the stances are to the thoughtful AI ethics approach and point out that it's all laid out and applied in the book.”