Taking a Stance on Values in AI

Mission-driven organizations are usually very good at naming their values. Environmental protection, equity, accessibility, social connection, and accountability, for example, are central to a lot of our missions, after all. The harder part comes later: deciding what those values actually require when making decisions, including when implementing AI.

Values alone do not tell you how to act. The same value can justify very different decisions depending on context, risk, and role. Two organizations can both care deeply about sustainability and still make very different choices about AI use.

This is where stances become useful as a way to think and talk about how we put our values into action.

Here’s how I describe stances in Amplify Good Work:


• Refuse. “We don’t use AI here,” or “We don’t use AI for this part of the work.” You might ban AI from interacting directly with the people you serve to ensure that every output is staff-reviewed…

• Wait and See. Here, an organization pauses AI implementation in a sensitive area until technology, conditions, or information about either change. For example, they may wait until they have stronger legal protections, better contracts, or a safer technical option.

• Constrain.  You might choose to use AI, but only inside a tight box with guardrails. A domestic violence shelter might require on-premises implementation for safety planning or case notes: any data leak could endanger people, and they aren’t willing to take the risk of sending any of it off site.

• Compensate.  This approach is like buying carbon offsets for a flight. A nonprofit that relies heavily on cloud-based software, including AI, might dedicate part of its tech budget to community climate projects or worker training in fields where automation hits hard.

• Rethink the Work. Redesign the process, roles, or tools so the values clash softens. Once you’ve identified a values clash with AI in your work, step back, involve staff and community members, and change the work so that AI plays a smaller, less risky role or is no longer needed at all.

• Shape the Ecosystem. Use your voice (and help organize community voices!) to motivate change upstream. In conversations with vendors, elected officials, and other high-power people, emphasize your concerns and how they can help. For example, ask your vendor about their energy sourcing and data center cooling, or talk to your government about environmental regulations.

All of these are valid options, but there may be better and worse options for your organization, mission, and community. Stances help organizations move from abstract commitments to concrete, explainable choices.

Importantly, you do not have to take the same stance everywhere. Different values, use cases, or risk levels can justify different responses within the same organization. The problem is not disagreement or variation; the problem is Drift—letting AI spread into consequential work without deliberate policy, practice, or technical choices that protect your mission-critical values.

Stances in Action: Sustainability

After the “thank you” slide in every deck I make for a public talk or private training, I put slides about sustainability. Concerns about AI’s energy and water use are widespread, and I always get asked about them. Organizations frequently agree that environmental impact matters, yet struggle to decide what that belief should mean in practice. That gap makes sustainability a perfect example of how stances work.

Refuse

An organization taking a Refuse stance decides that certain AI uses conflict with its sustainability commitments and keeps them out entirely.

For example, an environmental nonprofit working on water scarcity might refuse to use energy-intensive, always-on generative AI tools for nonessential tasks, especially if those tools undermine trust with communities already facing environmental harm. The refusal can be targeted, not absolute: an organization might decide that using AI as a thought partner can replace a video call (a much more resource-intensive activity) but Refuse to use it to draft emails, where the output isn’t worth the environmental cost.

Wait and See

An organization taking a Wait and See stance might delay adopting AI tools until vendors provide clearer disclosures about energy sourcing, water use, or efficiency improvements in data centers. During this period, the organization actively monitors peer experiments, regulatory guidance, and infrastructure changes rather than experimenting on its own.

This stance only works when someone owns the pause and revisits it intentionally. Otherwise, delay quietly slides into Refuse (or staff Drift into using AI anyway once the rationale for waiting fades).

Constrain

Using a Constrain approach, AI use is permitted but bounded. An organization might require staff to use smaller, more efficient models by default; implement a prompt library to reduce repeat attempts; or require human review before resource-intensive tools are used.

Here, sustainability shapes how AI is used rather than whether it is used. Constraints keep environmental costs visible and prevent incremental expansion without review.

Compensate

A Compensate stance accepts that some environmental costs will occur and commits resources to counterbalancing them.

An organization that relies heavily on AI for its mission-critical operations might dedicate funding or staff time to climate resilience work, energy audits, or community-based environmental projects. I always use carbon offsets for flights as an example of how Compensate works; you could do that, too!

Rethink the Work

Rethink the Work shifts attention from tools to systems.

Taking a Rethink the Work stance, an organization might redesign workflows to eliminate resource-intensive steps altogether or make thoughtful choices about how to reinvest time, money, and effort to support sustainability. This is a great place to ask an LLM to do a quick, high-level audit of your organization’s (or job role’s) operations and identify high-impact changes you can make to protect the environment.

Shape the Ecosystem

A Shape the Ecosystem stance recognizes that many environmental impacts happen upstream, beyond any single organization’s control. Organizations, especially large ones or those with organizing capabilities, can use procurement rules, contracts, coalitions, and public commitments to push vendors toward renewable energy use, water reuse, transparency, and efficiency improvements.

This stance focuses less on individual restraint and more on systemic change.

Drift

Drift is a common risk to the sustainability value. Drift occurs when AI use expands by default, environmental costs remain invisible, and no one claims responsibility for trade-offs.

Last Thoughts

Values play out in technology in a much more nuanced way than “Never use AI, it kills the environment,” and stances help us talk and make decisions about those complications. Sustainability is only one value among many that AI implicates. The same structure applies to privacy, equity, governance, and the 9 other values I review in Amplify Good Work. I look forward to hearing from you: what stances have you taken on AI?
