The 13 Values Your Organization Should Consider Before Adopting AI
When people talk about AI's impact on artists, they usually talk about two things at once: creative ownership ("they trained on my work without permission") and job security ("now they don't need to hire me"). Those are both real concerns. But they're different concerns — and if you lump them together, the solutions you reach for won't actually address both.
For example, if your worry is jobs, you could take a Compensate approach: use the money you save with AI to commission a mural from a local artist. That's a meaningful response to the jobs question, but it doesn't do anything about the intellectual property issue. Conversely, training an image model on rights-cleared, freely licensed images addresses the ownership question, but it doesn't help artists make a living.
I run into this kind of conflation constantly — when I work with mission-driven organizations, when I read tech ethics research, and when I make the mistake of mentioning I work with AI at a bar. People's concerns are real, but the values underneath them get tangled together in ways that make it hard to plan, communicate, or act.
In Amplify Good Work: Effective, Ethical AI for Mission-Driven Work, I try to untangle them. The heart of the book walks through thirteen values that AI can threaten or support in mission-driven work. I'm not going to tell you what to do — your organization is different from the next one — but I want to give you the language to name what's actually at stake so you can make confident, specific decisions.
Here's a quick tour of all thirteen.
A Note on Stances
Before we dive in, one concept runs through every chapter: the idea that organizations can take different stances toward AI, depending on the value at stake. These range from Refuse (keep AI out entirely) to Wait and See, Constrain, Compensate, Rethink the Work, and Shape the Ecosystem. There's also Drift — which is what happens when nobody makes a deliberate choice and AI just creeps in. Different values, and even different parts of your work, may call for different stances. The goal is to choose on purpose.
1. Privacy and Security
You might assume hackers focus on for-profit companies. But terabytes of sensitive data were stolen from Doctors Without Borders, the Red Cross, Save the Children, and UNICEF in recent years. Mission-driven organizations hold donor records, medical information, case files, and more — and the people behind that data trust you to protect it.
Privacy and security are related but distinct. Privacy is a person's control over information about themselves. Security is the lock on the door that enforces that boundary. AI systems interact with your data in different ways depending on the type of system — from conversational LLMs to AI-integrated applications to custom-built tools — and each carries different risks. The chapter walks through strategies like data minimization, compartmentalization, and on-premises deployment, and includes a case study of how the Mayo Clinic built a "data under glass" approach that lets algorithms visit the data without the data ever leaving their secure environment.
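To make one of those strategies concrete, here's a minimal sketch of data minimization in Python: stripping obvious identifiers out of text before it ever leaves your environment, say, before sending a case note to an external LLM API. The patterns and the redact_pii helper are my own illustration, not code from the book, and a real deployment would lean on a vetted PII-detection tool rather than a handful of regexes.

```python
import re

# Illustrative patterns only: real data minimization calls for a vetted
# PII-detection tool and human review, not a handful of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text
    leaves your secure environment (e.g., before an external LLM call)."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Reach Maria at maria@example.org or 555-867-5309."))
# -> Reach Maria at [EMAIL] or [PHONE].
```

The point of the pattern, whatever tool enforces it, is that the model only ever "visits" what it strictly needs.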
2. Environmental Protection and Sustainability
I get asked about the environmental impact of AI after almost every talk I give. The concerns are legitimate — data centers use electricity and water — but the discourse often misses important context. Individual LLM use is a relatively small footprint, comparable to a Google search or printing a page, and dwarfed by common office activities like video calls or brewing coffee. The oft-repeated claim that each ChatGPT query uses a full bottle of water appears to overstate the impact by roughly 30x relative to the source it seems to originate from.
That doesn't mean we shouldn't care. It means we should focus where our efforts count: choosing efficient models, auditing vendors, and — above all — pushing for systemic changes like renewable energy for data centers and non-potable water for cooling. The chapter helps you put the impact in perspective and figure out where to direct your energy.
3. Authenticity and Trust
I was listening to a podcast recently when an ad started playing. Something about the voice was just slightly off. I couldn't tell you what the ad was for, because I spent the whole time thinking: a human lost a voice acting job; could they not convince a real person to deliver this message; what else do they use AI for?
The book distinguishes three types of authenticity: representational (do your communications accurately depict your work?), relational (do your interactions feel personal and sincere?), and operational (do your practices match your stated values?). AI can threaten all three — but it can also support them, for example by freeing up staff time so they can invest in the personal, human interactions that build real trust.
4. Effort and Craft
When a donor receives a heartfelt thank-you letter, part of what makes it meaningful is the effort behind it: someone thought about what would resonate, took the time to personalize it, and cared enough to write it. AI can produce a similar output, but the signal of care changes.
Beyond signaling, there's a practical risk: skills atrophy without practice. A grants manager who relies on AI for proposals may find their own writing weakening — and the situations where AI can't help are exactly the unusual, high-stakes ones where you need your sharpest skills. The chapter explores how to use AI to redirect effort toward high-value work rather than eliminate it, and when a Constrain or Wait and See approach might protect critical skill development.
5. Ownership and Intellectual Property
Some critics call the mass scraping of creative works without permission the "original sin" of AI. Large language models and image generators were trained on text and images that creators never consented to share. Whether you find that framing persuasive or not, it affects how people perceive your organization's AI use — especially if you serve or work alongside creative communities.
The chapter separates the ethical questions (should creators be compensated? is AI-generated content truly original?) from the practical ones (can you copyright AI-derived work? what's the reputational risk?) and explores how organizations can move from an extractive relationship with AI to a more reciprocal one: choosing tools with clear licensing, being transparent about AI use, and reinvesting savings into local creative communities.
6. Information Integrity and Reputation
For mission-driven organizations, reputation represents years of trust built through ethical work and community engagement. AI-enabled deepfakes, voice cloning, prompt injection attacks, and disinformation campaigns can disrupt that trust in ways that are hard to recover from.
Unlike the other values in the book, these threats come from bad actors — people outside your organization (or disgruntled insiders) — so they can't be fully prevented by internal policies. The chapter focuses on preparation and response: how to anticipate attacks, how to build verification systems, and how to recover when something happens.
7. Infrastructure Dependence and Service Reliability
In 2022, GiveDirectly used a custom AI system called "Delphi" to deliver hurricane aid in Puerto Rico, getting cash to 90% of recipients the same day they applied. That's an inspiring example of AI-powered service delivery — but it also raises a question: what happens when the infrastructure fails?
AI systems depend on servers, data pipelines, network connectivity, and APIs that can all break, sometimes during the same disaster that's creating the need. The chapter covers building systems that degrade gracefully rather than fail catastrophically, including offline modes, manual fallback procedures, and vendor contracts that address uptime and backup plans.
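To illustrate the idea (and not GiveDirectly's actual system, which the book describes), here's a minimal sketch of one manual fallback procedure: if a hypothetical model call fails or times out, the application is routed to a human review queue instead of being dropped.

```python
import queue
import random

def call_eligibility_model(app: dict, timeout_seconds: int) -> str:
    """Stand-in for a hosted model call. In a real system this is where
    an API outage or timeout would surface, often mid-disaster."""
    if random.random() < 0.2:  # simulate the endpoint failing 1 time in 5
        raise TimeoutError("model endpoint unreachable")
    return "approved" if app.get("need_score", 0) >= 50 else "needs_review"

# Applications land here for human processing when the AI path fails.
manual_review_queue: queue.Queue = queue.Queue()

def score_application(app: dict) -> str:
    """Degrade gracefully: route failures to people, don't drop them."""
    try:
        return call_eligibility_model(app, timeout_seconds=5)
    except Exception:
        manual_review_queue.put(app)  # the manual fallback procedure
        return "queued_for_manual_review"

print(score_application({"applicant": "A-103", "need_score": 72}))
```

The names and thresholds here are made up; the design choice that matters is that the failure path is written down before the outage, not improvised during it.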
8. Inclusion and Accessibility
AI offers some of its clearest promise in accessibility: real-time transcription for Deaf and hard-of-hearing participants, instant translation into dozens of languages, image recognition that helps vision-impaired people read printed text. These aren't just nice features — they can transform who can participate in your work, both as staff and as the people you serve.
But AI also creates new barriers. Models trained on limited data may perform poorly for people with non-standard speech patterns, rare languages, or atypical communication styles. Automated systems can strip away the human flexibility that made services accessible in the first place. The chapter draws on universal design principles and emphasizes involving disabled people and language-minority communities early in AI design and testing.
9. Governance and Accountability
In 2018, six organizations sued the Dutch government over its System Risk Indication (SyRI), an automated welfare fraud detection system. In 2020, a Dutch court ordered the system halted, ruling that it was too opaque, collected too much data, and failed to strike a fair balance between privacy and social benefit.
Mission-driven organizations answer to communities, board members, regulators, donors, and staff — not just shareholders. That means AI governance isn't optional: you need clear policies about who decides when and how AI is used, how decisions are audited, and how people affected by AI-driven decisions can seek recourse. The chapter covers governance frameworks that match the scope and accountability structures of mission-driven work.
10. Good Jobs
The AI risk that keeps me up at night is its potential impact on workers. Widening wealth inequality is already eroding social cohesion and slowing economic growth. AI doesn't just threaten to replace jobs outright — it can also "deskill" them, reducing the training and judgment required and, eventually, the wages attached.
For mission-driven organizations, this is especially complicated. Your workers often chose meaningful work over higher pay, making them financially vulnerable to displacement. Your entry-level positions may provide economic mobility for the very populations you serve. And automating those roles can directly contradict your stated mission. The chapter walks through how task reallocation, augmentation, and elimination each play out differently in mission-driven settings, and makes the case for Rethinking the Work — using AI-driven efficiency to do more with the same team rather than doing the same with fewer people.
11. Equity and Justice
AI can perpetuate existing inequalities through biased outputs and decisions, widen digital divides between well-resourced and under-resourced organizations, and channel attention toward technological fixes instead of addressing root causes of injustice. Predictive policing algorithms, automated eligibility screeners, and surveillance systems disproportionately affect marginalized communities.
But AI can also advance equity. It can help organizations track the impact of policy changes across demographic groups, proactively identify people eligible for services they don't know about, and allocate resources based on need rather than who has the capacity to apply. The chapter explores both sides and emphasizes that treating inequality as a technical problem — without challenging the systems that created it — can actually impede justice.
12. Error and Biased Error
When a retail company's AI makes an error, someone gets a wrong product recommendation. When a social services organization's AI makes an error, someone might lose access to housing, healthcare, or food.
Not all error is created equal. Answers can be wrong, and they can also be biased — recommending different things based on demographic characteristics. Error rates themselves can be biased too, like facial recognition systems with higher error rates for people with darker skin. The chapter covers how to monitor error rates, set acceptable thresholds, and build feedback loops that catch errors before they cause harm, with special attention to ensuring that the people who bear the consequences of mistakes have clear ways to report problems and receive corrections.
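As an illustration of that kind of monitoring, here's a minimal sketch that compares error rates across two groups and raises a flag when the gap crosses a threshold. The toy audit log and the 1.5x disparity threshold are assumptions for the example, not figures from the book.

```python
from collections import defaultdict

# Toy audit log of (group, prediction_was_wrong) pairs. In practice this
# comes from logged decisions joined with human-verified outcomes.
audit_log = [
    ("group_a", False), ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", True),
]

MAX_DISPARITY = 1.5  # assumed threshold: flag when one group's error rate
                     # exceeds 1.5x another's

def error_rates(log):
    totals, errors = defaultdict(int), defaultdict(int)
    for group, wrong in log:
        totals[group] += 1
        errors[group] += wrong
    return {group: errors[group] / totals[group] for group in totals}

rates = error_rates(audit_log)
worst, best = max(rates.values()), min(rates.values())
if best > 0 and worst / best > MAX_DISPARITY:
    print(f"Biased-error alert, rates by group: {rates}")
```

However you implement it, the alert is only half the loop; the other half is a human who investigates and a channel for affected people to contest the result.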
13. Social Connection
Humans are wired for connection, and generative AI systems can displace and distort the relationships we form with each other. Staff lose the casual collaboration that builds trust when AI handles the drafts, the troubleshooting, the feedback. Donors feel less valued when appreciation is automated. Community members stop engaging when communication becomes something an organization "produces" rather than something it builds together.
People can also develop parasocial relationships with chatbots — a risk that's especially acute for people who are already socially isolated. The chapter makes the case that the biggest threat here isn't a dramatic failure but quiet Drift: slowly routing check-ins, thank-yous, and feedback through bots because they're faster and less awkward, until one day you realize the human connections that powered your mission have thinned out.
What Now?
These thirteen values aren't the only ones AI implicates, but they're ones I see come up again and again in my work with mission-driven organizations. Your organization will weigh them differently based on your mission, your stakeholders, and the communities you serve.
If you want to go deeper — into the case studies, the practical mitigation strategies, and the step-by-step process for building a values-driven AI strategy — that's what the book is for. Amplify Good Work is available [LINK]. You can also grab the free Mission-First AI Starter Kit at drkarenboyd.com, which includes an AI use policy template, a safe-start prompt pack, a vendor evaluation checklist, and transparency communication templates.
I'd love to hear which values are top of mind for your organization. Drop me a note — I read every one.

