Discussion Questions

For your class, book club, or work team!

Questions are grouped by purpose and labeled with chapter numbers. Please feel free to submit others that you found thought-provoking :)

SET 1: PERSONAL REFLECTION

Use for: individual pre-reading or journaling, opening a session, low-stakes warm-up. These ask people to examine their own assumptions before engaging with others.

1. Where do you start? (Preface / Chapter 1) Before reading this book, where did you fall on the spectrum between "AI evangelist" and "refusing to touch it on principle"? Did anything shift your position?

2. The "jagged frontier" in your own experience. (Chapter 3) The author cites Ethan Mollick's concept of the "jagged frontier" to describe how LLMs perform brilliantly at some tasks and fail unpredictably at others that seem equally difficult. Describe a time an AI output surprised you, either by being much better or much worse than you expected. Were you able to improve the output by changing your approach?

3. Effort as signal. (Chapter 8: Effort and Craft) Boyd opens the Effort and Craft chapter with examples of things that feel cheap when you realize a human didn't make them: a "handwritten" card printed by a machine, a charcoal drawing that's actually a photo filter. She argues that effort is a costly signal: it tells stakeholders you care enough to invest real time and skill. Where in your work is that signal of effort essential to maintaining trust? Where is effort being spent on tasks that don't send any meaningful signal at all, and could that effort be reclaimed?

4. The human skill that mattered. (Chapter 2) The author describes human strengths that AI cannot replicate: reading subtle, cultural signals; building trust through vulnerability and real mistakes; saying "no" when it matters. Think of a moment in your work where one of these distinctly human skills made a real difference. Could AI have played any supporting role, or would it have gotten in the way?

5. What you won't automate. (Chapter 24 / closing reflection) After reading this book, what is one thing you've decided not to automate? Why?

6. Your own sycophancy trap. (Chapter 2) Boyd describes how LLMs tend to agree with and reinforce whatever the user says, and how this interacts with human confirmation bias: we like to find information that validates what we already believe. How do you approach and interpret output from LLMs to mitigate this risk?

SET 2: APPLY IT TO YOUR ORGANIZATION

Use for: leadership retreats, strategy sessions, teams evaluating whether and how to adopt AI. These ask participants to map the book's frameworks onto their specific workplace.

7. The NEDA test for your context. (Chapter 1) The book argues that mission-driven organizations face higher ethical stakes than for-profit companies when AI goes wrong, because the people they serve often have limited recourse. Think of a specific population your organization serves. What would a NEDA-scale mistake look like in your context?

8. Contextual integrity applied. (Chapter 5: Privacy and Security) Boyd cites Helen Nissenbaum's concept of "contextual integrity": we expect data to be used in ways that fit the context in which we shared it. You accept that your bank knows your finances, but you'd be uncomfortable if it tracked your location. Think of a specific type of data your organization collects. What would a contextual integrity violation look like if that data were fed into an AI system?

9. Where have you drifted? (Part 2 cross-cutting / all values chapters) The author introduces "Drift" as the default stance where no one makes a deliberate decision and AI just creeps in. Looking back across your organization, where has Drift happened? What would it take to convert that drift into a deliberate stance?

10. The process triage. (Chapter 19: Identify and Prioritize Use Cases) Boyd's process for identifying automatable tasks starts with your mission statement: you list your business processes, break them into tasks, and triage each one using four questions. Is it part of a mission-critical process? Does it threaten a core value? What do stakeholders think? And if everything goes wrong, how bad is it? Pick one business process and walk through this triage. Which tasks get a green flag, and which get a red flag?

11. Efficiency masking harm. (Chapter 20: Measuring Impact) The author identifies a common trap: confusing operational efficiency with mission impact. A food bank might process twice as many applications while inadvertently screening out eligible families because of algorithmic bias. The operational numbers look great; the mission is being undermined. Can you think of a scenario at your organization where improved efficiency could mask a mission-level problem?

12. The vendor debrief. (Chapter 21: Evaluate and Select Technology Solutions) Boyd offers specific advice for interviewing vendors: bring your values list, focus on scenarios where things go wrong rather than demos where everything works, and be wary of vendors who claim their AI is "objective" or who can't explain when you should not use their product. Think about the last technology purchase your organization made. Which of these red flags were present, and which evaluation steps were missing?

13. Your single point of failure. (Chapter 11: Infrastructure Dependence and Service Reliability) Boyd warns about what happens when AI tools you depend on go down, change their terms, or disappear. Pick a technology your organization currently relies on for a core function. If that vendor had a prolonged outage starting tomorrow, what would happen to the people you serve within the first 48 hours? Do you have a documented fallback, or would you be improvising?

SET 3: DEBATE AND DISAGREEMENT

Use for: lively book club sessions, seminar discussions, or any setting where you want people to take opposing positions. These questions have no single right answer, and reasonable people will disagree.

14. Speed vs. equity. (Chapter 15: Equity and Justice / GiveDirectly case study) GiveDirectly used satellite imagery and AI to identify hurricane-damaged neighborhoods and distributed cash aid six times faster than their previous process, but only through a smartphone app that some eligible people did not have. When speed and reach trade off against equity, how do you decide which matters more? Is there a point where "faster for some" becomes "unfair to the rest"?

15. Values in tension. (Part 2 cross-cutting) Several values in Part 2 pull in opposite directions. Sustainability argues for using AI to replace energy-intensive human processes; Effort and Craft argues that human labor signals care. Accessibility argues for using AI to serve more people; Social Connection warns against displacing human relationships. Pick a pair of values that feel most in tension for your organization. How would you resolve or manage that tension?

16. Why does biased tech persist? (Chapter 16: Error and Biased Error) ProPublica found in 2016 that the COMPAS recidivism prediction system was biased against Black defendants, yet the tool was still in use nearly a decade later. Why do you think a tool with known, documented bias can persist in use for so long? What would it take to actually stop using it?

17. The unintended message. (Chapter 7: Authenticity and Trust / Chapter 8: Effort and Craft) The author describes a scenario: a community member asks a thoughtful question at a town hall about a proposed development and later discovers the city's response was generated by AI. The implicit message: "Your concerns weren't important enough for a human to think through." How might your organization talk to its constituents about its AI use? Are there any uses that shouldn't be disclosed?

18. Deskilling vs. efficiency. (Chapter 14: Good Jobs) Boyd distinguishes "replacement" (AI eliminates an entire position) from "deskilling" (AI reduces the skills required for a job, which can lower wages and reduce professional growth). Deskilling is harder to see. Think about a specific role at your organization. How might AI change what that person does day to day? What would your organization lose if the person in that role gradually stopped developing expertise because AI handled the challenging parts?

19. The disclosure question. (Preface / Chapter 7: Authenticity and Trust) Boyd used AI to help write this book and published every prompt she used. Should authors, grant writers, and organizational leaders be expected to disclose when and how they use AI? If so, how much detail is enough, and how much is too much? Does your current or proposed disclosure policy have loopholes, or does it encourage people to change how they use AI just to make the disclosure look better? Are those incentives aligned with your values or misaligned?

20. The original sin. (Chapter 10: Ownership and Intellectual Property) Many people find it difficult to overlook the "original sin" of LLMs: profiting off of others' work, taken as training data without permission or compensation. Others think it's straightforward fair use. Still others think it was wrong, but it's over and done with, and your next LLM prompt doesn't add to the harm. What do you think?

21. Compensate as a cop-out. (introduction to Part 2) The author describes the "Compensate" stance, in which an organization does not stop causing harm but attempts to balance it out with other behavior. When does compensation genuinely address the harm, and when does it function as a permission structure to continue causing it?