Prompts used to create “Amplify Good Work”
This page discloses the prompts I used to help me create “Amplify Good Work.” I'm sharing them for transparency and to demonstrate that using AI as a tool requires judgment, effort, and intention. You’ll see that it’s a lot more work than “Hey Claude, write me a book,” but also a lot more useful.
As I recommend in the book and to anyone who will listen, I did not use any of the text provided by an LLM without carefully reviewing and editing it.
Some notes on the presentation of this information:
For Claude prompts: I tried, to the best of my ability, to present them in the order in which I submitted them. This is challenging because the prompts are organized by chat, and in the Claude UI the date labels are vague (e.g. “7 months ago”). I tried extracting the prompts with more precise dates from the JSON file of my conversations, but it doesn’t have a field connecting conversations to projects (so I couldn’t extract them with code), and the file was too large to open on my computer, so I couldn’t go through it manually.
Unfortunately, when ChatGPT updated to GPT-5 and retired past models, it removed the labels that indicated which model was used for each prompt. However, given that these models are no longer available, perhaps that information is not as relevant as I had assumed it would be.
Because Claude doesn’t provide precise dates, it was impossible to interleave the ChatGPT and Claude prompts, so they are presented separately.
These prompts are reproduced with their errors intact; if I made a spelling, punctuation, or strategic error, I included it. I did this because we don’t yet know how errors like these influence model output (or don’t). I did not include all of the context I added to these prompts; where the prompt itself doesn’t make clear what that context was, I added a note about it. The context I included was often entire chapters, pages of notes, or even the current draft.
Somewhat embarrassingly, it didn’t occur to me to use a large language model until I had been working on this project for more than a year. That is why there are not a lot of developmental prompts early on: the prompts jump straight into specifics. There are, however, some later developmental prompts for chapters I hadn’t yet written.
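As an aside, the JSON extraction described in the notes above is doable with a short script even when the file is too large for a text editor, though it still can’t filter by project if the export lacks a project field. Here is a minimal sketch; the field names (`created_at`, `chat_messages`, `sender`, `text`) are my assumption about the export format, so check them against your own file:

```python
import json

def extract_prompts(conversations):
    """Pull (timestamp, prompt) pairs from a parsed conversations export,
    sorted so prompts appear in the order they were submitted."""
    prompts = []
    for convo in conversations:
        for message in convo.get("chat_messages", []):
            if message.get("sender") == "human":
                # Fall back to the conversation's timestamp if the
                # message itself doesn't carry one.
                when = message.get("created_at") or convo.get("created_at", "")
                prompts.append((when, message.get("text", "")))
    prompts.sort(key=lambda pair: pair[0])  # ISO dates sort lexicographically
    return prompts

# Tiny inline stand-in for the real (much larger) conversations.json:
sample = [{"created_at": "2025-05-01", "chat_messages": [
    {"sender": "human", "text": "first prompt"},
    {"sender": "assistant", "text": "a reply"}]}]
print(extract_prompts(sample))  # → [('2025-05-01', 'first prompt')]
```

For the real export, `json.load(open("conversations.json", encoding="utf-8"))` will handle a file far larger than most text editors will open comfortably.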
“—> “ Indicates a new prompt
—— Indicates a new chat started on the same day as the previous one
ChatGPT Prompts
May 1
—> I am writing a chapter about how to use LLMs to augment work in mission driven organizations.
Right now, I am making the argument that (as opposed to trying to automate as much as possible or trying to avoid AI as much as possible), augmentation allows organizations to take advantage of the opportunities of AI while mitigating the risks. Under opportunities, I have efficiency, new products, process improvements, accessibility, and custom training. Under risks, I talk about how people bring human connection and trust, innovation, deep expertise, context sensitivity, uncertainty, and negativity to the table.
1. Can you improve the framing of this information? I feel like "risks" isn't the right word, but it feels balanced against "opportunities" and I am not sure what the right word would be. Selecting a new framing for the entire section is allowed. Can you give me 5 ideas for alternative ways to frame this?
2. What am I missing on my lists? What else can I cover that augmentation can offer?
—> Thanks! I had forgotten about moral imagination, and i added the organizational memory piece, too. Can you review what I have so far and make some suggestions about how I can improve it? Please don't spare my feelings-- I want this to be as good as it can be.
I would like to add a section about values that organizatons need to balance when creating an AI augmentation strategy (e.g. privacy and data security, governance and accountability, etc.). Can you recommend where I might put that?
May 5
—> Here are some thoughts I have about content that should go in the book. Right now, I am torn between trying to lay out the entire thing in the form of steps (inspired by "Write Useful Books") and keeping related content together.
Can you give me a couple options for how I can organize this content? For each, please give me a brief summary of the argument, an outline and, if there are any additional topics that outline provides space for, note what they are and where they could go. When reviewing the content, I'm submitting, please ignore the order it is currently in and any signposting (e.g. "in the next chapter," or "Introduction:") so that I can get several views on how this content could flow and work together. If there is content you think does not fit, you don't need to include it, just let me know what that approach left out.
—> Great! Thank you! I like the step by step best, I think. Can you give me a couple options for this approach?
—> Thank you! I think I'll do a version of option A. I want this book to be as useful as possible for readers. Here are some ideas for supplementary content or appendicies. I'd like to know what you think would better serve as linked content and which should be written in the back of the book as an appendix. I'd also like to know what else I could do that would make the book more useful for readers in mission driven orgs (nonprofits, gov/agencies, and b-corps). feel free to suggest an unreasonable number of ideas-- some of them I can use as content marketing after publication, too.
1. draft survey
2. Sankey vignette
3. Queries I used while writing this book.
—> Thanks! my website is drkarenboyd.com. I would like to update it to center around this book and some of this supplemental content. What suggestions do you have?
—> I love it, thank you. I am looking for names for the book and/or approach that I can center the website around.
Here are the thoughts I have.
Should reflect empowerment/augmentation (people plus AI, not AI replacing human work)
Should invoke mission/doing good/service (for leaders/workers in non-profits/gov/b-corps, not solely-profit-motivated orgs)
Should look at ethics + effectiveness.
Here are ideas I've had: "The Mission and the Machine," "Amplify good work" with the subhead "effective, ethical AI" or "helping non-profits thrive in the future of work.
I am interested in book titles, website headings, taglines for me/my services.
May 8
—> I am writing a chapter about identifying which business processes an organization has, what the tasks involved with each business process are, and what they need to consider when deciding whether and how to automate those tasks. What considerations should I include when they are deciding whether and how to automate?
—> thank you! What are some judgment calls in non-profits (etc) that AI would be good at making and some examples where people should be making the judgments?
Note: this query didn’t produce any text that I used, perhaps because I didn’t give it enough context.
—> I have written a chapter and included a checklist that people can use to help automate tasks. I've attached the chapter, which specifies with the checklist should be used for and what it should do. Do you think this checklist is possible to make, and what should I keep in mind? Can you create a document with a draft checklist in it.
—> Thanks! I imagine a checklist separate from the green/yellow/red flag system, with different questions. Once a task is identified as all green, what else do they need to think about when deciding whether, when, and how to automate it?
Note: this checklist was so bad as to be discouraging. I abandoned the idea of a checklist after reading it. Probably not the right choice, but I definitely needed to put in more information.
May 11
—> Can you summarize the copyright law in the US around AI generated content?
May 14
—> Can you create a document summarizing peer-reviewed research around brainstorming with LLMs?
- What is the range of findings around idea quantity and quality? (For each paper that scores quality, please describe how quality was assessed: by whom? how were they related to the project? were they blind to whether AI was involved or not?)
- Are there other ways that the brainstorming sessions were evaluated, how were they measured, and what were the results?
- what do researchers attribute those results to? (if they are bad, why do they think LLM-augmented teams did worse? if they are good, what to they attribute that to?)
- Were there cases where interventions allowed LLM augmented brainstorming sessions to improve their results?
Please cite each study that you included.
—> Thank you! does this include the following studies? Why or why not? If they should be included, please add them:
https://arxiv.org/pdf/2406.06561
https://arxiv.org/pdf/2310.10677
https://arxiv.org/pdf/2410.11877
https://www.mdpi.com/2504-4990/5/4/65
https://dl.acm.org/doi/pdf/10.1145/3613904.3642414
study or studies by Jeremy Utley, Kian Gohar, and Henrik Werdelin
—> thanks! There are some weird artifacts throughout the document and the outcome column aren't very clear. e.g. "Originality +0.17 D vs. baseline citeturn7view0" which group is the originality higher for? can you clear this up?
—> Hi! I would like the citations to remain, but right now they are gibberish. Can you regenerate the citations?
—> Thanks! Based on the results of what you read, can you please correct or confirm this thesis:
AI on its own can generate a huge volume of ideas, but the quality isn't great/(consistent?). When humans generate with AI, they often do worse, perhaps because they don't iterate enough or build their own ideas off of AI's--they treat it like google instead of a brainstorming partner. You can fix this by 1) asking the AI several times, rather than just once. 2) seed it with example ideas 3) delay the use of the LLM until the people have had time to ideate on their own 4) use worse AI-- having but great but wildly varying answers gives the humans ideas to build off of, but they stay engaged.
Please identify where this paragraph is contradicted by the evidence and link citations to the parts that are correct. Help me edit this so it is accurate.
—> thank you! This is extremely helpful. The idea about the low quality AI comes from a Dell'Acqua study that gave recruiters (not brainstormers) good AI, worse AI, or no AI. AI helped, but bad AI helped the most. I'll make sure this is stated speculatively.
—> Can you explain:
1. What is GPS scaffolding
2. what was the domain for brainstorming in each of the studies? What question were they asked to find solutions or ideas for?
May 14
—> Can you read this document and tell me if this summary is accurate?
https://www.ncbi.nlm.nih.gov/books/NBK594445/
Mayo Clinic has implemented a carefully designed framework for using AI with sensitive patient data. Rather than sending patient data to third-party LLM providers, Mayo created what they call a "data under glass" approach in partnership with Google.
This system includes two key components:
The Mayo Clinic Cloud - A secure environment housing patient records
The Mayo Clinic Platform - A controlled enclave where Mayo can share de-identified clinical data for advanced analytics and AI development
What makes their approach distinctive is that algorithms are permitted into the enclave, but sensitive patient data never leaves Mayo's secure environment. This federated learning model allows Mayo Clinic to leverage AI capabilities while maintaining strict privacy and security protocols for sensitive patient information.
As explained by Mayo Clinic representatives: "With algorithms permitted into the enclave and data never leaving the home institution, the Mayo-Google partnership illustrates an approach to how health systems and technology companies can partner to facilitate knowledge generation while addressing privacy and cybersecurity concerns" (National Academy of Medicine, 2022).
Is there anything that I can add to or change about this summary to better illustrate data privacy and security solutions for mission-driven organizations with sensitive data?
—> This is great! Can you make these edits to the summary and replace jargon (e.g. hashing, least-privlege) with a plain ENglish explanation and put the jargon in parentheses (or remove it altogether?)
May 22
—> I am writing a short section or chapter on how to choose an AI consultant. I want to make sure I have everything in this section that could help someone bring in an external expert on AI who can work with internal strategic and legal experts to help guide the development of an AI strategy. Here's what I have so far. Can you suggest more things I could include with brief explanations of each factor and why someone should look for a consultant with that quality?
June 5
—> Hi! I am writing values chapters for the book. Here's what I have so far. What other chapters should I write? Can you give me a sentence or two about how they relate to AI in mission-driven organizations?
I'm not sure if you can read the doc, but heres what I have so far:
Privacy and Security
authenticity and trust
Effort and Craft
Deep Fakes & Disniformation
Inclusion and Accessibilty
Replacement and Deskilling
Error and Baised Error
Governance and Accountability
Ownership and Intellectual Property
Sustainability
Anything that isn't on this list?
June 7
—> I am aware of several other books about AI for mission-driven organizations. I'd like to to ame a list of such books and as you to review the current draft and explain to me whether how it overlaps and is different from each of them? Which ones is it most similar to, and how can I differentiate it (including by changing the draft)?
—> Thanks! Can you add these titles: [three specific books I found in my own search]
June 7
—> I am writing a chapter in the strategic implementations section about bringing together the right people. I think executives need to be there, IT if they aren't already execs, key enthusiasts and skeptics maybe, and any external experts: legal, strategy, and technical. Can you draft a chapter about that? I've pasted in what I have so far:
—> thank you! Can you write this more in the style of the rest of the draft? (fewer bullet points and tables, more paragraphs?)
[Note: I should have started a new chat here, but I did not] I have a chapter on how to select an AI strategy consultant, and I'd like to make sure I've got all the red flags and positive signals. Can you give me a list?
June 8
—> I am trying to make a decision about whether to automate a particular task. Here's what I am thinking so far. How should I go about making this decision? What framework should I use? I have a brief one at the end of these notes, but I want to make sure it is complete.
[then, I asked another model:] I am trying to decide whether to automate a particular task. I'll attach what I've been thinking. I want to make sure I consider values and how they are implicated in my particular mission and work. I have started a framework at the bottom of my notes. Can you help me flesh it out>
June 9
—> I'd like to write a values chapter on social isolation and disruption. Here are the notes I have. Please make it match the structure and tone od the other values chapters.
—> I don't want lists and tables please. The definitions can be lists, but everything else should be in paragraphs
July 23
—> I am writing the last section of this chapter, talking about the overlaps in human and AI weaknesses so that users can be aware. Can you create a list of these intersections? Here's the introduction to the section:
July 30
(for some reason, this chat still has a model label: I used o3-pro. I probably tried another model for the first attempt)
—> I am writing the last section of this chapter, talking about the overlaps in human and AI weaknesses so that users can be aware. Can you create a list of these intersections? Here's the introduction to the section:
—> Thank you! Can you merge these ideas into your output if they arent already there? Feel free to expand your suggestions to look at cases where one side's strengths become weaknesses when combined with the others (e.g. RLHF makes AI aligned with what we want, but when what we ask for is biased, we get bias back without realizing it (3 in the list below))
—> I would like to complete this chapter with an discussion of the intersection of AI and human weaknesses. Before I do that, I want to make sure that this text is clear and comprehensive. Should I group some of the cognitive biases together (e.g. anchoring and confirmation bias?) I would like to group them together in a way that makes sense. Please let me know which ones you think I should group together and draft text and examples to match the others. It's OK if the text for the sets are a little longer. If there are suitable existing names for the groups, please use and cite that name. Otherwise, make up your own and make it simple.
—> Can you write 2-4 sentence explanations of the following biases along with examples of how it could play out in mission driven organziations? Overconfidence, Apophenia / Spurious Correlation
July 30
—> Here is a chapter. I am not sure if I like the structure. Can you give me some ideas about how I could make the structure clearer and more useful for readers? Please summarize your ideas in 1-3 sentences each before executing and I will pick some options to create outlines from.
Aug 9
—> I am working on a section in Part 3 about identifying new tasks or processes that mission-driven organizations can add with AI (not just LLMs, but AI broadly.) Here's what I have so far and some additional notes. Can you create a detailed outline of this section?
—> I added an fourth question to the flagging stage. can you add it?
Aug 11 (GPT 5)
[I asked Claude Opus 4.1 the first two prompts as well (verbatim) to get a different perspective. In this case, I like ChatGPT’s answer better, although mainly because it is closer to what I already have and would require less work to implement. It made me feel as if ChatGPT understood what I was trying to do more than Claude did. Don’t read too much into this: to know whether Claude or ChatGPT is better at this type of task, we should do repeated trials in new chats and compare a larger sample of outputs.]
—> I've just uploaded the current draft of the book. Can you review it? Please give me honest and constructive feedback; I want to make this text as useful for mission-driven workers as I can make it. Please note that I have not included the entire process in Part 3, so the draft is incomplete.
—> Thank you! will you please write up a complete outline of the most useful book I could make? You can reorder and merge existing content as long as you let me know where that content is currently and where you think it should go. You can leave out content or indicate that some sections should be shorter. You can suggest adding content as well. Please include: an order of content including notes about whether sections are merged or removed. suggest a way to complete part 3, including remaining steps. I am particularly not sure how to incorporate case studies: an appendix? sprinkled throughout? attached to relevant topics?
[because I liked ChatGPT’s answer better, I followed up]
—> Thank you! Can you package this into a step-by-step guide for me (the author) to make the changes you are suggesting? I can't use the links to the text that you have included for some reason, so please indicate where in the text using excerpts (e.g. "from 'A policy that covers risk assessment. . . ' to '. . . improve efficiency without compromising sensitive data.")
—> Thank you! Can you explain what you think should go in the AI primer section that isn't currently in the section starting with "Artificial Intelligence, or AI, is a broad term . . " and ending with ". . . by which point they are already late?" Or is it similar content arranged differently?
August 12
—> I am drafting an implementation chapter in Part 3 about how people ought to select technology and vet vendors once they have decided on a use case. This chapter will go right after the chapter currently labeled "Identify efficiency gains." Please draft a chapter based on the notes attached. Please take care to structure this chapter clearly. The chapter should be concrete and actionable. It could refer to the attached free resource on my website (the "mission first AI starter kit") which includes a vendor evaluation guide.
—> I am interested in the flow chart, thanks [it suggested a flow chart, but it was just a single line of boxes, so I did not use it]
—> can you also explain more about custom development. I understand that people often make wrappers or custom data storage and analysis solutions that protect sensitive data. What is the benefit of a wrapper? what other types of custom development exist? Can you point me to some real use cases, preferably in mission driven organizations? [normally, I would have told it to give me linked sources for the case studies, because it seems to reduce hallucinations. However, this was my first time using GPT 5 for the type of task that past models like to hallucinate on, and GPT5 is supposed to hallucinate less, so I wanted to test it.]
August 14 ChatGPT5
—> Can you draft a chapter in part 3 (a concrete actionable chapter similar to the ones in there already) about Measuring impact and value? Here are my notes. Please feel free to add or change the structure and add more content than I have. If you bring in sources and citations, please include that content in brackets and include a linked citation to the source you're relying on.
[I used OpenRouter for this query because my workspace went away. This means that it didn’t have access to my project content, and I just added my notes and a copy of the manuscript as context. Annoyingly, it gave me an outline instead of a chapter, or assumed that (despite all the other chapters being in long-form prose) I wanted this one to be entirely bullet points.]
August 15
—> Can you read this chapter draft and fill out the rest of the outline?
It gave me a draft that was still pretty outline-like, so I followed up:
—> I would like the full narrative prose draft, thank you
It still gave me lots of bullet points, so I gave up on this line of questioning.
August 16
—> I would ideally like to add a case study to every values chapter, like there currently are in Privacy (Mayo Clinic) and Disinformation (the school principal). These can be positive or cautionary tales. Please give me three ideas for each chapter that are real case studies, ideally about mission-driven organizations. I would rather have an example from a for-profit company than a composite case study. Please include links for each case study that I can use to learn more about it.
It didn’t respond to all the chapters, so I followed up:
—> Thank you? can you see the sustainability; error and biased error; governance and accountability; and ownership and intellectual property chapters? If so, can you offer some case studies for those, too?
——
—> Can you draft a chapter based on this outline?
—> can you give me some examples of adversarial attacks on production models? What could happen, why, and are there any types or circumstances where it's more likely or more harmful for mission-driven organizations
——
—> I would like to finish this chapter. I've attached what I have so far. I've also attached more information about adversarial attacks that you can use to finish that section. PLease draft any additional sections in line with the structure and tone of what I have so far. Please include a case study with linked citations (similar to the one in the disinformation chapter or privacy and security chapter.)
—> I'd like to condense the section on adversarial attacks down to a similar format and length to the other subsections (e.g offline modes and model drift)
—> Can you put this in prose like the rest of the chapter draft and only use bullet points when necessary?
—> The audience here is not a technical one. We don't know what hygiene means in this domain, what a trojan is, or what half of the suggested responses are. Please rewrite this for a lay audience.
—> thanks. What else should go in this chapter about infrastructure dependence and reliability?
——
—> I have separate chapters about 1) equity and justice and 2) error and biased error. Can you merge them?
I didn’t end up taking this approach, but being able to quickly see the merged version without having to combine them by hand made it easier to let the idea go :)
August 17
—>I am adding to the readiness chapter information about identifying data risks, and I want to make sure my list of risky data categories is complete. What am I missing?
——
—> I have uploaded the current draft of the manuscript. I would like your thoughts on its organization, both globally and on a chapter by chapter basis. Please be honest: I want to make this as good as it can be.
—> Thank you! I have many follow ups, but I will start with these:
Please explain or demonstrate these suggestions: Add a 1-page “strategy stack” visual early in Part 1: Mission & Values → Governance/Policy → Training → Use-Cases/Workflows → Tools/Vendors → Measurement. You already imply the stack; visualizing it will orient readers. Unify the decision lens early. In Ch.3 you describe the aversion↔automation spread driving inconsistent bottoms-up use. Pull that framing forward (end of Ch.1) and reuse it as a recurring figure/caption. Make Part 3 the payoff. Your intro promises a practical, step-by-step build of a values-driven augmentation strategy. Deliver a tight sequence with worksheets (you already point to downloadable materials in Ch.3).
Please offer a suggestion for the standardized outline for the values chapters And please let me know where the forward references are by listing quotes (e.g. "beginning of the sentence. . . end of the sentence"
—> Can you say more about how your suggestion for part 3 differs from what I have already?
—> Hi, I am sorry, I don't really understand what you're saying. can you rephrase in more concrete ,less jargony way? for example "stage-gates," "pipeline" "roll back triggers" "fallbacks." I added some custom instructions that will help you write in the way I am hoping you will can you read them? if so, please reproduce your advice in language following those guidelines
——
—> I have been thinking about a way to classify multi-prompt approaches for LLMs.
Curation: ask for many ideas and select one
Carving: request something long and broad, then chat back and forth to make it more specific and relevant. Like a sculptor carving from a marble block everything that isn't the statue. best for creative projects. Can help with complex projects.
Construction: breaking down a project into smaller pieces and delegating some to the AI or only letting the AI do scaffolding type tasks (e.g. here's my list of sources, what else should i read?) good for complex tasks, and also can help make risky tasks less so.
Can you critique this framework?
—> Thank you! Can you give me a few examples of where you might use a carving approach? (the use of “you” here is funny in retrospect)
——
—> I have a chapter that I drafted and like OK. I have been giving a talk on this topic and I present the same information in a different order. Can you take a look at the chapter and the slides then give me your thoughts on the best way to organize this material for the book?
—>Can you draft a chapter that rearranges things in the way you've suggested? If there are any missing pieces that I haven't already written in one of the two documents I sent, you can draft it, but please put that new content in brackets.
——
—> Check out the current manuscript draft and tell me what is working (what should i not change, if anything) and what I can change to make the book better. Be direct and concrete, but look carefully for opportunities to improve. It is important not just to my career and goals, but for the impact that I want to have on the world that I produce a book that is as useful as possible. If you cannot read a portion of the text, please let me know. If your context window isn't large enough to cover the entire draft, start with Part 1, and we can do the latter two parts in separate chats. Thank you for your help!
As you can tell, I was really trying to get feedback on the manuscript and its structure on Aug 17, and I wasn’t getting great results. I realized later that this is because ChatGPT treats documents uploaded to Projects differently than text copied and pasted into the prompt: it was searching the draft rather than ingesting the entire thing.
—> I've added more on the jagged frontier. Can you improve this further?
—> Can you point these spots out to me? Give me words I can use the "find" feature. [quote from output]
—> Can you show me what you mean by this? [quote from output]
—> Can you draft something for the RACI chart you suggested?
—> I really like the idea of calling out quick tips in sidebars or boxes. I would like some advice about 1) which prompts you think I should call out in this way and 2) where I should put them. I'd also like to do this with custom instructions and other tool-specific advice. I've added a few additional quick tips I'd like to add. Can you suggest additional ones, and where they should all go?
—> Thank you! Can we do part 2 now, or should I move to a different chat?
—> I think i'd like to put a lot of the actionable/wrksheet/checklist type stuff into Part and keep values chapters focused on values in a way that will endure after tech changes, even a lot. Can you review Part 3 with this in mind?
——
—> I would like two lists: 1. All of the supplemental materials (worksheets, checklists, surveys, etc) that I refer to in the text: I want to make sure I write and post all of them! 2. All of the content about training throughout the text,
—> For all of the training content, can you create learning objectives and sort them into 3-7 clusters that I can write a workshop curriculum for?
August 27
—> Can you list, explain, and link the methods that the makers of LLM models use to pursue alignment? for example, system prompts or RLHF
August 28
—> Can you help me create as complete a list as possible here?
August 29
—> I have two chapters that are very similar. I will paste both of them in here. I would like one single chapter or section that I can use to introduce Part 2, values and AI. Please ask me any questions you need to to clarify the task before executing.
—> Hm, I like the sector information, but it doesn't seem to belong in the intro to the values chapters. I think it should be two chapters, and I only need to rewrite the introductory paragraphs and title to that chapter. Can you draft an introductory paragraph or two to the chapter that presents sector by sector risks.
August 30
—> Here's a draft of an AI readiness checklist. Can you review the chapters of the book draft, especially the Readiness and values chapters, to identify additional sections and check boxes? It is OK to revise or reword existing checks as well.
September 13
—> I am moving some topics around, and need to rewrite the equity and justice chapter to include prejudicial outputs along with the rest of the types of equity and justice threats. This chapter should be parallel to the other values chapters, and I've appended things I'd like to include (and where in the structure, if relevant.)
—> Can you review this article and explain to me : the definition of automation bias a brief history of its study The methods we use to study it effect sizes anything we've found that mitigates it
——
—> Can you summarize research findings about algorithmic management? In particular: where has it been documented? (I know of gig, domestic, retail, service, warehouse, and teaching work-- is there more? please give me citations if so). What kinds of things are managed with algorithms? What are the documented consequences of algorithmic management, particuarly on low power workers and people outside of the organizations?
—> I am interested in fleshing out the justice threats. I have done the first two. Can you draft the others?
—> what effect sizes are documented in this article? https://pmc.ncbi.nlm.nih.gov/articles/PMC3240751/
October 20
—> I am trying to figure out how to replace this confusing sentence: "If you’ve already decided that you need a consultant and looked around, you have likely found that there are many people styling themselves as AI consultants and experts. You want to avoid scammers and substance-free gurus, but it's difficult to evaluate experts if you don’t already have expertise. *Fortunately, I not only have the AI expertise, but I also consult with mission-driven organizations— I can help you with your vetting!* When selecting an expert, I suggest you pay attention to. . . " It's not clear upon reading the sentence between the stars whether I can help select a consultant or be a consultant. both are true, but I don't want to be salesy. Can you recommend 4 replacement sentences?
—> I didn't specify my request well: i mean that I am sharing information in the chapter they are reading that can help them vet.
October 28
—> Can you use the book draft to offer a summary, purpose, and steps for the strategic framing phase (Where you ID mission, vision, values)
—>Great! Can you make the steps shorter? A 2-5 word headline in bold and one or two sentences to describe each step
November ~15
—> I am thinking through something for the values chapters. It seems to me like there are a couple reactions you can have to friction between technology implementation and your values. You can 1) avoid using AI; 2) use it, but take steps to mitigate values harms (e.g. anonymizing PII before asking an LLM advice about a case/data set) 3) use it, but take steps to compensate for harms (like people buy carbon offsets for flights) 4) ignore it; and maybe there are some other options. Can you help me think through what the options are first? Then we will talk about how to integrate them into the book draft.
—> I like this idea. I think I'd like to rewrite the introduction to Part 2 introducing these approaches and then weave examples throughout the values chapters. I would guess there are examples already in the chapters with out the names, which I'd like to add the names of the stances to, and then I think we should make sure every chapter has at least two examples. Can you create a plan for integrating these stances into the draft?
The response here was too high-level to be useful, so I decided to be more specific.
—> Can you start by drafting the rewrite of the part two intro? Please call out where you're adding or changing things with specific locations (e.g. the text right before and right after the proposed addition, or the specific text to be changed so I can search the chapter for them). The chapter should be written in a voice as close to that of the rest of the book as possible.
—> Thank you! I would like a change map (and any other recommendations you have, like a stance lens box) for Privacy and Security
This is where I started to discover that it was responding to the attached document almost like it was skimming it, rather than being able to read the entire thing. After this, I started pasting in the chapters.
—> I would love that! Just a note that I have slightly renamed some of the stances. Please use these new names:
—> Can you give me some more ideas to round out this paragraph?
Here, I had some examples of compensation for the sustainability chapter. I knew there were more, but I kept thinking of ideas that were too similar to each other. This prompt helped me get out of the box.
—> this sentence is tripping me up. Can you help me explain the concept in the last sentence of this excerpt?
—> Please search the AGW draft for the Authenticity and Trust chapter and offer suggestions for integrating the stances. Please use "constrain or mitigate" instead of "constrain and mitigate."
This fixed the problem I was having where it seemed to be skimming the chapter or speculating about what’s in it!
Claude Prompts
I started gathering these a while ago and failed to log the dates, even the approximate ones. Unfortunately, opening them caused their dates to update to the date that I reviewed them, instead of the day I made them. I learned a lot about documenting prompts while putting this resource together!
—> Here's the draft so far. What do you think? Please don't be nice just to make me happy-- I want this to be as useful for its audience as possible. Please note where things aren't clear or as compelling as they could be. Where does the book feel like it's going off track? Can I improve the structure? I would like to add chapters about values, e.g. Privacy & security, Authenticity and governance, IP and ownership, replacement and deskilling. Where should those go? I have been trying to make this mostly instructional as described in "Write Useful Books" but the introduction is now really long, and adding more there seems unhelpful.
—> This is super helpful. Can you point me to some more nonprofit, government/agency, or b-corp examples that I can include? These were all positive examples
—> Amazing! Thank you! Are there any more cautionary tales of mission-drive org AI implementation (similar to NEDA). I was impressed by how few hallucinations the previous prompt generated!
—-
—> I would like to write content for this book based on blog content. My goals are:
1. convey the important content
2. broaden the examples (and content if necessary) beyond LLMs to AI systems in general, including custom apps and implementations
3. change the tone to match that of the book text (more formal)
4. If i missed any citations (or good opportunities to point to useful citations) in the blog, I want to make sure I point to them in the book
5. Make sure the following types of content are covered in each:
- define or describe the relevant values
- describe how AI use could implicate the value
- note any relevant real world examples, especially in non-profits, governments, and public benefit corps
- Note contexts where AI use is particularly risky to use in terms of that value
- recommendations for mitigating risks, or implications for use (can include contexts in which use is not appropriate).
6. I don't want blog readers to be bored if they have already read these posts
What other things should I be doing to make sure the book adds value that I am missing?
—> Thank you! I'd like to start with the blog post rewrites. Each will be a chapter, but short is OK. only add content if you think it's useful, not fluff. [in retrospect, I should have given it MUCH more tone guidance, and probably some examples. The “chapters” it drafted were well over 50% bullet points and very abstract. They ended up serving as outlines, which is probably fine. All of the outputs included substantial hallucinations. It decided that real world case studies would be useful (I agree) but it just fully invented all of them (not ideal). So although this chat resulted in a massive amount of rewriting, I still got a lot of useful ideas out of it and a consistent outline for chapters.]
—> Thanks! I'll review these in detail later, but I like the length, tone, and expansion outside of LLMs.
—> Here's the next one: [pasted in a blog post I had already written and published]
[After this, it ended its output by suggesting that it write me a chapter on a topic I had not yet blogged about. I decided to see how it did without all that context]
—> A draft of an inclusion and accessibilty draft would be great!
[It did fine, but certainly not better!]
[For the next three, I just pasted the blog content straight in with no requests. Having done so many of the same task, it did what I was expecting.]
—> [I tried just giving it notes instead of a whole blog post:] Can you draft me something about ownership and intellectual property? Here are the notes I have so far:
—> Thank you! I want to make sure this book is as useful and complete as it can be. Can you recommend more chapters I should include along these lines?
—> You described a case where The Refugee Case Management Consortium workers were using a public LLM, then switched to a private one. Can you point me in the direction of some links about that? [it turned out this was fake.]
—> Yes, I would like a few examples of real-world examples of mission-driven organizations using private LLMs for sensitive data with links and any details of how and why the decision was made.
—> Can you give me the link for this quote/cituatio "With algorithms permitted into the enclave and data never leaving the home institution, the Mayo-Google partnership illustrates an approach to how health systems and technology companies can partner to facilitate knowledge generation while addressing privacy and cybersecurity concerns" (National Academy of Medicine, 2022)
—> You gave me three examples for the synthetic media chapter (pasted below) can you send me links to real organizations that have faced these, or other synthetic media attacks>
[After these nudges, it did give me some real case studies, two of which ended up in the book]
—> You mentioned these case studies: [Pasted 7 examples from draft chapter outlines generated earlier in this chat]
—> Can you send me links for these case studies? IF they are just composites, don't worry about it. In the end, I'd like at least one case study in a mission-driven organization about authenticity and accessibility if possible. They can be positive case studies (look at this success!) or cautionary tales (here's what happens when this value is violated.
[“composites” was the word it was using to describe its hallucinations]
—> Thanks! Can you point me to links for these, replace them, or tell me the don't exist? [these examples were all from earlier outputs, not the most recent one.]
—> Thanks! Are there any organizations that are credible that have used or have said they are comfortbale using AI generated images? please offer links.
[This is where I hit the end of the context window; I got a warning to that effect.]
—-
—> I got this example from you when looking for case studies about accessibility in AI technology. Can you point me to some links for specific cases that are similar?
—> can you get me some plain english definitions for the following key words? Accessibility, Inclusion (this definition should extend beyond disability.), Universal Design, Disabled (this should acknowledge invisible disabilities and the debate around person-first language)
—> Can you give me examples for how assistive technologies, if not designed or implemented thoughtfully, could compromise disable users autonomy in the following three ways: Over-reliance on automated systems can diminish human support networks that provide social connection alongside practical assistance. Privacy-accessibility tradeoffs may disproportionately impact vulnerable populations who most need assistive technologies but also face heightened risks from data collection. Loss of agency can occur when systems make decisions for individuals rather than supporting their autonomous choices. [I wish I could tell you where I got these examples from, but I can tell you I did not write it because of the voice. I could have pasted it from a third-party source or from the output of another chat, but it wasn’t from this chat.]
—> Here's the chapter so far. I would like some case studies of AI assistive technologies. Ideally, they would be designed or implemented by mission-driven organizations. I'd like a mix of positive case studies and cautionary tales. All should have links. If you cannot find one, please do not create composite examples. [This did not prevent hallucination, but it did offer several real ones, which is an improvement over the previous chat].
—-
—> I am working on the next values chapter, and I am feeling stuck. I'll paste in what I have so far, and I am hoping you can give me some advice about how to reorganize or what else I can include to be complete and match the other values chapters. Please give me ideas of things to add and a new outline.
—> can you draft these sections in the sty;e of the book? [here I pasted a list of sections that Claude had suggested. Little of this ended up in the book because they were really topics that sounded good.]
—> Thank you! Here's what I have. I like the content, but it feels out of order or repetitive. Can you use this exact content, but move things around? You can add transitions, but please don't delete content or alter the style.
—-
—> Can you draft an outline for chapter in the style of the values chapters in draft that you have about how governance and accountability is implicated by LLM use in mission-driven organizations? This should consider at least accountability to the community, the board, government where applicable, donors, and the staff. [this result was weirdly less bullet pointy and abstract than previous drafts or later ones in this same chat where I brought in lots of notes and content! It did still require a lot of editing.]
—> Can you give me some examples of organizations (preferably mission-drive ones) that used Ai in a way that compromised these values? Please includes links where I can learn about each example. If there are none, let me know.
—> Thank you! Can you draft a similar chapter about Ownership and intellectual property? Here are my notes:
—> Please feel free to add considerations or examples that i have missed, but add direct links to proof of any evidence or examples you add.
—-
—> Hi! Here's what I have so far on a chapter, plus some notes about what else I want to incldue. Can you help me get unstuck?
—> can you draft these sections for me minus the case study?
—> TIm Harford on his podcast cautionary tales told the story of an Air France flight that crashed because the pilots couldn't figure out what was going on when the autopilot was turned off. Can you 1) check to make sure I am remembering the story correctly and 2) draft a short vignette using the true story to illustrate the points we are making in this chapter? Highly recommend the podcast.
—> thanks! Here's the chapter as it stands right now. Can you please 1) make sure it is all in a voice matching the rest of the book 2) rearrange it for clarity and to avoid duplication. feel free to not just move sections or paragraphs around, but merge sections and rewrite things to improve the flow.
—-
—> I'm trying to think through a value I am seeing around the internet when reading objections to using AI. It sounds like "they just threw it into an LLM, but I put a lot of effort in." you'll also hear it in discourse refer to AI as "cheating" in circumstances where there is no game or grade to be cheating.
—> I think it's "effort." in order of importance, i think its time, personal sacrifice, skill development, and intentionality. I haven't seen it play out in mission driven organizations, but I can imagine that it would if a government put out an RFP or program or ad or website and people discovered it was AI generated. It would feel to people as if the thing AI created wasn't seen as important enough to put human effort into it. Maybe it's a costly signal thing?
Does that make it clearer?
—> CUtting corners does resonate. I don't know whether orgs should acknowledge all LLM use. can you draft up an outline for a blog post and a chapter for me?
This was in response to Claude’s questions.
—> Thank you! I've written the introduction. Can you write me a draft of the rest of it? [It had given me a chapter draft and a blog post, but the introduction was very underwhelming and the chapter itself wasn’t great either, so I just drafted it myself. It turned out to be a good opportunity to add a personal story. The result of this prompt was a big improvement.]
Approximately June 2025
—> Can you summarize the findings of this paper? https://arxiv.org/pdf/2410.03703
—-
—> Can you review this and tell me how I can make it better? The audience is leaders in mission-driven organizations. Many of them are not sure whether the should use LLMs in their work, and if they are interested in trying it out, they are not sure where to start. I want this document to be actionable, clear, and helpful. When people are done reading it, I want them to think that they got a lot of value out of it and the author is knowledgable and trustworthy.
—
—>I have a chapter draft about selecting a consultant and a separate doc about vendor selection. Can you add create a new chapter using the information in both to create a chapter about selecting consultants and vendors? You can treat them separately or together, which ever makes more strucutral sense. THis should be a book chapter, so similar in tone to the book attached to this project. It should only use bullet points sparingly, if at all.
—
—> Can you evaluate this source for me? How would you rate the data, methods, sources, analysis, and robustness of the conclusions? (Not whether you think the conclusions match others conclusions, but whether they follow from the evidence)
—>I have some rambling notes about a chapter of my book. Can you draft a blog post using these notes, and refer to strong sources where necessary. Please link sources that you use. You can use the source above where it is strong.
—>here's what I ended up with.
Can you draft a book chapter out of this that matches with the other values chapters?
—> Here's the blog post I ended up with: https://drkarenboyd.com/blog/ethics-amp-llms-sustainability Can you draft a chapter in a similar voice and structure to the other values chapters for sustainability?
This is basically the same request as above; I am not sure, looking at the chat, why I chose to repeat this, and if I hated the first thing, why I wouldn’t have just edited it or started a new chat.
—>The book covers AI use broadly, not just LLMs. Can you update the chapter accordingly? For example, the section "How AI can threaten sustainability" talks about misdirected focus, but non-LLM use in orgs is one of the things its contrasted against. Does that make sense? Can you update the chapter to talk about the impacts of AI broadly? The LLM impacts can be in there, but they are not the only focus.
—
—> I'd like to add a values chapter with this outline. Can you draft it? Please lean away from bullet points and toward complete paragraphs.
—> To match the rest of the chapters, I'd like this to cover how AI can help and harm (otherwise, I can merge this with the error and biased error section). Are there ways outside of just being wrong that AI can impact justice? For example, exacerbating digital divides among people or resource gaps between large and small organizations?
This was a rhetorical question; I wanted to see what it would list.
—
—> I've uploaded a current draft. Understanding that it is not complete, can you critically review it? I am especially interested in the structure, what is missing, and how I can make it more consistent and useful.
You can see that I’ve asked this a lot. It was nice to be able to do this a bunch of times, but I think the context windows were not long enough for it to treat the entire book.
—> Thank you, this is helpful. Let's start with the values chapters. Which feel underdeveloped? Are there other values I should add?
—> You mentioned a values framework overview. Can you draft one around your suggested restructuring of the values section?
I wish I had never asked this! I spent so much time trying to make this work and never got it.
—> thanks! Can you draft outlines for each of the new values chapters you suggested?
I ended up using just one of around ten of these.
—> Can you explain how Chapter 6: Transparency and Explainability is idfferent from accountability and governance, and whether or not it makes sense to merge them?
The response was not convincing.
—> Can you explain how Chapter 10: Equity and Justice is different than error and biased error, and create either a single outline for merging them or outlines for both that make a clear distinction? Perhaps one can introduce the other?
I ended up making a chapter on this.
—> Can you look at the current draft and visualize how much of the total. word count is currently made up by each chatper? perhaps in a 100% bar chart?
This was my attempt to see whether it was getting the whole manuscript, and to get a big picture myself.
—> Thanks. Part 2 is not done yet, indeed. Can you suggest a step by step plan for filing out part 2, including an outline for how the chapters might look?
I could tell that the structure of the book at this point was not working. I did end up completely restructuring the book, partly based on Claude’s advice in this and future prompts (this is going to be a theme for a while :)
July 2025
—> I am writing a chapter about limitations. Here's what i have so far. I just want to finish the end, about cognitive biases first, and then I'll ask about how I can reframe or restructure the superpowers/limitations sections.
—> thanks! I am feeling like the flow is awkward. What do you think? Can you give me some ideas about how to reorder or reframe the sections?
—> I'd rather not center the story around someone else's metaphor, but I agree it's very useful! Can you reframe the chapter around Option 3? You can add and edit transitions between paragraphs and sections, move things around, and change the headings, but please don't change the content.
I was annoyed when I wrote this because it seemed very obvious to me not to frame the entire chapter on something I am borrowing (Jagged Frontier).
—-
—> I've uploaded a new draft of the book. It is not complete, especially at the end. The Business Project section also needs a revamp. Can you help me find ways to improve the flow of the draft before I finish it?
—> I would love a detailed outline for your proposed approach. You can skip the values chapters, as they are ready for editing.
—> I'd like to know more about some sections whose themes I will paste in here. If any of these are referring to content I already have, please give me the first and last sentence of that section. If not, please write a more detailed outline of what you imagine goes in that section.
This is referring to the output from the previous prompt: Claude put a bunch of chapters and topics in that I didn’t recognize and I wasn’t sure if it was suggesting additions or renaming some of my content.
—> If the aversion- augmentation- automation model goes in Chapter 1, what goes in chapter 3?
—-
—> I want to write a chapter about assessing an organization's readiness for AI implementation. this is in Part 3, which is more how-to focused. Here are some notes:
—-
—> Here's an excerpt from a section of the book. I'd like some examples to fit in where the "XXXX" is. Can you offer me 10 ideas that are relevant to mission driven organizations?
—> thank you! Can you give me a few that are not LLM driven, but use other AI types?
—-
—> This chapter feels unbalanced and unsatisfying. The "What AI can do" section feels overly specific compared to the human section, which is more abstract. I am also not sure that the structure is right. Ideally, I'd like to have a collectively exhaustive list of strengths and weaknesses that are framed abstractly and supported by examples, so that readers can flexibly but concretely apply what they learn to their own work. The rest of the content can be included elsewhere in the draft. Can you rewrite this chapter in that format? If you add information, please include linked citations. For content that doesn't fit in this structure, please suggest where else in the draft it could go (i've added an updated draft to the project). Feel free to ask follow up questions, offer options, or advise me about a different structure if my idea is difficult to execute or there's a more effective way to help people learn about these ideas.
—-
—> I'd like to write a section about the particular considerations of mission-driven organizations when implementing AI?
You can pull from/refer to the NEDA chatbot example, any other content already in the book, anything in the project knowledge this other case study I wrote, https://drkarenboyd.com/blog/case-study-givedirectlys-ai-powered-poverty-targeting
—> I like a lot of this. the "Why Industry-Specific Knowledge Matters" feels too closely tied to the case study, limiting its generalizability and seeming out of step with the more broad organizational context section. Can you rewrite the industry section include a comprehensive look at how industry-specific factors make AI implementation in mission-driven organizations unique?
I ended up not using the results of either of these in the final draft, but I had them in there for a while and they helped me think more concretely and definitely informed the book.
—-
—> I am working on adding a section to the chapter that currently helps people identify green, yellow and red flags for tasks they are already doing. The new section will help people identify new opportunities. I've attached the chapter as it stands and my notes for the new section. I'd like to use the example of me choosing between paying for an audiobook to be created (very expensive), trying to do it myself (low quality and extremely time intensive) or using a voice clone (accessible, but may piss some people off, lower quality than a professional one). I'd like to set up the example early in the chapter and then walk through how to evalute it. evaluation should include identifying counterfactuals (pro, DIY, no audiobook) and evaluating in using values, as similarly as possible to the existing task evaluation earlier in the chapter. Please ask me questions if this doesn't make sense
I really wanted to take this approach, but it never really fit because a lot of the framework is organizational and I am not an organization. It was helpful to figure this out more quickly using gen AI rather than having to write it out myself, inevitably getting attached to the work/sunk cost, and having a hard time taking it out.
—> introduce the audiobook example early in the new opportunities chapter. 2. all of the above. I would replace "Economic considerations for voice actors" with replacement/deskilling for voice actors. 3. I am good with either. 4. either the same evaluation criteria or something similar to the existing task analysis where they identify relevant values for each and then go from there. 5. Sure!
These are answers to questions it asked me.
—-
—> I wrote this blog post about brainstorming. I'd like to create a use case study similar to the ones I've written for custom training and research for brainstorming. I've attached the brainstorming information, the other two case studies, and the some text I've already written referring to the case study. Please cite any additional fact claims or examples and link to your primary sources, preferably peer reviewed studies.
I ended up putting case studies in the supplemental materials.
August 2025
—> I am writing a case study about using AI to create frameworks. here are my notes. THe notes include an example, including the prompts and their outputs. Can you draft a case study for me, similar in format to the ones already in the book, about how to create a nice, MECE (or close) framework using LLMs? Add more ideas about how to accomplish it than I have in there, how I could extend the example with more/different prompts if needed, and add things we can learn from the example.
I ended up putting case studies in the supplemental materials.
—-
—> Can you give me a more complete explanation of cognitive biases for my section on human weaknesses?
As I read this I immediately remembered this conversation. This is the chapter I worked the most with AI on. I just couldn’t get the structure and content to work together. I ended up having Claude help extensively with both. I did an extensive rewrite of this chapter another couple of times, so the phrasing and structure from this chat is no longer in the draft, but again it helped me think about what should be in there. This is the occasion where it got me unstuck most successfully, even though I didn’t keep most of what it output in this chat.
I think I'd like one short section explaining each cognitive bias as described by Amos and Tversky. One or two sentences explaining each (e.g. confirmation bias, loss aversion, etc) followed by an example relevant to mission driven organizations.
—> other biases that I haven't otherwise covered would be great. Anchoring and recency bias come to mind, for example. Please also include in the text an example of how each can play out in mission driven organizations. Here's a template from other non-coginitive-bias types of limitations I've already described.
—> this is great! can you give me three alternative examples for loss aversion?
—>Can you write up a short section like these about confirmation bias,
—> Can you give me another example for selection bias? the one we have is heavily overlapping with survivorship bias and might confuse the two
—> Can you write one for sterotyping?
—>Thank you! Sunk cost as well please
—> Thank you! Here's the chapter so far. What am I missing?
—> I do plan to draft an interactions section as soon as these are solid :) can you write matching sections for these:
—-
—> I am working on this chapter and I am still not sold on how it is structured. Can you give me 7 options of short outlines to chose from?
This is the same chapter as before, just wanted a fresh view on it so I started a new chat.
—-
—> I've just uploaded the current draft of the book. Can you review it? Please give me honest and constructive feedback; I want to make this text as useful for mission-driven workers as I can make it. Please note that I have not included the entire process in Part 3, so the draft is incomplete.
I wonder how many times I asked this!
—> Thank you! will you please write up a complete outline of the most useful book I could make? You can reorder and merge existing content as long as you let me know where that content is currently and where you think it should go. You can leave out content or indicate that some sections should be shorter. You can suggest adding content as well. Please include: an order of content including notes about whether sections are merged or removed. suggest a way to complete part 3, including remaining steps. I am particularly not sure how to incorporate case studies: an appendix? sprinkled throughout? attached to relevant topics?
September 2025
—> I am drafting an implementation chapter in Part 3 about how people ought to select technology and vet vendors once they have decided on a use case. This chapter will go right after the chapter currently labeled "Identify efficiency gains." Please draft a chapter based on the notes attached. Please take care to structure this chapter clearly. The chapter should be concrete and actionable. It could refer to the attached free resource on my website (the "mission first AI starter kit") which includes a vendor evaluation guide
This reference to the starter kit does not work.
—> I wrote this, which has a vendor evaluation guide in it. Can yu update the chapter to include any ideas or actions that aren't already in the chapter? If there are any conflicts, please let me know
I had to upload the starter kit directly.
—> Can you draft a chapter about implementation and change management? Here are my notes. There are notes about training throughout the current draft; those can be brought in to this chapter
—> Can you suggest an outline for the remaining chapters in part 3? I think there should be at least some content about how to learn and adapt. What else?
—-
—> Can you draft a chapter about how AI's dependence on its infrastructure can impact reliability of services? Here are some notes
—-
—> Can you draft a section of the sustainability section in this format:
Giving me ideas for and quickly prototyping new structures is probably where AI added the most value in this book draft.
October 2025
—> Here's my draft about measurement. I would like the Concrete Example near the end to identify each of the types of measurement that I mention in the chapter by name. You can change the format or content of the example and it can be longer if needed.
—-
—> Here's a section from the book on vendor evaluation (pasted) and a toolkit I wrote that contains a vendor evaluation script and rubric. I would like to make sure they are consistent. Can you identify things that I need to change? It's oK if the toolkit is more detailed, I just don't want them to contradict and I want them to cover the same general ideas.
—>For context, the toolkit is intended to be a lightweight companion to the book with a narrower scope (i.e. just the vendor evaluation portion). can you update your analysis to reflect that?
—> Thank you! Can you draft the additions you recommend for each?
—> Thank you! I've updated the toolkit as you recommended, but I am thinking of going a different way for the book. I'd like to add a section about "interviewing prospective vendors" that includes the approach in the toolkit (including linking to the toolkit) and connects back to the values list (e.g. bring the values list you created in Part 1. if you identified sustainability as one of your key values, environmental considerations, you of course should ask about it.) Can you draft such a section? It should be a similar length and format to other sections (e.g. **Developing your RFP)**
—-
—>I've uploaded the current version of the book as AGW 100125.pdf. I am working on my last susbtantial editing pass. I am noticing that some chapters, in particular in Part 3, seem overlapping. Can you help me identify where there is repetition and help me resolve it?
—>Can you read the implementation chapter closely and identify repetition from other sections of the book?
—>In general, I think the implementation chapter repeats too much. I think i wrote it before the others. Can you give me an outline of what should go in a chapter about implementation and change management, given what is covered in the rest of Part 3?
—>Yes, can you draft them?
—> I'd like to refer readers to different relevant sections of the book when it comes to training emphasis.
Can you add a parenthetical after each bullet point with the relevant chapter number(s) and name(s) for each of these? IF you are not sure, please leave it blank.
Strategic implications of AI for mission advancement
Risk management and governance responsibilities
Communication strategies for stakeholders
Ethical decision-making frameworks
Board and donor engagement
—> Yes, please (in response to it asking me if I wanted it to add similar chapter references to other sections.)
—-
—> I just uploaded a completed manuscript. Can you help me draft a conclusion? I am too close to the content to know what should go into it. Please given me 4 potential outlines for a short conclusion chapter.
—> I'd love a draft of option 1.
—> I am trying to make a a point in line with your outline, but I feel like I am not articulating it well. Can you help me make this clearer and neater? Maybe offer me 3 rewrites
—> "Rather than individual advice or a set of rules, you’ve built something much more durable over the course of this book: your own judgment. In an age of AI, building, exercising, and continuing to update your own judgment is a critical skill. When classifiers are labeling the world, recommenders are curating our options, and generative AI is creating so much of what we learn from, human judgment will be both more important, but also easier to relinquish. "
—-
—> I'd like to write a little note in the introduction to Part 3 that acknowledges that the plan I've laid out there is kind of aspirational: in particular, if followed to the letter, it would require a lot of staff time that smaller and resource constrained organizations may not be able to offer. If you cannot offer this much capacity, you can mitigate risk by starting slower (less risky tasks, fewer new implementations at once, longer pilots). Smaller organizations can also make use of deeper qualitiative feedback because they have fewer staff members giving feedback, whcih can reduce risk.
Can you draft something that makes more sense and matches the tone of the book?
—-
—> I want to add other bad actor behavior to the disinformation a deepfakes chapter. I've already added the defintions. Can you give me some ideas about how to weave jailbreaking and prompt injections into the chapter?
—> thank you! Can you draft these each and advise me where (exactly) to put them? *I tried this instruction to try to get more specific advice on where to place things and it worked: it gave me specific quotes that I could search for in the chapter.*
—-
—> I've uploaded a new, complete draft of the manuscript. Can you identify all of the supplemental materials and case studies that I mention in the text and any notes about how i described it? (e.g. what is in the document, what it will help with, what is it called)
—> Thank you! I would also like you to review the values chapter and identify which ones have case studies or real life (not composite or hypothetical) examples, and which ones do not.
—> For those that do not have case studies or examples, can you recommend 3 case studies or examples that I could include? I am looking first for real ones, not hypothetical or composite ones, with linked citations. I also prefer examples from mission-driven organizations, like non-profits, government, and social impact organizations. If you can't find one, just write "unknown."
—> I am interested in these: Earth Species Project, Climate Policy Radar, Environmental Defense Fund. Can you summarize and offer links?
—-
—> I am interested in adding real life examples for as many of the values chapters as I can. THese can be exmaples of mission driven organizations leveraging AI to pursue those values or undermining them. Can you identfy the chapters that do not currently have examples or case studies and recommend some I can include with linked citations? I would like real examples, rather than hypotheticals or composite ones: if you can't find a real example with a citation, just let me know that you couldn't find anything.
—> Thank you! Here's a bunch of info I got on the Glaze project you suggested. Can you draft a case study highlighting the chapter content and suggest a specific spot for it to go?
—> Thanks! Can you update the how it works page to explain how the perturbations work more specifically? YOu can use an example, and you can pull from these sources:
—-
—> I want to expand the scope of this chapter to include prompt injection attacks. Can you suggest the minimum amount of edits to accomplish this? I have already added the definition and expanded the main title, but I there are a lot of other references to the scope of the chapter as "synthetic media" alone rather than "bad actor behavior" in general.
November 2025
—> I think that this sustainable management practices chapter is missing some things. I am fine with referring to content from other chapters and how they should be wrapped into ongoing management, but don't want to duplicate content from other chapters. E.g. we've described feedback systems in other chapters, and we probably want to discuss how someone is in charge of reading and responding to that.
Here's what I have right now. Can you suggest what I might add?
—> Can you draft a book description for Amazon based on the AGW pdf?
November 28
—> I am introducing a framework into the book. I've appended the introductory text I wrote. I'd like to weave it in throughout the book. How should I style these mentions? what are my options? e.g. should I captialize them, bold them, italicize them every time I mention them? put them in quotes? I've appended examples of how I plan to integrate them. Please give me a table reviewing my options, focusing on pros and cons including what readers might be used to and what will aid comprehension, and make a recommendation.