Prompts used to create “Amplify Good Work”
This page is open during construction :)
This page discloses the prompts I used to help me create “Amplify Good Work.” I'm sharing them for transparency and to demonstrate that using AI as a tool requires judgment, effort, and intention. You’ll see that it’s a lot more work than “Hey Claude, write me a book,” but also a lot more useful.
As I recommend in the book and to anyone who will listen, I did not use any of the text provided by an LLM without carefully reviewing and editing it.
Some notes on the presentation of this information:
For Claude prompts: I tried, to the best of my ability, to present them in the order I submitted them. This is challenging because the prompts are organized by chat, and Claude’s UI shows only vague date labels (e.g. “7 months ago”). I tried extracting the prompts with more precise dates from the JSON file of my conversations, but the export has no field connecting conversations to projects (so I couldn’t extract them with code), and the file was too large to open on my computer, so I couldn’t go through it manually.
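If you want to try the same extraction on your own export, the core logic is short. The sketch below assumes, rather than knows, the export's shape: the field names chat_messages, sender, and created_at are guesses at the schema, and, as noted above, nothing in it can link a conversation back to a project.

```python
import json

# A tiny sample shaped like a *hypothetical* conversations.json export.
# The field names here (chat_messages, sender, created_at) are assumptions,
# not the documented schema of any real export.
sample = """
[
  {"name": "Chat A", "chat_messages": [
    {"sender": "human", "created_at": "2025-05-05T10:00:00Z", "text": "first prompt"},
    {"sender": "assistant", "created_at": "2025-05-05T10:00:05Z", "text": "reply"}
  ]},
  {"name": "Chat B", "chat_messages": [
    {"sender": "human", "created_at": "2025-05-01T09:00:00Z", "text": "earlier prompt"}
  ]}
]
"""

def extract_prompts(raw):
    """Return (timestamp, text) for every human message, oldest first."""
    conversations = json.loads(raw)
    prompts = [
        (m["created_at"], m["text"])
        for conv in conversations
        for m in conv.get("chat_messages", [])
        if m.get("sender") == "human"
    ]
    return sorted(prompts)  # ISO-8601 timestamps sort correctly as strings

for ts, text in extract_prompts(sample):
    print(ts, text)
```

For a file too large to load at once, the same filtering logic can be run over a streaming JSON parser instead of json.loads, but the project-to-conversation gap remains either way.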
When ChatGPT updated to GPT-5 and removed past models, it unfortunately also removed the labels indicating which model was used for each prompt. Since those models are no longer available, though, perhaps that information is less relevant than I had assumed.
Because Claude doesn’t provide precise dates, it was impossible to line up the ChatGPT and Claude prompts chronologically, so they are presented separately.
These prompts are reproduced with their errors intact: if I made a spelling, punctuation, or strategic error, I kept it, because we don’t yet know how errors like these influence model output (or don’t). I did not include all of the context I added to these prompts; where a prompt doesn’t describe its own context, I added a note about what it was. The context was often entire chapters, pages of notes, or even the current draft.
Somewhat embarrassingly, it didn’t occur to me to use a large language model until I had been working on this project for more than a year. That is why there aren’t many developmental prompts early on; the prompts jump straight into specifics. There are, however, some later developmental prompts for chapters I hadn’t yet written.
“—> ” Indicates a new prompt
—— Indicates a new chat started on the same day as the previous one
ChatGPT Prompts
May 1
—> I am writing a chapter about how to use LLMs to augment work in mission driven organizations.
Right now, I am making the argument that (as opposed to trying to automate as much as possible or trying to avoid AI as much as possible), augmentation allows organizations to take advantage of the opportunities of AI while mitigating the risks. Under opportunities, I have efficiency, new products, process improvements, accessibility, and custom training. Under risks, I talk about how people bring human connection and trust, innovation, deep expertise, context sensitivity, uncertainty, and negativity to the table.
1. Can you improve the framing of this information? I feel like "risks" isn't the right word, but it feels balanced against "opportunities" and I am not sure what the right word would be. Selecting a new framing for the entire section is allowed. Can you give me 5 ideas for alternative ways to frame this?
2. What am I missing on my lists? What else can I cover that augmentation can offer?
—> Thanks! I had forgotten about moral imagination, and i added the organizational memory piece, too. Can you review what I have so far and make some suggestions about how I can improve it? Please don't spare my feelings-- I want this to be as good as it can be.
I would like to add a section about values that organizatons need to balance when creating an AI augmentation strategy (e.g. privacy and data security, governance and accountability, etc.). Can you recommend where I might put that?
May 5
—> Here are some thoughts I have about content that should go in the book. Right now, I am torn between trying to lay out the entire thing in the form of steps (inspired by "Write Useful Books") and keeping related content together.
Can you give me a couple options for how I can organize this content? For each, please give me a brief summary of the argument, an outline and, if there are any additional topics that outline provides space for, note what they are and where they could go. When reviewing the content, I'm submitting, please ignore the order it is currently in and any signposting (e.g. "in the next chapter," or "Introduction:") so that I can get several views on how this content could flow and work together. If there is content you think does not fit, you don't need to include it, just let me know what that approach left out.
—> Great! Thank you! I like the step by step best, I think. Can you give me a couple options for this approach?
—> Thank you! I think I'll do a version of option A. I want this book to be as useful as possible for readers. Here are some ideas for supplementary content or appendicies. I'd like to know what you think would better serve as linked content and which should be written in the back of the book as an appendix. I'd also like to know what else I could do that would make the book more useful for readers in mission driven orgs (nonprofits, gov/agencies, and b-corps). feel free to suggest an unreasonable number of ideas-- some of them I can use as content marketing after publication, too.
1. draft survey
2. Sankey vignette
3. Queries I used while writing this book.
—> Thanks! my website is drkarenboyd.com. I would like to update it to center around this book and some of this supplemental content. What suggestions do you have?
—> I love it, thank you. I am looking for names for the book and/or approach that I can center the website around.
Here are the thoughts I have.
Should reflect empowerment/augmentation (people plus AI, not AI replacing human work)
Should invoke mission/doing good/service (for leaders/workers in non-profits/gov/b-corps, not solely-profit-motivated orgs)
Should look at ethics + effectiveness.
Here are ideas I've had: "The Mission and the Machine," "Amplify good work" with the subhead "effective, ethical AI" or "helping non-profits thrive in the future of work.
I am interested in book titles, website headings, taglines for me/my services.
May 8
—> I am writing a chapter about identifying which business processes an organization has, what the tasks involved with each business process are, and what they need to consider when deciding whether and how to automate those tasks. What considerations should I include when they are deciding whether and how to automate?
—> thank you! What are some judgment calls in non-profits (etc) that AI would be good at making and some examples where people should be making the judgments?
Note: this query didn’t produce any text that I used, perhaps because I didn’t give it enough context.
—> I have written a chapter and included a checklist that people can use to help automate tasks. I've attached the chapter, which specifies with the checklist should be used for and what it should do. Do you think this checklist is possible to make, and what should I keep in mind? Can you create a document with a draft checklist in it.
—> Thanks! I imagine a checklist separate from the green/yellow/red flag system, with different questions. Once a task is identified as all green, what else do they need to think about when deciding whether, when, and how to automate it?
Note: this checklist was so bad as to be discouraging. I abandoned the idea of a checklist after reading it. Probably not the right choice, but I definitely needed to put in more information.
May 11
—> Can you summarize the copyright law in the US around AI generated content?
May 14
—> Can you create a document summarizing peer-reviewed research around brainstorming with LLMs?
- What is the range of findings around idea quantity and quality? (For each paper that scores quality, please describe how quality was assessed: by whom? how were they related to the project? were they blind to whether AI was involved or not?)
- Are there other ways that the brainstorming sessions were evaluated, how were they measured, and what were the results?
- what do researchers attribute those results to? (if they are bad, why do they think LLM-augmented teams did worse? if they are good, what to they attribute that to?)
- Were there cases where interventions allowed LLM augmented brainstorming sessions to improve their results?
Please cite each study that you included.
—> Thank you! does this include the following studies? Why or why not? If they should be included, please add them:
https://arxiv.org/pdf/2406.06561
https://arxiv.org/pdf/2310.10677
https://arxiv.org/pdf/2410.11877
https://www.mdpi.com/2504-4990/5/4/65
https://dl.acm.org/doi/pdf/10.1145/3613904.3642414
study or studies by Jeremy Utley, Kian Gohar, and Henrik Werdelin
—> thanks! There are some weird artifacts throughout the document and the outcome column aren't very clear. e.g. "Originality +0.17 D vs. baseline citeturn7view0" which group is the originality higher for? can you clear this up?
—> Hi! I would like the citations to remain, but right now they are gibberish. Can you regenerate the citations?
—> Thanks! Based on the results of what you read, can you please correct or confirm this thesis:
AI on its own can generate a huge volume of ideas, but the quality isn't great/(consistent?). When humans generate with AI, they often do worse, perhaps because they don't iterate enough or build their own ideas off of AI's--they treat it like google instead of a brainstorming partner. You can fix this by 1) asking the AI several times, rather than just once. 2) seed it with example ideas 3) delay the use of the LLM until the people have had time to ideate on their own 4) use worse AI-- having but great but wildly varying answers gives the humans ideas to build off of, but they stay engaged.
Please identify where this paragraph is contradicted by the evidence and link citations to the parts that are correct. Help me edit this so it is accurate.
—> thank you! This is extremely helpful. The idea about the low quality AI comes from a Dell'Acqua study that gave recruiters (not brainstormers) good AI, worse AI, or no AI. AI helped, but bad AI helped the most. I'll make sure this is stated speculatively.
—> Can you explain:
1. What is GPS scaffolding
2. what was the domain for brainstorming in each of the studies? What question were they asked to find solutions or ideas for?
May 14
—> Can you read this document and tell me if this summary is accurate?
https://www.ncbi.nlm.nih.gov/books/NBK594445/
Mayo Clinic has implemented a carefully designed framework for using AI with sensitive patient data. Rather than sending patient data to third-party LLM providers, Mayo created what they call a "data under glass" approach in partnership with Google.
This system includes two key components:
The Mayo Clinic Cloud - A secure environment housing patient records
The Mayo Clinic Platform - A controlled enclave where Mayo can share de-identified clinical data for advanced analytics and AI development
What makes their approach distinctive is that algorithms are permitted into the enclave, but sensitive patient data never leaves Mayo's secure environment. This federated learning model allows Mayo Clinic to leverage AI capabilities while maintaining strict privacy and security protocols for sensitive patient information.
As explained by Mayo Clinic representatives: "With algorithms permitted into the enclave and data never leaving the home institution, the Mayo-Google partnership illustrates an approach to how health systems and technology companies can partner to facilitate knowledge generation while addressing privacy and cybersecurity concerns" (National Academy of Medicine, 2022).
Is there anything that I can add to or change about this summary to better illustrate data privacy and security solutions for mission-driven organizations with sensitive data?
—> This is great! Can you make these edits to the summary and replace jargon (e.g. hashing, least-privlege) with a plain ENglish explanation and put the jargon in parentheses (or remove it altogether?)
May 22
—> I am writing a short section or chapter on how to choose an AI consultant. I want to make sure I have everything in this section that could help someone bring in an external expert on AI who can work with internal strategic and legal experts to help guide the development of an AI strategy. Here's what I have so far. Can you suggest more things I could include with brief explanations of each factor and why someone should look for a consultant with that quality?
June 5
—> Hi! I am writing values chapters for the book. Here's what I have so far. What other chapters should I write? Can you give me a sentence or two about how they relate to AI in mission-driven organizations?
I'm not sure if you can read the doc, but heres what I have so far:
Privacy and Security
authenticity and trust
Effort and Craft
Deep Fakes & Disniformation
Inclusion and Accessibilty
Replacement and Deskilling
Error and Baised Error
Governance and Accountability
Ownership and Intellectual Property
Sustainability
Anything that isn't on this list?
June 7
—> I am aware of several other books about AI for mission-driven organizations. I'd like to to ame a list of such books and as you to review the current draft and explain to me whether how it overlaps and is different from each of them? Which ones is it most similar to, and how can I differentiate it (including by changing the draft)?
—> Thanks! Can you add these titles: [three specific books I found in my own search]
June 7
—> I am writing a chapter in the strategic implementations section about bringing together the right people. I think executives need to be there, IT if they aren't already execs, key enthusiasts and skeptics maybe, and any external experts: legal, strategy, and technical. Can you draft a chapter about that? I've pasted in what I have so far:
—> thank you! Can you write this more in the style of the rest of the draft? (fewer bullet points and tables, more paragraphs?)
[Note: I should have started a new chat here, but I did not] I have a chapter on how to select an AI strategy consultant, and I'd like to make sure I've got all the red flags and positive signals. Can you give me a list?
June 8
—> I am trying to make a decision about whether to automate a particular task. Here's what I am thinking so far. How should I go about making this decision? What framework should I use? I have a brief one at the end of these notes, but I want to make sure it is complete.
[then, I asked another model:] I am trying to decide whether to automate a particular task. I'll attach what I've been thinking. I want to make sure I consider values and how they are implicated in my particular mission and work. I have started a framework at the bottom of my notes. Can you help me flesh it out>
June 9
—> I'd like to write a values chapter on social isolation and disruption. Here are the notes I have. Please make it match the structure and tone od the other values chapters.
—> I don't want lists and tables please. The definitions can be lists, but everything else should be in paragraphs
July 23
—> I am writing the last section of this chapter, talking about the overlaps in human and AI weaknesses so that users can be aware. Can you create a list of these intersections? Here's the introduction to the section:
July 30
(for some reason, this chat still has a label: I used o3 pro. I probably tried another model for the first attempt)
—> I am writing the last section of this chapter, talking about the overlaps in human and AI weaknesses so that users can be aware. Can you create a list of these intersections? Here's the introduction to the section:
—> Thank you! Can you merge these ideas into your output if they arent already there? Feel free to expand your suggestions to look at cases where one side's strengths become weaknesses when combined with the others (e.g. RLHF makes AI aligned with what we want, but when what we ask for is biased, we get bias back without realizing it (3 in the list below))
—> I would like to complete this chapter with an discussion of the intersection of AI and human weaknesses. Before I do that, I want to make sure that this text is clear and comprehensive. Should I group some of the cognitive biases together (e.g. anchoring and confirmation bias?) I would like to group them together in a way that makes sense. Please let me know which ones you think I should group together and draft text and examples to match the others. It's OK if the text for the sets are a little longer. If there are suitable existing names for the groups, please use and cite that name. Otherwise, make up your own and make it simple.
—> Can you write 2-4 sentence explanations of the following biases along with examples of how it could play out in mission driven organziations? Overconfidence, Apophenia / Spurious Correlation
July 30
—> Here is a chapter. I am not sure if I like the structure. Can you give me some ideas about how I could make the structure clearer and more useful for readers? Please summarize your ideas in 1-3 sentences each before executing and I will pick some options to create outlines from.
Aug 9
—> I am working on a section in Part 3 about identifying new tasks or processes that mission-driven organizations can add with AI (not just LLMs, but AI broadly.) Here's what I have so far and some additional notes. Can you create a detailed outline of this section?
—> I added an fourth question to the flagging stage. can you add it?
Aug 11 (GPT 5)
[I asked Claude Opus 4.1 the first two prompts as well (verbatim) to get a different perspective. In this case, I like ChatGPT’s answer better, although mainly because it is closer to what I already have and would require less work to implement. It made me feel as if ChatGPT understood what I was trying to do more than Claude did. Don’t read too much into this: to know whether Claude or ChatGPT is better at this type of task, we should do repeated trials in new chats and compare a larger sample of outputs.]
—> I've just uploaded the current draft of the book. Can you review it? Please give me honest and constructive feedback; I want to make this text as useful for mission-driven workers as I can make it. Please note that I have not included the entire process in Part 3, so the draft is incomplete.
—> Thank you! will you please write up a complete outline of the most useful book I could make? You can reorder and merge existing content as long as you let me know where that content is currently and where you think it should go. You can leave out content or indicate that some sections should be shorter. You can suggest adding content as well. Please include: an order of content including notes about whether sections are merged or removed. suggest a way to complete part 3, including remaining steps. I am particularly not sure how to incorporate case studies: an appendix? sprinkled throughout? attached to relevant topics?
[because I liked ChatGPT’s answer better, I followed up]
—> Thank you! Can you package this into a step-by-step guide for me (the author) to make the changes you are suggesting? I can't use the links to the text that you have included for some reason, so please indicate where in the text using excerpts (e.g. "from 'A policy that covers risk assessment. . . ' to '. . . improve efficiency without compromising sensitive data.")
—> Thank you! Can you explain what you think should go in the AI primer section that isn't currently in the section starting with "Artificial Intelligence, or AI, is a broad term . . " and ending with ". . . by which point they are already late?" Or is it similar content arranged differently?
August 12
—> I am drafting an implementation chapter in Part 3 about how people ought to select technology and vet vendors once they have decided on a use case. This chapter will go right after the chapter currently labeled "Identify efficiency gains." Please draft a chapter based on the notes attached. Please take care to structure this chapter clearly. The chapter should be concrete and actionable. It could refer to the attached free resource on my website (the "mission first AI starter kit") which includes a vendor evaluation guide.
—> I am interested in the flow chart, thanks [it suggested a flow chart, but it was just a single line of boxes, so I did not use it]
—> can you also explain more about custom development. I understand that people often make wrappers or custom data storage and analysis solutions that protect sensitive data. What is the benefit of a wrapper? what other types of custom development exist? Can you point me to some real use cases, preferably in mission driven organizations? [normally, I would have told it to give me linked sources for the case studies, because it seems to reduce hallucinations. However, this was my first time using GPT 5 for the type of task that past models like to hallucinate on, and GPT5 is supposed to hallucinate less, so I wanted to test it.]
August 14 ChatGPT5
—> Can you draft a chapter in part 3 (a concrete actionable chapter similar to the ones in there already) about Measuring impact and value? Here are my notes. Please feel free to add or change the structure and add more content than I have. If you bring in sources and citations, please include that content in brackets and include a linked citation to the source you're relying on.
[I used OpenRouter for this query because my workspace went away. This means that it didn’t have access to my project content, and I just added my notes and a copy of the manuscript as context. Annoyingly, it gave me an outline instead of a chapter, or assumed that (despite all the other chapters being in long-form prose) I wanted this one to be entirely bullet points.]
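(For anyone unfamiliar with OpenRouter: it exposes an OpenAI-compatible chat-completions endpoint, so a request looks roughly like the sketch below. The model name and prompt text are illustrative, not my actual query, and the sketch only builds the request rather than sending it.)

```python
import json
import urllib.request

def build_openrouter_request(api_key, model, notes, manuscript):
    """Build (but do not send) a chat-completions request for OpenRouter.

    OpenRouter's endpoint is OpenAI-compatible; the model name and
    prompt wording here are illustrative placeholders.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are helping draft a book chapter in long-form prose."},
            {"role": "user",
             "content": f"My notes:\n{notes}\n\nCurrent manuscript:\n{manuscript}\n\n"
                        "Please draft the chapter in paragraphs, not bullet points."},
        ],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
```

Because the endpoint is OpenAI-compatible, the same payload works with OpenAI-style client libraries by pointing their base URL at OpenRouter.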
August 15
—> Can you read this chapter draft and fill out the rest of the outline?
It gave me a draft that was still mostly an outline, so I followed up:
—> I would like the full narrative prose draft, thank you
It still gave me lots of bullet points, so I gave up on this line of questioning.
August 16
—> I would ideally like to add a case study to every values chapter, like there currently are in Privacy (Mayo Clinic) and Disinformation (the school principal). These can be positive or cautionary tales. Please give me three ideas for each chapter that are real case studies, ideally about mission-driven organizations. I would rather have an example from a for-profit company than a composite case study. Please include links for each case study that I can use to learn more about it.
It didn’t respond to all the chapters, so I followed up:
—> Thank you? can you see the sustainability; error and biased error; governance and accountability; and ownership and intellectual property chapters? If so, can you offer some case studies for those, too?
——
—> Can you draft a chapter based on this outline?
—> can you give me some examples of adversarial attacks on production models? What could happen, why, and are there any types or circumstances where it's more likely or more harmful for mission-driven organizations
——
—> I would like to finish this chapter. I've attached what I have so far. I've also attached more information about adversarial attacks that you can use to finish that section. PLease draft any additional sections in line with the structure and tone of what I have so far. Please include a case study with linked citations (similar to the one in the disinformation chapter or privacy and security chapter.)
—> I'd like to condense the section on adversarial attacks down to a similar format and length to the other subsections (e.g offline modes and model drift)
—> Can you put this in prose like the rest of the chapter draft and only use bullet points when necessary?
—> The audience here is not a technical one. We don't know what hygiene means in this domain, what a trojan is, or what half of the suggested responses are. Please rewrite this for a lay audience.
—> thanks. What else should go in this chapter about infrastructure dependence and reliability?
——
—> I have separate chapters about 1) equity and justice and 2) error and biased error. Can you merge them?
I didn’t end up taking this approach, but being able to quickly see it without having to combine the chapters by hand made it easier to let the idea go :)
August 17
—>I am adding to the readiness chapter information about identifying data risks, and I want to make sure my list of risky data categories is complete. What am I missing?
——
—> I have uploaded the current draft of the manuscript. I would like your thoughts on its organization, both globally and on a chapter by chapter basis. Please be honest: I want to make this as good as it can be.
—> Thank you! I have many follow ups, but I will start with these:
Please explain or demonstrate these suggestions: Add a 1-page “strategy stack” visual early in Part 1: Mission & Values → Governance/Policy → Training → Use-Cases/Workflows → Tools/Vendors → Measurement. You already imply the stack; visualizing it will orient readers. Unify the decision lens early. In Ch.3 you describe the aversion↔automation spread driving inconsistent bottoms-up use. Pull that framing forward (end of Ch.1) and reuse it as a recurring figure/caption. Make Part 3 the payoff. Your intro promises a practical, step-by-step build of a values-driven augmentation strategy. Deliver a tight sequence with worksheets (you already point to downloadable materials in Ch.3).
Please offer a suggestion for the standardized outline for the values chapters And please let me know where the forward references are by listing quotes (e.g. "beginning of the sentence. . . end of the sentence"
—> Can you say more about how your suggestion for part 3 differs from what I have already?
—> Hi, I am sorry, I don't really understand what you're saying. can you rephrase in more concrete ,less jargony way? for example "stage-gates," "pipeline" "roll back triggers" "fallbacks." I added some custom instructions that will help you write in the way I am hoping you will can you read them? if so, please reproduce your advice in language following those guidelines
——
—> I have been thinking about a way to classify multi-prompt approaches for LLMs.
Curation: ask for many ideas and select one
Carving: request something long and broad, then chat back and forth to make it more specific and relevant. Like a sculptor carving from a marble block everything that isn't the statue. best for creative projects. Can help with complex projects.
Construction: breaking down a project into smaller pieces and delegating some to the AI or only letting the AI do scaffolding type tasks (e.g. here's my list of sources, what else should i read?) good for complex tasks, and also can help make risky tasks less so.
Can you critique this framework?
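(A side note for programmer readers: these patterns map naturally onto code. Below is a minimal, hypothetical sketch of the curation pattern. The ask_llm stub stands in for a real model call, and the scoring function stands in for the human judgment that would actually do the selecting.)

```python
def ask_llm(prompt, n):
    """Stand-in for a real LLM call: pretend the model returns n candidate ideas.

    In practice this would be one API call asking for n distinct ideas.
    """
    return [f"idea {i}: {prompt[:20]}..." for i in range(n)]

def curate(prompt, n=10, score=len):
    """Curation: ask for many ideas, then keep the best one by some score.

    The default score (length) is only a placeholder; in the book's sense,
    the real "score" is a person reading the candidates and choosing.
    """
    candidates = ask_llm(prompt, n)
    return max(candidates, key=score)
```

Carving and construction don't reduce to a one-liner as neatly, since both depend on back-and-forth conversation, which is part of why this page discloses the prompts themselves.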
—> Thank you! Can you give me a few examples of where you might use a carving approach? (the use of “you” here is funny in retrospect)
——
—> I have a chapter that I drafted and like OK. I have been giving a talk on this topic and I present the same information in a different order. Can you take a look at the chapter and the slides then give me your thoughts on the best way to organize this material for the book?
—>Can you draft a chapter that rearranges things in the way you've suggested? If there are any missing pieces that I haven't already written in one of the two documents I sent, you can draft it, but please put that new content in brackets.
——
—> Check out the current manuscript draft and tell me what is working (what should i not change, if anything) and what I can change to make the book better. Be direct and concrete, but look carefully for opportunities to improve. It is important not just to my career and goals, but for the impact that I want to have on the world that I produce a book that is as useful as possible. If you cannot read a portion of the text, please let me know. If your context window isn't large enough to cover the entire draft, start with Part 1, and we can do the latter two parts in separate chats. Thank you for your help!
As you can tell, I was really trying to get feedback on the manuscript and its structure on Aug 17, and I wasn’t getting great results. I realized later that this is because it was treating documents uploaded to Projects differently than text copied and pasted into the prompt: it was searching them, rather than ingesting the entire thing.
—> I've added more on the jagged frontier. Can you improve this further?
—> Can you point these spots out to me? Give me words I can use the "find" feature. [quote from output]
—> Can you show me what you mean by this? [quote from output]
—> Can you draft something for the RACI chart you suggested?
—> I really like the idea of calling out quick tips in sidebars or boxes. I would like some advice about 1) which prompts you think I should call out in this way and 2) where I should put them. I'd also like to do this with custom instructions and other tool-specific advice. I've added a few additional quick tips I'd like to add. Can you suggest additional ones, and where they should all go?
—> Thank you! Can we do part 2 now, or should I move to a different chat?
—> I think i'd like to put a lot of the actionable/wrksheet/checklist type stuff into Part and keep values chapters focused on values in a way that will endure after tech changes, even a lot. Can you review Part 3 with this in mind?
——
—> I would like two lists: 1. All of the supplemental materials (worksheets, checklists, surveys, etc) that I refer to in the text: I want to make sure I write and post all of them! 2. All of the content about training throughout the text,
—> For all of the training content, can you create learning objectives and sort them into 3-7 clusters that I can write a workshop curriculum for?
August 27
—> Can you list, explain, and link the methods that the makers of LLM models use to pursue alignment? for example, system prompts or RLHF
August 28
—> Can you help me create as complete a list as possible here?
August 29
—> I have two chapters that are very similar. I will paste both of them in here. I would like one single chapter or section that I can use to introduce Part 2, values and AI. Please ask me any questions you need to to clarify the task before executing.
—> Hm, I like the sector information, but it doesn't seem to belong in the intro to the values chapters. I think it should be two chapters, and I only need to rewrite the introductory paragraphs and title to that chapter. Can you draft an introductory paragraph or two to the chapter that presents sector by sector risks.
August 30
—> Here's a draft of an AI readiness checklist. Can you review the chapters of the book draft, especially the Readiness and values chapters, to identify additional sections and check boxes? It is OK to revise or reword existing checks as well.
September 13
—> I am moving some topics around, and need to rewrite the equity and justice chapter to include prejudicial outputs along with the rest of the types of equity and justice threats. This chapter should be parallel to the other values chapters, and I've appended things I'd like to include (and where in the structure, if relevant.)
—> Can you review this article and explain to me :
the definition of automation bias
a brief history of its study
The methods we use to study it
effect sizes
anything we've found that mitigates it
——
—> Can you summarize research findings about algorithmic management? In particular: where has it been documented? (I know of gig, domestic, retail, service, warehouse, and teaching work-- is there more? please give me citations if so). What kinds of things are managed with algorithms? What are the documented consequences of algorithmic management, particuarly on low power workers and people outside of the organizations?
—> I am interested in fleshing out the justice threats. I have done the first two. Can you draft the others?
—> what effect sizes are documented in this article? https://pmc.ncbi.nlm.nih.gov/articles/PMC3240751/

