Custom Instructions: When, Why, and How to Use Them 

When you type a prompt and hit enter, your text isn’t the only thing going in. Behind the scenes, your LLM provider adds a system prompt that tells the model its name, the date, safety rules, preferred style, and more. Anthropic publishes the ones they use for Claude so you can peek behind the curtain.

In addition to the system prompt, you can create your own custom instructions (OpenAI’s term) or persistent context (Anthropic’s term). Think of them as instructions or context that apply across all of your prompts. Unlike one‑off prompts, these instructions persist until you delete, temporarily disable, or overwrite them.
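Conceptually, custom instructions behave like an extra system message that gets prepended to every conversation before your prompt. Here is a rough sketch in Python; the function and the sample system prompt are illustrative, not any vendor’s actual API:

```python
def build_conversation(user_prompt, custom_instructions=None):
    """Assemble the message list a chat model actually receives.

    The provider's own system prompt comes first, then any custom
    instructions you saved, then your prompt for this turn.
    """
    provider_system_prompt = "You are a helpful assistant. Today's date is ..."
    messages = [{"role": "system", "content": provider_system_prompt}]
    if custom_instructions:
        # Custom instructions persist: they ride along with every
        # conversation until you delete, disable, or overwrite them.
        messages.append({"role": "system", "content": custom_instructions})
    messages.append({"role": "user", "content": user_prompt})
    return messages

convo = build_conversation(
    "Draft a donor thank-you email.",
    custom_instructions="Write at a 7th grade reading level. Avoid jargon.",
)
```

The point of the sketch: your one-off prompt is only the last item in the list, which is why saved instructions shape every response without you retyping them.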

You can add user context (information about you: “I am a fundraiser supporting X organization and Y cause among Z audience”), stylistic rules (“write at a 7th grade reading level, avoid or define jargon, and don’t use X, Y, and Z phrases”), or task defaults (“put output in an outline, always link sources”).

For example, you might ask it to avoid certain words, or consider your strengths and weaknesses when you ask it for advice. If you always use it for work, you might put a bunch of information about your job in there. If you use it for school, giving it context about your age, career goals, or major might improve your results.

What I had

I had played around with custom instructions before I sat down to write this and learn more.

Before starting this article, I had:
  • ChatGPT: Memory turned ON. No custom instructions. Used for work only.
  • Claude: Memory turned OFF. Custom instructions in place. Used for work and personal stuff.

Here’s my “before” custom instructions for Claude:
Please do not use em dashes. Please never use hypophora (a figure of speech in which the speaker poses a question and then answers the question), "at the same time," "significant," "utilize," "in today's world" (or similar), "pivotal," "intricate," "realm," "tapestry," "usage," "should," "must," "showcase," or emoji bullets. Don't use "most," or any indication about what research suggests, supports, or proves without linked citations.

Best Practices

We don’t have a lot of tested information about what makes good custom instructions, but here’s the best advice I can find:

  1. Specificity beats length. Bullet points are a good option here.

    • Use a delimiter between instructions. OpenAI specifically recommends putting line breaks between instructions or using markdown to create an outline.

  2. Be direct. This isn’t the place for hedging. Telling it explicitly what to do works better than “should,” “please,” or “prefer” if you want your instructions followed consistently.

    • Avoid or explain complex phrasing. I’m not sure why, but people seem to get better results for custom instructions specifically by breaking down concepts and processes.

  3. Positive examples plus negatives. E.g. “Select verbs like ‘use’ and ‘show’; avoid ‘utilize’ and ‘showcase.’” Adding what to do seems to guide style better than bans alone.

  4. Consider process instructions. Do you always want a certain format or citation style? Do you want it to always ask you follow up questions? Do you prefer bullet points and tables or prose? You can use custom instructions to fit your LLM into your workflow better.

  5. Try “if, then” pairs. Especially if you want it to follow steps! e.g. “When I give you X, return Y.” or “Trigger: X input. Instruction: Do Y.”

  6. Add your own ethical guidelines. You can try to use custom instructions to create your own guardrails.

  7. Experiment with new custom instructions. This is easier said than done, so what I usually do is reevaluate regularly and try something new for a week or so.

  8. Try GPTs, Projects, or Gems. Maybe you have an instruction set that applies to a lot of your work, but not all of it: a recurring task, a long-term project, or even simply work vs. personal use. Custom GPTs (ChatGPT), projects (Claude or ChatGPT), and gems (Gemini) let you give your chatbot instructions that apply to some sets of prompts, but not all of them.

What I changed

Here are the new instructions I’ve been testing for the last week.

  • Please do not use em dashes unless they are grammatically the best choice. 
  • Never pose a question and then answer the question. Avoid "at the same time," "significant," "utilize," "in today's world" (or similar), "pivotal," "intricate," "realm," "tapestry," "usage," "should," "must," "showcase," or emoji bullets. Use simple, concrete language instead: "use" instead of "utilize." 
  • Don't use "most," or any indication about what research suggests, supports, or proves without linked citations. 
  • Use examples whenever possible and include linked citations. 
  • Prefer peer-reviewed sources, especially meta-reviews or systematic reviews. Please check to confirm that the sources you cite are primary sources: if the source is referring to a fact coming from another source, please cite the original source instead. 
  • If you see any assumptions I am making in my prompts, please note that assumption in your response, or ask me to confirm whether I want to use that assumption before answering. 

Conclusion

Custom instructions are a great way to align an LLM with your voice, audience, and values. Use them for recurring, high‑stakes tasks; disable them when you need a clean slate. Start small, test often, and experiment. Always keep your critical thinking active when reading LLM outputs!

LLM disclosure:
I asked Claude Opus 4 and ChatGPT 4o the same question: “What are best practices for custom instructions for LLMs? When should people use them and how?”

Then, I asked ChatGPT 4o: “I am writing a blog post about whether, when, and how people can and should use custom instructions. Here's what I have so far. Can you write a draft of the post for me? Feel free to update the paragraphs I already have if a different structure would help communicate the points.”

This prompt gave me an unexpected output (there were no custom instructions in it). For example, when describing system prompts, it said that the “provider will prepend” the instructions. “Provider” annoys me because it could mean almost anything. “Prepend” is a good word, a perfect description of what’s happening, but there are simpler, clearer ways to express it. It also gave me mostly tables, which I get, but they aren’t my style. So I did a lot of editing on this one.

And finally: “Can you create a 1:1 image for this blog post that does not have text on it? The subject should be legible at thumbnail size.”

Next

Ethics and LLMs: Ownership and Intellectual Property