Better Than One-Shot: How Step-By-Step Prompts Can Improve LLM Work
One of the techniques that improved LLMs between the GPT-3 era and today’s models is called “chain of thought.”
With chain of thought, the model breaks complex reasoning into explicit intermediate steps. Instead of jumping straight to a plausible answer, it splits the task into smaller pieces and dedicates processing time to each one. This mimics a pretty successful technique some of us were already using to get better responses from earlier models, and it reduces hallucination and improves response quality.
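To make that concrete, here is a minimal sketch of the two prompting styles using the Anthropic Python SDK. The ask() helper, the model ID, and the example question are my own illustrative stand-ins, not anything from this post; any chat API would work the same way.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def ask(prompt: str) -> str:
    """Send a single user message and return the model's text reply."""
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

# Jumping straight to a plausible answer:
print(ask("Is 3,599 a prime number? Answer yes or no."))

# Chain-of-thought style: ask for explicit intermediate steps first.
print(ask(
    "Is 3,599 a prime number? Work through it step by step: check small "
    "factors, then factors near its square root, then give a final answer."
))
```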
Here's the interesting part: even though newer models can supposedly handle complex, multi-step reasoning on their own, I consistently get better results when I manually walk them through my thinking process step by step.
Why does this work? I have two theories. First, I might be better at breaking down tasks in my specific domain than the model is at guessing how I want them approached. Second, and I think this is the bigger factor, I improve my own understanding as I go. Each response helps me clarify what I actually need: I am learning, and I can course-correct in real time if the model is getting off track or generic.
What This Looks Like in Practice
For example, yesterday I was toying with the idea of starting a TikTok account to talk about the content on this blog. I drafted a script for a video and wanted an opinion, but I don’t know anyone with a similar account, so I decided to ask an LLM for advice.
My first instinct was to paste the script and ask "how can I improve this?" (Asking for negatives helps avoid generic flattery, by the way.)
Instead, I started broader. I opened Claude and asked: "I am interested in promoting [the] work [in this project] on TikTok. Can you first give me some best practices for informational tiktok accounts and posts?"
The model returned detailed information about TikTok best practices: hook timing, video length, caption strategies, and more. I skimmed it, and the full response went into the context window where it could inform every subsequent exchange. The model also asked clarifying questions about my goals, which pushed me to think more carefully about what I actually wanted to accomplish.
I continued working through the task incrementally:
Discussed target audience
Refined my goals based on its questions
Shared my draft script for feedback
Applied the feedback and tried again
Got a second round of critique
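In API terms, that incremental workflow is just a conversation whose message history keeps growing, so every earlier exchange stays in the context window for the next one. Here is a rough sketch under the same assumptions as the snippet above (Anthropic SDK, placeholder model ID, paraphrased prompts):

```python
import anthropic

client = anthropic.Anthropic()
history = []  # the running conversation

def turn(user_message: str) -> str:
    """Add a user message, get a reply, and keep both in the history."""
    history.append({"role": "user", "content": user_message})
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=1024,
        messages=history,  # the whole conversation so far rides along on every call
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text

turn("I am interested in promoting my blog's work on TikTok. Can you first give "
     "me some best practices for informational TikTok accounts and posts?")
turn("My target audience is ...")                        # discuss target audience
turn("Given that, my actual goal is ...")                # refine goals
turn("Here's my draft script: ... How can I improve it?")
turn("Here's my revised script: ... What would you still change?")
```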
Testing the Alternatives
For comparison, I opened a new chat with the same model and jumped straight to the end goal: I provided the relevant context in a single message and asked it to deliver the final advice I'd eventually gotten through the longer process.
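The one-shot version, in other words, packs all of the context into a single request. Another hedged sketch, under the same assumptions as the snippets above:

```python
import anthropic

client = anthropic.Anthropic()
one_shot = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "I am promoting my blog's work on TikTok. My target audience is ..., "
            "my goal is ..., and here is my draft script: ... "
            "Give me your strongest critique and concrete improvements."
        ),
    }],
).content[0].text
```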
Both approaches returned useful feedback. But the step-by-step method offered major advantages:
What I Gained from Going Step by Step:
General knowledge I can reuse. The TikTok best practices overview wasn't tied to my specific script or even topic. If I pivot my approach or work on a different TikTok project later, I've already absorbed that foundational information. I can operate more confidently on this project, too, because I understand the “why” behind the advice and when it would change.
Clarity through dialogue. When the model asked me to define my primary goal and suggested three possibilities, I realized my actual goal was a fourth option it hadn't listed. That exchange helped me understand my own priorities, which shaped not only how I responded to that question, but also what I asked going forward and how I envisioned the project.
Practice and iteration. After getting feedback on my first script, I applied it myself and submitted a revision. The second round of feedback wasn't just "looks great!"; it offered additional refinements. Doing the revisions myself instead of just requesting revised scripts gave me hands-on practice applying the principles and a deeper understanding of the advice, and it naturally led me to evaluate whether I agreed with the suggestions.
What the One-Shot Approach Did Better:
Speed. If you just need to complete a task and don't care about learning the underlying principles, asking for everything at once wins on efficiency.
Questioning assumptions. In response to my custom instructions (which request noting assumptions I've made), the one-shot response immediately flagged that I was assuming TikTok was a good fit over other platforms. Good point! The step-by-step chat got so focused on TikTok specifics that it never questioned the platform choice.
When to Use Each Approach
The one-shot approach makes sense when:
You're on a deadline
You already understand the domain well
You mainly need execution help, not strategic thinking
The task is straightforward
The step-by-step method works well when:
You're learning a new domain
The task is complex or unfamiliar
You want to understand the reasoning, not just get an answer
You'll do similar tasks in the future
You need to evaluate and potentially disagree with the advice
And when you use the step-by-step method, take a moment to zoom out at the beginning. I took one step back (“tell me about this domain” before giving it a task), but I could and probably should have taken another one or two (“Here’s my goal. What are my options?”).
—
LLM disclosure: In addition to the interactions described in the post, I drafted an outline of this post that started very outline-y (“explain what chain of thought is”) and got more detailed as I described my own experience. I asked Claude 4 Sonnet: “Here's an outline/partial draft of a blog post. Can you help flesh it out?”