“Did you use AI?” Is the Wrong Question

When you publish a book, submit a paper, or upload a product description, you’re increasingly asked a version of the same question:

“Did you use AI?”

Amazon will ask me this when my book comes out: Did you use AI tools in creating texts, images, and/or translations? Many journals, conferences, and employers now ask similar yes-or-no questions.

For the narrow task of flagging whether something is human-made or machine-generated, that binary might be good enough. But when we talk to colleagues, staff, clients, or partners about how we actually work, “yes” and “no” collapse too much detail to be useful.

Using a Large Language Model to fix a typo, to pressure-test an argument, to get a few options for different outlines, or to thoughtlessly generate a quick draft you never review: these are not the same activity, but they all produce the same checkbox outcome.

We need a shared language for the different roles AI can play in our work.

Below is one way to name those roles. You don't need to adopt this taxonomy wholesale, and I am not entirely sure it's complete. If you find this set of roles incomplete, I would love to hear how else you use LLMs in your work!

The Roles AI Can Play at Work

1. Search engine

“Find a fact or source for me.”

Here, you treat the model like a conversational lookup tool, much as you might type a query into Google. The goal is retrieval, not creativity.

Examples:

  • “What year was the telephone invented?”

  • “What’s the capital of Mongolia?”

  • “I’m considering analyzing this dataset using X to detect Y. Has anyone done something similar? What did they call it?”

  • “How much does it cost to take the Orient Express?”

For what it’s worth, this isn’t what LLM chatbots excel at. They can easily rely on unchecked and unacknowledged assumptions (assuming, say, that you mean the ticket price of the train, not including flights and hotels on either side), and they can confuse old information with new (pulling ticket costs from a five-year-old forum post instead of the up-to-date website).

Request a list of assumptions and linked sources so it’s easier for you to double-check these responses.

2. Oracle

“Tell me something about the nature of the world or predict the future.”

This role assumes the model has absorbed deep patterns that reveal hidden truths about people, systems, or the future.

Examples:

  • “What really makes startups succeed?”

  • “How will AI change higher education?”

  • “Generate an image of California where each county is made of its funniest stereotype.”

This use often overestimates what pattern-matching can deliver: these chatbots do not know anything; they are guessing what you want to hear next. Still, in low-stakes or playful contexts, it can be entertaining or creatively useful.

3. Assistant

“Do this bounded task for me.”

You hand the model a clearly scoped job that sits inside a larger human goal.

Examples:

  • “Draft a polite follow-up email.”

  • “Summarize this report.”

  • “Rewrite this paragraph for clarity.”

You of course put on your critical-thinking cap when reading the output, but for the most part, the LLM is responsible for the task. Success in this role means the task gets done adequately, not perfectly.

4. Thought partner

“Think this through with me.”

Here, the value comes from dialogue, not correctness. You use the model to surface alternatives, poke at gaps, or escape your usual mental grooves.

Examples:

  • “I think my problem is X. Ask me questions to check whether that’s really the problem, or whether I’m confusing symptoms with causes.”

  • “List the main trade-offs involved in this decision. In addition to costs and benefits, note who benefits and who bears the cost.”

  • “Here’s my plan. List the assumptions it relies on. For each assumption, tell me what would change if it were false.”

You’re using the LLM to get unstuck, zoom out, or gain another perspective on your own thoughts and ideas. You don’t use the text it provides, only the ideas. This works best when you go back and forth a few times and open a new chat to ask the same question in a different way: “more chat, less bot,” as I think I heard someone say once.

5. Adversary

“Challenge this.”

You ask the model to argue against you.

Examples:

  • “What am I missing?”

  • “What are the strongest objections to this proposal?”

  • “List every weakness in this argument.”

Iterate until the critique becomes shallow or silly. To cut through flattery, you can ask the model to be direct and critical with you, but beware: I can attest that it is capable of hurting one’s feelings ;) Requesting constructive criticism is a great way to get around sycophancy and align the chatbot’s incentive (“be useful to the user”) with improving your draft or idea.

6. Short-cut

“Just make this easier.”

Sometimes, you don’t need to learn. You just need the result.

If I need a refrigerator ASAP, I need one that fits the space and the budget. I do not need a review of refrigeration technology.

This role makes people nervous, and it’s often what they imagine when someone checks “yes” on an AI disclosure or thinks about student AI use. But not all short-cuts are laziness: some are proportionate responses to low-stakes needs.

7. Tutor

“Help me understand.”

This is the mirror image of the short-cut. You don’t want the answer; you want comprehension.

Examples:

  • “Explain this concept three different ways.”

  • “Walk through an example step by step.”

  • “Help me think of a salient mnemonic for this word I keep forgetting.”

  • “Give me practice problems and feedback.”

In my opinion, this is an underrated way of using AI! It’s better at helping you learn non-kinesthetic skills, like coding, algebra, or marketing concepts, than kinesthetic ones, like knitting, running, or painting.

8. Accessibility tool

“Help me process or communicate.”

Here, AI supports people in seeing, writing, organizing, or expressing information.

Examples:

  • Describing images for someone with low vision.

  • Helping someone with dyslexia draft professional emails.

  • Breaking tasks into steps for someone with executive-function challenges.

9. Translator

“Adapt this for a different audience.”

Translation here goes beyond language.

Examples:

  • Turning technical documentation into plain-language guidance.

  • Adapting materials written for adults to be useful for younger readers.

  • Converting a formal proposal into talking points.

10. Coach

“Help me do better.”

Models can role-play conversations, offer encouragement, or help reframe situations. I built a tightly scoped tool for myself—Bright Shift—that nudges me toward more constructive interpretations when I start to get negative and pessimistic.

This can be genuinely useful. It can also drift into flattery, parasocial relationships, or unlicensed therapy. Stay critical and stay in the lead.

In Summary

There are a lot of ways to use AI beyond drafting final products, and naming the roles AI played can help your collaborators and audience interpret your work.

If we want better conversations than checkbox disclosures allow, we need better language. This is one attempt to offer it, but I am not sure it’s complete. Please let me know what roles AI takes in your work!
