Most prompts are fine for quick answers. But when you ask AI to craft a full research brief, refactor a codebase, or design a multi-step learning plan, the wheels can wobble. The model might produce something decent once, but then drift, skip constraints, or hallucinate details. That inconsistency costs you time and trust.
Advanced prompt patterns fix that. Think of them as recipe cards for AI: they structure the ingredients (context), method (steps), and plating (output format) so you get repeatable results. In this post, you will learn a library of patterns and get plug-and-play templates that work across ChatGPT, Claude, and Gemini.
Whether you are building internal workflows or just trying to level up your daily prompting, these patterns help you decompose, direct, and verify complex work without becoming an AI whisperer.
Why complex tasks need patterns
Complex tasks have three features that trip up AI:
- Ambiguity: goals, audience, and constraints are fuzzy.
- Multi-step reasoning: the model must plan, not just answer.
- Evaluation: quality depends on a rubric you might not have stated.
Patterns reduce ambiguity, force planning, and make evaluation explicit. The result is more consistent outputs and faster iteration.
A reusable base frame you can apply anywhere
Before you layer on advanced patterns, start with a solid base brief. Use ROADS: Role, Objective, Audience, Deliverables, Standards.
The ROADS base frame
- Role: Who should the AI act as?
- Objective: What is the single clear goal?
- Audience: Who will consume the output?
- Deliverables: What format, sections, or fields are required?
- Standards: Constraints, style, length, citations, or tools to use.
Example (marketing brief):
- Role: You are a senior B2B content strategist.
- Objective: Create a product launch brief for a new analytics feature.
- Audience: Sales and marketing managers.
- Deliverables: 1-page brief with problem, solution, ICP (ideal customer profile), key messages, objections, CTA.
- Standards: Plain English, no hype, include 3 customer-proof points, 600-800 words.
You can paste ROADS ahead of any pattern below to anchor the work.
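If you reuse briefs often, it can help to keep them as data rather than retyping them. Here is a minimal sketch of a helper that renders a ROADS brief as plain text; the function name and field order are my own convention, not part of the pattern itself.

```python
def roads_brief(role: str, objective: str, audience: str,
                deliverables: str, standards: str) -> str:
    """Render a ROADS brief as plain text, ready to paste ahead of any pattern."""
    return "\n".join([
        f"Role: {role}",
        f"Objective: {objective}",
        f"Audience: {audience}",
        f"Deliverables: {deliverables}",
        f"Standards: {standards}",
    ])

# Example: the marketing brief from above, assembled programmatically.
brief = roads_brief(
    role="You are a senior B2B content strategist.",
    objective="Create a product launch brief for a new analytics feature.",
    audience="Sales and marketing managers.",
    deliverables="1-page brief with problem, solution, ICP, key messages, objections, CTA.",
    standards="Plain English, no hype, include 3 customer-proof points, 600-800 words.",
)
```

Storing briefs this way makes them easy to version-control and to prepend to any of the patterns below.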
The pattern library: templates for complex tasks
Use these seven patterns individually or combine them. Each includes when to use, a template, and a quick example.
- Plan-Then-Act (Decompose-then-Do)
- Use when: A task has multiple steps or dependencies.
- Template:
- First, list the steps you will take to complete the Objective. Keep steps concise.
- Ask me to confirm the plan before proceeding.
- After confirmation, execute step 1 and stop. Wait for my feedback before continuing.
- Example: You need a GDPR-compliant data retention policy.
- The AI outlines steps: gather requirements, draft sections, legal review checklist.
- You confirm and iterate step by step, reducing rework.
- Deliberate Options + Decision
- Use when: You want diversity of approaches before committing.
- Template:
- Produce 3 distinct options that vary meaningfully on strategy, tone, or structure.
- For each: list pros, cons, and ideal use-case.
- Recommend one and explain why in 2 sentences.
- Example: For an onboarding email sequence, get three styles (educational, product-led, story-led), then pick the best for a developer audience.
- Rubric-First Evaluator
- Use when: Quality is subjective and you need a fair score.
- Template:
- Create a 5-criteria rubric aligned to the Objective and Audience. Weight each criterion to total 100%.
- Score the draft against the rubric with 1-2 sentence justifications per criterion.
- Propose the top 3 changes that would most raise the score.
- Example: Have Claude evaluate a technical blog draft for accuracy, clarity, and structure, then revise based on the highest-impact suggestions.
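The arithmetic behind a weighted rubric is simple enough to sanity-check yourself. A minimal sketch, where the criteria names, weights, and scores are all illustrative:

```python
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10) into one weighted total."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    assert scores.keys() == weights.keys(), "every criterion needs a weight"
    return sum(scores[c] * weights[c] for c in scores)

# Illustrative rubric for a technical blog draft; weights sum to 100%.
rubric = {"accuracy": 0.30, "clarity": 0.25, "structure": 0.20,
          "audience_fit": 0.15, "actionability": 0.10}
draft = {"accuracy": 8, "clarity": 6, "structure": 7,
         "audience_fit": 9, "actionability": 5}
score = weighted_score(draft, rubric)  # about 7.15 on a 0-10 scale
```

Because clarity carries the second-highest weight here, the "top 3 changes" the model proposes should target it first: that is how the rubric makes revision priorities explicit.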
- JSON/Form-Locked Output
- Use when: You need structured, parseable results.
- Template:
- Output only valid JSON matching this schema:
- title (string)
- summary (string)
- sections (array of objects: heading, key_points[])
- risks (array of strings)
- Do not include extra fields, commentary, or Markdown.
- Example: With Gemini, collect feature requests into a uniform JSON list ready for your backlog tool.
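Form-locked output is only as good as the check you run on it. Here is a sketch of validating a model reply against the schema above before it reaches your backlog tool; the reply string is mocked, and the validation rules mirror the template's "no extra fields" requirement.

```python
import json

# Required top-level fields and their expected types, matching the schema above.
REQUIRED = {"title": str, "summary": str, "sections": list, "risks": list}

def validate_reply(raw: str) -> dict:
    """Parse the model's reply and reject extra fields or wrong types."""
    data = json.loads(raw)  # raises ValueError if commentary snuck into the output
    extra = set(data) - set(REQUIRED)
    if extra:
        raise ValueError(f"unexpected fields: {sorted(extra)}")
    for field, typ in REQUIRED.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

# A mocked model reply, as it should arrive when the pattern works.
reply = '{"title": "Export API", "summary": "Adds CSV export.", "sections": [], "risks": []}'
ticket = validate_reply(reply)
```

Failing loudly on extra fields or commentary is deliberate: it surfaces drift immediately instead of letting malformed records pile up downstream.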
- Critic-and-Revise Loop
- Use when: You want the model to self-improve before you review.
- Template:
- Draft the deliverable.
- Adopt the role of a tough reviewer and list 5 pointed critiques.
- Apply those critiques and provide a revised version, noting what changed.
- Example: For a slide outline, ChatGPT drafts, critiques its messaging, and delivers a tighter second draft.
- Source-Grounded Answer
- Use when: Facts and citations matter.
- Template:
- Only use facts from the provided sources.
- For each claim, append an inline citation like [S1], [S2].
- Include a references list mapping [S#] to the source.
- If a fact is not in the sources, say ‘Not found in sources’.
- Example: Summarize a research packet into an executive memo with explicit citations and a short ‘unknowns’ section.
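You can mechanically verify that every inline citation maps to the references list, which catches a common failure mode of this pattern: citations to sources that were never provided. A sketch, with invented memo text:

```python
import re

def unmapped_citations(draft: str, references: dict[str, str]) -> set[str]:
    """Return citation labels (e.g. 'S3') used in the draft but absent from references."""
    cited = set(re.findall(r"\[(S\d+)\]", draft))
    return cited - set(references)

memo = "Revenue grew 40% in Q3 [S1]. Churn fell after the pricing change [S3]."
refs = {"S1": "Q3 board deck", "S2": "Pricing study"}
# unmapped_citations(memo, refs) -> {'S3'}
```

A non-empty result means the model cited a source it was never given, which is exactly the hallucination this pattern is designed to prevent.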
- Few-Shot I/O Pattern
- Use when: You want consistent style or transformations.
- Template:
- Provide 2-3 input-output examples that show the exact mapping.
- Then give a ‘Now do:’ input and require the same style and structure.
- Example: Teach the AI how you write product changelogs by showing before/after pairs, then feed new raw notes.
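Assembling a few-shot prompt is mostly string plumbing, so it is worth automating once you have your example pairs. A sketch, where the `Input:`/`Output:`/`Now do:` labels follow the template above and the changelog pairs are invented:

```python
def few_shot_prompt(pairs: list[tuple[str, str]], new_input: str) -> str:
    """Interleave input/output examples, then append the 'Now do:' request."""
    parts = [f"Input: {raw}\nOutput: {polished}" for raw, polished in pairs]
    parts.append(f"Now do:\nInput: {new_input}\nOutput:")
    return "\n\n".join(parts)

# Invented before/after changelog pairs that teach the desired style.
pairs = [
    ("fixed auth bug", "Fixed: sign-in no longer fails after a password reset."),
    ("add csv export", "Added: export any report to CSV from the share menu."),
]
prompt = few_shot_prompt(pairs, "dark mode toggle in settings")
```

Ending the prompt on a bare `Output:` nudges the model to complete the mapping in the same style, rather than commenting on the examples.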
Tip: Combine patterns. For instance, use ROADS + Plan-Then-Act + JSON output to get a stepwise plan that returns a clean, structured artifact at each step.
Real-world scenarios that benefit immediately
- Policy drafting in regulated industries:
- Use ROADS to define scope and standards.
- Use Source-Grounded Answer with your policy PDFs to avoid hallucinations.
- Finish with Rubric-First Evaluator to score compliance and clarity.
- Codebase migration or refactoring:
- Plan-Then-Act to map the migration path.
- Few-Shot I/O to show preferred code style or patterns.
- JSON/Form-Locked Output to produce issue tickets per module.
- Spreadsheet formula troubleshooting:
- Deliberate Options + Decision to suggest 3 formulas with trade-offs.
- Critic-and-Revise Loop to simplify the chosen formula and add edge-case tests.
- Learning design:
- ROADS to define learner level and outcomes.
- Plan-Then-Act to build a module outline.
- Rubric-First Evaluator to validate assessments align to outcomes.
These are all tasks where ChatGPT, Claude, and Gemini shine when you constrain the brief and structure the process.
Tool-specific notes: ChatGPT, Claude, Gemini
All three tools can use these patterns, but they have different strengths:
- ChatGPT: Strong generalist with a broad tool ecosystem and structured-output modes. Great for JSON/Form-Locked Output and Critic-and-Revise loops. Use the ‘structured output’ capability when available to enforce schemas.
- Claude: Excellent at long-context work and careful, nuanced reasoning. Ideal for Rubric-First Evaluator and Source-Grounded Answer across lengthy documents.
- Gemini: Solid at synthesis and multimodal contexts. Helpful for Few-Shot I/O where you may attach images or varied inputs, and for generating clean, constrained summaries.
Practical tip: If you hit a limitation in one tool (e.g., context size), switch. You can even use the Deliberate Options pattern to compare outputs across tools and pick the winner.
Guardrails and troubleshooting
Even with patterns, complex tasks can drift. Here is how to stay on track:
- Define success explicitly: Add a ‘Success looks like:’ line with 3 bullet criteria. This becomes the model’s north star.
- Timebox and scope: State what is intentionally out of scope to prevent overreach.
- Require verification: For factual work, require citations or ‘Not found in sources’ responses.
- Reduce surface area: Use JSON or bullet lists for drafts; prose comes last.
- Iterate with short loops: Plan-Then-Act with step stops catches mistakes early.
- Protect sensitive data: Remove or mask personal or confidential information before sharing. Do not paste secrets or production keys.
If outputs are inconsistent:
- Add a Few-Shot I/O example to anchor style.
- Tighten Deliverables and Standards in ROADS.
- Use Rubric-First Evaluator to make quality measurable.
- Nudge the model with ‘Show your plan first’ to improve coherence.
End-to-end example: product launch brief in 20 minutes
Imagine you need a clear, factual brief for a new analytics feature.
- Start with ROADS:
- Role: You are a senior B2B content strategist.
- Objective: Produce a 1-page launch brief.
- Audience: Sales and marketing managers.
- Deliverables: Problem, solution, ICP, key messages, objections, CTA.
- Standards: Plain English, include 3 customer-proof points, 600-800 words.
- Add Plan-Then-Act:
- Ask the AI to list steps (research, outline, draft, review) and wait for your confirmation after the plan.
- Layer Source-Grounded Answer:
- Provide 3 internal docs as sources and require [S1]/[S2]/[S3] citations and a references list.
- Finish with Critic-and-Revise:
- Have the AI critique its draft for clarity and specificity, then deliver a tighter version.
You can run this in ChatGPT or Claude. If you need structured outputs for your CMS, add JSON/Form-Locked Output and map fields directly.
Conclusion: make patterns your default
Complex tasks stop being unpredictable when you brief, structure, and verify. Use ROADS to set the stage, then apply the right pattern to decompose, diversify options, enforce structure, or ground in sources. With a few reusable templates, you will spend less time fixing outputs and more time shipping quality work.
Next steps:
- Save the ROADS frame and at least 3 patterns (Plan-Then-Act, Rubric-First, JSON Output) as prompt snippets in your AI tool of choice.
- Pick one recurring task this week (e.g., a policy draft, or turning meeting summaries into action items) and run it through ROADS + one pattern.
- Create a simple quality rubric for that task and reuse it to score outputs across ChatGPT, Claude, and Gemini to see which setup works best.