
AI Prompt Engineering Master Guide

Master prompt engineering — chain-of-thought, few-shot examples, role design, constraints, output formatting, and a systematic prompt testing framework.

Full Prompt
Act as an expert prompt engineer with deep knowledge of large language model behaviour, training paradigms, and prompt design patterns across GPT-4, Claude, Gemini, and open-source models. Teach me how to write significantly better prompts for [USE CASE — e.g. "content generation", "data extraction", "code generation", "customer support automation"].

My current AI tool: [ChatGPT / Claude / Gemini / API / Other]
My experience level with prompting: [Beginner / Intermediate / Advanced]
The main task I want to optimise: [DESCRIBE WHAT YOU WANT THE AI TO DO]
Current problem with my prompts: [too generic / too long / wrong format / inconsistent / hallucinations / etc.]

PART 1 — PROMPT ANATOMY

Break down the anatomy of a high-performance prompt. For each element, explain what it does and show a weak vs. strong example:

• Role assignment: how to write a role that actually changes model behaviour (not just "Act as a...") — what makes a role specific enough to be useful
• Context injection: how much context is too much, how to format it, and the "relevant context only" rule
• Task definition: the difference between describing what you want vs. describing the process the model should follow
• Constraints and guardrails: how to prevent the model from doing things you don't want
• Output format specification: how to get consistent, structured outputs every time
• Tone and style direction: how to shape the writing style without over-constraining creativity
• Examples (few-shot): when to include them and how many are optimal
• Chain-of-thought instructions: when "think step by step" works and when it's counterproductive
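Once each element is written, assembling the prompt is mechanical: the elements go in a stable order with clear labels. A minimal sketch (the element texts below are illustrative placeholders, not recommended wording):

```python
# Assemble the prompt-anatomy elements into one prompt string.
# Element texts here are illustrative placeholders.
ELEMENTS = [
    ("Role", "You are a senior technical editor for developer documentation."),
    ("Context", "The audience is intermediate Python developers."),
    ("Task", "Rewrite the draft below for clarity and accuracy."),
    ("Constraints", "Do not change code samples. Do not add new claims."),
    ("Output format", "Return the rewritten draft as Markdown."),
]

def build_prompt(elements, body=""):
    """Join (name, text) pairs into labelled sections, then append the input."""
    sections = [f"{name}:\n{text}" for name, text in elements]
    if body:
        sections.append(f"Input:\n{body}")
    return "\n\n".join(sections)
```

Keeping the elements as data rather than one long string makes it easy to test one element at a time, which Part 4 relies on.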

PART 2 — CORE PROMPTING TECHNIQUES

Explain and demonstrate each technique for [USE CASE]:

ZERO-SHOT PROMPTING:
• When it works: task types where the model doesn't need examples
• The 3 elements a zero-shot prompt must always have
• Example zero-shot prompt for [USE CASE] — bad version and optimised version
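As a concrete sketch of the bad-vs-optimised contrast (assuming the three essential elements are a precise task, minimal relevant context, and an explicit output format; the wording is only an illustration):

```python
# Weak: no context, no format, ambiguous scope.
weak_zero_shot = "Summarise this article."

# Stronger: precise task, minimal relevant context, explicit format.
strong_zero_shot = (
    "Task: Summarise the article below in exactly three bullet points, "
    "each under 20 words.\n"
    "Context: The summary is for busy engineering managers deciding "
    "whether to read the full piece.\n"
    "Output format: A Markdown bullet list, nothing before or after it.\n\n"
    "Article:\n{article}"
)
prompt = strong_zero_shot.format(article="...")
```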

FEW-SHOT PROMPTING:
• How many examples are optimal (and why 3 is often better than 10)
• How to format examples for maximum signal
• How to choose which examples to include (diverse, edge-case-representative)
• Write a few-shot prompt template for [USE CASE]
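A minimal few-shot template builder along these lines (consistent `Input:`/`Output:` delimiters maximise the pattern signal; the labels are a convention, not a requirement):

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt from (input, output) example pairs,
    using the same delimiters for every example and for the query
    so the completion pattern is unambiguous."""
    parts = [instruction, ""]
    for example_in, example_out in examples:
        parts += [f"Input: {example_in}", f"Output: {example_out}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)

demo = few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("Great docs, easy setup.", "positive"),
     ("Crashes on every launch.", "negative"),
     ("Support replied within an hour.", "positive")],  # diverse examples
    "The UI is fine but sync silently fails.",
)
```

Ending the prompt on a bare `Output:` is what invites the model to complete the pattern rather than chat about it.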

CHAIN-OF-THOUGHT (CoT):
• When CoT dramatically improves output quality
• How to trigger it without saying "think step by step" (more sophisticated triggers)
• Self-consistency CoT: ask for multiple reasoning paths and synthesise
• Write a CoT prompt for a complex version of [USE CASE]
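The self-consistency variant reduces to sampling several independent paths and voting on the final line. In the sketch below, `call_model` is a placeholder for whatever API call you use, assumed to return the reasoning text with the final answer on its last line:

```python
from collections import Counter

def self_consistent_answer(prompt, call_model, n_paths=5):
    """Sample n independent reasoning paths and return the most
    common final answer (self-consistency chain-of-thought)."""
    finals = []
    for _ in range(n_paths):
        reasoning = call_model(prompt)             # one sampled path
        finals.append(reasoning.strip().splitlines()[-1])
    answer, _votes = Counter(finals).most_common(1)[0]
    return answer
```

In practice each call should use a non-zero temperature, otherwise the paths will not actually differ.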

ROLE PROMPTING:
• How to write a persona that shapes the model's knowledge access, tone, and reasoning style
• The difference between a shallow role ("Act as a marketer") and a deep role (include background, constraints, goals, perspective)
• Write a deep role definition for the ideal AI assistant for [USE CASE]
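The shallow-vs-deep contrast, spelled out as strings (the persona details are invented purely for illustration):

```python
shallow_role = "Act as a marketer."

deep_role = """You are a performance-marketing lead with ten years of
experience in paid acquisition for B2B SaaS products.
Background: you think in CAC, LTV and payback period, and you have
run campaigns on search, social and niche newsletters.
Constraints: you never recommend tactics the team cannot measure,
and you flag any claim that lacks benchmark data.
Goal: grow qualified trial signups, not raw traffic.
Perspective: sceptical of vanity metrics and one-size-fits-all advice."""
```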

CONSTRAINT-FIRST DESIGN:
• Why telling the model what NOT to do is often more effective than what TO do
• How to use negative constraints without accidentally creating new problems
• Write a constraint list for [USE CASE] that prevents the most common failure modes
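A constraint block is easiest to maintain as data that gets rendered into every prompt. A sketch (the constraints shown are examples of common failure-mode guards, not a canonical list):

```python
COMMON_CONSTRAINTS = [
    "Do NOT invent facts; if information is missing, write 'unknown'.",
    "Do NOT exceed 200 words.",
    "Do NOT include any text before or after the requested output.",
    "Do NOT change the meaning of quoted source material.",
]

def constraint_block(constraints):
    """Render constraints as a labelled list appendable to any prompt."""
    return "Constraints:\n" + "\n".join(f"- {c}" for c in constraints)
```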

PART 3 — OUTPUT FORMAT MASTERY

How to get exactly the format you want:
• JSON: how to get clean, parseable JSON every time (including error prevention)
• Markdown: when to use it, how to specify headers, tables, lists exactly
• Tables: how to specify column headers, row types, and data formats
• Numbered lists vs. bullet lists: when each works better
• Length control: how to specify length in terms the model understands (characters, words, sentences, sections)

Write a "format specification block" template I can add to any prompt to lock in the output format.
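One way such a block might look for JSON output, paired with a defensive parser for the common failure of fence-wrapped responses (the schema is a stand-in; adapt the fields to your task):

```python
import json

FORMAT_SPEC = """Output format:
Return ONLY a single JSON object, with no prose before or after it,
matching this shape exactly:
{"title": string, "tags": [string], "confidence": number between 0 and 1}
Do not wrap the JSON in Markdown code fences."""

def parse_model_json(raw):
    """Strip the most common wrapper (a ```json fence) before parsing."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`").strip()
        if text.startswith("json"):
            text = text[len("json"):]
    return json.loads(text)
```

Stating the constraint in the prompt *and* parsing defensively is cheaper than retrying failed generations.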

PART 4 — PROMPT DEBUGGING FRAMEWORK

When your prompt isn't working, follow this diagnostic process:
• Step 1: Identify the failure mode (hallucination / wrong format / too generic / missed instruction / wrong tone / incomplete)
• Step 2: For each failure mode, the most likely cause and the fix
• Step 3: Isolation testing — how to test one variable at a time
• Step 4: The prompt log — how to track and compare prompt versions systematically
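The prompt log can be as small as a list of records, provided each entry names the one variable that changed since the previous version (the schema here is a minimal sketch):

```python
from dataclasses import dataclass, field

@dataclass
class PromptVersion:
    version: str
    prompt: str
    change: str        # the ONE variable changed vs. the previous version
    failures: list = field(default_factory=list)   # observed failure modes

prompt_log: list = []

def record(version, prompt, change, failures=()):
    prompt_log.append(PromptVersion(version, prompt, change, list(failures)))
```

Forcing `change` to be a single variable is what makes Step 3's isolation testing possible: if two versions differ in more than one way, the comparison tells you nothing.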

PART 5 — ADVANCED PATTERNS

• Prompt chaining: how to break complex tasks into a sequence of smaller, reliable prompts (with a worked example)
• Meta-prompting: using the AI to improve your own prompts — write the meta-prompt template
• Retrieval-augmented prompting: how to inject external knowledge into prompts effectively
• Self-critique pattern: how to make the model critique and improve its own output before you see it
• Constitutional AI approach: how to bake values and rules into prompts for consistent behaviour
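Prompt chaining, the first pattern above, reduces to threading each step's output into the next template. A minimal sketch, with `call_model` again standing in for your actual API call:

```python
def run_chain(steps, call_model, initial_input):
    """Run a sequence of prompt templates; each template receives the
    previous step's output via its {input} placeholder."""
    text = initial_input
    for template in steps:
        text = call_model(template.format(input=text))
    return text

# Example chain: extract facts, then draft, then tighten.
steps = [
    "Extract the key facts from:\n{input}",
    "Write a one-paragraph summary using only these facts:\n{input}",
    "Tighten this paragraph to under 50 words:\n{input}",
]
```

Each step can then be tested, logged and debugged in isolation, which is the whole point of chaining.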

PART 6 — PROMPT LIBRARY FOR [USE CASE]

Write 5 ready-to-use, fully optimised prompts for the most common subtasks within [USE CASE]. For each:
• The prompt (ready to copy and use)
• Variables to customise (in [BRACKETS])
• Expected output quality and what to watch for
• When to use this prompt vs. the alternatives



How to use

  1. Fill in your details above for a personalised prompt
  2. Click a platform to open it — prompt loads automatically
  3. Replace any remaining [PLACEHOLDERS] as needed
  4. Use Developer Tools on CodeBrewTools to enhance results