Why Most Prompts Underperform
This lesson assumes you've completed AI Essentials Part 3 and are familiar with the basic CRAFT framework and iterative conversation. We're going deeper — much deeper.
Most people who use AI regularly are operating at roughly 20% of its potential. Not because the tool is limited — but because their prompts are. The gap between a basic prompt and a sophisticated one is not technical. It's a matter of understanding how AI actually processes your input, and then using that understanding deliberately.
This lesson is a diagnostic. We're going to look at why common prompts produce mediocre results, and exactly what structural changes produce dramatically better ones.
How AI actually reads your prompt
To write better prompts, you first need to understand what happens when AI receives one. When you send a message, the model doesn't read it the way a human does — scanning for meaning and inferring intent. It processes every token (roughly every word or word fragment) and builds a probability distribution over what should come next. The entire prompt shapes that distribution.
This means three things that most people get wrong:
- Early words carry disproportionate weight. What you put at the start of a prompt sets the frame for everything that follows. A prompt beginning "List five ways to..." will produce a list even if you later say "in conversational prose please." The instruction to list has already set the frame.
- Ambiguity gets resolved by assumption. Wherever your prompt is vague, AI makes the most statistically likely assumption. Those assumptions are often not what you wanted.
- Context accumulates within a conversation. Every message you've sent shapes subsequent responses. A conversation that starts casually will continue casually even if you later become more formal — unless you explicitly reset the frame.
The seven most common prompting mistakes
1. Starting with the task instead of the context
The most common mistake. Most people jump straight to what they want ("Write me a cover letter") without giving AI the context it needs to calibrate ("I'm applying for a senior product manager role at a Series B fintech startup..."). Context is not preamble — it is the instruction that determines everything else.
2. Using vague evaluation criteria
"Make it better." Better how? More concise? More engaging? Less formal? More specific? Without criteria, AI guesses — and its guess may be the opposite of what you wanted. Always specify what "better" means in your context.
3. Asking for everything at once
Packing multiple distinct tasks into one prompt produces one mediocre response rather than several excellent ones. "Summarise this, identify the key themes, suggest three follow-up questions, and rewrite the conclusion" is four tasks. Break it into four prompts — or at minimum, number your requests clearly.
4. Forgetting the audience
AI writes for the most generic possible reader unless you specify otherwise. "Explain machine learning" produces something different if you add "to a software engineer switching careers" versus "to a 60-year-old GP with no technical background." The audience specification is one of the highest-leverage elements you can add to any explanatory prompt.
5. Not specifying format
AI defaults to the format that appeared most frequently in its training data for similar requests. For most topics, that means flowing paragraphs. If you want a table, a numbered list, a two-column comparison, or a structured document with headers — say so explicitly.
6. Accepting the first response
The first response is a draft. In AI Essentials you learned the five follow-up patterns. At Advanced level, the expectation is that you iterate on virtually every significant output — not from dissatisfaction, but as part of a deliberate process of refinement.
7. Padding instead of precision
Adding more words to a prompt is not the same as adding more useful information. A focused, well-structured 60-word prompt often outperforms a rambling 200-word one. The quality of context matters more than the quantity. Cut anything that doesn't add signal.
When a response misses the mark, ask: what did I not tell it? What did it have to assume? Which of the seven mistakes above did I make? Answering this question precisely — rather than just rewriting the prompt hoping for better luck — is what builds real prompting skill.
The transformation framework
Here is the same request at three levels of prompting skill:
Basic: "Help me write a speech."
Intermediate: "Write a 3-minute best man speech for my friend Tom's wedding. Tom is funny and loves football. Warm and humorous tone."
Advanced: You are an experienced speechwriter who specialises in wedding speeches. I'm the best man at my best friend Tom's wedding on Saturday. Tom and I have been friends for 22 years — we met at university. Key things about Tom: fiercely competitive at everything (especially five-a-side football, which he's terrible at), incredibly loyal, has a habit of being 20 minutes late to everything except this wedding, and met his partner Sarah at a disastrous camping trip I organised. The wedding is relaxed — 80 guests, outdoor venue, afternoon ceremony. Sarah's family are quite traditional; Tom's are not. The speech should be 3.5 minutes when read at a comfortable pace. Structure: warm opening that establishes our friendship, two or three stories that build to a genuine moment, end on something that makes the room go quiet before the laugh. Avoid clichés like "dictionary definition of friendship." Do not start with "Ladies and gentlemen." The speech should sound like me — self-deprecating, dry British humour — not like a generic template.
The advanced prompt doesn't just give more information — it gives the right information, organised in a way that lets AI make no wrong assumptions. Every element earns its place: the speechwriter role, the specific relationship history, the character details, the audience composition, the structural brief, and the negative constraints.
Pick a prompt you use regularly — something you've settled into a habit of writing in a particular way. Run the diagnostic: which of the seven mistakes above does it make? Add the missing elements and run both versions. The improvement will be immediate and significant.
AI processes prompts structurally, not like a human reader. Early words set frames. Ambiguity gets resolved by assumption. The seven most common mistakes: wrong order, vague criteria, too many tasks, no audience, no format, no iteration, padding over precision. The diagnostic question — what did I not tell it? — is the foundation of advanced prompting skill.
The CRAFT Framework: Advanced Application
In AI Essentials you learned the CRAFT framework as a foundation — Context, Role, Action, Format, Tone. At Advanced level, we go beyond the basics. This lesson covers CRAFT stacking, negative constraints, and chain-of-thought prompting: techniques that produce outputs most people never achieve.
CRAFT stacking
Stacking means layering multiple CRAFT elements deliberately rather than hoping the AI infers them. A basic prompt uses one or two elements. An advanced prompt uses all five, each reinforcing the others.
You are a senior product manager with 12 years' experience in B2B SaaS (Role). I am preparing a board presentation about our Q3 churn rate increase from 2.1% to 3.4% (Context). Identify the three most likely root causes and a one-paragraph action plan for each (Action). Present as three numbered sections, each with: root cause, evidence I should gather, and recommended action (Format). Direct, analytical, no corporate hedging — assume the board has seen the data (Tone).
Negative constraints
Negative constraints tell AI what NOT to do. They are often more powerful than positive instructions because they eliminate the most likely failure modes. AI defaults to the most statistically average output — negative constraints prevent that.
- Do not use bullet points. Forces prose when you need flowing argument.
- Do not mention [topic]. Prevents AI from taking an easy but unhelpful route.
- Do not start with a summary. Forces the argument before the conclusion.
- Avoid the word "leverage". Eliminates business jargon you hate.
Add negative constraints at the end of your prompt, after your positive instructions. Two or three well-chosen constraints often improve output quality more than doubling the length of your positive instructions.
Chain-of-thought prompting
Chain-of-thought prompting asks AI to reason through a problem step by step before giving a final answer. This dramatically improves accuracy on analytical tasks, because it forces the model to process each step rather than pattern-matching to a plausible-sounding conclusion.
I need to decide whether to hire a freelancer or bring this work in-house. Think through this step by step: first, list the true costs of each option (time, money, quality risk, management overhead). Second, identify what I would need to know to make this decision confidently. Third, flag any assumptions I might be making that could be wrong. Only then give me your recommendation.
Take a prompt you use regularly and add three negative constraints. Run both versions and compare. You will almost certainly prefer the constrained version.
CRAFT stacking layers all five elements deliberately. Negative constraints eliminate failure modes more efficiently than longer positive instructions. Chain-of-thought prompting forces step-by-step reasoning before conclusions — use it for any analytical or decision task.
Role Prompting at a Professional Level
In AI Essentials you learned that giving AI a role transforms the quality of its responses. At Advanced level, we go further — building rich, layered expert personas, running multi-role conversations, and using roles to unlock capabilities most users never access.
Building expert personas beyond "act as a doctor"
Vague roles produce generic expert responses. Rich persona construction produces genuinely specialist output. The difference is specificity at every layer.
"Act as a marketing expert and review my campaign strategy."
"You are a brand strategist with 15 years of experience at mid-size creative agencies in London. You have led campaigns for FMCG brands transitioning to digital-first marketing, and you have particular expertise in brand positioning for challenger brands entering established markets. You are direct, commercially focused, and not afraid to challenge assumptions. Review my campaign strategy below."
The rich persona specifies: years of experience, sector context, specialist expertise, personality characteristics, and communication style. Each element shapes the response in a distinct way.
Multi-role conversations
One of the most powerful advanced techniques: getting two distinct expert perspectives in one conversation, without starting a new session. This is invaluable for decisions where you want to hear from multiple types of expertise.
I want you to respond to my business plan as two different people, clearly labelled.
Person A: A seasoned venture capitalist who has invested in 40+ startups and is commercially focused, direct, and sceptical by default. They prioritise market size, defensibility, and founder capability.
Person B: A domain expert in [your industry] with 20 years of operational experience. They are knowledgeable about the practical realities of building in this space and will identify things the VC might miss.
Here is my business plan: [paste plan]
Please give me both perspectives clearly, including where they agree and where they diverge.
The devil's advocate role
AI naturally tends toward agreeableness and positive reinforcement. The devil's advocate role deliberately inverts this, and it's one of the most practically useful things you can do with an advanced AI setup.
Act as a highly intelligent, deeply experienced devil's advocate. Your job is not to be contrarian for its own sake, but to find the strongest possible case against the position I'm about to share. Don't just list objections — build the best counter-argument you can, as if you were trying to persuade a room of intelligent people that I'm wrong. When you've made the strongest possible case against my position, tell me what you think the three most important weaknesses in your own argument are.
My position: [your idea or plan]
Notice the final instruction: asking for the weaknesses in the devil's advocate's own argument. This is a meta-move that produces extraordinary intellectual depth — you get the best case against your idea and then a balanced assessment of that case, in one prompt.
When role prompting misleads
Roles are powerful — but they have failure modes worth knowing:
- Over-specific historical roles. Asking AI to "think like Winston Churchill" or "respond as Aristotle" produces plausible-sounding output that may reflect AI's training data biases rather than genuine historical thinking. Use with caution for anything where accuracy matters.
- Roles that create false confidence. "Act as a doctor and give me a diagnosis" still produces AI output, not medical advice. The role changes the framing, not the fundamental nature of what AI can reliably do.
- Roles that conflict with the task. "Act as a harsh critic but write something encouraging" creates an internal contradiction that produces muddled output. Keep role and task aligned.
Rich persona construction — specific experience, sector, expertise, personality — produces genuinely specialist output. Multi-role conversations give you multiple expert perspectives in one session. The devil's advocate role, combined with asking for weaknesses in its own argument, produces exceptional intellectual depth. Be aware of the failure modes: historical roles, false clinical confidence, and conflicting role-task pairs.
Prompt Chaining & Multi-Step Workflows
The most powerful AI work doesn't happen in a single prompt. It happens across a carefully designed sequence — where each output becomes the input for the next step, and the final result is something no single prompt could have produced. This is prompt chaining, and it's what separates casual AI use from genuinely sophisticated AI work.
What prompt chaining is and why it works
A prompt chain is a deliberate sequence of prompts where the output of each step feeds into the next. Rather than asking AI to do everything at once — which produces mediocre results across all dimensions — you break complex tasks into discrete steps, each optimised for a specific sub-task.
This works because complex tasks have distinct sub-components that benefit from different approaches. Research and synthesis require different framing from writing and editing. Analysis requires different framing from persuasion. Keeping them separate lets you be more precise at each step.
Before starting a chain, map out the steps: what comes first, what depends on what, and what the final output should look like. A five-minute planning step at the start produces dramatically better chains than diving straight in.
A five-step prompt chain: writing a thought leadership article
Here's a complete prompt chain for producing a high-quality 800-word article on a specialist topic — the kind of thing that would take a skilled writer 2–3 hours, done in 20 minutes with consistent quality.
Step 1 (research): "You are an expert in [topic]. I want to write a thought leadership article for [audience] about [subject]. What are the three most counterintuitive or underexplored angles on this topic? For each one, give me the core argument and two or three pieces of supporting evidence or examples."
Step 2 (outline): "I want to use angle [X] from your suggestions. Build a detailed outline for an 800-word article: working title, opening hook, three main sections with key points for each, and a closing that drives action. The reader should be able to see the argument of the whole article from the outline alone."
Step 3 (draft): "Write the full 800-word article based on this outline. Tone: [specify]. Voice: [first/third person, conversational/formal]. Write it as if the byline belongs to someone who has lived this topic, not someone who has researched it. No generic openings."
Step 4 (review): "Now act as a sharp editor at [relevant publication]. Read the draft above critically. What are the three weakest parts? Where does the argument lose energy? Where does it feel generic? What would you cut and why?"
Step 5 (revise): "Apply the editorial feedback above to produce a tightened, improved version of the article. Where the editor suggested cuts, cut. Where the editor identified weakness, strengthen. The word count can go below 800 if the piece is stronger for it."
Feeding outputs back in
One of the most powerful chaining techniques is explicitly referencing earlier outputs in later prompts. This keeps the context coherent and lets you build on previous steps precisely.
"Using the persona you built in step 2 and the key messages from step 3, now write the email sequence. Each email should reference the core tension identified in the positioning — but approach it from a different angle."
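If you ever drive a model from a script rather than a chat window, the same feed-forward pattern can be sketched in a few lines. This is a hypothetical illustration, not a real API: `ask` is a stand-in for whatever model call you actually use, and here it simply echoes its prompt so the example runs on its own.

```python
# Minimal sketch of a prompt chain. `ask` is a placeholder for a real
# model call — here it just echoes, so the example is self-contained.
def ask(prompt: str) -> str:
    return f"<response to: {prompt}>"

# Step 1: research — surface angles on the topic.
angles = ask("List three counterintuitive angles on remote onboarding.")

# Step 2: outline — the research output is interpolated into the next prompt.
outline = ask(f"Using these angles:\n{angles}\nBuild an 800-word article outline.")

# Step 3: draft — the outline output is fed forward in turn.
draft = ask(f"Write the article from this outline:\n{outline}")
```

The point is only the shape: each step's output is pasted explicitly into the next step's prompt, exactly as you would do by hand in a conversation.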
When to chain vs when to use a single prompt
- Chain when: the task has distinct sub-components, quality matters more than speed, you want to review and steer at each stage, or the output from one step meaningfully changes what the next step should do.
- Single prompt when: the task is well-defined and bounded, speed matters, the output will be used as a starting point for human editing anyway, or the task is genuinely simple.
Prompt chains break complex tasks into discrete steps, each optimised for a specific sub-task. The five-step article chain — research, outline, draft, review, revise — produces consistently higher quality than a single "write me an article" prompt. Design chains before starting them. Feed outputs explicitly back in. Use chains when quality matters; single prompts when speed does.
Building Your Personal Prompt Library
The most productive AI users don't start from scratch every time. They have a curated collection of tested, refined prompts for their most common tasks — what we'll call a personal prompt library. This lesson covers building yours: how to structure it, how to turn great one-off prompts into reusable templates, and how to maintain it so it compounds in value over time.
Why a prompt library matters
A great prompt doesn't just produce a great response — it encodes the understanding that went into writing it. Every time you refine a prompt through iteration, you're building institutional knowledge about how to get what you need from AI. Without a library, that knowledge disappears after each conversation. With one, it accumulates.
The practical impact is significant. Once you have a prompt library for your core tasks, your AI sessions become dramatically faster and more consistent. You stop spending time on prompt setup and spend more time on the actual work.
How to structure your library
The simplest effective structure is a document (Notion, Apple Notes, Google Docs — whatever you actually use) with sections by use case:
- Writing: emails, reports, proposals, content
- Research: topic exploration, document analysis, fact-checking
- Thinking: decision support, brainstorming, critique
- Learning: tutoring setups, explanation patterns, practice exercises
- Personal: anything specific to your life, work, or interests
Each entry should have: a title, the full prompt text with clear placeholders marked in [BRACKETS], a note on what it's for and when to use it, and the date last updated.
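If your library lives somewhere scriptable, the [BRACKETS] convention is simple enough to automate. A minimal sketch under that assumption — the entry fields and the `fill` helper are illustrative, not a prescribed schema:

```python
import re

# One library entry, using the [BRACKETS] placeholder convention above.
entry = {
    "title": "Expert critique of [document type]",
    "prompt": "You are a senior [ROLE] with deep expertise in [DOMAIN]. "
              "Review the [DOCUMENT TYPE] below as its most demanding reader.",
    "last_updated": "2024-01-01",
}

def fill(template: str, values: dict) -> str:
    """Substitute [PLACEHOLDER] tokens; complain about any left unfilled."""
    result = template
    for name, value in values.items():
        result = result.replace(f"[{name}]", value)
    leftover = re.findall(r"\[([A-Z ]+)\]", result)
    if leftover:
        raise ValueError(f"Unfilled placeholders: {leftover}")
    return result

ready = fill(entry["prompt"], {
    "ROLE": "partner at a strategy consultancy",
    "DOMAIN": "B2B SaaS",
    "DOCUMENT TYPE": "board memo",
})
```

The error on unfilled placeholders is the useful part: it catches a half-filled template before you paste it into a conversation.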
Turning one-off prompts into reusable templates
When a conversation produces an excellent response, work backwards: what about the prompt made it work? Extract the structure, replace the specific details with placeholders, and save the template.
TITLE: Expert critique of [document type]
USE WHEN: You want rigorous critical feedback on a document before sharing it externally
PROMPT:
You are a senior [ROLE — e.g. 'partner at a top-tier strategy consultancy'] with deep expertise in [DOMAIN]. You are known for being direct, commercially sharp, and having high standards for written work.
Review the [DOCUMENT TYPE] below as if you were the most demanding reader it will face. Your job is not to be kind — it's to find every weakness before someone else does. Specifically:
1. What is the central argument, and how well does the document make it?
2. Where does it lose credibility, specificity, or momentum?
3. What would a sceptical [AUDIENCE — e.g. 'board member / investor / client'] push back on?
4. What three changes would have the highest impact on the document's effectiveness?
[PASTE DOCUMENT]
LAST UPDATED: [date]
The weekly prompt review
A prompt library gets better through deliberate maintenance. Once a week, spend five minutes:
- Adding any new prompts that worked particularly well this week
- Updating existing prompts where you've found improvements
- Removing prompts you haven't used in two months
- Noting any prompt that consistently underdelivers — it needs rewriting
This habit takes five minutes a week and produces compounding returns. In three months, your prompt library will be one of your most valuable personal productivity assets.
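If you keep the "date last updated" field in a machine-readable form, even the two-month check can be scripted. A small sketch under that assumption — the sample entries and the `stale_entries` helper are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical library entries carrying the "date last updated" field
# (ISO format here purely so it parses easily).
library = [
    {"title": "Expert critique", "last_updated": "2024-01-05"},
    {"title": "Weekly summary email", "last_updated": "2023-10-01"},
]

def stale_entries(entries, today, max_age_days=60):
    """Return titles of entries untouched for roughly two months."""
    cutoff = today - timedelta(days=max_age_days)
    return [e["title"] for e in entries
            if date.fromisoformat(e["last_updated"]) < cutoff]

flagged = stale_entries(library, today=date(2024, 1, 20))
```

Anything the check flags is a candidate for removal or a rewrite at the next weekly review.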
Before moving to the Part 1 exercise, create your prompt library document — even if it's just a blank page with five category headings. The Part 1 exercise will populate it with your first five prompts. Starting the document now makes the habit real.
A personal prompt library accumulates the knowledge invested in refining your prompts over time. Structure by use case. Each entry needs a title, full text with placeholders, purpose note, and date. Turn great one-off prompts into reusable templates by extracting the structure. Maintain it with a five-minute weekly review. The compounding effect over months is substantial.
Part 1 Exercise: Rewrite and Build
This is your Part 1 exercise lesson. You have covered the core advanced prompting techniques — the prompt diagnostic, CRAFT stacking, professional persona building, prompt chaining, and building a prompt library. Now you apply them.
Your exercise
Take five prompts you use regularly — from work, from daily life, from anything you actually do. For each one:
1. Run your original prompt and save the output.
2. Rewrite it using full CRAFT stacking — all five elements, deliberately.
3. Add at least two negative constraints.
4. Run the new prompt and compare the outputs.
5. Save the best version to your prompt library with a note on what made the difference.
This exercise should take 45–60 minutes. The goal is not perfection on the first rewrite — it is to build the muscle memory of thinking through all five CRAFT elements before you hit send.
What good looks like
A good Part 1 exercise submission has five clearly improved prompts in a prompt library, each with a short note explaining what changed and why the new version works better. The note is as important as the prompt — it shows you understand the principle, not just the output.
Mark this lesson complete to unlock your Part 1 completion. Your prompt library is the foundation of everything that follows — the more carefully you build it now, the more it compounds across the rest of the course.