Here's something that catches almost every new AI user off guard: AI will confidently tell you something that is completely wrong.
Not vaguely wrong. Not slightly off. Sometimes completely fabricated — a statistic that doesn't exist, a quote that was never said, a scientific study that never happened — presented with the same calm, authoritative tone as everything else it tells you.
This is called a hallucination, and understanding why it happens is one of the most important things you can learn about using AI safely.
Why does AI hallucinate?
To understand hallucinations, you need to understand what AI is actually doing when it generates a response. It isn't searching a database of facts. It isn't looking things up. It's predicting the most statistically likely next word based on patterns learned from billions of pieces of text.
Most of the time, that produces accurate, useful responses — because accurate information is well-represented in the data it was trained on. But when it encounters a gap — an obscure fact, a recent event, a specific statistic — it doesn't say "I don't know." Instead, it generates what a plausible answer would look like, based on the patterns around similar topics.
Imagine someone filling in a crossword from the pattern of letters alone, without actually knowing the answers. They'd get most entries right, because patterns work most of the time — but occasionally they'd confidently write in something completely wrong, with no idea they'd done it.
The problem is compounded by the fact that AI has no awareness of what it doesn't know. It doesn't experience uncertainty the way humans do. It generates confident-sounding text regardless of whether the underlying information is accurate.
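If you're comfortable reading a small code sketch, here's the idea in miniature. The probability table and generate function below are invented purely for illustration (a real model learns probabilities over hundreds of thousands of tokens from billions of examples); the point to notice is that truth never enters the calculation.

```python
# Toy illustration of next-word prediction. The tiny hand-written
# probability table stands in for what a real model learns from data.
next_word_probs = {
    ("the", "capital", "of"): {"France": 0.6, "the": 0.2, "a": 0.2},
    ("capital", "of", "France"): {"is": 0.9, ",": 0.1},
    ("of", "France", "is"): {"Paris": 0.95, "Lyon": 0.05},
}

def generate(prompt_words, steps=3):
    words = list(prompt_words)
    for _ in range(steps):
        context = tuple(words[-3:])
        candidates = next_word_probs.get(context)
        if candidates is None:
            break
        # Pick the statistically likeliest continuation. Whether it is
        # *true* never enters the calculation -- which is why gaps in
        # the data get filled with plausible-looking inventions.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate(["the", "capital", "of"]))  # "the capital of France is Paris"
```

Swap the table's entries for something sparse or wrong and generate will still produce a fluent answer with exactly the same confidence.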
What kinds of things does AI get wrong?
Hallucinations are more common in some areas than others. The highest-risk situations:
- Specific statistics and data — exact percentages, figures, and research findings are frequently fabricated or misremembered
- Citations and references — AI will invent plausible-sounding academic papers, books, and sources that don't exist
- Quotes from real people — it will attribute statements to real named individuals that they never actually said
- Recent events — anything after its knowledge cutoff date is unknown territory where it may guess rather than admit ignorance
- Niche or obscure topics — the less well-represented a subject is in training data, the more likely the AI is to hallucinate
- Legal, medical, and financial specifics — exact rules, regulations, dosages, and thresholds are particularly prone to error
Real examples of hallucination
The invented citation
One of the most famous early examples involved a US lawyer who used ChatGPT to research case law for a court filing. ChatGPT provided several citations — case names, court references, the works — that turned out to be entirely fabricated. The cases did not exist. The lawyer submitted them to court anyway, not realising they needed to be verified.
The wrong statistic
Ask AI "what percentage of UK adults use social media?" and it will give you a confident, specific answer. That answer may be accurate — or it may be a plausible-sounding figure that doesn't match any actual research. Without checking, you can't tell which it is.
The false quote
AI will readily attribute quotes to real people — politicians, scientists, business figures — that they never actually said. The quote will sound like something they might say, which is exactly the problem.
Never use AI-generated quotes, citations, statistics, or legal/medical specifics without verifying them against a primary source. The risk isn't that AI gets things wrong occasionally — it's that it presents errors with the same confidence as correct information, making them almost impossible to spot without checking.
How to spot hallucinations
There's no reliable way to tell from the response itself whether something is accurate. The writing style, tone, and confidence level are identical whether the information is correct or fabricated. But there are practical approaches that dramatically reduce your risk:
Treat specific claims with scepticism
Broad, conceptual explanations are generally reliable. Specific facts — exact numbers, named individuals, dates, citations — are where hallucinations are most likely. Apply more scrutiny to anything specific.
Ask AI to flag its uncertainty
Add this to your prompts: "If you're uncertain about any of this, please say so clearly." This doesn't eliminate hallucinations, but it does encourage more cautious responses and occasionally surfaces uncertainty that wouldn't otherwise be expressed.
Ask for sources — then verify them independently
You can ask AI to cite its sources. But critically — don't assume those sources exist just because it names them. Search for the source independently. If it's a published paper, look it up. If it's a statistic, find the original research. If you can't find it, treat the claim as unverified.
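One concrete way to do that: if an AI-supplied citation includes a DOI, you can check whether the DOI is actually registered. Here's a minimal sketch using the public Crossref API (the doi_exists helper is our own, written for illustration):

```python
# Check whether a DOI from an AI-supplied citation is registered with
# Crossref. A 404 response strongly suggests the citation was invented.
import urllib.error
import urllib.request

def doi_exists(doi: str) -> bool:
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 or similar: the DOI isn't registered

print(doi_exists("10.1038/171737a0"))     # Watson & Crick's 1953 DNA paper -> True
print(doi_exists("10.9999/made.up.doi"))  # not registered -> False
```

Bear in mind that a DOI that resolves only proves the paper exists. You still need to open it and check it actually says what the AI claims it says.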
Cross-reference with reliable sources
For anything that matters — medical information, legal requirements, financial figures, important decisions — verify against a primary source. The NHS, GOV.UK, reputable news outlets, or the original research. Use AI to help you understand and explore, not as the final authority.
Use web-connected AI for current information
Both ChatGPT and Claude have options to search the web for current information. When enabled, this reduces (but doesn't eliminate) hallucinations about recent events, since the AI is pulling from live sources rather than generating from training data alone.
Is the hallucination problem getting better?
Yes — the newest AI models hallucinate significantly less than earlier versions, and the research community is actively working on reducing it further. But it hasn't been eliminated, and there's genuine debate about whether it can ever be completely solved given the fundamental way these models work.
The honest position: hallucination rates have improved dramatically, but you should still verify anything that matters. That's not a reason to avoid AI — it's a reason to use it intelligently.
The right mental model
The most useful way to think about AI is as a brilliant, extremely well-read assistant who occasionally misremembers things and has no awareness of when they're doing it. You'd still value their input enormously — but you'd double-check the specific facts before acting on them.
Use AI for drafting, exploring, explaining, brainstorming, and summarising. Verify the specifics. That combination — AI's speed and breadth, your judgement and verification — is where the real value is.
- AI generates plausible-sounding responses — it doesn't look up facts
- Hallucinations are most common with statistics, citations, quotes and recent events
- The writing style gives no indication of whether information is accurate
- Always verify specific claims against primary sources before acting on them
- Hallucination rates are improving but haven't been eliminated
- AI is a brilliant thinking partner — not a reliable authority on specifics