Part 1 · Lesson 1 of 5
⏱ 14 min read

AI in Plain English

📖 Lesson 1.1 ⏱ 14 min read 🎯 Part 1: What Is AI, Really?

You've heard the word artificial intelligence thousands of times by now. It's in the news every day, politicians talk about it, and your friends probably have opinions about it ranging from excited to alarmed. Before we do anything else, let's cut through all of that and talk about what it actually is — in plain language, with no jargon.

The simplest possible explanation

Artificial intelligence is software that has been trained to do things that used to require human intelligence. That's it. Writing, answering questions, recognising images, translating languages, making decisions — these are all things humans do by thinking. AI can now do versions of all of them.

The key word there is trained. AI doesn't think the way you and I think. It doesn't have opinions, feelings, or awareness. What it has is patterns — an enormous amount of them — absorbed from reading, processing, and analysing more text, images, and data than any human could in a thousand lifetimes.

💡
The core idea

AI learns patterns from huge amounts of data, then uses those patterns to generate responses. When you ask it a question, it's not looking up the answer — it's predicting the most useful response based on everything it has processed.

A brief, painless history

The idea of thinking machines has been around for decades — but for most of that time, AI was brittle, narrow, and disappointing. A chess-playing program from 1997 could beat the world champion but was completely useless at anything else. Early voice assistants could set a timer but couldn't hold a conversation.

What changed everything was a technique called deep learning, combined with an explosion in computing power and the sheer volume of data available on the internet. By the early 2020s, researchers had built something genuinely new: AI models that could engage with language in an open-ended, flexible, surprisingly capable way.

ChatGPT launched to the public in November 2022 and reached one million users in five days. For context, it took Instagram two and a half months to reach the same number. Something had shifted. For the first time, genuinely powerful AI was accessible to anyone with a phone or laptop — no technical knowledge required.

What kind of AI are we talking about?

There are many different types of AI, but this course is focused on the kind you will actually use in your daily life: large language models, or LLMs. These are the technology behind ChatGPT, Claude, Gemini, and similar tools.

The name is more straightforward than it sounds. They are models — mathematical systems — that are very large (trained on billions of documents) and work primarily with language. You type something, they respond. You ask a question, they answer. You give them a task, they attempt it.

📌 Worth knowing

When people talk about "AI" in everyday conversation in 2025, they're almost always referring to large language models like ChatGPT. There are other types of AI — image generators, voice tools, code assistants — but LLMs are the foundation, and mastering them is the most valuable skill you can build right now.

The magic trick that makes it work

Here's the thing that surprises most people when they first understand it: these models don't store facts the way a database does. They don't have a filing cabinet full of answers. What they have is a deep, complex understanding of the relationships between words, concepts, and ideas — built from processing an almost incomprehensible volume of human writing.

When you ask ChatGPT "what's a good way to apologise to someone?", it doesn't look up "apologies" in an index. It generates a response word by word, each word chosen because it is the most probable useful continuation of what came before, given everything it was trained on.

This is why it can write a poem, debug code, explain quantum physics, help you draft a difficult email, and translate from Japanese — all in the same conversation. It's not switching between different specialised programs. It's the same underlying capability applied to wildly different tasks.
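If you're comfortable with a little code, the word-by-word prediction idea can be illustrated with a toy example. The sketch below counts which word most often follows each word in a tiny sample text, then "predicts" the next word from those counts. Real models use neural networks trained on billions of documents, not simple counts, so treat this purely as an analogy for the underlying idea:

```python
from collections import Counter, defaultdict

# A toy illustration of next-word prediction. Real LLMs don't work like
# this internally, but the "pick a likely continuation" idea is the same.
training_text = (
    "i am sorry for the delay . i am sorry for the mistake . "
    "i am grateful for the help ."
).split()

# Count which word tends to follow each word in the sample text.
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("am"))  # "sorry" follows "am" twice, "grateful" once
```

The model has no idea what an apology *is*; it only knows that "sorry" usually follows "am" in what it has seen. Scale that up enormously and you get something that can sound remarkably knowledgeable while still, at bottom, continuing patterns.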

What AI is not

This is as important as understanding what it is. AI is not:

  • Conscious or self-aware. It has no inner experience. It doesn't know it exists.
  • Reliably factual. It can and does make things up with complete confidence. You will learn to check what matters.
  • Connected to the internet by default. Most AI tools work from their training data, which has a cut-off date. They don't automatically know what happened last week.
  • A replacement for human judgment. It is a tool — a remarkably powerful one — but the judgment, the values, and the decisions remain yours.
  • Magic. Once you understand how it works, a lot of its behaviour becomes predictable and manageable. That's exactly what this course teaches.

🚀 Try this right now

Before you move on, open ChatGPT or Claude in another browser tab and type: "Explain what you are in two sentences, as if I'm completely new to AI." Read what it says. Notice that it can describe itself clearly — but notice also that it doesn't claim to think or feel. That's intentional.

Why this matters for you

You don't need to understand the mathematics behind AI to use it effectively. You don't need to know how a car engine works to drive well. What you do need is an accurate mental model — a realistic sense of what this tool can and can't do — so that you know when to trust it, when to question it, and when to use it in the first place.

That's what Part 1 of this course builds. By the time you finish these five lessons, you'll have a clear, grounded understanding of AI that most people — including many who use it every day — don't have. And that understanding is the foundation everything else builds on.

📌 Key takeaways from this lesson

AI is software trained on vast amounts of data to recognise patterns and generate useful responses. The AI tools you'll use — ChatGPT, Claude, Gemini — are large language models. They're genuinely capable, but not infallible, not conscious, and not magic. The better your understanding of how they work, the better results you'll get from them.

Part 1 · Lesson 2 of 5
⏱ 16 min read

The Main AI Tools Explained

📖 Lesson 1.2 ⏱ 16 min read 🎯 Part 1: What Is AI, Really?

There are dozens of AI tools competing for your attention right now. Some are genuinely useful. Some are gimmicks. Some are excellent at one specific thing and poor at everything else. This lesson gives you a clear map of the landscape so you know what's worth your time.

The big three: ChatGPT, Claude, and Gemini

For everyday AI use, three tools dominate in 2025. They are all large language models, they all work in broadly the same way, and they are all remarkably capable. The differences between them are real but subtle — and you'll develop your own preferences over time.

ChatGPT (by OpenAI)

ChatGPT was the tool that changed everything when it launched in late 2022. It remains the most widely used AI assistant in the world and the one most people encounter first. The free version is genuinely capable for most everyday tasks. ChatGPT Plus (currently around £20/month) gives you access to the most powerful version of the model, image generation via DALL-E, and web browsing.

Best for: General tasks, creative writing, explaining concepts, brainstorming, coding help. The largest community of users means the most tutorials, tips, and shared prompts online.

Claude (by Anthropic)

Claude is often described as the AI that feels most like talking to a thoughtful person. It tends to produce longer, more nuanced, more carefully reasoned responses than ChatGPT, and it is particularly good at working with long documents — you can paste in an entire report and ask it questions about it. Claude is also notably careful about accuracy and will tell you when it's uncertain rather than confidently guessing.

Best for: Writing, analysis, working with documents, tasks requiring careful reasoning. Many professional writers and researchers prefer it.

This course is hosted on AI101.uk. Both ChatGPT and Claude are excellent choices for the exercises throughout — use whichever feels more natural to you.

Gemini (by Google)

Google's AI assistant is deeply integrated with Google's products — Docs, Gmail, Drive, Search. If you already live in the Google ecosystem, Gemini can work directly within the tools you already use. Gemini's web integration is also a strength: it draws on Google Search in real time, which means it's better than most at current information.

Best for: Searching and summarising current information, working within Google Workspace, quick lookups.

🗺️
Which one should you use?

Start with whichever one you have access to — all three have generous free tiers. Most people settle on one as their default but use others for specific tasks. There's no wrong answer. The prompting skills you build in this course work on all of them.

Tools built on top of LLMs

Beyond the main assistants, there's a growing ecosystem of specialist tools that use the same underlying technology but apply it to a specific task.

Image generation: Midjourney, DALL-E, Adobe Firefly

These tools generate images from text descriptions. You describe what you want — "a watercolour painting of a coastal village at sunset" — and the AI produces it. The quality has become extraordinary. DALL-E is built into ChatGPT Plus. Midjourney is a separate subscription with particularly striking results. Adobe Firefly is integrated into Adobe's Creative Suite and is designed to be safe for commercial use.

Voice AI: ElevenLabs, Whisper

ElevenLabs converts text to strikingly natural-sounding speech, and can even clone a voice from a short sample. OpenAI's Whisper converts speech to text with impressive accuracy across many languages. These tools are quietly transforming podcasting, accessibility, and content creation.

Search-augmented AI: Perplexity

Perplexity is a search engine built around AI. Instead of giving you a list of links, it reads sources and synthesises an answer with citations. It's excellent for research questions where you want a summary rather than a list of websites to browse.

Microsoft Copilot

Microsoft has embedded AI throughout its products — Word, Excel, Outlook, Teams. If you use Microsoft 365 at work, Copilot can summarise emails, draft documents, analyse spreadsheets, and prepare meeting summaries. For many professional users in the UK, this is the AI they'll interact with most.

⚠️ A word of caution

New AI tools launch almost every week, and many are heavily marketed. Not all of them are genuinely useful, and many are expensive wrappers around the same underlying models you can access for free. In this course, we focus on the tools with the broadest utility and the strongest track records. You don't need to subscribe to everything.

Free vs paid: what do you actually need?

The honest answer is that the free tiers of ChatGPT and Claude will cover most people's needs most of the time. The paid versions offer meaningful improvements — faster responses, access to the most capable model versions, longer context windows, and image generation — but they are not necessary to complete this course or to get significant value from AI in your daily life.

If you're going to pay for one thing, ChatGPT Plus or Claude Pro (both around £18–20/month) is a worthwhile investment once you've established a regular AI habit. But start with the free tier, get comfortable, and upgrade when you find yourself hitting the limits.

🚀 Try this right now

If you haven't already, create a free account on either ChatGPT (chat.openai.com) or Claude (claude.ai). You don't need to pay for anything. Just sign up and have a look around. Notice the simple text box — that's where all the magic happens.

📌 Key takeaways from this lesson

ChatGPT, Claude, and Gemini are the three main AI assistants worth knowing. All have free tiers. All are capable for everyday tasks. Beyond these, there are specialist tools for images, voice, search, and professional applications. You don't need all of them — start with one and build from there.

Part 1 · Lesson 3 of 5
⏱ 18 min read

Your First Conversation

📖 Lesson 1.3 ⏱ 18 min read 🎯 Part 1: What Is AI, Really?

Everything in this course comes down to one skill: having a good conversation with an AI. Not a perfect conversation — a good one. This lesson teaches you the foundations of how to interact with AI tools so you get useful responses rather than frustrating ones.

What a "prompt" actually is

Every message you send to an AI is called a prompt. That's the technical term, but think of it simply as your side of the conversation. The quality of what you get back is almost entirely determined by the quality of what you put in. Vague prompt, vague response. Clear, specific prompt, useful response.

This doesn't mean you need to write perfectly. It means you need to be clear about what you want.

The single most important habit: be specific

Let's look at the difference between a vague prompt and a specific one.

Vague prompt

Help me with an email.

What AI will probably do

Ask you clarifying questions, or produce something generic that doesn't fit your situation. Either way, you lose time.

Specific prompt

I need to write a professional email to my landlord asking for a repair to be done. The issue is a leaking tap in the kitchen that has been ongoing for three weeks. The tone should be polite but firm. Keep it to around 150 words.

What AI will probably do

Produce a well-crafted, appropriately toned email that you could send with minimal editing. Much more useful.

The difference isn't length — it's specificity. The second prompt tells AI who this is for, what the situation is, what tone to use, and how long it should be. Those four pieces of information make an enormous difference.

The four things a good prompt includes

You don't always need all four, but the more of these you include, the better the response:

  • Context. What's the situation? Who is this for? What's the background?
  • Task. What specifically do you want the AI to do?
  • Format. How should the response be structured? A list? An email? A paragraph? Short or long?
  • Tone. How should it sound? Formal or casual? Friendly or direct? Simple or detailed?
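If you happen to be comfortable with a bit of code, the four elements can be written as a reusable template. The function name and field labels below are our own illustration, not any official format an AI requires:

```python
# A sketch of the four-part prompt structure as a simple template.
def build_prompt(context, task, fmt, tone):
    """Combine the four elements of a good prompt into one message."""
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {fmt}\n"
        f"Tone: {tone}"
    )

prompt = build_prompt(
    context="My kitchen tap has been leaking for three weeks and my landlord has not responded.",
    task="Write an email to my landlord asking for the repair to be scheduled.",
    fmt="A short email of around 150 words.",
    tone="Polite but firm.",
)
print(prompt)
```

Filling in each field forces you to think about all four elements before you press send, which is the real habit this lesson is teaching.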

🎯
The CRAFT framework (introduced in more depth in Part 3)

Context, Role, Action, Format, Tone. We'll build on this throughout the course. For now, just remember: the more you tell AI about what you need, the more useful it becomes.

Conversation, not interrogation

One of the most powerful things about modern AI is that it remembers the conversation. You don't have to get it perfect in one prompt. You can refine, redirect, and build on what it gives you.

If the first response isn't quite right, try one of these:

  • "That's good, but can you make it shorter?"
  • "I like the structure but the tone is too formal. Can you make it warmer?"
  • "Can you give me three different versions of that opening paragraph?"
  • "Actually, let's try a different approach entirely. Instead of..."

Think of it as working with a very capable assistant. You wouldn't expect a brilliant first draft every time — you'd expect a good first attempt and a productive back-and-forth until it's right.

What to expect from your first few conversations

Most people's first experience with AI goes through roughly the same stages:

  1. Impressed. "This is remarkably good. How is it doing this?"
  2. Frustrated. "Why did it get that so wrong? It seemed so confident."
  3. Calibrated. "I understand now what it's good at and what it isn't. I can work with that."

The goal of this course is to get you to stage three as quickly as possible, and to stay there. Stage three is where AI actually becomes useful rather than just occasionally impressive.

🚀 Try this right now — your first proper conversation

Open ChatGPT or Claude and try this prompt. Then reply to what it produces and refine it at least once:

Your first real prompt

I want to learn one new skill this year. I enjoy being outdoors, I have about an hour free most evenings, and I'm looking for something that's both relaxing and gives me something to show for my time. Suggest three possible skills with a brief explanation of why each one suits what I've described.

After you get a response, reply with something like: "I like the second suggestion. Can you tell me how someone completely new to that would get started, including what they'd need to buy?"

📌 Key takeaways from this lesson

A prompt is just your message to AI. Better prompts include context, task, format, and tone. You don't need to get it perfect first time — AI conversation is iterative. The most important skill is learning to refine and build on the first response.

Part 1 · Lesson 4 of 5
⏱ 14 min read

What AI Gets Wrong

📖 Lesson 1.4 ⏱ 14 min read 🎯 Part 1: What Is AI, Really?

This is the most important lesson in Part 1. Everything else in this course — the prompting techniques, the practical applications, the workflows — works better when you have a clear-eyed understanding of where AI falls short. Knowing the failure modes is what separates confident, effective AI users from those who get burned by it.

The hallucination problem

Hallucination is the term the AI industry uses for when an AI confidently produces information that is simply wrong. Not hedged, not qualified — stated as fact, in a helpful, articulate tone, with no indication that it might be invented.

Here's why it happens. Remember how we said AI predicts the most likely next word, based on patterns? Well, sometimes the most plausible-sounding continuation of a sentence isn't the true one. The model has no way to distinguish between "a pattern I learned from real facts" and "a pattern that sounds like a real fact but isn't." It produces what sounds right, not what is right.

⚠️
Hallucination in practice

Ask an AI to cite sources and it will often produce citations that look completely real — correct journal name, plausible author, reasonable year — but simply don't exist. The article was never published. The quote was never said. This is one of the most dangerous failure modes because it looks so convincing.

The cut-off date problem

Most AI models are trained on data up to a specific date, after which they know nothing about what happened in the world. ChatGPT's training data has a cut-off of early 2025. Claude's is similar. Gemini connects to Google Search and is therefore more current, but even that has limits.

What this means practically: don't ask AI about recent news, current prices, live sports results, or anything that changes frequently without verifying the answer elsewhere. An AI asked "who is the current Prime Minister?" might give you the right answer, or it might give you whoever held the role when its training data ended.

Maths and logic: proceed with caution

This surprises people because AI seems so intelligent. But LLMs are fundamentally language models — they are trained on text, not mathematical reasoning. They can explain maths beautifully, walk you through concepts clearly, and often get the right answer. But they make arithmetic errors, sometimes get logical puzzles wrong, and can confidently produce incorrect calculations.

For anything involving numbers that matters — financial calculations, measurements for a building project, drug dosages — verify the AI's answer with a calculator or a specialist.

The confidence problem

Perhaps the hardest thing about AI errors is how they're delivered. AI doesn't say "I think this might be roughly right." It says "Here is the answer." The tone is authoritative even when the content is wrong. This is a feature of how the models are trained — they optimise for useful, clear responses, not for admitting uncertainty.

The best modern models have got better at expressing uncertainty when they genuinely don't know something. Claude in particular will often say "I'm not certain about this" or "you should verify this." But you can never fully rely on this. The absence of a caveat does not mean the information is correct.

⚠️ The golden rule

Anything that matters — a medical decision, a legal question, a significant financial choice, a fact you're about to share publicly — should be verified from a primary source. AI is a starting point, not a final authority.

Bias and limitations in perspective

AI models are trained on human-generated text from the internet, books, and other sources. Human-generated text reflects human biases — cultural, political, historical. AI can reproduce these biases in subtle ways. It may give different responses depending on how a question is framed, reflect the perspectives more common in its training data, or be less familiar with topics underrepresented in that data.

This doesn't make AI useless — it makes it important to approach it as you would any source: with informed, thoughtful judgment.

What this means for how you use AI

None of this should put you off. These limitations become manageable once you know about them:

  • For factual information that matters, verify it. Use AI to understand, summarise, or explain — then check the key facts with a reliable source.
  • Don't ask AI for citations unless you plan to check them. It will produce convincing-sounding ones that may not exist.
  • Be cautious with numbers. Double-check any calculation that has consequences.
  • Treat AI output as a first draft, not a final answer. Apply your own knowledge and judgment to everything it produces.
  • Use AI for what it's genuinely brilliant at. Writing, explaining, brainstorming, summarising, translating — these are areas where its strengths far outweigh its weaknesses.

🚀 Try this — see a hallucination in action

Ask an AI: "Can you recommend a book about [your specific hobby or interest] and tell me a bit about it?" Then look the book up. It may well exist and be accurately described. Or it might not exist at all. Either way, you'll have learned something important about how to approach AI responses.

📌 Key takeaways from this lesson

AI hallucinates — it produces wrong information confidently. Its knowledge has a cut-off date. It can make maths errors. Its confident tone doesn't mean it's correct. None of this makes it useless — it makes it a tool that rewards informed, thoughtful use. Verify what matters. Trust your own judgment.

Part 1 · Exercise
⏱ ~20 min

Part 1 Exercise

✏️ Practical Exercise ⏱ ~20 min 🎯 Part 1 completion

This exercise puts together everything from Part 1. There are four tasks. Work through each one in ChatGPT or Claude, then note down a brief reflection on each. There are no right or wrong answers — what matters is that you've tried each thing and noticed what happened.

✏️
Part 1 Exercise — Four Tasks

Complete all four tasks and capture your observations. This is for your own learning — your responses aren't submitted anywhere.

Task 1 — Your first real prompt. Ask AI something genuinely useful to you right now — something you'd actually benefit from knowing or having help with. Use what you learned in Lesson 1.3: include context, the task, and the format you want.
Task 2 — Refine the response. Whatever it produces, reply to it and ask it to change something specific. Maybe make it shorter, change the tone, or focus on a different aspect. Notice how quickly it adapts.
Task 3 — Find a limitation. Ask AI a question where you already know the correct answer — something factual about your own area of expertise or a recent event you know well. Check how accurate it is.
Task 4 — Ask AI about itself. Ask: "What are the most important things I should know about how you work and where you might go wrong?" Compare its answer to what you've learned in this part of the course.

📌 Part 1 complete

You now have a clear, grounded understanding of what AI is and how it works — the foundation everything else builds on. In Part 2 we move straight into practical applications: writing help, research, planning, learning, and creative uses. The exercises get progressively more hands-on from here.

🎉
Part 1 Complete!

You can now explain what AI actually is, choose the right tools for the job, have a productive first conversation, and spot when AI is likely to be wrong. Most people who take this course say Part 1 alone changes how they think about technology.

Part 2 is where it gets practical — writing, research, planning, learning, and creative uses of AI in everyday life.
