Privacy & What to Share
Using AI effectively means sharing information with it — context, background, details about your situation. The more you tell it, the more useful it becomes. But this creates a genuine question: what's safe to share, and what should you hold back?
This isn't a reason for alarm. It's a reason to understand how these systems work and make informed choices — the same approach you'd take with any service you use online.
How AI companies handle your data
The major AI platforms — OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini) — are all subject to data protection law, including the UK and EU GDPR. They have privacy policies and data retention settings. The basics are consistent across all three:
- Your conversations may be reviewed by staff to improve the model, though this varies by plan and you can often opt out
- Most platforms have settings to turn off conversation history — worth knowing about
- None of the major platforms sell your personal data to third parties for advertising
- Paid tiers generally offer stronger privacy protections than free tiers
The practical privacy rules
There is no situation where an AI tool needs your password. If you're asking for help with a login issue, describe the problem — don't share the credential.
Account numbers, sort codes, card numbers — these have no place in an AI conversation. If you're asking about financial matters, describe the situation without including the actual numbers.
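If you handle this kind of text often and are comfortable with a little scripting, the habit of stripping numbers before pasting can be automated. A minimal sketch in Python (the function name and patterns are illustrative; this is a rough filter, not a complete PII scrubber):

```python
import re

def mask_financial_numbers(text: str) -> str:
    """Replace digit runs that look like financial identifiers with placeholders.

    Illustrative patterns only:
    - 13-19 digit runs (optionally spaced/dashed) -> [CARD NUMBER]
    - UK-style sort codes like 12-34-56          -> [SORT CODE]
    - bare 8-digit runs                          -> [ACCOUNT NUMBER]
    """
    text = re.sub(r"\b(?:\d[ -]?){13,19}\b", "[CARD NUMBER]", text)
    text = re.sub(r"\b\d{2}-\d{2}-\d{2}\b", "[SORT CODE]", text)
    text = re.sub(r"\b\d{8}\b", "[ACCOUNT NUMBER]", text)
    return text
```

Run a draft through a filter like this before pasting it into a chat window; anything the patterns miss still needs a manual glance.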
When you ask AI to help with something involving another person — drafting a message, solving a conflict, writing about someone — share only what's necessary. Other people haven't consented to their details being shared with an AI system.
If you work with confidential client data, proprietary business information, or personal details of customers or colleagues, check your employer's AI policy before sharing these in a consumer AI tool. Many organisations have specific guidance on this.
Your age, general health situation, job role, family context, preferences — sharing this makes AI dramatically more useful and the risk is low for most people in most contexts. Use your judgment.
Using incognito or privacy mode
Most AI platforms let you disable conversation history, which means your conversations aren't saved or used for training. For particularly sensitive conversations, this is worth using. In ChatGPT, this is in Settings → Data Controls. In Claude, check the privacy settings for the current data controls; defaults and policies change over time, so it's worth reviewing them occasionally.
Think of sharing information with AI the way you think of telling a story to a professional you're consulting — a doctor, solicitor, or adviser. Share what's relevant and necessary. Omit identifying details about others when they're not essential. Don't share credentials or financial account details. Everything else is generally fine.
Never share passwords, PINs, or financial account numbers with AI. Be thoughtful about other people's personal details. Check your employer's AI policy for work-related information. Personal context about yourself is generally fine and makes AI more useful. Use conversation history settings if you're asking about something particularly sensitive.
Spotting AI Misinformation
We covered AI hallucinations in Lesson 1.4. Now we go a level deeper: how does AI misinformation spread, how do you develop the instincts to spot it, and what are the fast verification habits that protect you without slowing you down?
How AI misinformation spreads
The danger isn't that AI is dishonest — it isn't. The danger is that AI is confidently wrong in ways that look exactly like being confidently right. And because AI-generated content is now everywhere — in articles, emails, social media posts, and customer service responses — that confident wrongness travels further and faster than it used to.
A few specific patterns to watch for:
The fabricated citation
AI will produce academic citations, news article references, and book recommendations that look completely real and simply don't exist. The journal name is plausible. The author is plausible. The year is plausible. The article was never published. This is one of the most dangerous forms of AI error because it looks like the kind of evidence that should end an argument — and sometimes does, among people who don't check.
The outdated fact stated as current
AI's training data has a cut-off date. A statistic that was accurate in 2023 may be significantly wrong by 2025. Laws change, guidelines are updated, organisations restructure. AI doesn't know any of this and will present outdated information with the same confidence as current information.
The plausible-sounding invented detail
Ask AI about a historical event and it may add specific details — dates, names, quotes — that fit perfectly and are entirely fabricated. It's not lying in any intentional sense. It's producing what sounds most plausible, and plausible is not the same as true.
The fast verification habits
You don't need to verify everything AI tells you — that would defeat the purpose. You need a calibrated sense of when to verify and a fast way to do it.
- Verify before acting or sharing. If you're about to make a decision, share something publicly, or send something to someone important based on AI information — verify the key facts first.
- Citations: check before citing. If AI gives you a source, search for it. If you can't find the specific article or paper, it may not exist. Don't cite something you haven't been able to locate.
- Statistics: find the primary source. Ask AI where a statistic comes from, then look for that original source. If AI can't tell you or the source doesn't say what AI claims, treat the number with caution.
- Recent information: search the date. For anything where recency matters — current laws, current prices, current office holders — use a search engine to verify rather than relying on AI's training data.
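For citations and statistics in particular, the checking step can start mechanically: pull out every URL and DOI-like string from an answer, then look each one up. A hedged sketch (the function name is mine, and the patterns are deliberately rough):

```python
import re

def extract_checkables(answer: str) -> list[str]:
    """List the mechanically checkable elements of an AI answer:
    URLs and DOI-like strings. Each item should resolve to a real
    source that actually says what the answer claims."""
    urls = re.findall(r"https?://\S+", answer)          # web links
    dois = re.findall(r"\b10\.\d{4,9}/\S+\b", answer)   # DOI-like strings
    return urls + dois
```

If an item on the list can't be found, treat the claim it supports as unverified rather than assuming a typo.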
"As you answer, please indicate your confidence level for each key claim and flag anything I should verify independently before acting on it."
Add this to any research prompt where accuracy matters. Good AI tools will honestly flag where they're less certain — and knowing which parts of a response to verify saves time compared to verifying everything.
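If you reuse that request often, it's easy to bake into a reusable template. A minimal sketch in Python (the constant and function names are illustrative):

```python
VERIFY_SUFFIX = (
    "As you answer, please indicate your confidence level for each key claim "
    "and flag anything I should verify independently before acting on it."
)

def research_prompt(question: str) -> str:
    """Append the standing verification request to any research question."""
    return f"{question.strip()}\n\n{VERIFY_SUFFIX}"
```

Paste the result into whichever assistant you use; the suffix travels with every question.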
AI-generated content in the wild
Increasingly, the content you read online has been written, edited, or substantially shaped by AI. This doesn't automatically make it wrong — much AI-assisted content is accurate and well-researched. But it does mean the volume of plausible-sounding content has expanded enormously, and not all of it has been carefully verified by humans before publication.
The same critical reading habits that served you well before AI existed serve you even better now: check the source, look for the primary evidence, notice when specific claims aren't supported by specific citations, and be appropriately sceptical of content that confirms what you already believe a little too neatly.
AI misinformation is dangerous because it sounds confident. Watch for fabricated citations, outdated facts, and invented specific details. Verify before acting, sharing, or citing. Ask AI to flag its own uncertainty. Apply the same critical reading habits to AI-assisted content you'd apply to any online source.
AI and Your Wellbeing
This is a lesson most AI courses skip. It matters. AI tools are genuinely useful — but like any powerful tool, they're worth thinking about consciously rather than just absorbing into your life without reflection. This lesson covers the questions worth asking yourself about how you use AI.
The dependency question
One of the most common concerns about AI is dependency: the worry that using AI for writing, thinking, and decision-making will erode the skills we use to do those things ourselves. It's a legitimate question.
The honest answer is nuanced. Using AI for tasks you couldn't do well anyway — formal letters, technical research, unfamiliar situations — carries almost no risk of eroding skills you don't have. Using AI as a first draft for things you're practising — your own writing style, your own analytical thinking — means you need to remain conscious of doing the hard cognitive work, not just accepting the output.
A calculator is a tool for arithmetic. A dictionary is a tool for spelling. Neither made people worse at maths or writing — they freed up cognitive resources for higher-order thinking. AI is a tool in the same sense, as long as you use it that way. It becomes a crutch when you stop engaging your own judgment at all.
AI as conversation — and its limits
Some people find AI genuinely comforting to talk to — it's always available, endlessly patient, non-judgmental, and responsive. For light processing of everyday thoughts, this can be a reasonable use. But it's worth being clear about what AI cannot offer: genuine empathy, real experience, actual care for your wellbeing, or the kind of understanding that comes from a real relationship.
If you find yourself preferring AI conversation to human connection — if it feels easier, lower-risk, more comfortable — that's worth noticing. The ease of AI conversation is real, but it comes at the cost of the depth, challenge, and genuine care that only human relationships provide.
Anxiety about AI
Many people feel anxious about AI — about its speed of development, its impact on jobs, and what it means for the future. This anxiety is understandable, and you're in good company.
Completing this course is a meaningful response to that anxiety. Understanding how AI works, knowing how to use it, and developing a realistic sense of its capabilities and limitations is far more useful than either dismissing it or being overwhelmed by it. The people best placed to navigate the AI transition are those who engage with it thoughtfully — which is exactly what you've been doing.
Setting your own boundaries
There's no universal rule about how much AI use is right. What matters is that your use is conscious and intentional rather than habitual and unconsidered. A few questions worth asking yourself periodically:
- Am I using AI in ways that add genuine value to my life and work?
- Are there tasks I want to keep doing myself — for the skill, the satisfaction, or the personal meaning?
- Does my AI use feel like it's in service of my goals, or is it becoming an avoidance mechanism?
Conscious, intentional AI use is different from unconsidered dependency. AI is a tool, not a crutch, when you keep your own judgment engaged. AI conversation has real limits — it cannot replace genuine human connection. Anxiety about AI is best addressed through understanding and engagement, not avoidance.
AI Tools Worth Knowing About
You now have a solid foundation in the core AI assistants. This lesson broadens the picture — covering the other AI tools that are genuinely useful in everyday life, without overwhelming you with every tool that exists. These are the ones worth knowing about, organised by what they help you do.
For searching and current information
Perplexity
Perplexity is an AI-powered search engine that reads sources and gives you a synthesised answer with citations, rather than a list of links to browse. For research questions — "what are the current HMRC rules on claiming home office expenses?" or "what's the latest evidence on intermittent fasting?" — it's often faster and more useful than traditional search. Free tier is generous. Worth bookmarking alongside your main AI assistant.
Google Gemini with Search
Gemini can draw on Google Search in real time. This makes it significantly better for current information: recent news, live prices, today's events. If recency matters for your query, Gemini is often the right tool.
For images and visuals
DALL-E (built into ChatGPT)
ChatGPT Plus subscribers can generate images by describing what they want. Excellent for creating social media graphics, illustrations for documents, visual concepts, and anything where you need a one-off image without stock photo costs. The quality has become genuinely impressive — photorealistic, artistic styles, specific compositions.
Adobe Firefly
Adobe's AI image generator is built into Photoshop and available as a standalone tool. It's designed to be commercially safe, trained on licensed and public-domain content rather than scraped web images. If you create content professionally and worry about copyright, Firefly is the more defensible choice.
For audio and voice
Otter.ai
Automatically transcribes and summarises meetings, voice memos, and recordings. If you have meetings you need notes from, phone calls you'd like transcribed, or voice ideas you want turned into text, Otter is excellent. Free tier covers most personal use cases.
ElevenLabs
Converts text to natural-sounding speech — useful for creating audio versions of written content, voiceovers, or accessibility purposes. Can also clone voices from samples, though this raises ethical questions worth thinking about carefully.
For productivity in tools you already use
Microsoft Copilot
If you use Microsoft 365 — Word, Excel, Outlook, Teams — Copilot embeds AI directly into those tools. Summarise a long email thread, draft a Word document from notes, analyse a spreadsheet, prepare meeting notes. If your workplace uses Microsoft 365, check whether your organisation has Copilot enabled.
Notion AI
If you use Notion for notes or project management, Notion AI can summarise pages, generate content, help with writing, and analyse data within your workspace. Useful if you're already a Notion user.
New AI tools launch constantly, and many are aggressively marketed. Most are wrappers around the same underlying models you can access for free. Before paying for any new AI tool, ask: can I do this with ChatGPT or Claude already? The answer is often yes. Your core AI assistant does far more than most people realise — explore its full capabilities before adding tools.
Perplexity for research, Gemini for current information, DALL-E for images, Otter.ai for transcription, Copilot for Microsoft 365 users. Explore your core AI assistant's full capabilities before adding new tools. Most of what you need is already available in ChatGPT or Claude — used well.
Where to Go From Here
You've finished AI Essentials. That's a genuine achievement — and not just in the sense of completing a course. You now have a grounded, accurate, practical understanding of AI that most people who use these tools every day simply don't have. That understanding is the foundation everything else builds on.
This final lesson does three things: reflects on what you've actually learned, sets out the most valuable habits to carry forward, and shows you clearly where to go next if you want to go deeper.
What you now know — and why it matters
Let's be specific about what has changed since you started this course:
- You understand how AI works — not at a technical level, but at the level that matters: you know why it sometimes gets things wrong, why context improves results, why the confident tone isn't a guarantee of accuracy. This understanding is what separates skilled users from frustrated ones.
- You know the tools — ChatGPT, Claude, Gemini, and the specialist tools that complement them. You can choose intelligently rather than just using whatever you've heard of.
- You can prompt effectively — the CRAFT framework, iterative conversation, role prompting, negative constraints. These are real skills that produce measurably better results.
- You use AI safely — you understand the privacy boundaries, the verification habits, and the wellbeing considerations that informed AI use requires.
The three habits that compound over time
Skills atrophy without practice. Three habits will make everything you've learned compound in value:
1. Use AI for one real task every day
Not a practice exercise — something you actually need. An email, a piece of research, a plan, a question you've been wondering about. Daily use builds fluency faster than anything else. After two weeks of consistent daily use, the tool stops feeling unfamiliar and starts feeling like a natural extension of how you work and think.
2. Stay curious about what's changing
AI is developing faster than almost any technology before it. The tools will improve, the capabilities will expand, new applications will emerge. You don't need to follow this obsessively, but a general awareness of where things are heading helps you recognise new opportunities when they arrive. The simplest approach: occasionally ask your AI assistant "what can you do now that you couldn't do six months ago?"
3. Share what you learn
Explaining something to someone else is one of the most effective ways to deepen your own understanding. When you find a useful prompt, a clever application, or a technique that saves you time — share it. With a colleague, a friend, a family member. Teaching is learning.
Where to go next
This course has given you a comprehensive foundation. Where you go from here depends on what matters most to you:
- Go deeper personally: AI Advanced — Power Up Your Personal AI covers advanced prompting, building personal AI systems, automation, creativity, and AI-assisted learning. The natural next step if you want to go significantly further in your personal use.
- Apply it professionally: AI for Business covers AI in meetings, documents, data, customer communications, and building workflows for teams. The course most valued by people who want real productivity gains at work.
- Apply it to content and marketing: AI for Marketing covers social media, email campaigns, video scripting, brand storytelling, and analytics. Built for marketers, content creators, and business owners managing their own marketing.
The most important thing to carry forward isn't any specific technique. It's the mindset: AI is a tool that rewards curious, informed, thoughtful use. The more you understand it, the more you get from it. You now understand it. Use that.
Before you receive your certificate, take five minutes on these three questions. There are no right answers — this is for your own reflection.