ChatGPT and Claude are extraordinarily useful tools. Most people who use them regularly wonder how they managed without them. But there's a side of AI use that doesn't get discussed enough — what you should keep away from these tools entirely.

This isn't about being paranoid. It's about understanding how these systems actually work, who might see what you type, and where the real risks are. Getting this right means you can use AI confidently without exposing yourself unnecessarily.

How AI chatbots handle your data

Before getting into the list, it's worth understanding the basics of what happens when you type something into ChatGPT or Claude.

Your conversations are sent to the company's servers to generate a response. By default, both OpenAI (ChatGPT) and Anthropic (Claude) may use your conversations to train and improve their models — though both offer ways to opt out of this. Both companies employ human reviewers who may read conversations as part of their safety and quality processes.

This doesn't mean your conversations are public — they're not. But it does mean they exist on someone else's server, and "private" is a relative term. Treat it like email: not something a stranger on the street can read, but not something that exists only on your device either.

Quick settings tip

Both ChatGPT and Claude let you opt out of having your conversations used for model training, and ChatGPT also offers temporary chats that aren't saved to your history. If you regularly discuss sensitive topics, these are sensible precautions. Look for "Data Controls" in ChatGPT settings and "Privacy" in Claude settings.

What you should never share

🔑
Passwords, PINs, and security codes

Never type a password into an AI chatbot under any circumstances. Not to ask for help with something, not as an example, not in any context. There is no legitimate reason to do this, and the risk of exposure — however small — is never worth it.

💳
Financial account details

Bank account numbers, sort codes, credit card numbers, National Insurance numbers. These are the building blocks of financial fraud. If you need help with a financial document, describe the situation in general terms rather than pasting in real numbers.

🏥
Sensitive personal medical information

Using AI to understand a medical condition or research treatment options is fine — and genuinely useful. What's riskier is sharing specific identifying details alongside medical information: your full name, NHS number, GP details, or a combination of details that could identify you. Discuss conditions in general terms where possible.

👥
Other people's personal information without their knowledge

Pasting a colleague's email, a client's details, or a friend's personal situation into an AI chatbot raises real ethical and potentially legal questions — particularly under UK GDPR. If you need to use AI to help with something involving another person, anonymise the details first.

🏢
Confidential company information

Unreleased product plans, client lists, financial projections, internal strategy documents, legal correspondence. Many companies now have explicit policies prohibiting the use of AI tools with confidential information — and with good reason. The Samsung data leak in 2023, where engineers accidentally shared proprietary code with ChatGPT, is the most famous example of how quickly this can go wrong.

⚖️
Details of ongoing legal matters

If you're involved in a legal dispute, employment tribunal, or any active legal proceeding, be very cautious about discussing the specifics with AI. Legal professional privilege covers your conversations with your solicitor, not your conversations with a chatbot. If you need legal help, speak to a solicitor.

📸
Intimate or sensitive images

Some AI tools can analyse images. Never upload intimate images of yourself or anyone else to an AI platform. Beyond the obvious privacy concerns, you have no control over how those images are handled, stored, or potentially used in training data.

📍
Your home address, daily routine, or real-time location

Sharing your general location to get local recommendations is fine. Sharing your home address, daily routine, or real-time whereabouts in detail — particularly in combination with other identifying information — creates an unnecessary record of data you'd rather keep private.

Learn to use AI safely and confidently.

Our AI Essentials course covers AI safety, privacy, and responsible use — alongside 18 practical lessons on getting genuinely useful results from ChatGPT and Claude. Verified certificate included.

View AI Essentials →

What about work — should I use AI at all?

This is the question many people are wrestling with right now, and the honest answer is: it depends on your employer's policy.

Many large organisations have either banned the use of consumer AI tools entirely, or have moved to enterprise versions (ChatGPT Enterprise, Microsoft Copilot via Microsoft 365) where data handling is governed by a contract rather than standard consumer terms.

If you work for a larger organisation, check your IT or data policy before using ChatGPT or Claude with any work information. If you're self-employed or run a small business, the decision is yours — but the principle remains: anonymise anything sensitive before it goes into an AI tool.

Practical tip for work use

Get into the habit of replacing real names, company names, and specific figures with placeholders before pasting anything into an AI tool. "Our client Acme Ltd ordered 500 units at £12 each" becomes "A client ordered X units at £Y each." You get the same quality of help with zero data exposure.
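
If you're comfortable with a bit of scripting, you can even automate this step. The sketch below is a minimal Python example of the idea; the names, patterns, and placeholders are illustrative assumptions rather than a complete anonymiser:

    import re

    # Illustrative only: list the real names you want scrubbed before pasting.
    SENSITIVE_NAMES = ["Acme Ltd", "Jane Smith"]

    def anonymise(text: str) -> str:
        """Swap known names, currency amounts, and unit counts for placeholders."""
        for i, name in enumerate(SENSITIVE_NAMES, start=1):
            text = text.replace(name, f"[NAME_{i}]")
        # Replace currency amounts such as £12 or £1,250.50.
        text = re.sub(r"£\d[\d,]*(?:\.\d+)?", "£Y", text)
        # Replace quantities such as "500 units".
        text = re.sub(r"\b\d[\d,]*\s+units\b", "X units", text)
        return text

    if __name__ == "__main__":
        print(anonymise("Our client Acme Ltd ordered 500 units at £12 each."))
        # Prints: Our client [NAME_1] ordered X units at £Y each.

Run it over a paragraph before you paste it, and keep a note of which placeholder maps to which real name so you can restore them in the AI's reply.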

The right perspective on AI privacy

None of this should put you off using AI. The benefits are real and significant — saving hours of time, improving your writing, helping you think through problems more clearly.

The goal is informed use. Understanding what you're sharing, with whom, and under what terms — and making sensible decisions as a result. That's not paranoia. That's just being a thoughtful user of any digital tool.

The people who get the most value from AI are the ones who use it consistently and intelligently — not the ones who avoid it out of vague concern, and not the ones who share everything without thinking. The middle ground is where the real benefit is.

Key takeaways
  • Your AI conversations exist on company servers — treat them like email, not a private diary
  • Never share passwords, financial details, or National Insurance numbers
  • Anonymise other people's information before using AI to help with anything involving them
  • Check your employer's AI policy before using consumer tools with work information
  • Opt out of model training (or use temporary chats) in settings if you regularly discuss sensitive topics
  • Informed use — not avoidance — is the goal

Free Download

The AI Starter Guide

5 things every beginner needs to know before using AI — plain English, instantly useful. Free, no credit card.

Get it free →