Auditing Your AI Habits
This is the final part of AI Advanced, where everything you've built across Parts 1–5 comes together.
Before you build your Personal AI System document, you need an honest picture of where you are now — what's changed since you started this course, where the genuine gaps still are, and what you want to be true in another six months. This lesson is a structured self-audit.
The AI habits audit
Work through these questions honestly. This isn't a test with right answers — it's a mirror. The value is in the accuracy of what you see.
What has changed since you started
One of the most valuable things you can do in this capstone is document what's actually different. Not what you've learned — what's different about how you work, think, and approach problems. These are different questions.
Think about: the tasks where AI is now genuinely part of your process. The things you've stopped doing manually. The ways your thinking has changed. The moments where you caught yourself about to use AI inappropriately and chose not to. The areas where your scepticism has increased alongside your fluency.
Setting the right ambition level
Your Personal AI System should be ambitious enough to push you and realistic enough to stick. Most people design systems that are slightly too elaborate: they describe the ideal version of themselves rather than the actual one. A simple system you maintain beats a sophisticated one you abandon.
Aim for a system about 20% more sophisticated than your current habits, not 200%: growth that is sustainable rather than aspirational.
The audit is about accuracy, not judgment. Honest assessment of your current habits is more valuable than an inflated picture. Document what's genuinely changed, not just what you've learned. Set your system ambition at 20% above your current reality, not at your ideal self. The system that sticks is the one that wins.
Building Your Personal AI System Document
Your Personal AI System document is the capstone of this course — a living reference that captures everything you have built so you can use it consistently and keep improving it. This lesson walks through the five sections it should contain and how to write each one.
Why write it down?
Most AI users rely on memory — they vaguely remember a prompt that worked, roughly know which tool they prefer for what, and have a general sense of their habits. Writing it down turns vague memory into a reliable system. It also forces clarity: if you cannot write down why you use a particular tool for a particular task, you probably do not know why.
The five sections
1. My AI stack
List every AI tool you use regularly, with one sentence explaining why — what task or need it uniquely serves. If you cannot write that sentence, the tool probably does not belong in your stack.
2. My prompt library
Your curated prompts, organised by use case. Include only prompts that have worked well on real tasks and are likely to recur. Annotate them — a note on when to use each and what makes it work.
3. My daily and weekly routines
Document when and how you use AI in a typical day and week. Be specific: "Monday mornings: use Claude to process the week's briefing. Before any important email: use ChatGPT to draft." Specificity makes routines stick.
4. My automations
List any automations you have built or are building — what they do, which tools are involved, and what they save you. Even simple automations are worth capturing so you remember they exist.
5. My personal AI principles
A short list of your own rules for using AI well — what you always do, what you never do, and where you draw lines. These keep you using AI intentionally rather than habitually.
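Putting the five sections together, the skeleton below shows what a starting Personal AI System document might look like. The section headings come from this lesson; every entry under them is an illustrative placeholder, not a recommendation. Replace each with your own tools, prompts, routines, and rules.

```
My Personal AI System

1. My AI stack
   - Claude: long-form drafting and editing.
   - ChatGPT: quick email drafts.

2. My prompt library
   - "Briefing summary" prompt: use on Monday mornings; works because it
     names the audience and caps the length.

3. My daily and weekly routines
   - Monday mornings: process the week's briefing.
   - Before any important email: draft with AI, then edit by hand.

4. My automations
   - None yet; capture each one here as it's built.

5. My personal AI principles
   - Verify facts before sharing AI output.
   - Never paste client data into a consumer-tier tool.
```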
Open a document and write your current stack in section 1 — just the tools you actually use and one sentence per tool. That is your first 20 minutes of capstone work done.
Five sections: stack, prompt library, routines, automations, personal principles. Write only what is genuinely useful to you — not what you think should be there. Update it whenever something meaningful changes, not on a fixed schedule.
AI Ethics for Individuals
Ethics in AI is usually discussed at the level of governments, corporations, and systems. But the individual AI user also faces genuine ethical questions — about privacy, authenticity, dependency, and responsibility. This lesson covers those questions honestly, without pretending the answers are simple.
Privacy in practice
Every time you share information with an AI tool, you're sharing it with the company that runs that tool. This has practical implications:
- Don't share confidential information: details about clients, employers, or colleagues don't belong in AI conversations unless you've read and understood the provider's data policy.
- Be thoughtful about personal data: names, health information, financial details — even anonymised versions of real situations — can sometimes be re-identified.
- Know what your tools do with data: most providers offer enterprise tiers with stronger data privacy guarantees. If you're doing professional work with sensitive data, the cost is usually worth it.
- Avoid repeating sensitive context: use system prompts or custom instructions so you don't have to restate it in every conversation.
Authenticity and disclosure
When should you disclose that AI helped you produce something? There's no universal rule — but there are some principles worth working through:
- Substance vs. assistance: just as you wouldn't credit your editor as a document's author, AI assistance with drafting, structuring, or editing doesn't automatically require disclosure. AI that generated the core ideas or content is a different question.
- Context matters enormously: in academic settings, many institutions have specific rules. In professional settings, norms vary. Know the norms for your context.
- Honesty with yourself: if AI did the thinking and you provided the prompt, who is the author? If you directed, judged, edited, and took responsibility for the output, you are. If you copy-pasted without engagement, the question gets harder.
Healthy AI dependency
The most valuable uses of AI amplify your capabilities — they let you do things you couldn't do alone, or do things you already do, only much faster. The uses to watch are those where AI replaces thinking you should be doing yourself.
A useful test: if you stopped using AI for a week, what would you be unable to do? If the answer is tasks where you should be developing the underlying skill — writing, reasoning, research — that's worth examining. If the answer is administrative tasks, formatting, and synthesis work — that's healthy delegation.
Your responsibilities as a skilled AI user
As someone who uses AI more fluently than most, you have some responsibilities most people don't:
- Don't spread AI output uncritically. You know better than most how often AI is wrong. Verify before you share, especially if you have a platform or audience.
- Be honest when people ask about AI use. Normalising AI assistance is fine; obscuring it isn't.
- Help the people around you develop AI fluency. The gap between people who use AI well and people who don't is growing. You can help close it.
Privacy: don't share confidential data with AI tools; know what providers do with your information. Authenticity: context determines disclosure norms; the author is whoever did the thinking. Healthy dependency: AI should amplify your capabilities, not replace thinking you should develop. As a skilled AI user, you have responsibilities around verification, honesty, and helping others develop fluency.