After a year with Claude Code, I pulled out 7 principles from my last six months of daily use. Each one came from real pain during real development.

The most common beginner mistake isn’t inability — it’s treating Claude Code like a smarter ChatGPT. Used right, it’s an engineer you can train. Used wrong, it’s a hallucinating, lying creature that “looks like it worked but actually didn’t change anything,” and won’t lift a finger unless you tell it to.

Get these 7 things right and AI becomes your best partner.


1. Don’t Treat It Like a Chat Box — Treat It Like Your Employee

Imagine you’re a team lead assigning a task to a new hire. You say “add a delete feature” and walk away. They don’t know what you’re deleting, where the page redirects after, whether it’s soft delete, whether there’s a confirmation dialog, whether to log the action, which users have permission. They’ll guess. The result probably isn’t what you wanted.

That’s not the employee being incompetent. You didn’t explain the task.

AI is exactly the same. You type “add a delete feature” and it doesn’t have the product picture in your head, doesn’t have your week of conversations with the PM, doesn’t know how “delete” works in other modules, doesn’t know your acceptance criteria. It can only guess.

Treat it like an actual new colleague. Every task needs four things:

  • Objective — what the user sees and what the system looks like when it’s done
  • Detailed requirements — what to delete, how, where to redirect, soft delete or not, permission checks, confirmation dialog
  • Current project state — tech stack, where related modules live, how similar features were implemented before, any patterns to follow
  • Acceptance criteria — which tests must pass, which edge cases to cover, don’t stop until all tests pass
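Put together, a task brief covering all four parts might look like this (the feature, stack, and paths below are entirely made up for illustration):

```
Objective: users can delete their own posts from the post detail page;
after deletion they're redirected to the post list.

Requirements: soft delete (set deleted_at, don't drop the row), show a
confirmation dialog first, only the author or an admin may delete, log
the action to the audit trail.

Project state: Next.js + Prisma; post routes live in app/posts/; follow
the archive feature in app/posts/archive/ as the pattern.

Acceptance: all tests in tests/posts/ pass, including "delete someone
else's post" and "delete an already-deleted post"; don't stop until
everything is green.
```

Five lines per section is usually enough — the point isn't length, it's that none of the four questions is left for the AI to guess.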

The classic beginner failure is skipping the five minutes of explanation, then spending two hours correcting the AI’s wrong direction. Those five minutes are the highest-ROI five minutes of the entire task.

The test: is your keyboard getting busier or quieter? If you’re using it right, you should be typing less over time.


2. Use CLAUDE.md — It’s the AI’s Engineering Red Line

CLAUDE.md is the baseline Claude Code must always follow. It’s loaded into every conversation, so it needs to be concise while being clear.

  • “Loaded into every conversation” — it takes up context window. The longer it is, the higher the cost per turn. If you can say it in one sentence, don’t use three.

  • “Must always follow” — it’s not a suggestion, it’s a hard constraint. Only write things you truly want followed every single time. Don’t use “preferably,” “try to,” or “sometimes.”

What belongs in CLAUDE.md:

  • Engineering red lines (comment language, commit conventions, search before guessing)
  • File output rules (where reports go, what not to generate, where tests live)
  • Response format constraints (what language to use, required output structure)
  • Hard no’s (no --no-verify, no force-committing gitignored files)

What doesn’t belong:

  • Code style details — that’s the linter’s job
  • Architecture background and business logic — that’s the Design Doc’s job
  • Things you want it to do “sometimes” — it won’t execute reliably, and it dilutes the real red lines
  • Long explanations and examples — burns tokens, imperative bullet points are enough
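Concretely, a minimal project-level CLAUDE.md following these rules might look like the sketch below — every rule here is illustrative, and yours should be earned from your own project's pain, not copied:

```markdown
# CLAUDE.md

- Comments and commit messages in English; Conventional Commits format.
- Search the codebase before guessing an API; never invent signatures.
- Reports go to docs/reports/; never write files to the repo root.
- Tests live in tests/; run the test suite before declaring a task done.
- Never use --no-verify. Never commit files matched by .gitignore.
```

Note the shape: short, imperative, no "preferably," no explanations. Each line is a rule the AI can either follow or violate — nothing in between.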

You also need to separate global vs. project-level CLAUDE.md:

  • Global CLAUDE.md (~/.claude/CLAUDE.md) covers cross-project personal habits: comment language, commit language, always search when uncertain, response format, memory system paths. These are “you” rules that apply everywhere.

  • Project-level CLAUDE.md (repo root) covers only what’s unique to this project: tech stack, how to run tests, deployment process, forbidden directories, Design Doc path. Rule of thumb: if the rule still holds when moved to another project, it doesn’t belong at the project level.

One practical tip: red lines are earned from pain, not planned in advance. Your first CLAUDE.md for a new project can be short. Every time the AI does something that genuinely angers you, add a rule. After a month you’ll have a dense, high-signal document that’s uniquely yours.


3. Package Frequent Tasks into Skills — Stop Re-explaining

Many people have heard of Skills but haven’t started using them. Skills are the single most productivity-boosting feature. They’re essentially “prompt templates with an entry point and a tool allowlist.”

An example I’ve hit countless times: writing Design Docs.

Every new project I had to explain: write to <memory_root>/docs/design.md, choose Lite or Full template based on project size, architecture diagram must start from the user entry point not just backend internals, must include Overview / Ultimate Vision / Tech Stack / Architecture Diagram / Feature Status / Current Milestone / Key Decisions sections, unfilled sections get TBD not made-up content, if design.md already exists don’t overwrite it and run the update flow instead…

I got tired of repeating this. Every time I’d waste tokens and inevitably miss a rule, and every project’s Design Doc looked different.

I packaged it into the /init-design skill: fixed template, fixed path, fixed rules, fixed logic. Now starting a new project is one command: /init-design this project is XXX. Ten seconds, spec-compliant Design Doc.

Anti-pattern: repeating a dozen standards every new project, hoping the AI doesn’t miss any this time. Pattern: encode the standards into a skill, type /init-design and done.
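As a sketch, a skill is just a markdown file with frontmatter. The layout below follows Claude Code's skills convention as I understand it — a SKILL.md with name/description frontmatter under .claude/skills/init-design/ — so verify the exact fields against the current docs; the body is condensed from the rules above:

```markdown
---
name: init-design
description: Scaffold a spec-compliant design.md for a new project
---

Write the Design Doc to <memory_root>/docs/design.md.
Choose the Lite or Full template based on project size.
The architecture diagram must start from the user entry point,
not just backend internals.
Required sections: Overview / Ultimate Vision / Tech Stack /
Architecture Diagram / Feature Status / Current Milestone / Key Decisions.
Unfilled sections get TBD, never invented content.
If design.md already exists, do not overwrite it — run the update flow.
```

Once the standards live in the file, the command line shrinks to a single sentence of actual intent.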

I now use 20+ custom skills daily: /plan, /build, /debug, /codex-review, /init-design, /update-design, /commit, /contribute, /loop, /schedule… Each one replaces a workflow you’d otherwise type out repeatedly.

The test for whether something should be a skill: are you annoyed at re-typing a prompt you’ve written before? If yes, package it immediately.


4. Build a Memory System — Let Context Survive Across Conversations

Claude Code’s context window, no matter how large, resets to zero every session. What actually makes an AI assistant “know you” isn’t prompt engineering — it’s a stable, readable, writable, cross-session memory system.

A simple memory system at ~/memory/ looks like this:

  • USER.md — who you are, your profile
  • NOW.md — what you’re currently working on, updated after every session
  • docs/INDEX.md — project documentation map
  • daily-logs/ — last 14 days of conversation logs
  • lessons/ — experience distilled from conversations, auto-extracted by hooks
  • projects/ — each project’s design.md / plan/
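Bootstrapping this layout is one small script. The sketch below defaults to ./memory so it's safe to run anywhere — point MEM_ROOT at "$HOME/memory" for real use:

```shell
# Scaffold the memory layout. MEM_ROOT defaults to ./memory for the demo;
# set MEM_ROOT="$HOME/memory" to match the layout described above.
MEM_ROOT="${MEM_ROOT:-./memory}"

mkdir -p "$MEM_ROOT/docs" "$MEM_ROOT/daily-logs" \
         "$MEM_ROOT/lessons" "$MEM_ROOT/projects"

# Seed the top-level files only if they don't exist yet.
for f in USER.md NOW.md; do
  [ -f "$MEM_ROOT/$f" ] || printf '# %s\n' "${f%.md}" > "$MEM_ROOT/$f"
done
[ -f "$MEM_ROOT/docs/INDEX.md" ] || printf '# INDEX\n' > "$MEM_ROOT/docs/INDEX.md"
```

The guards matter: the scaffold must be idempotent, because hooks will re-touch these files constantly and must never clobber accumulated memory.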

lessons/ captures pitfalls from your conversations and records them as reusable experience files, so the AI is far less likely to repeat the same mistake twice.

First thing after every new conversation or Compact: read NOW.md to restore context.

With hooks maintaining it automatically, you’ll find the AI remembers what you did last week better than you do.


5. Automate with Hooks — Don’t Rely on Prompt Reminders

“Always do X” written in CLAUDE.md isn’t reliable — the model forgets, cuts corners, or gets overridden by context.

Hooks are shell scripts executed at the harness level. They don’t depend on the model remembering or deciding anything — they fire every single time, deterministically.

Claude Code supports 9 hook events, each a point where you can inject automation:

  • SessionStart — new session begins
  • SessionEnd — session ends
  • UserPromptSubmit — user hits enter, before the model sees the message
  • PreToolUse — model decides to call a tool, before execution
  • PostToolUse — tool finishes executing
  • Notification — system needs to notify the user (e.g., waiting for authorization)
  • Stop — model finishes a response, before stopping
  • SubagentStop — subagent finishes, before stopping
  • PreCompact — context is about to be compressed, long-term memory about to be lost
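Hooks are registered in .claude/settings.json (or ~/.claude/settings.json for global ones). The shape below matches Claude Code's hooks schema as of recent versions — verify field names against the current docs; guard.sh and update_now.sh are hypothetical script names:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/guard.sh" }
        ]
      }
    ],
    "SessionEnd": [
      {
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/update_now.sh" }
        ]
      }
    ]
  }
}
```

The matcher narrows which tool triggers the hook (here: only Bash calls), so noisy tools like Read don't spam your scripts.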

The key is matching “when should this happen” to “which hook.” A few I actually use:

  • Stop → voice announcement that this turn is done, with customizable voice and content, giving Claude Code actual personality
  • PreToolUse (Bash) → dangerous command interception — catches rm -rf /, git push --force and pops a confirmation first
  • PostToolUse (Write / Edit) → auto-run update_docs_index — every new document gets registered in the global doc index so other agents can find it
  • Stop → auto-save conversation log + run git status — what changed this turn at a glance; simultaneously runs extract_lessons async to distill reusable experience into long-term memory
  • SessionEnd → auto-update NOW.md — summarizes what happened this session so the next session can pick up immediately
  • PreCompact → auto-run update_user_preference — before context gets compressed and details are lost, extract newly revealed preferences, pitfalls, and corrections into long-term memory
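The dangerous-command interceptor from the list above can be sketched as a tiny script. It assumes Claude Code's documented hook contract — the tool input arrives as JSON on stdin, and exit code 2 blocks the call with stderr fed back to the model — and the patterns are just examples:

```shell
# guard: read the PreToolUse JSON payload on stdin and decide.
# Returns 2 (block, message on stderr goes back to the model)
# for obviously dangerous command patterns, 0 (allow) otherwise.
guard() {
  payload=$(cat)
  case "$payload" in
    *'rm -rf /'* | *'git push --force'* | *'--no-verify'*)
      echo "Blocked: dangerous command pattern detected" >&2
      return 2 ;;
  esac
  return 0
}

# In the actual hook script, end with:  guard; exit $?
```

A crude substring match like this is deliberately paranoid: a false positive costs you one confirmation, a false negative can cost you a repo.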

The mantra: “automatic behavior goes in hooks, flexible judgment goes in prompts.” Any requirement that’s “every time X, do Y” should never live in CLAUDE.md — put it in a hook. CLAUDE.md is rules. Hooks are execution.


6. Document-Driven Development, Not Chat-Driven

The beginner pattern: type one sentence → watch it write code → it goes sideways → ask it to fix → still sideways → close the terminal in frustration.

My workflow is three steps plus a final gate:

Design → Plan → Build → Manual Acceptance

Each step has its own job. Don’t mix them:

  • Design — high-level architecture, principles, module boundaries, data flow, ultimate goal. No specific steps.
  • Plan — execution plan based on the design: which files to change, in what order, risk points, rollback paths, how to validate each step.
  • Build — execute the plan faithfully. No improvisation, no “while I’m here” optimization.
  • Manual Acceptance — you’re the boss, not the QA engineer. You’re accepting a deliverable: open the app, check if all requirements are met, glance at the database and logs for obvious issues.

Every step gets a review gate, and the review is done automatically by Codex. Claude Opus is like the senior who knows everything and excels at planning and architecture, but often cuts corners on execution. Codex is better at code execution, debugging, and review. So I route reviews to Codex: design done → Codex review; plan done → Codex review; build done → Codex review; then you sign off. Three automated gates. You only make decisions between gates and sign off at the end.

What if acceptance fails? Don’t fix it yourself, and don’t just say “it’s broken, take a look.” You’re the boss. The boss’s job is to describe the problem clearly, state the requirements, and let the employee solve it. Paste the complete error message verbatim back into the conversation (don’t summarize, don’t paraphrase), add a scene description (what you clicked, what you expected, what actually happened), then give a clear instruction: “debug it yourself, fix it yourself, keep going until it passes, don’t stop until it works.”

Then watch it reproduce the issue, add logs, locate the root cause, fix it, and re-run. Your role is boss and referee, not pair programmer and QA.

Core principle: have the AI write what you’ve already thought through. Don’t have the AI think for you. It’s good at execution, not at defining problems. Your job is defining problems and accepting deliverables. Everything in between gets outsourced to Claude Code.


7. Use Subagents to Isolate Context

Your main conversation’s context window is your most expensive resource. Once it’s stuffed with 200 file reads, thousands of grep results, or half a PDF, the model gets noticeably dumber.

The core value of subagents isn’t just saving tokens — it’s that they run in a completely independent conversation, can’t see your main conversation’s history, and won’t be biased by your earlier reasoning. It’s like calling in a colleague who hasn’t been influenced by your thinking.

Three use cases cover 80% of your needs:

Outsource “noisy” work. Running tests, scanning logs, reading a pile of docs, searching the entire codebase — all of these generate massive intermediate data but you only need the conclusion. Hand it to a subagent. It runs in its own context and sends back a summary. Your main conversation stays clean.

Get an independent second opinion. Your own design, your own plan, your own code — reviewing your own work is nearly zero-value (it looked right when you wrote it, it still looks right when you review it). Spin up a subagent to review from scratch. It didn’t participate in the earlier discussion, so it catches things the main conversation can’t see. The Codex review from the previous point is essentially this pattern.

Run long tasks in the background and free up your waiting time. This is subagents’ most underrated use. Running a full test suite, building the entire project, crawling a batch of documents, doing a large refactor — these take minutes or even half an hour. You don’t need to watch the screen. Throw it to a background subagent, continue designing the next feature or editing another module in your main conversation. It’ll come back with results when it’s done. One person pushing 3 things forward simultaneously — that’s the real leverage agents give you. Press Ctrl+B to send the current task to the background at any time.


Start Using It, Come Back as Needed

After reading these 7 rules, you might feel the urge to set everything up at once — fill out CLAUDE.md, write a dozen skills, enable all hooks, build the full memory directory tree. And then run out of energy before you even start building anything.

Don’t do that.

These 7 rules are reference material, not a checklist. You don’t need all of them on day one. Start using it, run it on a real project, and come back when you hit a specific problem:

  • You keep repeating the same instructions → come back to rule 3, make a skill
  • The AI forgot what you told it last time → come back to rule 4, build a memory system
  • You keep manually repeating the same action → come back to rule 5, write a hook
  • The AI went off track three times and counting → come back to rule 6, set up the workflow
  • You’re frustrated waiting for a long task → come back to rule 7, send it to the background

Every configuration should be pulled out by a real pain point, not set up preventively. Pre-loading configurations just makes you tired and then you quit. Pull what you need when you need it, and the experience only gets smoother.

Trust me — after one serious month, you’ll have a capable super-engineer on your team. One who knows your preferences, remembers your projects, runs your workflows, and quietly finishes work while you sleep.

All you need to do is give them their first task today.