# Project Learnings
Manage project-specific operational knowledge that persists across Claude Code sessions. Learnings complement the memory system: memory stores cross-project user preferences, while learnings capture knowledge specific to one project.
## Storage

Learnings live in plain markdown files alongside the project's memory directory:

    ~/.claude/projects/{project-hash}/learnings/
    ├── index.md        # Quick-reference summary (briefing for new sessions)
    ├── patterns.md     # "This codebase does X because Y"
    ├── pitfalls.md     # "Don't do X, it breaks Y"
    ├── operational.md  # "Build requires Z, deploy needs W"
    └── decisions.md    # "We chose X over Y because Z"
Note: `/learn` is a skill trigger, not a registered slash command. Invoke it by saying "show learnings", "add a learning", etc. The index is not auto-loaded at session start. To load it automatically, add this line to your project's CLAUDE.md:

    @~/.claude/projects/{project-hash}/learnings/index.md
## Usage

- `show learnings` — Show the index (quick reference of all learnings)
- `add a learning` — Add a new learning interactively
- `search learnings <query>` — Search all learnings files for a term
- `prune learnings` — Check for stale entries (referenced files that no longer exist)
- `export learnings` — Export learnings as a block suitable for CLAUDE.md
- `learning stats` — Show counts by category (total + per file)
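Under the hood, `search learnings` can be sketched as a recursive grep. This is an assumption about the implementation, not the skill's actual code; `search_learnings` and `LEARNINGS_DIR` are hypothetical names standing in for the real trigger and path:

```shell
# Hypothetical sketch of "search learnings <query>".
# LEARNINGS_DIR stands in for ~/.claude/projects/{project-hash}/learnings/
search_learnings() {
  # -r recurse, -i case-insensitive, -n show line numbers
  grep -rin --include='*.md' -- "$1" "$LEARNINGS_DIR"
}

# Example: search_learnings redis
```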
## Adding a Learning

When adding a learning (either via `/learn add` or at the end of a session), classify it:
**pattern** — How this codebase does things. Conventions, architecture patterns, integration approaches that aren't obvious from reading the code.

    ## Build system uses turborepo with custom pipeline
    The monorepo runs `turbo build` but the order matters — `packages/db` must build before
    `apps/api` because it generates Prisma types. Running `turbo build --filter=apps/api`
    alone will fail with missing types.
**pitfall** — Things that break. Gotchas, footguns, things that look right but aren't.

    ## Don't run migrations in parallel
    The migration system doesn't handle concurrent migrations. If two CI jobs run
    `prisma migrate deploy` simultaneously, one will fail with a lock error and the
    migration state becomes inconsistent. Always serialize migration jobs.
**operational** — How to build, test, deploy, configure. Environment setup, required services, deployment quirks.

    ## Local dev requires Redis running
    The websocket service connects to Redis on startup. Without it, the dev server starts
    but all real-time features silently fail. Run `docker compose up redis` before
    `npm run dev`.
**decision** — Why we chose X over Y. Captures the reasoning so future sessions don't re-litigate settled decisions.

    ## Using Server Actions instead of Route Handlers for mutations
    Decided 2025-12-15. Server Actions give us progressive enhancement and automatic
    revalidation. Route Handlers would require manual cache invalidation. The trade-off
    is that Server Actions can't be called from external clients, but we don't need that.
## Entry Format

Each entry is an H2 heading with a description below it. Keep entries concise: a heading that captures the key insight and 2-4 lines of context. Include dates for decisions.

    ## Heading that captures the key insight
    Context explaining why this matters, what happens if you ignore it, and when it applies.
    Include file paths or commands where relevant.
## Index File

The index file (index.md) is a curated summary, not a dump of everything. Keep it under 50 lines. It should contain the most important learnings that any session should know about. Think of it as the "briefing" for a new session.
Format:

    # Project Learnings Index

    ## Key patterns
    - Build order matters: packages/db → packages/api → apps/web
    - All mutations use Server Actions, not Route Handlers

    ## Critical pitfalls
    - Never run migrations in parallel (lock corruption)
    - WebSocket service requires Redis running locally

    ## Operational
    - `docker compose up` before `npm run dev`
    - Deploy via `vercel --prod` from main branch only
## Session-End Reflection

At the end of a working session (when wrapping up, before the user leaves), reflect on what was learned:
- Did any CLI commands fail unexpectedly? (→ operational)
- Did we discover a codebase convention? (→ pattern)
- Did we hit a gotcha that wasted time? (→ pitfall)
- Did we make a significant technical decision? (→ decision)
If any of these apply, append to the appropriate file and update the index if the learning is important enough.
Don't log trivial things. A learning should save future sessions at least 5 minutes of confusion or prevent a real mistake.
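Appending an entry by hand follows the same shape as the Entry Format above. A minimal sketch, assuming a `LEARNINGS_DIR` variable points at the learnings directory (here it defaults to a temp directory so the snippet runs standalone; the real path is under `~/.claude/projects/`):

```shell
# LEARNINGS_DIR is a placeholder; in real use it points at
# ~/.claude/projects/{project-hash}/learnings/
LEARNINGS_DIR="${LEARNINGS_DIR:-$(mktemp -d)}"

# Append a pitfall in the documented entry format:
# an H2 heading plus a few lines of context
cat >> "$LEARNINGS_DIR/pitfalls.md" <<'EOF'

## Don't run migrations in parallel
The migration system doesn't handle concurrent migrations. Always
serialize migration jobs in CI.
EOF
```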
## Pruning

`prune learnings` checks each learning for staleness:

- File paths mentioned → do they still exist? (`ls <path>`)
- Commands mentioned → is the binary still available? (`which <cmd>`)
- Dependencies mentioned → are they still in package.json / requirements.txt?
- Decisions → has the approach been reversed since the decision date?
Mark stale entries with [STALE] prefix rather than deleting — they may contain
historical context worth keeping. Let the user decide whether to remove.
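The path and command checks can be sketched with the same `ls`/`which` probes mentioned above (`check_path` and `check_command` are hypothetical helper names, not part of the skill):

```shell
# Hypothetical staleness helpers mirroring the checks above.
# Silent when the reference is still valid; prints a [STALE] line otherwise.
check_path()    { ls "$1" >/dev/null 2>&1    || echo "[STALE] missing path: $1"; }
check_command() { which "$1" >/dev/null 2>&1 || echo "[STALE] missing command: $1"; }

check_path /etc/hosts                 # still exists: no output
check_command definitely-not-a-cmd    # flags a stale reference
```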
## Stats

`learning stats` counts H2 headings (entries) per file:

    Patterns: 8 entries
    Pitfalls: 5 entries
    Operational: 3 entries
    Decisions: 4 entries
    ─────────────────────────
    Total: 20 entries
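A sketch of how `learning stats` could derive those counts, assuming entries are exactly the `## ` headings in each category file (`learning_stats` and `LEARNINGS_DIR` are placeholder names):

```shell
# Sketch: count H2 entries per category file.
# LEARNINGS_DIR stands in for the real learnings directory.
learning_stats() {
  total=0
  for f in "$LEARNINGS_DIR"/*.md; do
    [ -f "$f" ] || continue
    name=$(basename "$f" .md)
    [ "$name" = "index" ] && continue   # the index is a summary, not entries
    n=$(grep -c '^## ' "$f")            # one entry per H2 heading
    printf '%s: %s entries\n' "$name" "$n"
    total=$((total + n))
  done
  printf 'Total: %s entries\n' "$total"
}
```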
## Export

`export learnings` renders all files as a single markdown block suitable for pasting into CLAUDE.md or sharing with another AI session. Example output:

    ## Project Learnings

    ### Patterns
    [contents of patterns.md]

    ### Pitfalls
    [contents of pitfalls.md]

    ### Operational
    [contents of operational.md]

    ### Decisions
    [contents of decisions.md]
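The export can be sketched as a concatenation with section headings, skipping files that don't exist yet (`export_learnings` and `LEARNINGS_DIR` are hypothetical names; the section order matches the example above):

```shell
# Sketch: render all learnings files as one markdown block.
# LEARNINGS_DIR stands in for the real learnings directory.
export_learnings() {
  echo "## Project Learnings"
  for name in Patterns Pitfalls Operational Decisions; do
    f="$LEARNINGS_DIR/$(printf '%s' "$name" | tr 'A-Z' 'a-z').md"
    [ -f "$f" ] || continue   # only export files that exist
    printf '\n### %s\n' "$name"
    cat "$f"
  done
}
```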
## Bootstrapping

The learnings directory is created on first use; don't create files speculatively. Only create the directory and the relevant category file when there's an actual learning to write.
The directory path is `~/.claude/projects/{project-hash}/learnings/`.

To find the project hash: take the absolute path of the working directory and replace every `/` with `-`. The leading `/` becomes a leading `-`, so the hash always starts with `-`. Example: `/Users/nick/src/myapp` → `-Users-nick-src-myapp`.
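The hash rule is mechanical enough to express as a one-liner (`project_hash` is a hypothetical helper name for illustration):

```shell
# Replace every / with - to derive the project hash from an absolute path
project_hash() {
  printf '%s' "$1" | tr '/' '-'
}

project_hash /Users/nick/src/myapp   # → -Users-nick-src-myapp
```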