# Context Tree Maintenance

## Purpose

Execute structured maintenance tasks on existing Context Tree documentation to ensure accuracy, relevance, and a high signal-to-noise ratio.

This skill is invoked by the `/maintain-context-tree` command.

## Three Maintenance Modes

- **Mode 1: Git Learning Analysis** - Analyze git history to detect Rule of Two violations (repeated patterns that indicate documentation opportunities)
- **Mode 2: Quality Audit** - Validate architectural claims against code, remove stale content, ensure accuracy
- **Mode 3: Health Check** - Quick validation of links, structure, file sizes, and overall health
## Core Principles (Non-Negotiable)

### 1. Verify Against Code, Not Docs

NEVER trust documentation - always verify claims against actual code.

- Read the actual implementation files
- Use Grep to confirm patterns exist
- Cross-check claims before documenting
- If you can't verify, mark the claim as "unverified"

Example:

```
Doc says: "Redis is used for sessions"
Verify:   grep -r "session" config/
Result:   Sessions use cookies, not Redis
Action:   Fix the doc immediately
```
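The verify-before-trust step above can be sketched as a small helper. This is a minimal illustration, not part of the skill; the `config/` directory and search terms in the example usage are assumptions to adapt per project.

```shell
# Hypothetical helper: check whether a doc claim's keyword actually
# appears anywhere under a given directory before trusting the doc.
verify_claim() {
  # Usage: verify_claim <dir> <pattern>
  local dir="$1" pattern="$2"
  if grep -rqi "$pattern" "$dir" 2>/dev/null; then
    echo "verified"
  else
    echo "unverified"
  fi
}

# Example: verify_claim config/ "redis"
# "unverified" means the doc claim has no footprint in the code - fix the doc.
```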
### 2. Signal-to-Noise Ratio is Everything

Every line must justify its token cost.

**Keep:**
- Institutional knowledge (what AI can't infer from code)
- Terminology traps (UI ≠ code ≠ DB terms)
- Architecture gotchas (security rules, multi-tenancy patterns)
- Common mistakes (what breaks repeatedly)

**Remove:**
- Generic framework explanations
- Code-derivable facts
- Verbose examples
- Obvious patterns

**Test:** Can you grep for it? Can LSP show it? Then don't document it.
### 3. Bad Context is Worse Than Bad Code

Incorrect documentation actively misleads and compounds errors.

- Bad code: can be debugged and fixed
- Bad context: silently trains AI wrong, compounds errors, erodes trust
- Fix errors IMMEDIATELY when discovered
- Remove outdated content rather than leaving it in place
### 4. Single Source of Truth

Architectural facts live in ONE place to prevent drift.

- Core architecture → docs/ARCHITECTURE.md
- Business workflows → docs/BUSINESS_CONTEXT.md
- Terminology → docs/GLOSSARY.md
- Common mistakes → CLAUDE.md (Common Pitfalls section)

**Before adding content:** Check whether it already exists elsewhere. Reference, don't duplicate.
## Mode 1: Git Learning Analysis

**Goal:** Detect repeated patterns in git history that violate the Rule of Two.

**Execution:**

1. Run git-learning-detector:
   `${CLAUDE_PLUGIN_ROOT}/git-learning-detector.sh --since=3.months`

2. Parse five signal types:
   - Repeated fix patterns (same issue, different files)
   - Defensive comments (IMPORTANT, DON'T, NEVER warnings added)
   - High-churn files (≥5 changes = confusion zone)
   - Terminology inconsistencies (multiple names for the same thing)
   - Explicit learning signals (TIL, learned, gotcha in commit messages)
3. For each high-priority pattern:
   - Verify: read the mentioned files to confirm the pattern
   - Check docs: search the existing Context Tree for duplicates
   - Assess value: would documenting this prevent a third occurrence?
4. Provide recommendations with draft text:

   ```
   HIGH PRIORITY: clientid filtering
   Pattern: 5 defensive comments about multi-tenant filtering
   Files: app/controllers/*.java (3 controllers)
   Last: 4 days ago
   Recommendation: Add to CLAUDE.md Common Pitfalls

   Draft:
   ❌ DON'T write queries without clientid filter
   ✅ DO always include: String clientid = request().host()
   Security critical: Missing clientid causes tenant data leakage

   Verified:
   - ✅ Pattern in 47 controller methods
   - ✅ 3 recent fixes
   - ✅ Security-critical

   Add now? (y/n)
   ```
5. Apply approved changes:
   - Read the target file
   - Edit to add the content in the appropriate section
   - Commit: `docs: add [pattern] from git analysis`

**Integration with git-learning-detector.sh:**

- Script is packaged with the plugin at `${CLAUDE_PLUGIN_ROOT}/git-learning-detector.sh`
- Supports `--since`, `--branch`, `--base`, and `--high-signal-only` flags
- Outputs structured findings (repeated fixes, defensive comments, churn, terminology, learning signals)
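One of the signals above, high-churn files, can be sketched directly from git history. The helper name, window, and threshold below are illustrative; the real detector script may implement this differently.

```shell
# Hypothetical sketch: list files changed at least <threshold> times
# in the given window (the "confusion zone" signal).
high_churn_files() {
  # Usage: high_churn_files [since] [threshold]
  local since="${1:-3 months ago}" threshold="${2:-5}"
  git log --since="$since" --name-only --pretty=format: \
    | sed '/^$/d' \
    | sort | uniq -c | sort -rn \
    | awk -v t="$threshold" '$1 >= t { print $2 }'
}

# Example: high_churn_files "3 months ago" 5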
## Mode 2: Quality Audit

**Goal:** Validate existing docs for accuracy and prune stale content.

**Execution:**

1. Audit CLAUDE.md:

   ```
   Checking CLAUDE.md...
   ✅ Under 200 lines (156)
   ✅ No generic explanations
   ⚠️ Claim: "Redis stores sessions" (line 78)
      Verifying... grep -r "session" config/
      ❌ Incorrect - cookies, not Redis
   Fix now? (y/n)
   ```
2. Audit docs/ files:
   - Read each file in docs/
   - For architectural claims: verify against actual code
   - For terminology: check actual usage in the codebase
   - For patterns: confirm they still exist

3. Check for duplicates:
   - The same fact in multiple places violates single-source-of-truth
   - Suggest consolidation

4. Prune low-value content:
   - Generic framework info → delete
   - Obvious code patterns → delete
   - Verbose examples without insight → delete
5. Summary:

   ```
   Quality Audit Complete
   Fixed: 1 incorrect claim
   Added: 1 missing terminology entry
   Removed: 0 sections (none needed pruning)
   Verified: 12 existing claims
   Commit? (y/n)
   ```
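The first audit check above (the CLAUDE.md line budget) is simple enough to sketch. The helper name and output format are illustrative assumptions; the 200-line target comes from the audit transcript.

```shell
# Hypothetical sketch: report whether a doc file fits its line budget.
check_line_budget() {
  # Usage: check_line_budget <file> [limit]
  local file="$1" limit="${2:-200}"
  local count
  count=$(wc -l < "$file" | tr -d ' ')
  if [ "$count" -le "$limit" ]; then
    echo "OK ($count lines)"
  else
    echo "OVER BUDGET ($count lines, limit $limit)"
  fi
}

# Example: check_line_budget CLAUDE.md 200
```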
## Mode 3: Health Check

**Goal:** Quick structural validation.

**Execution:**

1. File structure:

   ```
   ✅ CLAUDE.md exists
   ✅ docs/ directory present
   ✅ Working files in docs/context-tree-build/
   ```

2. Cross-references:

   ```
   ✅ All internal links valid (23 checked)
   ✅ All file references valid (15 checked)
   ```

3. Signal-to-noise:

   ```
   ✅ No generic content detected
   ✅ File sizes reasonable (<300 lines)
   ⚠️ ARCHITECTURE.md approaching 300 lines
   ```

4. Staleness:

   ```
   ⚠️ Last update 45 days ago
   Run git learning analysis for new patterns
   ```

5. Report:

   ```
   Health Check Complete
   Status: Good
   Issues: 0 critical, 2 warnings
   ```
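The cross-reference check above can be sketched as a link scan. This is a simplified illustration: the regex skips external URLs and in-page anchors, resolves targets relative to the docs root, and ignores reference-style links; a real checker would handle all of those.

```shell
# Hypothetical sketch: extract markdown link targets under <dir> and
# report any that don't resolve to an existing file.
check_links() {
  # Usage: check_links <docs-dir>
  local dir="$1"
  grep -rhoE '\]\([^)#]+\)' "$dir" 2>/dev/null \
    | sed 's/^](//; s/)$//' \
    | grep -vE '^https?://' \
    | while IFS= read -r target; do
        [ -e "$dir/$target" ] || echo "broken: $target"
      done
}

# Example: check_links docs/
```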
## Natural Guidance (Not a Mode)

When the user says something like:

"I just figured out webhooks use HMAC auth, not bearer tokens. Should we document this?"

Respond naturally:

Good catch! Let me verify against code...

[Reads src/webhooks/payment_webhook.js]

Confirmed - webhooks use HMAC (line 34), REST API uses bearer tokens.

This belongs in CLAUDE.md Common Pitfalls:
```markdown
### Authentication Patterns
❌ DON'T assume webhooks use same auth as REST API
✅ DO use HMAC signature for webhooks
- Webhooks: HMAC in X-Signature header
- REST API: Bearer token in Authorization
- File: src/webhooks/payment_webhook.js:34
```

Add now? (y/n)
**No special command needed** - just natural conversation guided by context tree principles.
---
## Common Rationalizations (Stop These)
❌ **"I'll verify this architectural claim later"**
→ NO. Verify NOW. Bad context is worse than bad code.
❌ **"This might be useful someday"**
→ NO. If it doesn't justify token cost NOW, delete it.
❌ **"I'll document this even though code shows it"**
→ STOP. Can you grep for it? Then don't document it.
❌ **"This might be outdated but I'll leave it"**
→ NEVER. Fix or delete immediately.
❌ **"I'll consolidate to a single source of truth later"**
→ STOP. Creating duplicates is worse than not documenting.
---
## Workflow Integration
**Invoked by:** `/maintain-context-tree` command
**User selects:**
1. Git Learning Analysis (Mode 1)
2. Quality Audit (Mode 2)
3. Health Check (Mode 3)
4. All (run 1, 2, 3 in sequence)
**This skill executes the selected mode(s) and reports results.**
---
## After Completion
1. **Commit changes** with clear messages
2. **Report summary** to user
3. **Remind about Rule of Two:** Document on 2nd occurrence
---
## Important Notes
- This skill operates on EXISTING Context Trees (for the initial build, use `/build-context-tree`)
- Quality gates prevent adding generic content
- All changes verified against code before applying
- Monthly maintenance recommended (run "all" option)
- Natural guidance available anytime for ad-hoc insights
---
**Execute the maintenance mode selected by the user.**