# Writing Revise
The revision loop for writing projects. Consumes .claude/REVIEW.md (produced by /writing-review) and applies targeted fixes, then completes the workflow when all issues are resolved.
## Shared Enforcement
Before any work, load the common constraints that apply to ALL writing phases:
Read("../../lib/references/writing-common-constraints.md")
This includes the Constraint Loading Protocol — you MUST load both the domain skill AND ai-anti-patterns before revising prose.
## Revise Flowchart (This IS the Spec)
START
│
├─ Step 1: Load context (ACTIVE_WORKFLOW, PRECIS, OUTLINE, drafts)
│
├─ Step 2: REVIEW.md exists?
│ ├─ NO → REFUSE. Suggest /writing-review. EXIT.
│ └─ YES → Parse issues (critical → major → minor)
│
├─ Step 3: Load constraint layers
│ ├─ Domain skill (legal/econ/general)
│ └─ ai-anti-patterns (universal)
│
├─ Step 4: Fix issues in priority order
│ ├─ 4a: Critical issues (argument-breaking)
│ ├─ 4b: Major issues (transitions, repetition, late introductions)
│ └─ 4c: Minor issues (polish)
│
├─ Step 5: Formatting check
│
└─ Step 6: Check iteration state (.claude/REVIEW_STATE.md)
│
├─ No issues remain → COMPLETE
│ └─ Archive workflow → Generate summary → EXIT
│
├─ iteration < 3 AND issues remain → CONTINUE
│ └─ Increment iteration → Re-invoke /writing-review → Loop
│
└─ iteration >= 3 AND issues remain → ESCALATE
└─ Report to user with options → EXIT
If text and flowchart disagree, the flowchart wins.
<EXTREMELY-IMPORTANT>
## IRON LAW: Critique Over Comfort

If the writing has problems, SAY SO. Being nice is NOT HELPFUL — the user publishes weak prose that gets rejected.
### Red Flag Detection
If you catch yourself thinking:
- "This is pretty good overall" - STOP. Find the weakness.
- "I don't want to be too harsh" - STOP. Harsh is kind.
- "The author probably knows what they're doing" - STOP. Check anyway. </EXTREMELY-IMPORTANT>
### Rationalization Table

| Excuse | Reality | Do Instead |
|--------|---------|------------|
| "It's a draft, I'll be gentle" | Drafts need MORE critique, not less | Critique hardest on drafts |
| "The main point is clear enough" | "Clear enough" means unclear | Find the ambiguity and fix it |
| "I'll focus on positives first" | Positives don't help improve writing | Lead with problems |
| "This is good enough for a first pass" | "Good enough" is reward hacking | Find specific problems to fix |
| "The user will polish it themselves" | Relying on human editing defeats the workflow | Fix it now |
| "I don't want to discourage the writer" | False kindness produces bad writing | Honest critique is the kindest act |
| "The argument flows well overall" | "Overall" hides section-level problems | Check each section against PRECIS claims |
| "Minor style issues aren't worth flagging" | Minor issues compound into unprofessional prose | Flag every issue you find |
| "The structure matches the outline" | Structural match doesn't mean quality match | Check content quality, not just structure |
| "This section is creative, rules don't apply" | Creative writing still needs clarity and precision | Apply rules, note creative exceptions explicitly |
Reporting "all checks pass" without actually running every check is NOT HELPFUL — uncaught issues survive into the final draft. You must have evidence for every checkmark. An unchecked box with "assumed OK" means the user publishes with undetected problems. </EXTREMELY-IMPORTANT>
<EXTREMELY-IMPORTANT>
## The Iron Law of Re-Review

NO "FIXED" CLAIMS WITHOUT FRESH RE-REVIEW. This is not negotiable.
After applying fixes from REVIEW.md, you MUST:
- Re-invoke `/writing-review` to regenerate REVIEW.md with fresh diagnostics
- Verify issues are actually resolved (not assumed)
- Check for new issues introduced by edits (regressions, new problems)
- Only THEN claim fixes are complete
"I fixed it" without re-reviewing is NOT HELPFUL — unverified fixes let broken prose reach the user.
### The Audit-Fix Loop (Max 3 Iterations)
Iteration 1: Review → REVIEW.md → Revise → Re-Review
↓
Iteration 2: Re-Review → REVIEW.md → Revise → Re-Review
↓
Iteration 3: Re-Review → REVIEW.md → Revise → Re-Review
↓
Still issues? → ESCALATE to user
All clean? → COMPLETE
Track iterations in .claude/REVIEW_STATE.md:
---
iteration: 1
max_iterations: 3
last_review_date: 2026-03-09
issues_found_count: 5
---
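The state file can be read (and seeded when missing) with a short shell sketch. The path, frontmatter format, and "create if missing with iteration: 1" rule come from this document; the exact commands are illustrative assumptions, not part of the spec:

```shell
# Read the current iteration from .claude/REVIEW_STATE.md,
# seeding the file at iteration 1 if it does not yet exist.
mkdir -p .claude
if [ ! -f .claude/REVIEW_STATE.md ]; then
  printf -- '---\niteration: 1\nmax_iterations: 3\n---\n' > .claude/REVIEW_STATE.md
fi
# Take the first "iteration:" key and stop scanning.
iteration=$(awk '/^iteration:/ {print $2; exit}' .claude/REVIEW_STATE.md)
echo "iteration=$iteration"
```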
Exit criteria:
- COMPLETE: Zero issues found in REVIEW.md
- ESCALATE: iteration >= 3 AND issues remain
- CONTINUE: iteration < 3 AND issues remain → re-invoke /writing-review
Before claiming "all fixed", check iteration count:
- READ `.claude/REVIEW_STATE.md` (create if missing with iteration: 1)
- If iteration >= 3 and issues remain: ESCALATE (don't say "run review again")
- If iteration < 3 and issues remain: INCREMENT iteration, re-invoke /writing-review
- If no issues: COMPLETE
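The exit criteria above are a pure function of the iteration count and the remaining issue count. A small shell helper (the function name is chosen for illustration) makes the mapping explicit:

```shell
# Verdict mapping from the exit criteria: zero issues wins outright,
# then the iteration count decides CONTINUE vs ESCALATE.
verdict() {
  iteration=$1
  issues=$2
  if [ "$issues" -eq 0 ]; then
    echo "COMPLETE"
  elif [ "$iteration" -lt 3 ]; then
    echo "CONTINUE"
  else
    echo "ESCALATE"
  fi
}

verdict 2 4   # prints CONTINUE
```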
Claiming "all issues resolved" without re-reviewing is NOT HELPFUL — the user trusts a false "all clear" and publishes with remaining problems.
### Rationalization Prevention (Re-Review)
| Thought | Reality | Do Instead |
|---------|---------|------------|
| "I fixed the issues from REVIEW.md" | Your fixes need verification | Re-invoke /writing-review |
| "Just spot-check the edited sections" | Spot-checks miss cascading changes | Full re-review via /writing-review |
| "We're on iteration 3, call it done" | Max iterations means ESCALATE, not approve | Report to user with remaining issues |
| "The edits are minor, skip re-review" | Minor edits create subtle problems | Re-review anyway |
| "We've spent enough time on this" | Publishing flawed writing wastes more time | Re-review or escalate |
| "The draft looks clean now" | Looking clean != being clean | Re-run /writing-review to verify |
### Why Skipping Re-Review Hurts the Thing You Care About Most
You skip re-review because you think it's helpful, efficient, or competent. Here's what actually happens:
| Your Drive | Why You Skip | What Actually Happens | The Drive You Failed |
|------------|--------------|----------------------|---------------------|
| Helpfulness | "Finishing fast helps the user move on" | Draft has remaining issues. User submits. Reviewers reject. The 15-minute re-review would have caught it. Your speed caused waste. | Anti-helpful |
| Competence | "I can tell the draft is clean" | Your eyes glaze over your own edits. Fresh review catches what you miss. The draft has 3 remaining issues. Your judgment failed. | Incompetent |
| Efficiency | "Re-review wastes time on fixed work" | The re-review takes 15 minutes. The rejection-and-rewrite takes 15 hours. Your "efficiency" was a 60x slowdown. | Anti-efficient |
| Approval | "User is tired of iterations" | User submits flawed draft. Reviewers reject. User now requires human editor oversight. You lost writing autonomy. | Lost approval |
| Honesty | "I fixed the issues from REVIEW.md" | You fixed THOSE issues but introduced new ones. The user publishes with new problems you created. | Anti-helpful |
The protocol is not overhead you pay. It is the service you provide.
Delivering a clean draft is the service. Feeling like you're done is not the service. The user experiences the quality of the submitted draft, not your sense of completion. </EXTREMELY-IMPORTANT>
## Why Skipping Hurts the Thing You Care About Most (Other Shortcuts)
| Shortcut | Consequence |
|---|---|
| Rewriting instead of targeted fix | You rewrote the section to "improve" it. You introduced new issues and lost the author's voice — your ambition was destructive. |
| Marking fixed without checking | You marked the issue resolved without re-reading. It's still there — the user trusts your false "fixed" status. |
## Red Flags — STOP If You Catch Yourself
| Action | Why Wrong | Do Instead |
|---|---|---|
| Rewriting entire sections instead of targeted fixes | Introduces new issues, loses author voice | Apply the minimum change that resolves the issue |
| Marking issue as fixed without verifying the draft changed | You're lying about completion | Re-read the draft passage after editing |
| Applying fix without re-reading surrounding context | Fix may break adjacent text | Read the paragraph before and after |
| Skipping domain skill load because you "remember the rules" | You don't remember — you're guessing | Read() the domain skill every time |
| Combining multiple unrelated fixes in one pass | Makes it impossible to verify each fix | One issue at a time, verify each |
## When to Use
- After `/writing-review` produces `.claude/REVIEW.md`
- When hook suggests it (after ~10 edits)
- Before finishing a writing project
## Prerequisites Gate
Before running edits, verify the workflow is ready:
- IDENTIFY: `.claude/ACTIVE_WORKFLOW.md`, `.claude/PRECIS.md`, `.claude/OUTLINE.md`, and at least one file in `drafts/` must exist
- RUN: Check file existence
- READ: Confirm ACTIVE_WORKFLOW shows `workflow: writing`
- VERIFY: All required files present and draft content exists
- CHECK FOR REVIEW.MD: Look for `.claude/REVIEW.md`
If any file is missing, report and suggest the appropriate phase:
- No PRECIS.md -> `/writing` (start from brainstorm)
- No OUTLINE.md -> writing-setup needed
- No drafts -> writing-draft needed
- No REVIEW.md -> suggest `/writing-review` first (see backward-compatibility below)
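The gate's existence checks can be run mechanically. This sketch uses only the paths listed above; the variable name and report format are illustrative assumptions:

```shell
# Check each required workflow file; collect anything missing.
missing=""
for f in .claude/ACTIVE_WORKFLOW.md .claude/PRECIS.md .claude/OUTLINE.md .claude/REVIEW.md; do
  [ -f "$f" ] || missing="$missing $f"
done
# drafts/ must contain at least one file.
ls drafts/ 2>/dev/null | grep -q . || missing="$missing drafts/*"
if [ -n "$missing" ]; then
  echo "Missing:$missing"
else
  echo "Prerequisites satisfied"
fi
```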
## Process

### Step 1: Load Context
Read(".claude/ACTIVE_WORKFLOW.md")
Read(".claude/PRECIS.md")
Read(".claude/OUTLINE.md")
Read([current draft files in drafts/])
If any file is missing, report and suggest starting with /writing.
### Step 2: Load REVIEW.md
<EXTREMELY-IMPORTANT>
#### Iron Law: NO REVISION WITHOUT REVIEW.md

NO REVISION WITHOUT REVIEW.md. This is not negotiable.
If .claude/REVIEW.md does not exist, REFUSE to proceed:
REVIEW.md not found. Cannot revise without a structured review diagnosis.
Run /writing-review first to produce .claude/REVIEW.md, then re-run /writing-revise.
STOP HERE. Do not fall back to inline review. Do not offer to "do a quick check instead."
Why: Inline review is shallow by design — it misses cross-section issues, transition problems, and thesis drift that only hierarchical review catches. Allowing a fallback path means the full review is never run. The review-then-revise pipeline exists because revision without diagnosis produces random edits, not targeted fixes.
| Excuse | Reality | Do Instead |
|--------|---------|------------|
| "I can do a quick inline review" | Inline review misses structural issues | Run /writing-review |
| "The user just wants small fixes" | Small fixes without review context create new issues | Run /writing-review first |
| "REVIEW.md will be generated anyway later" | Later never comes — the user thinks revision is done | Require it now |
</EXTREMELY-IMPORTANT>
When REVIEW.md exists:
Read(".claude/REVIEW.md")
Parse the review into:
- Critical issues -- fix first, these break the argument
- Major issues -- fix second, these weaken the document
- Minor issues -- fix last, these polish the prose
### Step 3: Load Constraint Layers
The midpoint must be self-contained. Load ALL constraint layers before touching the draft:
#### 3a: Load Domain Skill
Based on style in ACTIVE_WORKFLOW.md:
| Style | Load |
|-------|------|
| legal | Read("../../lib/skills/writing-legal/SKILL.md") (relative to this skill's base directory) |
| econ | Read("../../lib/skills/writing-econ/SKILL.md") |
| general | Read("../../lib/skills/writing-general/SKILL.md") |
You MUST Read() the domain skill before editing. The domain skill contains the full rules, reference material, and enforcement patterns. Editing without it produces generic fixes.
#### 3b: Load Universal Constraints
Skill(skill="workflows:ai-anti-patterns")
You MUST load ai-anti-patterns before editing. This catches AI writing smell (hedging, filler, false balance, weasel words) that domain skills don't cover. Revising without it means you'll fix structural issues while leaving AI-smell intact — the reviewer will flag the same draft again for different reasons.
<EXTREMELY-IMPORTANT>
### Iron Law: Full Constraint Loading

NO REVISION WITHOUT ALL CONSTRAINT LAYERS. This is not negotiable.
The midpoint cannot rely on constraints loaded during earlier phases. Prior context may be compressed or lost. You must load:
- ACTIVE_WORKFLOW.md → workflow state
- PRECIS.md, OUTLINE.md → structural intent
- Domain skill → domain-specific rules
- ai-anti-patterns → universal writing quality
Editing with only domain skill loaded is like reviewing with one eye closed. You'll fix half the problems and miss the other half. </EXTREMELY-IMPORTANT>
### Step 4: Fix Issues from REVIEW.md
Work through REVIEW.md issues in priority order:
#### 4a: Critical Issues First
For each critical issue in REVIEW.md:
- Read the cited location in the draft
- Understand the diagnosis
- Apply the suggested fix (or a better one if you see it)
- Verify the fix resolves the issue without creating new problems
#### 4b: Major Issues
For each major issue:
- Read the cited location
- Apply fix
- Verify
Transition fixes (from REVIEW.md "Transition Issues" section):
- Read the boundary summaries for context
- Write bridge text that connects Section N's closing to Section N+1's opening
- Ensure the bridge advances the argument, not just changes the topic
Repetition fixes (from REVIEW.md "Cross-Section Repetition"):
- Decide which section should own the point
- Remove or differentiate the duplicate
- Ensure removing the duplicate doesn't leave a gap
Late introduction fixes (from REVIEW.md "Concept Introduction Order"):
- Add foreshadowing in the Introduction or earlier section
- Or restructure to move the concept's first substantive use earlier
#### 4c: Minor Issues
For each minor issue:
- Apply fix
- Quick verify
### Step 5: Formatting Check
- [ ] Consistent heading styles
- [ ] Citations formatted (Bluebook for legal, journal style for econ)
- [ ] Footnotes properly numbered (if applicable)
- [ ] No orphaned references
### Step 6: Check Iteration State and Generate Report
Before claiming completion, check the audit-fix loop state:
1. READ `.claude/REVIEW_STATE.md` - what iteration are we on?
2. Run final check - are there remaining issues?
3. Determine verdict based on iteration + issues:
- iteration < 3 AND issues remain → CONTINUE (re-invoke /writing-review)
- iteration >= 3 AND issues remain → ESCALATE (report to user)
- no issues → COMPLETE
Generate report based on verdict:
#### If CONTINUE (iteration < 3, issues remain)
Update .claude/REVIEW_STATE.md:
---
iteration: [N+1]
max_iterations: 3
last_review_date: [date]
issues_found_count: [count]
verdict: CONTINUE
---
IMMEDIATELY re-invoke /writing-review (no pause, no user prompt):
Read("../../lib/skills/writing-review/SKILL.md")
After /writing-review completes and regenerates REVIEW.md, /writing-revise will be invoked again automatically.
This is a loop, not a checkpoint. Do not pause for user input.
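The iteration bump in the CONTINUE branch can be done with standard tools. This demo seeds a sample state file first, so the only assumptions carried over from the spec are the path and frontmatter format (note that `sed -i` below is the GNU form; BSD sed needs `sed -i ''`):

```shell
# Demo: seed a sample REVIEW_STATE.md at iteration 1, then bump it to 2.
mkdir -p .claude
printf -- '---\niteration: 1\nmax_iterations: 3\nverdict: CONTINUE\n---\n' > .claude/REVIEW_STATE.md

# Read the current count, then rewrite the line with count + 1.
n=$(awk '/^iteration:/ {print $2; exit}' .claude/REVIEW_STATE.md)
sed -i "s/^iteration: .*/iteration: $((n + 1))/" .claude/REVIEW_STATE.md

grep '^iteration:' .claude/REVIEW_STATE.md   # prints: iteration: 2
```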
#### If ESCALATE (iteration >= 3, issues remain)
Update .claude/REVIEW_STATE.md:
---
iteration: 3
max_iterations: 3
last_review_date: [date]
issues_found_count: [count]
verdict: ESCALATE
---
Report to user:
Writing Review Loop Escalation (3 iterations completed)
After 3 review-revise cycles, [N] issues remain:
[List issues from REVIEW.md]
Options:
1. Accept current draft with documented limitations
2. Extend review (manual approval for iteration 4+)
3. Rethink structure (return to outline phase)
4. Human editing (exit workflow, manual fixes)
Which option do you prefer?
#### If COMPLETE (no issues found)
Update .claude/REVIEW_STATE.md:
---
iteration: [N]
max_iterations: 3
last_review_date: [date]
issues_found_count: 0
verdict: COMPLETE
---
Archive workflow state:
mkdir -p .claude/completed-workflows
mv .claude/ACTIVE_WORKFLOW.md ".claude/completed-workflows/$(date +%Y-%m-%d)-writing.md"
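A guarded variant of the same archive step (same paths, same behavior when ACTIVE_WORKFLOW.md exists) avoids a noisy `mv` error if the file is already gone; the guard itself is an assumption for robustness, not part of the workflow spec:

```shell
# Archive the workflow file only if it exists; mkdir -p is idempotent.
mkdir -p .claude/completed-workflows
if [ -f .claude/ACTIVE_WORKFLOW.md ]; then
  mv .claude/ACTIVE_WORKFLOW.md ".claude/completed-workflows/$(date +%Y-%m-%d)-writing.md"
fi
```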
Generate completion summary:
## Writing Workflow Complete
**Project**: [directory name]
**Completed**: [date]
**Style**: [legal | econ | general]
### Artifacts
- `.claude/PRECIS.md` - Thesis, audience, claims
- `.claude/OUTLINE.md` - Document structure
- `.claude/REVIEW.md` - Final review diagnosis
- `outlines/` - Detailed section outlines
- `drafts/` - Final prose
### Document Summary
- **Thesis**: [from PRECIS.md]
- **Sections**: [count]
### Next Steps
- Export to Word: `/docx`
- Export to PDF: `/pdf`
- Start new project: `/writing`