Agent Skills: Dev Workflow

Guided development workflow that orchestrates plan → review → implement → check/test → code review → rules update. Use this skill whenever the user wants to develop a feature, fix a bug, refactor code, or make any code changes following a structured process — even if they don't explicitly mention "workflow" and simply describe what they want built or fixed.

ID: hiroro-work/claude-plugins/dev-workflow

Install this agent skill to your local environment:

pnpm dlx add-skill https://github.com/hiroro-work/claude-plugins/tree/HEAD/skills/dev-workflow

Skill Files

skills/dev-workflow/SKILL.md

Skill Metadata

Name
dev-workflow
Description
Guided development workflow that orchestrates plan → review → implement → check/test → code review → rules update. Use this skill whenever the user wants to develop a feature, fix a bug, refactor code, or make any code changes following a structured process — even if they don't explicitly mention "workflow" and simply describe what they want built or fixed.

Dev Workflow

Usage

/dev-workflow --init                             # Project setup (detect check/test commands)
/dev-workflow [-i N | --iterations N] <task>    # Execute workflow (default)
/dev-workflow --resume <state-file> [-i N]      # Resume next subtask from a decomposition state file

Prerequisites

  • Reviewer skill (reviewer setting, default: ask-peer): Required for plan/code review. Supported: ask-peer, ask-claude, ask-codex, ask-gemini, ask-copilot. If the configured skill is unavailable, ask user directly instead.
  • rules-review skill: Required for rules compliance review (Step 7.5). If unavailable, skip Step 7.5 with message.
  • extract-rules skill: Required for rule update. If unavailable, skip with message.

Configuration

Settings files (YAML frontmatter only, merged across layers):

  1. ~/.claude/dev-workflow.local.md — User global defaults (lowest priority)
  2. .claude/dev-workflow.md — Project shared settings (git tracked, team-shared)
  3. .claude/dev-workflow.local.md — Personal overrides (gitignored, highest priority)

Merge strategy per key type:

  • Scalar (reviewer, review_iterations, task_decomposition, custom_instructions): higher layer wins (replaces)
  • List (check_commands): append — lower-layer items first, then higher-layer items, duplicates removed (keep first occurrence). test_commands is excluded from merge — always fixed to ["Skill(run-tests)"]
  • hooks: deep-merge at the hooks level — each sub-key (on_complete) is merged as a list (append, deduplicated)

Keys absent from a higher layer inherit from lower layers. Only specify keys you want to override or extend.
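The layered merge can be sketched as follows (an illustrative Python sketch; the function and variable names are my own, not part of the skill):

```python
def overlay(merged, layer):
    """Overlay one settings layer onto the accumulated result (low -> high priority)."""
    for key, value in layer.items():
        if value is None or value == [] or value == {}:
            merged.pop(key, None)  # explicit null/empty clears the key
        elif key == "check_commands":
            # append, then deduplicate keeping the first occurrence
            merged[key] = list(dict.fromkeys(merged.get(key, []) + value))
        elif key == "hooks":
            hooks = merged.setdefault(key, {})
            for sub, entries in value.items():  # e.g. on_complete
                hooks[sub] = list(dict.fromkeys(hooks.get(sub, []) + entries))
        else:
            merged[key] = value  # scalar: higher layer replaces

user_global = {"reviewer": "ask-claude", "check_commands": ["pnpm run lint:fix"]}
project = {"reviewer": "ask-peer", "check_commands": ["pnpm run typecheck"]}
personal = {"review_iterations": 2}

merged = {}
for layer in (user_global, project, personal):
    overlay(merged, layer)
merged["test_commands"] = ["Skill(run-tests)"]  # always fixed, excluded from merge
# reviewer is replaced by the higher layer, check_commands are appended in order
```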

---
reviewer: "ask-peer"
review_iterations: 3
task_decomposition: true
custom_instructions: "Always use TDD. Write tests before implementation."
check_commands:
  - "pnpm run lint:fix"
  - "pnpm run format"
  - "pnpm run typecheck"
test_commands:
  - "Skill(run-tests)"
hooks:
  on_complete:
    - "Skill(work-complete)"
---
  • reviewer: Reviewer skill name (default: ask-peer). Choose from: ask-peer, ask-claude, ask-codex, ask-gemini, ask-copilot. Unsupported values fall back to ask-peer
  • review_iterations: Max iterations for Plan Review (Step 3) and Code Review (Step 8) (default: 3, must be a positive integer). Can be overridden per invocation with -i N / --iterations N
  • task_decomposition: Whether Step 1.5 runs the auto-decomposition check in Normal sub-mode (default: true). Set to false to treat Normal sub-mode requests (/dev-workflow <task>) as single tasks — Step 1.5 is omitted from TodoWrite and the decomposition judgment is skipped entirely. --resume <state-file> is unaffected and still executes existing state files. Non-boolean values fall back to true with a warning
  • custom_instructions: Free-form development instructions applied as guiding principles across planning, implementation, review, and simplification (e.g., "Always use TDD", "Prefer functional style"). Optional. .claude/rules/ and explicit user requests take precedence if they conflict
  • check_commands: Static checks (lint, format, typecheck, etc.). Always run all in order
  • test_commands: Always ["Skill(run-tests)"]. The run-tests skill is generated by --init and handles test execution via subagent (see Step 7). Run --init to generate or update the skill
  • hooks: Execute skills/commands at specific workflow timing points
    • on_complete: Runs after Step 9. Entry format: Skill(<name>) or shell command string
    • Entries not covered by allowed-tools require user approval

Mode Detection

  • --init → Init Mode (-i / --iterations is ignored)
  • --resume <state-file> → Execution Mode (Resume sub-mode; see Step 1.5)
  • Otherwise → Execution Mode (Normal sub-mode)
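The dispatch above amounts to a few lines (an illustrative Python sketch; the actual skill performs this check in natural language):

```python
def detect_mode(args):
    """Return (mode, sub_mode) from the raw argument list."""
    if "--init" in args:
        return ("init", None)           # -i / --iterations is ignored in this mode
    if "--resume" in args:
        return ("execution", "resume")  # the next token is the state file
    return ("execution", "normal")
```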

Init Mode

Read references/init-mode.md and follow the procedure.

Note: Skills generated by --init (e.g. run-tests) are recognized from the next session onward. Do not run /dev-workflow <task> in the same session as --init.


Execution Mode

Step 1: Load Settings

  1. Read settings from up to three layers and merge (type-aware):
    merged = {}
    if ~/.claude/dev-workflow.local.md exists:  overlay its frontmatter onto merged
    if .claude/dev-workflow.md exists:          overlay its frontmatter onto merged
    if .claude/dev-workflow.local.md exists:    overlay its frontmatter onto merged
    
    "Overlay" = for each key present in the file:
    • Scalar keys: merged[key] = file[key] (replace)
    • List keys (check_commands): append file[key] items after merged[key], then deduplicate (keep first occurrence)
    • hooks: deep-merge — for each sub-key (e.g. on_complete), append and deduplicate the list
    • null or empty ([], {}) explicitly clears the key — lower-layer value is discarded, not inherited
    • Key absent from the file: left untouched (inherit from lower layers)
    If a file's YAML frontmatter is malformed (parse error), warn the user naming the file, skip that layer, and continue with the remaining layers.
  2. If none of the three files exist, prompt user to run /dev-workflow --init and stop
  3. Resolve reviewer from config. If not specified or not in the supported list (ask-peer, ask-claude, ask-codex, ask-gemini, ask-copilot), use ask-peer
  4. Resolve N (review iteration count):
    1. If -i / --iterations option is present and is a positive integer, use it
    2. Else if config review_iterations is present and is a positive integer, use it
    3. Else use default 3
  5. Parse hooks from config. Warn and ignore if hooks.on_complete has invalid format. Parse custom_instructions from config (optional, string). Warn and ignore if not a string. Parse task_decomposition from config (optional, boolean, default true). Warn and fall back to true if present but not a boolean
  6. Determine execution sub-mode: Resume if --resume <state-file> was provided, otherwise Normal. Step 1.5 branches on this
  7. Register all workflow phases with TodoWrite, including review iterations. Do NOT skip any phase:
    • Step 1.5: Task Decomposition (Normal sub-mode only, AND only when task_decomposition is true — omit this entry entirely in Resume sub-mode or when task_decomposition is false, since in either case the step has nothing to do at registration time)
    • Step 2: Create Plan
    • Step 3: Plan Review
    • Step 3-1 through Step 3-N: Plan Review - iteration 1 through N (generate N items based on resolved N)
    • Step 4: Finalize Plan
    • Step 5: Implement
    • Step 6: Simplify
    • Step 7: Check / Test [check: {check_commands} | test: {test_commands}]
    • Step 7.5: Rules Compliance Review
    • Step 8: Code Review
    • Step 8-1 through Step 8-N: Code Review - iteration 1 through N (generate N items based on resolved N)
    • Step 9: Update Rules
    • Step 10: Completion Hooks (only if hooks.on_complete is configured)
    Mark each item in_progress when starting and completed when done. Registering all phases upfront gives the user visibility into overall progress and prevents steps from being accidentally dropped. Implementation sub-tasks in Step 5 are additions, not replacements. Note: unless -i / --iterations was explicitly specified, Step 2 may reduce N based on task difficulty.

Step 1.5: Task Decomposition

This step decides whether the user's request should be split into multiple smaller subtasks (each delivered as its own PR), or — in Resume sub-mode — picks the next subtask from an existing state file under .claude/plans/dev-workflow.<slug>.md.

State-file semantics are critical (a malformed or mis-routed file silently corrupts subtask boundaries), so the full procedure lives in a dedicated reference. Dispatch:

  • Resume sub-mode (--resume <state-file> was provided): read references/task-decomposition.md and follow section A. Resume sub-mode from top to bottom.
  • Normal sub-mode + task_decomposition: true (the default): read references/task-decomposition.md and follow section B. Normal sub-mode.
  • Normal sub-mode + task_decomposition: false: no decomposition work. Set the "effective task" to the original request and proceed to Step 2 without creating a state file. Step 1.5 is not in TodoWrite in this case (see Step 1), so there is nothing to mark completed. You do not need to read the reference file.

EnterPlanMode is reserved for Step 2 — any decomposition proposal in Step 1.5 is a plain yes/no dialogue, not a plan.

After section A or B completes, the "effective task" is set for Step 2 onward: the selected subtask when decomposed, otherwise the original request.

Step 2: Create Plan

  1. Record the current commit as base-commit (git rev-parse HEAD) for later diff comparison
  2. EnterPlanMode
  3. Analyze the task and codebase, create implementation plan. Apply custom_instructions to shape plan priorities and structure (must include test plan: what to test, test types, scope, and which existing test files to update or new test files to create — or justification why no tests are needed)
    • If a state file exists (this run is executing one subtask of a decomposed parent): the "effective task" = the current in_progress subtask. Frame the plan around just this subtask while keeping the full parent task and other subtasks as background context so the plan stays consistent with the overall direction. Do not plan work belonging to other subtasks
  4. No code changes in this phase
  5. Adjust N by difficulty (skip if -i / --iterations was explicitly specified): A typo fix doesn't need 3 rounds of review. Based on the plan just created, assess task difficulty and reduce N to avoid unnecessary iterations — the configured value is a ceiling, not a target:
    • Simple (typo fix, config tweak, straightforward bug fix with obvious solution): N = 1
    • Moderate (multi-file within one module, feature following existing patterns): N = min(2, N)
    • Complex (cross-module, new patterns, API changes, significant refactoring): keep N
    File count is a hint, not the sole criterion. If adjusted, mark excess TodoWrite iteration items (Step 3-x and Step 8-x) as completed. Log the assessed difficulty and effective N.
  6. Do not present the plan to the user or ask for approval/confirmation — presenting an unreviewed plan wastes user time and risks approval of a suboptimal approach. Proceed to Step 3. The user will see the reviewed plan in Step 4.
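The difficulty adjustment in sub-step 5 can be sketched as follows (an illustrative Python sketch; the difficulty labels mirror the list above):

```python
def effective_iterations(configured_n, difficulty, explicit=False):
    """Reduce review iterations by assessed difficulty; the configured value is a ceiling."""
    if explicit:                  # -i / --iterations was given: never adjust
        return configured_n
    if difficulty == "simple":    # typo fix, config tweak, obvious bug fix
        return 1
    if difficulty == "moderate":  # multi-file within one module, existing patterns
        return min(2, configured_n)
    return configured_n           # complex: keep N
```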

Step 3: Plan Review

This step is an internal review — the reviewer refines the plan before the user sees it, so the user receives a higher-quality plan in Step 4. Do not present the plan to the user or ask for feedback during this step.

Mark Step 3: Plan Review as in_progress. Process each pending iteration item (Step 3-1 through 3-N) in order:

  1. Mark the iteration item as in_progress. Call the reviewer skill resolved in Step 1 (e.g. Skill(ask-peer)): Review the plan.
    • Instruct reviewer to read all files under .claude/rules/ for project conventions
    • Request feedback organized into three categories:
      a. Scope & feasibility: scope appropriateness, dependencies, risks, .claude/rules/ compliance
      b. Approach & alternatives: simpler methods, architectural fit with existing code
      c. Completeness: edge cases, error handling, test plan adequacy (verify specific test files are identified and existing related tests are covered for update)
    • If custom_instructions is configured, include the instructions text in the review request and have the reviewer verify alignment and report conflicts
    • Reviewer should only report actionable findings. If none, explicitly state "No actionable findings"
  2. If reviewer returned "No actionable findings": mark this and remaining iteration items as completed (skip). Mark Step 3: Plan Review as completed and proceed to Step 4.
  3. Otherwise: autonomously apply improvements or reject inapplicable points with reason — do not ask the user for judgment on individual review findings. Mark this iteration item as completed.
    • If the plan was modified: continue to the next pending iteration item (back to step 1). Plan modifications often introduce new gaps or ripple effects that the previous reviewer had no chance to see — the re-review round-trip is cheap compared to shipping a plan that looks fine to the author but has an unvetted section. Don't short-circuit even when the fixes feel airtight
    • If all points were rejected (no modifications): mark remaining iteration items as completed (skip — there is nothing new for the next reviewer to look at)
    When continuing to the next pending iteration item, include:
    • the updated plan
    • a summary of changes made and rejections with reasons
    • the same three-category structure, .claude/rules/ reference, and "No actionable findings" requirement
  4. If all N iteration items are completed and actionable feedback still remains, carry the unresolved points forward to Step 4.

Mark Step 3: Plan Review as completed.
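Steps 3 and 8 share the same iteration shape; a sketch (an illustrative Python sketch, where request_review and apply_findings are hypothetical stand-ins for the reviewer call and the apply-or-reject pass):

```python
def review_loop(n, request_review, apply_findings):
    """Run up to n review iterations; stop early on a clean review or full rejection."""
    findings = []
    for _ in range(n):
        findings = request_review()          # reviewer skill, e.g. Skill(ask-peer)
        if not findings:                     # "No actionable findings": skip the rest
            return []
        modified = apply_findings(findings)  # apply or reject autonomously
        if not modified:                     # all points rejected: nothing new to review
            return []
    return findings  # unresolved points are carried forward (Step 4 / user decision)
```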

Step 4: Finalize Plan (USER APPROVAL GATE)

  1. This is the first time the user sees the plan. Present the reviewed plan to the user (include any unresolved review points from Step 3)
  2. Collaborate with the user to refine the plan as needed (normal Plan Mode interaction). If the user requests material changes to scope or approach, add a new review iteration item (e.g. Step 3-(N+1)) and return to Step 3 to process it before asking for acceptance.
  3. Wait for explicit user acceptance. After the user accepts, ExitPlanMode and begin implementation

Step 5: Implement

  1. Follow the plan, track progress with TodoWrite. Apply custom_instructions throughout implementation

Step 6: Simplify

Implementation often introduces unnecessary complexity that's easier to spot in a dedicated pass after the code works.

  1. Skill(simplify): Review changed code for reuse, quality, and efficiency, then fix any issues found. Pass custom_instructions as constraints for simplification

Step 7: Check / Test (max 3 retries)

  1. Run check_commands in order (always run all)
    • On failure, fix and retry (do not proceed to test execution)
  2. Run Skill(run-tests) with --base-commit <sha> (from Step 2) via $ARGUMENTS
    • The skill handles scope decision and test execution internally via subagent
    • Returns structured summary: SUCCESS / TEST_FAILED / EXECUTION_ERROR
  3. After 3 retries, report to user and stop
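The retry loop can be sketched as follows (an illustrative Python sketch; run, run_tests, and apply_fixes are hypothetical stand-ins injected in place of the real command executors):

```python
def check_and_test(check_commands, run, run_tests, apply_fixes, max_retries=3):
    """Run static checks then tests, fixing and retrying up to max_retries times."""
    for _ in range(max_retries):
        failed = [cmd for cmd in check_commands if not run(cmd)]  # always run all, in order
        if failed:
            apply_fixes(failed)
            continue                  # do not proceed to tests while checks fail
        result = run_tests()          # Skill(run-tests) --base-commit <sha>
        if result == "SUCCESS":
            return True
        apply_fixes([result])         # TEST_FAILED / EXECUTION_ERROR
    return False                      # after max_retries: report to user and stop
```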

GATE: Verify Steps 2-7 are completed (check TodoWrite status; if status is inconsistent, verify actual completion by reviewing work done). Mark Step 7.5 as in_progress.

Step 7.5: Rules Compliance Review

Dedicated rules compliance check, separate from code review (Step 8). This ensures rule enforcement gets focused attention rather than competing with correctness and design concerns.

  1. Skill(rules-review) with --base-commit <sha> (base-commit recorded in Step 2) via $ARGUMENTS
  2. If result indicates no violations (e.g., "All rules compliant", "No applicable rules for changed files", "No changed files", "No rule files found"): mark completed, proceed to Step 8
  3. If violations found:
    a. Fix all reported violations
    b. Re-run Step 7 (Check / Test) to ensure fixes did not break anything
    c. Re-run Skill(rules-review) with --base-commit <sha> for verification (2nd cycle)
    d. If violations persist after 2 cycles, present remaining violations to the user for decision. Wait for the user's response before marking completed.

Mark Step 7.5: Rules Compliance Review as completed only after all violations are resolved or user has decided on remaining violations.

GATE: Verify Steps 2-7.5 are completed (check TodoWrite status; if status is inconsistent, verify actual completion by reviewing work done). Mark Step 8 as in_progress.

Step 8: Code Review

Code review catches bugs, convention violations, and design issues that tests alone miss — skipping it risks shipping preventable defects. Always run this step even when tests pass cleanly.

Mark Step 8: Code Review as in_progress. Process each pending iteration item (Step 8-1 through 8-N) in order:

  1. Mark the iteration item as in_progress. Call the reviewer skill resolved in Step 1 (e.g. Skill(ask-peer)): Review code changes.
    • Include git diff <base-commit> (base-commit recorded in Step 2) to capture all changes since workflow start
    • Thorough rules compliance has been verified in Step 7.5, but instruct reviewer to also flag any obvious .claude/rules/ violations as a safety net — especially for code modified after Step 7.5
    • Request feedback organized into three categories:
      a. Correctness & edge cases: bugs, error handling gaps, race conditions, missing validations, missing or insufficient tests for changes (verify planned test files from Step 2 are present in the diff)
      b. Conventions & consistency: naming, file structure, patterns, .claude/rules/ compliance (lightweight check — Step 7.5 handles the thorough review)
      c. Simplicity & maintainability: unnecessary complexity, duplication, unclear abstractions
    • If custom_instructions is configured, include the instructions text in the review request and have the reviewer verify compliance and report conflicts
    • Reviewer should only report actionable findings. If none, explicitly state "No actionable findings"
  2. If reviewer returned "No actionable findings": mark this and remaining iteration items as completed (skip). Mark Step 8: Code Review as completed and proceed to Step 9.
  3. Otherwise: autonomously fix genuine issues or reject inapplicable points with reason — do not ask the user for judgment on individual review findings. Mark this iteration item as completed.
    • If code was modified: re-run Step 7 and Step 7.5 (with same base-commit from Step 2), then continue to the next pending iteration item (back to step 1). Code fixes routinely introduce fresh bugs, tighten one place while loosening another, or miss a caller the author didn't know about — the next review round is how those leaks get caught. Don't short-circuit based on confidence in the fix itself
    • If all points were rejected (no modifications): mark remaining iteration items as completed (skip — there is nothing new for the next reviewer to look at)
    When continuing to the next pending iteration item, include:
    • the latest git diff <base-commit>
    • a summary of fixes made and rejections with reasons
    • the same three-category structure, .claude/rules/ reference, and "No actionable findings" requirement
  4. If all N iteration items are completed and actionable feedback still remains, present the unresolved points to user for decision.

Mark Step 8: Code Review as completed.

Step 9: Update Rules

  1. Skill(extract-rules) with --from-conversation (always)
  2. Skill(extract-rules) with --update (only if significant structural/pattern changes occurred)
  3. If extract-rules is unavailable, skip this step and inform user

Step 10: Completion Hooks

Skip this step if hooks.on_complete is not configured. Mark Step 10: Completion Hooks as in_progress.

  1. Execute each entry in hooks.on_complete in order:
    • Skill(<name>) pattern: invoke the skill
    • Other strings: execute as a Bash command
  2. If a hook fails, report the error but continue executing remaining hooks. Include as warnings in the Completion summary
  3. After all hooks complete (or are skipped), mark Step 10: Completion Hooks as completed and proceed to Completion

Completion

Report summary: tasks completed, files modified, test results, review outcomes, rules updated.

If this run was executing a subtask from a decomposition state file, also do the following (all reads/writes target the canonical state-file path recorded in Step 1.5):

  1. Mark the current subtask's status as completed in the canonical state file and write back
  2. Ask the user for an optional PR URL for this subtask. On a non-empty answer, set the subtask's pr field and write back; otherwise leave it null
  3. Refresh the parent-task TodoWrite row's <done>/<total> count
  4. Find the next runnable subtask (smallest-id pending with all depends_on completed)
  5. If a next subtask exists: tell the user to commit the current subtask's changes and open a PR before resuming, then start a new session with /dev-workflow --resume <slug>. Explain why this matters: the next run records a fresh base-commit from HEAD, so uncommitted changes would leak into the next subtask's diff. The workflow itself never stages, commits, or pushes
  6. If no next subtask exists (all subtasks completed): delete the canonical state file via rm <canonical-path>, remove the parent-task TodoWrite row, and include every subtask's title and recorded pr (if any) in the parent-task completion summary
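The selection rule in step 4 can be sketched as follows (an illustrative Python sketch; the subtask dict shape is an assumption for the example, not the documented state-file schema):

```python
def next_runnable(subtasks):
    """Smallest-id pending subtask whose depends_on entries are all completed."""
    done = {t["id"] for t in subtasks if t["status"] == "completed"}
    candidates = [t for t in subtasks
                  if t["status"] == "pending" and set(t["depends_on"]) <= done]
    return min(candidates, key=lambda t: t["id"], default=None)

subtasks = [
    {"id": 1, "status": "completed", "depends_on": []},
    {"id": 2, "status": "pending",   "depends_on": [1]},
    {"id": 3, "status": "pending",   "depends_on": [2]},
]
# id 2 is runnable (its dependency is done); id 3 must wait for id 2
```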