Agent Skills: Refactor and Clean Code

ID: philoserf/claude-code-setup/refactor-clean

Install this agent skill into your local setup:

pnpm dlx add-skill https://github.com/philoserf/claude-code-setup/tree/HEAD/skills/refactor-clean

Skill Files



skills/refactor-clean/SKILL.md

Skill Metadata

Description
Structured refactoring with smell detection, severity classification, and before/after metrics. Use when code needs deep structural analysis — decomposing large classes, resolving SOLID violations, eliminating duplication across modules, or reducing cyclomatic complexity. Presents a prioritized plan for approval before making changes. Not for lightweight post-edit polish (the simplify agent handles that automatically).

Refactor and Clean Code

Systematic methodology for analyzing and refactoring code to improve quality, maintainability, and performance. Focus on practical, incremental improvements — not over-engineering.

When to Use

  • Code has grown unwieldy (long functions, large classes, deep nesting)
  • Duplicate logic scattered across modules
  • Complexity makes the code hard to test or extend
  • User asks to "refactor", "clean up", "simplify", or "improve code quality"

When NOT to Use

  • Adding new features (build first, refactor after)
  • Pure formatting/style changes (use formatter instead)
  • Writing tests from scratch (use tdd-cycle skill)
  • Performance-only optimization with no structural issues

Workflow

1. Analyze

Read the target code and identify issues using the analysis rubric.

  • Map function/class boundaries and responsibilities
  • Flag code smells with specific locations and threshold violations
  • Note SOLID violations and performance smells
  • Classify each issue by severity (Critical / High / Medium / Low)
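The analysis step above can be partially automated. Here is a minimal sketch in Python using the standard-library `ast` module to flag one common smell, over-long functions; the `MAX_FUNCTION_LINES` threshold is a hypothetical value, not the one defined in analysis-rubric.md.

```python
import ast
import textwrap

# Hypothetical threshold for illustration; the real value belongs in analysis-rubric.md.
MAX_FUNCTION_LINES = 30

def find_long_functions(source: str) -> list[tuple[str, int]]:
    """Return (name, line_count) for every function exceeding MAX_FUNCTION_LINES."""
    tree = ast.parse(source)
    smells = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on parsed nodes in Python 3.8+
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                smells.append((node.name, length))
    return smells

sample = textwrap.dedent("""
def tiny():
    return 1
""")
print(find_long_functions(sample))  # [] — tiny() is well under the threshold
```

The same walk can be extended to count nesting depth or branch points for a cyclomatic-complexity estimate.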

2. Prioritize

Rank issues using the impact-effort matrix:

| Priority | Description              | Action     |
| -------- | ------------------------ | ---------- |
| P1       | High impact, low effort  | Do first   |
| P2       | High impact, high effort | Plan next  |
| P3       | Low impact, low effort   | Quick wins |
| P4       | Low impact, high effort  | Skip       |

Present the prioritized list to the user before making changes.
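The matrix translates directly into code. A minimal Python sketch, with hypothetical issue names chosen for illustration:

```python
def priority(impact: str, effort: str) -> str:
    """Map an issue's impact/effort pair to a P1-P4 bucket per the matrix."""
    table = {
        ("high", "low"): "P1",   # do first
        ("high", "high"): "P2",  # plan next
        ("low", "low"): "P3",    # quick wins
        ("low", "high"): "P4",   # skip
    }
    return table[(impact, effort)]

# Hypothetical issues from an analysis pass: (description, impact, effort)
issues = [
    ("duplicated parser logic", "high", "high"),
    ("magic timeout constant", "high", "low"),
    ("unused import", "low", "low"),
]
# "P1" < "P2" < "P3" lexicographically, so sorting by bucket orders the work.
ranked = sorted(issues, key=lambda i: priority(i[1], i[2]))
print([i[0] for i in ranked])
# ['magic timeout constant', 'duplicated parser logic', 'unused import']
```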

3. Refactor

Apply changes incrementally — one concern at a time:

  • Extract methods/functions to reduce size and complexity
  • Decompose classes that violate single responsibility
  • Replace magic numbers with named constants
  • Eliminate duplication by extracting shared logic
  • Simplify conditionals and reduce nesting depth
  • Improve names to be descriptive and searchable
  • Remove dead code and unused variables

Principles to follow: DRY, YAGNI, composition over inheritance, consistent abstraction levels, no side effects.
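Several of these moves often combine in a single pass. A small before/after sketch (the invoice example and the VAT rate are hypothetical, not from this skill):

```python
# Before: magic number, nested conditionals, unexplained multiplier.
def invoice_total_before(items):
    total = 0
    for price, qty in items:
        if qty > 0:
            if price > 0:
                total += price * qty * 1.2  # what is 1.2?
    return total

# After: named constant, extracted helper, flattened conditions.
VAT_RATE = 0.20

def line_total(price: float, qty: int) -> float:
    """Price of one line item including VAT."""
    return price * qty * (1 + VAT_RATE)

def invoice_total(items) -> float:
    return sum(line_total(p, q) for p, q in items if p > 0 and q > 0)
```

Behavior is unchanged; the refactor only names the constant, extracts the per-line calculation, and replaces the nesting with a filter expression.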

4. Verify

Run existing tests after each incremental change. Check against the quality checklist.

  • Detect test runner: check for package.json scripts, Makefile targets, pytest, go test, cargo test
  • Run the suite and confirm all tests pass
  • If tests break, fix before continuing
  • If no tests exist, note this in the report but don't block the refactor

5. Report

Provide a before/after metrics comparison:

## Refactoring Summary

### Changes Made
- [list of changes with severity tags]

### Metrics
| Metric              | Before | After |
|---------------------|--------|-------|
| Max function length |        |       |
| Max complexity      |        |       |
| Duplicate blocks    |        |       |
| Responsibilities    |        |       |

### Remaining Issues
- [anything deferred with rationale]

Output Format

  1. Analysis — Issues found, classified by severity
  2. Plan — Prioritized changes (confirm with user before proceeding)
  3. Refactored Code — Incremental changes with clear explanations
  4. Metrics Report — Before/after comparison

Reference Files

Detailed analysis criteria and quality standards:

  • analysis-rubric.md — Code smell thresholds, SOLID indicators, severity classification, prioritization matrix
  • quality-checklist.md — Before/after metrics template, acceptance criteria, reporting guidelines

Do not use when

  • Quick format + lint on a single language — use the matching *-quality-gate
  • Reviewing a staged or branch diff — use diff-review
  • Prioritizing debt across an entire project — use tech-debt
  • Finding individual bugs rather than structural issues — use code-audit