Agent Skills: Ticket Review


Skill ID: malhashemi/dotfiles/ticket-review

Install this agent skill locally:

pnpm dlx add-skill https://github.com/malhashemi/dotfiles/tree/HEAD/dot_config/opencode/skill/caster/ticket-review


Skill file: dot_config/opencode/skill/caster/ticket-review/SKILL.md


Ticket Review

Overview

This skill provides a structured workflow for reviewing ticket documents. Unlike Caster's standard creation workflow, this is an analysis and critique workflow focused on finding issues and proposing fixes.

When to Use

  • User wants to review an existing ticket
  • User says "review this ticket", "check this spec", "find issues in..."
  • The /ticket-review command is invoked
  • User has been discussing a ticket and wants it reviewed

Workflow

Review follows a different workflow than document creation:

Phase 1: Document Loading

  1. Identify the ticket to review:

    • If path is provided, read that file
    • If no path, infer from conversation context (look for recently discussed ticket)
    • If ambiguous, ask the user to specify
  2. Read the full document without limit/offset parameters

    • Parse frontmatter for metadata
    • Understand the document structure
    • Note the ticket's purpose and scope
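The loading step above can be sketched in Python. This is a minimal illustration, not Caster's actual loader: `load_ticket` is a hypothetical helper name, and it assumes the common convention of YAML frontmatter delimited by `---` lines.

```python
from pathlib import Path

def load_ticket(path):
    """Read a ticket file whole and split YAML frontmatter from the body.

    Sketch only: assumes '---'-delimited frontmatter at the top of the file.
    """
    text = Path(path).read_text(encoding="utf-8")
    if text.startswith("---\n"):
        # Split on the first two '---' delimiter lines.
        _, frontmatter, body = text.split("---\n", 2)
        return frontmatter.strip(), body
    return "", text  # no frontmatter present
```

Reading the whole file (no limit/offset) matters because findings must cite exact line numbers later.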

Phase 2: Analysis [ULTRATHINK]

Analyze the document for four types of issues:

| Issue Type | Description | Questions to Ask |
|------------|-------------|------------------|
| Gaps | Missing information | What would an implementer need to know that isn't here? |
| Inconsistencies | Conflicting information | Do different sections contradict each other? |
| Ambiguity | Unclear or vague statements | Could this be interpreted multiple ways? |
| Completeness | Missing coverage | Are acceptance criteria complete? Are all cases covered? |

Phase 3: Severity Classification

Classify each finding using this framework:

| Severity | Definition | Example |
|----------|------------|---------|
| Critical | Conflicting information that would break implementation | Two sections specify different behavior for the same feature |
| High | High ambiguity causing significant implementer confusion | Behavior described but critical edge cases undefined |
| Medium | Missing info that could cause unexpected behavior | Default value unspecified, implementer must guess |
| Low | Nice-to-have clarifications, cosmetic issues | Output format example missing but inferable |
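The severity ladder above maps naturally onto an ordered enum; a minimal sketch (the names are illustrative assumptions, not part of the skill's tooling):

```python
from enum import IntEnum

class Severity(IntEnum):
    # Higher value = more urgent; sorting descending gives the
    # Critical-first fix order used later in Phase 6.
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

# Prefixes used to build finding IDs such as C1, H1, M1, L1.
ID_PREFIX = {
    Severity.CRITICAL: "C",
    Severity.HIGH: "H",
    Severity.MEDIUM: "M",
    Severity.LOW: "L",
}
```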

Phase 4: Present Findings

Present findings organized by severity, with specific location and proposed fix:

## Review Findings

### Critical (0 issues)

None found.

### High (1 issue)

#### H1: [Brief title]

**Location**: Part X, lines Y-Z

**Issue**: [Clear description of what's wrong]

**Impact**: [Why this matters for implementation]

**Proposed fix**:

Before:

[exact current text]


After:

[exact proposed text]


### Medium (2 issues)

#### M1: [Brief title]
...

### Low (1 issue)

#### L1: [Brief title]
...

---

## Summary

| Severity | Count |
|----------|-------|
| Critical | 0 |
| High | 1 |
| Medium | 2 |
| Low | 1 |

**Recommendation**: [Fix the N critical/high issues before implementation]
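The summary counts can be generated mechanically from the collected findings. A sketch, assuming each finding is represented as a dict with a `severity` key (a representation chosen here for illustration):

```python
from collections import Counter

def summary_table(findings):
    """Render the summary as a markdown table with zero-filled counts."""
    counts = Counter(f["severity"] for f in findings)
    rows = ["| Severity | Count |", "|----------|-------|"]
    for severity in ("Critical", "High", "Medium", "Low"):
        # Zero-filled so every severity row appears even with no findings.
        rows.append(f"| {severity} | {counts.get(severity, 0)} |")
    return "\n".join(rows)
```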

Phase 5: Collect Decisions

After presenting all findings and the summary, use the Question tool to collect the user's decision on each finding individually:

  • One question per finding — each finding gets its own question in a single Question tool call
  • Question header: Finding ID + short title (e.g., "M1: Attachment var context")
  • Question text: One-sentence summary of the finding and proposed fix
  • Standard options: "Approve" (apply as proposed) and "Reject" (skip entirely)
  • Design-choice findings: When a finding presents alternatives (e.g., support vs prohibit a feature), offer each alternative as a separate Approve option with a description
  • Custom input: Always enabled — user can type modifications to the proposed fix

Do NOT apply any changes until all decisions are collected.
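The rules above can be sketched as a question builder. The field names (`header`, `text`, `options`, `allow_custom_input`) are assumptions for illustration only, not the Question tool's real schema:

```python
# Illustrative only: field names are assumed, not the Question tool's schema.
def build_question(finding_id, title, summary, alternatives=None):
    """Build one per-finding question following the rules above."""
    if alternatives:
        # Design-choice finding: one Approve option per alternative.
        options = [{"label": f"Approve: {alt}", "description": alt}
                   for alt in alternatives]
    else:
        options = [{"label": "Approve", "description": "Apply the fix as proposed"}]
    options.append({"label": "Reject", "description": "Skip this finding entirely"})
    return {
        "header": f"{finding_id}: {title}",
        "text": summary,
        "options": options,
        "allow_custom_input": True,  # user may type a modified fix
    }
```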

Phase 6: Apply Fixes

Apply only the approved fixes using the Edit tool for targeted changes — never rewrite the entire file:

  1. Apply in order of severity (Critical first)
  2. For custom responses, adapt the proposed fix to match the user's instructions
  3. Update last_updated and last_updated_by in frontmatter
  4. Add last_updated_note summarizing changes
  5. Confirm each change was applied
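Steps 3-4 amount to replacing three bookkeeping keys in the frontmatter. A sketch operating on frontmatter lines; the key names come from the steps above, while the ISO date format and the `update_bookkeeping` helper are assumptions:

```python
import datetime

BOOKKEEPING = ("last_updated:", "last_updated_by:", "last_updated_note:")

def update_bookkeeping(frontmatter_lines, note, updated_by="ticket-review"):
    """Drop stale bookkeeping keys, then append fresh ones (steps 3-4)."""
    kept = [line for line in frontmatter_lines if not line.startswith(BOOKKEEPING)]
    kept.append(f"last_updated: {datetime.date.today().isoformat()}")
    kept.append(f"last_updated_by: {updated_by}")
    kept.append(f"last_updated_note: {note}")
    return kept
```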

Finding Format

Each finding must include:

  1. Unique ID: Severity prefix + number (C1, H1, M1, L1)
  2. Title: Brief description (5-10 words)
  3. Location: Part/section and line numbers if possible
  4. Issue: Clear description of the problem
  5. Impact: Why this matters (for Medium+ severity)
  6. Proposed fix: Exact before/after text
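The required fields map onto a simple record type; a sketch (the class itself is illustrative, not part of the skill):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    id: str           # severity prefix + number, e.g. "H1"
    title: str        # brief description, 5-10 words
    location: str     # part/section and line numbers where possible
    issue: str        # clear description of the problem
    before: str       # exact current text
    after: str        # exact proposed text
    impact: str = ""  # required for Medium severity and above
```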

Review Principles

  • Be specific: Vague feedback is not actionable
  • Show don't tell: Always include before/after text
  • Prioritize: Critical and High issues first
  • Stay scoped: Review for implementer readiness, not feature design
  • Preserve intent: Fixes should clarify, not change meaning

What NOT to Review

  • Feature decisions (that's the author's domain)
  • Writing style (unless it causes ambiguity)
  • Structure choices (unless they cause confusion)
  • Scope (unless explicitly asked)

Focus on: Would an implementer be able to build this correctly?

Quality Checklist

Before presenting findings:

  • [ ] Read the entire document (no skimming)
  • [ ] Each finding has specific location
  • [ ] Each finding has exact before/after text
  • [ ] Severity classification is justified
  • [ ] Findings are actionable (not vague critiques)
  • [ ] Summary includes counts and recommendation

Remember

A good review catches issues before they reach implementation. Be thorough but constructive: the goal is to improve the document, not criticize the author.