# prep-issue — Agent-Ready Issue Preparation
Evaluate existing issues or scaffold new ones so an AI agent can execute them autonomously without human intervention.
## Per-Project Config Discovery
Before doing anything, check for a project-specific config file: `.agent/issue-context.md`
If found, read it. It provides project-specific paths (spec directories, template files, checklists, conventions) that inform both evaluation and scaffolding. If not found, work with just the universal criteria — but note the absence and suggest creating one.
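A minimal config might look like the following. The entries and paths here are purely illustrative, not a required schema; use whatever the project actually provides:

```markdown
# Issue Context

- **Spec directory**: `docs/specs/`
- **Issue template**: `.github/ISSUE_TEMPLATE/agent_task.md`
- **Checklist**: `docs/checklists/build_checklist.md`
- **Conventions**: `docs/standards/modeling_conventions.md`
```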
## Criteria Reference
Read `references/agent_ready_criteria.md` for the full evaluation rubric. The short version:
**Required** (must have all three):
- Objective — clear statement of what the issue produces
- Specification — link to spec doc or inline requirements
- Acceptance criteria — testable "done" conditions
**Recommended** (should have 4+ of these):

4. Pattern/template reference
5. Target location
6. Context references (standards, conventions docs)
7. Dependencies
8. Execution instructions (prompts, playbooks, workflows to follow)
9. Scope boundary
## Mode 1: Evaluate an Existing Issue
Triggered by: `/prep-issue <number>` or `/prep-issue #<number>`
### Workflow
- Fetch the issue: `gh issue view <number>`
- Read `references/agent_ready_criteria.md`
- Read `.agent/issue-context.md` if it exists in the current repo
- Score each criterion as present, partial, or missing
- If the project config lists file paths (specs, templates, checklists), verify those paths actually exist in the repo
- Present the evaluation
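The scoring step can be sketched as a heading check against the issue body. This is a minimal sketch: the section names below are assumptions drawn from the Mode 2 template, and a real evaluation would follow the rubric in `references/agent_ready_criteria.md` rather than this heuristic:

```python
import re

# Assumed section names; the actual rubric lives in
# references/agent_ready_criteria.md.
REQUIRED = ["Objective", "Specification", "Acceptance Criteria"]
RECOMMENDED = ["Pattern", "Target", "Context", "Dependencies",
               "Execution Instructions", "Scope"]

def score(body: str) -> dict:
    """Mark each criterion as present, partial, or missing.

    A criterion is 'present' if a matching markdown heading exists with
    non-empty content beneath it, 'partial' if the heading exists but is
    empty, and 'missing' otherwise.
    """
    results = {}
    for name in REQUIRED + RECOMMENDED:
        # Match a heading like "## Acceptance Criteria", tolerating a
        # suffix (e.g. "## Scope boundary"), then capture everything up
        # to the next heading or end of body.
        match = re.search(
            rf"^#+\s*{name}[^\n]*$(.*?)(?=^#+\s|\Z)",
            body, re.IGNORECASE | re.MULTILINE | re.DOTALL)
        if match is None:
            results[name] = "missing"
        elif match.group(1).strip():
            results[name] = "present"
        else:
            results[name] = "partial"
    return results
```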
### Output Format
```markdown
## Agent-Readiness: Issue #<number>

**Rating**: Ready / Nearly Ready / Not Ready

### Required Fields
- [x] Objective: <summary or what's missing>
- [x] Specification: <link found or missing>
- [ ] Acceptance criteria: <what's there or what's needed>

### Recommended Fields
- [x] Pattern reference: ...
- [ ] Target location: ...
- [x] Context references: ...
- [ ] Dependencies: ...
- [ ] Execution instructions: ...
- [ ] Scope boundary: ...

### Suggestions
1. <specific, actionable suggestion to close the gap>
2. ...

### Auto-Fill Available
<If the project config provides info that could fill missing fields, list them here and offer to update the issue.>
```
## Mode 2: Scaffold a New Issue
Triggered by: `/prep-issue new` or `/prep-issue scaffold`
### Workflow
- Read `references/agent_ready_criteria.md`
- Read `.agent/issue-context.md` if it exists
- Ask what the issue is about (or infer from conversation context)
- Classify the issue type and construct the title using this convention: `<type>: <scope> — <action>`

  | Type | When | Example |
  |------|------|---------|
  | `feat` | New models, objects, pipelines | `feat: dim_member — implementation` |
  | `fix` | Bug fixes | `fix: ces_provider — hash key mismatch` |
  | `refac` | Restructuring without behavior change | `refac: raw vault staging — consolidate source models` |
  | `docs` | Documentation only | `docs: business vault patterns — add CES conventions` |

  The type prefix drives the branch name in `/work-issue` (e.g., `feat` → `feat/issue-26-dim-member`). Get it right here so downstream is automatic.
- For each required and recommended field:
  - If the project config provides a default (e.g., spec directory), use it
  - If the field needs user input, ask concisely
- Draft the issue body using the template below
- Confirm with the user, then create with `gh issue create`
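The title-to-branch mapping above can be sketched as follows. The slug rules (lowercase, non-alphanumerics collapsed to hyphens) are assumptions inferred from the `dim_member` example; `/work-issue` defines the actual convention:

```python
import re

def branch_name(title: str, issue_number: int) -> str:
    """Derive a /work-issue branch name from a typed issue title.

    'feat: dim_member — implementation' + issue 26
    gives 'feat/issue-26-dim-member'.
    """
    issue_type, _, rest = title.partition(":")
    scope = rest.split("—")[0].strip()  # keep only the text before the em dash
    # Assumed slug rule: lowercase, runs of non-alphanumerics become hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", scope.lower()).strip("-")
    return f"{issue_type.strip()}/issue-{issue_number}-{slug}"
```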
### Issue Body Template
```markdown
## Objective
<one sentence: what this issue produces when done>

## Context
- **Spec**: `<path to spec doc>`
- **Template**: `<path to example code to follow>`
- **Target**: `<directory where output files go>`
- **Standards**: `<links to relevant convention/pattern docs>`
- **Checklist**: `<path to checklist to update when done>`

## Execution Instructions
- **Playbook**: `<path to prompt, workflow, or runbook the agent should follow>`
- If no playbook exists, describe the execution approach inline.

## Instructions
1. Read the spec to understand the object's purpose, sources, and columns
2. Read the template to understand the coding pattern
3. <specific implementation steps>
4. Update the checklist when complete

## Acceptance Criteria
- [ ] <testable condition 1>
- [ ] <testable condition 2>
- [ ] <testable condition 3>

## Dependencies
- <what must exist before starting, or "None">

## Scope
- **In scope**: <what's included>
- **Out of scope**: <what's explicitly excluded>
```
## Mode 3: Batch Evaluate
Triggered by: `/prep-issue all` or `/prep-issue batch`
- List open issues: `gh issue list --state open --json number,title,labels`
- Evaluate each against the criteria (summary only, not full detail)
- Present a table:
| # | Title | Rating | Missing |
|---|-------|--------|---------|
| 6 | Implement CES objects | Nearly Ready | acceptance criteria, dependencies |
| 7 | Add provider models | Not Ready | spec, acceptance criteria |
- Offer to deep-dive on any specific issue
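The per-issue rating in the table can be derived from the criterion counts. The thresholds below are an assumption based on the rubric summary ("must have all three" required, "should have 4+" recommended), not an exact rule from `references/agent_ready_criteria.md`:

```python
def rating(required_met: int, recommended_met: int) -> str:
    """Map criterion counts to a readiness rating.

    Assumed rule: any required criterion missing means Not Ready; all
    required plus 4+ recommended means Ready; otherwise Nearly Ready.
    """
    if required_met < 3:
        return "Not Ready"
    if recommended_met >= 4:
        return "Ready"
    return "Nearly Ready"
```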
## Handling Edge Cases
- **No project config found**: Note it, suggest creating one, and evaluate with universal criteria only
- **Issue references files that don't exist**: Flag as a problem — the agent will fail if it can't find referenced files
- **Issue is too large for single-agent execution**: Suggest breaking it into smaller issues, each individually agent-ready
- **Issue mixes agent work with human decisions**: Flag decision points that need human input before the agent can proceed
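The referenced-file check (used both here and in the Mode 1 workflow) can be sketched by pulling backticked tokens out of the issue body and testing each against the working tree. Treating every backticked, space-free token containing a `/` as a path is a heuristic assumption:

```python
import re
from pathlib import Path

def missing_paths(issue_body: str, repo_root: str = ".") -> list[str]:
    """Return backticked path-like references that don't exist on disk."""
    candidates = re.findall(r"`([^`\n]+)`", issue_body)
    # Heuristic: a path contains a slash and no spaces (filters out
    # backticked commands like `gh issue view`).
    paths = [c for c in candidates if "/" in c and " " not in c]
    return [p for p in paths if not (Path(repo_root) / p).exists()]
```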