# work-issue — Autonomous Issue Execution
Pick up an agent-ready GitHub issue and execute it. The issue is assumed to have
passed /prep-issue criteria (objective, specification, acceptance criteria, and
ideally execution instructions). This skill is the execution engine — it reads the
issue, discovers project context, validates consistency, does the work, and reports
back.
## Quick Reference

```
/work-issue 42                  # work issue #42 in current repo
/work-issue 42 --repo org/repo  # work issue in a specific repo
/work-issue <github-url>        # work issue from URL
```
## Step 0: Parse Input

Extract the issue number and optional repo from the args.

- If a full GitHub URL is provided, parse the owner/repo and issue number from it
- If just a number, use the current git repo (run `gh repo view --json nameWithOwner -q .nameWithOwner`)
- If `--repo` is specified, use that

Store as `$ISSUE_NUMBER` and `$REPO` for the rest of the workflow.
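The parsing above can be sketched in shell. This is a minimal illustration: the function name and the URL regexes are assumptions, and the `--repo` flag case is omitted for brevity.

```shell
# Sketch of Step 0 parsing. The URL branch extracts owner/repo and the issue
# number; the bare-number branch shells out to gh for the current repo.
parse_issue_arg() {
  local arg="$1"
  case "$arg" in
    https://github.com/*/issues/*)
      # Full URL: pull out owner/repo and the trailing issue number
      REPO="$(printf '%s' "$arg" | sed -E 's#https://github.com/([^/]+/[^/]+)/issues/.*#\1#')"
      ISSUE_NUMBER="$(printf '%s' "$arg" | sed -E 's#.*/issues/([0-9]+).*#\1#')"
      ;;
    *)
      # Bare number: fall back to the current repo
      ISSUE_NUMBER="$arg"
      REPO="$(gh repo view --json nameWithOwner -q .nameWithOwner)"
      ;;
  esac
}
```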
## Step 1: Fetch the Issue

```
gh issue view $ISSUE_NUMBER --repo $REPO --json title,body,labels,comments,state,assignees
```
Parse the issue body. Expect structured sections per the /prep-issue template:
- Objective — what this produces when done
- Context — spec paths, template references, standards docs
- Execution Instructions — playbook or prompt to follow
- Instructions — step-by-step implementation guide
- Acceptance Criteria — testable done conditions
- Dependencies — what must exist first
- Scope — in/out boundaries
If the issue body is unstructured, extract what you can. Missing structure is not a blocker — but note any gaps and proceed with caution.
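Pulling one section out of a structured body can be sketched as below. It assumes the /prep-issue template renders sections as `## <Name>` markdown headings, which is an assumption about the template, not something this skill guarantees.

```shell
# Sketch: extract one section from a structured issue body.
# Assumes sections are "## <Name>" markdown headings.
extract_section() {
  local body="$1" name="$2"
  printf '%s\n' "$body" | awk -v s="## $name" '
    $0 == s { found = 1; next }   # matching heading starts capture
    /^## /  { found = 0 }         # any other heading ends it
    found   { print }'
}
```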
## Step 2: Discover Project Context

Look for project-specific agent configuration:

```
.agent/issue-context.md
```
If found: Read it. This file provides:
- Spec directories, templates, and target locations
- Standards and convention documents to reference
- Execution instructions (code generators, spec generators, playbooks)
- Key coding conventions
- Verification commands
If not found: Proceed with just the issue body and general codebase context. Note the absence — suggest creating one if the project would benefit from it.
Also read these if they exist (check before reading):

- `CLAUDE.md` or `.ai/instructions.md` in the repo root
- `docs/doc_conventions.md`
- Any architecture docs referenced in the issue body
## Step 3: Check Dependencies
Before starting work, verify that dependencies listed in the issue actually exist:
- If the issue references upstream models, check that those files exist
- If it depends on other issues, check their state (`gh issue view <dep> --json state`)
- If a dependency is missing or incomplete, stop and report — don't build on a foundation that doesn't exist
If all dependencies are satisfied (or none listed), proceed.
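The issue-state half of the check might look like this. The function name is illustrative; `$REPO` comes from Step 0, and the list of dependency numbers would be parsed from the issue's Dependencies section.

```shell
# Sketch: fail fast when a dependency issue is still open.
check_dependency() {
  local dep="$1" state
  state="$(gh issue view "$dep" --repo "$REPO" --json state -q .state)"
  if [ "$state" != "CLOSED" ]; then
    echo "Blocked: dependency #$dep is $state" >&2
    return 1
  fi
}
```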
## Step 4: Validate Spec Consistency
This is the architectural guardrail. Before executing, read the specification and cross-reference it against the project's established patterns.
What to check:
- Naming conventions — do proposed file/model names match the project's patterns?
- Structural patterns — does the spec align with how similar objects are built?
- Business key design — are keys source-neutral where they should be? (check against architecture docs)
- Column definitions — do data types, nullability, and naming match conventions?
- Template alignment — if a template is referenced, does the spec's structure match what the template expects?
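The naming-convention check is the most mechanical of these and can be sketched as a simple prefix comparison. The prefix list here is purely hypothetical; the real list would come from the project's architecture docs.

```shell
# Sketch: flag a proposed name whose prefix matches none of the project's
# established prefixes (the prefix list passed in is hypothetical).
check_naming() {
  local name="$1" p
  shift
  for p in "$@"; do
    case "$name" in "$p"_*) return 0 ;; esac
  done
  echo "Naming mismatch: '$name' matches none of: $*" >&2
  return 1
}
```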
If inconsistencies are found:
- List each inconsistency clearly: what the spec says vs. what the architecture says
- Ask which should take precedence before proceeding
- Do NOT silently resolve conflicts by guessing
If everything aligns: Proceed to execution.
## Step 5: Execute the Work

**Before starting:** set up the branch and update issue state.
Derive the branch name from the issue title. Issue titles follow the convention
`<type>: <scope> — <action>`, where `type` is one of `feat`, `fix`, `refac`, `docs`.
- Parse the type prefix from the title (text before the first colon)
- Generate a slug from the scope/action (lowercase, hyphens, no special chars)
- Create the branch:

```
# Example: title "feat: dim_member — implementation" → feat/issue-26-dim-member
git checkout -b $BRANCH_NAME main
```
| Title prefix | Branch prefix |
|-------------|---------------|
| feat | feat/ |
| fix | fix/ |
| refac | refac/ |
| docs | docs/ |
If the title doesn't follow the convention, infer the type from the issue content
(new objects → feat, bug → fix, restructuring → refac, docs-only → docs).
Default to feat/ if unclear.
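The derivation above can be sketched as follows. This simple version slugs the entire remainder of the title; trimming generic trailing words (as in the `dim-member` example, which drops "implementation") is left to judgment.

```shell
# Sketch of branch-name derivation from an issue title.
derive_branch() {
  local title="$1" issue="$2"
  local type="${title%%:*}"        # text before the first colon
  case "$type" in feat|fix|refac|docs) ;; *) type="feat" ;; esac   # default
  local rest="${title#*:}"
  # lowercase, collapse non-alphanumeric runs to hyphens, trim edge hyphens
  local slug
  slug="$(printf '%s' "$rest" | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//')"
  BRANCH_NAME="$type/issue-$issue-$slug"
}
```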
**Update issue state:**

```
# Assign to Dan
gh issue edit $ISSUE_NUMBER --repo $REPO --add-assignee danbrickey

# Add status label
gh issue edit $ISSUE_NUMBER --repo $REPO --add-label "status: In Progress"

# Move to "In Progress" on the project board (query field/option IDs dynamically)
```
### Execution
Follow the execution path in this priority order:
1. **Issue's explicit execution instructions** — if the issue body says "Execute using `<path>`", read that prompt/playbook and follow it
2. **Project context execution instructions** — if `.agent/issue-context.md` maps this type of work to a specific generator or playbook, use it
3. **Direct implementation** — if no playbook exists, implement based on the spec, template reference, and conventions
During execution:
- Follow the issue's step-by-step instructions if provided
- Reference the template/pattern file for structural guidance
- Apply all coding conventions from the project context
- Create files in the specified target directories
- Create both SQL and YAML files when the project convention requires paired files
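The paired-file convention lends itself to a mechanical check. A sketch, assuming the `.yml` sits next to its model `.sql` (the side-by-side layout is an assumption):

```shell
# Sketch: list model .sql files that are missing their paired .yml.
missing_pairs() {
  local sql
  for sql in "$@"; do
    [ -e "${sql%.sql}.yml" ] || echo "$sql"
  done
}
```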
### Architectural Decision Gates
While executing, you may encounter decisions not covered by the spec, architecture docs, or project context. These are architectural decision gaps.
When you hit one:
- Describe the decision needed — what question arose and why
- Explain the options you see and their tradeoffs
- State which option you'd lean toward and why (give a recommendation)
- Stop and ask — do not proceed past a genuine architectural decision gap
Examples of architectural decisions:
- A business rule is ambiguous and could be implemented multiple ways
- The spec references a pattern that contradicts an established convention
- A dependency exists that the spec doesn't account for
- A naming conflict arises between the spec and existing models
- The spec requires a column or join that doesn't exist in source data
Examples of things that are NOT architectural decisions (just do them):
- Obvious typos in the spec
- Standard boilerplate (imports, config blocks, hash key wrappers)
- File placement when the target directory is clear
- YAML test generation for standard patterns (not_null, unique, etc.)
### Progress Updates
For multi-file or multi-step work, provide brief progress updates at natural milestones:
- "Created staging model, moving to hub..."
- "SQL complete, generating YAML tests..."
- "All files created, running verification..."
## Step 6: Verify
After execution, run verification to confirm the work is complete:
- **Project-specific verification** — if `.agent/issue-context.md` lists verification commands, run them (e.g., `dbt compile --select <model>`)
- **File existence** — confirm all files mentioned in acceptance criteria exist
- Acceptance criteria check — walk through each criterion and confirm it's met
If verification fails:
- Diagnose and fix the issue
- Re-verify after fixing
- If you can't fix it, report what failed and why
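The file-existence half of verification can be sketched as below: report every expected file that is missing, and fail if any are. The expected-file list would come from the acceptance criteria.

```shell
# Sketch: check that every expected output file exists.
verify_files() {
  local f missing=0
  for f in "$@"; do
    if [ ! -e "$f" ]; then
      echo "Missing expected file: $f" >&2
      missing=1
    fi
  done
  return "$missing"
}
```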
## Step 7: Commit, Push, and Open PR
After verification passes, package the work for review:
1. Stage and commit all created/modified files:

```
git add <specific files>
git commit -m "$(cat <<'EOF'
<type>: <short description matching issue title>

Closes #<issue_number>

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
EOF
)"
```

   Use the same type prefix as the branch (`feat`, `fix`, `refac`, `docs`).

2. Push the branch:

```
git push -u origin $BRANCH_NAME
```

3. Open a pull request linking back to the issue:

```
gh pr create --repo $REPO --title "<issue title>" --body "$(cat <<'EOF'
## Summary

<1-3 bullet points describing what was built/changed>

Closes #<issue_number>

## Acceptance Criteria

- [x] <criterion from issue>
- [x] ...

## Test Plan

- [ ] Review generated SQL for correctness
- [ ] Verify dbt compile succeeds
- [ ] Spot-check YAML tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)
EOF
)"
```
The PR becomes the artifact Dan reviews. The issue comment (Step 8) links to it.
## Step 8: Report

When done, provide a completion summary:

```
## Issue #<number> — Complete

### What was done
- <bullet list of files created/modified>

### Acceptance criteria
- [x] <criterion 1>
- [x] <criterion 2>
- [ ] <criterion that couldn't be met — explain why>

### Decisions made
- <any decisions that came up during execution, even small ones>
- <include your reasoning so Dan can spot-check>

### Architectural gaps surfaced
- <any missing conventions, unclear patterns, or undocumented decisions
  that should be added to project context for future issues>

### Suggested follow-ups
- <checklist updates, downstream issues, doc updates>
```
## Post-Completion Actions

1. Post the report as an issue comment so it's visible on the issue itself:

```
gh issue comment $ISSUE_NUMBER --repo $REPO --body "$REPORT"
```

2. Assign the issue to Dan:

```
gh issue edit $ISSUE_NUMBER --repo $REPO --add-assignee danbrickey
```

3. Move the issue to "Review" on the project board. Use the GitHub Projects GraphQL API to update the Status field to "Review":
   - Get the project item ID for this issue
   - Update the Status field to the "Review" option
   - The board column IDs may change — query them dynamically rather than hardcoding. Use this pattern:

```
# Get the project item ID for this issue
gh api graphql -f query='...'   # query projectItems for the issue node ID

# Update the Status field to Review
gh api graphql -f query='mutation { updateProjectV2ItemFieldValue(...) }'
```

4. If the project context references a checklist to update (e.g., `docs/edp_reconciliation_checklist.md`), offer to update it.

5. If architectural gaps were surfaced, offer to create a doc-tracker entry or update the relevant architecture doc.
Do NOT close the issue — Dan reviews the work and closes it manually.
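Resolving the "Review" option ID dynamically might look like the sketch below. It assumes a ProjectV2 with a single-select Status field; `$PROJECT_ID` is a hypothetical node ID you'd have resolved earlier.

```shell
# Sketch: look up the "Review" option ID of the Status field at runtime
# instead of hardcoding it. $PROJECT_ID is a hypothetical project node ID.
review_option_id() {
  gh api graphql -f query='
    query($project: ID!) {
      node(id: $project) {
        ... on ProjectV2 {
          field(name: "Status") {
            ... on ProjectV2SingleSelectField { options { id name } }
          }
        }
      }
    }' -f project="$PROJECT_ID" \
    --jq '.data.node.field.options[] | select(.name == "Review") | .id'
}
```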
## Handling Edge Cases

- **Issue is not agent-ready:** If the issue is missing required fields (objective, spec, acceptance criteria), say so and suggest running `/prep-issue <number>` first. Don't attempt to execute an underspecified issue.
- **Issue is too large:** If the issue contains work that would produce more than ~5-8 files or spans multiple architectural layers, suggest breaking it into sub-issues. Offer to help with the breakdown.
- **Spec doesn't exist yet:** If the issue references a spec file that doesn't exist, this is a blocker. Report it and suggest using the appropriate spec generator.
- **Execution instruction prompt not found:** If the issue references a playbook path that doesn't exist, fall back to direct implementation with the spec and template. Note the missing playbook.
- **Multiple repos:** If the issue spans multiple repos, work only in the current repo. Flag cross-repo dependencies as follow-ups.
## Design Philosophy

This skill is the execution engine in a pipeline:

```
/prep-issue (quality gate) --> /work-issue (execution) --> human review --> merge
```
It assumes issues arrive well-specified. Its job is faithful execution with architectural awareness — not creative interpretation. When the spec is clear, execute precisely. When it's ambiguous, ask. When it conflicts with established patterns, flag it.
The `.agent/issue-context.md` file is the dynamic instruction layer. Iterate
on that file as you discover flaws in the process — the skill reads it fresh every
time. No need to modify this skill to change project-specific behavior.