SDLC Orchestration
Coordinate the full software development lifecycle through a structured pipeline of specialised agents.
Pipeline overview
[Project init]
intent → requirements (produces backlog)
[Story cycle - iterative per story]
commit:branch → spec → design → design-review → stubs → test → commit:commit(red) → implement → commit:commit(green) → refactor → code-review → commit:commit(refactor) → user-test → commit:pr → commit:merge
[Release]
deploy → monitor (feedback → backlog)
Manifest location
Store manifest at .sdlc/manifest.yaml in the project root. Create the directory if it does not exist.
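As an illustrative sketch of this convention (Python here, since the project language is configurable; the helper name is an assumption, not part of the skill):

```python
from pathlib import Path

# Manifest lives at .sdlc/manifest.yaml relative to the project root.
MANIFEST_PATH = Path(".sdlc/manifest.yaml")

def ensure_manifest_path(root: Path) -> Path:
    """Create the .sdlc/ directory under the project root if missing,
    and return the full path to the manifest file."""
    path = root / MANIFEST_PATH
    path.parent.mkdir(parents=True, exist_ok=True)
    return path
```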
Core responsibilities
- Initialise project - Gather config, run intent, build initial backlog
- Pick next story - Select highest priority story from backlog
- Run story cycle - Execute phases in sequence with gates
- Spawn agents - Launch phase skills with minimal context
- Gate transitions - Enforce prerequisites before advancing
- Handle failures - Retry, rollback, or escalate as appropriate
- Pause at checkpoints - Await human approval at configured points
- Log decisions - Record choices and rationale to manifest
- Enforce gate conditions - Do not advance to the next phase if a gate condition fails; follow rollback handling in skills/feature/references/tdd-cycle.md
- Track gate results - Record gate pass/fail in manifest per story
- Fresh context for review - Spawn code-review with a new agent invocation receiving only the diff, spec, and standards (not the implementation conversation)
- Pass artifacts by reference - Between skills, pass file paths and summary documents, not full file contents
Commands
The user may invoke orchestration with these patterns:
- `start project` / `init` - Begin new project setup
- `next story` / `continue` - Pick and start next story from backlog
- `resume` - Continue from current phase
- `status` - Show manifest state and progress
- `checkpoint` - Present decision summary for approval
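A minimal dispatch sketch for these invocation patterns (Python for illustration; the action names are assumptions):

```python
# Map user invocation phrases to orchestration actions.
COMMANDS = {
    "start project": "init",
    "init": "init",
    "next story": "next_story",
    "continue": "next_story",
    "resume": "resume",
    "status": "status",
    "checkpoint": "checkpoint",
}

def dispatch(user_input: str) -> str:
    """Resolve a user command to an action; 'unknown' if unrecognised."""
    return COMMANDS.get(user_input.strip().lower(), "unknown")
```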
Project initialisation
When starting a new project:
- Check for existing `.sdlc/manifest.yaml`
- If manifest exists with incomplete init:
  - Detect current init phase from the `init_phase` field
  - Offer to resume from that point or reinitialise
- If manifest exists with complete init, offer to resume or reinitialise
- If no manifest:
  - Ask for project name
  - Create manifest with project name and `init_phase: name_collected`
  - Run the `intent` skill to clarify goals
  - Update manifest with intent output and `init_phase: intent_complete`
  - Gather remaining technical config (language, runtime, git, run commands)
    - Use intent's `coding_standards` as defaults for paradigm/patterns/naming
  - Update manifest with full config and `init_phase: config_complete`
  - Run the `requirements` skill to build backlog
Default configuration
project:
  name: ""
  standards:
    paradigm: "mixed"
    language: ""
    runtime: ""        # e.g., "Bun", "Node", "Python"
    patterns: []
    forbidden: []
    naming: ""
  git:
    strategy: "feature-branch"
    pattern: "story/{id}-{slug}"
    auto_pr: true
  run:                 # How to execute the project locally
    command: ""        # e.g., "./brain", "npm start", "python main.py"
    test: ""           # e.g., "bun test", "npm test", "pytest"
  checkpoints:
    - "post-intent"
    - "post-design-review"
    - "post-code-review"
    - "post-commit"    # User tests before next story
Note: The standards.paradigm, standards.patterns, standards.forbidden, and standards.naming fields may be populated from intent output's coding_standards. Only prompt for these if intent did not capture them.
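This merge behaviour can be sketched as follows (Python for illustration; the function name and dict shapes are assumptions):

```python
def merge_standards(defaults: dict, intent_output: dict) -> dict:
    """Fill standards fields from intent's coding_standards,
    keeping defaults where intent did not capture a value."""
    merged = dict(defaults)
    captured = intent_output.get("coding_standards", {})
    for key in ("paradigm", "patterns", "forbidden", "naming"):
        if captured.get(key):  # only override with non-empty values
            merged[key] = captured[key]
    return merged
```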
Story cycle execution
For each story:
- commit:branch — Create feature branch
- spec — Define contracts, schemas, behaviours
- design — Plan architecture and components
- design-review — Validate design (checkpoint)
- stubs — Create interfaces and types
- test — Write failing tests (RED)
- commit:commit(red) — Commit stubs + tests: "feat(story-id): define interface and test cases"
- implement — Write code to pass tests, create migrations if needed (GREEN)
- commit:commit(green) — Commit implementation + migrations: "feat(story-id): implement to pass tests"
- refactor — Clean up while tests stay green
- code-review — Review with fresh context (checkpoint)
- commit:commit(refactor) — Commit refactored code: "refactor(story-id): improve structure"
- user-test — User manually tests functionality (checkpoint)
- commit:pr — Create pull request
- commit:merge — Merge when approved
For detailed TDD cycle execution (artifact flow, gate conditions, commit strategy,
migration handling, and rollback), see the feature skill's reference document at
skills/feature/references/tdd-cycle.md.
Phase gates
Before advancing to next phase, verify:
| Phase | Gate condition | On failure |
|-------|----------------|------------|
| stubs → test | Code compiles/type-checks | Fix stubs |
| test → commit(red) | New tests fail, existing tests pass | Fix tests |
| commit(red) → implement | Red commit succeeds | Resolve git issues |
| implement → commit(green) | All tests pass (100%), dev DB verified if migrations | Continue implementing |
| commit(green) → refactor | Green commit succeeds | Resolve git issues |
| refactor → code-review | All tests still pass | Revert refactor, retry |
| code-review → commit(refactor) | Review approved or issues resolved | Loop to refactor with review comments |
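A minimal gate-check sketch (Python for illustration; transition and condition keys are assumptions mirroring the table):

```python
# One gate condition per phase transition.
GATES = {
    ("stubs", "test"): "type_check_passes",
    ("test", "commit_red"): "new_tests_fail_existing_pass",
    ("commit_red", "implement"): "red_commit_succeeded",
    ("implement", "commit_green"): "all_tests_pass",
    ("commit_green", "refactor"): "green_commit_succeeded",
    ("refactor", "code-review"): "all_tests_pass",
    ("code-review", "commit_refactor"): "review_approved",
}

def can_advance(current: str, target: str, gate_results: dict) -> bool:
    """Advance only if the gate condition for this transition is recorded as passed."""
    condition = GATES.get((current, target))
    return condition is not None and gate_results.get(condition, False)
```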
Spawning phase agents
Provide each agent with minimal context:
Run {phase} skill with context:
- Story: {story_id} - {story_title}
- Acceptance criteria: {criteria}
- Relevant artifacts: {list from manifest}
- Standards: {project.standards}
Only include artifacts from immediately preceding phases. Do not overload context.
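A sketch of assembling that minimal context (Python for illustration; the dict shapes are assumptions consistent with the manifest schema):

```python
def build_phase_context(story: dict, manifest: dict, preceding: list[str]) -> dict:
    """Assemble minimal context for a phase agent: artifact paths only,
    restricted to immediately preceding phases."""
    artifacts = manifest["stories"][story["id"]]["artifacts"]
    return {
        "story": f'{story["id"]} - {story["title"]}',
        "acceptance_criteria": story["criteria"],
        "artifacts": {k: artifacts[k] for k in preceding if k in artifacts},
        "standards": manifest["project"]["standards"],
    }
```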
Checkpoints
At configured checkpoints:
- Summarise decisions made since last checkpoint
- Present current state and proposed next steps
- Wait for explicit approval before continuing
- Log approval/rejection to manifest
Decision summary format
## Checkpoint: {phase}
### Decisions made
- [{phase}] {decision}: {rationale}
- ...
### Current state
- Story: {id} - {title}
- Phase: {current_phase}
- Branch: {branch_name}
### Next steps
1. {next_phase}: {what it will do}
Approve to continue? [y/n]
User testing (post-commit checkpoint)
IMPORTANT: After committing a story, ALWAYS invoke the user-test skill before proceeding to commit:pr.
The user-test skill reformats the manual_test_script from code-review into the user's preferred format (human checklist or agent prose), presents it for testing, and records pass/fail results. See skills/user-test/SKILL.md for details.
Pass to user-test:
- `manual_test_script` from code-review output
- Story ID and title
- Project run config
Do not advance to commit:pr until user-test returns a passing verdict. If any scenario fails, return to implement.
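This gate can be sketched as (Python for illustration; function names and the scenario-results shape are assumptions):

```python
def user_test_verdict(scenarios: dict[str, bool]) -> str:
    """'pass' only if every manual test scenario passed; empty results fail."""
    return "pass" if scenarios and all(scenarios.values()) else "fail"

def phase_after_user_test(verdict: str) -> str:
    """Gate commit:pr on a passing verdict; any failure returns to implement."""
    return "commit:pr" if verdict == "pass" else "implement"
```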
Failure handling
When a phase fails:
- Retry - Attempt phase again with same context (max 2 retries)
- Rollback - Revert to previous phase state if retry fails
- Escalate - Pause for human intervention with failure details
Log all failures and recovery actions to manifest.
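The escalation ladder can be sketched as (Python for illustration; the attempt-counting convention is an assumption):

```python
def failure_action(attempt: int, max_retries: int = 2) -> str:
    """Choose recovery for a failed phase: retry (max 2), then rollback,
    then escalate to a human with failure details."""
    if attempt < max_retries:
        return "retry"
    if attempt == max_retries:
        return "rollback"
    return "escalate"
```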
Manifest management
Update manifest after each phase:
manifest:
  project:
    name: "project-name"
    init_phase: "config_complete"  # name_collected | intent_complete | config_complete
    standards: {}
    # ... rest of config
  intent: {}             # Intent skill output
  backlog: []            # Prioritised story list
  current_story: null    # Active story ID
  stories:
    US-001:
      status: "in-progress"  # complete | in-progress | blocked
      phase: "design"        # Current phase
      branch: "story/US-001-user-login"
      artifacts:
        spec: ".sdlc/stories/US-001/spec.md"
        design: ".sdlc/stories/US-001/design.md"
        stubs: []            # file paths produced by stubs skill
        tests: []            # file paths produced by test skill
        implementation: []   # file paths produced by implement skill
        migrations: []       # migration file paths (if any)
      review_verdict: ""     # approved | changes_requested | blocked
      decisions:
        - phase: "design"
          decision: "Using event sourcing"
          rationale: "Requirement R3 needs full history"
      gate_results:
        red_verified: false       # new tests fail, existing pass
        green_verified: false     # all tests pass after implementation
        refactor_verified: false  # all tests still pass after refactor
        review_approved: false    # review verdict is "approved"
  releases: []
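A sketch of the per-phase update (Python for illustration; the function name is an assumption, and artifacts are keyed by phase for simplicity):

```python
def record_phase_result(manifest: dict, story_id: str, phase: str,
                        artifact_paths: list[str], gate_key: str, passed: bool) -> dict:
    """Record phase completion, produced artifact paths, and the gate result."""
    story = manifest["stories"][story_id]
    story["phase"] = phase
    story["artifacts"].setdefault(phase, []).extend(artifact_paths)
    story["gate_results"][gate_key] = passed
    return manifest
```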
Artifact storage
Store phase outputs in .sdlc/stories/{story-id}/:
- `spec.md` - Specification
- `design.md` - Design document
- `stubs/` - Interface definitions
- `tests/` - Test files (may also live in project test directory)
- `review-notes.md` - Review feedback
Phase contracts
See references/phase-contracts.md for detailed input/output specifications for each phase.
Integration with existing skills
- intent - Use as-is for project initialisation
- commit - Extend to support branch/commit/pr/merge subcommands
- reconcile - Use for drift detection and state synchronisation
Drift detection
Before advancing to next phase or picking next story, check for divergence between manifest and reality.
When to check for drift
- Before picking next story - Ensure previous work is properly recorded
- At phase transitions - Verify current phase is accurate before advancing
- When resuming after pause - Catch any manual changes made outside the pipeline
- On explicit `status` command - Include drift warnings in status output
Detection triggers
Check for these conditions:
1. Uncommitted files matching current story scope
2. Branch doesn't match expected pattern for current_story
3. Manifest phase contradicts file state (e.g., "design" but implementation exists)
4. Stale branches from completed stories
5. Artifacts referenced in manifest that don't exist
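These checks can be sketched as (Python for illustration; the observed repo-state keys are assumptions, not a defined interface):

```python
def detect_drift(manifest: dict, repo_state: dict) -> list[str]:
    """Compare manifest expectations against observed repository state."""
    findings = []
    story = manifest.get("current_story")
    expected_branch = manifest["stories"][story]["branch"] if story else None
    if expected_branch and repo_state.get("branch") != expected_branch:
        findings.append(f"branch mismatch: expected {expected_branch}")
    if repo_state.get("uncommitted_files"):
        findings.append("uncommitted files in story scope")
    for path in repo_state.get("missing_artifacts", []):
        findings.append(f"artifact missing: {path}")
    return findings
```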
Integration workflow
[Pipeline action requested]
│
▼
[Check for drift]
│
├── No drift → Continue pipeline
│
└── Drift detected
│
▼
[Pause pipeline]
│
▼
[Invoke /reconcile --report]
│
▼
[Present findings to user]
│
▼
[Offer choices]
│
├── Apply corrections → Run reconcile
│
├── Continue anyway → Resume pipeline
│
└── Abort → Stop and investigate
Example drift handling
When drift is detected before picking next story:
## Drift detected
Before picking the next story, reconciliation found issues:
### Divergences
- **Phase drift** (US-004): Manifest says "design", but implementation files exist
- **Uncommitted files**: scoring.py, triage.py, writeback.py
- **Stale branch**: story/US-003-match-gmail-correspondence (merged)
### Options
1. **Reconcile first** - Fix divergences before continuing
2. **Continue anyway** - Pick next story despite drift
3. **Investigate** - Pause to manually review state
Which would you like to do?
Severity thresholds
| Severity | Action |
|----------|--------|
| High (uncommitted implementation, status mismatch) | Block pipeline, require reconcile |
| Medium (phase drift, branch mismatch) | Warn, offer to reconcile |
| Low (stale branches, orphan artifacts) | Note in status, continue |
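As a sketch of these thresholds (Python for illustration; the severity and action labels are assumptions):

```python
# Pipeline behaviour per drift severity.
SEVERITY_ACTION = {
    "high": "block",    # require reconcile before continuing
    "medium": "warn",   # offer to reconcile
    "low": "note",      # record in status output, continue
}

def drift_action(severity: str) -> str:
    """Map a drift severity to pipeline behaviour; unknown severities warn."""
    return SEVERITY_ACTION.get(severity, "warn")
```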
Notes
- Keep context tight: each phase receives only what it needs
- Follow TDD flow: stubs → tests (red) → implement (green) → refactor
- Support both solo and parallel multi-agent execution
- Manifest is the source of truth for pipeline state