sadd:do-competitively
Execute tasks through competitive multi-agent generation, meta-judge evaluation specification, multi-judge evaluation, and evidence-based synthesis
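For illustration, a minimal Python sketch of this pattern, assuming a generic call_model(prompt, model) helper around whatever LLM client is in use; the prompts, scoring scheme, and two-candidate synthesis are invented, not the skill's actual implementation:

```python
def call_model(prompt: str, model: str) -> str:
    raise NotImplementedError  # wire up your LLM client here

def do_competitively(task: str, generators: list[str], judges: list[str]) -> str:
    # Competitive generation: each generator model attempts the task independently.
    candidates = [call_model(task, m) for m in generators]
    # Multi-judge evaluation: every judge scores every candidate; scores are summed.
    scores = [sum(float(call_model("Score 0-10, digits only:\n" + c, j)) for j in judges)
              for c in candidates]
    # Evidence-based synthesis: merge the strongest candidates into one answer.
    best = [c for _, c in sorted(zip(scores, candidates), reverse=True)[:2]]
    return call_model("Synthesize the best parts of:\n---\n" + "\n---\n".join(best),
                      generators[0])
```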
sadd:do-in-parallel
Launch multiple sub-agents in parallel to execute tasks across files or targets with intelligent model selection, quality-focused prompting, and meta-judge → LLM-as-a-judge verification
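A rough sketch of the fan-out shape, under the same assumed call_model helper; the worker count, prompts, and pass/fail filtering are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str, model: str) -> str:
    raise NotImplementedError  # wire up your LLM client here

def do_in_parallel(task: str, targets: list[str], model: str) -> dict[str, str]:
    # Fan out: one sub-agent per target file.
    with ThreadPoolExecutor(max_workers=min(8, max(1, len(targets)))) as pool:
        results = dict(zip(targets, pool.map(
            lambda t: call_model(task + "\nTarget: " + t, model), targets)))
    # Meta-judge derives the criteria once; a judge then checks each result.
    criteria = call_model("List pass/fail criteria for: " + task, model)
    return {t: r for t, r in results.items()
            if call_model(criteria + "\nResult:\n" + r + "\nPass or fail?", model)
               .strip().lower().startswith("pass")}
```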
sadd:do-in-steps
Execute complex tasks through sequential sub-agent orchestration with intelligent model selection and meta-judge → LLM-as-a-judge verification

sadd:judge-with-debate
Evaluate solutions through multi-round debate between independent judges until consensus
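The debate loop might look roughly like this (assumed structure, with a hypothetical call_model wrapper; the accept/reject string parsing is a deliberate simplification):

```python
def call_model(prompt: str, model: str) -> str:
    raise NotImplementedError  # wire up your LLM client here

def judge_with_debate(solution: str, judges: list[str], max_rounds: int = 3) -> str:
    transcript = ""
    for _ in range(max_rounds):
        verdicts = [call_model("Judge this solution:\n" + solution
                               + "\nDebate so far:\n" + transcript
                               + "\nGive a verdict (accept/reject) and your reasoning:", j)
                    for j in judges]
        transcript += "\n".join(verdicts) + "\n"  # judges see each other's arguments next round
        labels = {"accept" if "accept" in v.lower() else "reject" for v in verdicts}
        if len(labels) == 1:   # consensus: all judges agree
            return labels.pop()
    return "no-consensus"      # escalate to a tie-breaker after max_rounds
```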
sadd:judge
Launch a meta-judge then a judge sub-agent to evaluate results produced in the current conversation
sadd:launch-sub-agent
Launch an intelligent sub-agent with automatic model selection based on task complexity, specialized agent matching, Zero-shot CoT reasoning, and mandatory self-critique verification
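A sketch of complexity-based routing; the tier names, 1-3 rating prompt, and self-critique wording are assumptions for illustration:

```python
def call_model(prompt: str, model: str) -> str:
    raise NotImplementedError  # wire up your LLM client here

TIERS = ["haiku", "sonnet", "opus"]  # cheap -> capable; tier names assumed

def launch_sub_agent(task: str) -> str:
    # Route by complexity: a cheap model rates the task 1-3.
    rating = call_model("Rate complexity 1-3, digit only: " + task, TIERS[0])
    model = TIERS[min(2, max(0, int(rating.strip()) - 1))]
    # Zero-shot CoT reasoning, then mandatory self-critique.
    draft = call_model(task + "\nLet's think step by step.", model)
    return call_model("Critique your answer, then give a corrected final version:\n" + draft,
                      model)
```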
sadd:multi-agent-patterns
Design multi-agent architectures for complex tasks. Use when single-agent context limits are exceeded, when tasks decompose naturally into subtasks, or when specializing agents improves quality.
sadd:subagent-driven-development
Use when executing implementation plans with independent tasks in the current session, or when facing 3+ independent issues that can be investigated without shared state or dependencies; dispatches a fresh subagent for each task, with code review between tasks, enabling fast iteration with quality gates
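The dispatch loop this implies could be sketched as follows; the review gate and single-retry policy are assumptions, not the skill's documented behavior:

```python
def call_model(prompt: str, model: str) -> str:
    raise NotImplementedError  # wire up your LLM client here

def run_plan(tasks: list[str], model: str = "sonnet") -> list[str]:
    done = []
    for task in tasks:  # fresh subagent per task: no shared context carries over
        patch = call_model("Implement this task:\n" + task, model)
        review = call_model("Code-review this patch. Reply OK or list issues:\n" + patch, model)
        if not review.strip().lower().startswith("ok"):  # quality gate between tasks
            patch = call_model("Fix these issues:\n" + review + "\nPatch:\n" + patch, model)
        done.append(patch)
    return done
```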
sadd:tree-of-thoughts
Execute tasks through systematic exploration, pruning, and expansion using Tree of Thoughts methodology with meta-judge evaluation specifications and multi-agent evaluation
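A compact beam-search sketch of the ToT loop; the depth, beam, and width defaults and all prompts are placeholders, not the skill's evaluation spec:

```python
def call_model(prompt: str, model: str) -> str:
    raise NotImplementedError  # wire up your LLM client here

def tree_of_thoughts(task: str, model: str,
                     depth: int = 3, width: int = 3, beam: int = 2) -> str:
    frontier = [""]  # partial solution paths
    for _ in range(depth):
        children = []
        for path in frontier:
            for _ in range(width):  # expansion: propose candidate next steps
                step = call_model(task + "\nPath so far:\n" + path
                                  + "\nPropose the next step:", model)
                children.append(path + "\n" + step)
        # Evaluation + pruning: keep the top `beam` paths by judged score.
        children.sort(key=lambda c: float(call_model("Score 0-10, digits only:\n" + c, model)),
                      reverse=True)
        frontier = children[:beam]
    return frontier[0]
```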
sdd:add-task
Creates a draft task file in .specs/tasks/draft/ capturing the original user intent
sdd:brainstorm
Use when creating or developing, before writing code or implementation plans; refines rough ideas into fully formed designs through collaborative questioning, alternative exploration, and incremental validation. Don't use for straightforward 'mechanical' processes
sdd:create-ideas
Generate ideas in one shot using creative sampling
sdd:implement
Implement a task with automated LLM-as-Judge verification for critical steps
sdd:plan
Refine, parallelize, and verify a draft task specification into a fully planned implementation-ready task
tdd:fix-tests
Systematically fix all failing tests after business logic changes or refactoring
tdd:test-driven-development
Use when implementing any feature or bugfix, before writing implementation code: write the test first, watch it fail, then write minimal code to pass; requiring failure first ensures the test actually verifies behavior
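A tiny red-green illustration with a hypothetical slugify feature (pytest-style; not taken from the skill):

```python
import re

# Step 1 (red): write the test first. Run pytest now and it fails with
# NameError, because slugify does not exist yet.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write the minimal code that makes the test pass.
def slugify(text: str) -> str:
    return re.sub(r"\s+", "-", text.strip()).lower()
```

Seeing the test fail first proves it exercises the behavior rather than passing vacuously.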
tdd:write-tests
Systematically add test coverage for all local code changes using specialized review and development agents. Adds tests for uncommitted changes (including untracked files); if everything is committed, covers the latest commit.
tech-stack:add-typescript-best-practices
Set up TypeScript best practices and code style rules in CLAUDE.md
prompt-engineering
Advanced prompt engineering techniques for optimal AI responses. Use this when crafting prompts, optimizing AI interactions, or designing system prompts for applications.
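As a flavor of the techniques involved, an illustrative prompt scaffold stacking role assignment, grounding constraints, a one-shot example, and output-format pinning; the wording is an example, not a canonical template from the skill:

```python
def build_review_prompt(code: str) -> str:
    return (
        "You are a senior Python reviewer.\n"          # role assignment
        "Rules: cite the line you comment on; "
        "if unsure, say so instead of guessing.\n"     # grounding + hedging constraints
        "Example -> input: def f(x): return x / 0 | "
        "output: line 1: division by zero\n"           # one-shot demonstration
        'Respond as JSON: {"comments": [...]}\n\n'     # pin the output format
        "Code to review:\n" + code
    )
```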