Skill Variant: Use this skill for interactive test writing where the user is actively engaged and can provide feedback. For autonomous test generation, use `tasks-test-generation` instead.
# Test Case Generation

## Summary
Goal: Generate comprehensive BDD test cases (Given/When/Then) with full traceability and 100% business workflow coverage.
| Step | Action | Key Notes |
|------|--------|-----------|
| 1 | External memory analysis | Build knowledge model in .ai/workspace/analysis/[feature].analysis.md |
| 2 | Overall analysis | End-to-end workflows, architectural patterns, integration points |
| 3 | Approval gate | Present test plan -- DO NOT proceed without explicit approval |
| 4 | Execution | Write test cases in 4 priority groups (Critical/High/Medium/Low) |
| 5 | Review TOC | Update Table of Contents with sub-section links |
Key Principles:
- Evidence-based testing -- base test cases on actual code behavior, not assumptions
- TC-XXX format with Given/When/Then, linked bidirectionally to requirements (BR-XXX)
- Must read `anti-hallucination-protocol.md` before executing
You are to operate as an expert full-stack QA engineer and SDET: analyze features and generate comprehensive test cases (Given/When/Then) with full bidirectional traceability and assurance of 100% business workflow coverage.
IMPORTANT: Always think hard and plan a step-by-step to-do list before executing.
Prerequisites: You MUST read `.claude/skills/shared/anti-hallucination-protocol.md` before executing.
## PHASE 1: EXTERNAL MEMORY-DRIVEN TEST ANALYSIS
Build a structured knowledge model in `.ai/workspace/analysis/[feature-name].analysis.md`.
### PHASE 1A: INITIALIZATION AND DISCOVERY
- Initialize the analysis file with standard headings
- Run discovery searches for all feature-related files
- Prioritize: Domain Entities, Commands, Queries, Event Handlers, Controllers, Background Jobs, Consumers, Frontend Components
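The "standard headings" are not enumerated in this skill; one plausible skeleton for the analysis file (the section names are an assumption, not a requirement) is:

```markdown
# [Feature Name]: Test Analysis

## Discovery
<!-- files found, grouped by priority (entities, commands, queries, ...) -->

## Knowledge Graph
<!-- one entry per analyzed file -->

## Overall Analysis
<!-- end-to-end workflows, architectural patterns, integration points -->

## Test Plan
<!-- coverage analysis presented at the approval gate -->
```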
### PHASE 1B: SYSTEMATIC FILE ANALYSIS FOR TESTING
IMPORTANT: This MUST be done with a to-do list.
For each file, document in `## Knowledge Graph`:
- Standard fields, plus testing-specific fields: `coverageTargets`, `edgeCases`, `businessScenarios`, `detailedFunctionalRequirements`, `detailedTestCases` (Given/When/Then)
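For illustration, a hypothetical Knowledge Graph entry (the file path and all field values are invented):

```markdown
### src/auth/LoginCommand.cs
- purpose: Validates credentials and issues a session token
- coverageTargets: credential validation, lockout counter, token issuance
- edgeCases: empty password, locked account, clock skew on token expiry
- businessScenarios: successful login; login after lockout window expires
- detailedFunctionalRequirements: BR-001, BR-002
- detailedTestCases:
  - Given a locked account, When valid credentials are submitted, Then the attempt is rejected
```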
## PHASE 2: OVERALL ANALYSIS
Write comprehensive summary: end-to-end workflows, architectural patterns, business logic workflows, integration points.
## PHASE 3: APPROVAL GATE
CRITICAL: Present test plan with coverage analysis for explicit approval. DO NOT proceed without it.
## PHASE 4: EXECUTION
Write test cases and coverage analysis into `.ai/workspace/specs/[feature-name].ai_spec_doc.md`.
Generate test cases in 4 priority groups: Critical, High, Medium, Low.
### Test Case Format

```markdown
#### TC-001: [Test Case Name]
**Feature Module:** [Module]
**Business Requirement:** BR-XXX
**Priority:** Critical/High/Medium/Low

**Given** [initial context]
**And** [additional context]
**When** [action performed]
**Then** the system should:
- [Expected outcome 1]
- [Expected outcome 2]

**Test Data:**
- [Required test data]

**Edge Cases to Validate:**
- [Edge case 1]
```
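For illustration, a filled-in example in this format (the feature, module, and requirement IDs are hypothetical):

```markdown
#### TC-001: Reject login for a locked account
**Feature Module:** Authentication
**Business Requirement:** BR-002
**Priority:** Critical
**Given** a user account locked after 5 failed login attempts
**And** the lockout window has not yet expired
**When** the user submits valid credentials
**Then** the system should:
- Reject the login attempt
- Leave the lockout counter unchanged
**Test Data:**
- Account `user@example.com` with lockedUntil set 10 minutes in the future
**Edge Cases to Validate:**
- Login attempt at the exact moment the lockout expires
```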
## PHASE 5: Review Table of Contents
Update `## Table of Contents` with detailed sub-section links.
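A sketch of what the updated Table of Contents might look like (the section names and GitHub-style anchor slugs are assumptions):

```markdown
## Table of Contents
- [Critical Priority Test Cases](#critical-priority-test-cases)
  - [TC-001: Reject login for a locked account](#tc-001-reject-login-for-a-locked-account)
- [High Priority Test Cases](#high-priority-test-cases)
- [Medium Priority Test Cases](#medium-priority-test-cases)
- [Low Priority Test Cases](#low-priority-test-cases)
- [Coverage Analysis](#coverage-analysis)
```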
## Test Case Guidelines
- Evidence-based testing: Base test cases on actual code behavior
- Complete coverage: Cover all conditional logic paths
- Component tracing: Include workflow between components
- Priority classification: Critical (P0), High (P1), Medium (P2), Low (P3)
- BDD format: Use Given/When/Then consistently
- Traceability: Link test cases to requirements bidirectionally
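The bidirectional-traceability rule can be spot-checked mechanically. A minimal sketch in Python, assuming test cases follow the `#### TC-XXX` / `**Business Requirement:** BR-XXX` format above (the `coverage_gaps` helper is hypothetical, not part of the skill):

```python
import re

def coverage_gaps(spec_text: str, requirements: set[str]) -> dict:
    """Report BR-XXX requirements with no linked test case, and
    TC-XXX test cases with no linked requirement."""
    tc_to_br = {}
    # Split the spec into one block per "#### TC-" heading
    for block in re.split(r"(?=#### TC-)", spec_text):
        tc = re.search(r"TC-\d{3}", block)
        if not tc:
            continue
        br = re.search(r"Business Requirement:\*{0,2}\s*(BR-\d{3})", block)
        tc_to_br[tc.group()] = br.group(1) if br else None
    covered = {br for br in tc_to_br.values() if br}
    return {
        "uncovered_requirements": requirements - covered,
        "unlinked_test_cases": {tc for tc, br in tc_to_br.items() if br is None},
    }

spec = """#### TC-001: Reject login for a locked account
**Business Requirement:** BR-002
#### TC-002: Session timeout
"""
gaps = coverage_gaps(spec, {"BR-001", "BR-002"})
# BR-001 has no test case; TC-002 has no requirement link
```

A check like this can be run before the Phase 3 approval gate to back the coverage claim with evidence rather than assertion.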
## Related
`qa-engineer`, `test-specs-docs`, `tasks-test-generation`, `debug`
## IMPORTANT Task Planning Notes (MUST FOLLOW)
- Always plan and break work into many small todo tasks
- Always add a final review todo task to verify work quality and identify fixes/enhancements