## Context

- Project indicators: !`find . -maxdepth 1 \( -name 'pyproject.toml' -o -name 'package.json' -o -name 'Cargo.toml' -o -name 'go.mod' \)`
- Test directories: !`find . -maxdepth 1 -type d \( -name 'tests' -o -name 'test' -o -name '__tests__' -o -name 'spec' \)`
- Package.json test script: !`grep -A2 '"test"' package.json`
- Pytest config: !`grep -A5 '\[tool.pytest' pyproject.toml`
## Parameters

- `$1`: Optional test pattern or specific test file/directory
- `--coverage`: Enable coverage reporting
- `--watch`: Run tests in watch mode
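For illustration, assuming this command file is installed under the name `/run-tests` (a hypothetical name; the actual name depends on this file's name, which is not shown here), invocations might look like:

```
/run-tests                         # run the full suite
/run-tests tests/unit --coverage   # run tests under tests/unit with coverage
/run-tests test_api.py --watch     # re-run test_api.py on file changes
```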
## Your task

Delegate this task to the test-runner agent.

Use the Agent tool with `subagent_type: test-runner` to run tests with the appropriate framework. Pass all the context gathered above and the parsed parameters to the agent.
The test-runner agent should:

1. Detect the project type and test framework:
   - Python: pytest, unittest, nose
   - Node.js: vitest, jest, mocha
   - Rust: cargo test
   - Go: go test

2. Run the appropriate test command:
   - Apply the test pattern if provided
   - Enable coverage if requested
   - Enable watch mode if requested

3. Analyze the results:
   - Parse test output for pass/fail counts
   - Identify failing tests with clear error messages
   - Extract coverage metrics if available

4. Provide a concise summary in this format:

   ```
   Tests: [PASS|FAIL]
   Passed: X | Failed: Y | Duration: Zs
   Failures (if any):
   - test_name: Brief error (file:line)
   Coverage: XX% (if requested)
   ```

5. Suggest next actions:
   - If failures: specific fix recommendations
   - If coverage gaps: areas needing tests
   - If slow: optimization suggestions
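The detection and command-selection steps can be sketched in shell. The mapping from indicator file to base command below is an assumption drawn from the frameworks listed above, not the agent's actual implementation, and the `COVERAGE` variable is an illustrative stand-in for the parsed `--coverage` flag:

```shell
#!/bin/sh
# Sketch: map a project indicator file to a baseline test command.
# Assumed mapping; the test-runner agent may choose differently
# (e.g. vitest over "npm test" when a vitest config is present).
detect_test_cmd() {
  case "$1" in
    pyproject.toml) echo "pytest" ;;
    package.json)   echo "npm test" ;;
    Cargo.toml)     echo "cargo test" ;;
    go.mod)         echo "go test ./..." ;;
    *)              echo "unknown" ;;
  esac
}

# Parsed parameters would then be appended, e.g. for coverage:
cmd=$(detect_test_cmd pyproject.toml)
[ -n "$COVERAGE" ] && cmd="$cmd --cov"   # --cov assumes pytest-cov is installed
echo "$cmd"
```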
Provide the agent with:
- All context from the section above
- The parsed parameters (pattern, `--coverage`, `--watch`)
- Any specific test configuration detected
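As a sketch of the result-analysis step, here is one way to pull pass/fail counts out of a pytest-style summary line. The sample line is hypothetical output, and a real implementation would need to handle each framework's own output format:

```shell
#!/bin/sh
# Sketch: extract pass/fail counts from a pytest-style summary line,
# e.g. "==== 3 failed, 12 passed in 1.24s ====".
parse_summary() {
  passed=$(printf '%s' "$1" | grep -o '[0-9]* passed' | grep -o '[0-9]*')
  failed=$(printf '%s' "$1" | grep -o '[0-9]* failed' | grep -o '[0-9]*')
  echo "Passed: ${passed:-0} | Failed: ${failed:-0}"
}

parse_summary '==== 3 failed, 12 passed in 1.24s ===='   # -> Passed: 12 | Failed: 3
```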
The agent has expertise in:
- Multi-framework test execution
- Test failure analysis and debugging
- Coverage reporting and gap identification
- Tiered test execution (unit, integration, e2e)