Agent Skills: Testing Standards

Testing standards and TDD workflow. Use when writing tests, running test suites, implementing TDD, or organizing test files. Covers unit vs integration test separation, pytest patterns, and test-driven development methodology.

ID: tianjianjiang/smith/smith-tests

Install this agent skill locally:

pnpm dlx add-skill https://github.com/tianjianjiang/smith/tree/HEAD/smith-tests

Skill Files

smith-tests/SKILL.md

Testing Standards

<metadata>
  • Load if: Writing tests, running test suites, TDD
  • Prerequisites: @smith-principles/SKILL.md, @smith-standards/SKILL.md, @smith-python/SKILL.md
</metadata>

CRITICAL (Primacy Zone)

<required>
  • MUST mirror source structure: foo/bar/xyz.py → tests/unit/foo/bar/test_xyz.py
  • MUST use pytest functions (not classes) - see @smith-python/SKILL.md
  • MUST separate unit (tests/unit/) and integration (tests/integration/) tests
  • MUST use virtual env runner for pytest (poetry run or uv run)
  • MUST write tests BEFORE implementation (TDD)
  • MUST run full test suite proactively after code changes — do not wait for the user to ask
  • MUST run the module's quality gate before reporting completion — linting, type checks, and tests (make quality > project CI script > run individually; see @smith-dev/SKILL.md Pre-PR Quality Gates)
</required>

<forbidden>
  • NEVER use pytest -m "not integration" across the whole tests/ tree when the folder structure is mirrored (collecting identically named test modules from both trees causes import conflicts); run pytest tests/unit/ directly instead
  • NEVER write implementation before tests
  • NEVER skip running tests after changes
</forbidden>

Test Organization

Unit:

  • Location: tests/unit/
  • Characteristics: Mock dependencies, fast

Integration:

  • Location: tests/integration/
  • Characteristics: Real services, @pytest.mark.integration
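
The split above can be sketched as two pytest-style test functions. The module and names (`charge`, `test_payments.py`) are hypothetical, chosen only to illustrate a mocked unit test next to a marked integration test:

```python
import pytest
from unittest import mock

# Hypothetical layout for illustration: source foo/bar/payments.py would be
# mirrored by tests/unit/foo/bar/test_payments.py.
def charge(gateway, amount):
    # Stand-in for application code that calls an external service.
    return gateway.charge(amount)

def test_charge_unit():
    # Unit style (tests/unit/): the gateway dependency is mocked, so the
    # test is fast and needs no real service.
    gateway = mock.Mock()
    gateway.charge.return_value = "ok"
    assert charge(gateway, 10) == "ok"
    gateway.charge.assert_called_once_with(10)

@pytest.mark.integration
def test_charge_roundtrip():
    # Integration style (tests/integration/): would hit a real gateway;
    # the marker lets pytest select or deselect it by suite.
    pass
```

Registering the `integration` marker in pyproject.toml (or pytest.ini) avoids unknown-marker warnings.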

TDD Workflow

  1. Understand: Read existing test patterns
  2. Design: Write failing tests defining expected behavior
  3. Implement: Write minimal code to pass tests
  4. Verify: Run tests, validate coverage
  5. Refactor: Improve code while keeping tests green
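
A minimal sketch of steps 2-3, using a hypothetical `slugify()` helper: the failing test is written first, then just enough code to make it pass:

```python
# Step 2 (Design): write the failing test first; slugify() does not exist yet,
# so running this test at that point fails with a NameError.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 3 (Implement): the minimal code that makes the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

# Step 5 (Refactor): improve internals freely while the test stays green.
```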

Environment Configuration

  • tests/conftest.py disables tracking (OPIK, etc.)
  • Virtual env runners load .env automatically
  • Use .env.example as template (NEVER commit .env)
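
A conftest.py that disables tracking might look like the sketch below. `OPIK_TRACK_DISABLE` is Opik's documented kill switch, but treat the exact variable name, and which trackers your project needs to silence, as assumptions to verify against your setup:

```python
# tests/conftest.py -- minimal sketch; adjust variable names to your trackers.
import os

# Set before application code is imported so no tracking spans are emitted.
# setdefault() lets an explicit environment override win if one is present.
os.environ.setdefault("OPIK_TRACK_DISABLE", "true")
```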

Claude Code Plugin Integration

<context>

When pr-review-toolkit is available:

  • pr-test-analyzer agent: Analyzes behavioral coverage, identifies critical gaps
  • Rates test gaps 1-10 (10 = critical, must add)
  • Trigger: "Check if the tests are thorough" or use Task tool
</context>

Ralph Loop Integration

<context>

TDD = Ralph iteration: test → implement → pytest → iterate until <promise>TESTS PASS</promise>.

See @smith-ralph/SKILL.md for full patterns.

</context>

<related>
  • @smith-python/SKILL.md - Python testing patterns (pytest functions)
  • @smith-playwright/SKILL.md - Playwright failure monitoring
  • @smith-dev/SKILL.md - Development workflow (quality gates)
  • @smith-principles/SKILL.md - Core principles
</related>

ACTION (Recency Zone)

<required>

Run tests (use project's virtual env runner):

VENV_RUNNER pytest tests/unit/ -v
VENV_RUNNER pytest tests/integration/ -v

VENV_RUNNER = poetry run, uv run, etc. — check project CLAUDE.md or pyproject.toml.

Success criteria:

  • All new functionality has tests
  • Test names follow project conventions
  • Tests are isolated and deterministic
  • No regressions in existing tests
</required>
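
One way to keep a test deterministic, sketched with a hypothetical `sample_ids()` helper: inject a seeded RNG instead of touching the global `random` state, so reruns always see the same sequence:

```python
import random

def sample_ids(k, rng):
    # Draw k pseudo-random ids from the injected RNG (no global state).
    return [rng.randrange(1000) for _ in range(k)]

def test_sample_ids_is_deterministic():
    # Same seed, same sequence: the test can never flake on rerun.
    assert sample_ids(3, random.Random(42)) == sample_ids(3, random.Random(42))
```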