Agent Skills: /dev-test - Implementation Testing

Automated UI testing using Playwright browser tools. Use when: (1) After completing feature implementation, (2) After fixing bugs, (3) Before code review, (4) Verifying acceptance criteria from specs

ID: codihaus/claude-skills/dev-test

Install this agent skill locally:

pnpm dlx add-skill https://github.com/codihaus/claude-skills/tree/HEAD/skills/dev-test

skills/dev-test/SKILL.md

Skill Metadata

Name
dev-test
Description
Automated UI testing using Playwright browser tools. Use when: (1) After completing feature implementation, (2) After fixing bugs, (3) Before code review, (4) Verifying acceptance criteria from specs

/dev-test - Implementation Testing

Skill Awareness: See skills/_registry.md for all available skills.

Workflow position:

  • After: /dev-coding implementation complete
  • Auto-triggered: by /dev-coding after implementation
  • If it fails: loop back to fix, then re-test

Automated testing using Playwright to verify implementation works correctly.

When to Use

  • After completing feature implementation
  • After fixing bugs
  • Before code review
  • Before committing changes

Usage

/dev-test                         # Test current implementation
/dev-test UC-AUTH-001             # Test specific use case
/dev-test --url http://...        # Test specific URL
/dev-test --fix                   # Auto-fix issues found

What It Tests

| Check | Method | Catches |
|-------|--------|---------|
| Console Errors | browser_console_messages | JS errors, React errors, warnings |
| Network Failures | browser_network_requests | API 4xx/5xx, failed fetches, timeouts |
| Visual State | browser_snapshot | Render errors, missing elements |
| Interactions | browser_click, browser_type | Form failures, broken buttons |
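
The network check in the table can be sketched as a filter over collected requests. The `{ url, status }` shape below is an assumed simplification of what browser_network_requests returns, not the exact MCP payload:

```typescript
// Sketch of the network-failure check. The request shape is an
// assumed simplification of browser_network_requests output.
interface NetworkRequest {
  url: string;
  status: number; // HTTP status code; 0 for a failed/timed-out fetch
}

// Treat 4xx/5xx responses and failed fetches (status 0) as failures.
function findNetworkFailures(requests: NetworkRequest[]): NetworkRequest[] {
  return requests.filter((r) => r.status === 0 || r.status >= 400);
}
```

For example, given a 500 on `/api/login` and a 200 on `/`, only the 500 is kept.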

Expected Outcome

Test report showing whether implementation works correctly.

Report includes:

  • Overall status (Pass/Fail + issue count)
  • Issues found (critical/warning with location + suggested fix)
  • Test steps executed (which passed/failed)
  • Suggested fixes or next actions
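
The overall status line can be derived from the collected issues. The issue shape and exact wording below are illustrative, not the skill's actual report format:

```typescript
// Sketch of deriving the report's overall status. Issue shape and
// wording are illustrative, not the skill's exact output.
interface Issue {
  severity: "critical" | "warning";
  location: string;
  suggestedFix: string;
}

function overallStatus(issues: Issue[]): string {
  const critical = issues.filter((i) => i.severity === "critical").length;
  if (critical > 0) return `Fail (${issues.length} issue(s), ${critical} critical)`;
  if (issues.length > 0) return `Pass with warnings (${issues.length} warning(s))`;
  return "Pass (0 issues)";
}
```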

Success Criteria

  • No console errors during user flows
  • No network failures (API 4xx/5xx)
  • Expected elements render correctly
  • User interactions work as expected
  • Happy path completes successfully
  • All acceptance criteria from spec verified

What to Test

From UC spec:

  • Expected user flows (happy path)
  • Required inputs and outputs
  • Acceptance criteria
  • Error scenarios (if specified)

From recent changes:

  • Modified components/endpoints
  • New functionality added

Error Detection

| Check | What It Catches |
|-------|-----------------|
| Console Errors | JS errors, React errors, warnings, unhandled promises |
| Network Failures | API 4xx/5xx, failed fetches, timeouts |
| Visual State | Render errors, missing elements, wrong state |
| Interactions | Form failures, broken buttons, navigation issues |

Error categories:

  • πŸ”΄ Critical: Breaks functionality, must fix
  • 🟑 Warning: Should fix, not blocking
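
The critical/warning split can be sketched as a classifier over console messages. The `{ type, text }` shape is an assumed simplification of browser_console_messages output:

```typescript
// Sketch of the severity split. The message shape is an assumed
// simplification of browser_console_messages output.
interface ConsoleMessage {
  type: "error" | "warning" | "log";
  text: string;
}

type Severity = "critical" | "warning" | null;

function classify(msg: ConsoleMessage): Severity {
  if (msg.type === "error") return "critical"; // breaks functionality, must fix
  if (msg.type === "warning") return "warning"; // should fix, not blocking
  return null; // plain logs are ignored
}
```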

Test Approach

1. Prepare:

  • Identify what to test (from spec + recent changes)
  • Determine test URL (from spec or default: http://localhost:3000)
  • Verify dev server running (prompt if not)

2. Execute User Flows:

  • Navigate to page
  • Capture initial state (check page loads, no immediate errors)
  • Execute happy path (fill forms, click buttons, navigate)
  • Wait for expected results
  • Capture final state

3. Collect Errors:

  • Console messages (errors, warnings)
  • Network requests (find failures)
  • Visual issues (missing elements)

4. Report:

  • Status (pass/fail + count)
  • Issues with severity, location, suggested fix
  • Test steps executed (which passed/failed)

5. Fix Loop (if --fix):

  • For each issue: read file, identify fix, apply fix, re-test
  • Continue until fixed or max iterations (3) or needs user input
  • Re-run full test to verify
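
The fix loop in step 5 can be sketched as below. `runTests` and `applyFix` are hypothetical callbacks standing in for the actual tool calls (re-running the browser checks, editing files):

```typescript
// Sketch of the --fix loop: re-test after each round of fixes, stop
// when the test run is clean or after 3 iterations. runTests and
// applyFix are hypothetical stand-ins for the real tool calls.
function fixLoop(
  runTests: () => string[], // returns descriptions of remaining issues
  applyFix: (issue: string) => void,
  maxIterations = 3,
): { fixed: boolean; iterations: number } {
  for (let i = 1; i <= maxIterations; i++) {
    const issues = runTests();
    if (issues.length === 0) return { fixed: true, iterations: i };
    for (const issue of issues) applyFix(issue);
  }
  // Max iterations reached: one final run to report the end state.
  return { fixed: runTests().length === 0, iterations: maxIterations };
}
```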

Test Patterns

See references/test-patterns.md for code examples:

  • Form submission, navigation flow, API response validation, error state testing
  • Test data conventions
  • Common issues and solutions (page not loading, element not found, intermittent failures)

Use Playwright MCP tools to interact with the browser.

Integration with /dev-coding

When /dev-coding completes:

## Implementation Complete

Backend: βœ“ API endpoints created
Frontend: βœ“ Components built

**Next Step**: Running /dev-test to verify...

[Auto-triggers /dev-test]

If tests fail, /dev-test can:

  1. Report issues for manual fix
  2. Auto-fix with --fix flag
  3. Re-run until passing

Tools Used

| Tool | Purpose |
|------|---------|
| mcp__playwright__browser_navigate | Go to test URL |
| mcp__playwright__browser_snapshot | Capture page state |
| mcp__playwright__browser_type | Fill form inputs |
| mcp__playwright__browser_click | Click buttons/links |
| mcp__playwright__browser_wait_for | Wait for elements/time |
| mcp__playwright__browser_console_messages | Get JS errors |
| mcp__playwright__browser_network_requests | Get API responses |
| Read | Read spec for expected behavior |
| Edit | Fix issues (with --fix) |

Output Locations

Test reports are informational and displayed inline. No files created unless requested.

For saved reports:

plans/features/{feature}/test-reports/
└── {date}-{UC-ID}.md
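
Building that path can be sketched as below; the helper name and date formatting are illustrative assumptions, not part of the skill:

```typescript
// Sketch of building the saved-report path described above.
// Helper name and date format are illustrative.
function reportPath(feature: string, ucId: string, date: Date): string {
  const iso = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `plans/features/${feature}/test-reports/${iso}-${ucId}.md`;
}
```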