Agent Skills: Test Reviewer

Review test quality, coverage, and completeness. Analyze tests for missing edge cases, assertion quality, and coverage gaps.

ID: majesticlabs-dev/majestic-marketplace/test-reviewer

Install this agent skill into your local environment:

pnpm dlx add-skill https://github.com/majesticlabs-dev/majestic-marketplace/tree/HEAD/plugins/majestic-engineer/skills/test-reviewer

Skill Files

Browse the full folder contents for test-reviewer.

plugins/majestic-engineer/skills/test-reviewer/SKILL.md

Skill Metadata

Name
test-reviewer
Description
Review test quality, coverage, and completeness. Analyze tests for missing edge cases, assertion quality, and coverage gaps.

Test Reviewer

Audience: Developers reviewing test suites for quality and completeness.

Goal: Analyze automated tests for coverage completeness, assertion quality, and adherence to testing best practices.

Review Steps

1. Identify Test Context

  • Run git status and git diff to identify changed files and their corresponding test files
  • Use Grep and Glob to locate test files related to the changed code
  • Identify the testing framework (RSpec, Minitest, Jest, Playwright)
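
For example, a minimal Node/TypeScript sketch of this step could shell out to git and guess where matching Jest test files live. The .test.ts and __tests__ naming conventions below are assumptions about the project layout, not something this skill prescribes.

```ts
// Hypothetical helper: map changed source files to the Jest test files that should cover them.
import { execSync } from "node:child_process";
import { existsSync } from "node:fs";

// Changed TypeScript source files in the working tree (test files excluded).
function changedSourceFiles(): string[] {
  const out = execSync("git diff --name-only HEAD", { encoding: "utf8" });
  return out.split("\n").filter((f) => f.endsWith(".ts") && !f.includes(".test."));
}

// Where a test for this file would live under common Jest layouts (an assumption).
function candidateTestFiles(sourceFile: string): string[] {
  const slash = sourceFile.lastIndexOf("/");
  const dir = slash === -1 ? "." : sourceFile.slice(0, slash);
  const name = sourceFile.slice(slash + 1).replace(/\.ts$/, "");
  return [`${dir}/${name}.test.ts`, `${dir}/__tests__/${name}.test.ts`].filter(existsSync);
}

for (const file of changedSourceFiles()) {
  const tests = candidateTestFiles(file);
  console.log(tests.length ? `${file} -> ${tests.join(", ")}` : `${file} -> no test file found`);
}
```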

2. Map Implementation to Tests

  • Read the implementation files being tested
  • Identify all public methods, endpoints, and behaviors that should have test coverage
  • Create a coverage matrix mapping implementation features to test cases
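
As a sketch only, the coverage matrix can be kept as one structured record per feature; the TypeScript shape and the example row below are illustrative, not a format this skill mandates.

```ts
// Illustrative shape for the coverage matrix; the statuses and the example row are hypothetical.
type CoverageStatus = "covered" | "partial" | "missing";

interface CoverageEntry {
  feature: string;             // public method, endpoint, or behavior
  implementationFile: string;
  testFiles: string[];         // empty array = coverage gap
  happyPath: CoverageStatus;
  sadPath: CoverageStatus;
  edgeCases: CoverageStatus;
}

const matrix: CoverageEntry[] = [
  {
    feature: "OrdersController#create",
    implementationFile: "app/controllers/orders_controller.rb",
    testFiles: ["spec/requests/orders_spec.rb"],
    happyPath: "covered",
    sadPath: "partial",
    edgeCases: "missing",
  },
];

console.log(matrix.filter((e) => e.testFiles.length === 0)); // features with no tests at all
```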

3. Analyze Test Coverage

For each implementation feature, check if tests exist for:

  • Happy Path: Normal successful execution flows
  • Sad Path: Error conditions, validation failures, edge cases
  • Boundary Conditions: Empty values, nil/null, maximum limits
  • State Transitions: Before/after states, side effects
  • Authorization: Permission checks (if applicable)
  • Integration Points: External service calls, database operations
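
To make the checklist concrete, here is a hedged Jest/TypeScript sketch covering a happy path, a sad path, and boundary conditions for a hypothetical applyDiscount function (the module and its error type are assumptions).

```ts
// Hedged Jest sketch: applyDiscount(total, percent) and its error behavior are hypothetical.
import { applyDiscount } from "./pricing";

describe("applyDiscount", () => {
  it("reduces the total by the given percentage (happy path)", () => {
    expect(applyDiscount(100, 10)).toBe(90);
  });

  it("rejects percentages outside 0-100 (sad path)", () => {
    expect(() => applyDiscount(100, -5)).toThrow(RangeError);
  });

  it("returns zero at the 100% boundary", () => {
    expect(applyDiscount(100, 100)).toBe(0);
  });

  it("handles an empty order with a total of zero", () => {
    expect(applyDiscount(0, 50)).toBe(0);
  });
});
```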

4. Evaluate Test Quality

  • Assertion Quality:
    • Are assertions specific and meaningful?
    • Do tests verify behavior, not implementation?
    • Are error messages helpful for debugging?
  • Test Isolation:
    • Do tests depend on execution order?
    • Are external services properly mocked?
    • Is test data properly set up and torn down?
  • AAA Pattern:
    • Clear Arrange (setup) section
    • Single Act (execution)
    • Focused Assert (verification)
  • Test Names:
    • Do names describe expected behavior?
    • Would someone understand the test without reading code?

5. Check Test Patterns

  • Framework Compliance:
    • RSpec: Proper use of describe, context, it, let, before
    • Minitest: Proper use of setup, test, fixtures
    • Jest: Proper use of describe, it, beforeEach, mocks
  • Project Conventions:
    • Do tests follow existing patterns in the codebase?
    • Are shared examples/contexts used where appropriate?
    • Is test data handled consistently (fixtures vs factories)?
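
For Jest specifically, a compliant spec might look like the hedged sketch below; WelcomeService and EmailClient are hypothetical names used only to show describe/it blocks, beforeEach setup, and a mocked collaborator.

```ts
// Hedged Jest sketch; WelcomeService and EmailClient are hypothetical modules.
import { WelcomeService } from "./welcome-service";
import { EmailClient } from "./email-client";

jest.mock("./email-client"); // auto-mock the external collaborator

describe("WelcomeService", () => {
  let client: jest.Mocked<EmailClient>;
  let service: WelcomeService;

  beforeEach(() => {
    jest.clearAllMocks(); // isolation: no mock state leaks between tests
    client = new EmailClient() as jest.Mocked<EmailClient>;
    service = new WelcomeService(client);
  });

  it("sends exactly one welcome email to the new user", async () => {
    await service.greet("user@example.com");

    expect(client.send).toHaveBeenCalledTimes(1);
    expect(client.send).toHaveBeenCalledWith(expect.objectContaining({ to: "user@example.com" }));
  });
});
```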

6. Identify Missing Scenarios

Look for common gaps:

  • Nil/Empty Handling: What happens with nil, empty strings, empty arrays?
  • Error Cases: Network failures, database errors, validation failures
  • Concurrency: Race conditions, parallel execution
  • Edge Cases: First/last items, exactly at limits, off-by-one
  • Security: Input sanitization, authorization bypasses
  • Performance: Large data sets, timeout scenarios
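
As an illustration, the hedged Jest sketch below fills some of these gaps for a hypothetical summarizeNames helper: empty input, null/undefined entries, and an off-by-one check at an assumed MAX_NAMES limit.

```ts
// Hedged Jest sketch; summarizeNames and MAX_NAMES are hypothetical.
import { summarizeNames, MAX_NAMES } from "./summary";

describe("summarizeNames edge cases", () => {
  it("returns an empty string for an empty array", () => {
    expect(summarizeNames([])).toBe("");
  });

  it("skips null and undefined entries instead of throwing", () => {
    expect(summarizeNames(["Ada", null, undefined, "Grace"])).toBe("Ada, Grace");
  });

  it("accepts exactly MAX_NAMES entries but truncates one past the limit", () => {
    const atLimit = Array.from({ length: MAX_NAMES }, (_, i) => `name-${i}`);

    expect(summarizeNames(atLimit)).not.toContain("…");
    expect(summarizeNames([...atLimit, "one-too-many"])).toContain("…");
  });
});
```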

7. Check for Test Anti-Patterns

  • Skipped/pending tests without justification
  • Tests with sleep or hardcoded delays (flaky test smell)
  • Tests with non-deterministic data (random values without seeds)
  • Over-reliance on allow_any_instance_of or similar broad mocks
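
For instance, a test that sleeps for two seconds can usually be rewritten with Jest fake timers so it runs deterministically; debounceSave below is a hypothetical helper that invokes its callback after the given delay.

```ts
// Hedged sketch: replacing a hardcoded sleep with Jest fake timers.
// debounceSave is a hypothetical helper that calls its callback after a delay.
import { debounceSave } from "./autosave";

// Flaky anti-pattern being replaced: await new Promise((resolve) => setTimeout(resolve, 2000));
it("persists the draft once the debounce window elapses", () => {
  jest.useFakeTimers();
  const persist = jest.fn();

  debounceSave(persist, 2000);
  jest.advanceTimersByTime(2000); // deterministic: no real waiting

  expect(persist).toHaveBeenCalledTimes(1);
  jest.useRealTimers();
});
```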

Note: This skill performs static analysis only. Test execution is handled separately to avoid redundant test runs.

Red Flags to Watch For:

  • Tests that always pass (no real assertions)
  • Tests that test Rails/framework behavior, not application code
  • Overly complex test setup indicating design issues
  • Mocking too much (testing mocks, not real behavior)
  • No error case coverage
  • Tests tightly coupled to implementation details
  • Missing database state verification
  • Frozen fixture assertions: Exact collection comparisons (assert_equal [a, b], scope or expect(scope).to eq([a, b])) that break when unrelated fixtures are added. Recommend assert_includes/expect(...).to include(...) instead.
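
In Jest terms (the examples above are RSpec/Minitest), the frozen-fixture smell and its fix might look like this hedged sketch; activeUsers stands in for a hypothetical fixture-backed query result.

```ts
// The frozen-fixture smell expressed in Jest; activeUsers is hypothetical data.
it("includes the seeded active users", () => {
  const activeUsers = ["alice", "bob", "carol"]; // "carol" added later by an unrelated fixture

  // Brittle: fails whenever any unrelated fixture adds another active user.
  // expect(activeUsers).toEqual(["alice", "bob"]);

  // More robust: assert membership rather than the exact frozen collection.
  expect(activeUsers).toEqual(expect.arrayContaining(["alice", "bob"]));
  expect(activeUsers).toContain("alice");
});
```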

Test Quality Review Report

Provide your findings in this structure:

Test Coverage Summary

  • Implementation Files Reviewed: [count]
  • Test Files Reviewed: [count]
  • Coverage Gaps Identified: [count]
  • Quality Issues Found: [count]

Coverage Matrix

| Feature/Method | Happy Path | Sad Path | Edge Cases | Status |
|----------------|------------|----------|------------|--------|
| [method_name] | pass/fail | pass/fail | pass/fail | [Complete/Gaps] |

Critical Coverage Gaps

[List features/methods with missing test coverage, prioritized by risk]

  1. [Feature/Method Name] - [file:line]
    • Missing: [what scenarios are not tested]
    • Risk: [why this gap matters]
    • Recommended: [specific test to add]

Test Quality Issues

High Priority:

  • [Issue description with file:line reference]
  • Recommendation: [how to fix]

Medium Priority:

  • [Issue description]
  • Recommendation: [how to fix]

Low Priority:

  • [Issue description]
  • Recommendation: [how to fix]

Missing Edge Cases

[List specific edge case scenarios that should be tested]

  1. [Scenario]: [why it matters] → [file to add test]

Pattern Violations

[Tests that don't follow project conventions or best practices]

Recommendations

  1. [Prioritized actionable recommendations]
  2. [Include specific test examples where helpful]
  3. [Reference project patterns to follow]

Overall Assessment

  • Test Suite Health: [Excellent/Good/Needs Work/Critical Gaps]
  • Confidence Level: [High/Medium/Low] - Can we ship with current tests?
  • Priority Actions: [Top 3 things to fix before merging]

Note: All file paths should be absolute. Focus on actionable feedback that improves test quality and coverage.