MCAF: Testing
Trigger On
- implementing a feature or bugfix
- adding a regression test for a failure
- protecting a refactor with automated verification
Value
- produce a concrete project delta: code, docs, config, tests, CI, or review artifact
- reduce ambiguity through explicit planning, verification, and final validation skills
- leave reusable project context so future tasks are faster and safer
Do Not Use For
- repo-wide delivery policy with no test change
- documentation-only changes unless they alter executable verification
Inputs
- the nearest `AGENTS.md`
- the changed behaviour and touched boundaries
- existing tests near the impacted code path
Quick Start
- Read the nearest `AGENTS.md` and confirm scope and constraints.
- Run this skill's `Workflow` through the `Ralph Loop` until outcomes are acceptable.
- Return the `Required Result Format` with concrete artifacts and verification evidence.
Workflow
- Read the repo’s real verification commands from `AGENTS.md`.
- Write a failing test first when the change adds behaviour or fixes a bug.
- Start with the smallest meaningful test scope:
- new or changed tests
- related suite
- broader regressions
- When the stack is .NET, use the external .NET skills from the Managed Code Skills catalog: `mcaf-dotnet` as the orchestration skill when the task spans code, tests, and verification, with framework mechanics routed through exactly one matching skill (`mcaf-dotnet-xunit`, `mcaf-dotnet-tunit`, or `mcaf-dotnet-mstest`).
- Prefer integration, API, or UI coverage when behaviour crosses boundaries.
- Prove the user flow or caller-visible system flow, not just internal details.
- Add a regression test for every bug that can be captured reliably.
- If the stack is .NET and production code changed, do not stop at tests only. Finish with the repo-defined format and analyzer pass as well.
- Use deeper testing references only when the repo’s current strategy is unclear.
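The failing-test-first step above can be sketched in miniature. This is an illustration only, in Python with plain asserts; the function and the bug are hypothetical, and the repo's real framework and verification commands from `AGENTS.md` always take precedence:

```python
# Hypothetical bugfix flow: parse_retry_after used to crash on a missing
# header. Step 1: capture the bug in a test that fails against the old
# behaviour. Step 2: apply the smallest fix that makes the test pass.

def parse_retry_after(headers):
    """Return the Retry-After value in seconds, defaulting to 0."""
    value = headers.get("Retry-After")  # the fix: tolerate a missing header
    return int(value) if value is not None else 0

def test_missing_retry_after_defaults_to_zero():
    # Regression test: this raised an error before the fix.
    assert parse_retry_after({}) == 0

def test_retry_after_is_parsed():
    assert parse_retry_after({"Retry-After": "120"}) == 120
```

The regression test is the smallest meaningful scope; only after it passes does it make sense to widen to the related suite and broader regressions.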
Deliver
- automated tests close to the changed behaviour
- verification results that match the repo’s real commands
Validate
- the new behaviour is covered at the right level
- the main user flow or caller-visible system flow is proven
- tests assert meaningful outcomes, not implementation trivia
- coverage expectations from `AGENTS.md` are met, or the exception is documented
- the verification sequence matches `AGENTS.md`
- for .NET changes, tests were not treated as a substitute for formatting or analyzer gates
- broader suites are run after there is something real to verify
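As a sketch of "meaningful outcomes, not implementation trivia" (hypothetical function, Python for illustration only): assert what the caller actually sees, not which internals produced it.

```python
# Hypothetical service: the caller-visible outcome is the rendered summary,
# not which helper built it.

def order_summary(items):
    """Render a caller-visible summary for a list of (name, price) pairs."""
    total = sum(price for _name, price in items)
    return f"{len(items)} items, total {total}"

def test_summary_reflects_order_contents():
    # Meaningful: asserts the output the caller depends on.
    assert order_summary([("book", 12), ("pen", 3)]) == "2 items, total 15"

# Avoid: asserting that a private helper was called exactly once, or pinning
# the order of internal calls — implementation trivia that breaks on
# harmless refactors without catching real regressions.
```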
Ralph Loop
Use the Ralph Loop for every task, including docs, architecture, testing, and tooling work.
- Brainstorm first (mandatory):
- analyze current state
- define the problem, target outcome, constraints, and risks
- generate options and think through trade-offs before committing
- capture the recommended direction and open questions
- Plan second (mandatory):
- write a detailed execution plan from the chosen direction
- list final validation skills to run at the end, with order and reason
- Execute one planned step and produce a concrete delta.
- Review the result and capture findings with actionable next fixes.
- Apply fixes in small batches and rerun the relevant checks or review steps.
- Update the plan after each iteration.
- Repeat until outcomes are acceptable or only explicit exceptions remain.
- If a dependency is missing, bootstrap it or return `status: not_applicable` with explicit reason and fallback path.
Required Result Format
- `status`: `complete` | `clean` | `improved` | `configured` | `not_applicable` | `blocked`
- `plan`: concise plan and current iteration step
- `actions_taken`: concrete changes made
- `validation_skills`: final skills run, or skipped with reasons
- `verification`: commands, checks, or review evidence summary
- `remaining`: top unresolved items or `none`
For setup-only requests with no execution, return `status: configured` and the exact next commands.
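A filled-in result might look like the following; every field value here is illustrative only, not a prescribed wording:

```yaml
status: complete
plan: add failing regression test, fix handler, rerun suite (step 3 of 3)
actions_taken: added regression test; fixed missing-header handling
validation_skills: mcaf-dotnet skipped (stack is not .NET)
verification: new test and related suite pass via the repo's AGENTS.md commands
remaining: none
```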
Load References
- read `references/test-planning.md` first
- open `references/automated-testing.md` for deeper strategy and trade-offs
- for broader .NET implementation flow, use the external `mcaf-dotnet` skill from the Managed Code Skills catalog
- for .NET framework-specific mechanics, use exactly one external skill from the Managed Code Skills catalog: `mcaf-dotnet-xunit`, `mcaf-dotnet-tunit`, or `mcaf-dotnet-mstest`
Example Requests
- "Add tests for this bugfix."
- "Protect this refactor with regression coverage."
- "Choose the right test level for this API change."