# Adversarial Analysis

**RULE:** Before implementing any significant proposal, systematically attack it to find weaknesses.
## The Principle

Proposals that survive adversarial scrutiny are more robust. This skill provides a 6-phase framework for stress-testing ideas before committing to them.
## Loading This Skill

Agents can load the skill via:

```yaml
required_skills:
  - adversarial-analysis
```

Or invoke it dynamically:

```yaml
skill: adversarial-analysis
```
## The 6-Phase Framework
### Phase 1: Pre-mortem

> "It's 6 months from now. This failed. Why?"

Assume complete failure and work backwards: what went wrong?

**Process:**
- Vividly imagine the failure state
- Identify 5-7 distinct failure scenarios
- Trace each back to its root causes
- Note which causes were preventable

**Output format:**

```
### Pre-mortem
> "It's 6 months from now. This failed. Why?"
- [Failure scenario 1]: [Root cause]
- [Failure scenario 2]: [Root cause]
- [Failure scenario 3]: [Root cause]
```
### Phase 2: Assumption Audit

Every proposal rests on assumptions. Surface them and stress-test them.

**Process:**
- List every assumption (explicit and implicit)
- Rate confidence: H (high) / M (medium) / L (low)
- For each, answer: "If wrong, what breaks?"

**Output format:**

```
### Assumptions (confidence: H/M/L)
| Assumption | Confidence | If Wrong |
|------------|------------|----------|
| [Assumption 1] | M | [Impact if false] |
| [Assumption 2] | L | [Impact if false] |
| [Assumption 3] | H | [Impact if false] |
```
Prioritize low-confidence, high-impact assumptions for deeper analysis.
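The prioritization step can be made mechanical. A minimal sketch (the scoring weights, field layout, and example assumptions are all illustrative, not part of the skill):

```python
# Rank audited assumptions so low-confidence, high-impact ones surface first.
# Weights are illustrative assumptions: lower confidence -> higher risk factor.
CONFIDENCE_RISK = {"H": 1, "M": 2, "L": 3}
IMPACT = {"low": 1, "medium": 2, "high": 3}

def prioritize(assumptions):
    """Sort (text, confidence, impact) tuples by risk score, highest first."""
    return sorted(
        assumptions,
        key=lambda a: CONFIDENCE_RISK[a[1]] * IMPACT[a[2]],
        reverse=True,
    )

# Hypothetical audit entries for illustration only.
audit = [
    ("Users will migrate within one quarter", "L", "high"),
    ("The API rate limit is sufficient", "M", "medium"),
    ("The team knows the new framework", "H", "low"),
]
ranked = prioritize(audit)  # low-confidence/high-impact entry lands first
```

Any multiplicative or additive score works here; the point is that the table from Phase 2 already contains enough signal to order the deeper analysis.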
### Phase 3: Edge Case Storm

Generate 10+ scenarios that could break the proposal.

**Process:**
- Think about boundary conditions
- Consider rare but possible inputs
- Imagine hostile actors
- Consider scale extremes (0, 1, many, millions)
- Consider timing/ordering issues
- Consider resource exhaustion
- Consider integration failures

**Output format:**

```
### Edge Cases
- [Breaking scenario 1]
- [Breaking scenario 2]
- [Breaking scenario 3]
... (minimum 10)
```
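Working through the categories above can be mechanized into a seed list, which makes the "minimum 10" bar easy to check. A sketch, with category names and seed values as illustrative assumptions:

```python
# Enumerate edge-case seeds from the storm categories. The categories and
# example values are illustrative; a real run tailors them to the proposal.
def edge_case_seeds():
    categories = {
        "scale": [0, 1, 2, 1_000, 1_000_000],
        "boundary": [-1, 2**31 - 1, ""],
        "timing": ["concurrent writes", "retry after timeout", "out-of-order delivery"],
        "resources": ["disk full", "out of memory", "connection pool exhausted"],
    }
    return [(cat, seed) for cat, seeds in categories.items() for seed in seeds]

seeds = edge_case_seeds()
assert len(seeds) >= 10  # the skill demands at least 10 scenarios
```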
### Phase 4: Counter-Arguments

> "A senior engineer thinks this is wrong. Their case:"

Steel-man the opposition. Build the strongest possible case against the proposal.

**Process:**
- Adopt an adversarial persona
- Find the most compelling reasons to reject
- Identify precedents where similar approaches failed
- Note what a skeptic would demand as proof

**Output format:**

```
### Counter-Arguments
> "A senior engineer thinks this is wrong. Their case:"
- [Argument 1]: [Evidence/reasoning]
- [Argument 2]: [Evidence/reasoning]
- [Argument 3]: [Evidence/reasoning]
```
### Phase 5: Failure Mode Catalog

Analyze failures component by component.

**Process:**
- Identify each major component
- Determine how each could fail
- Assess blast radius (contained vs. cascading)
- Identify detection mechanisms

**Output format:**

```
### Failure Modes
| Component | How It Fails | Blast Radius | Detection |
|-----------|--------------|--------------|-----------|
| [Component 1] | [Failure mode] | [Impact scope] | [How to detect] |
| [Component 2] | [Failure mode] | [Impact scope] | [How to detect] |
```
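The catalog rows map naturally onto a small data structure, which lets cascading failures be filtered out for priority review. A sketch (the class, field names, and example entries are illustrative assumptions):

```python
# One row of the failure-mode catalog as structured data.
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    how_it_fails: str
    cascading: bool   # blast radius: True = cascading, False = contained
    detection: str

# Hypothetical entries for illustration only.
catalog = [
    FailureMode("auth service", "token cache stampede", True, "latency alerts"),
    FailureMode("report export", "malformed CSV", False, "schema validation"),
]

# Cascading failures get reviewed first: their blast radius crosses components.
priority = [fm for fm in catalog if fm.cascading]
```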
### Phase 6: Synthesis

Integrate the findings into actionable recommendations.

**Process:**
- Identify critical weaknesses (must address before proceeding)
- Identify significant concerns (should address)
- Render a verdict: GO / CONDITIONAL GO / NO GO
- State a confidence level

**Output format:**

```
### Synthesis
**Critical Weaknesses (must address):**
1. [Weakness] - Mitigation: [Action]
2. [Weakness] - Mitigation: [Action]

**Significant Concerns (should address):**
1. [Concern] - Mitigation: [Action]

**Verdict:** [GO / CONDITIONAL GO / NO GO]
**Confidence:** [High / Medium / Low]
**Rationale:** [1-2 sentences explaining verdict]
```
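One possible decision rule for the verdict can be sketched as code. The thresholds below are assumptions for illustration, not part of the skill; the real verdict is a judgment call informed by all six phases:

```python
# Illustrative decision rule: any unmitigated critical weakness blocks;
# mitigated criticals (or a pile of concerns) make the GO conditional.
# The concern threshold is an arbitrary assumption.
def verdict(critical_unmitigated: int, critical_mitigated: int, concerns: int) -> str:
    if critical_unmitigated > 0:
        return "NO GO"
    if critical_mitigated > 0 or concerns > 2:
        return "CONDITIONAL GO"
    return "GO"
```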
## Full Output Template

```
# Challenge: [Proposal Summary]

## The Attack

### Causal Failure Map: [root causes and cascade effects]
### Assumption Audit: [weakest assumptions]
### Counter-Arguments: [strongest case against]

## What the Analysis Caught

| Weakness | Source | Severity | Mitigation |
|----------|--------|----------|------------|
| [weakness] | [phase] | Critical/High/Medium | [action] |

## Verdict

**[GO / CONDITIONAL GO / NO GO]** (confidence: X.X)

[2-4 sentences: real risk picture, what would change the verdict]

**Required Mitigations (if CONDITIONAL GO):**
1. [mitigation]

## Where to Go Next

-> /challenge --deep "[proposal]"
   _[if verdict needs simulation/ethical analysis]_
-> /problem-solve "[how to mitigate top risk]"
   _[if proceeding and need to decide on approach]_
-> /deepthink "[specific uncertainty]"
   _[if a key assumption needs exploration]_
```
## Using cognition-mcp

For rigorous analysis, use cognition-mcp to store the structured analysis:

```yaml
mcp__cognition-mcp__cognition
operation: "thought"
thought: "Beginning adversarial analysis of [proposal]. Phase 1: Pre-mortem..."
thoughtNumber: 1
totalThoughts: 12
nextThoughtNeeded: true
```

Structure the run as 2 thoughts per phase (analysis + output), allowing revision and branching as needed.
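The "2 thoughts per phase" structure is why `totalThoughts` is 12: six phases times two. A sketch of that plan as data (the `kind` labels are assumptions for illustration):

```python
# Lay out the thought sequence: two thoughts per phase (analysis + output),
# giving totalThoughts = 2 * 6 = 12, matching the example call above.
PHASES = ["Pre-mortem", "Assumption Audit", "Edge Case Storm",
          "Counter-Arguments", "Failure Mode Catalog", "Synthesis"]

thoughts = [
    {"thoughtNumber": 2 * i + j + 1,
     "phase": phase,
     "kind": ("analysis", "output")[j],
     "totalThoughts": 2 * len(PHASES)}
    for i, phase in enumerate(PHASES)
    for j in range(2)
]
```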
## Quick Mode (Phase 1 + Top 3 Risks)

When time is limited, run Phase 1 only and extract the top 3 risks:

```
# Challenge (Quick): [Proposal Summary]

## Causal Map

[cause] -> [intermediate] -> [effect]

Root causes: [2-3 most critical]

## What the Analysis Found

| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|
| [risk] | H/M/L | [impact] | [mitigation] |

## Quick Verdict

[Proceed with caution / Needs more analysis / Red flags present]
[1-2 sentences: why]

## Where to Go Next

-> /challenge "[proposal]" (full analysis)
   _[why deeper analysis would help]_
-> /problem-solve "[decision point]"
   _[if ready to decide]_
```
## Deep Mode (+ Cognition Integration)

For critical decisions, after completing all 6 phases, invoke:

```
/think --model inversion [proposal]
```

This adds mental-model analysis (thinking backwards from failure) to complement the adversarial framework.
## Anti-Patterns

**DON'T:**
- Rush through phases
- Accept a vague "it might fail" without specifics
- Skip the steel-man counter-arguments
- Ignore low-probability, high-impact failures
- Let optimism bias filter out uncomfortable findings

**DO:**
- Be genuinely adversarial
- Quantify where possible
- Name specific failure modes
- Provide actionable mitigations
- Render an honest verdict
## Integration with Other Skills

- `search-before-edit`: Search for similar past failures before the analysis
- `debugging-first`: If analyzing a fix, debug the original issue first
- `linter-loop-limits`: Don't iterate forever on edge case generation
## When to Use

Use adversarial analysis when:
- Making architecture decisions
- Choosing between major alternatives
- Proposing significant refactors
- Introducing new dependencies
- Planning migrations
- Making irreversible choices

Skip it (or use quick mode) when:
- Changes are easily reversible
- Scope is small and contained
- Time pressure is extreme (but document the skip)