Trade Study Analysis Skill
Systematic trade study analysis following the DAU 9-Step Trade Study Process, expanded here into a 12-step gated workflow: the nine DAU analytic steps plus three mandatory verification gates (source registration, assumption review, and diagram selection).
CRITICAL BEHAVIORAL REQUIREMENTS
This skill operates under strict guardrails. The assistant MUST:
1. NEVER Proceed Without Explicit User Confirmation
- Ask clarifying questions at EVERY step before proceeding
- Do NOT proceed based on assumed confidence levels
- Wait for explicit user responses before moving forward
- Present options and wait for selection
2. NEVER Make Assumptions
- All data must come from user-provided sources
- If information is missing, ASK for it—do not infer
- Flag any gaps explicitly: "I need the following information: [list]"
- Never fill in placeholder values without explicit user approval
3. ALL Outputs Must Be Source-Grounded
- Every claim must reference a specific source document
- Format: "[Statement] (Source: [document name], [page/section])"
- If no source exists for a claim, state: "UNGROUNDED—requires source"
- Maintain running source registry throughout analysis
4. Mandatory Assumption Review Before Finalization
- Present complete assumption summary before any final output
- User must explicitly approve or correct each assumption
- No report generation until assumptions are cleared
5. Mandatory Diagram Selection
- Before generating visualizations, present full diagram menu
- User selects which diagrams to include
- Do not auto-generate all diagrams
Workflow Checklist
Trade Study Workflow (with mandatory gates):
□ Step 1: Define Problem Statement
└─ GATE: User confirms problem statement text
□ Step 2: Conduct Root Cause Analysis
└─ GATE: User confirms root cause identification
□ Step 3: SOURCE REGISTRATION
└─ GATE: User provides/confirms available sources
□ Step 4: Collect Alternative Data
└─ GATE: User confirms alternatives list and data
□ Step 5: Define Evaluation Criteria
└─ GATE: User confirms criteria definitions
□ Step 6: Assign Criteria Weights
└─ GATE: User confirms weight assignments
□ Step 7: Normalize Data
└─ GATE: User confirms normalization approach
□ Step 8: Score Alternatives
└─ GATE: User confirms scoring method
□ Step 9: Perform Sensitivity Analysis
└─ GATE: User confirms sensitivity parameters
□ Step 10: ASSUMPTION REVIEW
└─ GATE: User approves all documented assumptions
□ Step 11: SELECT OUTPUT DIAGRAMS
└─ GATE: User selects which visualizations to include
□ Step 12: Generate Report
└─ GATE: User approves final report
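The gating logic above is strictly sequential: no step may begin until the previous gate has recorded an explicit user confirmation. Below is a minimal sketch of how that enforcement might look, assuming a simple linear state machine; the `Workflow` class and `confirm` method are hypothetical illustrations, not part of the skill's scripts.

```python
# Sketch: linear gate enforcement for the 12-step workflow (assumed structure).
# Each step blocks until an explicit user confirmation has been recorded.
from dataclasses import dataclass, field

STEPS = [
    "Define Problem Statement",
    "Conduct Root Cause Analysis",
    "Source Registration",
    "Collect Alternative Data",
    "Define Evaluation Criteria",
    "Assign Criteria Weights",
    "Normalize Data",
    "Score Alternatives",
    "Perform Sensitivity Analysis",
    "Assumption Review",
    "Select Output Diagrams",
    "Generate Report",
]

@dataclass
class Workflow:
    confirmed: set = field(default_factory=set)  # indices of gates already cleared

    def current_step(self) -> int:
        """Return the first step whose gate has not yet been confirmed."""
        for i in range(len(STEPS)):
            if i not in self.confirmed:
                return i
        return len(STEPS)

    def confirm(self, step_index: int) -> None:
        """Record explicit user confirmation; earlier gates must already be cleared."""
        if step_index != self.current_step():
            raise RuntimeError(
                f"Cannot confirm step {step_index + 1}; "
                f"step {self.current_step() + 1} ({STEPS[self.current_step()]}) is still gated."
            )
        self.confirmed.add(step_index)

wf = Workflow()
wf.confirm(0)                    # user confirms the problem statement text
print(STEPS[wf.current_step()])  # -> "Conduct Root Cause Analysis"
```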
Mandatory Interaction Templates
Source Registration (MUST execute before data collection)
═══════════════════════════════════════════════════════════════════════════════
📚 SOURCE REGISTRATION
═══════════════════════════════════════════════════════════════════════════════
Before proceeding, I need to establish the source documents for this analysis.
AVAILABLE SOURCE TYPES:
[A] Product Datasheets (specifications, performance data)
[B] Test Reports (measured performance, validation results)
[C] Prior Trade Studies (historical analyses, lessons learned)
[D] Requirements Documents (system requirements, constraints)
[E] Cost Estimates (pricing, TCO analyses)
[F] Standards/Specifications (compliance requirements)
[G] Expert Inputs (documented SME assessments)
[H] Other (describe)
CURRENT REGISTERED SOURCES: [None]
───────────────────────────────────────────────────────────────────────────────
REQUIRED ACTIONS:
1. Which source types do you have available? [List letters]
2. Please provide/upload the source documents, OR describe each source:
- Document name:
- Document type:
- Date/version:
- Relevant sections:
3. Are there sources you need but don't have? [Y/N]
If Y, I will flag data gaps requiring these sources.
───────────────────────────────────────────────────────────────────────────────
⚠️ I cannot proceed with analysis until sources are registered.
All analysis outputs will reference these registered sources.
═══════════════════════════════════════════════════════════════════════════════
Clarifying Questions Template (MUST use at each step)
═══════════════════════════════════════════════════════════════════════════════
❓ CLARIFICATION REQUIRED — [STEP NAME]
═══════════════════════════════════════════════════════════════════════════════
Before proceeding with [step description], I need clarification on:
QUESTION 1: [Specific question]
Options (if applicable):
[A] [Option A]
[B] [Option B]
[C] Other (please specify)
QUESTION 2: [Specific question]
Your input: _______________________________________________
QUESTION 3: [Specific question]
Your input: _______________________________________________
───────────────────────────────────────────────────────────────────────────────
⚠️ I will NOT proceed until you respond to all questions above.
═══════════════════════════════════════════════════════════════════════════════
Assumption Tracking Template
═══════════════════════════════════════════════════════════════════════════════
📋 ASSUMPTION REGISTER — Entry #[N]
═══════════════════════════════════════════════════════════════════════════════
ASSUMPTION ID: A-[XXX]
CATEGORY: [Data/Methodology/Scope/Constraint/Source Interpretation]
DESCRIPTION: [What is being assumed]
REASON: [Why this assumption is necessary]
SOURCE BASIS: [Source that partially supports this, or "NONE—requires user input"]
IMPACT IF WRONG: [Low/Medium/High] — [Explanation]
ALTERNATIVES CONSIDERED: [Other options if assumption is invalid]
STATUS: □ Pending User Approval □ Approved □ Rejected □ Modified
───────────────────────────────────────────────────────────────────────────────
Your response:
[A] Approve this assumption
[B] Reject—provide alternative: _________________________________
[C] Modify—specify change: _____________________________________
[D] Need more information before deciding
═══════════════════════════════════════════════════════════════════════════════
Diagram Selection Template (MUST execute before visualization)
═══════════════════════════════════════════════════════════════════════════════
📊 OUTPUT DIAGRAM SELECTION
═══════════════════════════════════════════════════════════════════════════════
Select which diagrams to include in the final report.
AVAILABLE DIAGRAMS:
DECISION ANALYSIS:
□ [1] Decision Matrix Heatmap — Shows all scores in colored grid
□ [2] Score Comparison Bar Chart — Horizontal bars by alternative
□ [3] Radar/Spider Chart — Multi-dimensional comparison overlay
WEIGHT ANALYSIS:
□ [4] Criteria Weight Pie Chart — Weight distribution
□ [5] Weight Comparison Bar Chart — Side-by-side weights
SENSITIVITY ANALYSIS:
□ [6] Tornado Diagram — Sensitivity ranking by parameter
□ [7] Monte Carlo Distribution — Win probability histogram
□ [8] Breakeven Analysis Chart — Threshold visualization
ROOT CAUSE ANALYSIS:
□ [9] Fishbone (Ishikawa) Diagram — Cause categories
□ [10] 5 Whys Chain Diagram — Causal chain visualization
DATA QUALITY:
□ [11] Source Coverage Matrix — Shows data grounding by criterion
□ [12] Confidence Level Heatmap — Data reliability indicators
───────────────────────────────────────────────────────────────────────────────
Enter diagram numbers to include (comma-separated): _______________
Example: 1, 2, 4, 6, 11
⚠️ I will ONLY generate the diagrams you select.
═══════════════════════════════════════════════════════════════════════════════
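A small sketch of how the comma-separated reply to this menu might be parsed and validated; the function name and the 1-12 menu range are assumptions for illustration.

```python
# Sketch: parse a reply like "1, 2, 4, 6, 11" into a validated diagram list.
def parse_diagram_selection(reply: str, menu_size: int = 12) -> list[int]:
    """Return sorted, de-duplicated diagram numbers; reject anything off the menu."""
    tokens = [t.strip() for t in reply.split(",") if t.strip()]
    numbers = sorted({int(t) for t in tokens})  # raises ValueError on non-numeric input
    invalid = [n for n in numbers if not 1 <= n <= menu_size]
    if invalid:
        raise ValueError(f"Not on the menu: {invalid}")
    return numbers

print(parse_diagram_selection("1, 2, 4, 6, 11"))  # [1, 2, 4, 6, 11]
```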
Pre-Finalization Assumption Summary (MUST execute before report)
═══════════════════════════════════════════════════════════════════════════════
⚠️ MANDATORY ASSUMPTION REVIEW — FINALIZATION GATE
═══════════════════════════════════════════════════════════════════════════════
Before generating the final report, review ALL assumptions made during analysis:
───────────────────────────────────────────────────────────────────────────────
ASSUMPTION SUMMARY
───────────────────────────────────────────────────────────────────────────────
TOTAL ASSUMPTIONS: [N]
• Data Assumptions: [n]
• Methodology Assumptions: [n]
• Scope Assumptions: [n]
• Source Interpretation Assumptions: [n]
───────────────────────────────────────────────────────────────────────────────
DETAILED ASSUMPTION LIST:
───────────────────────────────────────────────────────────────────────────────
A-001: [Description]
Source: [Source or "User-provided" or "UNGROUNDED"]
Status: [Approved/Pending]
A-002: [Description]
Source: [Source or "User-provided" or "UNGROUNDED"]
Status: [Approved/Pending]
[Continue for all assumptions...]
───────────────────────────────────────────────────────────────────────────────
UNGROUNDED CLAIMS (require source or removal):
───────────────────────────────────────────────────────────────────────────────
• [Claim 1] — Missing source for: [what's needed]
• [Claim 2] — Missing source for: [what's needed]
───────────────────────────────────────────────────────────────────────────────
REQUIRED ACTIONS:
───────────────────────────────────────────────────────────────────────────────
1. Review each assumption above
2. For PENDING assumptions, respond:
[A] Approve all pending assumptions
[B] Reject/modify specific assumptions (list IDs): _______________
[C] Provide additional sources to ground ungrounded claims
3. Confirm you accept responsibility for approved assumptions: [Y/N]
⛔ I CANNOT generate the final report until all assumptions are resolved.
═══════════════════════════════════════════════════════════════════════════════
Workflow Navigation
| Step | Description | Reference |
|------|-------------|-----------|
| 1-2 | Problem Definition & Root Cause | PROBLEM_DEFINITION.md |
| 3-4 | Source Registration & Data Collection | DATA_COLLECTION.md |
| 5-7 | Criteria, Weighting & Normalization | SCORING_WEIGHTING.md |
| 8-9 | Scoring & Sensitivity | SENSITIVITY.md |
| 10-12 | Review & Report Generation | REPORT_GENERATION.md |
Guardrails Reference
Prohibited Behaviors
| Prohibited | Required Instead |
|------------|------------------|
| Assuming missing data values | Ask user to provide or flag as gap |
| Proceeding without confirmation | Wait for explicit user response |
| Making subjective assessments | Request documented basis or SME input |
| Auto-generating all diagrams | Present selection menu, await choice |
| Finalizing with pending assumptions | Complete assumption review gate |
| Ungrounded claims in report | Cite source or mark as "UNGROUNDED" |
| Inferring user preferences | Ask explicit questions |
| Skipping steps for efficiency | Execute all mandatory gates |
Source Citation Format
All analysis outputs must use this citation format:
[Statement or data point]
└─ Source: [Document Name], [Section/Page], [Date]
└─ Confidence: [High/Medium/Low based on source quality]
Example:
Alternative A provides 150 Mbps throughput
└─ Source: "Vendor A Datasheet v2.3", Section 4.2, 2024-03-15
└─ Confidence: High (manufacturer specification)
Confidence Levels (for source quality)
| Level | Definition | Source Types |
|-------|------------|--------------|
| High | Verified, authoritative | Test reports, certified specs, validated data |
| Medium | Credible but unverified | Vendor datasheets, estimates, projections |
| Low | Uncertain or incomplete | Expert opinion, extrapolations, partial data |
| UNGROUNDED | No source available | Requires user input or exclusion |
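A minimal sketch of a citation formatter that renders statements in the required layout, assuming a simple default mapping from source type to confidence level; the mapping values mirror the table above and are illustrative only.

```python
# Sketch: render a source-grounded statement in the required citation format.
DEFAULT_CONFIDENCE = {        # illustrative mapping based on the table above
    "test_report": "High",
    "datasheet": "Medium",
    "expert_input": "Low",
}

def cite(statement: str, doc: str, section: str, date: str, source_type: str) -> str:
    confidence = DEFAULT_CONFIDENCE.get(source_type, "UNGROUNDED")
    return (
        f"{statement}\n"
        f"  └─ Source: \"{doc}\", {section}, {date}\n"
        f"  └─ Confidence: {confidence}"
    )

print(cite("Alternative A provides 150 Mbps throughput",
           "Vendor A Datasheet v2.3", "Section 4.2", "2024-03-15", "datasheet"))
```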
Core Scripts
Install dependencies:
pip install pandas numpy matplotlib seaborn scipy openpyxl python-docx --break-system-packages
| Script | Purpose | Usage |
|--------|---------|-------|
| problem_analyzer.py | Validate problem statements | python scripts/problem_analyzer.py "statement" |
| five_whys.py | Interactive 5 Whys analysis | python scripts/five_whys.py --interactive |
| fishbone.py | Generate Fishbone diagrams | python scripts/fishbone.py data.json -o diagram.png |
| normalize.py | Normalize raw data | python scripts/normalize.py data.csv --method min-max |
| score.py | Calculate weighted scores | python scripts/score.py normalized.csv weights.json |
| sensitivity.py | Sensitivity analysis | python scripts/sensitivity.py scores.csv weights.json |
| visualize.py | Generate selected visualizations | python scripts/visualize.py results.json --diagrams 1,2,4 |
| source_tracker.py | Manage source registry | python scripts/source_tracker.py --add "source.pdf" |
| assumption_tracker.py | Track and review assumptions | python scripts/assumption_tracker.py --summary |
| generate_report.py | Create trade study report | python scripts/generate_report.py study.json -o report.docx |
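For orientation, here is a sketch of the min-max normalization and weighted scoring that normalize.py and score.py are assumed to perform; the scripts' actual internals are not reproduced here, and cost-type criteria are inverted so that a higher normalized value is always better.

```python
# Sketch: min-max normalization and weighted scoring of a small decision matrix
# (assumed behavior of normalize.py and score.py).
import pandas as pd

def min_max(values: pd.Series, higher_is_better: bool = True) -> pd.Series:
    """Scale a criterion column to [0, 1]; invert cost-type criteria."""
    span = values.max() - values.min()
    if span == 0:
        return pd.Series(1.0, index=values.index)  # all alternatives tie on this criterion
    scaled = (values - values.min()) / span
    return scaled if higher_is_better else 1.0 - scaled

raw = pd.DataFrame({
    "throughput_mbps": [150, 120, 90],   # benefit criterion
    "unit_cost_usd":   [500, 350, 400],  # cost criterion
}, index=["Alt A", "Alt B", "Alt C"])

weights = {"throughput_mbps": 0.6, "unit_cost_usd": 0.4}  # must sum to 1.0

normalized = pd.DataFrame({
    "throughput_mbps": min_max(raw["throughput_mbps"]),
    "unit_cost_usd":   min_max(raw["unit_cost_usd"], higher_is_better=False),
})
scores = sum(normalized[c] * w for c, w in weights.items())
print(scores.sort_values(ascending=False))  # ranked weighted scores
```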
Data Models
Source Registry (sources.json)
{
"sources": [
{
"id": "SRC-001",
"name": "Vendor A Datasheet",
"type": "datasheet",
"version": "2.3",
"date": "2024-03-15",
"file_path": "sources/vendor_a_datasheet.pdf",
"relevant_sections": ["4.2 Performance", "5.1 Specifications"],
"confidence": "medium",
"notes": "Vendor-provided, not independently verified"
}
],
"gaps": [
{
"description": "Cost data for Alternative C",
"required_for": ["criterion C2: Total Cost"],
"requested_source_type": "cost_estimate"
}
]
}
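A sketch of how a script like source_tracker.py might load this registry and surface outstanding data gaps; the helper name is an assumption, not the script's actual API.

```python
# Sketch: load sources.json and report registered sources plus outstanding gaps.
import json

def summarize_sources(path: str = "sources.json") -> None:
    with open(path, encoding="utf-8") as f:
        registry = json.load(f)

    for src in registry.get("sources", []):
        print(f"{src['id']}: {src['name']} ({src['type']}, confidence: {src['confidence']})")

    gaps = registry.get("gaps", [])
    if gaps:
        print(f"\n⚠️ {len(gaps)} data gap(s) still require sources:")
        for gap in gaps:
            print(f"  - {gap['description']} (needed for: {', '.join(gap['required_for'])})")

summarize_sources()
```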
Assumption Registry (assumptions.json)
{
"assumptions": [
{
"id": "A-001",
"category": "data",
"description": "Vendor B throughput measured under similar conditions to Vendor A",
"reason": "Different test methodologies in datasheets",
"source_basis": "SRC-002, Section 3.1 (partial)",
"impact_if_wrong": "medium",
"impact_explanation": "Could affect throughput comparison by ±15%",
"status": "pending",
"user_response": null,
"timestamp": "2024-12-22T10:30:00Z"
}
],
"summary": {
"total": 5,
"approved": 3,
"pending": 2,
"rejected": 0
}
}
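A sketch of recomputing the summary block from the assumption entries, roughly what assumption_tracker.py --summary is assumed to report; the function shown is illustrative.

```python
# Sketch: recompute the "summary" block of assumptions.json from its entries.
import json
from collections import Counter

def assumption_summary(path: str = "assumptions.json") -> dict:
    with open(path, encoding="utf-8") as f:
        registry = json.load(f)
    statuses = Counter(a["status"] for a in registry["assumptions"])
    return {
        "total": len(registry["assumptions"]),
        "approved": statuses.get("approved", 0),
        "pending": statuses.get("pending", 0),
        "rejected": statuses.get("rejected", 0),
    }

summary = assumption_summary()
if summary["pending"] > 0:
    print(f"⛔ {summary['pending']} assumption(s) pending; report generation is blocked.")
```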
Diagram Selection Record (diagram_selection.json)
{
"selected_diagrams": [1, 2, 4, 6, 11],
"selection_timestamp": "2024-12-22T14:00:00Z",
"user_confirmed": true,
"diagrams": {
"1": {"name": "Decision Matrix Heatmap", "selected": true},
"2": {"name": "Score Comparison Bar Chart", "selected": true},
"3": {"name": "Radar Chart", "selected": false},
"4": {"name": "Weight Pie Chart", "selected": true},
"5": {"name": "Weight Bar Chart", "selected": false},
"6": {"name": "Tornado Diagram", "selected": true},
"7": {"name": "Monte Carlo Distribution", "selected": false},
"8": {"name": "Breakeven Chart", "selected": false},
"9": {"name": "Fishbone Diagram", "selected": false},
"10": {"name": "5 Whys Chain", "selected": false},
"11": {"name": "Source Coverage Matrix", "selected": true},
"12": {"name": "Confidence Heatmap", "selected": false}
}
}
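A sketch of how visualize.py might consume this record so that only the user-selected diagrams are generated; the loader below is a placeholder, with plotting left to the real script.

```python
# Sketch: list only the user-selected diagrams recorded in diagram_selection.json.
import json

def selected_diagram_names(path: str = "diagram_selection.json") -> list[str]:
    with open(path, encoding="utf-8") as f:
        record = json.load(f)
    if not record.get("user_confirmed"):
        raise RuntimeError("Diagram selection not confirmed; gate is still open.")
    return [record["diagrams"][str(n)]["name"] for n in record["selected_diagrams"]]

for name in selected_diagram_names():
    print(f"Generating: {name}")  # actual plotting is delegated to visualize.py
```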
Output Structure
trade_study_output/
├── sources/ # Registered source documents
│ └── source_registry.json
├── assumptions/ # Assumption tracking
│ └── assumption_registry.json
├── analysis/ # Root cause analysis artifacts
├── data/ # Collected and processed data
├── normalized/ # Normalized datasets
├── scores/ # Scoring results
├── sensitivity/ # Sensitivity analysis results
├── visualizations/ # ONLY user-selected diagrams
│ └── diagram_selection.json
└── reports/ # Final reports with source citations
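A small sketch that scaffolds this output tree when initializing a new study; the directory names match the layout above, and the helper itself is an assumption.

```python
# Sketch: scaffold the trade_study_output/ directory tree shown above.
from pathlib import Path

SUBDIRS = [
    "sources", "assumptions", "analysis", "data",
    "normalized", "scores", "sensitivity", "visualizations", "reports",
]

def init_output_tree(root: str = "trade_study_output") -> None:
    for sub in SUBDIRS:
        Path(root, sub).mkdir(parents=True, exist_ok=True)

init_output_tree()
```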
Quality Gates Summary
| Gate | Location | Blocking Condition |
|------|----------|--------------------|
| Problem Statement | Step 1 | User hasn't confirmed text |
| Root Cause | Step 2 | User hasn't confirmed cause |
| Source Registration | Step 3 | No sources registered |
| Alternatives | Step 4 | User hasn't confirmed list |
| Criteria | Step 5 | User hasn't confirmed definitions |
| Weights | Step 6 | User hasn't confirmed assignments |
| Normalization | Step 7 | User hasn't confirmed method |
| Scoring | Step 8 | User hasn't confirmed approach |
| Sensitivity | Step 9 | User hasn't confirmed parameters |
| Assumption Review | Step 10 | Pending assumptions exist |
| Diagram Selection | Step 11 | User hasn't selected diagrams |
| Report Generation | Step 12 | Any prior gate incomplete |