Agent Skills: MoAI Foundation Thinking


foundationID: modu-ai/moai-adk/moai-foundation-thinking

Repository: modu-ai
License: GPL-3.0

Install this agent skill locally:

pnpm dlx add-skill https://github.com/modu-ai/moai-adk/tree/HEAD/.claude/skills/moai-foundation-thinking

Skill Files


.claude/skills/moai-foundation-thinking/SKILL.md

Skill Metadata

Name
moai-foundation-thinking
Description
Structured thinking toolkit for creative problem-solving and rigorous analysis.

MoAI Foundation Thinking

Structured thinking toolkit for creative problem-solving and rigorous analysis. Integrates three complementary frameworks that cover the full spectrum from idea generation to critical evaluation.

Core Philosophy: Generate broadly, evaluate rigorously, question deeply. Creativity and criticism are complementary forces.

Quick Reference

What is the Thinking Toolkit?

Three integrated frameworks for structured thinking:

  • Critical Evaluation: Rigorous 7-step analysis to assess proposals and detect flaws
  • Diverge-Converge: Systematic brainstorming from 20-50 raw ideas to 3-5 validated solutions
  • Deep Questioning: 6-layer progressive inquiry to uncover hidden requirements and risks

When to Use Each Framework:

  • Evaluating a proposal or recommendation: Critical Evaluation
  • Generating solutions for an open-ended problem: Diverge-Converge
  • Exploring an unfamiliar domain or unclear requirement: Deep Questioning
  • Complex decisions: Combine all three (Question first, Generate second, Evaluate third)

Implementation Guide

Framework 1: Critical Evaluation

Purpose: Systematically assess proposals, claims, and recommendations to detect flaws before commitment.

Seven-Step Evaluation Process:

Step 1 - Restate: Reformulate the claim or proposal in your own words. Ensures genuine understanding before critique.

Step 2 - Assess Evidence: Examine supporting data. Is the evidence empirical, anecdotal, or assumed? What is the sample size and recency? Are there contradicting data points?

Step 3 - Detect Fallacies: Check for common reasoning errors. Appeal to authority without substance. False dichotomy (only two options presented). Hasty generalization from insufficient examples. Straw man misrepresentation of alternatives.

Step 4 - Expose Assumptions: Identify unstated premises. What must be true for this conclusion to hold? Which assumptions are testable? Which assumptions carry the highest risk if wrong?

Step 5 - Note Alternatives: For every claim, ask what else could explain the evidence. Generate at least two alternative interpretations. Consider the null hypothesis.

Step 6 - Check Contradictions: Look for internal inconsistencies. Do different parts of the proposal conflict? Are there contradictions with known facts or constraints?

Step 7 - Evaluate Burden of Proof: Determine if the evidence is proportional to the claim. Extraordinary claims require extraordinary evidence. Identify what additional evidence would strengthen or weaken the case.

Output Format:

  • Evaluation Summary: Overall assessment (Strong, Moderate, Weak, Flawed)
  • Key Strengths: What holds up under scrutiny
  • Critical Gaps: What needs more evidence or revision
  • Recommended Actions: Next steps to strengthen the proposal
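As a minimal sketch, the seven-step output could be captured in a small data structure. Everything below is illustrative: the class name, fields, and the verdict heuristic are assumptions, not part of the skill itself.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalEvaluation:
    """Illustrative container for the seven-step evaluation output."""
    claim: str                                       # Step 1: the restated claim
    strengths: list = field(default_factory=list)    # what holds up under scrutiny
    gaps: list = field(default_factory=list)         # missing evidence, fallacies, risky assumptions
    contradictions: list = field(default_factory=list)  # Step 6 findings

    def verdict(self) -> str:
        # Toy heuristic mapping findings onto the four summary levels
        if self.contradictions:
            return "Flawed"
        if len(self.gaps) > len(self.strengths):
            return "Weak"
        if self.gaps:
            return "Moderate"
        return "Strong"

ev = CriticalEvaluation(
    claim="Switching to microservices will halve deploy time",
    strengths=["independent deploy pipelines"],
    gaps=["no baseline deploy-time data", "assumes team can operate 10+ services"],
)
print(ev.verdict())  # more gaps than strengths -> "Weak"
```

The point of the structure is that a missing field is immediately visible, whereas free-form critique can silently skip a step.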

WHY: Uncritical acceptance of proposals leads to preventable failures. IMPACT: Structured evaluation catches 60-80% of flawed recommendations.

Framework 2: Diverge-Converge Brainstorming

Purpose: Generate a broad solution space then systematically narrow to the best options.

Five-Phase Process:

Phase 1 - Gather Requirements: Define the problem space clearly. Identify stakeholders and success criteria. Set explicit constraints (budget, timeline, technology). Document "must-have" vs "nice-to-have" criteria.

Phase 2 - Diverge (Generate 20-50 Ideas): Quantity over quality during divergence. No criticism or filtering during generation. Include wild and unconventional ideas. Combine and build upon previous ideas. Use prompts: "What if we...", "How might we...", "What would happen if..."

Phase 3 - Cluster (Group into 4-8 Themes): Identify natural groupings among ideas. Name each cluster with a descriptive theme. Note which clusters have the most ideas (signals interest). Identify gaps where no ideas exist (potential blind spots).

Phase 4 - Converge (Score and Select): Rate each cluster against success criteria (1-10). Apply weighted scoring based on priority of criteria. Select top 3-5 candidates for deeper analysis. Document why rejected options were eliminated.

Phase 5 - Document and Validate: Write up selected solutions with rationale. Define validation experiments for top candidates. Identify risks and mitigation strategies. Plan implementation sequence.
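Phase 4's weighted scoring can be sketched in a few lines. The cluster names, criteria, and weights below are invented for illustration:

```python
# Sketch of Phase 4: rate each cluster 1-10 per criterion,
# weight by criterion priority, then rank the totals.
criteria_weights = {"impact": 0.5, "feasibility": 0.3, "cost": 0.2}

cluster_scores = {
    "Cache layer":        {"impact": 8, "feasibility": 9, "cost": 7},
    "Rewrite in Rust":    {"impact": 9, "feasibility": 3, "cost": 2},
    "Query optimization": {"impact": 6, "feasibility": 8, "cost": 9},
}

def weighted_total(scores, weights):
    return sum(scores[c] * w for c, w in weights.items())

ranked = sorted(
    ((name, weighted_total(s, criteria_weights)) for name, s in cluster_scores.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, total in ranked:
    print(f"{name}: {total:.1f}")
```

Keeping the full ranking (not just the winner) documents why rejected options were eliminated, as Phase 4 requires.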

Output Format:

  • Problem Statement: Clear definition of what we are solving
  • Idea Count: Total ideas generated and cluster distribution
  • Top Candidates: 3-5 selected solutions with scores
  • Validation Plan: How to test each candidate

WHY: Premature convergence on the first idea leaves better solutions undiscovered. IMPACT: Teams using diverge-converge find 3x more viable solutions.

Framework 3: Deep Questioning

Purpose: Progressively uncover hidden requirements, constraints, and risks through layered inquiry.

Six-Layer Progressive Inquiry:

Layer 1 - Surface Understanding: What is the stated goal or request? What does success look like? What are the obvious inputs and outputs? Verify: Can I explain this to someone else clearly?

Layer 2 - Problem Depth: Why does this problem exist? What is the root cause vs symptom? What has been tried before and why did it fail? What would happen if we did nothing?

Layer 3 - Context and Constraints: What are the technical constraints? What are the organizational or process constraints? What are the time and resource limitations? What external dependencies exist?

Layer 4 - User Perspective: Who are the actual end users? What is their current workflow? What pain points drive this request? What would they consider a disappointing solution?

Layer 5 - Solution Exploration: What are the boundary conditions? What edge cases could break the solution? What are the performance requirements? How will this integrate with existing systems?

Layer 6 - Validation and Risk: How will we know if the solution works? What could go wrong? What is the rollback strategy? What monitoring or alerting is needed?

Progressive Depth Indicators:

  • Shallow: Only Layers 1-2 explored (common in quick tasks)
  • Moderate: Layers 1-4 explored (sufficient for most features)
  • Deep: All 6 layers explored (required for architecture decisions)
  • Exhaustive: All layers with multiple iterations (critical systems)
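The depth indicators above can be approximated with a small helper. The function name and the handling of in-between cases (e.g. only Layers 1-3 explored) are assumptions for illustration:

```python
def depth_level(layers_explored: set, iterations: int = 1) -> str:
    """Map explored layers (1-6) onto the depth indicators (illustrative)."""
    if layers_explored >= {1, 2, 3, 4, 5, 6}:
        # All six layers: Deep, or Exhaustive if iterated more than once
        return "Exhaustive" if iterations > 1 else "Deep"
    if layers_explored >= {1, 2, 3, 4}:
        return "Moderate"
    return "Shallow"

print(depth_level({1, 2}))                            # Shallow
print(depth_level({1, 2, 3, 4}))                      # Moderate
print(depth_level({1, 2, 3, 4, 5, 6}, iterations=2))  # Exhaustive
```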

Output Format:

  • Understanding Level: Shallow, Moderate, Deep, or Exhaustive
  • Key Discoveries: Insights from each explored layer
  • Open Questions: Remaining unknowns requiring further investigation
  • Risk Assessment: Identified risks by severity

WHY: Surface-level understanding leads to solutions that miss the real problem. IMPACT: Deep questioning reduces requirement changes by 40-60%.


Combined Workflow

For complex problems, use all three frameworks in sequence:

Step 1 - Deep Questioning: Explore the problem space (Layers 1-4 minimum)

Step 2 - Diverge-Converge: Generate and select solutions based on discoveries

Step 3 - Critical Evaluation: Rigorously assess the top candidates

Decision Complexity Guide:

  • Simple task (1-2 files): Skip thinking frameworks (direct implementation)
  • Feature addition: Deep Questioning (Layers 1-3) + brief evaluation
  • Design decision: Deep Questioning (full) + Diverge-Converge
  • Architecture change: All three frameworks in full


Integration with MoAI Workflow

SPEC Phase (/moai plan):

  • Apply Deep Questioning during requirements gathering
  • Use Diverge-Converge for solution approach selection
  • Apply Critical Evaluation to finalize SPEC document

Run Phase (/moai run):

  • Use Critical Evaluation when reviewing implementation options
  • Apply Deep Questioning when encountering unexpected complexity

Agent Teams:

  • team-reader (analyst role): Primary user of Deep Questioning framework
  • team-reader (architect role): Primary user of Critical Evaluation framework
  • team-reader (researcher role): Uses all three for comprehensive analysis

Works Well With

Agents:

  • manager-strategy: Combined with Philosopher for full decision framework
  • manager-spec: Deep Questioning during requirement analysis
  • team-reader (analyst role): Primary consumer for plan phase analysis
  • team-reader (researcher role): Comprehensive research methodology

Skills:

  • moai-foundation-philosopher: Complementary (Philosopher = strategic decisions, Thinking = creative analysis)
  • moai-foundation-core: Integration with SPEC workflow
  • moai-workflow-spec: Requirement documentation support

Commands:

  • /moai plan: Apply thinking frameworks during specification
  • /moai run: Reference during implementation decisions

External Resources: reference.md

Origin: Integrated from critical-thinking, brainstorm-diverge-converge, and ideation frameworks

<!-- moai:evolvable-start id="rationalizations" -->

Common Rationalizations

| Rationalization | Reality |
|---|---|
| "Brainstorming is for creative tasks, not engineering" | Engineering decisions have design space too. Diverge phase surfaces solutions that linear thinking misses. |
| "I already evaluated this critically, I do not need the framework" | Unstructured evaluation skips dimensions. The framework ensures completeness across all assessment axes. |
| "Deep questioning slows down the process" | Shallow answers produce shallow solutions. Deep questions reveal constraints the specification missed. |
| "Converging on the first good idea is efficient" | The first good idea is often a local optimum. Spending 5 minutes on alternatives can find a global optimum. |
| "This decision is reversible, I do not need rigorous thinking" | Reversible decisions still cost time to reverse. Rigorous thinking upfront prevents the reversal. |

<!-- moai:evolvable-end -->

<!-- moai:evolvable-start id="red-flags" -->

Red Flags

  • Architecture decision documented with only one option considered
  • Diverge phase produced fewer than 3 alternatives
  • Converge phase selected an option without evaluation criteria
  • Critical evaluation missing feasibility or risk dimension
  • Deep questioning stopped after one level of "why"
<!-- moai:evolvable-end -->

<!-- moai:evolvable-start id="verification" -->

Verification

  • [ ] Diverge phase produced at least 3 distinct alternatives
  • [ ] Converge phase used explicit evaluation criteria to select
  • [ ] Critical evaluation covers feasibility, risk, and impact dimensions
  • [ ] Deep questioning reached at least 3 levels for the primary question
  • [ ] Selected approach references the alternatives it was compared against
<!-- moai:evolvable-end -->

First Principles (absorbed from moai-foundation-philosopher)

Five-phase strategic analysis framework for architecture decisions and technology selection.

When to Use

Use when facing decisions where the solution space is broad, trade-offs are non-obvious, or existing patterns may not apply. Particularly valuable for: architecture decisions, technology selection, breaking changes, performance vs maintainability trade-offs.

Five-Phase Process

Phase 1 — Assumption Audit: List every assumption underlying the current approach. Challenge each: is it empirically verified or conventionally accepted? Which assumptions carry the highest risk if wrong?

Phase 2 — First Principles Decomposition: Strip away analogies and precedent. Break the problem into fundamental truths. Ask "What must be true for this to work?" rather than "What has worked before?"

Phase 3 — Alternative Generation: From first principles, generate alternatives that ignore prior constraints. Aim for 5-10 distinct approaches before filtering. Include "absurd" alternatives — they often surface overlooked fundamentals.

Phase 4 — Trade-off Analysis: Score each alternative across: correctness, performance, maintainability, operability, reversibility. Use explicit weights based on project context. Document which trade-offs are acceptable vs dealbreakers.

Phase 5 — Cognitive Bias Check: Before finalizing, audit for: anchoring bias (overweighting the first option), confirmation bias (seeking evidence for the preferred option), sunk cost fallacy (keeping a bad choice due to investment).
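Phase 4's trade-off scoring can be sketched as follows; the alternatives, weights, and the operability dealbreaker threshold are all invented for illustration:

```python
# Sketch of Phase 4: score alternatives across the five dimensions,
# apply project-specific weights, and drop any option that fails a
# dealbreaker threshold before ranking. All numbers are illustrative.
DIMENSIONS = ("correctness", "performance", "maintainability",
              "operability", "reversibility")

weights = {"correctness": 0.35, "performance": 0.15,
           "maintainability": 0.25, "operability": 0.15,
           "reversibility": 0.10}

alternatives = {
    "Event sourcing":   {"correctness": 9, "performance": 6, "maintainability": 5,
                         "operability": 4, "reversibility": 3},
    "CRUD + audit log": {"correctness": 7, "performance": 8, "maintainability": 8,
                         "operability": 8, "reversibility": 9},
}

def score(alt):
    return sum(alt[d] * weights[d] for d in DIMENSIONS)

# Dealbreaker: anything scoring below 5 on operability is out,
# regardless of its weighted total.
viable = {name: a for name, a in alternatives.items() if a["operability"] >= 5}
best = max(viable, key=lambda n: score(viable[n]))
print(best)
```

Separating dealbreakers from weighted totals matters: a high overall score cannot rescue an option that fails a hard constraint.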

Full methodology in modules: First Principles, Assumption Matrix, Trade-off Analysis, Cognitive Bias

Examples and reference: examples.md, reference.md


Sequential Thinking MCP (absorbed from moai-workflow-thinking)

Structured step-by-step reasoning via mcp__sequential-thinking__sequentialthinking MCP tool. Activated by --deepthink flag.

Three Distinct Reasoning Modes

| Mode | Trigger | Mechanism | GLM Compatible? |
|------|---------|-----------|-----------------|
| --deepthink | Explicit flag | Sequential Thinking MCP | NO — generates server_tool_use |
| ultrathink | Keyword | Claude native extended reasoning | YES |
| Adaptive Thinking | Automatic | Opus 4.7 built-in | YES |

Rules: --deepthink → ALWAYS invoke Sequential Thinking MCP. ultrathink → ALWAYS use Claude native. They can coexist: ultrathink --deepthink activates both independently.

When to Activate (--deepthink only)

Architecture decisions affecting 3+ files, technology selection between options, breaking changes under consideration, performance vs maintainability trade-offs, complex problems with multiple viable approaches.

Tool Parameters

Required: thought (string), nextThoughtNeeded (boolean), thoughtNumber (integer), totalThoughts (integer)

Optional: isRevision (boolean), revisesThought (integer), branchFromThought (integer), branchId (string), needsMoreThoughts (boolean)
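A hypothetical pair of tool-call payloads, assuming the parameter list above; the thought text, numbers, and validation helper are invented for illustration:

```python
# Illustrative payloads for mcp__sequential-thinking__sequentialthinking.
# Parameter names match the documented list; the content is made up.
first_thought = {
    "thought": "Restate the problem: API latency doubled after the cache change.",
    "nextThoughtNeeded": True,
    "thoughtNumber": 1,
    "totalThoughts": 4,
}

# A later thought revising thought 1, using the optional parameters.
revision = {
    "thought": "Correction: latency rose only on cache misses, not overall.",
    "nextThoughtNeeded": True,
    "thoughtNumber": 3,
    "totalThoughts": 4,
    "isRevision": True,
    "revisesThought": 1,
}

REQUIRED = {"thought", "nextThoughtNeeded", "thoughtNumber", "totalThoughts"}
assert REQUIRED <= first_thought.keys() and REQUIRED <= revision.keys()
print("payloads valid")
```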

Key Guidelines

  1. Start with a reasonable totalThoughts estimate (adjust up as needed)
  2. Use isRevision: true when correcting earlier thoughts
  3. Use branching (branchFromThought) when two viable alternatives exist
  4. Set nextThoughtNeeded: false only when a concrete conclusion is reached
  5. On Opus 4.7: Adaptive Thinking handles reasoning automatically — no fixed budget needed