# External Models via Claudish CLI — Reference Guide
## ⚠️ Learn and Reuse Model Preferences

Models are learned per context and reused automatically:

```bash
cat .claude/multimodel-team.json 2>/dev/null
```

- Detect context from task keywords (debug/research/coding/review)
- If `contextPreferences[context]` exists → USE IT (no asking)
- If empty (first time) → ASK user → SAVE for that context
- User says "change models" → UPDATE preferences
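A minimal sketch of this flow in shell. The `debug` context name and the grep-based lookup are illustrative assumptions; the real preference file schema may differ:

```shell
PREFS=.claude/multimodel-team.json
CONTEXT=debug   # assumed: context detected from task keywords

# Naive check: does the saved preferences file mention this context?
if [ -f "$PREFS" ] && grep -q "\"$CONTEXT\"" "$PREFS"; then
  echo "use saved models for $CONTEXT"
else
  echo "ask user, then save models for $CONTEXT"
fi
```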
## How External Models Work

External models are invoked deterministically via the claudish CLI. The orchestrator
(e.g., the /team command) calls claudish directly through Bash — no LLM delegation needed.

```
Orchestrator → Bash(claudish --model {MODEL_ID} --stdin) → External Model
```

This is reliable because it is a direct CLI invocation, not a prompt-based delegation.
## Invoking External Models

### From the /team Command (Automatic)

The /team command handles this automatically:

- Internal models → Task(dev:researcher)
- External models → Bash(claudish --model {MODEL_ID} --stdin)
### Direct CLI Usage

```bash
# Pattern
claudish --model {MODEL_ID} --stdin --quiet < prompt-file.md > result.md

# Examples
claudish --model x-ai/grok-code-fast-1 --stdin --quiet < task.md > grok-result.md
claudish --model google/gemini-3-pro-preview --stdin --quiet < task.md > gemini-result.md
```
Required flags:

- `--model` — the external model to use
- `--stdin` — read the prompt from stdin (for large prompts)
- `--quiet` — suppress log messages (for clean output capture)
## Multi-Backend Routing

Claudish routes to different backends based on the model ID prefix:
| Prefix | Backend | Required Key | Example |
|--------|---------|--------------|---------|
| (none) | OpenRouter | OPENROUTER_API_KEY | openai/gpt-5.2 |
| g/ gemini/ | Google Gemini API | GEMINI_API_KEY | g/gemini-2.0-flash |
| oai/ | OpenAI Direct API | OPENAI_API_KEY | oai/gpt-4o |
| mmax/ mm/ | MiniMax Direct API | MINIMAX_API_KEY | mmax/MiniMax-M2.1 |
| kimi/ moonshot/ | Kimi Direct API | KIMI_API_KEY | kimi/kimi-k2-thinking-turbo |
| glm/ zhipu/ | GLM Direct API | GLM_API_KEY | glm/glm-4.7 |
| ollama/ | Ollama (local) | None | ollama/llama3.2 |
| lmstudio/ | LM Studio (local) | None | lmstudio/qwen |
| vllm/ | vLLM (local) | None | vllm/model |
| mlx/ | MLX (local) | None | mlx/model |
| http://... | Custom endpoint | None | http://localhost:8000/model |
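The routing rules above can be sketched as a shell `case` statement. This is illustrative only, not claudish's actual implementation, and it handles only the prefixes listed in the table:

```shell
# Map a model ID to "<backend> <required key>" per the routing table (sketch)
route() {
  case "$1" in
    g/*|gemini/*)                      echo "gemini GEMINI_API_KEY" ;;
    oai/*)                             echo "openai-direct OPENAI_API_KEY" ;;
    mmax/*|mm/*)                       echo "minimax MINIMAX_API_KEY" ;;
    kimi/*|moonshot/*)                 echo "kimi KIMI_API_KEY" ;;
    glm/*|zhipu/*)                     echo "glm GLM_API_KEY" ;;
    ollama/*|lmstudio/*|vllm/*|mlx/*)  echo "local none" ;;
    http://*)                          echo "custom none" ;;
    *)                                 echo "openrouter OPENROUTER_API_KEY" ;;
  esac
}

route g/gemini-2.0-flash   # → gemini GEMINI_API_KEY
route openai/gpt-5.2       # → openrouter OPENROUTER_API_KEY
```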
### ⚠️ Prefix Collision Warning

OpenRouter model IDs may collide with routing prefixes. Check the prefix table above.

Collision-free models (safe for OpenRouter):

- `x-ai/grok-*` ✅
- `deepseek/*` ✅
- `minimax/*` ✅ (use `mmax/` for MiniMax Direct)
- `qwen/*` ✅
- `mistralai/*` ✅
- `moonshotai/*` ✅ (use `kimi/` for Kimi Direct)
- `anthropic/*` ✅
- `z-ai/*` ✅ (use `glm/` for GLM Direct)
- `google/*` ✅ (use `g/` for Gemini Direct)
- `openai/*` ✅ (use `oai/` for OpenAI Direct)
Direct API prefixes for cost savings:
| OpenRouter Model | Direct API Prefix | Notes |
|------------------|-------------------|-------|
| openai/gpt-* | oai/gpt-* | OpenAI Direct API |
| google/gemini-* | g/gemini-* | Gemini Direct API |
| minimax/* | mmax/* or mm/* | MiniMax Direct API |
| moonshotai/* | kimi/* or moonshot/* | Kimi Direct API |
| z-ai/glm-* | glm/* or zhipu/* | GLM Direct API |
## Correct Usage Patterns

### Single External Model

```bash
claudish --model x-ai/grok-code-fast-1 --stdin --quiet < task.md > result.md
```
### Parallel External Models (in /team)

```bash
# All launched in a single message with run_in_background: true
Bash("claudish --model x-ai/grok-code-fast-1 --stdin --quiet < vote-prompt.md > grok-result.md 2>grok-stderr.log; echo $? > grok.exit")
Bash("claudish --model google/gemini-3-pro-preview --stdin --quiet < vote-prompt.md > gemini-result.md 2>gemini-stderr.log; echo $? > gemini.exit")
```
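After launching, the orchestrator needs to wait for results. A bounded-wait sketch, assuming the `grok.exit`/`gemini.exit` file names from the commands above (the poll interval and retry count are arbitrary choices):

```shell
# Sketch: wait (up to ~2 minutes by default) for a model's exit file to appear
wait_for() {
  local name=$1 tries=${2:-60} i
  for i in $(seq "$tries"); do
    if [ -f "$name.exit" ]; then
      echo "$name finished with exit code $(cat "$name.exit")"
      return 0
    fi
    sleep 2
  done
  echo "$name timed out waiting for $name.exit" >&2
  return 1
}

# wait_for grok && wait_for gemini
```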
### Verifying Results

```bash
# Check exit code
cat grok.exit           # 0 = success

# Check output size
wc -c < grok-result.md  # should be > 50 bytes

# Check stderr for errors
cat grok-stderr.log
```
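These checks can be folded into one pass/fail helper. A sketch: the 50-byte threshold and the `{name}.exit`/`{name}-result.md` naming follow the examples above.

```shell
# Sketch: pass/fail verdict for one model's output files
verify() {
  local name=$1 size
  if [ "$(cat "$name.exit" 2>/dev/null)" != "0" ]; then
    echo "$name: FAIL (missing or non-zero exit code)"
    return 1
  fi
  # wc fails if the result file is missing; treat that as size 0
  size=$({ wc -c < "$name-result.md"; } 2>/dev/null || echo 0)
  if [ "$size" -le 50 ]; then
    echo "$name: FAIL (output missing or too small)"
    return 1
  fi
  echo "$name: OK"
}

# verify grok && verify gemini
```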
## Common Mistakes

### Mistake 1: Not capturing the exit code

```bash
# ❌ WRONG - no way to detect failures
claudish --model x-ai/grok-code-fast-1 --stdin < task.md > result.md

# ✅ CORRECT - capture exit code and stderr
claudish --model x-ai/grok-code-fast-1 --stdin < task.md > result.md 2>stderr.log; echo $? > result.exit
```
## Troubleshooting

### "claudish: command not found"

Fix: `npm install -g claudish`

### "OPENROUTER_API_KEY not set"

Fix: `export OPENROUTER_API_KEY=your-key`

### Non-zero exit code

Fix: check the stderr log for error details. Common causes: rate limits, an invalid model ID, or API key issues.
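For transient failures such as rate limits, a simple retry wrapper can help. A sketch only; the attempt count and back-off timings are arbitrary choices, not claudish behavior:

```shell
# Sketch: retry a command up to 3 times with a growing delay
run_with_retry() {
  local tries=3 i
  for i in $(seq "$tries"); do
    "$@" && return 0
    if [ "$i" -lt "$tries" ]; then
      echo "attempt $i failed, retrying in $((i * 5))s..." >&2
      sleep $((i * 5))
    fi
  done
  echo "all $tries attempts failed" >&2
  return 1
}

# run_with_retry claudish --model x-ai/grok-code-fast-1 --stdin --quiet < task.md > result.md
```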