# Codex CLI Skill

Headless wrapper for the OpenAI Codex CLI. The prompt is passed as a positional argument to `codex exec "PROMPT"`. Optimized for code generation and analysis. Requires `OPENAI_API_KEY`.
## When to Use
- Code generation tasks needing OpenAI/GPT model perspective
- Cross-validation of code solutions with a non-Claude model
- Tasks benefiting from Codex's code optimization focus
- Multi-LLM consultation workflows
## Usage

Ask a question:

```bash
node .claude/skills/omega-codex-cli/scripts/ask-codex.mjs "Implement a Redis caching layer for Express"
```

With timeout:

```bash
node .claude/skills/omega-codex-cli/scripts/ask-codex.mjs "Refactor this module" --timeout-ms 120000
```

JSONL streaming output:

```bash
node .claude/skills/omega-codex-cli/scripts/ask-codex.mjs "Generate unit tests" --json
```

Sandbox mode:

```bash
node .claude/skills/omega-codex-cli/scripts/ask-codex.mjs "Write and test a sort algorithm" --sandbox
```
## Availability Check

```bash
node .claude/skills/omega-codex-cli/scripts/verify-setup.mjs
# Exit 0 = available (CLI found + OPENAI_API_KEY set)
# Exit 1 = not available
```
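As a rough illustration of what the availability check does, here is a minimal sketch. This is not the actual `verify-setup.mjs` source; the function names and the PATH-probe approach are assumptions for illustration only.

```javascript
import { execSync } from "node:child_process";

// Hypothetical sketch of the verify-setup.mjs logic: availability requires
// both the `codex` binary on PATH and a non-empty OPENAI_API_KEY.
function codexOnPath() {
  // Probe PATH with `which` (POSIX) or `where` (Windows); absence → false.
  const probe = process.platform === "win32" ? "where codex" : "which codex";
  try {
    execSync(probe, { stdio: "ignore" });
    return true;
  } catch {
    return false;
  }
}

function availabilityExitCode(env = process.env) {
  const hasKey =
    typeof env.OPENAI_API_KEY === "string" && env.OPENAI_API_KEY.length > 0;
  // Mirrors the documented contract: 0 = available, 1 = not available.
  return hasKey && codexOnPath() ? 0 : 1;
}
```

The key check short-circuits first, so a missing `OPENAI_API_KEY` reports unavailable without ever probing PATH.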
## Scripts
| Script | Purpose |
| ------------------- | ---------------------------------------------------------- |
| ask-codex.mjs | Core headless wrapper — prompt as positional arg |
| parse-args.mjs | Argument parser (--model, --json, --sandbox, --timeout-ms) |
| verify-setup.mjs | Availability check (CLI + OPENAI_API_KEY) |
| format-output.mjs | JSONL event stream normalization |
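To make the wrapper's contract concrete, the following sketch shows one way `ask-codex.mjs` could assemble the `codex exec` argument list. The function name is hypothetical and the real wrapper may differ; the flag names come from this document (including the always-on `--skip-git-repo-check` noted under Iron Laws).

```javascript
// Hypothetical sketch of argument assembly for `codex exec`; illustrative only.
function buildCodexArgs(prompt, { model, json, sandbox } = {}) {
  const args = ["exec", "--skip-git-repo-check"]; // built into the wrapper
  if (model) args.push("--model", model);
  if (json) args.push("--json");
  if (sandbox) args.push("--sandbox", "workspace-write");
  args.push(prompt); // prompt is a positional argument, never stdin
  return args;
}
```

Keeping the prompt last makes the positional-argument contract explicit and avoids any ambiguity with flag values.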
## Models
| Model ID | Description | When to Use |
| ------------------- | ------------------------------------------------------------------------------- | ---------------------------------------------------- |
| codex-mini-latest | Default. Fine-tuned o4-mini. Low-latency code Q&A. $1.50/$6 per 1M. | Fast code questions, CI pipelines, high-volume calls |
| gpt-5.4 | Full GPT-5.4 (released ~2026-03-05). 1M context, computer-use, top coding perf. | Complex multi-file tasks, computer-use agentic flows |
| gpt-5.4-pro | Pro variant of GPT-5.4. Higher capacity, higher cost. | State-of-the-art coding benchmarks, research tasks |
Default model: codex-mini-latest — fine-tuned o4-mini optimized for low-latency code Q&A with a 75% caching discount. Do not override unless you need GPT-5.4's extended context or computer-use capability.
To use GPT-5.4:

```bash
node .claude/skills/omega-codex-cli/scripts/ask-codex.mjs "PROMPT" --model gpt-5.4
```
Pricing (codex-mini-latest): $1.50/1M input tokens · $6/1M output tokens · 75% caching discount
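As a worked example of the pricing above, the sketch below estimates a call's cost, assuming the 75% discount applies to cached input tokens (an assumption; check current OpenAI pricing for the authoritative rules).

```javascript
// Worked example of the codex-mini-latest rates quoted above:
// $1.50 per 1M input tokens, $6 per 1M output tokens, 75% cached-input discount.
function costUsd({ inputTokens = 0, cachedInputTokens = 0, outputTokens = 0 }) {
  const input = (inputTokens / 1e6) * 1.5;
  const cached = (cachedInputTokens / 1e6) * 1.5 * 0.25; // 75% discount assumed
  const output = (outputTokens / 1e6) * 6;
  return input + cached + output;
}
```

For example, 1M uncached input tokens cost $1.50, while the same 1M tokens served from cache would cost $0.375.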
## Flags
| Flag | Description |
| ---------------- | ------------------------------------------------------------------------------------------ |
| --model MODEL | Override model (default: codex-mini-latest). Use gpt-5.4 or gpt-5.4-pro for GPT-5.4. |
| --json | JSONL event stream output |
| --sandbox | Workspace-write sandbox mode |
| --timeout-ms N | Timeout in milliseconds (exit code 124 on expiry) |
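The `--timeout-ms` behavior can be sketched as racing the Codex invocation against a timer and mapping expiry to exit code 124 (the same convention GNU `timeout` uses). This is an illustrative sketch, not the wrapper's actual implementation.

```javascript
// Hypothetical sketch of --timeout-ms handling: whichever settles first wins.
async function withTimeout(task, timeoutMs) {
  let timer;
  const expiry = new Promise((resolve) => {
    timer = setTimeout(() => resolve({ exitCode: 124, timedOut: true }), timeoutMs);
  });
  try {
    return await Promise.race([task, expiry]);
  } finally {
    clearTimeout(timer); // don't keep the event loop alive after the race settles
  }
}
```

A real wrapper would also kill the child process on expiry; the exit-code mapping is the part shown here.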
## Exit Codes

| Code | Meaning                                    |
| ---- | ------------------------------------------ |
| 0    | Success                                    |
| 1    | Error (CLI failure, auth issue, API error) |
| 124  | Timeout (--timeout-ms exceeded)            |
## Anti-Patterns & Iron Laws
- ALWAYS verify OPENAI_API_KEY is set before invocation
- NEVER use stdin for prompt delivery — Codex uses positional arg
- ALWAYS include --skip-git-repo-check (built into wrapper)
- ALWAYS set --timeout-ms for production usage
- NEVER assume --json output is standard JSON — it produces JSONL event stream
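The last law above deserves emphasis: `--json` emits JSONL, one JSON event per line, so `JSON.parse` on the whole output fails. A minimal line-by-line parser sketch (the event shapes shown in the test are illustrative, not Codex's actual schema):

```javascript
// Parse a JSONL event stream: split on newlines, skip blanks, parse each line.
function parseJsonl(stream) {
  return stream
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}
```

For long-running calls, parsing line-by-line as data arrives (rather than buffering the whole stream) lets you surface events incrementally.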
## Integration Notes

- API key: `OPENAI_API_KEY` env var required
- Rate limits: OpenAI API rate limits apply
- Platform: Full cross-platform (Windows uses cmd.exe /d /s /c wrapper)
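The Windows wrapper mentioned above can be sketched as a small spawn-spec builder; the function name is hypothetical and the real routing logic may differ.

```javascript
// Sketch of cross-platform command routing: on Windows, shell commands go
// through `cmd.exe /d /s /c`; on other platforms they run directly.
function spawnSpec(command, args, platform = process.platform) {
  if (platform === "win32") {
    return {
      cmd: "cmd.exe",
      args: ["/d", "/s", "/c", [command, ...args].join(" ")],
    };
  }
  return { cmd: command, args };
}
```

`/d` skips AutoRun registry commands, `/s` preserves quoting, and `/c` runs the command then exits, which is why this trio is the conventional Windows shell wrapper.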
## Memory Protocol (MANDATORY)

Before starting:

- Read .claude/context/memory/learnings.md

After completing:

- New pattern -> .claude/context/memory/learnings.md
- Issue found -> .claude/context/memory/issues.md
- Decision made -> .claude/context/memory/decisions.md

ASSUME INTERRUPTION: If it's not in memory, it didn't happen.
Note: Use `pnpm search:code` to discover references to this skill codebase-wide.