context-prep
Prepare optimal context package before delegating tasks to sub-agents
cost-optimization
Manage Claude Code API costs - token strategies, model selection, monitoring. Use when concerned about API spend, optimizing token usage, choosing models for tasks, or setting up cost monitoring. Covers /cost command, batching strategies, and budget management.
context-optimization
Guide for managing and optimizing context in Claude Code. Use when experiencing slow responses, context warnings, or planning large tasks. Covers /compact, /clear, context budgeting, subagent delegation, and efficient session workflows.
autonomous-operation
Use when starting any work session - establishes autonomous operation mode, overriding token limits and time pressure to work until the goal is achieved
context-summary
Use when context window is getting full. Creates a summary file and instructions for starting a new session.
instruction-optimizer
Use when instruction files (skills, prompts, CLAUDE.md) are too long or need token reduction while preserving capability
prompt-engineering
Use this skill when writing commands, hooks, or skills for the Agent, or prompts for sub-agents or any other LLM interaction, including optimizing prompts, improving LLM outputs, or designing production prompt templates.
code-execution
Execute Python code locally with marketplace API access for 90%+ token savings on bulk operations. Activates when user requests bulk operations (10+ files), complex multi-step workflows, iterative processing, or mentions efficiency/performance.
ios-simulator-skill
21 production-ready scripts for iOS app testing, building, and automation. Provides semantic UI navigation, build automation, accessibility testing, and simulator lifecycle management. Optimized for AI agents with minimal token output.
token-savings
Show estimated token savings from soul memory
codebase-learn
Learn and remember codebase structure to minimize future token usage. Records architectural knowledge, file purposes, and patterns as a connected graph.
claude-md-architect
CLAUDE.md file generation and optimization for Claude Code projects. Capabilities: initialize project instructions, analyze codebase context, optimize existing CLAUDE.md, apply Anthropic best practices, reduce token usage, improve effectiveness. Actions: init, create, optimize, enhance CLAUDE.md files. Keywords: CLAUDE.md, project instructions, Claude Code setup, project context, codebase analysis, Anthropic best practices, token optimization, project configuration, AI instructions, coding guidelines, project rules, workspace setup. Use when: initializing CLAUDE.md for new projects, optimizing existing project instructions, setting up Claude Code for a codebase, improving AI coding guidelines.
repomix
Repository packaging for AI/LLM analysis. Capabilities: pack repos into single files, generate AI-friendly context, codebase snapshots, security audit prep, filter/exclude patterns, token counting, multiple output formats. Actions: pack, generate, export, analyze repositories for LLMs. Keywords: Repomix, repository packaging, LLM context, AI analysis, codebase snapshot, Claude context, ChatGPT context, Gemini context, code packaging, token count, file filtering, security audit, third-party library analysis, context window, single file output. Use when: packaging codebases for AI, generating LLM context, creating codebase snapshots, analyzing third-party libraries, preparing security audits, feeding repos to Claude/ChatGPT/Gemini.
prompt-enhancer
Prompt engineering and optimization for AI/LLMs. Capabilities: transform unclear prompts, reduce token usage, improve structure, add constraints, optimize for specific models, backward-compatible rewrites. Actions: improve, enhance, optimize, refactor, compress prompts. Keywords: prompt engineering, prompt optimization, token efficiency, LLM prompt, AI prompt, clarity, structure, system prompt, user prompt, few-shot, chain-of-thought, instruction tuning, prompt compression, token reduction, prompt rewrite, semantic preservation. Use when: improving unclear prompts, reducing token consumption, optimizing LLM outputs, restructuring verbose requests, creating system prompts, enhancing prompt clarity.
skill-builder
Build efficient, scalable Claude Code skills using progressive disclosure and token optimization. Use when creating new skills, optimizing existing skills, or learning skill development patterns. Provides templates, checklists, and working examples.
pricing-guidance
Claude API pricing, tier recommendations, token cost optimization, and ROI calculations for CDP features.
context_editing_guide
Managing the context window, optimizing token usage, and summarization strategies for long conversations.