ruleset-optimization
Guidelines for optimizing Claude rulesets and instruction files (CLAUDE.md, settings.json) using context efficiency principles. Includes strategies for skill extraction, progressive disclosure, token savings calculation, and deduplication. Manually invoke when optimizing rulesets, reducing context size, extracting content to skills, or improving ruleset organization.
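A minimal sketch of the token-savings arithmetic this skill's description mentions, assuming a rough 4-characters-per-token heuristic; the function names, the stub text, and the 20% invocation rate are illustrative assumptions, not part of the skill:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # Swap in a real tokenizer for accurate counts.
    return len(text) // 4

def skill_extraction_savings(section: str, stub: str, invoke_rate: float) -> float:
    """Expected tokens saved per session by extracting `section` from an
    always-loaded ruleset into a skill loaded on demand."""
    always_loaded = estimate_tokens(section)
    # After extraction the stub is always loaded, while the full section
    # only costs tokens in the fraction of sessions that invoke the skill.
    after_extraction = estimate_tokens(stub) + invoke_rate * always_loaded
    return always_loaded - after_extraction

section = "step-by-step release instructions ... " * 100  # stand-in for a long section
stub = "Release steps: invoke the deploy-checklist skill."
print(f"~{skill_extraction_savings(section, stub, invoke_rate=0.2):.0f} tokens/session")
```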
prompt-optimizer
Optimize prompts for better AI performance. Use when the user says "improve this prompt for better results", "optimize this prompt to reduce tokens", "apply prompt engineering best practices to this", "make this prompt more effective", "help me refine this system prompt", or "tune this prompt for the AI model I'm using".
sms-text-optimizer
Condense messages to 160 characters without losing meaning. Remove unnecessary words while keeping tone.
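For context on the 160-character target: that is the single-segment limit for GSM-7 encoded SMS, while messages containing non-GSM-7 characters fall back to UCS-2, where one segment is only 70 characters. A small sketch of a segment check, with an intentionally simplified GSM-7 character-set test that ignores extension characters:

```python
# Simplified GSM 03.38 basic character set (extension chars like € or [ omitted).
GSM7_BASIC = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def fits_one_segment(text: str) -> bool:
    # 160 chars if the whole message is GSM-7, 70 if it needs UCS-2.
    limit = 160 if set(text) <= GSM7_BASIC else 70
    return len(text) <= limit

assert fits_one_segment("Mtg moved to 3pm Thu, room 4B. Reply Y to confirm.")
```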
context-compression
Design and evaluate context compression strategies for long-running agent sessions. Use when agents exhaust memory, need to summarize conversation history, or when optimizing tokens-per-task rather than tokens-per-request.
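A minimal sketch of the compaction loop such a strategy implies: once accumulated history exceeds a budget, older turns collapse into one summary message, so the summarization cost is paid per task rather than per request. The budget, the message shape, and the `summarize` stub are assumptions for illustration:

```python
# Sketch of threshold-triggered compaction for a long-running agent session.
def estimate_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic; use a real tokenizer in practice

def summarize(messages: list[dict]) -> str:
    # Stand-in for an LLM call that emits a structured recap
    # (goals, decisions made, open threads, relevant file state, ...).
    return f"[summary of {len(messages)} earlier messages]"

def compact(history: list[dict], budget: int = 8000, keep_recent: int = 6) -> list[dict]:
    """Collapse older turns into one summary message once `budget` is exceeded."""
    total = sum(estimate_tokens(m["content"]) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [{"role": "user", "content": summarize(old)}] + recent
```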
context-engineering
Strategies for managing LLM context windows effectively in AI agents. Use when building agents that handle long conversations, multi-step tasks, tool orchestration, or need to maintain coherence across extended interactions.
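One strategy from this family, sketched under assumptions about message shape: stale tool outputs are often the cheapest context to reclaim, since the agent rarely rereads them once it has acted on them:

```python
# Sketch: reclaim context by stubbing out tool results from earlier turns
# while keeping the most recent ones intact. Message shape is an assumption.
def trim_stale_tool_results(history: list[dict], keep_last: int = 2) -> list[dict]:
    """Replace all but the `keep_last` newest tool outputs with a short stub."""
    tool_idx = [i for i, m in enumerate(history) if m["role"] == "tool"]
    stale = set(tool_idx[:-keep_last]) if keep_last else set(tool_idx)
    return [
        {**m, "content": "[tool output elided]"} if i in stale else m
        for i, m in enumerate(history)
    ]
```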
progressive-disclosure
Template and guide for restructuring large documentation files into token-efficient directory structures. Reduces context bloat by 40-60% while maintaining accessibility.
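A minimal sketch of the restructuring the template describes: a monolithic file becomes a short index plus per-section files, so only the relevant section enters context. The split rule (top-level `## ` headings) and the file naming are assumptions:

```python
# Sketch: split a monolithic doc into an index plus per-section files so an
# agent can load one section instead of the whole file.
import re
from pathlib import Path

def split_doc(source: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    sections = re.split(r"(?m)^## ", source.read_text())
    index = ["# Index", ""]
    for body in sections[1:]:  # sections[0] is any preamble before the first heading
        title, _, rest = body.partition("\n")
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
        (out_dir / f"{slug}.md").write_text(f"## {title}\n{rest}")
        index.append(f"- [{title}]({slug}.md)")
    (out_dir / "INDEX.md").write_text("\n".join(index) + "\n")

split_doc(Path("CLAUDE.md"), Path("docs/"))  # hypothetical paths
```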
octave-compression
Specialized workflow for transforming verbose natural language into semantic OCTAVE structures. REQUIRES octave-literacy to be loaded first.
octave-ultra-mythic
Ultra-high density compression using mythological atoms and semantic shorthand. Preserves soul and constraints at 60% compression for identity transmission, binding protocols, and extreme token scarcity.
nav-compact
Clear conversation context while preserving knowledge via a context marker. Use when user says "clear context", "start fresh", "done with this task", or when approaching token limits.
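A sketch of what preserving knowledge via a context marker could look like: before clearing, the session writes a small marker file capturing task state for the next session to read first. The file path and fields below are assumptions, not the skill's actual format:

```python
# Sketch: persist a context marker before clearing, so a fresh session can
# resume without the old transcript. File name and fields are assumptions.
import json, time
from pathlib import Path

MARKER = Path(".claude/context-marker.json")

def write_marker(task: str, decisions: list[str], next_steps: list[str]) -> None:
    MARKER.parent.mkdir(parents=True, exist_ok=True)
    MARKER.write_text(json.dumps({
        "saved_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "task": task,
        "decisions": decisions,    # choices already made, so they aren't relitigated
        "next_steps": next_steps,  # where the fresh session should pick up
    }, indent=2))

write_marker(
    task="optimize ruleset for token efficiency",
    decisions=["extracted deploy checklist to a skill"],
    next_steps=["measure savings with a real tokenizer"],
)
```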
context-compression
This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.