
Agent Skills with tag: token-optimization

46 skills match this tag. Use tags to discover related Agent Skills and explore similar workflows.

ruleset-optimization

Guidelines for optimizing Claude rulesets and instruction files (CLAUDE.md, settings.json) using context efficiency principles. Includes strategies for skill extraction, progressive disclosure, token savings calculation, and deduplication. Manually invoke when optimizing rulesets, reducing context size, extracting content to skills, or improving ruleset organization.

token-optimization · progressive-disclosure · context-partitioning · duplicate-detection
ilude · 5
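The token-savings calculation this entry mentions can be sketched with a rough chars-per-token heuristic. This is an assumption for illustration only; an exact count requires the target model's tokenizer.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # (Assumption; a real count needs the model's tokenizer.)
    return max(1, len(text) // 4)

def token_savings(before: str, after: str) -> tuple[int, float]:
    """Absolute and percentage token savings from rewriting a ruleset passage."""
    b, a = estimate_tokens(before), estimate_tokens(after)
    return b - a, 100.0 * (b - a) / b

verbose = "Always remember to run the complete test suite before every single commit."
concise = "Run tests before committing."
saved, pct = token_savings(verbose, concise)
```

Summing this over every deduplicated or extracted passage gives the overall context-size reduction for a ruleset edit.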

prompt-optimizer

Optimize prompts for better AI performance. Use when user says "improve this prompt for better results", "optimize this prompt to reduce tokens", "apply prompt engineering best practices to this", "make this prompt more effective", "help me refine this system prompt", or "tune this prompt for the AI model I'm using".

prompt-engineering · prompt-refinement · token-optimization · conciseness
Uniswap · 82

sms-text-optimizer

Condense messages to 160 characters without losing meaning. Remove unnecessary words while keeping tone.

sms · conciseness · token-optimization · communication
OneWave-AI · 237

context-compression

Design and evaluate context compression strategies for long-running agent sessions. Use when agents exhaust memory, need to summarize conversation history, or when optimizing tokens-per-task rather than tokens-per-request.

memory-management · conversational-memory · token-optimization · summarization
muratcankoylan · 142
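The compaction idea this entry describes, folding older turns into a summary so the tokens-per-task total stays bounded, can be sketched as follows. The `fake_summarize` stand-in is hypothetical; a real skill would call the model.

```python
def estimate_tokens(text: str) -> int:
    # Crude ~4-chars-per-token heuristic (assumption, not a real tokenizer).
    return len(text) // 4 + 1

def compact_history(history: list[str], budget: int, summarize, keep: int = 2) -> list[str]:
    """Fold older turns into one summary turn when the estimated total
    exceeds `budget`; the most recent `keep` turns stay verbatim."""
    if len(history) <= keep or sum(map(estimate_tokens, history)) <= budget:
        return history
    older, recent = history[:-keep], history[-keep:]
    return ["[summary] " + summarize(older)] + recent

# Hypothetical stand-in for an LLM summarization call.
fake_summarize = lambda turns: f"{len(turns)} earlier turns condensed"

history = [f"turn {i}: " + "x" * 90 for i in range(5)]
compacted = compact_history(history, budget=60, summarize=fake_summarize)
```

Keeping the last few turns verbatim preserves immediate working context while the summary carries the long-range state.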

context-engineering

Strategies for managing LLM context windows effectively in AI agents. Use when building agents that handle long conversations, multi-step tasks, tool orchestration, or need to maintain coherence across extended interactions.

context-windows · sliding-window · token-optimization · agent-memory
itsmostafa · 10
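A minimal sliding-window sketch of the kind this entry covers: keep the system message pinned and retain only the newest messages that fit a token budget. The per-message cost function is a rough heuristic, not a real tokenizer.

```python
def sliding_window(messages: list[str], budget: int) -> list[str]:
    """Keep the first (system) message plus the most recent messages
    whose estimated token total still fits within `budget`."""
    cost = lambda m: len(m) // 4 + 1  # rough per-message estimate (assumption)
    system, rest = messages[0], messages[1:]
    kept, total = [], cost(system)
    for msg in reversed(rest):  # walk newest to oldest
        if total + cost(msg) > budget:
            break
        kept.append(msg)
        total += cost(msg)
    return [system] + kept[::-1]

msgs = ["You are a helpful agent.", "a" * 40, "b" * 40, "c" * 40]
windowed = sliding_window(msgs, budget=30)
```

Walking newest-to-oldest guarantees the most recent turns survive trimming, which is usually what coherence across extended interactions requires.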

progressive-disclosure

Template and guide for restructuring large documentation files into token-efficient directory structures. Reduces context bloat by 40-60% while maintaining accessibility.

token-optimization · file-organization · document-templates · repository-structure
gptme · 1111

octave-compression

Specialized workflow for transforming verbose natural language into semantic OCTAVE structures. Requires octave-literacy to be loaded first.

natural-language-processing · semantic-layer · token-optimization · octave-literacy
elevanaltd · 26

octave-ultra-mythic

Ultra-high density compression using mythological atoms and semantic shorthand. Preserves soul and constraints at 60% compression for identity transmission, binding protocols, and extreme token scarcity.

token-optimization · compression · semantic-layer · identity-preservation
elevanaltd · 26

nav-compact

Clear conversation context while preserving knowledge via context marker. Use when user says "clear context", "start fresh", "done with this task", or when approaching token limits.

agent-memory · context-partitioning · context-window · token-optimization
alekspetrov · 504

context-compression

This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.

autonomous-agent · summarization · token-optimization · prompt-engineering
muratcankoylan · 5,808463

Page 3 of 3 · 46 results