context-summary
Use when the context window is getting full. Creates a summary file and instructions for starting a new session.
long-context
Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32k-128k+ tokens), extending pre-trained models beyond original context limits, or implementing efficient positional encodings. Covers rotary embeddings, attention biases, interpolation methods, and extrapolation strategies for LLMs.
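For illustration only, a minimal NumPy sketch of rotary embeddings with linear position interpolation: the scale factor compresses positions in an extended window back into the range the model was trained on. The function names and the 4k-to-8k figures are illustrative assumptions, not part of this skill.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0, scale=1.0):
    """Rotary embedding angles; scale < 1 implements position interpolation
    (new positions are compressed back into the trained range)."""
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)        # (dim/2,)
    return np.outer(positions * scale, inv_freq)            # (seq, dim/2)

def apply_rope(x, angles):
    """Rotate consecutive (even, odd) feature pairs of x by the given angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Illustrative: run a model trained on 4k positions at an 8k window.
train_len, target_len = 4096, 8192
positions = np.arange(target_len)
angles = rope_angles(positions, dim=64, scale=train_len / target_len)
q = np.random.randn(target_len, 64).astype(np.float32)
q_rot = apply_rope(q, angles)
```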
context_editing_guide
Context window management, token optimization, and summarization strategies for long conversations.
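As one possible summarization strategy, a small sketch: once the conversation exceeds a token budget, everything between the system prompt and the most recent turns is replaced by a single summary message. The message format, helper names, and the 4-characters-per-token estimate are assumptions, not part of this guide.

```python
def compact_history(messages, max_tokens, summarize, keep_recent=6):
    """If the conversation exceeds max_tokens, replace everything between the
    system prompt and the last keep_recent messages with one summary message.
    `summarize` is any callable mapping a list of messages to a short string."""
    def count(ms):
        return sum(len(m["content"]) // 4 for m in ms)  # rough ~4 chars/token heuristic
    if count(messages) <= max_tokens:
        return messages
    head = messages[:1] if messages and messages[0]["role"] == "system" else []
    body = messages[len(head):]
    if len(body) <= keep_recent:
        return messages                                  # nothing old enough to summarize
    old, recent = body[:-keep_recent], body[-keep_recent:]
    summary = {"role": "system",
               "content": "Summary of earlier conversation: " + summarize(old)}
    return head + [summary] + recent
```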
Convex Agents Context
Customizes what information the LLM receives for each generation. Use this to control message history, implement RAG context injection, search across threads, and provide custom context.
state-management
Cache management, configuration best practices, and progressive disclosure patterns for efficient context window usage. Use when working with large responses, optimizing token costs, or managing plugin state across operations.
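A rough sketch of the progressive disclosure idea, assuming a hypothetical cache directory and function name: a large response stays on disk and only a short preview plus a handle enters the context window, to be fetched in full only if needed.

```python
import hashlib
import pathlib

CACHE_DIR = pathlib.Path(".cache/responses")  # assumed location; adjust to your plugin layout

def disclose_progressively(response: str, preview_chars: int = 500) -> dict:
    """Store a large response on disk and return only a preview plus a file handle."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    key = hashlib.sha256(response.encode()).hexdigest()[:16]
    path = CACHE_DIR / f"{key}.txt"
    path.write_text(response)
    return {"preview": response[:preview_chars],
            "full_result": str(path),
            "chars": len(response)}
```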
nav-compact
Clear conversation context while preserving knowledge via a context marker. Use when the user says "clear context", "start fresh", "done with this task", or when approaching token limits.
context-optimization
This skill should be used when the user asks to "optimize context", "reduce token costs", "improve context efficiency", "implement KV-cache optimization", "partition context", or mentions context limits, observation masking, context budgeting, or extending effective context capacity.
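One way observation masking under a context budget could look, sketched with hypothetical message and helper names: older tool observations are replaced with short stubs until the prompt fits the budget, leaving the early prefix untouched so cached attention over it stays reusable.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic (~4 characters per token); swap in a real tokenizer if available.
    return max(1, len(text) // 4)

def mask_old_observations(messages, budget_tokens, keep_last=3):
    """Replace tool observations older than the last `keep_last` with stubs
    until the running total fits within `budget_tokens`."""
    total = sum(estimate_tokens(m["content"]) for m in messages)
    tool_idx = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    maskable = tool_idx[:-keep_last] if keep_last else tool_idx
    for i in maskable:
        if total <= budget_tokens:
            break
        before = estimate_tokens(messages[i]["content"])
        messages[i]["content"] = "[observation elided to save context]"
        total -= before - estimate_tokens(messages[i]["content"])
    return messages
```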