Agent Skills: common-context-optimization

Techniques to maximize context window efficiency, reduce latency, and prevent lost-in-middle issues through strategic masking and compaction. (triggers: *.log, chat-history.json, reduce tokens, optimize context, summarize history, clear output)

ID: hoangnguyen0403/agent-skills-standard/common-context-optimization

Install this agent skill locally:

pnpm dlx add-skill https://github.com/HoangNguyen0403/agent-skills-standard/tree/HEAD/.agents/skills/common/common-context-optimization

Skill Files

Browse the full folder contents for common-context-optimization.


.agents/skills/common/common-context-optimization/SKILL.md

Skill Metadata

Name
common-context-optimization
Description
Maximize context window efficiency, reduce latency, and prevent lost-in-middle issues through strategic masking and compaction. Use when token budgets are tight, tool outputs flood the context, conversations drift from intent, or latency spikes from cache misses.

Priority: P1 (OPTIMIZATION)

1. Observation Masking (Noise Reduction)

Problem: Large tool outputs (logs, JSON lists) flood the context and degrade reasoning.
Solution: Replace raw output with semantic summaries after consumption.

  1. Identify outputs exceeding 50 lines or 1 KB.
  2. Extract critical data points immediately.
  3. Mask by rewriting history to replace raw data with summary placeholder.
  4. See references/masking.md for patterns.

See implementation examples for masking patterns.
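The masking steps above can be sketched as a history rewrite pass. This is a minimal illustration, not the skill's actual implementation: the function names (`mask_observation`, `summarize`) and the head/tail summary heuristic are assumptions — a real agent would substitute a semantic summarizer in place of `summarize`.

```python
# Thresholds from the steps above: outputs over 50 lines or 1 KB get masked.
MAX_LINES = 50
MAX_BYTES = 1024

def summarize(output: str) -> str:
    """Naive stand-in for a semantic summary: keep head and tail lines."""
    lines = output.splitlines()
    kept = lines[:3] + [f"... [{len(lines) - 6} lines masked] ..."] + lines[-3:]
    return "\n".join(kept)

def mask_observation(history: list[dict]) -> list[dict]:
    """Rewrite history, replacing oversized tool outputs with a placeholder."""
    masked = []
    for turn in history:
        content = turn.get("content", "")
        if turn.get("role") == "tool" and (
            len(content.splitlines()) > MAX_LINES or len(content.encode()) > MAX_BYTES
        ):
            # Critical data should be extracted *before* this rewrite (step 2).
            turn = {**turn, "content": summarize(content)}
        masked.append(turn)
    return masked
```

The key design point is that masking rewrites history after consumption: the agent reads the raw output once, extracts what it needs, and only the placeholder survives into later turns.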

2. Context Compaction (State Preservation)

Problem: Long conversations drift from the original intent.
Solution: Recursive summarization that preserves State over Dialogue.

  1. Trigger compaction every 10 turns or 8k tokens.
  2. Compact:
  • Keep: User Goal, Active Task, Current Errors, Key Decisions.
  • Drop: Chit-chat, intermediate tool calls, corrected assumptions.
  3. Format: Update the System Prompt or Memory File with the compacted state.
  4. See references/compaction.md for algorithms.

See implementation examples for compacted state format.
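As one possible shape for the compacted state, the trigger and keep/drop rules above might look like the sketch below. The thresholds come from the steps above; the field layout and the 4-characters-per-token estimate are assumptions, not a prescribed format.

```python
COMPACT_EVERY_TURNS = 10
COMPACT_AT_TOKENS = 8_000

def estimate_tokens(history: list[dict]) -> int:
    # Rough heuristic: ~4 characters per token. Swap in a real tokenizer if available.
    return sum(len(t.get("content", "")) for t in history) // 4

def should_compact(history: list[dict]) -> bool:
    """Trigger on either threshold: turn count or estimated token budget."""
    return (
        len(history) >= COMPACT_EVERY_TURNS
        or estimate_tokens(history) >= COMPACT_AT_TOKENS
    )

def compact(goal: str, active_task: str, errors: list[str], decisions: list[str]) -> list[dict]:
    """Collapse dialogue into a single state block; intermediate turns are dropped."""
    state = (
        "## Compacted State\n"
        f"- User Goal: {goal}\n"
        f"- Active Task: {active_task}\n"
        f"- Current Errors: {', '.join(errors) or 'none'}\n"
        f"- Key Decisions: {'; '.join(decisions) or 'none'}\n"
    )
    return [{"role": "system", "content": state}]
```

Note that only the four "Keep" fields survive: chit-chat, intermediate tool calls, and corrected assumptions never make it into the state block, which is what keeps the summary anchored to intent rather than dialogue.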

3. KV-Cache Awareness (Latency)

Goal: Maximize pre-fill cache hits.

  • Static Prefix: Enforce strict ordering — System -> Tools -> RAG -> User.
  • Append-Only: Never insert into middle of history; append new turns only.

Anti-Patterns

  • No raw tool dumps: Mask large outputs immediately after extracting data.
  • No unbounded growth: Compact every 10 turns to preserve intent over dialogue.
  • No middle insertions: Append-only history maximizes KV cache hits.