Agent Skills: long-context

Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32k-128k+ tokens), extending pre-trained models beyond original context limits, or implementing efficient positional encodings. Covers rotary embeddings, attention biases, interpolation methods, and extrapolation strategies for LLMs.
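To make the techniques named above concrete, here is a minimal PyTorch sketch of two of them: RoPE cos/sin tables with linear position interpolation (positions compressed by `scale` so an extended context maps back into the trained range), and a causal ALiBi bias table. This is illustrative only and is not taken from the skill's own files; the function names (`rope_tables`, `apply_rope`, `alibi_bias`) are assumptions. YaRN, which refines interpolation by rescaling different frequency bands unevenly, is omitted here.

```python
# Minimal sketch with assumed names; not code from this skill's files.
import torch

def rope_tables(head_dim: int, max_pos: int, base: float = 10000.0,
                scale: float = 1.0):
    """Precompute RoPE cos/sin tables.

    scale < 1.0 applies linear position interpolation: positions are
    compressed (e.g. scale = trained_len / target_len) so an extended
    context maps back into the range the model was trained on.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(max_pos).float() * scale   # interpolation step
    angles = torch.outer(positions, inv_freq)           # (max_pos, head_dim // 2)
    return angles.cos(), angles.sin()

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor):
    """Rotate channel pairs of x (seq_len, head_dim) by position-dependent angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Causal ALiBi bias: head h penalizes distance (i - j) with slope 2^(-8h/H)."""
    slopes = 2.0 ** (-8.0 * torch.arange(1, num_heads + 1).float() / num_heads)
    dist = torch.arange(seq_len)[:, None] - torch.arange(seq_len)[None, :]
    return -slopes[:, None, None] * dist.clamp(min=0).float()

# Example: extend a model trained on 2k positions to 8k via interpolation.
cos, sin = rope_tables(head_dim=64, max_pos=8192, scale=2048 / 8192)
q = apply_rope(torch.randn(8192, 64), cos, sin)   # queries for one head
bias = alibi_bias(num_heads=8, seq_len=1024)      # add to attention logits
```

In practice RoPE is applied to both queries and keys before the attention product, and the ALiBi bias is added to the attention logits in place of (or alongside) positional embeddings.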

Category: Uncategorized
ID: davila7/claude-code-templates/long-context

Install this agent skill to your local machine:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/long-context

Skill Files

Browse the full folder contents for long-context in the davila7/claude-code-templates repository.
