Agent Skills: long-context
Extend the context windows of transformer models with RoPE, YaRN, ALiBi, and position interpolation. Use this skill when processing long documents (32k–128k+ tokens), extending pre-trained models beyond their original context limits, or implementing efficient positional encodings. Covers rotary embeddings, attention biases, interpolation methods, and extrapolation strategies for LLMs.
Category: Uncategorized
ID: davila7/claude-code-templates/long-context
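For orientation, here is a minimal sketch (mine, not a file from the skill) of the core idea behind position interpolation on rotary embeddings: positions are rescaled by original_length / extended_length so an extended sequence reuses the rotation range the model was trained on. The PyTorch dependency and the helper names `rope_angles` and `apply_rope` are illustrative assumptions.

```python
# Minimal sketch: RoPE with linear position interpolation (PI).
# Assumes PyTorch; names and defaults are illustrative, not from the skill.
import torch

def rope_angles(seq_len: int, head_dim: int, base: float = 10000.0,
                orig_len: int | None = None) -> torch.Tensor:
    """Per-position rotation angles; if orig_len is given and shorter
    than seq_len, linearly interpolate positions into the trained range."""
    positions = torch.arange(seq_len, dtype=torch.float32)
    if orig_len is not None and seq_len > orig_len:
        positions = positions * (orig_len / seq_len)  # squeeze into [0, orig_len)
    inv_freq = base ** (-torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim)
    return torch.outer(positions, inv_freq)  # (seq_len, head_dim // 2)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate channel pairs of x (seq_len, head_dim) by the given angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Example: run a model trained at 4k positions over 16k tokens.
q = torch.randn(16384, 64)
q_rot = apply_rope(q, rope_angles(16384, 64, orig_len=4096))
```

YaRN refines this idea by rescaling RoPE frequency bands unevenly rather than uniformly, while ALiBi avoids rotary embeddings entirely by adding a linear distance penalty to attention scores.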
Skill Files
Browse the full folder contents for long-context.