Agent Skills: optimizing-attention-flash

Optimizes transformer attention with Flash Attention for a 2-4x speedup and a 10-20x reduction in attention memory. Use it when training or running transformers on long sequences (>512 tokens), when attention is causing GPU memory issues, or when faster inference is needed. Supports PyTorch's native SDPA, the flash-attn library, FP8 on H100, and sliding window attention.
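
For orientation, here is a minimal sketch of the kind of change this skill applies, routing attention through PyTorch's native SDPA with the Flash Attention backend. It assumes PyTorch 2.3+ on a CUDA GPU; the shapes, dtype, and causal mask are illustrative choices, not part of the skill itself.

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# Toy shapes (assumed for illustration): batch, heads, sequence length, head dim.
B, H, S, D = 2, 8, 2048, 64
q = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
k = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
v = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)

# Restrict SDPA to the Flash Attention backend; this errors out if the
# inputs (dtype, head dim, mask type, ...) are not supported by it.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

print(out.shape)  # torch.Size([2, 8, 2048, 64])
```

With no backend restriction, `scaled_dot_product_attention` picks a backend automatically; pinning it as above is mainly useful for verifying that Flash Attention is actually being used.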

Category: Uncategorized
ID: davila7/claude-code-templates/optimizing-attention-flash

Install this agent skill locally:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/optimizing-attention-flash

Skill Files

Browse the full folder contents for optimizing-attention-flash.
