Agent Skills: awq-quantization

Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.
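The core AWQ idea is that a small fraction of input channels carry much larger activations, and protecting the corresponding weight rows matters more than protecting large weights. AWQ scales those rows up before quantizing and folds the inverse scale into the activations, which is exact in full precision and only redistributes rounding error. A minimal numpy sketch of that idea, using synthetic data, simplified per-column INT4 round-to-nearest quantization, and a heuristic `mean(|x|)**0.5` scale (the real method searches over the scaling exponent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer y = X @ W. A couple of input channels carry much
# larger activations ("salient" channels), as AWQ observes in real LLMs.
X = rng.normal(size=(256, 16))
X[:, :2] *= 10.0                      # salient activation channels
W = rng.normal(size=(16, 16))
W[:2, :] *= 0.25                      # salient channels' weights need not be
                                      # large, so magnitude-based rounding
                                      # does nothing to protect them

def quantize_rtn(w, bits=4):
    """Simplified round-to-nearest uniform quantization, per output column."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max(axis=0) / qmax
    return np.round(w / scale) * scale

y_ref = X @ W
err_rtn = np.abs(y_ref - X @ quantize_rtn(W)).mean()

# AWQ-style reparameterization: scale salient weight rows up before
# quantizing, fold the inverse scale into the activations. The transform
# is exact in full precision; it only changes where rounding error lands.
s = np.abs(X).mean(axis=0) ** 0.5     # heuristic; real AWQ searches this exponent
s /= s.mean()
err_awq = np.abs(y_ref - (X / s) @ quantize_rtn(W * s[:, None])).mean()

print(f"plain RTN error:        {err_rtn:.4f}")
print(f"activation-aware error: {err_awq:.4f}")  # lower: salient rows protected
```

This is an illustration of the scaling trick only; the shapes, seed, and scale heuristic are hypothetical, and production use goes through the skill's tooling rather than hand-rolled quantization.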

Category: Uncategorized
ID: davila7/claude-code-templates/awq-quantization

Install this agent skill locally:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/awq-quantization
