Agent Skills: gguf-quantization
GGUF format and llama.cpp quantization for efficient CPU/GPU inference. Use when deploying models on consumer hardware or Apple Silicon, or when flexible 2- to 8-bit quantization is needed without requiring a GPU.
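For context, a minimal sketch of the kind of workflow this skill targets: loading a quantized GGUF file through the llama-cpp-python bindings and running inference on CPU, with optional GPU layer offload. The model path and generation parameters below are placeholders, not part of the skill itself.

```python
# Minimal sketch: run a 4-bit GGUF model with llama-cpp-python.
# The model path is hypothetical; any 2-8 bit GGUF file works.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,        # context window size
    n_threads=8,       # CPU threads; tune for your hardware
    n_gpu_layers=0,    # 0 = pure CPU; raise to offload layers via Metal/CUDA
)

output = llm(
    "Explain GGUF quantization in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```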
Category: Uncategorized
ID: davila7/claude-code-templates/gguf-quantization
Install this agent skill locally.
Skill Files
Browse the full folder contents for gguf-quantization.