Agent Skills: quantizing-models-bitsandbytes
Quantizes LLMs to 8-bit or 4-bit precision for a 50-75% memory reduction with minimal accuracy loss. Use it when GPU memory is limited, when you need to fit larger models, or when you want faster inference. Supports INT8, NF4, and FP4 formats, QLoRA training, and 8-bit optimizers. Works with HuggingFace Transformers; a loading sketch follows.
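As a minimal sketch of the loading path this skill covers, the snippet below quantizes a model to 4-bit NF4 through Transformers' `BitsAndBytesConfig`. It assumes the `transformers`, `accelerate`, and `bitsandbytes` packages are installed and a CUDA GPU is available; the checkpoint name is a placeholder, not something prescribed by the skill. Swapping the config for `BitsAndBytesConfig(load_in_8bit=True)` gives the INT8 path instead.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization; double quantization also compresses the
# quantization constants themselves for a little extra memory savings.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # "fp4" is the other 4-bit format
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
    bnb_4bit_use_double_quant=True,
)

model_id = "meta-llama/Llama-2-7b-hf"     # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                    # let accelerate place the layers
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

The weights are quantized on the fly as the checkpoint loads, so no separate conversion step is needed; the same quantized model is the usual starting point for QLoRA fine-tuning.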
Tags: model-compression, quantization, llm, huggingface, gpu-memory-optimization, machine-learning
ID: ovachiever/droid-tings/quantizing-models-bitsandbytes
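The skill also mentions 8-bit optimizers, which cut optimizer-state memory rather than weight memory. Below is a hedged sketch using `bitsandbytes.optim.Adam8bit` on a toy module; the layer sizes and learning rate are illustrative assumptions, not values from the skill.

```python
import torch
import bitsandbytes as bnb

# Toy stand-in for a real model.
model = torch.nn.Linear(4096, 4096).cuda()

# Adam8bit keeps the first and second moments in 8-bit blocks,
# reducing optimizer-state memory roughly 75% versus 32-bit Adam.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)

# Standard training step; the optimizer is a drop-in replacement.
inputs = torch.randn(8, 4096, device="cuda")
loss = model(inputs).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```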