Agent Skills: peft-fine-tuning
Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use it when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. Built on HuggingFace's official PEFT library, which is integrated with the transformers ecosystem.
Category: Uncategorized
ID: davila7/claude-code-templates/peft-fine-tuning
Install this agent skill in your local environment.
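For context, the workflow this skill targets looks roughly like the sketch below: load a quantized base model and attach LoRA adapters via PEFT so that only a small fraction of the parameters are trained (the QLoRA pattern). The model name, target modules, and hyperparameters are illustrative assumptions, not values prescribed by the skill.

```python
# Minimal QLoRA sketch: frozen 4-bit base model + trainable LoRA adapters.
# Checkpoint name and hyperparameters below are placeholders, not skill defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

base_model_id = "your-org/your-7b-base-model"  # placeholder checkpoint

# 4-bit NF4 quantization keeps the frozen base weights small in GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; only these weights are trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

After training, `model.save_pretrained(...)` stores only the adapter weights, which is what makes loading and swapping multiple adapters on one base model practical at serving time.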
Skill Files
Browse the full folder contents for peft-fine-tuning.