fine-tuning
LLM fine-tuning and prompt-tuning techniques
fine-tuning, llm, prompt-tuning, model-training
plugin, agent, marketplace
1
lora
Parameter-efficient fine-tuning with Low-Rank Adaptation (LoRA). Use when fine-tuning large language models with limited GPU memory, creating task-specific adapters, or when you need to train multiple specialized models from a single base. A minimal sketch of the mechanism follows this listing.
machine-learning, large-language-models, low-rank-adaptation, parameter-efficient-fine-tuning
itsmostafa
10
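
As an illustration of the technique behind the lora skill, here is a minimal sketch of a LoRA adapter layer in plain PyTorch. The class name LoRALinear, the rank r=8, and the scaling alpha=16 are illustrative assumptions, not part of the skill itself; the point is that the pretrained weight stays frozen and only a low-rank update is trained, which is why GPU memory and trainable-parameter counts stay small.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank update: y = W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.scaling = alpha / r
        # A maps the input down to rank r, B maps back up; B starts at zero so
        # the adapter is an exact no-op before any training step.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


if __name__ == "__main__":
    # Hypothetical 768-wide layer, e.g. one attention projection of a small model.
    layer = LoRALinear(nn.Linear(768, 768))
    out = layer(torch.randn(2, 16, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(out.shape, f"trainable params: {trainable} of {total}")
```

In practice a higher-level library such as Hugging Face PEFT applies this same wrapping to selected projection layers of a base model; the sketch only shows why each wrapped layer needs gradients for just r * (in_features + out_features) extra parameters, and why several task-specific adapters can share one frozen base model.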