cuba6112

76 skills published on GitHub.

structured-outputs

Techniques for ensuring LLM responses adhere to strict JSON schemas, utilizing Pydantic models, JSON mode, and schema-based refusals. Triggers: structured-output, pydantic, json-schema, json-mode, llm-response-parsing.

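A minimal sketch of the pattern this skill covers, assuming Pydantic v2; `call_llm` and the `Invoice` model are hypothetical stand-ins for your own client and schema:

```python
import json
from pydantic import BaseModel, ValidationError

class Invoice(BaseModel):
    vendor: str
    total_usd: float
    line_items: list[str]

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # replace with your chat-completions client

def extract_invoice(text: str, max_retries: int = 2) -> Invoice:
    schema = json.dumps(Invoice.model_json_schema())
    prompt = f"Return ONLY JSON matching this schema:\n{schema}\n\nText:\n{text}"
    for _ in range(max_retries + 1):
        raw = call_llm(prompt)
        try:
            return Invoice.model_validate_json(raw)   # parse + validate in one step
        except ValidationError as err:
            # feed the validation errors back so the model can self-correct
            prompt += f"\n\nYour last reply failed validation:\n{err}\nFix it."
    raise RuntimeError("LLM never produced schema-conforming JSON")
```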

tool-calling

Define and run tool-calling patterns for LLMs (schema design, call loops, validation, parallel calls). Use when building function/tool calling workflows or debugging tool selection and arguments; triggers: tool-calling, function-calling, tool schema, tool declaration, parallel function calling.

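A provider-agnostic sketch of the call loop described above; the schema shape follows the usual JSON-Schema function declarations, and `request_tool_calls` is a hypothetical stand-in for whatever client returns the model's tool-call requests:

```python
import json

TOOLS = {
    "get_weather": {
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "fn": lambda city: {"city": city, "temp_c": 21},
    },
}

def request_tool_calls(messages, tool_schemas):
    raise NotImplementedError  # replace with your LLM client

def run_tool_loop(messages):
    schemas = [{"name": n, "description": t["description"], "parameters": t["parameters"]}
               for n, t in TOOLS.items()]
    while True:
        calls = request_tool_calls(messages, schemas)
        if not calls:                        # model answered directly; loop ends
            return messages
        for call in calls:                   # each call: {"name": ..., "arguments": "<json>"}
            tool = TOOLS[call["name"]]
            args = json.loads(call["arguments"])     # parse/validate arguments before dispatch
            result = tool["fn"](**args)
            messages.append({"role": "tool", "name": call["name"],
                             "content": json.dumps(result)})
```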

torch-compile

Optimize PyTorch with torch.compile (TorchDynamo/Inductor), focusing on compile overhead, graph breaks, and benchmark methodology. Use when speeding up PyTorch models or debugging compile behavior; triggers: torch.compile, torchdynamo, inductor, graph break, pytorch optimization.

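A small sketch of the benchmark methodology this skill refers to, assuming a CUDA device: warm up the compiled model so one-off compile overhead is not counted as steady-state runtime, and synchronize before reading the clock.

```python
import time
import torch

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU()).cuda()
compiled = torch.compile(model)            # TorchDynamo capture + Inductor codegen
x = torch.randn(64, 512, device="cuda")

for _ in range(3):                         # warm-up: triggers compilation
    compiled(x)
torch.cuda.synchronize()

t0 = time.perf_counter()
for _ in range(100):
    compiled(x)
torch.cuda.synchronize()                   # flush queued kernels before stopping the clock
print(f"{(time.perf_counter() - t0) / 100 * 1e3:.3f} ms/iter")
```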

torchaudio

Audio signal processing library for PyTorch. Covers feature extraction (spectrograms, mel-scale), waveform manipulation, and GPU-accelerated data augmentation techniques. (torchaudio, melscale, spectrogram, pitchshift, specaugment, waveform, resample)

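A short sketch of the feature-extraction and augmentation pieces named here ("clip.wav" is a placeholder path):

```python
import torch
import torchaudio
import torchaudio.transforms as T

waveform, sr = torchaudio.load("clip.wav")            # shape: (channels, samples)
waveform = T.Resample(orig_freq=sr, new_freq=16_000)(waveform)

mel = T.MelSpectrogram(sample_rate=16_000, n_fft=400, hop_length=160, n_mels=80)
log_mel = torch.log(mel(waveform) + 1e-6)             # (channels, n_mels, frames)

# SpecAugment-style masking as train-time augmentation
augment = torch.nn.Sequential(T.FrequencyMasking(freq_mask_param=15),
                              T.TimeMasking(time_mask_param=35))
augmented = augment(log_mel)
```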

torchserve

Model serving engine for PyTorch. Focuses on MAR packaging, custom handlers for preprocessing/inference, and management of multi-GPU worker scaling. (torchserve, mar-file, handler, basehandler, model-archiver, inference-api)

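A hedged sketch of a custom handler as described above; the exact request body shape depends on how clients call the inference API, so the JSON-tensor convention below is only an example:

```python
# handler.py — package into a .mar with torch-model-archiver, e.g.:
#   torch-model-archiver --model-name clf --version 1.0 \
#       --serialized-file model.pt --handler handler.py --export-path model_store
import json

import torch
from ts.torch_handler.base_handler import BaseHandler

class JsonTensorHandler(BaseHandler):
    """Accepts {"input": [floats]} request bodies and returns raw model outputs."""

    def preprocess(self, data):
        rows = []
        for row in data:                               # one element per request in the batch
            body = row.get("data") or row.get("body")
            if isinstance(body, (bytes, bytearray)):
                body = json.loads(body.decode("utf-8"))
            rows.append(body["input"])
        return torch.tensor(rows, dtype=torch.float32)

    def inference(self, batch):
        with torch.no_grad():
            return self.model(batch)                   # self.model is loaded by BaseHandler.initialize

    def postprocess(self, outputs):
        return outputs.tolist()                        # one response item per request
```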

torchtext

Natural Language Processing utilities for PyTorch (Legacy). Includes tokenizers, vocabulary building, and DataPipe-based dataset handling for text processing pipelines. (torchtext, tokenizer, vocab, datapipe, regextokenizer, nlp-pipeline)

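A minimal sketch of the legacy tokenizer-plus-vocab flow (torchtext is no longer actively developed, so this assumes a release that still ships these utilities):

```python
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer("basic_english")
corpus = ["TorchText builds vocabularies.", "Vocabularies map tokens to ids."]

vocab = build_vocab_from_iterator((tokenizer(line) for line in corpus),
                                  specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])               # unknown tokens fall back to <unk>

ids = vocab(tokenizer("tokens map to ids"))           # list[int] ready for a text pipeline
```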

torchvision

Computer vision library for PyTorch featuring pretrained models, advanced image transforms (v2), and utilities for handling complex data types like bounding boxes and masks. (torchvision, transforms, tvtensor, resnet, cutmix, mixup, pretrained models, vision transforms)

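A sketch of the pretrained-model, transforms v2, and batch-augmentation pieces, assuming torchvision ≥ 0.16 where CutMix/MixUp live under transforms.v2:

```python
import torch
from torchvision import models
from torchvision.transforms import v2

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

preprocess = v2.Compose([
    v2.Resize(256, antialias=True),
    v2.CenterCrop(224),
    v2.ToDtype(torch.float32, scale=True),            # uint8 [0, 255] -> float [0, 1]
    v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

images = torch.randint(0, 256, (8, 3, 256, 256), dtype=torch.uint8)   # fake uint8 batch
labels = torch.randint(0, 1000, (8,))
images = preprocess(images)

# CutMix / MixUp operate on whole batches of (images, integer labels)
cutmix_or_mixup = v2.RandomChoice([v2.CutMix(num_classes=1000), v2.MixUp(num_classes=1000)])
images, labels = cutmix_or_mixup(images, labels)
logits = model(images)
```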

unsloth-core

Core fundamentals of Unsloth for fast LLM fine-tuning, covering FastLanguageModel setup, optimized gradient checkpointing, and native inference acceleration (triggers: unsloth, FastLanguageModel, from_pretrained, get_peft_model, for_inference, gradient checkpointing).

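A minimal sketch of the standard Unsloth setup this skill describes; the model name is just an example 4-bit checkpoint:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",   # Unsloth's offloaded gradient checkpointing
)

# ... train with your preferred trainer ...

FastLanguageModel.for_inference(model)       # switch on the fast native generation path
prompt = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**prompt, max_new_tokens=32)
```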

unsloth-cpt

Strategies for continued pretraining and domain adaptation in Unsloth (triggers: continued pretraining, CPT, domain adaptation, lm_head, embed_tokens, rsLoRA, embedding_learning_rate).

unsloth-datasets

Standardizing and formatting datasets for Unsloth, including chat template conversion and synthetic data generation (triggers: chat templates, ShareGPT, Alpaca, conversation_extension, add_new_tokens, standardize_sharegpt, formatting_prompts_func).

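A sketch of the ShareGPT-to-chat-template conversion; the dataset name is only an example of ShareGPT-formatted data:

```python
from datasets import load_dataset
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template, standardize_sharegpt

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit", max_seq_length=2048, load_in_4bit=True)
tokenizer = get_chat_template(tokenizer, chat_template="llama-3")

dataset = load_dataset("philschmid/guanaco-sharegpt-style", split="train")
dataset = standardize_sharegpt(dataset)               # normalize from/value keys to role/content

def formatting_prompts_func(examples):
    texts = [tokenizer.apply_chat_template(convo, tokenize=False)
             for convo in examples["conversations"]]
    return {"text": texts}

dataset = dataset.map(formatting_prompts_func, batched=True)
```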

unsloth-dpo

Direct Preference Optimization (DPO) for aligning models with preference data without separate reward models. Triggers: dpo, preference optimization, rlhf, ref_model=none, patchdpotrainer, dpotrainer.

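A hedged sketch of the ref_model=None pattern; TRL's trainer kwargs have shifted between releases (e.g. tokenizer vs processing_class), so treat this as the shape of the setup rather than an exact recipe:

```python
from unsloth import FastLanguageModel, PatchDPOTrainer
PatchDPOTrainer()                              # patch TRL's DPOTrainer before constructing it

from datasets import Dataset
from trl import DPOConfig, DPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/zephyr-sft-bnb-4bit", max_seq_length=1024, load_in_4bit=True)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])

# toy preference data: DPO expects prompt / chosen / rejected columns
pairs = Dataset.from_dict({
    "prompt":   ["What is 2 + 2?"],
    "chosen":   ["2 + 2 = 4."],
    "rejected": ["2 + 2 = 5."],
})

trainer = DPOTrainer(
    model=model,
    ref_model=None,                            # with LoRA, the adapter-off pass acts as the reference
    args=DPOConfig(output_dir="dpo-out", per_device_train_batch_size=1,
                   beta=0.1, max_steps=10),
    train_dataset=pairs,
    processing_class=tokenizer,                # older TRL releases call this `tokenizer=`
)
trainer.train()
```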

unsloth-fft

Performing full fine-tuning (FFT) in Unsloth with 100% exact weight updates and optimized gradient checkpointing. Triggers include fft, full fine-tuning, full_finetuning, exact fine-tuning, and weight updates.

unsloth-gguf

Exporting fine-tuned models to GGUF format for deployment in llama.cpp, Ollama, and local serving tools. Triggers: gguf, quantization export, llama.cpp, ollama, save_pretrained_gguf, modelfile.

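A short sketch of the export step; quantization method names mirror llama.cpp's (q4_k_m, q8_0, f16, ...):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit", max_seq_length=2048, load_in_4bit=True)

# ... fine-tune here ...

model.save_pretrained_gguf("gguf_out", tokenizer, quantization_method="q4_k_m")
# Ollama can then load the result via a Modelfile whose FROM line points at the .gguf file.
```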

unsloth-grpo

Implementation of Group Relative Policy Optimization (GRPO) for training reasoning models, optimized for 8x memory savings (triggers: GRPO, reasoning, DeepSeek-R1, reinforcement learning, RLVR, GRPOTrainer, thinking tokens).

unsloth-inference

Deploying fine-tuned models for production inference using native kernel optimization, vLLM, or SGLang. Triggers: inference, serving, vllm, sglang, for_inference, model merging, openai api.

unsloth-long-context

Training models on extended context lengths using optimized RoPE scaling and memory-efficient attention kernels. Triggers: long context, max_seq_length, rope scaling, large context window, flex attention.

unsloth-lora

Configuring and optimizing 16-bit Low-Rank Adaptation (LoRA) and Rank-Stabilized LoRA (rsLoRA) for efficient LLM fine-tuning using triggers like lora, qlora, rslora, rank selection, lora_alpha, lora_dropout, and target_modules.

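A sketch of the rank/alpha/rsLoRA knobs this skill covers; with use_rslora=True the adapter scaling becomes lora_alpha / sqrt(r) rather than lora_alpha / r, which keeps larger ranks stable:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit", max_seq_length=2048, load_in_4bit=True)

model = FastLanguageModel.get_peft_model(
    model,
    r=32,                                    # rank: lower for light tasks, higher for heavier adaptation
    lora_alpha=32,
    lora_dropout=0.0,                        # 0 is the fastest, Unsloth-optimized path
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_rslora=True,                         # rank-stabilized LoRA scaling
)
```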

unsloth-models

Guidance on selecting and configuring supported model architectures like Llama 4, DeepSeek-R1, and Qwen3. Triggers: llama 4, deepseek-r1, qwen3, gemma 3, model selection, instruct vs base.

unsloth-orpo

One-step preference alignment using Odds Ratio Preference Optimization (ORPO) (triggers: ORPO, preference optimization, alignment, ORPOTrainer, log_odds_ratio, binary preference).

unsloth-qlora

Advanced 4-bit quantization techniques using Unsloth and BitsAndBytes for extreme VRAM efficiency (triggers: QLoRA, 4-bit, load_in_4bit, bnb-4bit, VRAM optimization, dynamic quantization).

unsloth-quantization

Utilizing Dynamic 4-bit quantization, FP8 training, and 8-bit optimizers to minimize VRAM usage without sacrificing accuracy. Triggers: quantization, dynamic 4-bit, fp8, bitsandbytes, adamw_8bit, qat.

unsloth-sft

Supervised fine-tuning using SFTTrainer, instruction formatting, and multi-turn dataset preparation with triggers like sft, instruction tuning, chat templates, sharegpt, alpaca, conversation_extension, and SFTTrainer.

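A hedged sketch of SFTTrainer on a pre-formatted "text" column (see unsloth-datasets above); as with DPO, some kwarg names differ across TRL versions:

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit", max_seq_length=2048, load_in_4bit=True)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])

data = Dataset.from_dict({
    "text": ["### Instruction:\nSay hi.\n\n### Response:\nHi there!"]})

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,              # older TRL releases call this `tokenizer=`
    train_dataset=data,
    args=SFTConfig(output_dir="sft-out", dataset_text_field="text",
                   per_device_train_batch_size=2, max_steps=10),
)
trainer.train()
```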

unsloth-stt

Fine-tuning Speech-to-Text models like Whisper using Unsloth's optimized LoRA pipeline. Triggers: stt, whisper, transcription, audio fine-tuning, speech-to-text, audio normalization.

unsloth-tts

Fine-tuning Text-to-Speech (TTS) models with Unsloth for voice cloning and synthetic speech (triggers: TTS, text-to-speech, voice cloning, Orpheus-TTS, audio fine-tuning, speech synthesis).

unsloth-vision

Fine-tuning multimodal vision-language models (Llama 3.2 Vision, Qwen2.5 VL) using optimized vision layers (triggers: vision models, multimodal, Llama 3.2 Vision, Qwen2.5 VL, UnslothVisionDataCollator, finetune_vision_layers).

vector-databases

Design vector database ingestion and retrieval pipelines (points + payloads, filtered similarity search, multi-stage hybrid retrieval, index maintenance). Use when building RAG/vector search flows or debugging retrieval quality; triggers: vector database, RAG, embeddings, hybrid search, filtered search, Qdrant, Weaviate, Chroma.

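A minimal sketch of the points-plus-payloads and filtered-search pattern, using Qdrant's in-memory client as the example backend:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import (Distance, FieldCondition, Filter, MatchValue,
                                  PointStruct, VectorParams)

client = QdrantClient(":memory:")                      # in-process mode, handy for tests
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

client.upsert("docs", points=[
    PointStruct(id=1, vector=[0.1, 0.9, 0.1, 0.0], payload={"lang": "en", "source": "faq"}),
    PointStruct(id=2, vector=[0.8, 0.1, 0.0, 0.1], payload={"lang": "de", "source": "blog"}),
])

# similarity search restricted by a payload filter (newer clients also expose query_points)
hits = client.search(
    collection_name="docs",
    query_vector=[0.1, 0.8, 0.2, 0.0],
    query_filter=Filter(must=[FieldCondition(key="lang", match=MatchValue(value="en"))]),
    limit=3,
)
print([(h.id, h.score, h.payload) for h in hits])
```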
