70201 Skills Available

Find awesome Agent Skills

Agent-Skills.md is an agent skills marketplace that helps you find the right agent skills for you.


obsidian

Manage prompts in your Obsidian vault. Use for saving, listing, and loading reusable prompts. Triggers on /obsidian commands, Obsidian vault operations, or prompt management requests.

leweii
0

ginx-skill

Develop HTTP APIs, middleware, error codes, and i18n strings using the ginx framework conventions. Use when creating or modifying APIs in apis/, defining error codes, adding i18n strings, or when the user asks to follow project conventions for HTTP endpoints, routes, middleware, or error handling.

shrewx
0

festival-operator

This skill should be used when the user asks about "festival operations", "event management", "vendor management", "lost and found procedures", "security protocols", "customer service at events", "handling difficult customers", "festival emergencies", "marketing communications", or discusses managing festivals, winter events, or public gatherings.

clovis1122
0

powershell-skill

Execute PowerShell commands on Windows systems with security constraints.

hpppm
0

modern-doc


tychota
0

morpho-solana-frontend

Build production-ready frontend for Morpho Blue lending protocol on Solana. Covers all 26 program instructions across supply/borrow, flash loans, liquidations, authorization, and admin features. Uses Next.js 14, Anchor client, Jupiter wallet adapter, and Kamino-style UI/UX. Integrates with morpho-solana-builder skill for contract understanding.

raunit-dev
0

PDF Manipulation

Enables Claude to read, manipulate, and fill out PDF forms.

bowbozaa
0

festival-operations

Expert knowledge for running winter festival operations (Security, Marketing, CX, Lost & Found).

scrappywyrm
0

unsloth-tts

Fine-tuning Text-to-Speech (TTS) models with Unsloth for voice cloning and synthetic speech (triggers: TTS, text-to-speech, voice cloning, Orpheus-TTS, audio fine-tuning, speech synthesis).

cuba6112
0

unsloth-stt

Fine-tuning Speech-to-Text models like Whisper using Unsloth's optimized LoRA pipeline. Triggers: stt, whisper, transcription, audio fine-tuning, speech-to-text, audio normalization.

cuba6112
0

unsloth-quantization

Utilizing Dynamic 4-bit quantization, FP8 training, and 8-bit optimizers to minimize VRAM usage without sacrificing accuracy. Triggers: quantization, dynamic 4-bit, fp8, bitsandbytes, adamw_8bit, qat.

cuba6112
0

unsloth-sft

Supervised fine-tuning using SFTTrainer, instruction formatting, and multi-turn dataset preparation with triggers like sft, instruction tuning, chat templates, sharegpt, alpaca, conversation_extension, and SFTTrainer.

cuba6112
0

torchserve

Model serving engine for PyTorch. Focuses on MAR packaging, custom handlers for preprocessing/inference, and management of multi-GPU worker scaling. (torchserve, mar-file, handler, basehandler, model-archiver, inference-api)

cuba6112
0

torchtext

Natural Language Processing utilities for PyTorch (Legacy). Includes tokenizers, vocabulary building, and DataPipe-based dataset handling for text processing pipelines. (torchtext, tokenizer, vocab, datapipe, regextokenizer, nlp-pipeline)

cuba6112
0

vector-databases

Design vector database ingestion and retrieval pipelines (points + payloads, filtered similarity search, multi-stage hybrid retrieval, index maintenance). Use when building RAG/vector search flows or debugging retrieval quality; triggers: vector database, RAG, embeddings, hybrid search, filtered search, Qdrant, Weaviate, Chroma.

cuba6112
0

unsloth-vision

Fine-tuning multimodal vision-language models (Llama 3.2 Vision, Qwen2.5 VL) using optimized vision layers (triggers: vision models, multimodal, Llama 3.2 Vision, Qwen2.5 VL, UnslothVisionDataCollator, finetune_vision_layers).

cuba6112
0

torchvision

Computer vision library for PyTorch featuring pretrained models, advanced image transforms (v2), and utilities for handling complex data types like bounding boxes and masks. (torchvision, transforms, tvtensor, resnet, cutmix, mixup, pretrained models, vision transforms)

cuba6112
0
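
As a flavor of what this skill covers, here is a minimal sketch of a v2 training pipeline plus a pretrained backbone; the crop size and normalization statistics are common ImageNet defaults, not anything this skill prescribes:

```python
import torch
from torchvision.transforms import v2
from torchvision.models import resnet50, ResNet50_Weights

# Typical v2 training-time pipeline: tensor conversion, augmentation, scaling.
transforms = v2.Compose([
    v2.ToImage(),                           # PIL image / ndarray -> tv_tensors.Image
    v2.RandomResizedCrop(size=224, antialias=True),
    v2.RandomHorizontalFlip(p=0.5),
    v2.ToDtype(torch.float32, scale=True),  # uint8 [0, 255] -> float32 [0, 1]
    v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = resnet50(weights=ResNet50_Weights.DEFAULT)  # pretrained backbone
```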

unsloth-core

Core fundamentals of Unsloth for fast LLM fine-tuning, covering FastLanguageModel setup, optimized gradient checkpointing, and native inference acceleration (triggers: unsloth, FastLanguageModel, from_pretrained, get_peft_model, for_inference, gradient checkpointing).

cuba6112
0
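
A minimal sketch of the workflow those triggers refer to; the model name and LoRA settings below are illustrative assumptions, not recommendations from the skill:

```python
from unsloth import FastLanguageModel

# Load a 4-bit base model (the model name here is just an example).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; "unsloth" selects the optimized gradient checkpointing.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
    use_gradient_checkpointing="unsloth",
)

# ... train ...

FastLanguageModel.for_inference(model)  # switch to the accelerated inference path
```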

unsloth-cpt

Strategies for continued pretraining and domain adaptation in Unsloth (triggers: continued pretraining, CPT, domain adaptation, lm_head, embed_tokens, rsLoRA, embedding_learning_rate).

cuba6112
0

unsloth-datasets

Standardizing and formatting datasets for Unsloth, including chat template conversion and synthetic data generation (triggers: chat templates, ShareGPT, Alpaca, conversation_extension, add_new_tokens, standardize_sharegpt, formatting_prompts_func).

cuba6112
0

unsloth-dpo

Direct Preference Optimization (DPO) for aligning models with preference data without separate reward models. Triggers: dpo, preference optimization, rlhf, ref_model=none, patchdpotrainer, dpotrainer.

cuba6112
0

unsloth-fft

Performing full fine-tuning (FFT) in Unsloth with 100% exact weight updates and optimized gradient checkpointing. Triggers include fft, full fine-tuning, full_finetuning, exact fine-tuning, and weight updates.

cuba6112
0

unsloth-gguf

Exporting fine-tuned models to GGUF format for deployment in llama.cpp, Ollama, and local serving tools. Triggers: gguf, quantization export, llama.cpp, ollama, save_pretrained_gguf, modelfile.

cuba6112
0
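
A sketch of the export path, assuming an Unsloth-loaded (or fine-tuned) model; the checkpoint name, output directory, and quantization preset are example choices:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # swap in your fine-tuned checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Write GGUF files for llama.cpp / Ollama; q4_k_m is a common llama.cpp preset.
model.save_pretrained_gguf("model_gguf", tokenizer, quantization_method="q4_k_m")
```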

unsloth-grpo

Implementation of Group Relative Policy Optimization (GRPO) for training reasoning models, optimized for 8x memory savings (triggers: GRPO, reasoning, DeepSeek-R1, reinforcement learning, RLVR, GRPOTrainer, thinking tokens).

cuba6112
0

unsloth-inference

Deploying fine-tuned models for production inference using native kernel optimization, vLLM, or SGLang. Triggers: inference, serving, vllm, sglang, for_inference, model merging, openai api.

cuba6112
0

unsloth-long-context

Training models on extended context lengths using optimized RoPE scaling and memory-efficient attention kernels. Triggers: long context, max_seq_length, rope scaling, large context window, flex attention.

cuba6112
0

unsloth-lora

Configuring and optimizing 16-bit Low-Rank Adaptation (LoRA) and Rank-Stabilized LoRA (rsLoRA) for efficient LLM fine-tuning using triggers like lora, qlora, rslora, rank selection, lora_alpha, lora_dropout, and target_modules.

cuba6112
0

unsloth-models

Guidance on selecting and configuring supported model architectures like Llama 4, DeepSeek-R1, and Qwen3. Triggers: llama 4, deepseek-r1, qwen3, gemma 3, model selection, instruct vs base.

cuba6112
0

unsloth-orpo

One-step preference alignment using Odds Ratio Preference Optimization (ORPO) (triggers: ORPO, preference optimization, alignment, ORPOTrainer, log_odds_ratio, binary preference).

cuba6112
0

unsloth-qlora

Advanced 4-bit quantization techniques using Unsloth and BitsAndBytes for extreme VRAM efficiency (triggers: QLoRA, 4-bit, load_in_4bit, bnb-4bit, VRAM optimization, dynamic quantization).

cuba6112
0

pytorch-core

Core PyTorch fundamentals including tensor operations, autograd, nn.Module architecture, and training loop orchestration. Covers optimizations like pin_memory and lazy module initialization. (pytorch, tensor, autograd, nn.Module, optimizer, training loop, state_dict, pin_memory, lazylinear, requires_grad)

cuba6112
0
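
For orientation, a self-contained toy loop touching the pieces listed above (tensors, autograd, nn.Module, optimizer, state_dict):

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x, y = torch.randn(64, 10), torch.randn(64, 1)  # toy data

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # autograd fills .grad on every parameter
    optimizer.step()

torch.save(model.state_dict(), "model.pt")  # persist weights via state_dict
```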

prompt-engineering

Comprehensive prompt engineering techniques for Claude models. Use this skill when crafting, optimizing, or debugging prompts for Claude API, Claude Code, or any Claude-powered application. Covers system prompts, role prompting, multishot examples, chain of thought, XML structuring, long context handling, extended thinking, prompt chaining, Claude 4.x-specific best practices, and agentic orchestration including subagents, agent loops, skills, MCP integration, and multi-agent workflows.

cuba6112
0

uv-advanced

Advanced usage of uv, the extremely fast Python package and project manager from Astral. Use this skill when working with uv for project management (uv init, uv add, uv run, uv lock, uv sync), workspaces and monorepos, dependency resolution strategies (universal, platform-specific, constraints, overrides), Docker containerization, PEP 723 inline script metadata, uvx tool execution, Python version management, pip interface migration, pyproject.toml configuration, or any advanced uv workflow. Covers workspaces, resolution strategies, Docker best practices, CI/CD integration, and migration from pip/poetry/pipenv.

cuba6112
0

agentic-patterns

Design and operate multi-agent orchestration patterns (ReAct loops, evaluator-optimizer, orchestrator-workers, tool routing) for LLM systems. Use when building or debugging agent workflows, tool-use loops, or multi-step task delegation; triggers: agentic, multi-agent, orchestration, ReAct, evaluator-optimizer, tool-use, handoff.

cuba6112
0

torchaudio

Audio signal processing library for PyTorch. Covers feature extraction (spectrograms, mel-scale), waveform manipulation, and GPU-accelerated data augmentation techniques. (torchaudio, melscale, spectrogram, pitchshift, specaugment, waveform, resample)

cuba6112
0

torch-compile

Optimize PyTorch with torch.compile (TorchDynamo/Inductor), focusing on compile overhead, graph breaks, and benchmark methodology. Use when speeding up PyTorch models or debugging compile behavior; triggers: torch.compile, torchdynamo, inductor, graph break, pytorch optimization.

cuba6112
0
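
A minimal sketch of the API shape; actual speedups depend on the model, and the first call always pays a one-time compile cost:

```python
import torch

def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

compiled = torch.compile(f)  # TorchDynamo captures the graph, Inductor compiles it

x = torch.randn(1024, 1024)
compiled(x)  # first call: graph capture + codegen (slow)
compiled(x)  # later calls: optimized kernel (fast)

# To surface graph breaks instead of silently falling back to eager:
strict = torch.compile(f, fullgraph=True)
strict(x)
```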

tool-calling

Define and run tool-calling patterns for LLMs (schema design, call loops, validation, parallel calls). Use when building function/tool calling workflows or debugging tool selection and arguments; triggers: tool-calling, function-calling, tool schema, tool declaration, parallel function calling.

cuba6112
0
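
A provider-neutral sketch of the pattern; the declaration below follows the common JSON-Schema convention, but exact field names vary by API, and `get_weather` is a made-up tool:

```python
import json

# Tool declaration the model sees (shape varies by provider).
GET_WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Get the current temperature for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    return json.dumps({"city": city, "temp_c": 21})  # stubbed implementation

TOOLS = {"get_weather": get_weather}

def run_tool_call(call: dict) -> str:
    """Validate and dispatch one model-issued tool call."""
    fn = TOOLS.get(call["name"])
    if fn is None:
        return json.dumps({"error": f"unknown tool: {call['name']}"})
    args = call["arguments"]
    if isinstance(args, str):  # some APIs return arguments as a JSON string
        args = json.loads(args)
    return fn(**args)

print(run_tool_call({"name": "get_weather", "arguments": {"city": "Oslo"}}))
```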

structured-outputs

Techniques for ensuring LLM responses adhere to strict JSON schemas, utilizing Pydantic models, JSON mode, and schema-based refusals. Triggers: structured-output, pydantic, json-schema, json-mode, llm-response-parsing.

cuba6112
0
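
A small sketch of the Pydantic side, assuming Pydantic v2 (`model_json_schema` / `model_validate_json`); the `Invoice` model and raw string are made-up examples:

```python
from pydantic import BaseModel, ValidationError

class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str

schema = Invoice.model_json_schema()  # JSON Schema to send with a JSON-mode request

raw = '{"vendor": "ACME", "total": 99.5, "currency": "EUR"}'  # pretend LLM output
try:
    invoice = Invoice.model_validate_json(raw)  # parse + validate in one step
    print(invoice.total)
except ValidationError as err:
    print(err)  # on failure, feed the errors back to the model for a repair pass
```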

pytorch-quantization

Techniques for model size reduction and inference acceleration using INT8 quantization, including Post-Training Quantization (PTQ) and Quantization Aware Training (QAT). (quantization, int8, qat, fbgemm, qnnpack, ptq, dequantize)

cuba6112
0

pytorch-onnx

Exporting PyTorch models to ONNX format for cross-platform deployment. Includes handling dynamic axes, graph optimization in ONNX Runtime, and INT8 model quantization. (onnx, onnxruntime, torch.onnx.export, dynamic_axes, constant-folding, edge-deployment)

cuba6112
0
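
A minimal export sketch with a dynamic batch axis; the tiny linear model and the tensor names are placeholders:

```python
import torch
from torch import nn

model = nn.Linear(10, 2).eval()
dummy = torch.randn(1, 10)

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},  # variable batch
)

# Round-trip check with ONNX Runtime (requires the onnxruntime package):
import onnxruntime as ort
sess = ort.InferenceSession("model.onnx")
print(sess.run(None, {"input": dummy.numpy()}))
```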

pytorch-lightning

High-level training framework for PyTorch that abstracts boilerplate while maintaining flexibility. Includes the Trainer, LightningModule, and support for multi-GPU scaling and reproducibility. (lightning, pytorch-lightning, lightningmodule, trainer, callback, ddp, fast_dev_run, seed_everything)

cuba6112
0
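
A compact sketch of the LightningModule/Trainer split, assuming the Lightning 2.x import style; the toy regressor and dataset are illustrative:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import lightning as L

class LitRegressor(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(10, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

L.seed_everything(42)  # reproducibility
loader = DataLoader(TensorDataset(torch.randn(256, 10), torch.randn(256, 1)), batch_size=32)
trainer = L.Trainer(fast_dev_run=True)  # one-batch smoke test of the whole loop
trainer.fit(LitRegressor(), loader)
```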

pytorch-geometric

Library for Graph Neural Networks (GNNs). Covers MessagePassing layers, modular aggregation schemes, and handling large graphs via mini-batching with disjoint graph representation. (pyg, messagepassing, gnn, gcn, gat, edge_index, knn_graph, global_mean_pool)

cuba6112
0

pytorch-distributed

Distributed training strategies including DistributedDataParallel (DDP) and Fully Sharded Data Parallel (FSDP). Covers multi-node setup, checkpointing, and process management using torchrun. (ddp, fsdp, distributeddataparallel, torchrun, nccl, rank, process-group)

cuba6112
0

pytorch-cuda

PyTorch CUDA environment and performance guidance, with emphasis on CUDA 13 toolkit/driver requirements, PyTorch wheel compatibility, and runtime checks. Use when configuring PyTorch on NVIDIA GPUs, debugging CUDA setup, or migrating to CUDA 13; triggers: pytorch cuda, cuda 13, driver version, nvcc, torch.version.cuda, tf32, streams.

cuba6112
0
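
The runtime checks mentioned above amount to a few lines:

```python
import torch

print(torch.__version__)          # e.g. 2.x.y+cu1zz (wheel's CUDA tag)
print(torch.version.cuda)         # CUDA version the wheel was built against
print(torch.cuda.is_available())  # False often means a driver/wheel mismatch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    # TF32 trades a little matmul precision for throughput on Ampere+ GPUs.
    torch.backends.cuda.matmul.allow_tf32 = True
```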

ollama-rag

Build RAG systems with Ollama local + cloud models. Latest cloud models include DeepSeek-V3.2 (GPT-5 level), Qwen3-Coder-480B (1M context), MiniMax-M2. Use for document Q&A, knowledge bases, and agentic RAG. Covers LangChain, LlamaIndex, ChromaDB, and embedding models.

cuba6112
0

python-async

Asyncio patterns in Python for high-concurrency IO-bound tasks. Includes coroutines, task management, and asynchronous resource handling. Triggers: asyncio, python-async, coroutine, await, async-gather, async-generator, event-loop.

cuba6112
0
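
A minimal sketch of the gather pattern; `asyncio.sleep` stands in for real non-blocking IO:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # placeholder for a real awaitable (HTTP call, DB query)
    return f"{name} done"

async def main() -> None:
    # gather schedules all three coroutines concurrently on one event loop.
    results = await asyncio.gather(fetch("a", 0.2), fetch("b", 0.1), fetch("c", 0.3))
    print(results)  # ['a done', 'b done', 'c done'] -- order matches the call order

asyncio.run(main())
```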

pytest-patterns

Advanced Python testing strategies with Pytest, covering fixtures, matrix testing with parametrization, and async test architecture. Triggers: pytest, fixtures, parametrize, pytest-asyncio, matrix-testing, yield-fixture.

cuba6112
0
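
A small sketch combining a yield fixture with parametrization; the stub "database" is illustrative:

```python
import pytest

@pytest.fixture
def db():
    conn = {"connected": True}  # setup (stub connection)
    yield conn                  # the test runs here
    conn["connected"] = False   # teardown runs after the test finishes

@pytest.mark.parametrize("a,b,expected", [(1, 2, 3), (0, 0, 0), (-1, 1, 0)])
def test_add(db, a, b, expected):
    assert db["connected"]
    assert a + b == expected    # executed once per parameter tuple
```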

numpy-ufuncs

Universal functions (ufuncs) for vectorization, including reductions, in-place operations, and custom Python-function wrapping. Triggers: ufunc, vectorize, reduce, accumulate, frompyfunc, in-place.

cuba6112
0
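
A quick sketch of the ufunc machinery named in those triggers:

```python
import numpy as np

x = np.arange(1, 6)          # [1 2 3 4 5]

print(np.add.reduce(x))      # 15, equivalent to x.sum()
print(np.add.accumulate(x))  # [ 1  3  6 10 15], running totals
np.multiply(x, 2, out=x)     # in-place doubling, no temporary array allocated

# Wrap a plain Python function as a (slow, object-dtype) ufunc:
clip10 = np.frompyfunc(lambda v: min(v, 10), 1, 1)
print(clip10(x))             # [2 4 6 8 10]
```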

numpy-structured

Structured and record arrays for C-interoperability, binary blob interpretation, and multi-field tabular data handling. Triggers: structured array, record array, compound dtype, multi-field index.

cuba6112
0

numpy-string-ops

Vectorized string manipulation using the char module and modern string alternatives, including cleaning and search operations. Triggers: string operations, numpy.char, text cleaning, substring search.

cuba6112
0


Adoption

Agent Skills are supported by leading AI development tools.

FAQ

Frequently asked questions about Agent Skills.

01

What are Agent Skills?

Agent Skills are reusable, production-ready capability packs for AI agents. Each skill lives in its own folder and is described by a SKILL.md file with metadata and instructions.

02

What does this agent-skills.md site do?

Agent Skills is a curated directory that indexes skill repositories and lets you browse, preview, and download skills in a consistent format.

03

Where are skills stored in a repo?

By default, the site scans the skills/ folder. You can also submit a URL that points directly to a specific skills folder.

04

What is required inside SKILL.md?

SKILL.md must include YAML frontmatter with at least name and description. The body contains the actual guidance and steps for the agent.
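
A minimal SKILL.md skeleton consistent with that answer; everything beyond the required name and description fields is illustrative:

```markdown
---
name: my-skill
description: One-line summary the agent uses to decide when to load this skill.
---

# My Skill

Step-by-step guidance, conventions, and examples for the agent to follow
whenever this skill is triggered.
```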

05

How can I submit a repo?

Click Submit in the header and paste a GitHub URL that points to a skills folder. We’ll parse it and add any valid skills to the directory.