Agent Skills: serving-llms-vllm

Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
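As a rough illustration of the workflow this skill covers, the sketch below launches vLLM's OpenAI-compatible server and queries it with the standard `openai` client. The model name, port, and parallelism settings are placeholder assumptions for this example, not values prescribed by the skill.

```python
# Minimal sketch, assuming vLLM is installed and the model fits on local GPUs.
#
# Launch the server first (shell):
#   vllm serve meta-llama/Llama-3.1-8B-Instruct \
#       --tensor-parallel-size 2 \          # shard weights across 2 GPUs
#       --gpu-memory-utilization 0.90       # leave headroom for the KV cache
#   (add --quantization awq/gptq/fp8 when serving a matching quantized checkpoint)

from openai import OpenAI

# vLLM serves at http://localhost:8000/v1 by default; the API key is ignored
# unless the server was started with --api-key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    messages=[
        {"role": "user", "content": "Summarize PagedAttention in one sentence."}
    ],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, existing clients and SDKs can point at it by changing only the base URL.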

Category: Uncategorized
ID: davila7/claude-code-templates/serving-llms-vllm

Install this agent skill locally:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/serving-llms-vllm

Skill Files

Browse the full folder contents for serving-llms-vllm in the davila7/claude-code-templates repository.
