Agent Skills: ai-llm-inference

Operational patterns for LLM inference: latency budgeting, tail-latency control, caching, batching/scheduling, quantization/compression, parallelism, and reliable serving at scale. Emphasizes production-grade performance, cost control, and observability.
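As a flavor of the latency-budgeting pattern this skill covers, the sketch below splits a hypothetical end-to-end latency SLO into a prefill (time-to-first-token) reservation and a per-token decode budget. The numbers and function name are illustrative assumptions, not part of the skill itself.

```python
# Minimal latency-budget sketch with hypothetical numbers: reserve the
# time-to-first-token (prefill) from the end-to-end SLO, then spread the
# remainder across the maximum number of decode steps.

def decode_budget_ms(slo_ms: float, ttft_ms: float, max_new_tokens: int) -> float:
    """Per-token decode budget (ms) after reserving TTFT from the SLO."""
    remaining = slo_ms - ttft_ms
    if remaining <= 0:
        raise ValueError("TTFT alone exceeds the end-to-end SLO")
    return remaining / max_new_tokens

# Example: a 2 s SLO with 400 ms prefill and up to 256 output tokens
# leaves 6.25 ms per decoded token.
print(round(decode_budget_ms(2000.0, 400.0, 256), 2))
```

If the measured per-token decode latency exceeds this budget, the usual levers are the ones listed above: tighter batching/scheduling, quantization, or capping `max_new_tokens`.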

Category: Uncategorized
ID: majiayu000/claude-skill-registry/ai-llm-inference

Install this agent skill locally:

pnpm dlx add-skill https://github.com/majiayu000/claude-skill-registry/ai-llm-inference

Skill Files

Browse the full folder contents for ai-llm-inference.
