Agent Skills: tensorrt-llm

Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.
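In-flight (continuous) batching, one of the techniques named above, lets finished requests leave the batch mid-flight and queued requests join immediately, so no slot sits idle waiting for the longest sequence. The following is an illustrative toy scheduler in plain Python, not TensorRT-LLM code; the `Request` type and `inflight_batching` function are invented for this sketch.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    rid: int                # request id (hypothetical field for this toy)
    tokens_needed: int      # how many tokens this request wants in total
    generated: list = field(default_factory=list)

def inflight_batching(requests, max_batch=2):
    """Toy continuous-batching loop: at every decode step, finished
    requests are evicted and waiting requests are admitted into the
    freed slots, instead of padding the batch to the longest request."""
    queue = deque(requests)
    active, trace = [], []
    while queue or active:
        # Admit waiting requests into free slots at every step.
        while queue and len(active) < max_batch:
            active.append(queue.popleft())
        trace.append(sorted(r.rid for r in active))  # batch composition this step
        for r in active:
            r.generated.append(f"tok{len(r.generated)}")  # one decode step each
        # Evict requests that have produced all the tokens they asked for.
        active = [r for r in active if len(r.generated) < r.tokens_needed]
    return trace

reqs = [Request(0, 3), Request(1, 1), Request(2, 2)]
print(inflight_batching(reqs))  # → [[0, 1], [0, 2], [0, 2]]
```

Note how request 1 finishes after one step and request 2 takes its slot on the very next step; a static batcher would have kept request 1's slot empty until request 0 finished.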

Category: Uncategorized
ID: davila7/claude-code-templates/tensorrt-llm

Install this agent skill locally:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/tensorrt-llm

Skill Files

Browse the full folder contents for tensorrt-llm.
