Agent Skills: tensorrt-llm
Optimizes LLM inference with NVIDIA TensorRT-LLM for maximum throughput and minimum latency. Use it for production deployment on NVIDIA GPUs (e.g. A100/H100), when you need inference substantially faster than stock PyTorch (the project advertises 10-100x), or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.
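To make the FP8/INT4 quantization benefit concrete, here is a back-of-envelope sketch of the weight-memory footprint at the precisions mentioned above. The 7B parameter count is an illustrative assumption, not tied to any specific model, and the figures cover weights only; a real TensorRT-LLM engine also needs memory for the KV cache and activations.

```python
def weight_memory_gib(n_params: float, bits_per_weight: int) -> float:
    """GiB needed to store n_params weights at the given precision."""
    return n_params * bits_per_weight / 8 / 2**30

# Hypothetical 7B-parameter model at FP16 (baseline), FP8, and INT4.
n = 7e9
for name, bits in [("FP16", 16), ("FP8", 8), ("INT4", 4)]:
    print(f"{name}: {weight_memory_gib(n, bits):.1f} GiB")
```

Halving the bits per weight halves the weight footprint, which is what lets an FP8 or INT4 engine fit larger batches (or a larger model) on the same GPU.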
Category: Uncategorized
ID: davila7/claude-code-templates/tensorrt-llm