Agent Skills: distributed-llm-pretraining-torchtitan

Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism: FSDP2, tensor parallel (TP), pipeline parallel (PP), and context parallel (CP). Use it to pretrain Llama 3.1, DeepSeek V3, or custom models at scales from 8 to 512+ GPUs, with Float8 training, torch.compile, and distributed checkpointing.
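
Under the hood, torchtitan builds on PyTorch's native distributed APIs. As a rough illustration of the FSDP2 piece only, here is a minimal, hypothetical sketch (not torchtitan's actual training loop; the toy model, script name, and hyperparameters are invented, and it assumes PyTorch >= 2.6, where fully_shard is exported from torch.distributed.fsdp):

# Hypothetical FSDP2 sketch -- not torchtitan's code. Launch with, e.g.:
#   torchrun --nproc_per_node=8 fsdp2_sketch.py
# Assumes PyTorch >= 2.6 (earlier releases exposed fully_shard under
# torch.distributed._composable.fsdp instead).
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import fully_shard


def main():
    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Toy stand-in for a transformer; torchtitan shards real Llama/DeepSeek blocks.
    model = nn.Sequential(*[
        nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
        for _ in range(4)
    ]).cuda()

    # FSDP2 idiom: shard each block as its own unit, then the root module.
    for block in model:
        fully_shard(block)
    fully_shard(model)

    model = torch.compile(model)  # torch.compile composes with FSDP2
    optim = torch.optim.AdamW(model.parameters(), lr=3e-4)

    x = torch.randn(2, 128, 512, device="cuda")
    loss = model(x).float().pow(2).mean()  # dummy loss for the sketch
    loss.backward()
    optim.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()

Tensor, pipeline, and context parallelism layer on top of this via DeviceMesh and torchtitan's TOML job configs; see the torchtitan repository for the real entry points.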

Category: Uncategorized
ID: davila7/claude-code-templates/distributed-llm-pretraining-torchtitan

Install this agent skill to your local setup:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/distributed-llm-pretraining-torchtitan

Skill Files

Browse the full folder contents for distributed-llm-pretraining-torchtitan in the linked GitHub repository.
