Agent Skills: distributed-training

Use when training models across multiple GPUs or nodes, handling large models that don't fit in memory, or optimizing training throughput. Covers DDP, FSDP, DeepSpeed ZeRO, model/data parallelism, and gradient checkpointing.

ID: omer-metin/skills-for-antigravity/distributed-training

Install this agent skill locally:

pnpm dlx add-skill https://github.com/omer-metin/skills-for-antigravity/distributed-training
