Agent Skills: torch-pipeline-parallelism
Guidance for implementing PyTorch pipeline parallelism for distributed model training. This skill should be used when tasks involve implementing pipeline parallelism, distributed training with model partitioning across GPUs/ranks, AFAB (All-Forward-All-Backward) scheduling, or inter-rank tensor communication using torch.distributed.
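The AFAB (All-Forward-All-Backward) schedule mentioned above runs every microbatch's forward pass before any backward pass begins, with backwards proceeding in reverse microbatch order. A minimal sketch of that ordering (the function name and tuple encoding are illustrative, not part of the skill's API):

```python
def afab_schedule(num_microbatches: int):
    """Build the AFAB (All-Forward-All-Backward) op order for one pipeline stage.

    All microbatches run their forward pass first; backward passes then run
    in reverse microbatch order (last-in, first-out), matching autograd's
    dependency structure. Each entry is ("F", i) or ("B", i) for microbatch i.
    """
    forwards = [("F", i) for i in range(num_microbatches)]
    backwards = [("B", i) for i in reversed(range(num_microbatches))]
    return forwards + backwards

print(afab_schedule(3))
# [('F', 0), ('F', 1), ('F', 2), ('B', 2), ('B', 1), ('B', 0)]
```

In a real implementation each "F" would receive activations from the previous rank (e.g. via `torch.distributed.recv`) and each "B" would send gradients back; this sketch only captures the scheduling order.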
ID: benchflow-ai/skillsbench/torch-pipeline-parallelism