Agent Skills: torch-pipeline-parallelism

Guidance for implementing pipeline parallelism in PyTorch for distributed model training. Use this skill when tasks involve pipeline parallelism, distributed training with model partitioning across GPUs/ranks, All-Forward-All-Backward (AFAB) scheduling, or inter-rank tensor communication via torch.distributed.
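To illustrate the AFAB schedule the description refers to, here is a minimal, framework-free sketch of the step order a single pipeline stage executes: every microbatch's forward pass first, then the backward passes in reverse microbatch order. The helper name `afab_schedule` is hypothetical (not part of this skill or of PyTorch); in a real stage, each step would be bracketed by point-to-point calls such as `torch.distributed.send`/`torch.distributed.recv` to exchange activations and gradients with neighboring ranks.

```python
def afab_schedule(num_microbatches: int):
    """Return the ordered (phase, microbatch) steps one pipeline stage
    runs under All-Forward-All-Backward (AFAB) scheduling."""
    steps = []
    for mb in range(num_microbatches):
        # Real stage: recv activations from the previous rank,
        # run the forward pass, send activations to the next rank.
        steps.append(("forward", mb))
    for mb in reversed(range(num_microbatches)):
        # Real stage: recv output grads from the next rank,
        # run the backward pass, send input grads to the previous rank.
        steps.append(("backward", mb))
    return steps
```

With 3 microbatches this yields forward 0, 1, 2 followed by backward 2, 1, 0; because backward for the last microbatch can start only after all forwards finish, AFAB trades a larger pipeline bubble for a simple, deadlock-free communication pattern.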

Category: Uncategorized
ID: benchflow-ai/skillsbench/torch-pipeline-parallelism

Install this agent skill locally:

pnpm dlx add-skill https://github.com/benchflow-ai/skillsbench/torch-pipeline-parallelism
