Agent Skills: training-llms-megatron

Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use it when training models larger than 1B parameters, when you need maximum GPU efficiency (up to 47% MFU on H100), or when you require tensor, pipeline, sequence, context, or expert parallelism. Megatron is a production-ready framework used to train models such as Nemotron, LLaMA, and DeepSeek.
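To make the parallelism dimensions concrete, here is a minimal launch sketch combining tensor, pipeline, sequence, and context parallelism with Megatron-LM's pretrain_gpt.py. The parallelism flags shown are real Megatron-LM arguments, but the model dimensions, paths, and batch sizes are illustrative placeholders, and exact flags can vary by Megatron-LM version, so verify them against the release you install.

# Single node, 8 GPUs. World size = TP x PP x CP x DP,
# so 2 x 2 x 1 x DP = 8 leaves DP = 2 for data parallelism.
# (MoE models would additionally set --expert-model-parallel-size.)
torchrun --nproc_per_node=8 pretrain_gpt.py \
  --tensor-model-parallel-size 2 \
  --pipeline-model-parallel-size 2 \
  --context-parallel-size 1 \
  --sequence-parallel \
  --num-layers 32 --hidden-size 4096 --num-attention-heads 32 \
  --seq-length 4096 --max-position-embeddings 4096 \
  --micro-batch-size 1 --global-batch-size 512 \
  --train-iters 100000 --lr 3.0e-4 --lr-decay-style cosine \
  --bf16 \
  --data-path /path/to/dataset_prefix \
  --tokenizer-type GPT2BPETokenizer \
  --vocab-file /path/to/vocab.json --merge-file /path/to/merges.txt

Note that --sequence-parallel only takes effect with tensor parallelism enabled (TP > 1 here), and the product of the tensor, pipeline, and context parallel sizes must divide the total GPU count, with the remainder consumed by data parallelism.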

Category: Uncategorized
ID: davila7/claude-code-templates/training-llms-megatron

Install this agent skill to your local machine:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/training-llms-megatron
