Agent Skills: training-llms-megatron
Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use this skill when training models larger than 1B parameters, when you need maximum GPU efficiency (47% MFU on H100), or when you require tensor, pipeline, sequence, context, or expert parallelism. Megatron-Core is a production-ready framework used for Nemotron, LLaMA, and DeepSeek; a configuration sketch follows the metadata below.
Tags: training-orchestration, large-language-models, parallelism, gpu-acceleration, megatron, machine-learning
ID: ovachiever/droid-tings/training-llms-megatron
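The parallelism dimensions named above map onto Megatron-Core's process-group setup. The following is a minimal sketch of how they are typically combined at initialization, assuming a recent Megatron-Core release and an 8-GPU node launched with torchrun; the parallel sizes are illustrative placeholders, not tuned recommendations, and the skill itself may set them through its training scripts or CLI flags instead.

```python
# A minimal sketch, assuming a recent Megatron-Core release and a launch via
# torchrun (which sets RANK, WORLD_SIZE, and LOCAL_RANK). Parallel sizes are
# illustrative placeholders, not tuned recommendations.
import os

import torch
from megatron.core import parallel_state


def init_parallelism() -> None:
    # Bind this process to its local GPU before creating NCCL groups.
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Megatron-Core layers its tensor/pipeline/context/expert process groups
    # on top of the default torch.distributed process group.
    torch.distributed.init_process_group(backend="nccl")

    # On 8 GPUs: TP=2 x PP=2 leaves a data-parallel size of 2,
    # since world_size = TP * PP * CP * DP.
    parallel_state.initialize_model_parallel(
        tensor_model_parallel_size=2,    # shard each layer's weights across GPUs
        pipeline_model_parallel_size=2,  # split the layer stack into stages
        context_parallel_size=1,         # >1 shards the sequence dim for long contexts
        expert_model_parallel_size=1,    # >1 distributes MoE experts
    )
    # Note: sequence parallelism is a per-model setting (e.g.,
    # sequence_parallel=True in TransformerConfig), not a process-group size.


if __name__ == "__main__":
    init_parallelism()
    print(
        f"rank {torch.distributed.get_rank()}: "
        f"TP rank {parallel_state.get_tensor_model_parallel_rank()}, "
        f"PP rank {parallel_state.get_pipeline_model_parallel_rank()}"
    )
```

Run with, for example, `torchrun --nproc_per_node=8 init_sketch.py`. A common design choice is to keep tensor parallelism within a single node (where NVLink bandwidth is available) and let pipeline parallelism span nodes.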