Agent Skills: moe-training

Train Mixture of Experts (MoE) models using DeepSpeed or HuggingFace. Use when training large-scale models with limited compute (roughly 5× cost reduction versus dense models), implementing sparse architectures such as Mixtral 8x7B or DeepSeek-V3, or scaling model capacity without a proportional increase in compute. Covers MoE architectures, routing mechanisms, load balancing, expert parallelism, and inference optimization.
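As a rough illustration of the routing and load-balancing ideas listed above, here is a minimal sketch of a top-2 MoE layer in PyTorch with a Switch-Transformer-style auxiliary balancing loss. It is not taken from this skill's files: the class name `TopKMoELayer` and all hyperparameters are hypothetical, and a real training setup would delegate expert parallelism and capacity limits to a framework such as DeepSpeed-MoE.

```python
# Illustrative sketch only; names and shapes are assumptions, not this skill's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.num_experts = num_experts
        self.top_k = top_k
        # Router produces one logit per expert for every token.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, d_model) -- tokens already flattened for routing.
        logits = self.router(x)                          # (num_tokens, num_experts)
        probs = F.softmax(logits, dim=-1)
        top_p, top_idx = probs.topk(self.top_k, dim=-1)  # keep the k best experts per token
        top_p = top_p / top_p.sum(dim=-1, keepdim=True)  # renormalize gate weights

        out = torch.zeros_like(x)
        # Naive dispatch loop for readability; production code uses grouped GEMMs
        # and all-to-all communication across expert-parallel ranks.
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += top_p[mask, slot].unsqueeze(-1) * expert(x[mask])

        # Auxiliary load-balancing loss (Switch-Transformer style): penalizes the
        # product of each expert's routed-token fraction and its mean gate probability,
        # which is minimized when usage is uniform across experts.
        token_frac = F.one_hot(top_idx[:, 0], self.num_experts).float().mean(dim=0)
        prob_frac = probs.mean(dim=0)
        aux_loss = self.num_experts * torch.sum(token_frac * prob_frac)
        return out, aux_loss
```

In practice the auxiliary loss is scaled by a small coefficient (around 0.01 in the Switch Transformer paper) and added to the main training loss, and expert capacity is capped so that no single expert receives a disproportionate share of tokens.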

Category: Uncategorized
ID: davila7/claude-code-templates/moe-training

Install this agent skill locally:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/moe-training
