pytorch-lightning
Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU training, implement data pipelines, callbacks, logging (W&B, TensorBoard), and distributed training (DDP, FSDP, DeepSpeed) for scalable neural network training.
deep-learning, pytorch, distributed-training, gpu-acceleration
ovachiever
81
openrlhf-training
High-performance RLHF framework with Ray + vLLM acceleration. Use for PPO, GRPO, RLOO, and DPO training of large models (7B-70B+). Built on Ray, vLLM, and ZeRO-3; roughly 2× faster than DeepSpeedChat thanks to its distributed architecture and GPU resource sharing.
reinforcement-learning, distributed-training, gpu-acceleration, ray
ovachiever
81
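A typical OpenRLHF run is launched as a Ray job rather than a Python script. The fragment below is a hedged sketch of a Ray-based PPO launch: the model paths and dataset are placeholders, and the flag names follow the OpenRLHF README from memory, so they should be checked against the installed version's `--help` before use.

```shell
# Start a Ray head node, then submit the PPO training entrypoint to it.
ray start --head --port 6379

# Placeholder paths; flags are assumptions to verify against the OpenRLHF docs.
python3 -m openrlhf.cli.train_ppo_ray \
  --pretrain ./actor_model \
  --reward_pretrain ./reward_model \
  --prompt_data ./prompts.jsonl \
  --zero_stage 3 \
  --vllm_num_engines 2 \
  --save_path ./ckpt
```

The vLLM engines serve fast rollout generation while ZeRO-3 shards the training-side optimizer state, which is where the framework's speedup over DeepSpeedChat comes from.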
training-pipelines
Master training pipelines: orchestration, distributed training, and hyperparameter tuning.
training-orchestration, distributed-training, hyperparameter-tuning, model-training
pluginagentmarketplace
1