
Agent Skills with tag: pytorch

20 skills match this tag. Use tags to discover related Agent Skills and explore similar workflows.

torchdrug

Graph-based drug discovery toolkit. Molecular property prediction (ADMET), protein modeling, knowledge graph reasoning, molecular generation, retrosynthesis, GNNs (GIN, GAT, SchNet), 40+ datasets, for PyTorch-based ML on molecules, proteins, and biomedical graphs.

drug-discovery, graph-neural-networks, molecular-modeling, protein-modeling
ovachiever · 81

pytorch-lightning

Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), distributed training (DDP, FSDP, DeepSpeed), for scalable neural network training.

deep-learning, pytorch, distributed-training, gpu-acceleration
ovachiever · 81

ray-train

Distributed training orchestration across clusters. Scales PyTorch/TensorFlow/HuggingFace from laptop to 1000s of nodes. Built-in hyperparameter tuning with Ray Tune, fault tolerance, elastic scaling. Use when training massive models across multiple machines or running distributed hyperparameter sweeps.

training-orchestration, distributed-computing, hyperparameter-tuning, scalability
ovachiever · 81

pytorch-fsdp

Expert guidance for Fully Sharded Data Parallel training with PyTorch FSDP: parameter sharding, mixed precision, CPU offloading, and FSDP2.

pytorch, distributed-computing, mixed-precision, cpu-offloading
ovachiever · 81

torch-geometric

Graph Neural Networks (PyG). Node/graph classification, link prediction, GCN, GAT, GraphSAGE, heterogeneous graphs, molecular property prediction, for geometric deep learning.

graph-neural-networks, pytorch, geometric-deep-learning, node-classification
ovachiever · 81
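
A minimal sketch of the propagation rule behind graph convolution, in plain PyTorch with no torch_geometric dependency. PyG's GCNConv implements the same normalized neighborhood aggregation (Kipf & Welling) far more efficiently over sparse edge indices; the dense version here is an illustration only, and the graph and feature sizes are made up:

```python
import torch

# One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
# PyG's GCNConv computes the same rule over sparse edge indices.
def gcn_layer(adj: torch.Tensor, h: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    a_hat = adj + torch.eye(adj.size(0))            # add self-loops
    deg = a_hat.sum(dim=1)                          # node degrees
    d_inv_sqrt = deg.pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(a_norm @ h @ weight)          # propagate, then transform

# Tiny 3-node path graph: edges 0-1 and 1-2.
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
h = torch.randn(3, 4)                               # 4 input features per node
w = torch.randn(4, 2)                               # project down to 2 features
out = gcn_layer(adj, h, w)
print(out.shape)  # torch.Size([3, 2])
```

Stacking a few such layers with learned weights gives the node embeddings used for node classification and link prediction.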

optimizing-attention-flash

Optimizes transformer attention with Flash Attention for a 2-4x speedup and 10-20x memory reduction. Use when training or running transformers with long sequences (>512 tokens), encountering GPU memory issues with attention, or needing faster inference. Supports PyTorch native SDPA, the flash-attn library, H100 FP8, and sliding window attention.

transformers, flash-attention, pytorch, gpu-memory-optimization
ovachiever · 81
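
The PyTorch-native entry point mentioned above is `torch.nn.functional.scaled_dot_product_attention` (torch >= 2.0), which dispatches to a Flash Attention kernel on supported GPUs and falls back to the math implementation on CPU, so this sketch runs anywhere. The tensor sizes are arbitrary:

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim) layout expected by SDPA.
batch, heads, seq_len, head_dim = 2, 4, 128, 64
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# is_causal=True applies the autoregressive mask without materializing
# the full seq_len x seq_len attention matrix.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 4, 128, 64])
```

The same call replaces a hand-written `softmax(q @ k.T / sqrt(d)) @ v`, which is where the memory savings come from: the intermediate attention matrix is never stored.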

huggingface-accelerate

Minimal-boilerplate distributed training API: four lines add distributed support to any PyTorch script. Unified API over DeepSpeed, FSDP, Megatron, and DDP; automatic device placement and mixed precision (FP16/BF16/FP8); interactive config and a single launch command. The standard in the HuggingFace ecosystem.

pytorch, distributed-computing, deep-learning, huggingface
ovachiever · 81

cellxgene-census

Query CZ CELLxGENE Census (61M+ cells). Filter by cell type/tissue/disease, retrieve expression data, integrate with scanpy/PyTorch, for population-scale single-cell analysis.

single-cell-rna-seq, scanpy, pytorch, population-scale-analysis
ovachiever · 81

deep-learning

PyTorch, TensorFlow, neural networks, CNNs, transformers, and deep learning for production

pytorch, tensorflow, neural-networks, cnn
pluginagentmarketplace · 11

deep-learning

Build and train neural networks with PyTorch: MLPs, CNNs, and training best practices.

deep-learning, pytorch, neural-networks, mlp
pluginagentmarketplace · 11

Machine Learning

Python machine learning with scikit-learn, PyTorch, and TensorFlow

scikit-learn, pytorch, tensorflow, python
pluginagentmarketplace · 1

deep-learning

Neural networks, CNNs, RNNs, Transformers with TensorFlow and PyTorch. Use for image classification, NLP, sequence modeling, or complex pattern recognition.

pytorch, tensorflow, neural-network-architectures, computer-vision
pluginagentmarketplace · 21

pytorch

Building and training neural networks with PyTorch. Use when implementing deep learning models, training loops, data pipelines, model optimization with torch.compile, distributed training, or deploying PyTorch models.

pytorch, deep-learning, neural-network-architectures, gpu-acceleration
itsmostafa · 10
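
The training-loop pattern this skill covers can be sketched in a few lines: model, loss, optimizer, and an epoch loop. The synthetic regression data and layer sizes are made up for illustration; `torch.compile(model)` could wrap the model for kernel fusion, but is omitted so the sketch runs without a compiler toolchain:

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(256, 10)
y = x.sum(dim=1, keepdim=True)        # easy synthetic target

first = last = None
for epoch in range(50):
    opt.zero_grad()                   # clear gradients from the last step
    loss = loss_fn(model(x), y)       # forward pass
    loss.backward()                   # backward pass
    opt.step()                        # parameter update
    if first is None:
        first = loss.item()
    last = loss.item()
print(last < first)  # True: loss falls on this easy target
```

Real pipelines swap the in-memory tensors for a `DataLoader` and move model and batches to the training device, but the zero_grad / forward / backward / step skeleton stays the same.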

Computer Vision

Implement computer vision tasks including image classification, object detection, segmentation, and pose estimation using PyTorch and TensorFlow

computer-vision, deep-learning, pytorch, tensorflow
aj-geddes · 301

Neural Network Design

Design and architect neural networks with various architectures including CNNs, RNNs, Transformers, and attention mechanisms using PyTorch and TensorFlow

pytorch, tensorflow, neural-network-architectures, transformers
aj-geddes · 301

ML Model Training

Build and train machine learning models using scikit-learn, PyTorch, and TensorFlow for classification, regression, and clustering tasks

machine-learning, deep-learning, pytorch, tensorflow
aj-geddes · 301

at-dispatch-v2

Convert PyTorch AT_DISPATCH macros to AT_DISPATCH_V2 format in ATen C++ code. Use when porting AT_DISPATCH_ALL_TYPES_AND*, AT_DISPATCH_FLOATING_TYPES*, or other dispatch macros to the new v2 API. For ATen kernel files, CUDA kernels, and native operator implementations.

pytorch, c++, cuda, aten
pytorch · 96,344 · 26,418
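
The conversion this skill performs looks roughly like the fragment below. It is not compilable standalone (it assumes ATen's `Dispatch_v2.h` header inside the PyTorch source tree), and `my_op` / `my_op_kernel` are hypothetical names used only for illustration; the exact macro spellings should be checked against the PyTorch source:

```cpp
// Before: legacy macro with a fixed type group plus two extra types.
AT_DISPATCH_FLOATING_TYPES_AND2(at::kHalf, at::kBFloat16,
    iter.dtype(), "my_op", [&] {
      my_op_kernel<scalar_t>(iter);
    });

// After: AT_DISPATCH_V2 takes the lambda wrapped in AT_WRAP and an
// open-ended type list, with AT_EXPAND splicing in predefined groups.
AT_DISPATCH_V2(iter.dtype(), "my_op",
    AT_WRAP([&] { my_op_kernel<scalar_t>(iter); }),
    AT_EXPAND(AT_FLOATING_TYPES), at::kHalf, at::kBFloat16);
```

The v2 form removes the combinatorial `_AND2`/`_AND3` macro variants: adding a type means appending it to the list rather than switching macros.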

add-uint-support

Add unsigned integer (uint) type support to PyTorch operators by updating AT_DISPATCH macros. Use when adding support for uint16, uint32, or uint64 types to operators or kernels, or when the user mentions enabling unsigned types, barebones unsigned types, or uint support.

pytorch, c++, macros, unsigned-integers
pytorch · 96,344 · 26,418
