
Agent Skills with tag: gpu-acceleration

24 skills match this tag. Use tags to discover related Agent Skills and explore similar workflows.

voltage-park

Provision and manage Voltage Park H100 GPU instances. Use when the user needs to spin up H100s, SSH into VP instances, transfer files, or terminate cloud GPU instances.
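The skill's own tooling isn't shown on this page; as a rough illustration of the SSH and file-transfer workflow it covers, here is a minimal Python sketch using paramiko. The host address, username, key path, and file names are placeholder assumptions, not anything specific to Voltage Park.

# Minimal sketch (placeholders, not Voltage Park's API): SSH into a provisioned
# H100 instance, check its GPUs, and upload a training script.
import os
import paramiko

HOST = "203.0.113.10"            # placeholder instance IP
USER = "ubuntu"                  # placeholder login user
KEY = "~/.ssh/vp_instance_key"   # placeholder private key path

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, key_filename=os.path.expanduser(KEY))

# Confirm the GPUs are visible on the remote machine.
_, stdout, _ = client.exec_command("nvidia-smi --query-gpu=name,memory.total --format=csv")
print(stdout.read().decode())

# Copy a local file to the instance over SFTP.
sftp = client.open_sftp()
sftp.put("train.py", "/home/ubuntu/train.py")
sftp.close()
client.close()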

voltage-park · cloud-infrastructure · gpu-acceleration · ssh
Infatoshi · 3

funsloth-local

Training manager for local GPU training: validate CUDA availability, manage GPU selection, monitor training progress, and handle checkpoints.
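As an illustration of the CUDA validation and GPU-selection steps the description mentions, here is a minimal sketch using PyTorch's standard device queries; it is not the skill's own code.

# Minimal sketch of CUDA validation and GPU selection with PyTorch.
import torch

if not torch.cuda.is_available():
    raise RuntimeError("CUDA is not available; local GPU training cannot proceed.")

# List visible GPUs with their free/total memory and pick the emptiest one.
best, best_free = 0, -1
for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} "
          f"({free / 1e9:.1f} GB free of {total / 1e9:.1f} GB)")
    if free > best_free:
        best, best_free = i, free

device = torch.device(f"cuda:{best}")
print(f"Selected {device} for training.")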

cuda · gpu-acceleration · monitoring · resource-allocation
chrisvoncsefalvay · 4

pytorch

Building and training neural networks with PyTorch. Use when implementing deep learning models, training loops, data pipelines, model optimization with torch.compile, distributed training, or deploying PyTorch models.
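For a sense of the basics this skill covers, here is a minimal, self-contained training-loop sketch with torch.compile on synthetic data; the model and hyperparameters are arbitrary illustrative choices, not part of the skill.

# Minimal PyTorch training-loop sketch with torch.compile on synthetic data.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Tiny synthetic regression dataset.
X = torch.randn(1024, 16)
y = X.sum(dim=1, keepdim=True)
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
model = torch.compile(model)  # optional graph compilation for speed
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(3):
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")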

pytorch · deep-learning · neural-network-architectures · gpu-acceleration
itsmostafa · 10

swiftui-animation

This skill provides comprehensive guidance for implementing advanced SwiftUI animations, transitions, matched geometry effects, and Metal shader integration. Use when building animations, view transitions, hero animations, or GPU-accelerated effects in SwiftUI apps for iOS and macOS.

swiftui · animation · transitions · gpu-acceleration
jamesrochabrun · 204

get-available-resources

Use this skill at the start of any computationally intensive scientific task to detect and report available system resources (CPU cores, GPUs, memory, disk space). It writes a JSON file with resource information and strategic recommendations that inform computational decisions, such as whether to use parallel processing (joblib, multiprocessing), out-of-core computing (Dask, Zarr), GPU acceleration (PyTorch, JAX), or memory-efficient strategies. Run it before analyses, model training, large-dataset processing, or any task where resource constraints matter.
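The skill's actual script is not reproduced here; the sketch below shows the general shape of such a report using the standard library plus psutil, with PyTorch queried only if installed. The output file name and the recommendation logic are illustrative assumptions.

# Illustrative sketch: detect CPU, memory, disk, and GPU resources
# and write them to a JSON report with a simple recommendation.
import json, os, shutil

import psutil  # third-party; assumed available

report = {
    "cpu_cores": os.cpu_count(),
    "memory_gb": round(psutil.virtual_memory().total / 1e9, 1),
    "disk_free_gb": round(shutil.disk_usage("/").free / 1e9, 1),
    "gpus": [],
}

try:
    import torch
    for i in range(torch.cuda.device_count()):
        report["gpus"].append(torch.cuda.get_device_name(i))
except ImportError:
    pass

# Naive recommendation logic, purely for illustration.
report["recommendation"] = (
    "gpu-acceleration" if report["gpus"]
    else "parallel-cpu (joblib/multiprocessing)" if report["cpu_cores"] and report["cpu_cores"] > 4
    else "out-of-core (Dask/Zarr) or memory-efficient strategies"
)

with open("available_resources.json", "w") as f:
    json.dump(report, f, indent=2)
print(json.dumps(report, indent=2))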

python · distributed-computing · resource-monitoring · gpu-acceleration
K-Dense-AI · 3,233 · 360

modal

Run Python code in the cloud with serverless containers, GPUs, and autoscaling. Use when deploying ML models, running batch processing jobs, scheduling compute-intensive tasks, or serving APIs that require GPU acceleration or dynamic scaling.
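As a rough sketch of the workflow, the example below follows Modal's decorator-based API for a GPU-backed function; the app name, image contents, and GPU type are placeholder assumptions, and exact signatures may differ between SDK versions.

# Minimal sketch of a GPU function on Modal (run with: modal run app.py).
import modal

app = modal.App("gpu-report-sketch")                    # placeholder app name
image = modal.Image.debian_slim().pip_install("torch")  # container image with PyTorch

@app.function(gpu="A100", image=image, timeout=600)
def gpu_report() -> str:
    import torch
    return torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no GPU visible"

@app.local_entrypoint()
def main():
    # Runs gpu_report in a serverless container in the cloud.
    print(gpu_report.remote())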

python · machine-learning · distributed-computing · serverless
K-Dense-AI · 3,233 · 360

Page 2 of 2 · 24 results