Agent Skills with tag: parallel-processing

6 skills match this tag. Use tags to discover related Agent Skills and explore similar workflows.

get-available-resources

This skill should be used at the start of any computationally intensive scientific task to detect and report available system resources (CPU cores, GPUs, memory, disk space). It creates a JSON file with resource information and strategic recommendations that inform computational approach decisions such as whether to use parallel processing (joblib, multiprocessing), out-of-core computing (Dask, Zarr), GPU acceleration (PyTorch, JAX), or memory-efficient strategies. Use this skill before running analyses, training models, processing large datasets, or any task where resource constraints matter.

resource-constraints, parallel-processing, gpu-acceleration, memory-management
ovachiever
81
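
To make the resource-detection step concrete, here is a minimal sketch of the kind of probe this skill describes, assuming psutil is installed. The JSON field names, output filename, and the 8 GB threshold are illustrative placeholders, not the skill's actual schema or logic.

```python
# Minimal resource probe in the spirit of get-available-resources.
# Assumes psutil is installed; field names and thresholds are illustrative.
import json
import os
import shutil

import psutil

resources = {
    "cpu_cores": os.cpu_count(),
    "memory_total_gb": round(psutil.virtual_memory().total / 1e9, 1),
    "memory_available_gb": round(psutil.virtual_memory().available / 1e9, 1),
    "disk_free_gb": round(shutil.disk_usage("/").free / 1e9, 1),
}

# GPU detection is optional: only report CUDA devices if PyTorch is present.
try:
    import torch
    resources["gpus"] = torch.cuda.device_count() if torch.cuda.is_available() else 0
except ImportError:
    resources["gpus"] = None  # unknown without a GPU-aware library

# Strategy hint: prefer out-of-core tools when available memory is small.
resources["recommendation"] = (
    "consider Dask/Zarr (out-of-core)"
    if resources["memory_available_gb"] < 8
    else "in-memory processing with joblib/multiprocessing is likely fine"
)

with open("available_resources.json", "w") as f:
    json.dump(resources, f, indent=2)
```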

dask

Parallel and distributed computing with Dask. Scale pandas/NumPy beyond memory using parallel DataFrames and Arrays, multi-file processing, and task graphs, for larger-than-RAM datasets and parallel workflows.

distributed-computing, parallel-processing, pandas, numpy
ovachiever
81
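
As a small illustration of the larger-than-RAM, multi-file workflow described above, the following sketch reads a set of CSV files lazily and computes a grouped aggregate in parallel. The file glob and column names are placeholders.

```python
# Larger-than-RAM, multi-file workflow with Dask DataFrames.
# The file glob and column names are placeholders.
import dask.dataframe as dd

# Lazily read many CSV files as one partitioned DataFrame; nothing loads yet.
df = dd.read_csv("data/events-*.csv")

# Build a task graph: pandas-like operations stay lazy until compute().
daily_totals = df.groupby("date")["amount"].sum()

# Execute the graph in parallel across partitions and collect a pandas result.
result = daily_totals.compute()
print(result.head())
```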

task-orchestration

Use when coordinating complex tasks with orchestration, delegation, or parallel workstreams; provides structured workflows for orchestrate:brainstorm, orchestrate:spawn, and orchestrate:task.

orchestration, task-automation, delegation, parallel-processing
NickCrew
52

reduce-orchestrator

MapReduce root/orchestrator with a mandatory parallel Verify phase, narrative-first reduction, deterministic artifact lifecycle management (.rlm run/archives), and concurrency safety (per-run locks + cleanup lock). Use when coordinating many parallel map-worker tasks under optional hint_paths, then synthesizing narrative reports into a decision to iterate or finish.

mapreduce, orchestration, parallel-processing, concurrency
hyophyop
1
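
The sketch below shows only the general fan-out/reduce shape this orchestrator describes. The worker and reducer bodies are placeholders, and the skill's actual contract (.rlm artifact lifecycle, the Verify phase, per-run and cleanup locks) is omitted.

```python
# Generic map/reduce fan-out: run many map workers in parallel, then reduce
# their narrative reports into a single decision. Placeholders only; this is
# not the skill's contract.
from concurrent.futures import ThreadPoolExecutor

def map_worker(task, hint_paths=None):
    # Placeholder: a real worker would execute the task (e.g. an llm_query
    # call) scoped to hint_paths and return a narrative report.
    return f"report for {task!r} (hints: {hint_paths})"

def reduce_reports(reports):
    # Placeholder reduction: a real orchestrator would synthesize the
    # narratives and decide whether to iterate or finish.
    return "iterate" if any("error" in r for r in reports) else "finish"

tasks = ["audit module A", "audit module B", "audit module C"]
with ThreadPoolExecutor(max_workers=4) as pool:
    reports = list(pool.map(map_worker, tasks))

print(reduce_reports(reports))
```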

map-worker

Contract-based MapReduce worker for executing a task (may be broad) under optional hint_paths, producing a narrative report. Use when an orchestrator invokes a map worker (llm_query role) many times in parallel and needs mergeable narrative reports.

mapreduce, parallel-processing, orchestration, narrative-report
hyophyop
1

concurrency

Comprehensive concurrency and parallelism patterns for multi-threaded and async programming. Use when implementing async/await, parallel processing, thread safety, worker pools, or debugging race conditions and deadlocks. Triggers: async, await, concurrent, parallel, threads, race condition, deadlock, mutex, semaphore, worker pool, queue.

concurrency, asynchronous-programming, async-await, parallel-processing
cosmix
3
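
As an example of one pattern this skill covers, here is a worker pool draining a shared queue using only the standard asyncio API. The job payloads and pool size are placeholders.

```python
# Async worker pool draining a shared queue; concurrency is bounded by the
# number of workers. Job payloads and pool size are placeholders.
import asyncio

async def worker(name: str, queue: asyncio.Queue) -> None:
    while True:
        job = await queue.get()
        try:
            await asyncio.sleep(0.1)  # stand-in for real async I/O
            print(f"{name} finished {job}")
        finally:
            queue.task_done()  # always mark the job done, even on error

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    for job in range(10):
        queue.put_nowait(job)

    # Fixed-size pool of three workers sharing one queue.
    workers = [asyncio.create_task(worker(f"worker-{i}", queue)) for i in range(3)]

    await queue.join()  # wait until every job has been processed
    for w in workers:
        w.cancel()      # workers loop forever; cancel them once the queue is drained

asyncio.run(main())
```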