get-available-resources
Use this skill at the start of any computationally intensive scientific task to detect and report available system resources (CPU cores, GPUs, memory, disk space). It writes a JSON file of resource information plus strategic recommendations that guide the choice of computational approach, such as whether to use parallel processing (joblib, multiprocessing), out-of-core computing (Dask, Zarr), GPU acceleration (PyTorch, JAX), or memory-efficient strategies. Invoke it before running analyses, training models, processing large datasets, or any task where resource constraints matter.
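A minimal sketch of the kind of detection this skill performs. The field names, recommendation thresholds, and output path below are illustrative assumptions, not the skill's actual contract; it assumes psutil is installed and checks GPU presence crudely via the nvidia-smi binary.

```python
# Illustrative sketch only: field names, thresholds, and the output
# path are assumptions, not this skill's actual contract.
import json
import os
import shutil

import psutil  # assumed available for memory introspection

resources = {
    "cpu_cores": os.cpu_count(),
    "memory_total_gb": psutil.virtual_memory().total / 1e9,
    "memory_available_gb": psutil.virtual_memory().available / 1e9,
    "disk_free_gb": shutil.disk_usage("/").free / 1e9,
    # Crude GPU check: is the nvidia-smi binary on PATH?
    "gpu_detected": shutil.which("nvidia-smi") is not None,
}

# Hypothetical recommendation logic: prefer out-of-core tools when
# available memory is small relative to a typical working set.
resources["recommendations"] = {
    "parallel_workers": max(1, resources["cpu_cores"] - 1),
    "use_out_of_core": resources["memory_available_gb"] < 8,
    "use_gpu": resources["gpu_detected"],
}

with open("available_resources.json", "w") as f:
    json.dump(resources, f, indent=2)
```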
dask
Parallel and distributed computing. Scales pandas/NumPy workloads beyond memory via parallel DataFrames/Arrays, multi-file processing, and task graphs. Use for larger-than-RAM datasets and parallel workflows.
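A minimal usage sketch of the out-of-core pattern; the file glob and the "group"/"value" column names are placeholders.

```python
import dask.dataframe as dd

# Lazily read many CSVs as one out-of-core DataFrame; the glob and
# column names are placeholders for real data.
df = dd.read_csv("data/part-*.csv")

# Operations only build a task graph; nothing executes until
# .compute() materializes the (small) result in memory.
result = df.groupby("group")["value"].mean().compute()
print(result)
```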
task-orchestration
Use when coordinating complex tasks that require orchestration, delegation, or parallel workstreams; provides structured workflows for orchestrate:brainstorm, orchestrate:spawn, and orchestrate:task.
reduce-orchestrator
MapReduce root orchestrator with a mandatory parallel Verify phase, narrative-first reduction, deterministic artifact lifecycle management (.rlm run/archives), and concurrency safety (per-run locks plus a cleanup lock). Use when coordinating many parallel map-worker tasks under optional hint_paths, then synthesizing the workers' narrative reports into a decision to iterate or finish.
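Not this skill's implementation, but a hedged sketch of the control-flow shape it describes (fan-out map, parallel Verify, narrative reduce); run_map_worker, verify_report, and reduce_reports are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in functions: these are hypothetical placeholders,
# not part of this skill's actual API.
def run_map_worker(task: str) -> str:
    return f"narrative report for {task!r}"

def verify_report(report: str) -> bool:
    return bool(report)

def reduce_reports(reports: list[str]) -> str:
    return "finish" if len(reports) >= 3 else "iterate"

tasks = ["audit module A", "audit module B", "audit module C"]

with ThreadPoolExecutor() as pool:
    # Map phase: fan out one worker per task, in parallel.
    reports = list(pool.map(run_map_worker, tasks))
    # Mandatory Verify phase, also run in parallel.
    checks = pool.map(verify_report, reports)
    verified = [r for r, ok in zip(reports, checks) if ok]

# Reduce phase: synthesize narratives into an iterate/finish decision.
print(reduce_reports(verified))
```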
map-worker
Contract-based MapReduce worker for executing a task (possibly broad) under optional hint_paths, producing a narrative report. Use when an orchestrator invokes a map worker (llm_query role) many times in parallel and needs mergeable narrative reports.
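One plausible shape of that contract (task and optional hint_paths in, narrative report out), sketched with TypedDicts; the field names are assumptions, not the skill's specified schema.

```python
from typing import TypedDict

# Hypothetical contract shape; field names are assumptions.
class MapWorkerInput(TypedDict):
    task: str                     # possibly broad task description
    hint_paths: list[str] | None  # optional scoping hints

class NarrativeReport(TypedDict):
    task: str
    narrative: str  # prose findings, mergeable by the reducer

def map_worker(payload: MapWorkerInput) -> NarrativeReport:
    # A real worker would execute the task (e.g. via an llm_query
    # role call); this stub only shows the input/output contract.
    return {"task": payload["task"], "narrative": "findings go here"}
```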
concurrency
Comprehensive concurrency and parallelism patterns for multi-threaded and async programming. Use when implementing async/await, parallel processing, thread safety, worker pools, or debugging race conditions and deadlocks. Triggers: async, await, concurrent, parallel, threads, race condition, deadlock, mutex, semaphore, worker pool, queue.
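A minimal asyncio worker-pool sketch using a shared queue, one of the patterns this skill covers; the pool size, item count, and handle() body are illustrative.

```python
import asyncio

async def handle(item: int) -> None:
    # Placeholder work; a real handler would await I/O here.
    await asyncio.sleep(0.01)

async def worker(queue: asyncio.Queue) -> None:
    while True:
        item = await queue.get()
        try:
            await handle(item)
        finally:
            queue.task_done()  # mark done even if handle() raises

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    for i in range(10):
        queue.put_nowait(i)

    # Fixed-size pool: three workers drain the queue concurrently,
    # so at most three items are in flight at once.
    workers = [asyncio.create_task(worker(queue)) for _ in range(3)]
    await queue.join()  # block until every queued item is processed
    for w in workers:
        w.cancel()      # workers loop forever; cancel once drained

asyncio.run(main())
```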