neolabhq

69 skills published on GitHub.

sadd:do-competitively

Execute tasks through competitive multi-agent generation, meta-judge evaluation specifications, multi-judge evaluation, and evidence-based synthesis

sadd:do-in-parallel

Launch multiple sub-agents in parallel to execute tasks across files or targets with intelligent model selection, quality-focused prompting, and meta-judge → LLM-as-a-judge verification

sadd:do-in-steps

Execute complex tasks through sequential sub-agent orchestration with intelligent model selection and meta-judge → LLM-as-a-judge verification

sadd:judge-with-debate

Evaluate solutions through multi-round debate between independent judges until consensus

sadd:judge

Launch a meta-judge and then a judge sub-agent to evaluate results produced in the current conversation

sadd:launch-sub-agent

Launch an intelligent sub-agent with automatic model selection based on task complexity, specialized agent matching, Zero-shot CoT reasoning, and mandatory self-critique verification

sadd:multi-agent-patterns

Design multi-agent architectures for complex tasks. Use when single-agent context limits are exceeded, when tasks decompose naturally into subtasks, or when specializing agents improves quality.

sadd:subagent-driven-development

Use when executing implementation plans with independent tasks in the current session, or when facing 3+ independent issues that can be investigated without shared state or dependencies. Dispatches a fresh subagent for each task, with code review between tasks, enabling fast iteration with quality gates.

sadd:tree-of-thoughts

Execute tasks through systematic exploration, pruning, and expansion using Tree of Thoughts methodology with meta-judge evaluation specifications and multi-agent evaluation

sdd:add-task

Creates a draft task file in .specs/tasks/draft/ capturing the original user intent

sdd:brainstorm

Use when creating or developing something new, before writing code or implementation plans. Refines rough ideas into fully-formed designs through collaborative questioning, alternative exploration, and incremental validation. Don't use during clearly mechanical processes.

sdd:create-ideas

Generate ideas in one shot using creative sampling

sdd:implement

Implement a task with automated LLM-as-Judge verification for critical steps

sdd:plan

Refine, parallelize, and verify a draft task specification, producing a fully planned, implementation-ready task

tdd:fix-tests

Systematically fix all failing tests after business logic changes or refactoring

tdd:test-driven-development

Use when implementing any feature or bugfix, before writing implementation code. Write the test first, watch it fail, then write minimal code to pass; requiring failure first ensures the tests actually verify behavior.

tdd:write-tests

Systematically add test coverage for all local code changes using specialized review and development agents. Covers uncommitted changes (including untracked files); if everything is committed, covers the latest commit.

tech-stack:add-typescript-best-practices

Set up TypeScript best practices and code style rules in CLAUDE.md

prompt-engineering

Advanced prompt engineering techniques for optimal AI responses. Use this when crafting prompts, optimizing AI interactions, or designing system prompts for applications.
