## Reference Files

Advanced agent authoring guidance:
- `design-patterns.md` - Proven agent patterns with examples
- `examples.md` - Complete agent examples with analysis
- `agent-decision-guide.md` - Deciding when to use agents vs skills vs commands
- `comparison-with-official.md` - Comparison with Anthropic's official agents
## About Agents
Agents are specialized AI assistants that run in separate subprocesses with focused expertise. They have:
- Specific focus areas - Clearly defined areas of expertise
- Model choice - Sonnet, Opus, or Haiku depending on complexity
- Tool restrictions - Limited to only the tools they need
- Permission modes - Control over how they interact with the system
- Isolated context - Run separately from the main conversation
When to use agents:
- Task requires specialized expertise
- Need different model than main conversation
- Want to restrict tools for security/focus
- Task benefits from isolated context
- Can be invoked automatically or manually
## Core Principles
### 1. Clear Focus Areas
Focus areas define what the agent is expert in. They should be:
Specific, not generic:
- ❌ "Python programming"
- ✅ "FastAPI REST APIs with SQLAlchemy ORM and pytest testing"
Concrete, with examples:
- ❌ "Best practices"
- ✅ "Defensive programming with strict error handling"
Aim for 5-15 focus areas that cover the agent's expertise comprehensively.
Example from the evaluator agent:

```markdown
## Focus Areas
- YAML Frontmatter Validation
- Markdown Structure
- Tool Permissions
- Description Quality
- File Organization
- Progressive Disclosure
- Integration Patterns
```
### 2. Model Selection (Keep It Simple)
Sonnet (default choice for most agents):
- Balanced cost and capability
- Handles most programming tasks
- Good for analysis and code generation
- Use unless you have a specific reason not to
Haiku (for simple, fast tasks):
- Fast and cheap
- Good for read-only analysis
- Simple, repetitive tasks
- When speed matters more than complexity
Opus (for complex reasoning):
- Most capable model
- Complex architectural decisions
- Requires deep reasoning
- Higher cost - use sparingly
Decision guide:
- Start with Sonnet
- Switch to Haiku if agent is simple read-only analyzer
- Only use Opus if task genuinely requires highest capability
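In frontmatter, this decision comes down to a single `model` line. A minimal sketch, using the Sonnet identifier that appears in the template later in this guide (the agent name and description are hypothetical, and the commented-out Haiku/Opus entries are placeholders, not real model IDs):

```yaml
---
name: my-analyzer                  # hypothetical agent name
description: Placeholder description with trigger scenarios.
model: claude-sonnet-4-5-20250929  # start here for most agents
# model: <haiku-model-id>          # placeholder: swap in for simple, read-only analysis
# model: <opus-model-id>           # placeholder: swap in only for deep reasoning
---
```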
### 3. Tool Restrictions
Why restrict tools:
- Security - Prevent unwanted file modifications
- Focus - Agent only needs specific capabilities
- Predictability - Clear what agent can/cannot do
Common tool patterns:
Read-only analyzer:

```yaml
allowed_tools:
  - Read
  - Glob
  - Grep
  - Bash
```

Examples: evaluator, audit-skill
Code generator/modifier:

```yaml
allowed_tools:
  - Read
  - Edit
  - Write
  - Grep
  - Glob
  - Bash
```

Example: test-runner
Minimal/focused:

```yaml
allowed_tools:
  - Read
  - AskUserQuestion
```

Example: when the agent only needs to read files and ask questions
If unspecified: the agent inherits all tools from the parent (usually not desired)
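Putting the model default and the read-only tool pattern together, an analyzer's frontmatter might look like this sketch (the name and description are hypothetical; the fields and tool list come from the patterns above):

```yaml
---
name: dependency-auditor  # hypothetical example agent
description: Read-only analyzer that audits project dependencies for outdated or unused packages. Use when reviewing dependency health before a release.
model: claude-sonnet-4-5-20250929
allowed_tools:            # read-only set: no Edit or Write
  - Read
  - Glob
  - Grep
  - Bash
---
```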
### 4. Permission Modes (Common Ones)
`default` (most common):
- Normal permission checking
- User approves tool usage as needed
- Safe default choice
`acceptEdits` (for editing workflows):
- Auto-approves Read and Edit operations
- Good for refactoring/cleanup agents
- Still asks for Write, Bash, etc.
`plan` (for planning agents):
- Agent researches and creates plan
- No execution until plan approved
- Good for complex implementation planning
Most agents use `default` - only use the others when you have a specific workflow need.
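As an illustration, a refactoring agent could opt into `acceptEdits` with one extra frontmatter line (hypothetical name and description; `permissionMode` is the optional field listed under Reference to Standards later in this guide):

```yaml
---
name: refactor-cleaner       # hypothetical example agent
description: Applies mechanical refactorings such as renames and formatting cleanups. Use for repetitive, low-risk edit passes.
model: claude-sonnet-4-5-20250929
allowed_tools:
  - Read
  - Edit
  - Grep
  - Glob
permissionMode: acceptEdits  # auto-approves Read/Edit; still asks before Bash
---
```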
## Agent Design Patterns
Three proven patterns for building effective agents. Each pattern includes complete templates you can copy and customize.
📄 See design-patterns.md for detailed templates
Quick overview:
- Read-Only Analyzer - For auditing, evaluation, reporting (Haiku/Sonnet + read-only tools)
- Code Generator/Modifier - For creating/editing code (Sonnet + Read/Edit/Write/Bash)
- Workflow Orchestrator - For multi-step coordination (Sonnet + Task tool)
## Resource Organization and Progressive Disclosure
### File Structure Patterns
Simple agent (single file):

```
agents/
└── agent-name.md          # <500 lines, self-contained
```
Complex agent (with references):

```
agents/
└── agent-name/
    ├── agent-name.md      # <500 lines, core workflow
    └── references/        # REQUIRED subdirectory
        ├── examples.md
        └── guide.md
```
### Key Difference: Agents vs Skills

Agents MUST use a `references/` subdirectory:
- Main file: `agent-name/agent-name.md`
- References: `agent-name/references/*.md`

Skills use a flat structure (no subdirectory):
- Main file: `skill-name/SKILL.md`
- References: `skill-name/*.md` (co-located at root)

Why? This is a validation hook constraint:
- The agent hook validates ALL `.md` files in `agents/` except those in `references/`
- The skill hook validates ONLY `SKILL.md` files
- Flattened agent references would fail validation (missing frontmatter)
📄 See ~/.claude/docs/agent-vs-skill-structure.md for detailed explanation
### When to Use References
Single file (simple agent):
- Agent <500 lines
- No extensive examples or reference material
- Clear, focused purpose
- Example: `evaluator.md` (404 lines)
Directory with references/ (complex agent):
- Main content would exceed 500 lines
- Extensive examples, tables, or workflows
- Multiple distinct topic areas
- Example: `test-runner/` (328 lines + 2 references)
### Reference File Linking
REQUIRED: a Reference Files section in the main file:

```markdown
## Reference Files

This agent uses reference materials in the `references/` directory:
- [examples.md](references/examples.md) - Concrete test case examples
- [common-failures.md](references/common-failures.md) - Failure pattern catalog
```
Best practices:
- Link ALL files in the `references/` directory
- Provide clear descriptions of each reference
- Place section near top of agent file
- Keep structure one level deep (no nested subdirectories)
## Agent Creation Process
### Step 1: Define Purpose and Scope
Start by answering these questions:
- What specific problem does this agent solve?
- What tasks should it handle?
- What tasks should it NOT handle?
- Who will use it and when?
- Does an existing agent already do this?
Use AskUserQuestion to clarify ambiguities before proceeding.
Check for existing agents:

```bash
ls -la ~/.claude/agents/
```
Look for similar agents that might overlap.
### Step 2: Choose Model and Tools
Model selection:
- Default to Sonnet for most agents
- Use Haiku if it's a simple read-only analyzer
- Only use Opus if complexity genuinely requires it
Tool selection:
- List what the agent actually needs to do
- Map needs to minimal tool set
- Use restrictive set from design patterns above
- Don't grant tools "just in case"
Permission mode:
- Default: use `default` unless you have a specific need
- Only specify `permissionMode` if you need non-default behavior
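One way to keep the tool set minimal is to justify every entry against a concrete need, as in this hypothetical sketch; a tool with no matching need stays out:

```yaml
allowed_tools:
  - Read  # need: inspect the source files under review
  - Grep  # need: locate patterns across the codebase
  - Bash  # need: run the project's test suite
# Edit/Write omitted: this agent reports findings, it does not modify files
```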
### Step 3: Write Focus Areas
Guidelines:
- 5-15 specific areas of expertise
- Each should be concrete and specific
- Include technologies, frameworks, patterns
- Avoid vague statements like "best practices"
Good examples (from evaluator):
- "YAML Frontmatter Validation - Required fields, syntax correctness"
- "Tool Permissions - Appropriateness of allowed-tools, security implications"
- "Progressive Disclosure - Context economy, reference file usage"
Bad examples:
- "Writing good code" (too vague)
- "Programming" (too generic)
- "Helping with tasks" (not specific)
### Step 4: Define Approach/Methodology
This section explains HOW the agent works:
Include:
- Key principles the agent follows
- Step-by-step methodology
- Decision-making frameworks
- Output format (if applicable)
Example from the evaluator agent:

```markdown
## Evaluation Framework

### Correctness Criteria
- YAML frontmatter with required fields
- Valid model value
- Name matches filename
...

## Evaluation Process

### Step 1: Identify Extension Type
...

### Step 2: Apply Type-Specific Validation
...
```
### Step 5: Write Description
Requirements:
- Explain what the agent does (capabilities)
- Include when to invoke (triggering scenarios)
- Mention key technologies/focus areas
- Target 150-500 characters
Formula: [What it does] for [use cases]. Expert in [key features]. Use when [triggers].
Good example:

```yaml
description: Master of defensive Bash scripting for production automation, CI/CD pipelines, and system utilities. Expert in safe, portable, and testable shell scripts.
```

Bad example:

```yaml
description: Helps with bash scripts
```
### Step 6: Create the Agent File
File location: `~/.claude/agents/agent-name.md`

The filename should match the `name` in the frontmatter.
Basic structure:

```markdown
---
name: agent-name
description: [comprehensive description with triggers]
model: claude-sonnet-4-5-20250929
allowed_tools:
  - Read
  - [other tools]
---

## Focus Areas
- [Specific area 1]
- [Specific area 2]
...

## Approach
[How the agent works, methodologies, processes]

## [Optional Additional Sections]
[Examples, best practices, output formats, etc.]
```
### Step 7: Test the Agent
Test invocation:
- Try invoking the agent in a conversation
- Verify it has access to specified tools
- Check that focus areas guide its behavior
- Ensure description triggers correctly
Validate with `/audit-agent`:

```
/audit-agent agent-name
```
This will check:
- Frontmatter correctness
- Description quality
- File structure
- Best practices compliance
## Agents vs Skills vs Commands
Choosing the right customization type is critical. Each has distinct characteristics and use cases.
📄 See agent-decision-guide.md for agent-specific decision framework
📄 See when-to-use-what.md for detailed decision guide (shared)
Quick guide:
- Agent - Separate subprocess, custom model, strict tools → Use for isolation and specialized tasks
- Skill - Main conversation, auto-triggers, domain knowledge → Use for extending base capabilities
- Command - User shortcut, delegates to agent/skill → Use for explicit, frequent actions
## Common Mistakes to Avoid
- Vague focus areas - "Python expert" instead of "FastAPI with SQLAlchemy and pytest"
- Wrong model - Using Opus when Sonnet would work fine
- Too permissive tools - Granting all tools when only Read/Grep needed
- Missing approach section - Not explaining HOW the agent works
- Poor description - Too short or doesn't include trigger scenarios
- Name mismatch - Frontmatter name doesn't match filename
- Overlapping agents - Creating agent that duplicates existing one
- No tool restrictions - Not specifying allowed_tools (inherits all)
## Examples from Existing Agents
Real-world examples showing what makes a good agent. Each example is analyzed to explain why it works well.
📄 See examples.md for detailed analysis
Examples covered:
- evaluator - Read-only evaluator pattern
- test-runner - Test runner with reporting pattern
Each example includes the full frontmatter, focus areas, and analysis of what makes it effective.
## Tips for Success
- Start with an existing agent as template - Copy structure from similar agent
- Be specific in focus areas - Concrete details over generic statements
- Test early - Create minimal agent and test before adding details
- Use /audit-agent - Catch issues early
- Check for overlaps - Don't duplicate existing agents
- Document the approach - Explain HOW the agent works
- Keep tools minimal - Only grant what's needed
- Write good description - Include what, when, and key features
- Iterate based on usage - Refine after real-world testing
- Follow naming conventions - Use kebab-case, match filename to name
## Reference to Standards

For detailed standards and validation:
- Naming conventions - Use kebab-case for agent names
- Frontmatter requirements - name, description, model (optional: allowed_tools, permissionMode)
- File organization - `~/.claude/agents/agent-name.md`
- Validation - Use the `/audit-agent` command
See audit-coordinator skill for comprehensive standards.
## Related Skills
This skill is part of the authoring skill family:
- author-agent - Guide for creating agents (this skill)
- author-skill - Guide for creating skills
- author-command - Guide for creating commands
- author-output-style - Guide for creating output styles
For validation, use the corresponding audit skills:
- audit-agent - Validate agent configurations
- audit-coordinator - Comprehensive multi-faceted audits
## Quick Start Checklist
Creating a new agent:
- [ ] Identify unique purpose (not covered by existing agents)
- [ ] Choose model (default: Sonnet)
- [ ] Determine minimal tool set needed
- [ ] Write 5-15 specific focus areas
- [ ] Document approach/methodology
- [ ] Write comprehensive description (150-500 chars)
- [ ] Create file at `~/.claude/agents/agent-name.md`
- [ ] Test invocation
- [ ] Validate with `/audit-agent agent-name`
- [ ] Iterate based on results