Agent Skills: Brand Brief Optimizer Skill

Evaluate and strengthen design system briefs by identifying vague language, testing real-world applicability, and scoring coherence.

Category: Uncategorized
ID: okgoogle13/careercopilot/brand-brief-optimizer

Install this agent skill to your local environment:

pnpm dlx add-skill https://github.com/okgoogle13/careercopilot/tree/HEAD/.claude/skills/brand-brief-optimizer

Skill Files


.claude/skills/brand-brief-optimizer/SKILL.md

Skill Metadata

Name
brand-brief-optimizer
Description
Evaluate and strengthen design system briefs by identifying vague language, testing real-world applicability, and scoring coherence.

Brand Brief Optimizer Skill

Purpose

Transforms brand briefs from aspirational documents into living standards that reliably guide design decisions. Tests brief language against real-world component decisions to reveal gaps and ambiguities.

A powerful brief answers the questions your team will actually ask. This skill helps you write those answers clearly.

Works with any design system—Material Design 3, custom systems, hybrid approaches—by stress-testing clarity, coherence, and real-world applicability.

When to Use This Skill

Use this skill when you need to:

  • Stress-test a draft brief for clarity and coherence
  • Identify vague language that won't guide team decisions
  • Test applicability across different component types
  • Find edge cases the brief doesn't address
  • Clarify assumptions embedded in brief language
  • Validate brief coherence (do principles work together or conflict?)
  • Strengthen brief language through iterative refinement
  • Create a brief coherence score (how likely is this to guide decisions?)

Process

The optimizer analyzes your brief across five dimensions:

1. Clarity Assessment

Are specific sections clear enough to guide decisions, or do they hedge?

Weak: "Use distinctive fonts that feel expressive"

Strong: "Display typography uses Roboto Flex with weight range 300-800 and optical sizing enabled. Use weights 300-400 for readable body text, 600-800 for headlines with semantic emphasis. Optical sizing automatically adjusts letterforms for legibility at every size."

2. Coherence Check

Do principles work together, or do they contradict?

Example: if the brief says "maximize semantic tokens" but also "designers can use arbitrary hex values for special moments," the contradiction needs resolving.

3. Applicability Testing

Can someone actually follow this brief across different component types?

The optimizer tests brief language against:

  • Information-heavy components (dashboards, data tables, list views)
  • Emotional/hero moments (landing pages, empty states, call-to-action)
  • Form contexts (inputs, validation, error states)
  • Navigation & structure (headers, side panels, footers)

4. Edge Case Identification

What questions will your team ask that the brief doesn't answer?

Examples:

  • "Can I deviate from the color palette for accessibility needs?"
  • "How do I apply the design system to dark mode?"
  • "What's the decision tree when two principles conflict?"

5. Coherence Scoring

Quantifies brief quality across all dimensions. The score is not for comparison against other briefs, but for tracking your own improvement as you refine.

Scores 0-100:

  • 80-100: Ready to deploy (team can use with confidence)
  • 60-79: Close (needs targeted refinement)
  • 40-59: Foundational (significant work needed)
  • Below 40: Concept stage (too vague to guide decisions)
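The score banding above can be expressed as a small helper. This is an illustrative sketch, not part of the skill; `readiness_tier` is a hypothetical name:

```python
def readiness_tier(score: int) -> str:
    """Map a 0-100 coherence score to the readiness tiers described above."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "Ready to deploy"   # team can use with confidence
    if score >= 60:
        return "Close"             # needs targeted refinement
    if score >= 40:
        return "Foundational"      # significant work needed
    return "Concept stage"         # too vague to guide decisions

print(readiness_tier(73))  # Close
```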

Usage Examples

Example 1: Stress-Test Draft Brief

"Evaluate this Material Design 3 brief for clarity and coherence"

Claude will:

  1. Analyze each section for vagueness
  2. Test against example component decisions
  3. Identify edge cases not addressed
  4. Find contradictions or confusing language
  5. Score overall coherence (0-100)
  6. Provide targeted recommendations

Example 2: Test Applicability

"Can this brief guide color decisions across dashboards, landing pages, and dark mode contexts?"

Claude will:

  1. Take the brief's color guidance
  2. Imagine the decisions needed in each context
  3. Test whether the brief provides a clear answer
  4. Identify where the guidance breaks down
  5. Suggest the clarifications needed

Example 3: Edge Case Generation

"What questions will my team ask about this brief that it doesn't clearly answer?"

Claude will:

  1. Identify assumed knowledge
  2. Spot vague pronouncements
  3. Generate likely edge cases
  4. Suggest how the brief could preemptively address them
  5. Produce a list of clarifications needed

Example 4: Iterative Refinement

"I've incorporated your feedback. Here's the revised section. Is it clearer?"

Claude will:

  1. Compare old vs. new language
  2. Assess if revision added specificity
  3. Identify remaining vagueness
  4. Suggest further refinement
  5. Track clarity improvement

The Optimizer Report Format

{
  "brief_evaluation": {
    "brief_name": "Material Design 3 Brand Brief",
    "design_system": "Material Design 3",
    "evaluation_date": "2026-02-05",
    "overall_coherence_score": 0-100,

    "dimension_scores": {
      "clarity": 0-100,
      "coherence": 0-100,
      "applicability": 0-100,
      "edge_case_coverage": 0-100,
      "distinctiveness": 0-100
    },

    "clarity_findings": {
      "clear_sections": ["Color semantic tokens defined with use cases", "..."],
      "vague_sections": ["Motion guidance lacks easing curve specifics", "..."],
      "hedge_language": ["Components should look polished", "..."],
      "recommendations": ["Define easing curves (emphasized vs. standard)", "..."]
    },

    "coherence_findings": {
      "working_together": ["Semantic tokens + dark mode support create accessibility", "..."],
      "potential_conflicts": ["Maximum icon density vs. clarity priority", "..."],
      "resolution_suggestions": ["Define density by context (dashboard vs. form)", "..."]
    },

    "applicability_testing": {
      "information_heavy_context": "Brief guidance works well for dashboards",
      "emotional_context": "Brief guidance works for landing pages",
      "form_context": "Brief guidance needs clarification for error states",
      "navigation_context": "Brief guidance clear for headers/panels",
      "summary": "Applicable in 3/4 tested contexts"
    },

    "uncovered_edge_cases": [
      "How to handle high-contrast mode while maintaining design aesthetic?",
      "When to deviate from semantic tokens for accessibility?",
      "Guidance for motion in reduced-motion contexts?",
      "Decision tree for conflicting principles?"
    ],

    "overall_assessment": "Brief is solid but needs targeted refinement in three areas..."
  }
}
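A filled-in report can be post-processed to surface refinement targets. The sketch below uses illustrative score values (the template shows 0-100 placeholders); the field names follow the report format above:

```python
import json

# A filled-in report matching the template above (values are illustrative).
report_json = """
{
  "brief_evaluation": {
    "brief_name": "Material Design 3 Brand Brief",
    "overall_coherence_score": 72,
    "dimension_scores": {
      "clarity": 78,
      "coherence": 70,
      "applicability": 75,
      "edge_case_coverage": 55,
      "distinctiveness": 82
    }
  }
}
"""

evaluation = json.loads(report_json)["brief_evaluation"]
scores = evaluation["dimension_scores"]

# Surface the weakest dimensions first -- these are the refinement targets.
for dimension, score in sorted(scores.items(), key=lambda item: item[1]):
    flag = "needs work" if score < 60 else "ok"
    print(f"{dimension:20s} {score:3d}  {flag}")
```

Sorting by dimension score makes the next revision pass obvious: here, edge-case coverage is the weakest dimension and the first thing to refine.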

Key Capabilities

Clarity Analysis

Identifies:

  • Specific vs. vague language
  • Rules vs. values (which guides better decisions?)
  • Assumed knowledge (what does team need to know?)
  • Ambiguities in terminology
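A toy illustration of the specific-vs-vague check: scan a brief sentence for hedge words. The word list and function name are assumptions for illustration, not the skill's actual implementation:

```python
import re

# Hypothetical hedge-word list -- an assumption for this sketch, not the
# skill's actual vocabulary.
HEDGE_WORDS = {"should", "polished", "expressive", "distinctive", "feel", "appropriate"}

def find_hedges(sentence: str) -> list[str]:
    """Return the hedge words appearing in a brief sentence."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return [t for t in tokens if t in HEDGE_WORDS]

print(find_hedges("Components should look polished"))  # ['should', 'polished']
```

A sentence like "Use Roboto Flex weights 300-400 for body text" comes back empty: specific language names fonts, weights, and contexts instead of hedging.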

Coherence Validation

Checks:

  • Do principles work together?
  • Are there contradictions?
  • Is hierarchy of values clear?
  • Do examples support stated principles?

Real-World Testing

Simulates decisions in:

  • Information-dense contexts (clarity needed)
  • Emotional communication (impact needed)
  • Structural components (consistency needed)
  • Forms & interaction (precision needed)
  • Accessibility contexts (inclusive design needed)

Gap Identification

Reveals:

  • Edge cases not addressed
  • Assumed knowledge not stated
  • Contradictions needing resolution
  • Future maturation areas

Workflow: Iterative Brief Refinement

  1. Write draft brief (initial thinking)
  2. Run optimizer (identify gaps)
  3. Revise based on feedback (add specificity)
  4. Re-run optimizer (verify improvements)
  5. Repeat until score stabilizes at 80+
  6. Deploy brief (team can use with confidence)
  7. Monitor real-world application (refine based on usage)

This produces briefs that actually guide work rather than merely aspiring to.
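The loop in steps 2-5 can be sketched as code. `run_optimizer` and `revise` are placeholders for asking Claude to evaluate and for your own edits; the simulated scoring is purely illustrative:

```python
# A minimal sketch of the refinement loop above. `run_optimizer` and `revise`
# stand in for the manual evaluate/edit steps; they are not real APIs.

def refine_until_stable(brief, run_optimizer, revise, target=80, max_rounds=10):
    """Repeat evaluate -> revise until the coherence score reaches `target`."""
    score = run_optimizer(brief)
    for _ in range(max_rounds):
        if score >= target:
            break
        brief = revise(brief, score)   # step 3: add specificity
        score = run_optimizer(brief)   # step 4: re-run optimizer
    return brief, score

# Simulated run: each revision pass raises the score by 10 points.
brief, score = refine_until_stable(
    "draft brief",
    run_optimizer=lambda b: 40 + 10 * b.count("[revised]"),
    revise=lambda b, s: b + " [revised]",
)
print(score)  # 80
```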

Integration with Other Skills

With Frontend-Design

Compare brief aesthetic philosophy against frontend-design thinking—do they align?

With M3-Visual-Audit (or system-specific audit)

Brief clarity is measured by whether audit results are consistent (a clear brief produces consistent audits).

With Token-Orchestrator

Test whether brief's token guidance is specific and defensible.

With Compliance-Dashboard

Dashboard tracks whether brief is actually guiding component development (visible through compliance patterns).

With Component-Builder

Verify that brief guidance produces coherent components across different types.

Limitations

This skill:

✅ Identifies vague and unclear language
✅ Tests brief against realistic scenarios
✅ Generates edge cases you haven't considered
✅ Provides coherence scoring for improvement tracking

❌ Cannot write the brief for you (refining language is your work)
❌ Cannot guarantee every team member interprets it the same way
❌ Doesn't replace human review and discussion
❌ Coherence score is relative, not absolute

Success Criteria

A brief is ready for deployment when it:

  1. Scores 80+ on coherence (clear enough to guide decisions)
  2. Answers likely edge cases (team knows what to do when uncertain)
  3. Works across component types (applicable to your full product)
  4. Speaks to values, not just rules (team internalizes, doesn't just follow)
  5. Distinguishes your system (brief reveals your design philosophy)

Key Principle

The difference between a brief that sits on a shelf and one that becomes culture is the difference between rules and values.

Rules are obeyed. Values are internalized.

A brief optimized for clarity and coherence shifts your team from "what rules do I follow?" to "what values do I embody?"

That shift changes everything.


A great brief doesn't just guide design decisions—it shifts how your team thinks about design.