Agent Skills: Optimizing Bluera Knowledge Search

Optimize BK search with intent, detail level, and store filtering

ID: blueraai/bluera-knowledge/search-optimization

Install this agent skill to your local environment:

pnpm dlx add-skill https://github.com/blueraai/bluera-knowledge/tree/HEAD/skills/search-optimization

Skill Files

Browse the full folder contents for search-optimization.



skills/search-optimization/SKILL.md

Skill Metadata

Name
search-optimization
Description
Optimize BK search with intent, detail level, and store filtering

Optimizing Bluera Knowledge Search

Master the search() MCP tool parameters to get better results with less context usage.

Understanding Search Parameters

search(
  query: string,                    // Your search query
  intent?: SearchIntent,            // What you're looking for
  detail?: 'minimal' | 'contextual' | 'full',  // How much context to return
  limit?: number,                   // Max results (default: 10)
  stores?: string[]                 // Which stores to search
)

Search Intent: Choosing the Right Type

The intent parameter helps the search engine rank results appropriately for your query type.

Intent Decision Tree

Looking for implementation details? Use find-implementation

  • "How does X work internally?"
  • "Show me the implementation of Y"
  • "What's inside the Z class/function?"
search("Vue computed properties implementation", intent='find-implementation')
→ Prioritizes: actual class/function implementations
→ Ranks higher: ComputedRefImpl class, createComputed() function
→ Ranks lower: tests, documentation, examples

Looking for usage patterns? Use find-pattern

  • "How to use X?"
  • "Examples of Y pattern"
  • "Common ways to implement Z"
search("React hooks patterns", intent='find-pattern')
→ Prioritizes: example code, usage patterns, HOCs
→ Ranks higher: common patterns like useEffect cleanup
→ Ranks lower: internal implementation details

Looking for references? Use find-usage

  • "Where is X used?"
  • "Find all calls to Y"
  • "What depends on Z?"
search("useCallback usage", intent='find-usage')
→ Prioritizes: call sites, import statements
→ Ranks higher: files importing and using useCallback
→ Ranks lower: useCallback's own implementation

Looking for definitions/APIs? Use find-definition

  • "What is the API for X?"
  • "Show me the type definition of Y"
  • "What are the parameters for Z?"
search("FastAPI route decorator", intent='find-definition')
→ Prioritizes: function signatures, type definitions
→ Ranks higher: @app.get() decorator definition
→ Ranks lower: examples using the decorator

Looking for documentation? Use find-documentation

  • "What does the doc say about X?"
  • "Explain Y from the documentation"
  • "API reference for Z"
search("Pydantic validators documentation", intent='find-documentation')
→ Prioritizes: README, docstrings, comments
→ Ranks higher: markdown docs, inline documentation
→ Ranks lower: implementation code

Default (No Intent)

If unsure, omit intent; the search engine will fall back to hybrid ranking:

search("authentication middleware")
→ Returns mixed: implementations, patterns, usage, docs
→ Balanced ranking across all categories

Detail Level: Progressive Context Strategy

The detail parameter controls how much code context is returned per result.

Detail Levels Explained

| Level | What You Get | Tokens/Result | Use When |
|-------|--------------|---------------|----------|
| minimal | Summary, file path, relevance | ~100 | Browsing many results |
| contextual | + imports, types, signatures | ~300 | Need interface context |
| full | + complete code, all context | ~800 | Deep dive on specific file |

Progressive Detail Strategy (Recommended)

Step 1: Start Minimal

search(query, detail='minimal', limit=20)
→ Get 20 summaries (~2k tokens total)
→ Scan quickly for relevance
→ Identify top 3-5 candidates

Step 2: Evaluate Scores

Review relevance scores:
- 0.9-1.0: Excellent match (almost certainly relevant)
- 0.7-0.9: Strong match (very likely relevant)
- 0.5-0.7: Moderate match (possibly relevant)
- < 0.5: Weak match (probably not relevant)
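The bands above are easy to encode if you want to act on scores programmatically. A minimal sketch (the `score_band` name and return labels are assumptions, not part of the tool's output):

```python
# Sketch only: bucket a relevance score into the bands described above.
def score_band(score: float) -> str:
    if score >= 0.9:
        return "excellent"   # almost certainly relevant
    if score >= 0.7:
        return "strong"      # very likely relevant
    if score >= 0.5:
        return "moderate"    # possibly relevant
    return "weak"            # probably not relevant
```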

Step 3: Selective Deep Dive

For top results (score > 0.7):
  get_full_context(result_ids)
  → Fetch complete code only for relevant items

For moderate results (score 0.5-0.7):
  search(refined_query, detail='contextual')
  → Try different query with more context
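Putting the three steps together, the control flow looks roughly like this. `search` and `get_full_context` are passed in as stand-ins for the real MCP tools, and the result shape (`id`, `score` keys) is an assumption for illustration:

```python
# Sketch of the progressive strategy; the injected callables stand in
# for the real MCP tools, and the result dict shape is assumed.
def progressive_search(search, get_full_context, query, limit=20):
    # Step 1: cheap minimal pass over many candidates
    results = search(query, detail="minimal", limit=limit)
    # Step 2: keep only strong matches (score > 0.7)
    strong = [r for r in results if r["score"] > 0.7]
    # Step 3: fetch full context only for those
    return get_full_context([r["id"] for r in strong]) if strong else []
```

The point is the shape of the loop: one wide, cheap pass, then a narrow, expensive one, rather than requesting `detail='full'` for everything up front.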

Result Limiting

The limit parameter caps the number of results returned.

Choosing the Right Limit

| Limit | Mode | Use When |
|-------|------|----------|
| 20-50 | Discovery | Exploring, not sure what exists |
| 10-20 | Standard | Specific question, multiple files |
| 3-5 | Targeted | Know exactly what you need |

Tip: Large limits work best with detail='minimal' to control tokens.
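The reason is simple arithmetic: response size is roughly tokens-per-result times limit. A back-of-the-envelope calculator using the estimates from the detail-level table (the numbers are approximations, not guarantees):

```python
# Rough token budgeting using the per-result estimates from the table.
TOKENS_PER_RESULT = {"minimal": 100, "contextual": 300, "full": 800}

def estimated_tokens(detail: str, limit: int) -> int:
    """Approximate response size for a search call."""
    return TOKENS_PER_RESULT[detail] * limit

# 20 minimal results (~2000 tokens) cost half as much as
# 5 full results (~4000 tokens).
```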

Store Filtering

The stores parameter restricts search to specific knowledge stores.

When to Filter Stores

✅ Filter when: You know the library, comparing specific libs, want focused results

❌ Don't filter when: Discovering where code lives, want cross-library perspective

# Check available stores first
list_stores()

# Then filter
search(query, stores=['fastapi', 'express'])

Quick Reference

High-Efficiency Defaults

search(query, detail='minimal', limit=20)
→ Good for most discovery tasks
→ Review, then selectively fetch full context

High-Precision Defaults

search(query, intent='find-implementation', detail='full', limit=5, stores=['known-lib'])
→ When you know exactly what you're looking for
→ Fastest path to deep answer

Balanced Defaults

search(query, detail='contextual', limit=10)
→ Good middle ground
→ See interfaces without full implementation
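If you call `search()` from your own wrapper code, the three default profiles above can be expressed as keyword-argument presets. This is a hypothetical convenience layer, not part of the BK API; the profile names and the `search_with_profile` helper are assumptions:

```python
# Hypothetical presets mirroring the three defaults described above.
PROFILES = {
    "efficiency": {"detail": "minimal", "limit": 20},
    "precision": {"detail": "full", "limit": 5, "intent": "find-implementation"},
    "balanced": {"detail": "contextual", "limit": 10},
}

def search_with_profile(search, query, profile="balanced", **overrides):
    """Call the injected search tool with a preset, allowing overrides."""
    kwargs = {**PROFILES[profile], **overrides}
    return search(query, **kwargs)
```

Overrides let you keep a profile as the baseline while tweaking one parameter, e.g. `search_with_profile(search, q, "efficiency", limit=50)` for a wider discovery pass.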

Deep Dive

For advanced strategies and token optimization examples:

  • @references/strategies.md - Combined optimization strategies with token counts
  • @references/mistakes.md - Common mistakes to avoid