# Exa Monitor Skill

Polls the Exa search engine every 4 hours for topics configured in `EXA_MONITOR_TOPICS`. Deduplicates results against previously seen URLs, stores new summaries in named memory, and integrates with the morning briefing loop.
## Setup

1. Set `EXA_MONITOR_TOPICS` in `.env`:

   ```bash
   EXA_MONITOR_TOPICS=["Claude AI updates","agent frameworks","LLM tooling","AI safety"]
   ```

2. Start the monitor loop:

   ```
   /loop 4h Skill({ skill: 'exa-monitor' })
   ```

   Or via `CronCreate`:

   ```javascript
   CronCreate({
     schedule: '0 */4 * * *',
     task: "Invoke Skill({ skill: 'exa-monitor' }) to fetch new Exa search results",
   });
   ```
## Core Logic
### Step 1: Load Topics and Seen URLs

```javascript
let topics;
try {
  topics = JSON.parse(process.env.EXA_MONITOR_TOPICS || '["Claude AI updates","agent frameworks"]');
} catch (_e) {
  topics = ['Claude AI updates', 'agent frameworks'];
}

// Load previously seen URLs from memory
const seenRaw = await readMemory('exa-seen-urls');
const seenUrls = new Set(seenRaw ? JSON.parse(seenRaw) : []);
```
### Step 2: Search Each Topic via Exa MCP

```javascript
// Use the Exa MCP tool (preferred — returns structured results).
// Skill({ skill: 'exa-monitor' }) invokes mcp__Exa__web_search_exa internally:
//   mcp__Exa__web_search_exa({ query: topic, numResults: 5, useAutoprompt: true })
// Falls back to mcp__Exa__get_code_context_exa for technical topics.
```
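Because the MCP tool is invoked by the agent rather than from plain JavaScript, the per-topic search loop can only be sketched here. `searchExa` below is a hypothetical stand-in for `mcp__Exa__web_search_exa`, stubbed to show the result shape Step 3 consumes:

```javascript
// Hypothetical stand-in for mcp__Exa__web_search_exa — in the real skill the
// agent invokes the MCP tool; this stub only mirrors the fields Step 3 reads.
async function searchExa(query, numResults = 5) {
  return [
    {
      title: `Result for ${query}`,
      url: `https://example.com/${encodeURIComponent(query)}`,
      text: 'summary text',
      publishedDate: null,
    },
  ].slice(0, numResults);
}

// Collect results for every configured topic, tagging each batch with its
// topic so the digest entries in Step 3 can record where they came from.
async function searchAllTopics(topics, numResults = 5) {
  const perTopic = [];
  for (const topic of topics) {
    const results = await searchExa(topic, numResults);
    perTopic.push({ topic, results });
  }
  return perTopic;
}
```

Each `{ topic, results }` pair feeds one pass of the Step 3 filter loop below.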
### Step 3: Filter and Deduplicate

```javascript
// `exaResults` and `topic` come from the Step 2 search pass for each topic.
const newResults = [];
for (const result of exaResults) {
  if (seenUrls.has(result.url)) continue;
  seenUrls.add(result.url);
  newResults.push({
    title: result.title,
    url: result.url,
    summary: result.text?.slice(0, 400) || result.highlights?.join(' ') || '',
    topic,
    publishedDate: result.publishedDate,
  });
}

// Persist seen URLs via named memory (cap at 2000)
const seenArr = [...seenUrls].slice(-2000);
await writeMemory('exa-seen-urls', JSON.stringify(seenArr));
```
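The dedupe-and-cap behavior can also be factored into a pure helper (illustrative only, not part of the skill), which makes the 2000-URL cap easy to verify in isolation:

```javascript
// Illustrative pure helper: merge new URLs into the seen list and keep only
// the `max` most recently added entries, dropping the oldest first.
function capSeenUrls(seenArr, newUrls, max = 2000) {
  const combined = [...seenArr, ...newUrls.filter(u => !seenArr.includes(u))];
  return combined.slice(-max);
}
```

Because `slice(-max)` keeps the tail of the array, older URLs eventually fall out of the set and their pages could resurface; the cap trades perfect dedup for bounded memory use.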
### Step 4: Append to Digest

```javascript
if (newResults.length > 0) {
  const digest = newResults
    .map(
      r =>
        `## ${r.title}\n**Topic:** ${r.topic}\n**Published:** ${r.publishedDate || 'unknown'}\n${r.summary}...\n[Read →](${r.url})\n`
    )
    .join('\n---\n');

  // Append to exa-digest.md
  const digestPath = '.claude/context/memory/named/exa-digest.md';
  const existing = fs.existsSync(digestPath) ? fs.readFileSync(digestPath, 'utf8') : '';
  fs.writeFileSync(
    digestPath,
    `${existing}\n\n## Exa Update — ${new Date().toISOString().slice(0, 10)}\n\n${digest}`
  );
}
```
## Integration with Morning Briefing

```
/loop at 8:00am Read .claude/context/memory/named/exa-digest.md and arxiv-digest.md. Summarize the most relevant news and papers for agent-studio development. Highlight any urgent developments.
```
## Configuration Reference

| Variable             | Default                 | Description                              |
| -------------------- | ----------------------- | ---------------------------------------- |
| `EXA_MONITOR_TOPICS` | `["Claude AI updates"]` | JSON array of search topics              |
| `EXA_MAX_RESULTS`    | `5`                     | Max results per topic per run            |
| `EXA_AUTOPROMPT`     | `true`                  | Let Exa optimize the query automatically |
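The skill's own code only shows `EXA_MONITOR_TOPICS` being read; a sketch of loading all three variables with the table's defaults might look like the following (the numeric and boolean parsing is an assumption about how the other two are consumed):

```javascript
// Sketch: read the configuration table's variables with their documented
// defaults. EXA_MAX_RESULTS is parsed as an integer and EXA_AUTOPROMPT as a
// boolean — both assumptions, since the skill code only reads the topics.
function loadConfig(env = process.env) {
  let topics;
  try {
    topics = JSON.parse(env.EXA_MONITOR_TOPICS || '["Claude AI updates"]');
  } catch (_e) {
    topics = ['Claude AI updates']; // fall back if the env value is not valid JSON
  }
  return {
    topics,
    maxResults: Number.parseInt(env.EXA_MAX_RESULTS || '5', 10),
    autoprompt: (env.EXA_AUTOPROMPT || 'true') !== 'false',
  };
}
```

Treating any value other than the literal string `'false'` as true keeps the default permissive, matching the table's `true` default.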
## Deduplication

- Seen URLs are stored via `writeMemory('exa-seen-urls', ...)` (named memory API)
- Capped at the 2000 most recent URLs
- Reset with:

  ```javascript
  await writeMemory('exa-seen-urls', '[]');
  ```
## Related Skills

- `scheduled-tasks` — CronCreate API
- `arxiv-monitor` — ArXiv companion monitor for academic papers
- `heartbeat` — Full heartbeat ecosystem including this loop
- `memory-search` — Search the exa-digest for specific topics