Agent Skills: Limitless Daily Summary Skill

Use when user says "process Limitless for [date]", "analyze my conversations", or "create daily summary". Automatically fetches Limitless Pendant recordings, improves titles, deduplicates, and creates Knowledge Framework summary with concept/event taxonomy.

Skill ID: tekliner/improvado-agentic-frameworks-and-skills/limitless-daily-summary

Install this agent skill locally:

pnpm dlx add-skill https://github.com/tekliner/improvado-agentic-frameworks-and-skills/tree/HEAD/skills/limitless-daily-summary

Skill Files


skills/limitless-daily-summary/SKILL.md

Skill Metadata

Name
limitless-daily-summary
Description
Use when user says "process Limitless for [date]", "analyze my conversations", or "create daily summary". Automatically fetches Limitless Pendant recordings, improves titles, deduplicates, and creates Knowledge Framework summary with concept/event taxonomy.

Limitless Daily Summary Skill

Complete workflow for processing Limitless Pendant recordings into structured daily summaries with concept taxonomy, event taxonomy, and notable topic detection.

When to Use This Skill

Use this skill when the user wants to:

  • Process Limitless recordings for any date
  • Create a daily summary with concept/event taxonomy
  • Analyze conversation patterns and notable moments
  • Generate Knowledge Framework documentation from recordings

Quick Start Checklist

When user wants Limitless summary:

[ ] 1. Identify date (today, yesterday, or specific YYYY-MM-DD)
[ ] 2. Fetch data: 03_limitless_unified_client.py --mode lifelogs --date YYYY-MM-DD
[ ] 3. Check for duplicates: 05_deduplicate_recordings.py --date YYYY-MM-DD --dry-run
[ ] 4. Remove duplicates if found (rarely needed)
[ ] 5. Create LLM summary: 08_create_llm_daily_summary.py --date YYYY-MM-DD
[ ] 6. Review 00_daily_summary.md output (3 Mermaid diagrams, 50+ links)

5-Second Decision Tree:

  • User says "process Limitless"? → Run full 3-step workflow
  • User wants today's data? → Use $(date +%Y-%m-%d)
  • User wants specific date? → Use YYYY-MM-DD format
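The date resolution in the decision tree can be sketched as a small Python helper (the function name and accepted phrases are illustrative, not part of the skill's scripts):

```python
from datetime import date, timedelta

def resolve_date(phrase: str) -> str:
    """Map 'today' / 'yesterday' / 'YYYY-MM-DD' to an ISO date string."""
    phrase = phrase.strip().lower()
    if phrase == "today":
        return date.today().isoformat()
    if phrase == "yesterday":
        return (date.today() - timedelta(days=1)).isoformat()
    # Otherwise assume an explicit YYYY-MM-DD date; this validates it too
    return date.fromisoformat(phrase).isoformat()
```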

Practical Workflow

BEFORE processing recordings:

  1. Determine date (today vs yesterday vs specific)
  2. Run 3-step process:
    • Step 1: python 03_limitless_unified_client.py --mode lifelogs --date YYYY-MM-DD
    • Step 2: (Optional) python 05_deduplicate_recordings.py --date YYYY-MM-DD
    • Step 3: python 08_create_llm_daily_summary.py --date YYYY-MM-DD
  3. Verify output (recordings/ folder + 00_daily_summary.md)
  4. Check quality (specific topics, not generic categories)
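The steps above can be chained in a small driver script. This is a sketch: the script names and flags come from this document, but the wrapper itself (and its minimal error handling) is illustrative:

```python
import subprocess

def build_steps(day: str, dedupe: bool = False):
    """Commands for fetch → (optional) dedupe → summarize for one date."""
    steps = [["python", "03_limitless_unified_client.py",
              "--mode", "lifelogs", "--date", day]]
    if dedupe:
        steps.append(["python", "05_deduplicate_recordings.py", "--date", day])
    steps.append(["python", "08_create_llm_daily_summary.py", "--date", day])
    return steps

def run_daily_workflow(day: str, dedupe: bool = False) -> None:
    for cmd in build_steps(day, dedupe):
        subprocess.run(cmd, check=True)  # stop on the first failing step
```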

Example rapid application:

User: "Analyze my conversations from November 8"

Agent thinks:
- Date: 2025-11-08
- Step 1: Fetch recordings (improved titles, PST timezone)
- Step 2: Check duplicates (--dry-run first)
- Step 3: LLM summary (Gemini 2.0 Flash, 1M context)
- Output: Concept taxonomy with SPECIFIC topics (not "Work Projects")

Complete Workflow

Step 1: Fetch Data from Limitless API

python 03_limitless_unified_client.py --mode lifelogs --date YYYY-MM-DD

# Output:
# - recordings/ folder with files: YYYY-MM-DD_HH:MM-HH:MM_Title.md
# - 00_full_lifelogs.json (raw API data)
# - Improved titles (generic "различных тем" / "various topics" → specific topics)
# - Timestamps in PST timezone

Features:

  • Date + time range in filename
  • Generic title improvement (extracts key topics from content)
  • Timezone conversion to PST
  • Logs title improvements
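The PST conversion can be sketched with the stdlib zoneinfo module (pytz, listed in this skill's dependencies, works equivalently). The assumption here is that the API returns ISO-8601 UTC timestamps:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

PACIFIC = ZoneInfo("America/Los_Angeles")

def to_pacific(utc_iso: str) -> str:
    """Convert an ISO-8601 UTC timestamp to a Pacific-time HH:MM string."""
    dt = datetime.fromisoformat(utc_iso.replace("Z", "+00:00"))
    return dt.astimezone(PACIFIC).strftime("%H:%M")
```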

Step 2: Deduplicate Files (Optional)

# Dry run (check what would be removed)
python 05_deduplicate_recordings.py --date YYYY-MM-DD --dry-run

# Actually remove duplicates
python 05_deduplicate_recordings.py --date YYYY-MM-DD

# Output:
# - Removes files with same chat ID
# - Keeps first occurrence
# - Logs all removals

When to use: If API returns duplicate recordings (rare but possible)
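The keep-first-occurrence policy amounts to a set-based scan over the day's files. A minimal sketch (how the chat ID and path are carried per recording is an assumption, not the script's actual data model):

```python
def find_duplicates(recordings):
    """Return recordings whose chat ID was already seen (keep-first policy).

    Each recording is assumed to be a dict with 'id' and 'path' keys.
    """
    seen = set()
    duplicates = []
    # Sort by filename so 'first occurrence' means chronologically first
    for rec in sorted(recordings, key=lambda r: r["path"]):
        if rec["id"] in seen:
            duplicates.append(rec)  # would be removed (or just listed in --dry-run)
        else:
            seen.add(rec["id"])
    return duplicates
```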

Step 3: Create LLM-Based Daily Summary

python 08_create_llm_daily_summary.py --date YYYY-MM-DD

# Output:
# - 00_daily_summary.md in Knowledge Framework format
# - Real semantic analysis (NOT keyword counting)
# - Concept taxonomy with actual discussion themes
# - TWO-LEVEL Event Timeline (main events + sub-events)
# - 50+ clickable links to original recordings
# - 3 Mermaid diagrams
# - Session ID attribution
# - Uses Gemini 2.0 Flash (1M context window)

Summary Structure

Concept Taxonomy (§1.0) - SPECIFIC TOPICS (NOT GENERIC)

CRITICAL REQUIREMENT: Use CONCRETE topics, not generic categories

❌ BAD (Too Generic):

  • "Work Projects"
  • "Family Activities"
  • "Political Discussion"
  • "Technology Topics"

✅ GOOD (Specific & Concrete):

  • "AI Agents Hackathon Win"
  • "Beach Trip & Crab Hunt"
  • "Putin Longevity Theory"
  • "Data Pipeline Debugging"

What LLM Extracts:

  • ACTUAL discussion topics from transcripts (not generic categories)
  • Concrete events that happened (not abstract themes)
  • Specific names of projects, people, places mentioned
  • Real issues discussed (not "business" but "M&A bank conflict")

Example Mermaid Diagram (SPECIFIC):

graph TD
    Day["2025-11-08"] --> AI["🤖 AI Agents Hackathon"]
    Day --> Startup["💼 Cattle Care Startup"]
    Day --> Health["⚕️ Diabetes & Stoicism"]
    Day --> Unusual["⚡ Drug Use Discussion"]

    AI --> Framework["Knowledge Framework"]
    AI --> Longevity["Longevity Research"]
    Startup --> Farmers["Working with Farmers"]
    Startup --> Geography["Remote Regions"]
    Health --> Lifestyle["Healthy Lifestyle"]
    Health --> Stoic["Stoic Philosophy"]

NOT like this (too generic):

graph TD
    Day[YYYY-MM-DD] --> Work[Work Topics]
    Day --> Personal[Personal Life]
    Day --> Tech[Technology]

Event Timeline (§2.0) - TWO-LEVEL STRUCTURE

Hierarchical Event Structure:

  • Main Events (4-6): Major activity blocks (Conference Session, Office Discussions)
  • Sub-Events (2-4 per main): Specific activities within block (Keynote, Q&A, Coffee Break)

Analysis Approach:

  • LLM semantically groups recordings by topic/context
  • Identifies major transitions in day (work → social, meeting → break)
  • Creates hierarchical structure showing BOTH overview AND detail

Two-Level Mermaid Diagram:

graph TD
    Start[06:00] --> Morning[🌅 Morning Prep]
    Morning --> ConfPrep[💼 Conference Prep]
    Morning --> PersonalWell[⚕️ Personal Well-being]

    Start --> Afternoon[☀️ Afternoon: Presentations]
    Afternoon --> TechPres[🤖 Tech Presentations]
    Afternoon --> ProjectDisc[⚙️ Project Discussions]

    Start --> Evening[🌙 Evening: Social]
    Evening --> MusicDisc[🎵 Music Discussions]
    Evening --> SocialGathering[🍻 Social Gathering]

    ConfPrep --> End[22:31]
    PersonalWell --> End
    TechPres --> End
    ProjectDisc --> End
    MusicDisc --> End
    SocialGathering --> End

Key Features:

  • Shows day structure at-a-glance (main events)
  • Reveals details on demand (sub-events)
  • Each sub-event links to original recording
  • Time ranges provided for each main event

Notable Topics (§3.0)

Detection Criteria:

  • Short recordings (<30 sec)
  • Keywords: "странн", "необычн", "интересн" (Russian stems for "strange", "unusual", "interesting"), "Burning Man", "биохакинг" ("biohacking")
  • Special markers (starred recordings)

Mermaid Diagram:

graph TD
    Unusual[Notable Moments] --> U1[Short Interactions]
    Unusual --> U2[Unusual Keywords]
    Unusual --> U3[Special Events]

    U1 --> UN1["HH:MM: Title..."]
    U2 --> UN2["HH:MM: Title..."]
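The detection criteria above reduce to a simple predicate. The keyword list comes from this document; the field names and threshold handling are illustrative:

```python
NOTABLE_KEYWORDS = ["странн", "необычн", "интересн", "burning man", "биохакинг"]

def is_notable(duration_sec: float, text: str, starred: bool = False) -> bool:
    """Flag short, starred, or keyword-matching recordings as notable."""
    if duration_sec < 30 or starred:
        return True
    lowered = text.lower()
    return any(kw in lowered for kw in NOTABLE_KEYWORDS)
```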

Example Usage

Process Today's Recordings

# User: "Process my Limitless recordings from today"

# Step 1: Fetch data
cd /path/to/code
python 03_limitless_unified_client.py --mode lifelogs --date $(date +%Y-%m-%d)

# Step 2: Create summary
python 08_create_llm_daily_summary.py --date $(date +%Y-%m-%d)

# Result:
# - N recordings with timestamps
# - LLM-based summary with 3 diagrams
# - Concept & event taxonomy

Process Specific Date

# User: "Analyze my conversations from November 8"

python 03_limitless_unified_client.py --mode lifelogs --date 2025-11-08
python 08_create_llm_daily_summary.py --date 2025-11-08

Full Workflow with Cleanup

# User: "Get yesterday's Limitless data, remove duplicates, create summary"

DATE=$(date -v-1d +%Y-%m-%d)  # Yesterday (macOS; GNU coreutils: date -d yesterday +%Y-%m-%d)

# Fetch
python 03_limitless_unified_client.py --mode lifelogs --date $DATE

# Check for duplicates
python 05_deduplicate_recordings.py --date $DATE --dry-run

# Remove if found
python 05_deduplicate_recordings.py --date $DATE

# Create LLM-based summary
python 08_create_llm_daily_summary.py --date $DATE

File Locations

Scripts:

algorithms/A8_G&A_div/Daniel Personal/Daniel_communications/code/
├── 03_limitless_unified_client.py      # Fetch & improve titles
├── 05_deduplicate_recordings.py        # Remove duplicates
└── 08_create_llm_daily_summary.py      # LLM-based semantic analysis

Output Structure:

calls/YYYY-MM-DD/
└── lifelogs/
    ├── 00_daily_summary.md             # Enhanced summary (NEW!)
    ├── 00_full_lifelogs.json           # Raw API data
    └── recordings/
        ├── YYYY-MM-DD_HH:MM-HH:MM_Topic1.md
        ├── YYYY-MM-DD_HH:MM-HH:MM_Topic2.md
        └── ...

Filename Format

Pattern: YYYY-MM-DD_HH:MM-HH:MM_Title.md

Example: 2025-11-08_14:30-14:45_Обсуждение проекта.md (Russian for "Project Discussion")

Benefits:

  • Date visible without opening file
  • Start and end time clearly shown
  • Duration easily calculated
  • Automatic chronological sorting
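Because the pattern is fixed, the duration can be computed straight from the filename. A sketch (midnight-crossing recordings are not handled here):

```python
import re
from datetime import datetime

PATTERN = re.compile(r"^(\d{4}-\d{2}-\d{2})_(\d{2}:\d{2})-(\d{2}:\d{2})_(.+)\.md$")

def duration_minutes(filename: str) -> int:
    """Duration in minutes from a YYYY-MM-DD_HH:MM-HH:MM_Title.md filename."""
    m = PATTERN.match(filename)
    if not m:
        raise ValueError(f"unexpected filename: {filename}")
    day, start, end, _title = m.groups()
    fmt = "%Y-%m-%d %H:%M"
    delta = (datetime.strptime(f"{day} {end}", fmt)
             - datetime.strptime(f"{day} {start}", fmt))
    return int(delta.total_seconds() // 60)
```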

Title Improvement

Algorithm:

# Detects generic patterns:
# - "Обсуждение различных тем"
# - "Discussion of various topics"
# - "Conversation", "Meeting"

# Extracts topics from first 500 chars:
# - Finds capitalized words (proper nouns)
# - Counts frequency
# - Takes top 3 mentioned 2+ times
# - Creates: "Обсуждение [Topic1], [Topic2]"

Example:

Input:  "Обсуждение различных тем" ("Discussion of various topics")
Output: "Обсуждение Zoom, Встречи, Планы" ("Discussion of Zoom, Meetings, Plans")
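The algorithm above can be sketched as follows. This is a heuristic reconstruction from the description, not the script's actual code; the regex is an assumption that covers both Latin and Cyrillic capitalized words:

```python
import re
from collections import Counter

GENERIC = ("различных тем", "various topics", "conversation", "meeting")

def improve_title(title: str, transcript: str) -> str:
    """Replace a generic title with the top topics found in the transcript."""
    if not any(g in title.lower() for g in GENERIC):
        return title  # title is already specific
    # Capitalized words (proper-noun candidates) in the first 500 chars
    words = re.findall(r"\b[A-ZА-Я][\wа-яё]{2,}\b", transcript[:500])
    counts = Counter(words)
    # Top 3 topics mentioned 2+ times, in frequency order
    topics = [w for w, n in counts.most_common() if n >= 2][:3]
    return f"Обсуждение {', '.join(topics)}" if topics else title
```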

LLM-Based Summarization (Technical Details)

How It Works:

¶1 Full-day context analysis:

  • Model: Gemini 2.0 Flash with 1M token context window
  • Input: Entire day's transcripts (~230K tokens for 82 recordings)
  • Processing: Single API call - model sees ALL conversations
  • Result: Semantic understanding, not keyword counting

¶2 Two-level Event Timeline creation:

  • Main Events (4-6): LLM identifies major activity blocks
    • Example: "Morning Prep & Planning", "Afternoon: Presentations", "Evening: Social"
  • Sub-Events (2-4 per main): Specific activities within each block
    • Example: "💼 Conference Prep", "⚕️ Personal Well-being"
  • Mermaid structure: graph TD showing hierarchical relationships
  • Benefits: Shows BOTH overview (main events) AND detail (sub-events)

¶3 Clickable links to recordings:

  • Format: [Time Range](recordings/filename.md)
  • Count: 50+ links throughout document
  • Usage: Cmd+Click in Cursor/VS Code opens original transcript
  • Link construction: LLM matches time ranges to actual filenames
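Given the fixed filename pattern, link construction is a string-splitting exercise. A hypothetical helper, not the script's actual code:

```python
def recording_link(filename: str) -> str:
    """Build a '[HH:MM-HH:MM](recordings/<file>)' markdown link from a filename."""
    # 'YYYY-MM-DD_HH:MM-HH:MM_Title.md' → second underscore-delimited field
    time_range = filename.split("_", 2)[1]
    return f"[{time_range}](recordings/{filename})"
```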

¶4 Advantages over keyword-based approach:

  • Before (07_create_enhanced_daily_summary.py): Word frequency → "Танцуют (18)" ("Dancing (18)" — meaningless)
  • After (08_create_llm_daily_summary.py): Semantic analysis → "AI Agents Presentation" (actual topic)
  • Key improvement: Understanding MEANING not WORDS

¶5 Token budget management:

  • Typical day: 200K-300K tokens
  • Well within Gemini 2.0 Flash 1M context
  • Allows processing entire day in single call
  • Enables cross-conversation understanding
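A rough pre-flight budget check can use the common ~4-characters-per-token heuristic. This is an approximation, not the model tokenizer's exact count:

```python
def fits_context(transcripts, context_limit: int = 1_000_000) -> bool:
    """Estimate whether a day's transcripts fit in the model's context window."""
    est_tokens = sum(len(t) for t in transcripts) // 4  # ~4 chars per token
    print(f"Estimated tokens: {est_tokens:,} / {context_limit:,}")
    return est_tokens <= context_limit
```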

Concept Extraction (Legacy Info)

Note: The LLM-based approach (08_create_llm_daily_summary.py) extracts concepts semantically, not via keyword matching. This section describes the old keyword-based approach for reference.

Keywords by Category:

Work:

  • проект, встреч, задач, работ, клиент
  • meeting, project, task, deadline

Places:

  • офис, дом, Остин, Израиль
  • office, home, Austin, Israel, конференц

Technology:

  • API, код, баг, deploy, dashboard, database
  • token, HTML, code, bug

People:

  • Capitalized words >4 chars
  • Mentioned 2+ times
  • Not generic (Unknown, Обсуждение / "Discussion")
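For reference, the legacy keyword matching reduces to substring checks per category. A sketch of the old approach using the stems listed above (the dict layout is illustrative):

```python
CATEGORY_KEYWORDS = {
    "Work": ["проект", "встреч", "задач", "работ", "клиент",
             "meeting", "project", "task", "deadline"],
    "Places": ["офис", "дом", "остин", "израиль",
               "office", "home", "austin", "israel"],
    "Technology": ["api", "код", "баг", "deploy", "dashboard",
                   "database", "token", "html", "code", "bug"],
}

def categorize(text: str):
    """Return every category whose keyword stems appear in the text."""
    lowered = text.lower()
    return [cat for cat, stems in CATEGORY_KEYWORDS.items()
            if any(s in lowered for s in stems)]
```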

Dependencies

  • Python 3.8+
  • Packages: requests, python-dotenv, pytz
  • Environment: LIMITLESS_API_KEY in .env
  • Session ID script: data_sources/claude_code/get_session_id.py

Quality Standards

Knowledge Framework compliant:

  • Thesis previewing all sections
  • MECE structure (§1.0, §2.0, §3.0)
  • Mermaid diagrams (Continuant + Occurrent + Participation)
  • Paragraph numbering (¶1, ¶2)
  • Session ID attribution

Comprehensive analysis:

  • Concept taxonomy covering all major themes
  • Temporal taxonomy with nested events
  • Notable topic detection

Clear timestamps:

  • All files have date + time range
  • PST timezone conversion
  • Duration easily calculable

Troubleshooting

No recordings found:

# Check if date is correct (PST timezone)
# Limitless API uses PST, so date might differ from your local time
python 03_limitless_unified_client.py --mode lifelogs --date YYYY-MM-DD --timezone America/Los_Angeles

Generic titles not improved:

  • Title improvement extracts topics from the transcript
  • If the transcript is very short, no repeated topics may be found
  • The algorithm only keeps words mentioned 2+ times

Many unusual topics detected:

  • Short recordings (<30 sec) flagged as unusual
  • Check if these are actual conversations or noise
  • Adjust threshold in script if needed

Next Steps

After creating summary:

  1. Review 00_daily_summary.md
  2. Check concept taxonomy for accuracy
  3. Verify time periods match actual day flow
  4. Look at notable topics for interesting moments
  5. Update Notion/Jira if action items found

Meta Note: This skill automates the complete Limitless recording pipeline, from API fetch to a structured daily summary with visual taxonomy diagrams.