Limitless Conversations Skill - Complete Documentation
Thesis
This skill enables retrieval and analysis of Limitless.ai conversations for any date/period with PST timezone support, automatic theme extraction, and Knowledge Framework structured summaries.
Overview
The skill provides access to Limitless API conversations through two main operations: (1.0) fetching and summarizing conversations for specific dates with automatic theme analysis, and (2.0) extracting individual conversations with full metadata. Both operations use PST timezone, create Knowledge Framework structured output, and save files with numeric prefixes per repository standards.
graph TD
User[User Request] --> DateParse[Parse Date/Range]
DateParse --> API[Limitless API]
API --> Process[Process Conversations]
Process --> Summary[Create Summary]
Process --> Extract[Extract Individuals]
Summary --> Files[Save Files]
Extract --> Files
Files --> Output[Return to User]
graph LR
Fetch[Fetch API Data] --> Analyze[Theme Analysis]
Analyze --> Structure[Knowledge Framework]
Structure --> Save[Save Artifacts]
1.0 Fetching Conversations
¶1 Ordering principle: Sections ordered by workflow - parse user request for date/range (1.1), make API call (1.2), process response (1.3), create summary (1.4).
1.1 Date Parsing
¶1 Accept flexible date inputs:
- "yesterday", "today" → Calculate date in PST
- "2025-11-04" → Direct YYYY-MM-DD format
- "last week" → Calculate 7-day range
- "November 1st" → Parse to YYYY-MM-DD
¶2 ALWAYS use PST timezone:
from datetime import datetime
import pytz
pst = pytz.timezone('America/Los_Angeles')
date_str = datetime.now(pst).strftime("%Y-%m-%d")
¶3 For date ranges - iterate over days:
from datetime import datetime, timedelta
end_date = datetime.now()
for i in range(7): # Last 7 days
date = (end_date - timedelta(days=i)).strftime("%Y-%m-%d")
process_date(date)
1.2 API Integration
¶1 Use existing client function:
import sys
sys.path.append('$PROJECT_ROOT')
from algorithms.A8_G&A_div.Daniel_Personal.Daniel_communications.code.00_process_conversations import fetch_conversations
# Fetch for specific date
data = fetch_conversations("2025-11-04", timezone="America/Los_Angeles")
chats = data.get('data', {}).get('chats', [])
¶2 API endpoint details:
- URL:
https://api.limitless.ai/v1/chats - Headers:
X-API-Key: {LIMITLESS_API_KEY} - Params:
date(YYYY-MM-DD),timezone(default: America/Los_Angeles)
¶3 API key location:
- Environment variable:
LIMITLESS_API_KEYin.envat repository root - Path:
$PROJECT_ROOT/.env
1.3 Response Processing
¶1 Response structure:
{
"data": {
"chats": [
{
"id": "chat_123",
"summary": "Daily insights generation",
"createdAt": "2025-10-30T08:15:00Z",
"startedAt": "2025-10-30T08:15:00Z",
"messages": [
{
"id": "msg_456",
"text": "Create summary...",
"createdAt": "2025-10-30T08:15:00Z",
"user": {
"role": "user",
"name": "Daniel"
}
}
]
}
]
}
}
¶2 Extract key fields:
id- Conversation unique identifiersummary- AI-generated conversation titlemessages- Array of user/assistant messagescreatedAt- ISO timestampuser.role- "user" or "assistant"
1.4 Summary Creation
¶1 Use existing summary function:
from algorithms.A8_G&A_div.Daniel_Personal.Daniel_communications.code.00_process_conversations import create_summary
summary_content = create_summary(chats, date_str)
¶2 Summary structure follows Knowledge Framework:
- Thesis - Document purpose and scope
- Overview - High-level summary with Mermaid diagrams
- 1.0 Conversation Themes - Theme distribution and types
- 2.0 Key Conversations - Top 10 by message count
- 3.0 Key Decisions and Actions - Extracted from Daily Insights
- 4.0 Next Steps - Actionable items
- 5.0 Statistics - Metrics and counts
¶3 Theme auto-detection patterns:
themes = {
'Daily Insights': 'Daily insights' in summary,
'Custom Queries': 'Custom Message' in summary,
'Longevity Research': any(word in summary.lower() for word in ['aging', 'longevity', 'research']),
'M&A': any(word in summary.lower() for word in ['m&a', 'deal', 'basis', 'negotiation']),
'Team Management': any(word in summary.lower() for word in ['wendy', 'employee', 'firing', 'team'])
}
2.0 Extracting Individual Conversations
¶1 Ordering principle: Sections ordered by extraction workflow - read full transcript (2.1), iterate conversations (2.2), save individual files (2.3).
2.1 Read Full Transcript
¶1 Load previously saved JSON:
import json
from pathlib import Path
json_file = Path(f'algorithms/A8_G&A_div/Daniel Personal/Daniel_communications/calls/{date_str}/00_full_transcript.json')
with open(json_file, 'r', encoding='utf-8') as f:
data = json.load(f)
chats = data.get('data', {}).get('chats', [])
2.2 Iterate Conversations
¶1 Use existing extraction function:
from algorithms.A8_G&A_div.Daniel_Personal.Daniel_communications.code.01_extract_conversations import extract_full_conversation
for chat in chats:
conversation_text = extract_full_conversation(chat)
¶2 Conversation text format:
[user/Daniel] (2025-10-30T08:15:00Z):
Create summary of all my decisions from yesterday...
[assistant/Limitless] (2025-10-30T08:15:01Z):
Here's a summary of your key decisions from October 29...
2.3 Save Individual Files
¶1 File naming convention:
from algorithms.A8_G&A_div.Daniel_Personal.Daniel_communications.code.01_extract_conversations import clean_filename
# Format: {NN}_{summary_clean}_{id_short}.md
summary_clean = clean_filename(summary)
filename = f"{i:02d}_{summary_clean}_{chat_id[:8]}.md"
¶2 Output directory structure:
algorithms/A8_G&A_div/Daniel Personal/Daniel_communications/calls/{date}/
├── 00_summary.md # Knowledge Framework summary
├── 00_full_transcript.json # Raw API response
├── 00_all_conversations_full.md # All conversations combined
└── conversations/ # Individual conversation files
├── 01_Daily_insights_Morning_chat_abc.md
├── 02_Longevity_research_NMN_chat_def.md
└── ...
3.0 User Request Patterns
¶1 Ordering principle: Sections ordered by request complexity - single date (3.1), date range (3.2), theme filtering (3.3).
3.1 Single Date Requests
¶1 Example user inputs:
- "Get conversations from yesterday"
- "Show me Limitless chats for 2025-11-04"
- "Extract November 1st conversations"
¶2 Processing steps:
- Parse date to YYYY-MM-DD format (PST)
- Call
fetch_conversations(date_str) - Create summary with
create_summary(chats, date_str) - Save files to output directory
- Run extraction script
01_extract_conversations.py - Report statistics to user
3.2 Date Range Requests
¶1 Example user inputs:
- "Get all conversations from last week"
- "Show me chats from Nov 1-7"
- "Extract conversations for October 2025"
¶2 Processing steps:
- Parse date range (start_date, end_date)
- Iterate over each date
- Fetch and process separately per date
- Combine statistics at the end
- Report total conversations across all dates
¶3 Example implementation:
from datetime import datetime, timedelta
start = datetime.strptime("2025-11-01", "%Y-%m-%d")
end = datetime.strptime("2025-11-07", "%Y-%m-%d")
total_chats = 0
for i in range((end - start).days + 1):
date = (start + timedelta(days=i)).strftime("%Y-%m-%d")
data = fetch_conversations(date)
chats = data.get('data', {}).get('chats', [])
total_chats += len(chats)
# Process and save...
print(f"Total conversations: {total_chats}")
3.3 Theme Filtering (Future Enhancement)
¶1 Potential user inputs:
- "Show only Longevity Research conversations from last week"
- "Get M&A related chats from October"
¶2 Implementation approach:
- Fetch all conversations for date range
- Filter by theme keywords in summary
- Create filtered summary
- Save only matching conversations
4.0 File Management
¶1 Ordering principle: Sections ordered by file lifecycle - creation (4.1), naming (4.2), organization (4.3).
4.1 File Creation
¶1 MANDATORY numeric prefixes:
- All files MUST start with
00_,01_,02_, etc. - Applies to: Python scripts, markdown docs, JSON files, directories
¶2 Output files created:
from pathlib import Path
base_dir = Path('algorithms/A8_G&A_div/Daniel Personal/Daniel_communications')
output_dir = base_dir / 'calls' / date_str
output_dir.mkdir(parents=True, exist_ok=True)
# Main files
summary_file = output_dir / '00_summary.md'
json_file = output_dir / '00_full_transcript.json'
consolidated_file = output_dir / '00_all_conversations_full.md'
# Individual conversations
conversations_dir = output_dir / 'conversations'
conversations_dir.mkdir(exist_ok=True)
4.2 File Naming
¶1 Summary file:
- Name:
00_summary.md - Format: Knowledge Framework structured
- Includes: Thesis, Overview, Mermaid diagrams, numbered sections
¶2 Full transcript:
- Name:
00_full_transcript.json - Format: Raw API response
- Includes: All metadata, messages, timestamps
¶3 Individual conversations:
- Pattern:
{NN}_{summary_clean}_{id_short}.md - Example:
01_Daily_insights_Morning_chat_abc12345.md - Clean summary: Remove special chars, replace spaces with underscores
4.3 File Organization
¶1 Directory structure:
algorithms/A8_G&A_div/Daniel Personal/Daniel_communications/
├── code/
│ ├── 00_process_conversations.py # Main fetcher & summarizer
│ └── 01_extract_conversations.py # Individual extractor
└── calls/
├── 2025-10-30/
│ ├── 00_summary.md
│ ├── 00_full_transcript.json
│ ├── 00_all_conversations_full.md
│ └── conversations/
│ ├── 01_...md
│ └── 02_...md
└── 2025-11-04/
└── ...
5.0 Error Handling
¶1 API key missing:
api_key = os.getenv("LIMITLESS_API_KEY")
if not api_key:
raise ValueError("LIMITLESS_API_KEY not found in .env file")
¶2 API request failure:
try:
response = requests.get(url, headers=headers, params=params)
response.raise_for_status()
except requests.exceptions.HTTPError as e:
print(f"API request failed: {e}")
print(f"Response: {response.text}")
raise
¶3 Invalid date format:
try:
datetime.strptime(date_str, "%Y-%m-%d")
except ValueError:
raise ValueError(f"Invalid date format: {date_str}. Use YYYY-MM-DD")
¶4 Empty response:
chats = data.get('data', {}).get('chats', [])
if not chats:
print(f"No conversations found for {date_str}")
return
6.0 Usage Examples
¶1 Quick single date fetch:
import sys
sys.path.append('$PROJECT_ROOT')
from algorithms.A8_G&A_div.Daniel_Personal.Daniel_communications.code.00_process_conversations import main
# Edit date_str in main() then run
main() # Fetches, summarizes, saves
¶2 Custom date range script:
from datetime import datetime, timedelta
from pathlib import Path
# Fetch last 7 days
end_date = datetime.now()
for i in range(7):
date_str = (end_date - timedelta(days=i)).strftime("%Y-%m-%d")
# Run processing
data = fetch_conversations(date_str)
chats = data.get('data', {}).get('chats', [])
# Save files
output_dir = Path(f'algorithms/A8_G&A_div/Daniel Personal/Daniel_communications/calls/{date_str}')
output_dir.mkdir(parents=True, exist_ok=True)
# ... save summary, json, etc.
¶3 Terminal execution:
# Edit date in 00_process_conversations.py then:
cd $PROJECT_ROOT
$PROJECT_ROOT/claude_venv/bin/python \
algorithms/A8_G\&A_div/Daniel\ Personal/Daniel_communications/code/00_process_conversations.py
# Then extract individual conversations:
$PROJECT_ROOT/claude_venv/bin/python \
algorithms/A8_G\&A_div/Daniel\ Personal/Daniel_communications/code/01_extract_conversations.py
7.0 Agent Instructions
When user requests Limitless conversations:
¶1 Parse request:
- Extract date or date range from user input
- Convert to YYYY-MM-DD format using PST timezone
- Validate date format
¶2 Single date workflow:
- Call
fetch_conversations(date_str, timezone="America/Los_Angeles") - Get
chatsfrom response - Create summary with
create_summary(chats, date_str) - Save 3 files:
00_summary.md,00_full_transcript.json,00_all_conversations_full.md - Run
01_extract_conversations.pyto create individual files - Report statistics to user
¶3 Date range workflow:
- Calculate all dates in range
- Iterate over each date
- Run single date workflow for each
- Combine statistics
- Report total to user
¶4 Output to user:
✅ Fetched Limitless conversations for {date_str}
📊 Statistics:
- Total conversations: {len(chats)}
- Total messages: {total_messages}
- Themes: {list(themes.keys())}
📁 Files saved to:
algorithms/A8_G&A_div/Daniel Personal/Daniel_communications/calls/{date_str}/
- 00_summary.md (Knowledge Framework summary)
- 00_full_transcript.json (Raw API data)
- 00_all_conversations_full.md (All conversations)
- conversations/ ({len(chats)} individual files)
🔗 Most common theme: {top_theme}
8.0 Statistics
¶1 Current script capabilities (as of v1.0):
- Fetch conversations for single date ✅
- PST timezone support ✅
- Knowledge Framework summaries ✅
- Theme auto-detection ✅
- Individual conversation extraction ✅
- Full transcript preservation ✅
¶2 Tested dates:
- 2025-10-30 (47 conversations processed successfully)
¶3 Performance:
- API latency: ~500ms per request
- Processing time: ~2-3 seconds for 50 conversations
- File creation: ~1 second for full extraction
Author Checklist
- [x] Thesis states specific outcome
- [x] Overview introduces all MECE sections
- [x] Mermaid diagrams for Continuants (TD) and Occurrents (LR)
- [x] All sections numbered (1.0, 2.0, 3.0)
- [x] Ordering principle stated for each major section
- [x] Paragraphs numbered (¶1, ¶2, ¶3)
- [x] Code examples for all key operations
- [x] Error handling documented
- [x] File paths use repository standard numeric prefixes
- [x] Cross-references to existing code files