article-extractor
Extract clean article content from URLs and save as markdown. Triggers when the user provides a webpage URL and wants to download it, extract content, get a clean version without ads, capture an article for offline reading, save an article, grab content from a page, archive a webpage, clip an article, or read something later. Handles blog posts, news articles, tutorials, documentation pages, and similar web content. Supports Wayback Machine for dead links or paywalled content. This skill handles the entire workflow; do NOT use web_fetch or other tools first, just call the extraction script directly with the URL.
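The Wayback Machine fallback mentioned above can be illustrated with the public availability API; this is a minimal sketch of that lookup, not the skill's actual extraction script (which is not named here), and the fallback logic inside the skill may differ.

```python
# Hedged sketch: find the closest Wayback Machine snapshot for a dead or
# paywalled URL using the public availability API.
import json
import urllib.parse
import urllib.request


def wayback_snapshot(url: str):
    """Return the closest archived snapshot URL, or None if none exists."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=30) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest", {})
    return closest.get("url") if closest.get("available") else None


if __name__ == "__main__":
    print(wayback_snapshot("https://example.com/dead-link"))
```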
video-downloader
Downloads videos from YouTube and other platforms for offline viewing, editing, or archival. Handles various formats and quality options.
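The description does not name the underlying downloader, so the following is only a sketch assuming a yt-dlp backend, which is a common choice for this kind of skill; format and quality selection would look roughly like this.

```python
# Hedged sketch: download a video with yt-dlp (assumed backend, not confirmed
# by the skill description). Requires the "yt-dlp" package.
from yt_dlp import YoutubeDL

opts = {
    "format": "bestvideo[height<=1080]+bestaudio/best",  # cap quality at 1080p
    "outtmpl": "%(title)s.%(ext)s",                       # name the file after the video title
}

with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])  # placeholder URL
```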
markdown-to-standalone-html
Convert Markdown documents (*.md files) to self-contained HTML files with embedded images. Use when you need a portable, offline-friendly single HTML file from Markdown—ideal for blog posts, essays, reports, or any content that should work without external dependencies.
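A minimal sketch of the core idea follows: render the Markdown to HTML, then inline local images as base64 data URIs so the result is a single self-contained file. It assumes the third-party "markdown" package; the skill's actual pipeline may differ.

```python
# Hedged sketch: Markdown -> standalone HTML with embedded images.
import base64
import mimetypes
import re
from pathlib import Path

import markdown


def to_standalone_html(md_path: str) -> str:
    src_dir = Path(md_path).parent
    body = markdown.markdown(Path(md_path).read_text(encoding="utf-8"))

    def embed(match: re.Match) -> str:
        # Replace a local image path with a base64 data URI.
        img = src_dir / match.group(1)
        mime = mimetypes.guess_type(img.name)[0] or "application/octet-stream"
        data = base64.b64encode(img.read_bytes()).decode("ascii")
        return f'src="data:{mime};base64,{data}"'

    # Only rewrite relative (local) paths; URLs with a scheme contain ":".
    body = re.sub(r'src="([^":]+)"', embed, body)
    return f"<!DOCTYPE html><html><body>{body}</body></html>"


if __name__ == "__main__":
    Path("post.html").write_text(to_standalone_html("post.md"), encoding="utf-8")
```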
llamafile
Use when setting up local LLM inference without cloud APIs, running GGUF models locally, needing an OpenAI-compatible API from a local model, building offline or air-gapped AI tools, or troubleshooting local LLM server connections.
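Because llamafile exposes an OpenAI-compatible endpoint, a client can talk to it with the standard openai package; this sketch assumes the default local port 8080 and that a llamafile server is already running.

```python
# Hedged sketch: query a locally running llamafile via its OpenAI-compatible API.
# Requires the "openai" package; adjust base_url if the server uses another port.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="sk-no-key-required",  # local servers accept a placeholder key
)

response = client.chat.completions.create(
    model="local-model",  # llamafile serves whatever model it was started with
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```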
save-web-page
Guide for saving a web page for offline use with the monolith CLI. Use this when instructed to save a web page.
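A minimal sketch of driving monolith from Python, assuming the CLI is installed and on PATH; the guide itself may prescribe different flags.

```python
# Hedged sketch: save a page as a single self-contained HTML file via monolith.
import subprocess

url = "https://example.com/some-page"
subprocess.run(["monolith", url, "-o", "page.html"], check=True)
```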