chat
Use when starting a new project with llmring, building an application with LLMs, making basic chat completions, or sending messages to OpenAI, Anthropic, Google, or Ollama. Covers lockfile creation (MANDATORY first step), semantic alias usage, and a unified interface for all providers with consistent message structure and response handling.
funsloth-check
Validate datasets for Unsloth fine-tuning. Use when the user wants to check a dataset, analyze tokens, calculate Chinchilla optimality, or prepare data for training.
google-image-search
Search and download images via the Google Custom Search API with LLM-powered selection. This skill should be used when finding images for articles, presentations, or research documents, or when enriching Obsidian notes with relevant visuals. Supports simple queries, batch processing from a JSON config, automatic config generation from search terms, and full note enrichment with automatic image insertion below headings.
ba
Task tracker for LLM sessions. Use "$ba ready" to see available work, "$ba claim <id>" to take ownership, "$ba finish <id>" when done.
perplexity
AI-powered search engine with real-time web grounding and citations
prompt-engineer
Use when designing prompts for LLMs, optimizing model performance, building evaluation frameworks, or implementing advanced prompting techniques like chain-of-thought, few-shot learning, or structured outputs.
fine-tuning-expert
Use when fine-tuning LLMs, training custom models, or optimizing model performance for specific tasks. Invoke for parameter-efficient methods, dataset preparation, or model adaptation.
commit-helper
Intelligent commit message generation following the Conventional Commits format.
llm-router
This skill should be used when users want to route LLM requests to different AI providers (OpenAI, Grok/xAI, Groq, DeepSeek, OpenRouter) using SwiftOpenAI-CLI. Use this skill when users ask to "use grok", "ask grok", "use groq", "ask deepseek", or make any similar request to query a specific LLM provider in agent mode.
octave-mythology
Functional mythological compression for OCTAVE documents. Semantic shorthand for LLM audiences, not prose decoration.
openai-responses
google-gemini-embeddings
claude-api
llm-patterns
AI-first application patterns, LLM testing, prompt management
gemini
Gemini CLI for one-shot Q&A, summaries, and generation.
mcp-builder
Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).