
Agent Skills with tag: llm-integration

9 skills match this tag. Use tags to discover related Agent Skills and explore similar workflows.

prompt-engineering

Use this skill when writing commands, hooks, or skills for Agent, or prompts for sub-agents or any other LLM interaction, including optimizing prompts, improving LLM outputs, or designing production prompt templates.

prompt-generation, llm-integration, token-optimization, plugin-hooks
prof-ramos
0

multi-llm-consult

Consult external LLMs (Gemini, OpenAI/Codex, Qwen) for second opinions, alternative plans, independent reviews, or delegated tasks. Use when a user asks for another model's perspective, wants to compare answers, or requests delegating a subtask to Gemini/Codex/Qwen.

multi-llm, llm-integration, multi-model-consensus, delegation
NickCrew
52

repo-clipboard

Snapshot the current directory into pseudo-XML for LLM context. Use when you need to share a repo (or a sub-tree) with Codex/LLMs, especially for code review/debugging, generating an agent-friendly “repo snapshot”, or piping context into tools like `llm` (see skill $llm-cli). Supports `.gitignore`-aware file discovery, common ignore patterns, extension filtering, regex include/exclude, optional file-list printing, line-range snippets, and writes `/tmp/repo_clipboard.{stdout,stderr}` for reuse.

repository-management, gitignore, code-review, llm-integration
santiago-afonso
1
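The pseudo-XML snapshot idea behind repo-clipboard can be sketched in a few lines of Python. This is a hypothetical minimal version for illustration only, not the skill's actual implementation; the `<file path="…">` tag layout and the `snapshot` function name are assumptions:

```python
from pathlib import Path

def snapshot(root: str, extensions: tuple = (".py", ".md")) -> str:
    """Wrap each matching file under `root` in <file> tags for LLM context."""
    root_path = Path(root)
    parts = []
    for path in sorted(root_path.rglob("*")):
        if path.is_file() and path.suffix in extensions:
            rel = path.relative_to(root_path)
            # One pseudo-XML element per file, path in the attribute,
            # raw contents in the body.
            parts.append(f'<file path="{rel}">\n{path.read_text()}\n</file>')
    return "\n".join(parts)
```

The real skill adds `.gitignore`-aware discovery, regex include/exclude, and line-range snippets on top of this basic walk-and-wrap loop.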

mcp-builder

Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).

mcp, mcp-sdk, fastmcp, llm-integration
Nymbo
1

mcp-builder

Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).

mcp, llm-integration, python, nodejs
Evilander
23

llamafile

Use when setting up local LLM inference without cloud APIs, running GGUF models locally, needing an OpenAI-compatible API from a local model, building offline/air-gapped AI tools, or troubleshooting local LLM server connections.

llm-integration, local-development, offline-access, troubleshooting
Jamie-BitFlight
181

litellm

Use when calling LLM APIs from Python code, connecting to llamafile or local LLM servers, switching between OpenAI/Anthropic/local providers, implementing retry/fallback logic for LLM calls, or when code imports litellm or uses completion() patterns.

python, llm-integration, openai, anthropic
Jamie-BitFlight
181
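The retry/fallback pattern this skill mentions can be sketched generically without any provider SDK. This is a hypothetical illustration of the pattern, not litellm's API (litellm offers similar behavior natively, e.g. via its Router); the `call_with_fallback` name and its signature are assumptions:

```python
import time

def call_with_fallback(providers, prompt, retries=2, delay=0.0):
    """Try each provider callable in order, retrying transient failures,
    and fall back to the next provider when one is exhausted."""
    last_error = None
    for call in providers:
        for _attempt in range(retries):
            try:
                return call(prompt)
            except Exception as exc:  # real code would catch narrower errors
                last_error = exc
                if delay:
                    time.sleep(delay)
    raise RuntimeError("all providers failed") from last_error
```

Each `call` would wrap one provider (OpenAI, Anthropic, or a local llamafile endpoint) behind the same `prompt -> text` interface, which is the same abstraction litellm's `completion()` provides across providers.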

llm-cli

Process textual and multimedia files with various LLM providers using the llm CLI. Supports both non-interactive and interactive modes with model selection, config persistence, and file input handling.

cli, llm-integration, large-language-models, file-conversion
glebis
0

openai

OpenAI API via curl. Use this skill for GPT chat completions, DALL-E image generation, Whisper audio transcription, embeddings, and text-to-speech.

openai, curl, llm-integration, image-generation
vm0-ai
0