
Agent Skills with tag: llm-integration

36 skills match this tag. Use tags to discover related Agent Skills and explore similar workflows.

prompt-engineering

Use this skill when writing commands, hooks, or skills for Agent, or prompts for sub-agents or any other LLM interaction, including optimizing prompts, improving LLM outputs, or designing production prompt templates.

prompt-generation, llm-integration, token-optimization, plugin-hooks
prof-ramos
0

senior-ml-engineer

World-class ML engineering skill for productionizing ML models, implementing MLOps, and building scalable ML systems. Expertise in PyTorch, TensorFlow, model deployment, feature stores, model monitoring, and ML infrastructure. Includes LLM integration, fine-tuning, RAG systems, and agentic AI. Use when deploying ML models, building ML platforms, implementing MLOps, or integrating LLMs into production systems.

mlops, model-deployment, feature-store, llm-integration
ovachiever
81

google-gemini-file-search


google-gemini, gemini-cli, semantic-search, file-search
ovachiever
81

llava

Large Language and Vision Assistant. Enables visual instruction tuning and image-based conversations. Combines CLIP vision encoder with Vicuna/LLaMA language models. Supports multi-turn image chat, visual question answering, and instruction following. Use for vision-language chatbots or image understanding tasks. Best for conversational image analysis.

multi-turn-conversations, visual-question-answering, vision-language, conversational-image-analysis
ovachiever
81
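
A hedged sketch of the conversational image-analysis pattern this entry describes, using the Hugging Face transformers LLaVA classes; the model id, image path, and prompt format are assumptions for a typical llava-hf checkpoint, not this skill's own code:

    # Minimal LLaVA visual question answering (sketch).
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint
    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaForConditionalGeneration.from_pretrained(model_id, device_map="auto")

    image = Image.open("chart.png")  # hypothetical local image
    prompt = "USER: <image>\nWhat does this chart show?\nASSISTANT:"

    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=200)
    print(processor.decode(output[0], skip_special_tokens=True))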

langchain

Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.

framework-selection, llm-integration, agent-framework, vector-store
ovachiever
81
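
A hedged sketch of the prompt → model → parser chain style (LCEL) that the langchain entry above describes; the provider and model name are assumptions and can be swapped for Anthropic or Google equivalents:

    # Minimal LangChain chain: prompt template piped into a chat model and parser.
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser

    prompt = ChatPromptTemplate.from_messages([
        ("system", "Answer using only the provided context."),
        ("human", "Context: {context}\n\nQuestion: {question}"),
    ])
    chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

    print(chain.invoke({
        "context": "LangChain ships 500+ integrations.",
        "question": "How many integrations does LangChain ship?",
    }))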

llamaguard

Meta's 7-8B specialized moderation model for LLM input/output filtering. Covers six safety categories: violence/hate, sexual content, weapons, substances, self-harm, and criminal planning. Reports 94-95% accuracy. Deploy with vLLM, Hugging Face, or SageMaker. Integrates with NeMo Guardrails.

moderation, safety, content-filtering, llm-integration
ovachiever
81

llamaindex

Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vector indices, query engines, agents, and multi-modal support. Use for document Q&A, chatbots, knowledge retrieval, or building RAG pipelines. Best for data-centric LLM applications.

llm-integration, retrieval-augmented-generation, vector-store, document-indexing
ovachiever
81
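
A hedged sketch of the document-Q&A flow the llamaindex entry describes: ingest files, build a vector index, query it. The ./data directory is an assumption, and LlamaIndex's defaults expect an OpenAI API key in the environment:

    # Minimal LlamaIndex RAG pipeline (sketch).
    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader("data").load_data()  # simplest of the 300+ connectors
    index = VectorStoreIndex.from_documents(documents)     # chunk, embed, and index

    query_engine = index.as_query_engine()
    print(query_engine.query("What do these documents say about deployment?"))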

repomix

Pack entire codebases into AI-friendly files for LLM analysis. Use when consolidating code for AI review, generating codebase summaries, or preparing context for ChatGPT, Claude, or other AI tools.

repomix, codebase-analysis, llm-integration, code-consolidation
julianobarbosa
0

repomix

Repository packaging for AI/LLM analysis. Capabilities: pack repos into single files, generate AI-friendly context, codebase snapshots, security audit prep, filter/exclude patterns, token counting, multiple output formats. Actions: pack, generate, export, analyze repositories for LLMs. Keywords: Repomix, repository packaging, LLM context, AI analysis, codebase snapshot, Claude context, ChatGPT context, Gemini context, code packaging, token count, file filtering, security audit, third-party library analysis, context window, single file output. Use when: packaging codebases for AI, generating LLM context, creating codebase snapshots, analyzing third-party libraries, preparing security audits, feeding repos to Claude/ChatGPT/Gemini.

repomix, repository-packaging, llm-integration, codebase-snapshot
samhvw8
2

multi-llm-consult

Consult external LLMs (Gemini, OpenAI/Codex, Qwen) for second opinions, alternative plans, independent reviews, or delegated tasks. Use when a user asks for another model's perspective, wants to compare answers, or requests delegating a subtask to Gemini/Codex/Qwen.

multi-llm, llm-integration, multi-model-consensus, delegation
NickCrew
52

repo-clipboard

Snapshot the current directory into pseudo-XML for LLM context. Use when you need to share a repo (or a sub-tree) with Codex/LLMs, especially for code review/debugging, generating an agent-friendly “repo snapshot”, or piping context into tools like `llm` (see skill $llm-cli). Supports `.gitignore`-aware file discovery, common ignore patterns, extension filtering, regex include/exclude, optional file-list printing, line-range snippets, and writes `/tmp/repo_clipboard.{stdout,stderr}` for reuse.

repository-management, gitignore, code-review, llm-integration
santiago-afonso
1

mcp-builder

Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).

mcp, mcp-sdk, fastmcp, llm-integration
Nymbo
1
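
For flavor, a hedged sketch of the Python (FastMCP) side of what the mcp-builder entries guide: a tiny MCP server exposing one tool over stdio. The server name and tool are made-up examples:

    # Minimal FastMCP server from the MCP Python SDK (sketch).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-server")

    @mcp.tool()
    def word_count(text: str) -> int:
        """Count the words in a piece of text."""
        return len(text.split())

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default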

mcp-builder

Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).

mcp, llm-integration, python, nodejs
Evilander
23

langchain-architecture

Design LLM applications using the LangChain framework with agents, memory, and tool integration patterns. Use when building LangChain applications, implementing AI agents, or creating complex LLM workflows.

langchain, llm-integration, ai-agents, workflow
camoneart
4

distributed-claude-sender

Send prompts to a remote Claude instance on a VPS for distributed AI collaboration, different model backends, or independent context.

anthropic-sdk, llm-integration, distributed-computing, multi-backend
ebowwa
32

claude-skillkit


claude-agent-sdk, skill-authoring, anthropic, llm-integration
rfxlamia
51

llamafile

When setting up local LLM inference without cloud APIs. When running GGUF models locally. When needing OpenAI-compatible API from a local model. When building offline/air-gapped AI tools. When troubleshooting local LLM server connections.

llm-integration, local-development, offline-access, troubleshooting
Jamie-BitFlight
111
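
Since llamafile serves an OpenAI-compatible API, the standard openai client can point at it; a hedged sketch, assuming the default local endpoint on port 8080 (a single-model llamafile server largely ignores the model name):

    # Talk to a local llamafile server through the openai client (sketch).
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
    resp = client.chat.completions.create(
        model="local",  # placeholder; a single-model server ignores this
        messages=[{"role": "user", "content": "Say hello from a local model."}],
    )
    print(resp.choices[0].message.content)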

litellm

When calling LLM APIs from Python code. When connecting to llamafile or local LLM servers. When switching between OpenAI/Anthropic/local providers. When implementing retry/fallback logic for LLM calls. When code imports litellm or uses completion() patterns.

python, llm-integration, openai, anthropic
Jamie-BitFlight
111
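
A hedged sketch of the completion() pattern the litellm entry describes, with built-in retries; the model names are assumptions, and the same call shape works across OpenAI, Anthropic, and local providers:

    # One unified call signature across providers (sketch).
    from litellm import completion

    resp = completion(
        model="gpt-4o-mini",  # or "anthropic/claude-...", or a local llamafile endpoint
        messages=[{"role": "user", "content": "One-line summary of RAG?"}],
        num_retries=2,        # LiteLLM's built-in retry support
    )
    print(resp.choices[0].message.content)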

Page 1 of 2 · 36 results