Agent Skills: OpenTelemetry LLM Skill

OpenTelemetry instrumentation for LLM applications with distributed tracing

Category: Uncategorized
ID: a5c-ai/babysitter/opentelemetry-llm

Install this agent skill locally:

pnpm dlx add-skill https://github.com/a5c-ai/babysitter/tree/HEAD/plugins/babysitter/skills/babysit/process/specializations/ai-agents-conversational/skills/opentelemetry-llm

Skill Files

Browse the full folder contents for opentelemetry-llm.

plugins/babysitter/skills/babysit/process/specializations/ai-agents-conversational/skills/opentelemetry-llm/SKILL.md

Skill Metadata

Name
opentelemetry-llm
Description
OpenTelemetry instrumentation for LLM applications with distributed tracing

OpenTelemetry LLM Skill

Capabilities

  • Configure OpenTelemetry SDK for LLM apps
  • Implement LLM-specific instrumentation
  • Set up trace exporters (Jaeger, OTLP)
  • Design semantic conventions for LLM spans
  • Configure span attributes for AI workloads
  • Implement context propagation

Target Processes

  • llm-observability-monitoring
  • agent-deployment-pipeline

Implementation Details

Core Components

  1. TracerProvider: SDK configuration
  2. SpanProcessor: Batch/simple processors
  3. Exporters: Jaeger, OTLP, Console
  4. Instrumentation: Auto and manual

LLM Semantic Conventions

  • gen_ai.system (OpenAI, Anthropic)
  • gen_ai.request.model
  • gen_ai.request.max_tokens
  • gen_ai.response.finish_reason
  • gen_ai.usage.prompt_tokens

Configuration Options

  • Exporter selection
  • Sampling strategies
  • Resource attributes
  • Span limits
  • Context propagation

Best Practices

  • Consistent attribute naming
  • Appropriate sampling
  • Record errors and exceptions on spans
  • Propagate context across services

Dependencies

  • opentelemetry-sdk
  • opentelemetry-exporter-*
  • openinference (optional)