Agent Skills: openai

OpenAI API for GPT models. Use when the user mentions "OpenAI", "GPT", or "ChatGPT".

ID: vm0-ai/vm0-skills/openai

Install this agent skill locally:

pnpm dlx add-skill https://github.com/vm0-ai/vm0-skills/tree/HEAD/openai


Troubleshooting

If requests fail, run zero doctor check-connector --env-name OPENAI_TOKEN or zero doctor check-connector --url https://api.openai.com/v1/chat/completions --method POST
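Before reaching for the doctor commands, a quick local sanity check can rule out a missing or malformed token. The sketch below is illustrative (the "sk-" prefix is a loose heuristic for OpenAI key formats, not an authoritative validation):

```shell
# check_openai_token: prints "ok" when the value loosely resembles an
# OpenAI API key (non-empty, "sk-" prefix); otherwise prints what is wrong.
check_openai_token() {
  val="$1"
  if [ -z "$val" ]; then
    echo "missing"
  elif [ "${val#sk-}" = "$val" ]; then
    echo "unexpected-prefix"
  else
    echo "ok"
  fi
}

check_openai_token "${OPENAI_TOKEN:-}"
```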

How to Use

All examples below assume you have OPENAI_TOKEN set.

Base URL: https://api.openai.com/v1

1. Basic Chat Completion

Send a simple chat message:

Write to /tmp/openai_request.json:

{
  "model": "gpt-4o-mini",
  "messages": [{"role": "user", "content": "Hello, who are you?"}]
}

Then run:

curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json | jq '.choices[0].message.content'

Available models:

  • gpt-4o: Latest flagship model (128K context)
  • gpt-4o-mini: Fast and affordable (128K context)
  • gpt-4-turbo: Previous generation (128K context)
  • gpt-3.5-turbo: Legacy model (16K context)
  • o1: Reasoning model for complex tasks
  • o1-mini: Smaller reasoning model

2. Chat with System Prompt

Use a system message to set behavior:

Write to /tmp/openai_request.json:

{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant that responds in JSON format."},
    {"role": "user", "content": "List 3 programming languages with their main use cases."}
  ]
}

Then run:

curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json | jq '.choices[0].message.content'

3. Streaming Response

Get real-time token-by-token output:

Write to /tmp/openai_request.json:

{
  "model": "gpt-4o-mini",
  "messages": [{"role": "user", "content": "Write a haiku about programming."}],
  "stream": true
}

Then run:

curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json

Streaming returns Server-Sent Events (SSE) with delta chunks.
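The raw SSE stream can be reassembled into plain text with sed and jq. A minimal sketch, with a two-chunk sample inlined where you would normally pipe in the curl output from above:

```shell
# Sample SSE payload; in practice, pipe the streaming curl output into
# the filter below instead.
sse='data: {"choices":[{"delta":{"content":"Code"}}]}
data: {"choices":[{"delta":{"content":" flows"}}]}
data: [DONE]'

# Strip the "data: " prefix, drop the [DONE] sentinel, then join the
# delta.content fragments (-j suppresses newlines between chunks).
text=$(printf '%s\n' "$sse" \
  | sed -n 's/^data: //p' \
  | grep -v '^\[DONE\]$' \
  | jq -j '.choices[0].delta.content // empty')
echo "$text"
```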

4. JSON Mode

Force the model to return valid JSON:

Write to /tmp/openai_request.json:

{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "system", "content": "Return JSON only."},
    {"role": "user", "content": "Give me info about Paris: name, country, population."}
  ],
  "response_format": {"type": "json_object"}
}

Then run:

curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json | jq '.choices[0].message.content'

5. Vision (Image Analysis)

Analyze an image with a gpt-4o-family model (the example uses gpt-4o-mini):

Write to /tmp/openai_request.json:

{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/3/3a/Cat03.jpg/1200px-Cat03.jpg"}}
      ]
    }
  ],
  "max_tokens": 300
}

Then run:

curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json | jq '.choices[0].message.content'
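The image_url field also accepts a data URL, so a local file can be sent inline instead of a public link. A sketch (photo.jpg is a stand-in name; the fallback line only exists to keep the demo runnable without a real image):

```shell
img_file="photo.jpg"
[ -f "$img_file" ] || printf 'placeholder' > "$img_file"  # demo fallback

# Base64-encode the file and strip line wraps (GNU base64 wraps at 76 cols).
img_b64=$(base64 < "$img_file" | tr -d '\n')

# Build the request with jq so the long data URL is escaped safely.
jq -n --arg url "data:image/jpeg;base64,$img_b64" '{
  model: "gpt-4o-mini",
  messages: [{role: "user", content: [
    {type: "text", text: "What is in this image?"},
    {type: "image_url", image_url: {url: $url}}
  ]}],
  max_tokens: 300
}' > /tmp/openai_request.json
```

Then POST /tmp/openai_request.json with the same curl command as above.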

6. Function Calling (Tools)

Define functions the model can call:

Write to /tmp/openai_request.json:

{
  "model": "gpt-4o-mini",
  "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string", "description": "City name"}
          },
          "required": ["location"]
        }
      }
    }
  ]
}

Then run:

curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json | jq '.choices[0].message.tool_calls'
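When the model decides to call a tool, you execute the function yourself and send the result back as a "tool" message referencing the call id. A sketch of that round trip, with a sample tool_calls response inlined ("Sunny, 22C" stands in for your real function's output):

```shell
# Sample of what .choices[0].message looks like when a tool call is requested.
resp='{"choices":[{"message":{"tool_calls":[{"id":"call_123","type":"function",
  "function":{"name":"get_weather","arguments":"{\"location\":\"Tokyo\"}"}}]}}]}'

# Pull out the call id and the parsed arguments.
call_id=$(printf '%s' "$resp" | jq -r '.choices[0].message.tool_calls[0].id')
location=$(printf '%s' "$resp" \
  | jq -r '.choices[0].message.tool_calls[0].function.arguments | fromjson | .location')

# Build the follow-up request: echo the assistant tool call back, then
# append a "tool" message whose tool_call_id matches.
jq -n --arg id "$call_id" --arg result "Sunny, 22C" '{
  model: "gpt-4o-mini",
  messages: [
    {role: "user", content: "What is the weather in Tokyo?"},
    {role: "assistant", tool_calls: [{id: $id, type: "function",
      function: {name: "get_weather", arguments: "{\"location\":\"Tokyo\"}"}}]},
    {role: "tool", tool_call_id: $id, content: $result}
  ]
}' > /tmp/openai_request.json
```

POST this second request with the same curl command to get the model's final natural-language answer.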

7. Generate Embeddings

Create vector embeddings for text:

Write to /tmp/openai_request.json:

{
  "model": "text-embedding-3-small",
  "input": "The quick brown fox jumps over the lazy dog."
}

Then run:

curl -s "https://api.openai.com/v1/embeddings" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json | jq '.data[0].embedding[:5]'

This extracts the first 5 dimensions of the embedding vector.

Embedding models:

  • text-embedding-3-small: 1536 dimensions, fastest
  • text-embedding-3-large: 3072 dimensions, most capable
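Embeddings are typically compared with cosine similarity. The math can be done entirely in jq; toy 3-dimensional vectors stand in here for real 1536-dimension embeddings:

```shell
# Two toy embedding vectors with convenient norms (|a| = |b| = 5).
a='[3, 4, 0]'
b='[0, 4, 3]'

# cosine(a, b) = dot(a, b) / (|a| * |b|)
sim=$(jq -n --argjson a "$a" --argjson b "$b" '
  def dot(x; y): [x, y] | transpose | map(.[0] * .[1]) | add;
  dot($a; $b) / ((dot($a; $a) | sqrt) * (dot($b; $b) | sqrt))')
echo "$sim"  # 0.64 for these toy vectors
```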

8. Generate Image (DALL-E 3)

Create an image from text:

Write to /tmp/openai_request.json:

{
  "model": "dall-e-3",
  "prompt": "A white cat sitting on a windowsill, digital art",
  "n": 1,
  "size": "1024x1024"
}

Then run:

curl -s "https://api.openai.com/v1/images/generations" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json | jq '.data[0].url'

Parameters:

  • size: 1024x1024, 1792x1024, or 1024x1792
  • quality: standard or hd
  • style: vivid or natural

9. Audio Transcription (Whisper)

Transcribe audio to text:

curl -s "https://api.openai.com/v1/audio/transcriptions" -H "Authorization: Bearer $OPENAI_TOKEN" -F "file=@audio.mp3" -F "model=whisper-1" | jq '.text'

Supports: mp3, mp4, mpeg, mpga, m4a, wav, webm (max 25MB).
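The API rejects uploads over the limit, so it is worth checking the file size first. A small sketch (audio.mp3 is an example name; the fallback line only keeps the demo runnable):

```shell
audio="audio.mp3"
[ -f "$audio" ] || printf 'demo' > "$audio"  # demo fallback

size=$(wc -c < "$audio")
limit=$((25 * 1024 * 1024))  # 25 MB upload limit
if [ "$size" -gt "$limit" ]; then
  echo "too large: ${size} bytes (limit ${limit}); split or compress first"
else
  echo "ok: ${size} bytes"
fi
```

Files over the limit can be split (e.g. with ffmpeg) and transcribed in segments.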

10. Text-to-Speech

Generate audio from text:

Write to /tmp/openai_request.json:

{
  "model": "tts-1",
  "input": "Hello! This is a test of OpenAI text to speech.",
  "voice": "alloy"
}

Then run:

curl -s "https://api.openai.com/v1/audio/speech" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json --output speech.mp3

Voices: alloy, echo, fable, onyx, nova, shimmer

Models: tts-1 (fast), tts-1-hd (high quality)

11. List Available Models

Get all available models:

curl -s "https://api.openai.com/v1/models" -H "Authorization: Bearer $OPENAI_TOKEN" | jq -r '.data[].id' | sort | head -20

12. Check Token Usage

Extract usage from response:

Write to /tmp/openai_request.json:

{
  "model": "gpt-4o-mini",
  "messages": [{"role": "user", "content": "Hi!"}]
}

Then run:

curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_TOKEN" -d @/tmp/openai_request.json | jq '.usage'

This returns token counts for both input and output.

Response includes:

  • prompt_tokens: Input token count
  • completion_tokens: Output token count
  • total_tokens: Sum of both
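Those counts translate directly into cost. A sketch of a per-request cost estimate; the per-million-token rates below are illustrative placeholders, not current prices (check the pricing page for real rates):

```shell
# Sample usage block as returned by the API.
usage='{"prompt_tokens": 8, "completion_tokens": 9, "total_tokens": 17}'
in_tok=$(printf '%s' "$usage" | jq '.prompt_tokens')
out_tok=$(printf '%s' "$usage" | jq '.completion_tokens')

# Assumed example rates in USD per 1M tokens (input / output).
cost=$(awk -v i="$in_tok" -v o="$out_tok" \
  'BEGIN { printf "%.6f", (i * 0.15 + o * 0.60) / 1000000 }')
echo "approx cost: \$$cost"
```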

Guidelines

  1. Choose the right model: Use gpt-4o-mini for most tasks, gpt-4o for complex reasoning, o1 for advanced math/coding
  2. Set max_tokens: Prevent runaway generation and control costs
  3. Use streaming for long responses: Better UX for real-time applications
  4. JSON mode requires the word "JSON" in your messages: the API rejects response_format: json_object unless a message (typically the system prompt) mentions JSON
  5. Vision requires gpt-4o models: Only gpt-4o and gpt-4o-mini support image input
  6. Batch similar requests: Use embeddings API batch input for efficiency
  7. Monitor usage: Check dashboard regularly to avoid unexpected charges