Agent Skills: Hook SDK Integration

LLM invocation patterns from hooks via SDK. Use when you need background agents, CLI calls, or cost optimization.

ID: chkim-su/forge-editor/hook-sdk-integration

Install this agent skill locally:

pnpm dlx add-skill https://github.com/chkim-su/forge-editor/tree/HEAD/skills/hook-sdk-integration

Skill Files


skills/hook-sdk-integration/SKILL.md

Skill Metadata

Name
hook-sdk-integration
Description
LLM invocation patterns from hooks via SDK. Use when you need background agents, CLI calls, or cost optimization.

Hook SDK Integration

Patterns for making LLM calls from hooks using u-llm-sdk/claude-only-sdk.

IMPORTANT: SDK Detailed Guide

Load when implementing SDK:

Skill("forge-editor:llm-sdk-guide")

This skill covers SDK call pattern interfaces. llm-sdk-guide covers SDK detailed APIs and types.

Quick Start

```bash
# Background agent pattern (non-blocking)
(python3 sdk-agent.py "$INPUT" &)
echo '{"status": "started"}'
exit 0
```

Key Findings (Verified: 2025-12-30)

| Item | Result |
|------|--------|
| SDK calls | Possible from hooks |
| Latency | ~30s (CLI session initialization) |
| Background | Non-blocking execution possible (0.01s return) |
| Cost | Included in subscription (no additional API cost) |

Architecture

Hook (bash) → Background (&) → SDK (Python) → CLI → Subscription usage
     │                                                    │
     └─── Immediate return (0.01s) ───────────────────────┘
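The hook side of this architecture can be sketched in plain Python, with no SDK dependency: spawn the agent as a detached child, emit the status JSON, and return immediately. This is a minimal sketch; the 5-second sleeper stands in for the real sdk-agent.py, which is not shown here.

```python
import json
import subprocess
import sys
import time

def launch_background(cmd, payload):
    """Spawn the agent detached so the hook can return immediately.

    start_new_session detaches the child from the hook's process group,
    so the ~30s SDK/CLI startup happens entirely in the background.
    """
    subprocess.Popen(
        cmd + [payload],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,
    )

# Demo: a sleeping stand-in for ["python3", "sdk-agent.py"].
start = time.monotonic()
launch_background([sys.executable, "-c", "import time; time.sleep(5)"], "demo-input")
elapsed = time.monotonic() - start

print(json.dumps({"status": "started"}))  # what the hook reports to the caller
print(f"hook returned in {elapsed:.3f}s")
```

The hook's measured return time stays in the millisecond range even though the child runs for seconds, which is the non-blocking property the Key Findings table reports.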

Pattern Selection

| Situation | Pattern | Reason |
|-----------|---------|--------|
| Need fast evaluation | type: "prompt" | In-session execution, fast |
| Need isolation | Direct CLI call | Separate MCP config possible |
| Complex logic | SDK + Background | Type-safe, non-blocking |
| Cost reduction | Local LLM (ollama) | Free, privacy |
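For the isolation row, a hook can shell out to the CLI directly with its own MCP config instead of inheriting the session's tool setup. A minimal sketch of building such a command (the flag names follow the Claude Code CLI; verify them against your installed version):

```python
import shlex

def build_cli_command(prompt, mcp_config=None):
    """Build a direct `claude` CLI invocation (isolation pattern).

    Passing a dedicated MCP config file keeps this call's tool setup
    separate from the parent session. Run the result with subprocess.
    """
    cmd = ["claude", "-p", prompt, "--output-format", "json"]
    if mcp_config:
        cmd += ["--mcp-config", mcp_config]  # separate MCP config for isolation
    return cmd

print(shlex.join(build_cli_command("Evaluate this edit", "hook-mcp.json")))
```

Building the argument list instead of a shell string avoids quoting bugs when the prompt contains user-controlled text.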

SDK Configuration (Python)

```python
import asyncio

from u_llm_sdk import LLM, LLMConfig
from llm_types import Provider, ModelTier, AutoApproval

config = LLMConfig(
    provider=Provider.CLAUDE,
    tier=ModelTier.LOW,
    auto_approval=AutoApproval.FULL,
    timeout=60.0,
)

async def main():
    # LLM is an async context manager; it must run inside an event loop.
    async with LLM(config) as llm:
        result = await llm.run("Your prompt")
        print(result)

asyncio.run(main())
```

Cost Structure

| Method | Cost |
|--------|------|
| type: "prompt" | Included in subscription |
| Claude CLI | Included in subscription |
| SDK via CLI | Included in subscription |
| Direct API | Per-token billing |

References