Agent Skills: GitHub Ops Skill

GitHub operations wrapper. Helps manage PRs, Issues, and Reviews efficiently via CLI.

ID: oimiragieo/agent-studio/github-ops

Install this agent skill to your local environment:

pnpm dlx add-skill https://github.com/oimiragieo/agent-studio/tree/HEAD/.claude/skills/github-ops


Skill file: .claude/skills/github-ops/SKILL.md

Skill Metadata

Name
github-ops
Description
Workflow for repository reconnaissance and operations using GitHub CLI (gh). Optimizes token usage by using structured API queries instead of blind file fetching.

GitHub Ops Skill

Provides structured guidance for repository reconnaissance using gh api and gh search.

Overview

Repository reconnaissance often fails when agents guess file paths or attempt to fetch large files blindly. This skill enforces a structured Map -> Identify -> Fetch sequence using the GitHub CLI to minimize token waste and improve reliability.

⚡ Essential Reconnaissance Commands

Use these commands to understand a repository structure before fetching content.

1. List Repository Root

gh api repos/{owner}/{repo}/contents --jq '.[].name'

2. List Specific Directory

gh api repos/{owner}/{repo}/contents/{path} --jq '.[].name'

3. Fetch File Content (Base64 Decoded)

gh api repos/{owner}/{repo}/contents/{path} --jq '.content' | base64 -d

4. Search for Pattern in Repository

gh search code "{pattern}" --repo {owner}/{repo}

5. Get Repository Metadata

gh repo view {owner}/{repo} --json description,stargazerCount,updatedAt

🔄 Token-Efficient Workflow

  1. Map Tree: List the root and core directories (commands, src, docs).
  2. Identify Entrypoints: Look for README.md, gemini-extension.json, package.json, or SKILL.md.
  3. Targeted Fetch: Download only the entrypoints first.
  4. Deep Dive: Use gh search code to find logic patterns rather than reading every file.
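The four steps above can be sketched as a dry-run script. The entrypoint names come from step 2; the run wrapper and the "pattern" search term are illustrative placeholders, and {owner}/{repo} stays a placeholder as in the commands above:

```shell
# Dry-run sketch of the Map -> Identify -> Fetch workflow. It prints each
# gh command instead of executing it; swap echo for "$@" to run for real.
run() { echo "+ $*"; }

# 1. Map Tree: list the repository root
run gh api repos/{owner}/{repo}/contents --jq '.[].name'

# 2. Identify Entrypoints: probe for known files
for f in README.md package.json SKILL.md; do
  run gh api repos/{owner}/{repo}/contents/$f --jq '.name'
done

# 3. Targeted Fetch: download only the entrypoint identified above
run gh api repos/{owner}/{repo}/contents/README.md --jq '.content'

# 4. Deep Dive: scoped code search instead of reading every file
run gh search code "pattern" --repo {owner}/{repo}
```

Printing the commands first also doubles as a plan the agent can log before spending any API calls.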

πŸ›‘οΈ Platform Safety (Windows)

  • When using base64 -d, ensure the output is redirected to a file using the Write tool if it's large.
  • Avoid Linux-style /dev/stdin patterns in complex pipes.
  • Use native paths for any local storage.
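As a local illustration of the redirect-to-file pattern (the sample string and file names are stand-ins for a real contents-API response):

```shell
# Stand-in for the '.content' field of a contents API response: a base64
# payload saved to disk.
printf 'hello skill' | base64 > content.b64

# Decode straight into a file rather than chaining further pipes; this
# behaves the same under Git Bash on Windows.
base64 -d < content.b64 > decoded.txt
cat decoded.txt
```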

Iron Laws

  1. ALWAYS follow the Map → Identify → Fetch sequence before reading any file: blindly fetching files by guessed path wastes tokens, triggers 404s, and produces hallucinated repo structure.
  2. NEVER fetch a file without first listing its parent directory or confirming it exists via gh api: large files fetched unnecessarily can exhaust the context window.
  3. ALWAYS use --jq to filter gh api JSON output to only the fields needed: unfiltered API responses contain hundreds of irrelevant fields that inflate token usage.
  4. NEVER use gh search code without a scoping qualifier (repo, org, or path): unscoped code search returns results from all of GitHub, producing irrelevant noise.
  5. ALWAYS prefer gh api structured queries over reading repository files directly when repository metadata is needed: API queries are faster, structured, and don't require authentication context for public repos.
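Iron Law 3 in miniature: the JSON below is a simulated, heavily trimmed contents-API response (real responses carry many more fields per entry), and the same filter string passed to --jq keeps only the field that matters:

```shell
# Simulated gh api response written to a local file for demonstration.
cat > resp.json <<'EOF'
[{"name": "README.md", "sha": "abc123", "size": 1024, "type": "file"},
 {"name": "src", "sha": "def456", "size": 0, "type": "dir"}]
EOF

# Local equivalent of: gh api repos/{owner}/{repo}/contents --jq '.[].name'
jq -r '.[].name' resp.json
```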

Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
| --- | --- | --- |
| Guessing file paths and fetching them directly | High 404 rate; wasted tokens on non-existent paths | Map the root tree first: gh api repos/{owner}/{repo}/git/trees/HEAD --jq '.tree[].path' |
| Fetching entire files for a single field | Large files exhaust context; slow and imprecise | Use --jq to extract only the required field from the API response |
| Unscoped gh search code queries | Returns GitHub-wide results; noise overwhelms signal | Always add a --repo owner/name or --owner org scope qualifier |
| Reading binary or generated files | Binary content is unreadable; generated files change frequently | Identify the file type first; skip binaries; read source files only |
| Sequential API calls for each file | Unnecessary round-trips inflate latency | Batch: use gh api trees or search to identify multiple targets, then fetch in parallel |
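The "reading binary files" anti-pattern can be guarded against cheaply: grep's -I flag treats binary data as non-matching, which makes a quick text-vs-binary probe. The sample files here are illustrative:

```shell
# Create one text file and one binary file to probe.
printf 'plain text\n' > sample.txt
printf '\x00\x01\x02' > sample.bin

# grep -Iq . exits non-zero for binary input, so it works as a gate.
for f in sample.txt sample.bin; do
  if grep -Iq . "$f"; then echo "$f: text"; else echo "$f: binary"; fi
done
```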

GitHub MCP Server Operations

When the official GitHub MCP server (@modelcontextprotocol/server-github) is configured, use these higher-level tools for repository management and automation:

// settings.json configuration
"github": {
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-github"],
  "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
}

PR Automation Pattern

# Create PR with auto-generated description
gh pr create \
  --title "feat: add feature X" \
  --body "$(gh api repos/{owner}/{repo}/compare/{base}...{head} --jq '.commits[].commit.message' | head -5)" \
  --base main \
  --head feature/x

# Auto-merge after CI passes
gh pr merge --auto --squash --delete-branch

Issue Management

# List open issues by label
gh issue list --label "bug" --state open --json number,title,assignees

# Bulk-close resolved issues
gh issue list --label "stale" --json number --jq '.[].number' | \
  xargs -I{} gh issue close {} --comment "Closing as stale"

# Create issue from template
gh issue create \
  --title "Bug: [description]" \
  --body-file .github/ISSUE_TEMPLATE/bug_report.md \
  --label "bug,needs-triage"

Release Automation

# Create release with auto-generated notes
gh release create v1.2.0 \
  --generate-notes \
  --title "v1.2.0" \
  --target main

# Upload release assets
gh release upload v1.2.0 dist/*.tar.gz dist/*.zip

Workflow Management

# Trigger workflow manually
gh workflow run deploy.yml --field environment=production

# Watch workflow run
gh run watch $(gh run list --workflow=deploy.yml --limit=1 --json databaseId --jq '.[0].databaseId')

# Download workflow artifacts
gh run download --name=build-artifacts --dir=./artifacts

Assigned Agents

  • artifact-integrator: Lead agent for repository onboarding.
  • developer: PR management and exploration.

Memory Protocol (MANDATORY)

Before starting: Read .claude/context/memory/learnings.md

After completing:

  • New pattern -> .claude/context/memory/learnings.md
  • Issue found -> .claude/context/memory/issues.md
  • Decision made -> .claude/context/memory/decisions.md

ASSUME INTERRUPTION: If it's not in memory, it didn't happen.
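A minimal sketch of the after-completion step, using the paths from the protocol above (the learning text itself is a hypothetical example):

```shell
# Append a dated entry to the learnings file; create the directory if absent.
mkdir -p .claude/context/memory
echo "- $(date +%F): prefer git/trees over per-file contents calls" \
  >> .claude/context/memory/learnings.md

# Confirm the entry landed before moving on.
tail -n 1 .claude/context/memory/learnings.md
```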