# GitHub Ops Skill

Provides structured guidance for repository reconnaissance using `gh api` and `gh search`.
## Overview

Repository reconnaissance often fails when agents guess file paths or blindly fetch large files. This skill enforces a structured Map → Identify → Fetch sequence using the GitHub CLI to minimize token waste and improve reliability.
## Essential Reconnaissance Commands

Use these commands to understand a repository's structure before fetching content.

1. List Repository Root

```bash
gh api repos/{owner}/{repo}/contents --jq '.[].name'
```

2. List a Specific Directory

```bash
gh api repos/{owner}/{repo}/contents/{path} --jq '.[].name'
```

3. Fetch File Content (Base64-Decoded)

```bash
gh api repos/{owner}/{repo}/contents/{path} --jq '.content' | base64 -d
```

4. Search for a Pattern in the Repository

```bash
gh search code "{pattern}" --repo {owner}/{repo}
```

5. Get Repository Metadata

```bash
gh repo view {owner}/{repo} --json description,stargazerCount,updatedAt
```
## Token-Efficient Workflow

- Map Tree: List the root and core directories (`commands`, `src`, `docs`).
- Identify Entrypoints: Look for `README.md`, `gemini-extension.json`, `package.json`, or `SKILL.md`.
- Targeted Fetch: Download only the entrypoints first.
- Deep Dive: Use `gh search code` to find logic patterns rather than reading every file.
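The four steps above can be sketched as a single shell helper. This is a sketch, not part of the gh CLI: the function name `recon` and the entrypoint list are illustrative, and an authenticated `gh` is assumed.

```shell
# Sketch of the Map -> Identify -> Fetch sequence for one repository.
# `recon` is a hypothetical helper name; pass "owner/repo" as its argument.
recon() {
  local repo="$1"

  # Map: list the repository root with a single API call.
  local entries
  entries=$(gh api "repos/${repo}/contents" --jq '.[].name')

  # Identify: keep only known entrypoint files.
  local targets
  targets=$(printf '%s\n' "$entries" \
    | grep -E '^(README\.md|package\.json|gemini-extension\.json|SKILL\.md)$')

  # Fetch: download only those entrypoints, decoded to local files.
  local f
  for f in $targets; do
    gh api "repos/${repo}/contents/${f}" --jq '.content' | base64 -d > "$f"
  done

  # Report which entrypoints were fetched.
  printf '%s\n' "$targets"
}
```

Anything outside the entrypoint list is deliberately skipped until the Deep Dive step.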
## Platform Safety (Windows)

- When using `base64 -d`, redirect large output to a file using the Write tool.
- Avoid Linux-style `/dev/stdin` patterns in complex pipes.
- Use native paths for any local storage.
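A minimal decode-to-file example, with the base64 string standing in for an API response (sample data, not a real repository's content):

```shell
# Decode content straight to a file instead of dumping it to the console.
# 'SGVsbG8sIHdvcmxkIQ==' is sample data standing in for a /contents response.
printf 'SGVsbG8sIHdvcmxkIQ==' | base64 -d > decoded.txt
cat decoded.txt
```

Writing to a file keeps large decoded output out of the terminal buffer, which is the failure mode the rule above guards against.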
## Iron Laws

- ALWAYS follow the Map → Identify → Fetch sequence before reading any file; blindly fetching files by guessed path wastes tokens, triggers 404s, and produces a hallucinated repo structure.
- NEVER fetch a file without first listing its parent directory or confirming it exists via `gh api`; large files fetched unnecessarily can exhaust the context window.
- ALWAYS use `--jq` to filter `gh api` JSON output to only the fields needed; unfiltered API responses contain hundreds of irrelevant fields that inflate token usage.
- NEVER use `gh search code` without a scoping qualifier (repo, org, or path); unscoped code search returns results from all of GitHub, producing irrelevant noise.
- ALWAYS prefer `gh api` structured queries over reading repository files directly when repository metadata is needed; API queries are faster, structured, and don't require authentication context for public repos.
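The `--jq` law is easy to see with a local sample. Here the standalone `jq` binary stands in for gh's built-in `--jq` flag, and the JSON is a trimmed, illustrative stand-in for a real `/contents` response:

```shell
# A trimmed stand-in for a /contents API response (sample data).
cat > response.json <<'EOF'
[
  {"name": "README.md", "sha": "3b18e5", "size": 1204,
   "url": "https://api.github.com/sample", "type": "file"},
  {"name": "src", "sha": "9a0364", "size": 0,
   "url": "https://api.github.com/sample", "type": "dir"}
]
EOF

# Unfiltered, every field per entry reaches the context window.
# Filtered, only the names do: the same effect as `gh api ... --jq '.[].name'`.
jq -r '.[].name' response.json
```

Real responses carry far more fields per entry (links, permissions, encodings), so the savings compound quickly.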
## Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
| --- | --- | --- |
| Guessing file paths and fetching them directly | High 404 rate; wasted tokens on non-existent paths | Map the root tree first: `gh api repos/{owner}/{repo}/git/trees/HEAD --jq '.tree[].path'` |
| Fetching entire files for a single field | Large files exhaust the context window; slow and imprecise | Use `--jq` to extract only the required field from the API response |
| Unscoped `gh search code` queries | Returns GitHub-wide results; noise overwhelms signal | Always add a `--repo owner/name` or `--owner org` scope qualifier |
| Reading binary or generated files | Binary content is unreadable; generated files change frequently | Identify the file type first; skip binaries; read source files only |
| Sequential API calls for each file | Unnecessary round-trips inflate latency | Batch: use `gh api` trees or search to identify multiple targets, then fetch in parallel |
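The batching row above can be sketched as a helper that identifies every target with one tree call instead of probing paths one by one. The function name `list_matching` is hypothetical, and an authenticated `gh` is assumed:

```shell
# Identify all files matching a pattern via a single recursive tree call.
# `list_matching` is an illustrative name, not a gh subcommand.
list_matching() {
  local repo="$1" pattern="$2"
  gh api "repos/${repo}/git/trees/HEAD?recursive=1" --jq '.tree[].path' \
    | grep -E "$pattern"
}

# The resulting paths can then be fetched in parallel, e.g.:
#   list_matching acme/demo '\.md$' \
#     | xargs -P 4 -I{} gh api "repos/acme/demo/contents/{}"
```

One tree call replaces N existence checks, and the parallel fetch only touches paths known to exist.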
## GitHub MCP Server Operations

When the official GitHub MCP server (`@modelcontextprotocol/server-github`) is configured, use its higher-level tools for repository management and automation:

```jsonc
// settings.json configuration
"github": {
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-github"],
  "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
}
```
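For context, the fragment above normally sits under a server-registry key; in common MCP clients that key is `mcpServers`. The full-file shape below is a sketch, and the exact top-level layout depends on your client:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```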
### PR Automation Pattern

```bash
# Create a PR with an auto-generated description
gh pr create \
  --title "feat: add feature X" \
  --body "$(gh api repos/{owner}/{repo}/compare/{base}...{head} --jq '.commits[].commit.message' | head -5)" \
  --base main \
  --head feature/x

# Auto-merge after CI passes
gh pr merge --auto --squash --delete-branch
```
### Issue Management

```bash
# List open issues by label
gh issue list --label "bug" --state open --json number,title,assignees

# Bulk-close resolved issues
gh issue list --label "stale" --json number --jq '.[].number' | \
  xargs -I{} gh issue close {} --comment "Closing as stale"

# Create an issue from a template
gh issue create \
  --title "Bug: [description]" \
  --body-file .github/ISSUE_TEMPLATE/bug_report.md \
  --label "bug,needs-triage"
```
### Release Automation

```bash
# Create a release with auto-generated notes
gh release create v1.2.0 \
  --generate-notes \
  --title "v1.2.0" \
  --target main

# Upload release assets
gh release upload v1.2.0 dist/*.tar.gz dist/*.zip
```
### Workflow Management

```bash
# Trigger a workflow manually
gh workflow run deploy.yml --field environment=production

# Watch the latest run of the workflow
gh run watch $(gh run list --workflow=deploy.yml --limit=1 --json databaseId --jq '.[0].databaseId')

# Download workflow artifacts
gh run download --name=build-artifacts --dir=./artifacts
```
## Assigned Agents

- `artifact-integrator`: Lead agent for repository onboarding.
- `developer`: PR management and exploration.
## Memory Protocol (MANDATORY)

Before starting:

- Read `.claude/context/memory/learnings.md`

After completing:

- New pattern -> `.claude/context/memory/learnings.md`
- Issue found -> `.claude/context/memory/issues.md`
- Decision made -> `.claude/context/memory/decisions.md`

ASSUME INTERRUPTION: If it's not in memory, it didn't happen.