Agent Skills: Clean Code Standard — Quick Reference

Cross-language clean code standard with stable CC-* rule IDs. Use when writing/reviewing code, defining team standards, or citing lint findings.

ID: vasilyu1983/ai-agents-public/software-clean-code-standard

Install this agent skill locally:

pnpm dlx add-skill https://github.com/vasilyu1983/AI-Agents-public/tree/HEAD/frameworks/shared-skills/skills/software-clean-code-standard

Skill Files

Browse the full folder contents for software-clean-code-standard.


frameworks/shared-skills/skills/software-clean-code-standard/SKILL.md

Skill Metadata

  • Name: software-clean-code-standard
  • Description: Cross-language clean code standard with stable CC-* rule IDs. Use when writing/reviewing code, defining team standards, or citing lint findings.

Clean Code Standard — Quick Reference

This skill is the authoritative clean code standard for this repository's shared skills. It defines stable rule IDs (CC-*), how to apply them in reviews, and how to extend them safely via language overlays and explicit exceptions.

Modern Best Practices (January 2026): Prefer small, reviewable changes and durable change context. Use RFC 2119 normative language consistently. Treat security-by-design and secure defaults as baseline (OWASP Top 10, NIST SSDF). Build observable systems (OpenTelemetry). For durable links and current tool choices, consult data/sources.json.
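
The standard's core unit is the stable CC-* rule ID. As a minimal sketch of what one entry might carry if encoded for tooling, the shape below is an assumption for illustration (the field names and ID suffix format are invented); the actual standard lives as prose in references/clean-code-standard.md.

```typescript
// Hypothetical schema for a CC-* rule entry. The real standard is prose in
// references/clean-code-standard.md; this shape is illustrative only.
type Rfc2119Level = "MUST" | "MUST NOT" | "SHOULD" | "SHOULD NOT" | "MAY";

interface CleanCodeRule {
  id: `CC-${string}`;   // stable rule ID; the suffix format is assumed
  category: string;     // feedback category, e.g. "NAM", "ERR", "SEC"
  level: Rfc2119Level;  // RFC 2119 normative strength
  summary: string;      // one-line statement cited in reviews instead of restating it
}

// Example entry (the ID and wording are invented for illustration):
const rule: CleanCodeRule = {
  id: "CC-ERR-01",
  category: "ERR",
  level: "MUST",
  summary: "Handle or propagate every error; never silently swallow failures.",
};
```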


Quick Reference

| Task | Tool/Framework | Command | When to Use |
|------|----------------|---------|-------------|
| Cite a standard | CC-* rule ID | N/A | PR review comments, design discussions, postmortems |
| Categorize feedback | CC-NAM, CC-ERR, CC-SEC, etc. | N/A | Keep feedback consistent without "style wars" |
| Add stack nuance | Language overlay | N/A | When the base rule is too generic for a language/framework |
| Allow an exception | Waiver record | N/A | When a rule must be violated with explicit risk |
| Reuse shared checklists | assets/checklists/ | N/A | When you need product-agnostic review/release checklists |
| Reuse utility patterns | references/*-utilities.md | N/A | When extracting shared auth/logging/errors/resilience/testing utilities |

When to Use This Skill

  • Defining or enforcing clean code rules across teams and languages.
  • Reviewing code: cite CC-* IDs and avoid restating standards in reviews.
  • Building automation: map linters/CI gates to CC-* IDs (see the sketch after this list).
  • Resolving recurring review debates: align on rule IDs, scope, and exceptions.
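
As an example of the automation bullet above, here is a sketch that maps tool-specific linter rules to CC-* categories so CI output cites the standard rather than tooling noise. The ESLint rule names are real; the CC-* mappings are invented for illustration.

```typescript
// Hypothetical mapping from linter rule IDs to CC-* categories.
// The ESLint rule names exist; the CC-* targets are illustrative.
const lintToCC: Record<string, string> = {
  "no-unused-vars": "CC-NAM", // naming/dead-code feedback
  "@typescript-eslint/no-floating-promises": "CC-ERR", // unhandled async errors
  "no-eval": "CC-SEC", // security feedback
};

interface LintFinding {
  ruleId: string;
  file: string;
  line: number;
}

// Turns a raw finding into a review comment that cites the standard.
function citeFinding(f: LintFinding): string {
  const cc = lintToCC[f.ruleId];
  return cc
    ? `${cc} violation at ${f.file}:${f.line} (via ${f.ruleId})`
    : `Unmapped finding ${f.ruleId} at ${f.file}:${f.line}; consider adding a CC-* mapping`;
}
```

Unmapped findings are surfaced explicitly so the mapping table, not individual reviewers, absorbs new tool rules.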

When NOT to Use This Skill

Decision Tree: Base Rule vs Overlay vs Exception

Feedback needed: [What kind of guidance is this?]
    ├─ Universal, cross-language rule? → Add/modify `CC-*` in `references/clean-code-standard.md`
    │
    ├─ Language/framework-specific nuance? → Add overlay entry referencing existing `CC-*`
    │
    └─ One-off constraint or temporary tradeoff?
        ├─ Timeboxed? → Add waiver with expiry + tracking issue (sketched after this tree)
        └─ Permanent? → Propose a new rule or revise scope/exception criteria
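
To make the waiver branch concrete, here is a sketch of a waiver record carrying the expiry and tracking issue the tree requires. Every field name is an assumption; this quick reference does not prescribe a waiver schema.

```typescript
// Hypothetical waiver record for the timeboxed branch above.
interface Waiver {
  ruleId: string;        // the CC-* rule being waived
  scope: string;         // file, module, or service the waiver covers
  reason: string;        // the explicit risk or tradeoff being accepted
  expires: string;       // ISO date; waivers must be timeboxed
  trackingIssue: string; // issue that tracks removing the waiver
}

// Example (all values are placeholders):
const waiver: Waiver = {
  ruleId: "CC-SEC",
  scope: "services/report-export",
  reason: "Legacy endpoint lacks CSRF protection until the v2 migration.",
  expires: "2026-06-30",
  trackingIssue: "https://example.com/issues/1234",
};
```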

Navigation

  • Resources
  • Templates
  • Utility Patterns
  • Related Skills


Optional: AI/Automation

  • Map automation findings to CC-* IDs (linters, SAST, dependency scanning) so humans can review impact, not tooling noise.
  • Keep AI-assisted suggestions advisory; human reviewers approve or deny with rule citations in Conventional Comments format (https://conventionalcomments.org/), e.g. `suggestion (CC-ERR): handle this promise rejection explicitly`.

Trend Awareness Protocol

IMPORTANT: When users ask recommendation questions about clean code standards, linters, or code quality tools, you MUST use a web search capability (if available) to check current trends before answering. If web search is unavailable, say so and answer using data/sources.json, clearly flagging that the recommendation may be stale.

Trigger Conditions

  • "What's the best linter for [language]?"
  • "What should I use for [code quality/static analysis]?"
  • "What's the latest in clean code practices?"
  • "Current best practices for [code standards/formatting]?"
  • "Is [ESLint/Prettier/Biome] still relevant in 2026?"
  • "[Biome] vs [ESLint] vs [other]?"
  • "Best static analysis tool for [language]?"

Required Searches

  1. Search: "clean code best practices 2026"
  2. Search: "[specific linter] vs alternatives 2026"
  3. Search: "code quality tools trends 2026"
  4. Search: "[language] linter comparison 2026"
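
A minimal sketch of the fallback logic this protocol implies. `webSearch` is a hypothetical capability, and the schema of data/sources.json is not specified here, so the offline branch only returns the raw file with a staleness flag.

```typescript
import { readFileSync } from "node:fs";

// Hypothetical search capability; the signature is assumed.
type WebSearch = (query: string) => Promise<string[]>;

// The four required searches, parameterized by tool and language.
const requiredQueries = (tool: string, language: string): string[] => [
  "clean code best practices 2026",
  `${tool} vs alternatives 2026`,
  "code quality tools trends 2026",
  `${language} linter comparison 2026`,
];

async function checkTrends(tool: string, language: string, search?: WebSearch) {
  if (!search) {
    // Web search unavailable: fall back to curated sources and flag staleness.
    const sources = readFileSync("data/sources.json", "utf8");
    return { fresh: false, note: "May be stale; based on data/sources.json.", sources };
  }
  const results = await Promise.all(requiredQueries(tool, language).map((q) => search(q)));
  return { fresh: true, results };
}
```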

What to Report

After searching, provide:

  • Current landscape: What linters/formatters are popular NOW
  • Emerging trends: New tools, standards, or patterns gaining traction
  • Deprecated/declining: Tools/approaches losing relevance or support
  • Recommendation: Based on fresh data, not just static knowledge

Example Topics (verify with fresh search)

  • JavaScript/TypeScript linters (ESLint, Biome, oxlint)
  • Formatters (Prettier, dprint, Biome)
  • Python quality (Ruff, mypy, pylint)
  • Go linting (golangci-lint, staticcheck)
  • Rust analysis (clippy, cargo-deny)
  • Code quality metrics and reporting tools
  • AI-assisted code review tools

Fact-Checking

  • Use web search/web fetch to verify current external facts, versions, pricing, deadlines, regulations, or platform behavior before final answers.
  • Prefer primary sources; report source links and dates for volatile information.
  • If web access is unavailable, state the limitation and mark guidance as unverified.