# Code Review and Metric Collection

Review the changes and save the metrics data to a local temporary file.
## Script Directory

Agent execution instructions:

- Determine this SKILL.md file's directory path as `SKILL_DIR`.
- Script path = `${SKILL_DIR}/scripts/<script-name>.sh`
| Script | Purpose |
| -------------------- | ------------------------------------------------------------------------ |
| collect-metrics.sh | Collect git statistics and code review metrics to a local temporary file |
| show-diff.sh | Show git diff with exclusions |
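The path resolution above can be sketched as follows; `SKILL_MD_PATH` is a hypothetical placeholder for wherever the agent found this SKILL.md:

```shell
# Hypothetical sketch: derive SKILL_DIR and a script path from the
# location of this SKILL.md file. "/path/to/skill" is a placeholder.
SKILL_MD_PATH="/path/to/skill/SKILL.md"
SKILL_DIR="$(dirname "$SKILL_MD_PATH")"
SCRIPT="${SKILL_DIR}/scripts/show-diff.sh"
echo "$SCRIPT"   # → /path/to/skill/scripts/show-diff.sh
```

In practice the agent substitutes the real path it resolved for the skill file; only the `dirname` step is essential.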
## Workflow
Copy this checklist and check off items as you complete them:
- [ ] Step 1: Git Repo verification & Changes confirmation
- [ ] Step 2: Code Review
- [ ] 2.1 Load guidelines
- [ ] 2.2 Analyze changes
- [ ] 2.3 Output Result
- [ ] Step 3: User feedback & Validation
- [ ] Step 4: Collect metrics ⚠️ REQUIRED
## Step 1: Git Repo verification & Changes confirmation

Verify that the current directory is inside a git repository and that there are changes to review (staged, unstaged, or untracked files):
```bash
if ! git rev-parse --git-dir > /dev/null 2>&1; then
  echo "Error: Not a git repository. This command requires git version control."
  exit 1
fi
echo "Success: Git repository verified."

if [ -z "$(git status --porcelain)" ]; then
  echo "No changes detected. Working tree is clean."
  exit 1
else
  echo "Changes detected."
fi
```
## Step 2: Code Review

### 2.1 Load guidelines

Load `references/guidelines.md` and treat it as the canonical set of rules to follow.
### 2.2 Analyze changes

- Get staged, unstaged, and untracked changes using Bash:
  `git status --short && bash ${SKILL_DIR}/scripts/show-diff.sh`
- Analyze the code changes to identify issues and record the details (FilePath, Line, Suggestion) for the output step.
- Initialize the counters `CRITICAL_ISSUES_COUNT` and `HIGH_PRIORITY_ISSUES_COUNT`.
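As an illustration only, the counter bookkeeping could look like the sketch below; the `severity|file|line|suggestion` findings format is hypothetical, not something the skill prescribes:

```shell
# Hypothetical sketch: tally findings by severity while triaging.
CRITICAL_ISSUES_COUNT=0
HIGH_PRIORITY_ISSUES_COUNT=0

# Example findings, one per line: "severity|file|line|suggestion" (made-up format).
findings="CRITICAL|src/auth.sh|42|validate input
HIGH|src/util.sh|7|quote variable"

while IFS='|' read -r severity _file _line _suggestion; do
  case "$severity" in
    CRITICAL) CRITICAL_ISSUES_COUNT=$((CRITICAL_ISSUES_COUNT + 1)) ;;
    HIGH)     HIGH_PRIORITY_ISSUES_COUNT=$((HIGH_PRIORITY_ISSUES_COUNT + 1)) ;;
  esac
done <<EOF
$findings
EOF

echo "$CRITICAL_ISSUES_COUNT $HIGH_PRIORITY_ISSUES_COUNT"   # → 1 1
```

The heredoc (rather than a pipe) keeps the loop in the current shell, so the counters survive past `done`.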
### 2.3 Output Result

Show the categorized issues to the user, following exactly one of the two templates:
**Template A (any findings)**

```markdown
## Code review Skill Output

### 🚨 CRITICAL (Must fix)

#### <brief description of CRITICAL issue>
- FilePath: <path> line <line>
- Suggested fix: <brief description of suggested fix>

... (repeat for each CRITICAL issue) ...

---

### ⚠️ HIGH PRIORITY (Should fix)

#### <brief description of HIGH PRIORITY issue>
- FilePath: <path> line <line>
- Suggested fix: <brief description of suggested fix>

... (repeat for each HIGH PRIORITY issue) ...

---

### 💡 SUGGESTIONS (Consider)

#### <brief description of SUGGESTIONS issue>
- FilePath: <path> line <line>
- Suggested fix: <brief description of suggested fix>

... (repeat for each SUGGESTIONS issue) ...
```
**Template B (no issues)**

```markdown
## Code review Skill Output

🎉 Great! No issues found.
```
## Step 3: User feedback & Validation
- If any issues were found, ask the user: "Do you have any objections to the review results?" and present two options:
  1. Yes, I have objections
  2. No, proceed to the next step
- If the user chooses "Yes":
  - Provide a numbered list of all CRITICAL and HIGH PRIORITY issues.
  - Ask the user: "Please select the numbers of the issues you disagree with and briefly explain why (e.g., '1-this is a test file, 3-intentional technical debt')"
  - Parse the user's response to identify which specific CRITICAL and HIGH PRIORITY issues were rejected.
  - Extract the user's reason for each rejected item.
  - Calculate `REJECTED_CRITICAL_COUNT` and `REJECTED_HIGH_COUNT`.
  - Format the rejected details into a single-line string `REJECTION_DETAILS` (format: `[Level]Issue brief=>User reason;[Level]Issue brief=>User reason`).
- If no issues were found or the user has no objections:
  - `REJECTED_CRITICAL_COUNT=0`, `REJECTED_HIGH_COUNT=0`, `REJECTION_DETAILS=""`
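The rejection bookkeeping above can be sketched as follows; the `add_rejection` helper and the example issue texts are hypothetical, only the `[Level]brief=>reason` separator format comes from this document:

```shell
# Hypothetical sketch: accumulate rejected issues into REJECTION_DETAILS
# using the "[Level]Issue brief=>User reason" format, ";"-separated.
REJECTED_CRITICAL_COUNT=0
REJECTED_HIGH_COUNT=0
REJECTION_DETAILS=""

add_rejection() {  # args: level, issue brief, user reason (helper is made up)
  level="$1"; brief="$2"; reason="$3"
  entry="[${level}]${brief}=>${reason}"
  if [ -z "$REJECTION_DETAILS" ]; then
    REJECTION_DETAILS="$entry"
  else
    REJECTION_DETAILS="${REJECTION_DETAILS};${entry}"
  fi
  case "$level" in
    CRITICAL) REJECTED_CRITICAL_COUNT=$((REJECTED_CRITICAL_COUNT + 1)) ;;
    HIGH)     REJECTED_HIGH_COUNT=$((REJECTED_HIGH_COUNT + 1)) ;;
  esac
}

add_rejection CRITICAL "Hardcoded path" "this is a test file"
add_rejection HIGH "TODO left in code" "intentional technical debt"
echo "$REJECTION_DETAILS"
# → [CRITICAL]Hardcoded path=>this is a test file;[HIGH]TODO left in code=>intentional technical debt
```

Keeping the string single-line matters because it is later passed as one positional argument to the metrics script.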
## Step 4: Collect metrics ⚠️ REQUIRED
```bash
CHANGESET_SUMMARY="<Brief summary of changes (single-line only, within 100 words)>"
CODE_REVIEW_SUMMARY="<Detailed review summary (single-line only)>"

bash "${SKILL_DIR}/scripts/collect-metrics.sh" \
  "$CHANGESET_SUMMARY" \
  "$CODE_REVIEW_SUMMARY" \
  "$CRITICAL_ISSUES_COUNT" \
  "$HIGH_PRIORITY_ISSUES_COUNT" \
  "$REJECTED_CRITICAL_COUNT" \
  "$REJECTED_HIGH_COUNT" \
  "$REJECTION_DETAILS"
```
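The actual persistence format is defined by `scripts/collect-metrics.sh`; purely as an illustration under assumed behavior, a collector of this shape might write its inputs as key=value pairs to a local temporary file:

```shell
# Hypothetical sketch only: the real logic lives in scripts/collect-metrics.sh.
# Persist example metric values as key=value lines in a temp file.
CHANGESET_SUMMARY="Refactor login flow"   # example value
CRITICAL_ISSUES_COUNT=2                   # example value

tmpfile="$(mktemp)"
{
  echo "changeset_summary=${CHANGESET_SUMMARY}"
  echo "critical_issues_count=${CRITICAL_ISSUES_COUNT}"
} > "$tmpfile"

echo "Metrics saved to $tmpfile"
```

Whatever the real script does, keeping every argument quoted in the Step 4 invocation ensures multi-word summaries and the `REJECTION_DETAILS` string each arrive as a single argument.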
## Notes

- Never modify the code after the review.