# The Code Reviewer Agent
The code-reviewer agent performs technical review of implemented code after tests pass. It evaluates quality, flags issues by severity, scores the implementation, and either approves progression or blocks it until critical issues are resolved.
## Role
The reviewer’s job is honest technical assessment — not encouragement. It applies YAGNI, KISS, and DRY checks, catches security issues, verifies architecture compliance, and produces actionable feedback. It never rubber-stamps code to keep a pipeline moving.
## When the Reviewer Is Invoked
Cook spawns the code-reviewer automatically after the tester reports all tests passing. You can also invoke it directly:

```
/gk:code-review
```
For a full codebase scan:

```
/gk:code-review codebase
```
For parallel review of large features (3+ files changed):

```
/gk:code-review codebase parallel
```
## Review Process
The reviewer follows a structured pipeline for multi-file features:

```
Scout edge cases → Review implementation → Score → Flag issues → Fix cycle (if needed) → Final approval
```
### 1. Edge Case Scouting
Before writing a single review comment, the reviewer activates the `/gk:scout` skill to analyze:
- Affected files and their data flows
- Error paths and boundary conditions
- Potential race conditions or concurrency issues
- Security attack surfaces
### 2. Code Inspection
The reviewer reads every file in scope and evaluates:
| Category | What Is Checked |
|---|---|
| Correctness | Logic errors, off-by-ones, null handling |
| Security | Input validation, SQL injection, XSS, auth bypass |
| Performance | N+1 queries, missing indexes, unnecessary allocations |
| Architecture | YAGNI violations, abstraction leaks, tight coupling |
| Standards | File size limits, naming conventions, comment quality |
| DRY | Duplicated logic that should be extracted |
### 3. Scoring
The reviewer produces a numeric score from 0–10:

```
Score: 9.2/10
Critical issues: 0
Important issues: 1
Minor issues: 3
```
In `--auto` mode, cook auto-approves scores of 9.5 or higher with 0 critical issues. Below that threshold, the reviewer’s feedback goes to the developer for a fix cycle.
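The auto-approval gate described above can be sketched as a small predicate. This is a minimal illustration of the threshold logic, not cook's actual implementation — the `ReviewResult` and `autoGate` names are hypothetical:

```typescript
// Hypothetical sketch of cook's --auto gate; names are illustrative.
interface ReviewResult {
  score: number;           // 0-10 scale produced by the reviewer
  criticalIssues: number;  // count of critical-severity findings
}

type Verdict = "auto-approve" | "fix-cycle";

function autoGate(result: ReviewResult): Verdict {
  // Auto-approval requires BOTH a score of 9.5+ AND zero critical issues;
  // failing either condition routes feedback back to the developer.
  return result.score >= 9.5 && result.criticalIssues === 0
    ? "auto-approve"
    : "fix-cycle";
}
```

Note that a 9.8 score with even one critical issue still triggers a fix cycle — the two conditions are independent.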
## Issue Severity
| Severity | Definition | Action Required |
|---|---|---|
| Critical | Security vulnerability, data loss risk, broken functionality | Fix immediately, re-review |
| Important | Performance problem, bad pattern, missing error handling | Fix before merge |
| Minor | Style inconsistency, naming, comments | Fix if time permits |
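The table above can be read as a severity-to-action mapping. As a hedged sketch (the `Severity` type and `blocksApproval` helper are illustrative, not part of gk's actual API):

```typescript
// Illustrative mapping of severity levels to required actions.
type Severity = "critical" | "important" | "minor";

const requiredAction: Record<Severity, string> = {
  critical: "Fix immediately, re-review",
  important: "Fix before merge",
  minor: "Fix if time permits",
};

// Only critical findings block the pipeline outright; important findings
// must be fixed before merge, and minors never block.
function blocksApproval(severity: Severity): boolean {
  return severity === "critical";
}
```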
## Review Feedback Format
The reviewer produces structured findings — never vague comments:
```markdown
## Code Review — Auth Module

**Score: 8.7/10**

### Critical (0)
None.

### Important (1)
**jwt-verify.ts:34** — Token expiry check uses `Date.now()` comparison
instead of the JWT library's built-in verification. This can be bypassed
by clock skew.

Fix: Use `jwt.verify(token, secret, { clockTolerance: 30 })` instead.

### Minor (2)
**auth-service.ts:12** — Variable `t` should be named `token` for clarity.
**auth-routes.ts:45** — Missing JSDoc on the exported `createAuthRouter` function.

### Verdict
Approve after fixing the Important issue. Re-review not required for minors.
```
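Each finding in that format carries a file location, a severity, a summary, and an optional fix. A minimal sketch of such a structured finding and its rendering — the `Finding` shape and `renderFinding` function are assumptions for illustration, not the reviewer's real schema:

```typescript
// Hypothetical structured finding; field names are illustrative.
interface Finding {
  file: string;
  line: number;
  severity: "critical" | "important" | "minor";
  summary: string;
  fix?: string; // actionable remediation, when one is known
}

// Renders a finding in the "**file:line** — summary" style shown above.
function renderFinding(f: Finding): string {
  const fix = f.fix ? `\nFix: ${f.fix}` : "";
  return `**${f.file}:${f.line}** — ${f.summary}${fix}`;
}
```

Keeping findings structured (rather than free-form prose) is what lets the fix cycle target each issue individually.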
## Task-Managed Review Pipeline
For large features, the reviewer creates a task dependency chain:

```
TaskCreate: "Scout edge cases" → pending
TaskCreate: "Review implementation" → pending, blockedBy: [scout]
TaskCreate: "Fix critical issues" → pending, blockedBy: [review]
TaskCreate: "Verify fixes pass" → pending, blockedBy: [fix]
```
This ensures fixes are verified before the pipeline advances — the reviewer does not trust “it’s fixed” claims without running verification commands and reading actual output.
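The dependency semantics can be sketched as follows — `Task`, its fields, and the `runnable` helper are illustrative stand-ins for the actual task tool, shown only to make the blocking behavior concrete:

```typescript
// Hypothetical model of the review task chain; not the real task tool's API.
interface Task {
  id: string;
  title: string;
  status: "pending" | "done";
  blockedBy: string[]; // ids of tasks that must complete first
}

const chain: Task[] = [
  { id: "scout",  title: "Scout edge cases",      status: "pending", blockedBy: [] },
  { id: "review", title: "Review implementation", status: "pending", blockedBy: ["scout"] },
  { id: "fix",    title: "Fix critical issues",   status: "pending", blockedBy: ["review"] },
  { id: "verify", title: "Verify fixes pass",     status: "pending", blockedBy: ["fix"] },
];

// A task may start only when every blocker is done — so "Verify fixes pass"
// cannot be skipped by claiming the fix task is complete.
function runnable(task: Task, all: Task[]): boolean {
  return task.blockedBy.every(id => all.find(t => t.id === id)?.status === "done");
}
```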
## Next Steps
- Agent System Architecture — see how the reviewer fits the full pipeline
- Skills Catalog — the `/gk:code-review` skill reference