GigiKit Guides

The Code Reviewer Agent

The code-reviewer agent performs technical review of implemented code after tests pass. It evaluates quality, flags issues by severity, scores the implementation, and either approves progression or blocks it until critical issues are resolved.

Role

The reviewer’s job is honest technical assessment — not encouragement. It applies YAGNI, KISS, and DRY checks, catches security issues, verifies architecture compliance, and produces actionable feedback. It never rubber-stamps code to keep a pipeline moving.

When the Reviewer Is Invoked

Cook spawns the code-reviewer automatically after the tester reports all tests passing. You can also invoke it directly:

/gk:code-review

For a full codebase scan:

/gk:code-review codebase

For parallel review of large features (3+ files changed):

/gk:code-review codebase parallel

Review Process

The reviewer follows a structured pipeline for multi-file features:

Scout edge cases → Review implementation → Score → Flag issues → Fix cycle (if needed) → Final approval

1. Edge Case Scouting

Before writing a single review comment, the reviewer activates the /gk:scout skill to analyze:

  • Affected files and their data flows
  • Error paths and boundary conditions
  • Potential race conditions or concurrency issues
  • Security attack surfaces

2. Code Inspection

The reviewer reads every file in scope and evaluates:

| Category | What Is Checked |
| --- | --- |
| Correctness | Logic errors, off-by-ones, null handling |
| Security | Input validation, SQL injection, XSS, auth bypass |
| Performance | N+1 queries, missing indexes, unnecessary allocations |
| Architecture | YAGNI violations, abstraction leaks, tight coupling |
| Standards | File size limits, naming conventions, comment quality |
| DRY | Duplicated logic that should be extracted |
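One way to picture the inspection output is a structured finding per issue. The types below are an illustrative sketch, not GigiKit's actual internal API; every name here is an assumption.

```typescript
// Hypothetical shape for a single review finding. All names are
// illustrative, not GigiKit's real types.
type Category =
  | "correctness"
  | "security"
  | "performance"
  | "architecture"
  | "standards"
  | "dry";

type Severity = "critical" | "important" | "minor";

interface Finding {
  file: string;        // e.g. "jwt-verify.ts"
  line: number;
  category: Category;
  severity: Severity;
  description: string;
  suggestedFix?: string;
}

// Tally findings by severity, matching the summary counts a review reports.
function countBySeverity(findings: Finding[]): Record<Severity, number> {
  const counts: Record<Severity, number> = { critical: 0, important: 0, minor: 0 };
  for (const f of findings) counts[f.severity] += 1;
  return counts;
}
```

Structuring findings this way is what makes the severity counts and the feedback format later in this guide mechanical to produce.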

3. Scoring

The reviewer produces a numeric score from 0–10:

Score: 9.2/10
Critical issues: 0
Important issues: 1
Minor issues: 3

In --auto mode, Cook auto-approves scores of 9.5 or higher with 0 critical issues. Below that threshold, the reviewer’s feedback goes back to the developer for a fix cycle.
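The auto-approval gate reduces to a single check. The threshold (9.5) and the zero-critical requirement come from this guide; the function name is ours, not Cook's actual API.

```typescript
// Sketch of the --auto approval gate: a score of 9.5+ AND no critical
// issues. The function name is illustrative.
function autoApprove(score: number, criticalIssues: number): boolean {
  return score >= 9.5 && criticalIssues === 0;
}
```

Note that both conditions must hold: a 9.8 with one critical issue is still blocked.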

Issue Severity

| Severity | Definition | Action Required |
| --- | --- | --- |
| Critical | Security vulnerability, data loss risk, broken functionality | Fix immediately, re-review |
| Important | Performance problem, bad pattern, missing error handling | Fix before merge |
| Minor | Style inconsistency, naming, comments | Fix if time permits |
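The severity-to-action mapping above is a fixed lookup, sketched here with illustrative names (this is not GigiKit's actual code):

```typescript
// Map each severity to the action the table requires. Names are illustrative.
type Severity = "critical" | "important" | "minor";

function requiredAction(severity: Severity): string {
  switch (severity) {
    case "critical":
      return "Fix immediately, re-review";
    case "important":
      return "Fix before merge";
    case "minor":
      return "Fix if time permits";
  }
}
```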

Review Feedback Format

The reviewer produces structured findings — never vague comments:

## Code Review — Auth Module

**Score: 8.7/10**

### Critical (0)
None.

### Important (1)
**jwt-verify.ts:34** — Token expiry check uses `Date.now()` comparison
instead of the JWT library's built-in verification. This can be bypassed
by clock skew.
Fix: Use `jwt.verify(token, secret, { clockTolerance: 30 })` instead.

### Minor (2)
**auth-service.ts:12** — Variable `t` should be named `token` for clarity.
**auth-routes.ts:45** — Missing JSDoc on the exported `createAuthRouter` function.

### Verdict
Approve after fixing the Important issue. Re-review not required for minors.
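Because the feedback format is structured, downstream tooling can extract the score mechanically. A minimal sketch, assuming the `**Score: X/10**` header shown in the example (the helper name is ours):

```typescript
// Illustrative helper that pulls the numeric score out of a review header
// such as "**Score: 8.7/10**". Returns null if no score line is found.
function parseScore(review: string): number | null {
  const m = review.match(/Score:\s*([\d.]+)\/10/);
  return m ? parseFloat(m[1]) : null;
}
```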

Task-Managed Review Pipeline

For large features, the reviewer creates a task dependency chain:

TaskCreate: "Scout edge cases"          → pending
TaskCreate: "Review implementation"     → pending, blockedBy: [scout]
TaskCreate: "Fix critical issues"       → pending, blockedBy: [review]
TaskCreate: "Verify fixes pass"         → pending, blockedBy: [fix]

This ensures fixes are verified before the pipeline advances — the reviewer does not trust “it’s fixed” claims without running verification commands and reading actual output.
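The dependency chain can be modeled as plain data: a task is runnable only when everything it is blocked by is done. The sketch below uses illustrative objects, not GigiKit's real TaskCreate API.

```typescript
// Illustrative model of the review pipeline's task chain; not the real
// TaskCreate API.
interface Task {
  id: string;
  title: string;
  status: "pending" | "done";
  blockedBy: string[];
}

function makePipeline(): Task[] {
  return [
    { id: "scout",  title: "Scout edge cases",      status: "pending", blockedBy: [] },
    { id: "review", title: "Review implementation", status: "pending", blockedBy: ["scout"] },
    { id: "fix",    title: "Fix critical issues",   status: "pending", blockedBy: ["review"] },
    { id: "verify", title: "Verify fixes pass",     status: "pending", blockedBy: ["fix"] },
  ];
}

// A task is runnable only when every task it is blocked by is done.
function runnable(tasks: Task[]): Task[] {
  const done = new Set(tasks.filter(t => t.status === "done").map(t => t.id));
  return tasks.filter(
    t => t.status === "pending" && t.blockedBy.every(id => done.has(id))
  );
}
```

Initially only the scout task is runnable; completing it unblocks the review, and so on down the chain, which is what prevents the pipeline from advancing past unverified fixes.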

Next Steps