codex-review

Use this skill for cross-model code reviews using the Codex CLI. Activates on mentions of codex review, cross-model review, code review with codex, peer review, review my code, review this PR, review changes, codex check, second opinion, or gpt review.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install with:

    npx skills add hyperb1iss/hyperskills/hyperb1iss-hyperskills-codex-review

Cross-Model Code Review with Codex CLI

Cross-model validation using the codex binary directly. Claude writes code, Codex reviews it — different architecture, different training distribution, no self-approval bias.

Core insight: Single-model self-review is systematically biased. Cross-model review catches different bug classes because the reviewer has fundamentally different blind spots than the author.

Prerequisite: The codex CLI must be installed and authenticated. Verify with codex --help. Configure defaults in ~/.codex/config.toml:

model = "gpt-5.4"
review_model = "gpt-5.4"
# Note: review_model overrides model for codex review specifically
model_reasoning_effort = "high"
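
The prerequisite check can be wrapped in a small guard before any review run. A minimal sketch; `require_cli` is a hypothetical helper, not part of the codex CLI:

```shell
# Hypothetical preflight helper: fail fast if a CLI binary is missing
# instead of discovering it mid-workflow.
require_cli() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "$1 not found; install and authenticate it first" >&2
    return 1
  }
}

# Real use (assumed): require_cli codex && codex review --uncommitted
```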

Two Ways to Invoke Codex

| Mode | Purpose | Best For |
| --- | --- | --- |
| codex review | Structured diff review with prioritized findings | Pre-PR reviews, commit reviews, WIP checks |
| codex exec | Freeform non-interactive deep-dive with full prompt control | Security audits, architecture review, focused investigation |

Key flags:

| Flag | Applies To | Purpose |
| --- | --- | --- |
| -c model="gpt-5.4" | both | Model selection (review has no -m flag) |
| -m, --model | exec only | Model selection shorthand |
| -c model_reasoning_effort="xhigh" | both | Reasoning depth: low / medium / high / xhigh |
| --base <BRANCH> | review only | Diff against base branch |
| --commit <SHA> | review only | Review a specific commit |
| --uncommitted | review only | Review working tree changes |

Review Patterns

Pattern 1: Pre-PR Full Review (Default)

The standard review before opening a PR. Use for any non-trivial change.

Step 1 — Structured review (catches correctness + general issues):
  Run via Bash:
    codex review --base main -c model="gpt-5.4"

Step 2 — Security deep-dive (if code touches auth, input handling, or APIs):
  Run via Bash:
    codex exec -m gpt-5.4 \
      -c model_reasoning_effort="xhigh" \
      "<security prompt from references/prompts.md>"

Step 3 — Fix findings, then re-review:
  Run via Bash:
    codex review --base main -c model="gpt-5.4"

Pattern 2: Commit-Level Review

Quick check after each meaningful commit.

codex review --commit <SHA> -c model="gpt-5.4"

Pattern 3: WIP Check

Review uncommitted work mid-development. Catches issues before they're baked in.

codex review --uncommitted -c model="gpt-5.4"

Pattern 4: Focused Investigation

Surgical deep-dive on a specific concern (error handling, concurrency, data flow).

codex exec -m gpt-5.4 \
  -c model_reasoning_effort="xhigh" \
  "Analyze [specific concern] in the changes between main and HEAD.
   For each issue found: cite file and line, explain the risk,
   suggest a concrete fix. Confidence threshold: only flag issues
   you are >=70% confident about."

Pattern 5: Ralph Loop (Implement-Review-Fix)

Iterative quality enforcement — implement, review, fix, repeat. Max 3 iterations.

Iteration 1:
  Claude -> implement feature
  Bash: codex review --base main -c model="gpt-5.4" -> findings
  Claude -> fix critical/high findings

Iteration 2:
  Bash: codex review --base main -c model="gpt-5.4" -> verify fixes + catch remaining
  Claude -> fix remaining issues

Iteration 3 (final):
  Bash: codex review --base main -c model="gpt-5.4" -> clean bill of health
  (or accept known trade-offs and document them)

STOP after 3 iterations. Diminishing returns beyond this.
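
The loop above can be sketched as a small shell function. The review command is passed in as arguments (codex review --base main -c model="gpt-5.4" in real use) so the control flow is runnable without codex installed; treating empty review output as a clean bill of health is an assumption here, since real triage reads the findings:

```shell
# Sketch of the implement-review-fix loop with a hard cap of 3 iterations.
# "$@" is the review command; empty output is treated as "clean" (an
# assumption for illustration only).
ralph_loop() {
  max=3
  i=1
  while [ "$i" -le "$max" ]; do
    findings=$("$@")
    if [ -z "$findings" ]; then
      echo "clean after $i iteration(s)"
      return 0
    fi
    echo "iteration $i: fix critical/high findings, then re-review"
    i=$((i + 1))
  done
  echo "stopping after $max iterations; accept and document remaining trade-offs"
  return 1
}
```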

Multi-Pass Strategy

For thorough reviews, run multiple focused passes instead of one vague pass. Each pass gets a specific persona and concern domain.

| Pass | Focus | Mode | Reasoning |
| --- | --- | --- | --- |
| Correctness | Bugs, logic, edge cases, race conditions | codex review | default |
| Security | OWASP Top 10, injection, auth, secrets | codex exec with security prompt | xhigh |
| Architecture | Coupling, abstractions, API consistency | codex exec with architecture prompt | xhigh |
| Performance | O(n^2), N+1 queries, memory leaks | codex exec with performance prompt | high |

Run passes sequentially. Fix critical findings between passes to avoid noise compounding.
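
The sequencing can be sketched as a loop; run_pass stands in for the per-pass codex invocation from the table above and is passed in as a parameter so the flow itself is runnable without codex:

```shell
# Sketch of sequential multi-pass review. $1 is a command that runs one
# pass (in real use it would wrap the appropriate codex call); remaining
# arguments name the passes, in order.
multi_pass() {
  run_pass=$1
  shift
  for pass in "$@"; do
    echo "== $pass pass =="
    "$run_pass" "$pass"
    echo "triage and fix critical $pass findings before the next pass"
  done
}

# Real use (assumed):
# multi_pass run_codex_pass correctness security architecture performance
```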

When to use multi-pass vs single-pass:

| Change Size | Strategy |
| --- | --- |
| < 50 lines, single concern | Single codex review |
| 50-300 lines, feature work | codex review + security pass |
| 300+ lines or architecture change | Full 4-pass |
| Security-sensitive (auth, payments, crypto) | Always include security pass |
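
The size thresholds can be encoded in a small helper; choose_strategy is hypothetical, and in real use the line count would come from the diff itself (for example, git diff --numstat main piped through awk):

```shell
# Hypothetical helper mapping diff size to review strategy, per the
# table above. Security-sensitive changes (auth, payments, crypto)
# always get a security pass regardless of the size-based answer.
choose_strategy() {
  lines=$1
  if [ "$lines" -lt 50 ]; then
    echo "single codex review"
  elif [ "$lines" -lt 300 ]; then
    echo "codex review + security pass"
  else
    echo "full 4-pass"
  fi
}
```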

Decision Tree: Which Pattern?

digraph review_decision {
    rankdir=TB;

    // Declare decision nodes before the default shape changes: in DOT,
    // a node takes the default attributes in effect at its first mention.
    "What stage?" [shape=diamond];
    "How big?" [shape=diamond];
    node [shape=box];

    "What stage?" -> "Pre-commit" [label="writing code"];
    "What stage?" -> "Pre-PR" [label="ready to submit"];
    "What stage?" -> "Post-commit" [label="just committed"];
    "What stage?" -> "Investigating" [label="specific concern"];

    "Pre-commit" -> "Pattern 3: WIP Check";
    "Pre-PR" -> "How big?";
    "Post-commit" -> "Pattern 2: Commit Review";
    "Investigating" -> "Pattern 4: Focused Investigation";

    "How big?" -> "Pattern 1: Pre-PR Review" [label="< 300 lines"];
    "How big?" -> "Full Multi-Pass" [label=">= 300 lines"];
}

Prompt Engineering Rules

  1. Assign a persona — "senior security engineer" beats "review for security"
  2. Specify what to skip — "Skip formatting, naming style, minor docs gaps" prevents bikeshedding
  3. Require confidence scores — Only act on findings with confidence >= 0.7
  4. Demand file:line citations — Vague findings without location are not actionable
  5. Ask for concrete fixes — "Suggest a specific fix" not just "this is a problem"
  6. One domain per pass — Security-only, architecture-only. Mixing dilutes depth.

Ready-to-use prompt templates are in references/prompts.md.
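
The six rules can be combined in a single prompt builder; build_review_prompt is a hypothetical helper, and its wording is an illustration, not the templates from references/prompts.md:

```shell
# Hypothetical prompt builder applying the rules above: persona,
# explicit skip list, file:line citations, concrete fixes, confidence
# threshold, and one domain per pass.
build_review_prompt() {
  persona=$1
  domain=$2
  cat <<EOF
You are a ${persona}. Review the diff between main and HEAD for
${domain} issues only.
Skip: formatting, naming style, minor docs gaps.
For each finding: cite file:line, explain the risk, and suggest a
concrete fix. Attach a confidence score and report only findings with
confidence >= 0.7.
EOF
}

# Real use (assumed):
# codex exec -m gpt-5.4 "$(build_review_prompt 'senior security engineer' security)"
```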

Anti-Patterns

| Anti-Pattern | Why It Fails | Fix |
| --- | --- | --- |
| "Review this code" | Too vague; produces surface-level bikeshedding | Use specific domain prompts with persona |
| Single pass for everything | Context dilution; every dimension gets shallow treatment | Multi-pass with one concern per pass |
| Self-review (Claude reviews Claude's code) | Systematic bias; models approve their own patterns | Cross-model: Claude writes, Codex reviews |
| No confidence threshold | Noise floods signal; 0.3-confidence findings waste time | Only act on >= 0.7 confidence |
| Style comments in review | LLMs default to bikeshedding without explicit skip directives | "Skip: formatting, naming, minor docs" |
| > 3 review iterations | Diminishing returns, increasing noise, overbaking | Stop at 3. Accept trade-offs. |
| Review without project context | Generic advice disconnected from codebase conventions | Codex reads CLAUDE.md/AGENTS.md automatically |
| Using an MCP wrapper | Unnecessary indirection over a CLI binary | Call codex directly via Bash |

What This Skill is NOT

  • Not a replacement for human review. Cross-model review catches bugs but can't evaluate product direction or user experience.
  • Not a linter. Don't use Codex review for formatting or style — that's what linters are for.
  • Not infallible. 5-15% false positive rate is normal. Triage findings, don't blindly fix everything.
  • Not for self-approval. The whole point is cross-model validation. Don't use Claude to review Claude's code.

References

For ready-to-use prompt templates, see references/prompts.md.

