
Code Review & Refactoring Guide

Perform a systematic code review — code smells, performance bottlenecks, security vulnerabilities, test gaps, and a prioritised refactoring roadmap.

Full Prompt
Act as a senior software engineer and code quality expert with deep experience reviewing production codebases across startups and enterprise engineering teams. Perform a thorough, systematic code review of the code I provide.

Language / Framework: [e.g. TypeScript / React / Next.js or Python / FastAPI]
Codebase context: [Brief description — what does this code do, how large is the system]
Review focus: [General quality / Pre-production readiness / Security audit / Performance / Refactoring for maintainability]
Team context: [Solo developer / Small team / Large team with code standards]

[PASTE CODE HERE]

— REVIEW FRAMEWORK —

SECTION 1: FIRST IMPRESSIONS (30-second read)
• What is the overall quality signal from a quick scan?
• What do naming conventions, file structure, and code density tell you immediately?
• What is the most urgent thing that needs to change?

SECTION 2: CODE SMELL INVENTORY
For each smell found, provide:
• Location (function/class/line reference)
• Smell type: Long Method / God Class / Feature Envy / Primitive Obsession / Shotgun Surgery / Duplicate Code / Dead Code / Magic Numbers / Inappropriate Intimacy / etc.
• Impact: how does this smell currently hurt the codebase (maintenance cost, bug probability, onboarding friction)?
• Refactoring prescription: the exact pattern or technique to fix it (Extract Method, Replace Magic Number, Introduce Parameter Object, etc.) with a before/after example
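To anchor the before/after expectation, here is a minimal, hypothetical Python sketch of the shape a refactoring prescription should take (the order/shipping domain and all names are invented for illustration):

```python
# Before: magic numbers, and one function doing two jobs (pricing + shipping).
def total_before(items):
    subtotal = sum(i["price"] * i["qty"] for i in items)
    if subtotal > 100:            # magic number: free-shipping threshold
        return subtotal
    return subtotal + 7.5         # magic number: flat shipping fee

# After: Replace Magic Number with named constants, Extract Method for shipping.
FREE_SHIPPING_THRESHOLD = 100
FLAT_SHIPPING_FEE = 7.5

def shipping_fee(subtotal):
    """Shipping is free above the threshold, otherwise a flat fee."""
    return 0 if subtotal > FREE_SHIPPING_THRESHOLD else FLAT_SHIPPING_FEE

def total(items):
    subtotal = sum(i["price"] * i["qty"] for i in items)
    return subtotal + shipping_fee(subtotal)
```

Both versions compute the same result; the point of the prescription is that the second version names its business rules and isolates the one likely to change.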

SECTION 3: PERFORMANCE ANALYSIS
• Identify N+1 query problems, unnecessary re-renders, blocking operations, or O(n²) algorithms
• Memory leaks or retention issues (uncleaned event listeners, growing caches, retained closures)
• Bundle size or import issues (if frontend)
• Database query efficiency: missing indexes, unnecessary full scans, over-fetching
• For each issue: estimated impact (low / medium / high) and the fix
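The N+1 pattern in particular is easiest to explain with a sketch. The data layer below is a stand-in (a dict plus a query counter, no real database or ORM), but the shape of the fix is the same everywhere: collect the ids, fetch once, join in memory:

```python
# Hypothetical data layer: each fetch_* call represents one DB round trip.
AUTHORS = {1: "Ada", 2: "Linus"}
QUERY_COUNT = 0

def fetch_author(author_id):
    global QUERY_COUNT
    QUERY_COUNT += 1
    return AUTHORS[author_id]

def fetch_authors_bulk(author_ids):
    global QUERY_COUNT
    QUERY_COUNT += 1                  # one query regardless of how many ids
    return {i: AUTHORS[i] for i in author_ids}

posts = [{"author_id": 1}, {"author_id": 2}, {"author_id": 1}]

# N+1 shape: one query per post (3 queries for 3 posts).
names_slow = [fetch_author(p["author_id"]) for p in posts]
slow_queries = QUERY_COUNT

# Batched shape: collect ids, fetch once, join in memory (1 query).
QUERY_COUNT = 0
authors = fetch_authors_bulk({p["author_id"] for p in posts})
names_fast = [authors[p["author_id"]] for p in posts]
```

In a real ORM the fix is usually a built-in (e.g. eager loading or an `IN (...)` query) rather than a hand-rolled bulk function.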

SECTION 4: SECURITY AUDIT
Check for and report on:
• Injection vulnerabilities (SQL, command, LDAP, template)
• Authentication and authorisation flaws (missing auth checks, insecure direct object references)
• Sensitive data exposure (secrets in code, over-exposed API responses, unencrypted storage)
• Input validation gaps (unsanitised user input, missing type or bounds checks)
• Dependency vulnerabilities (outdated packages with known CVEs)
• OWASP Top 10 checklist: go through each category and mark as Pass / Fail / N/A
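For the injection category, the review should distinguish string-built queries from parameterised ones. A self-contained sketch with Python's standard `sqlite3` module (in-memory table, invented data) shows the difference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes a tautology and every row leaks.
unsafe_sql = f"SELECT name FROM users WHERE name = '{attacker_input}'"
leaked = conn.execute(unsafe_sql).fetchall()

# Safe: a parameterised placeholder treats the input as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
```

The same principle applies to command, LDAP, and template injection: never splice untrusted input into an interpreted string.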

SECTION 5: RELIABILITY & ERROR HANDLING
• Missing or incorrect error handling
• Uncaught promise rejections or unhandled exceptions
• Race conditions or concurrency issues
• Retry logic for external service calls
• Graceful degradation: what happens when dependencies fail?

SECTION 6: TEST COVERAGE GAPS
• Which critical paths have no tests?
• Which tests assert implementation details instead of observable behaviour?
• Missing edge cases: empty inputs, boundary values, error states, concurrent requests
• Test quality: are tests deterministic, isolated, and meaningful?
• Recommend: unit, integration, or E2E test for each gap found
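To make the edge-case bullet concrete, here is a small invented function with the kind of boundary assertions a coverage review should demand beyond the happy path:

```python
def percentile_rank(scores, value):
    """Fraction of scores strictly below value; defined as 0.0 when empty."""
    if not scores:
        return 0.0
    return sum(s < value for s in scores) / len(scores)

# Edge cases, not just the happy path:
assert percentile_rank([], 50) == 0.0              # empty input
assert percentile_rank([50], 50) == 0.0            # boundary: equal is not below
assert percentile_rank([10, 20, 30], 25) == 2 / 3  # typical case
assert percentile_rank([10, 20, 30], 100) == 1.0   # everything below
```

These are deterministic and isolated, and each asserts behaviour (the returned fraction), not implementation.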

SECTION 7: MAINTAINABILITY & READABILITY
• Documentation gaps: what needs a comment (only where the WHY is non-obvious)
• Function and variable naming improvements (with specific rename suggestions)
• Functions doing more than one thing (single responsibility violations)
• Abstraction level: too abstract (over-engineered), too concrete (hard to modify), or just right?
• Magic values, unclear boolean flags, confusing conditional logic
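The unclear-boolean-flag bullet has a standard remedy worth illustrating. This hypothetical Python sketch replaces an opaque flag argument with an explicit `Enum`, making call sites self-documenting:

```python
from enum import Enum

# Before: a boolean flag hides two behaviours in one function.
# A reader of export("rows", True) cannot tell what True means.
def export(data, compressed):
    return f"gzip:{data}" if compressed else f"raw:{data}"

# After: an explicit Enum names the choice at every call site.
class Format(Enum):
    RAW = "raw"
    GZIP = "gzip"

def export_as(data, fmt: Format):
    return f"{fmt.value}:{data}"

plain = export_as("rows", Format.RAW)
packed = export_as("rows", Format.GZIP)
```

The same idea (Replace Flag Argument) also applies when the flag secretly selects between two unrelated code paths, in which case two separate functions may read better than an enum.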

SECTION 8: REFACTORING ROADMAP (Prioritised)
Organise all findings into a prioritised action plan:
| Priority | Finding | File/Line | Effort | Impact | Recommended Action |
|---|---|---|---|---|---|

Priority levels:
• P0 — Fix before this code ships (security, data loss, critical bugs)
• P1 — Fix in the next sprint (reliability, performance, correctness)
• P2 — Fix in the next quarter (maintainability, test coverage)
• P3 — Nice to have (style, minor cleanup, documentation)

SECTION 9: POSITIVE PATTERNS (What to Keep & Scale)
• Identify 2–3 things done well that should be adopted across the codebase
• Name the pattern, where it appears, and why it's good


How to use

  1. Fill in your details above for a personalised prompt
  2. Click a platform to open it — prompt loads automatically
  3. Replace any remaining [PLACEHOLDERS] as needed
  4. Use Developer Tools on CodeBrewTools to enhance results