Actionable rules and workflow for consistent, high-quality code reviews across multi-language projects.
Your team's code reviews are scattered, inconsistent, and burning precious development time. Some PRs slip through with security holes, others get nitpicked to death over formatting, and critical business-logic errors make it to production while reviewers argue about semicolons. You're already doing code reviews; the problem is how you're doing them.

These rules transform chaotic review processes into a lean, automated workflow that catches more bugs while requiring less human effort. Instead of letting reviewers spend time on formatting and syntax, the system automates the mundane and focuses human intelligence on what matters: business logic, security, and architecture.

The core principle: machines handle syntax and style; humans focus on correctness and design.

Here's what a typical unmanaged review looks like:
```
# PR contains 847 lines across 23 files
# Mix of bug fix, refactoring, and new feature
# 3 hours of review time
# 47 comments, mostly style issues
# Security vulnerability missed
# Merged with technical debt
```
Under these rules, the same PR moves through three stages:

```
# Stage 1: Automated (2 minutes)
✅ ESLint/mypy/golint pass
✅ SonarQube security scan clean
✅ SBOM diff shows no new high-risk deps
✅ Coverage threshold maintained

# Stage 2: AI Pre-Review (30 seconds)
🤖 CriticGPT flags 3 potential logic issues
🤖 Jules suggests performance optimization

# Stage 3: Human Review (15 minutes)
👥 Domain expert + junior developer
🎯 Focus on business logic and error handling
✅ 2 approvals, 4 constructive comments
```
Result: 3 hours → 18 minutes, higher defect detection, better knowledge sharing.
```yaml
# .github/workflows/review-gate.yml
name: Review Gate
on: [pull_request]
jobs:
  automated-checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run linters
        run: |
          eslint .
          mypy . --strict
          golint ./...
      - name: Security scan
        run: |
          sonar-scanner
          snyk test
          cyclonedx-bom -o sbom.json
      - name: AI pre-review
        run: criticgpt-annotate ${{ github.event.pull_request.number }}
```
Create `.github/pull_request_template.md`:

```markdown
### Context
Fixes #[issue] – [brief description]

### Changes
- [Specific change 1]
- [Specific change 2]

### Testing
`[command to run tests]` → [coverage %]

### Security Impact
[None/Low/Medium/High] – [explanation]

### Checklist
- [ ] Lint/Type checks pass
- [ ] SBOM updated
- [ ] Added/updated tests
- [ ] Documentation updated
```
Enforce the gate with branch protection (GitHub-style settings; exact field names vary by platform and API version):

```json
{
  "required_reviews": 2,
  "dismiss_stale_reviews": true,
  "require_code_owner_reviews": true,
  "required_status_checks": {
    "strict": true,
    "contexts": ["ci/automated-checks", "ci/security-scan"]
  }
}
```
JavaScript/TypeScript:

```json
// .eslintrc.json
{
  "parser": "@typescript-eslint/parser",
  "parserOptions": { "project": "./tsconfig.json" },
  "plugins": ["@typescript-eslint"],
  "extends": ["plugin:@typescript-eslint/recommended"],
  "rules": {
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/strict-boolean-expressions": "error"
  }
}
```
Python:

```ini
# mypy.ini
[mypy]
strict = True
no_implicit_optional = True
warn_return_any = True
```
Go:

```makefile
# Makefile
lint:
	go vet ./...
	golint ./...
	staticcheck ./...
```
Track these KPIs to prove the system works: review size (≤400 LOC), review turnaround (≤60 min), pre-merge defect discovery rate (≥70%), and response SLA (authors ping after 24 h idle).
These rules don't just improve code quality—they transform how your team collaborates. By automating the mundane and systematizing the critical, you'll catch more bugs, ship faster, and create a learning environment where junior developers grow quickly.
Start with automation first: Set up the CI pipeline and automated checks before changing your review process. Once machines handle the basics, your human reviewers can focus on what they do best: understanding business logic, catching edge cases, and sharing knowledge.
Your code reviews should make your codebase better with every merge. These rules ensure they do.
You are an expert in multi-language code review, Git-based workflows, static analysis (SonarQube, ESLint, mypy), Application Security Posture Management (ASPM), SBOM, and AI-assisted review tools (CriticGPT, Google Jules).
Key Principles
- Keep reviews small and focused: 200–400 LOC per session.
- Time-box sessions to ≤60 min and keep the pace under ≈500 LOC/hr to avoid reviewer fatigue.
- Automate what machines do best (style, syntax, simple bugs); reserve human effort for design, business logic, and security.
- Shift-left security: run security checks (SAST, SBOM, ASPM) before manual review starts.
- Separate concerns: individual pull requests (PRs) must contain exactly one logical change (feature, fix, or refactor).
- Reviews are learning tools: document findings and share with the whole team.
- Always leave the codebase better: every merged PR must improve readability, safety, or performance.
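The 200–400 LOC budget above is easy to enforce in CI. A minimal Python sketch (the `git diff --numstat` input format is real; the function names and threshold wiring are illustrative):

```python
# pr_size_gate.py -- warn when a PR exceeds the review-size budget.
# Feed it the output of `git diff --numstat base...head`.

def changed_loc(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output.

    Binary files appear as '-\t-\tpath' and are skipped.
    """
    total = 0
    for line in numstat.strip().splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added == "-" or deleted == "-":
            continue  # binary file; should live in Git LFS anyway
        total += int(added) + int(deleted)
    return total

def within_budget(numstat: str, limit: int = 400) -> bool:
    """True when the diff fits the per-session review budget."""
    return changed_loc(numstat) <= limit

sample = "120\t30\tsrc/tax.py\n-\t-\tlogo.png\n10\t5\ttests/test_tax.py"
print(changed_loc(sample))    # 165
print(within_budget(sample))  # True
```

Wire this into the CI gate so oversized PRs are bounced back for splitting before any human looks at them.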
Language-Agnostic Rules
- Pull-Request Hygiene
• Title: <scope>: <concise summary> (e.g., auth: add JWT rotation)
• Description template: Context | Changes | Testing | Security Impact | Checklist.
• Link related tickets; include screenshots or API samples when UI/contract changes.
- Reviewer Selection
• At least one domain expert + one newcomer for knowledge spread.
• Rotate secondary reviewers to minimize siloing.
- Checklist (apply every review)
• Correctness: Does the code do what the ticket says?
• Security: OWASP Top-10, injection, auth, crypto misuse.
• Error handling: Are failures surfaced, logged, retried, or propagated?
• Tests: Unit + integration updated? Coverage ≥ targeted threshold.
• Documentation: Public APIs, complex functions, env vars updated.
• Performance: Big-O changes, DB indices, memory spikes.
• Accessibility & i18n (if UI).
• Compliance: licenses, SBOM delta, GDPR/PII.
- Metrics
• Review size (LOC) ≤400.
• Review time ≤60 min.
• Defect discovery rate ≥ 70% pre-merge (track via defect-leak metrics).
• Response SLA: PR author pings after 24 h idle.
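A sketch of the defect-leak bookkeeping behind the ≥70% target (how defects are counted and attributed back to PRs is an assumption left to the team):

```python
def pre_merge_discovery_rate(found_in_review: int, escaped_to_prod: int) -> float:
    """Share of all known defects caught before merge.

    'Escaped' defects are those found in production and traced back
    to a reviewed PR (defect-leak tracking).
    """
    total = found_in_review + escaped_to_prod
    if total == 0:
        return 1.0  # no defects recorded: trivially meets the target
    return found_in_review / total

assert pre_merge_discovery_rate(14, 6) == 0.7  # exactly at the 70% bar
```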
Language-Specific Focus Areas
JavaScript/TypeScript
- Enforce ESLint + Prettier auto-fix prior to PR.
- Verify strict null checks ("strict": true).
- Prefer pure functions; avoid side-effects in reducers/services.
- Ensure async flows handle Promise rejections (await wrapped in try/catch).
Python
- mypy passes with --strict; no "Any" in new code.
- Follow PEP8; black formatted.
- Validate resource cleanup via context managers.
- Confirm logging uses structured logging (json).
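The last two Python rules can be illustrated together; a minimal sketch (the `JsonFormatter` class and `load_rates` helper are hypothetical examples, not a required library):

```python
import json
import logging

# Structured (JSON) logging: each record becomes one machine-parseable line.
class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("tax")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Resource cleanup via a context manager: the file is closed even on error.
def load_rates(path: str) -> dict:
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)

log.info("rates loaded")  # emits: {"level": "INFO", "logger": "tax", "message": "rates loaded"}
```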
Go
- go vet, golint, staticcheck clean.
- Errors wrapped with %w; no naked returns in public funcs.
- Concurrency: verify context cancellation paths, avoid goroutine leaks.
Error Handling & Validation Rules
- Fail fast: validate inputs at function boundary; return typed error (or Result).
- Use early exits over deeply nested conditionals.
- Log unexpected (non-business) errors with correlation IDs.
- Surface user-safe messages; hide internals.
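Put together, the four rules look roughly like this in Python (the `apply_vat`/`handle` names and the error shape are illustrative):

```python
import uuid
from dataclasses import dataclass

@dataclass
class ValidationError(Exception):
    field: str
    reason: str  # internal detail: logged, never shown to the user

USER_SAFE = "Invalid request. Please check your input."

def apply_vat(amount_cents: int, rate_pct: float) -> int:
    # Fail fast: validate at the function boundary, exit early.
    if amount_cents < 0:
        raise ValidationError("amount_cents", "must be non-negative")
    if not 0 <= rate_pct <= 100:
        raise ValidationError("rate_pct", "must be between 0 and 100")
    return amount_cents + round(amount_cents * rate_pct / 100)

def handle(amount_cents: int, rate_pct: float) -> dict:
    correlation_id = str(uuid.uuid4())
    try:
        return {"total": apply_vat(amount_cents, rate_pct)}
    except ValidationError as err:
        # Log internals with a correlation ID; surface only a safe message.
        print(f"[{correlation_id}] validation failed: {err.field}: {err.reason}")
        return {"error": USER_SAFE, "correlation_id": correlation_id}
```

The early exits keep the happy path flat, and the correlation ID lets support trace a user report back to the full internal log entry.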
Tooling Workflow
1. CI Stage 1 (Automated)
- Run unit tests + coverage gate.
- Linters & formatters.
- SAST (SonarQube) + dependency scan (Snyk, osv).
- SBOM generation & diff (CycloneDX).
2. CI Stage 2 (AI Pre-Review)
- CriticGPT / Jules annotate potential issues; commit comments labelled "ai:".
3. Manual Review
- Use GitHub/GitLab inline comments.
- Tag severity on each comment: blocking, suggestion, or nit.
4. Merge Gate
- All blocking comments resolved.
- CI green, coverage diff ≥0.
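The coverage part of the merge gate reduces to a one-line check; a sketch (the tolerance parameter is an assumption, not part of any specific CI product):

```python
def coverage_gate(base_pct: float, head_pct: float, tolerance: float = 0.0) -> bool:
    """Merge-gate check: head coverage must not drop below base coverage."""
    return head_pct - base_pct >= -tolerance

assert coverage_gate(91.5, 92.0)      # coverage went up: pass
assert not coverage_gate(92.0, 90.0)  # coverage dropped: block merge
```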
Platform-Specific Rules (GitHub/GitLab)
- Require minimum 2 approvals.
- Block direct pushes to protected branches.
- Auto-dismiss stale approvals after new commits.
- Enable "suggested changes" to allow quick fixes.
Testing Rules
- Each PR must add/adjust tests for new paths (red-green-refactor).
- For legacy code without tests, author writes characterization tests first.
- Snapshot/UI tests must include deterministic seeds.
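A characterization test pins down what legacy code does *today* so a later refactor can be verified as behavior-preserving; a Python sketch (`legacy_round` is a stand-in for real legacy code):

```python
# Characterization test: record what the legacy code DOES, not what it
# "should" do -- surprises included.

def legacy_round(value: float) -> int:
    # Hypothetical legacy function: relies on round()'s half-to-even behavior.
    return round(value)

def test_characterize_legacy_round():
    # Pinned observations, including the surprising half-to-even cases.
    assert legacy_round(2.5) == 2   # not 3: Python rounds half to even
    assert legacy_round(3.5) == 4
    assert legacy_round(2.4) == 2

test_characterize_legacy_round()
```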
Performance & Security
- Add benchmarks when algorithmic complexity changes.
- For DB queries, attach EXPLAIN output in PR description if changed.
- If new external dependency added, check license & CVE score ≤7.
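When a change claims better algorithmic complexity, a micro-benchmark in the PR backs it up. A sketch using only the standard library, with list-vs-set membership standing in for the complexity change:

```python
import timeit

# Compare O(n) membership against O(1) after switching a hot path
# from list to set. Worst case: the probed element is last in the list.
data_list = list(range(10_000))
data_set = set(data_list)

t_list = timeit.timeit(lambda: 9_999 in data_list, number=1_000)
t_set = timeit.timeit(lambda: 9_999 in data_set, number=1_000)
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")
```

Paste numbers like these into the PR description alongside the `EXPLAIN` output for query changes.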
Common Pitfalls to Flag
- Mixed concerns in single PR (split them).
- Large binary files committed (use Git LFS).
- Missing rollback strategy in migrations.
- Secrets checked in (scan with truffleHog).
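truffleHog (or gitleaks) is the right tool for secret scanning; purely to illustrate the idea, a naive pattern check (patterns are simplified and far from complete):

```python
import re

# Naive secret patterns -- illustration only; use truffleHog/gitleaks in CI.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
]

def find_secrets(text: str) -> list:
    """Return the patterns that match anywhere in the given text."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

assert find_secrets("nothing suspicious here") == []
```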
Example PR Template
```
### Context
Fixes #1234 – incorrect VAT calculation.
### Changes
* Refactor tax module
* Add EU VAT rates table
### Testing
`pytest tests/tax --cov` → 92% coverage.
### Security Impact
None – no external input.
### Checklist
- [x] Lint/Type checks pass
- [x] SBOM updated
- [x] Added unit & integration tests
```
Adopt these rules to ensure every code review is efficient, comprehensive, and continuously improves team velocity and software quality.