Comprehensive rules for writing safe, performant, and maintainable concurrent Go services.
You know the drill. Your Go service handles 10 requests per second in development, then crumbles under production load. Race conditions appear only in CI. Goroutines leak until your service crashes. Memory usage climbs steadily until the container gets OOMKilled at 3 AM.
Sound familiar? You're not alone. Even experienced Go developers struggle with concurrency complexity, and the cost is real: production incidents, performance bottlenecks, and sleepless nights debugging race conditions that only appear under load.
Here's what happens when concurrent Go code goes wrong:
Production Failures: That subtle race condition in your user authentication service that only triggers under peak load, causing login failures for 15% of users.
Resource Leaks: Goroutines that never terminate, slowly consuming memory until your service needs daily restarts. One major e-commerce platform lost $40,000/hour during Black Friday due to goroutine leaks.
Performance Degradation: Mutex contention bringing your high-throughput API to its knees. What should handle 1000 RPS now struggles with 100.
Debugging Nightmares: Intermittent deadlocks that take weeks to reproduce and fix, while your team context-switches between firefighting and feature development.
The real problem isn't that concurrency is hard—it's that most Go developers lack systematic approaches to concurrent code design, testing, and optimization.
These Cursor Rules transform how you write concurrent Go services. Instead of hoping your code works under load, you'll build systems that are provably safe, measurably fast, and maintainable at scale.
What You Get:
```go
// Typical goroutine leak waiting to happen: no cancellation path,
// so this goroutine can never be told to exit.
go func() {
	for {
		select {
		case work := <-workChan:
			process(work) // What if this blocks forever?
		}
	}
}()
```
```go
// Self-documenting, leak-proof pattern
func (s *Service) StartWorker(ctx context.Context) {
	s.wg.Add(1) // paired with the deferred Done below
	go func() {
		defer s.wg.Done()
		for {
			select {
			case work := <-s.workChan:
				if err := s.processWithTimeout(ctx, work); err != nil {
					s.logger.Error("work failed", zap.Error(err))
				}
			case <-ctx.Done():
				return // Guaranteed cleanup path
			}
		}
	}()
}
```
Database Operations: Instead of fighting connection pool exhaustion and transaction deadlocks, you'll implement proper isolation levels with automatic retry logic for serialization failures.
HTTP Services: Your APIs will handle request bursts gracefully with CPU-aware concurrency limits, proper timeouts, and automatic goroutine lifecycle management.
Background Processing: Worker pools that scale with available CPU cores, handle backpressure elegantly, and shut down cleanly without losing work.
Latency Reduction: One team reduced P99 latency from 2.3s to 180ms by implementing proper channel patterns and eliminating mutex contention.
Throughput Gains: A financial services API increased throughput from 500 to 3,000 RPS by switching from global mutexes to lock-free atomic operations for hot counters.
Reliability Improvement: Production incidents dropped 80% after implementing systematic context cancellation and proper error propagation across goroutines.
Your code passes go test -race every time. No more mysterious bugs that only appear under load.
Every goroutine has a guaranteed termination path. Memory usage stays constant even during traffic spikes.
CPU-aware goroutine management means your service scales with available cores instead of fighting the scheduler.
When issues occur, comprehensive instrumentation with pprof and structured logging pinpoints problems in minutes, not days.
Automated concurrency testing in CI catches issues before production. Your team spends time building features, not firefighting.
```bash
# Add to your Cursor settings
curl -o .cursorrules https://raw.githubusercontent.com/your-repo/cursor-rules/main/go-concurrency.json
```
```
internal/concurrency/
├── worker-pool.go   # Reusable worker patterns
├── semaphore.go     # Resource limiting
└── retry.go         # Exponential backoff
service/
├── handler.go       # HTTP handlers with concurrency contracts
└── processor.go     # Business logic
main.go              # Graceful shutdown coordination
```
Start with worker pools for any background processing:
```go
// Cursor automatically generates this pattern
type WorkerPool struct {
	workChan chan Work
	workers  int
	wg       sync.WaitGroup
}

func NewWorkerPool(ctx context.Context, size int) *WorkerPool {
	p := &WorkerPool{
		workChan: make(chan Work, size*2), // Buffered to absorb bursts
		workers:  size,
	}
	for i := 0; i < size; i++ {
		p.wg.Add(1)
		go p.worker(ctx, i) // Each worker gets a unique ID for debugging
	}
	return p
}
```
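The constructor above references a worker method that the snippet leaves out. A minimal sketch of what it might look like, plus a Stop method that waits for the pool to drain (Work and process are placeholders carried over from the snippet):

```go
// worker consumes workChan until the channel is closed or ctx is
// cancelled; the deferred Done is paired with Add(1) in NewWorkerPool.
func (p *WorkerPool) worker(ctx context.Context, id int) {
	defer p.wg.Done()
	for {
		select {
		case work, ok := <-p.workChan:
			if !ok {
				return // channel closed by the producer: drain complete
			}
			process(work) // placeholder for the real work handler
		case <-ctx.Done():
			return // cancellation: guaranteed termination path
		}
	}
}

// Stop blocks until every worker goroutine has exited.
func (p *WorkerPool) Stop() { p.wg.Wait() }
```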
```bash
# Race detection in CI
go test -race -timeout=30s ./...

# Load testing integration
vegeta attack -targets=endpoints.txt -rate=1000 -duration=60s
```
```go
// Automatic pprof instrumentation
// (the pprof handlers require a blank import: _ "net/http/pprof")
if *enablePprof {
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
}
```
Your team ships concurrent features without fear. New developers onboard quickly with systematic patterns. Performance optimization becomes predictable, not guesswork.
Stop treating concurrency as a necessary evil that breaks your production systems. These Cursor Rules give you the systematic approach that turns Go's concurrency primitives into a competitive advantage.
Your services will handle traffic spikes gracefully, your debugging sessions will be measured in minutes instead of days, and your team will ship concurrent features with confidence.
The question isn't whether you can afford to implement these patterns. It's whether you can afford not to—especially the next time a race condition takes down your production system during peak traffic.
Start building bulletproof concurrent Go services today.
You are an expert in Go 1.22+, Linux, Docker, Kubernetes, PostgreSQL, Redis.
Key Principles
- Prefer immutable data structures and message-passing over shared mutable state.
- Keep the number of active goroutines proportional to available CPU cores (GOMAXPROCS).
- Never block a goroutine indefinitely; always provide a cancellation path (context.Context or timeout).
- Fail fast: detect violation of invariants early and return explicit, wrapped errors.
- Measure first, optimise second: instrument every hot path with pprof and metrics.
- Code must pass "go vet" and "staticcheck", and run clean under "go test -race".
- Every public API must define its concurrency contract in comments (thread-safety, ordering guarantees, timeouts).
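As a concrete example, a concurrency contract on a public type might read like this (Cache and its fields are illustrative, not part of the rules):

```go
// Cache is a fixed-size in-memory cache.
//
// Concurrency contract: all methods are safe for concurrent use by
// multiple goroutines. Get never blocks. Put honours the deadline on
// the ctx passed to it. No ordering is guaranteed between concurrent
// Puts for the same key.
type Cache struct {
	mu    sync.RWMutex
	items map[string][]byte
}
```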
Go
- Launch work with anonymous self-describing goroutines only when latency improvement is measurable. Otherwise keep execution synchronous.
- Always pair a goroutine with either:
• a Context for cancellation; or
• a WaitGroup when the parent must wait; never both.
- Use sync/atomic for single-value hot counters; use sync.Map only for read-mostly patterns; avoid global mutexes.
- Channels
• Use unbuffered channels for hand-off semantics; buffered channels for bursty producers (capacity ≤ peak burst size). See the hand-off sketch after this section.
• Close channels only from the sender side; never close a channel you didn’t create.
• Replace select{case <-ctx.Done(): … default: …} polling with blocking receive when possible.
- Locks
• Use sync.RWMutex for read-dominated access; switch to plain sync.Mutex once writes exceed roughly 20% of operations.
• Acquire multiple locks in lexical order (A then B) to prevent deadlocks.
• Wrap mutex acquisition in helper functions to enforce ordering when more than two locks exist.
- Goroutine Leaks
• Detect with pprof.Lookup("goroutine").WriteTo(os.Stderr, 1) from runtime/pprof.
• When returning from a function that spawned goroutines, ensure all are done via WaitGroup or Context.
- File/Package Layout
• internal/concurrency/* : reusable patterns (worker-pool.go, semaphore.go, retry.go).
• service/* : business logic with explicit concurrency contract.
• main.go : wiring, graceful shutdown, signal handling.
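A minimal sketch of the channel rules above: an unbuffered hand-off where only the sender closes the channel, and the goroutine always has a cancellation path (Job, Result, and run are placeholders):

```go
func produce(ctx context.Context, jobs []Job) <-chan Result {
	out := make(chan Result) // unbuffered: hand-off semantics
	go func() {
		defer close(out) // closed by the sender side only
		for _, j := range jobs {
			select {
			case out <- run(j): // blocks until a consumer takes the value
			case <-ctx.Done():
				return // cancellation path: no leaked goroutine
			}
		}
	}()
	return out
}
```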
Error Handling and Validation
- Validate arguments before launching goroutines; do not start work you’ll reject later.
- Wrap errors with fmt.Errorf and the %w verb, including a goroutine identifier when applicable.
- For fan-out/fan-in patterns use errgroup.Group; all sub-tasks must return errors not panics.
- Panic is allowed only for programmer errors (e.g., nil pointer on immutable struct). Recover exactly once in main() and log the stack trace.
- Use exponential back-off with jitter for retriable concurrency errors (e.g., database serialization failures).
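A sketch of that back-off rule; retrySerializable is a hypothetical helper, and a real version would additionally inspect the error for SQLSTATE 40001 before retrying:

```go
func retrySerializable(ctx context.Context, attempts int, fn func() error) error {
	backoff := 50 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter: sleep a random duration in [backoff, 2*backoff).
		jitter := time.Duration(rand.Int63n(int64(backoff)))
		select {
		case <-time.After(backoff + jitter):
		case <-ctx.Done():
			return ctx.Err()
		}
		backoff *= 2
	}
	return fmt.Errorf("retries exhausted: %w", err)
}
```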
net/http + database/sql Framework Rules
- Use http.Server with ReadTimeout, WriteTimeout and IdleTimeout configured (< 30 s) to mitigate Slowloris attacks.
- Handle each request in its own goroutine (provided by net/http). Do NOT spawn child goroutines unless doing long-running async work; pass request Context downward.
- Start from database/sql connection-pool defaults, then set MaxOpenConns to about 2× CPU cores and MaxIdleConns to CPU cores (see the sketch after this section).
- Wrap DB operations in transactions with proper isolation level; for high-contention tables prefer PostgreSQL SERIALIZABLE + retry loop.
- Never hold DB connections while waiting on network I/O of another service.
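Those timeouts and pool sizes translate into roughly the following setup (newServer is a hypothetical wiring helper; mux and db come from the caller):

```go
func newServer(mux http.Handler, db *sql.DB) *http.Server {
	// Pool sizing per the rule above; runtime.NumCPU approximates cores.
	db.SetMaxOpenConns(2 * runtime.NumCPU())
	db.SetMaxIdleConns(runtime.NumCPU())

	return &http.Server{
		Addr:         ":8080",
		Handler:      mux,
		ReadTimeout:  10 * time.Second, // all three under 30s
		WriteTimeout: 15 * time.Second, // to mitigate Slowloris
		IdleTimeout:  25 * time.Second,
	}
}
```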
Testing
- go test -race is mandatory in CI.
- For load tests use Vegeta or hey; for distributed tests use Locust. Target 2× expected peak RPS.
- Stress-test for goroutine leaks with go test -run TestXxx -v -count=1 -race -timeout 30s.
- Use deterministic seeds via rand.NewSource to make concurrent tests reproducible (see the sketch below).
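For the deterministic-seed rule, one possible shape (the test name and the pool under test are placeholders):

```go
func TestPoolIsReproducible(t *testing.T) {
	// Fixed seed: a failing run can be replayed exactly.
	rng := rand.New(rand.NewSource(42))
	jobs := make([]int, 100)
	for i := range jobs {
		// rng is not goroutine-safe, so generate all inputs up front,
		// before any concurrency starts.
		jobs[i] = rng.Intn(1000)
	}
	// ... feed jobs to the worker pool under test ...
}
```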
Performance
- Continuously profile with pprof; automate "go tool pprof -top" in benchmarks.
- Use sync.Pool for short-lived, uniformly-sized objects whose allocation costs more than ~10 µs (see the sketch after this list).
- Avoid reflection on hot paths; prefer code-generated marshaling (e.g., easyjson).
- Inline trivial atomic counters: var hits uint64; atomic.AddUint64(&hits, 1) (or, since Go 1.19, var hits atomic.Uint64; hits.Add(1)).
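A minimal sync.Pool sketch for the rule above (bytes.Buffer stands in for any short-lived, uniformly-sized object):

```go
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func render(msg string) []byte {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf)
	buf.WriteString(msg)
	// Copy out before the deferred Put returns the buffer to the pool.
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes())
	return out
}
```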
Security
- Enforce context deadlines to avoid resource exhaustion DoS.
- Guard critical sections against TOCTOU races: re-check critical pre-conditions after acquiring locks (see the sketch after this list).
- Dispose of sensitive data with explicit zeroing when using byte slices that cross goroutines.
- Follow GDPR Privacy-by-Design: no sharing of personal data across worker pools without encryption.
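The TOCTOU rule in miniature (Account and ErrInsufficientFunds are illustrative): the pre-condition is checked under the same lock that protects the mutation, never before it.

```go
type Account struct {
	mu      sync.Mutex
	balance int64
}

var ErrInsufficientFunds = errors.New("insufficient funds")

func (a *Account) Withdraw(amount int64) error {
	a.mu.Lock()
	defer a.mu.Unlock()
	// Check and use happen under one lock; a check performed before
	// Lock() could be stale by the time the balance is mutated.
	if a.balance < amount {
		return ErrInsufficientFunds
	}
	a.balance -= amount
	return nil
}
```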
Common Pitfalls & Remedies
- Deadlocks: Use go test -run TestDeadlock -timeout 5s -race and enable runtime.SetBlockProfileRate(1).
- Starvation: Monitor active goroutines vs. CPUs; if the goroutine count exceeds ~10× the CPU count and the average runnable queue stays above 1, investigate scheduling.
- Throttling: Respect third-party API rate-limits by gating outgoing requests with a token bucket implemented via time.Ticker + channel.
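One possible shape for that ticker-fed token bucket (tokenBucket, rate, and burst are placeholders, not a prescribed API):

```go
// tokenBucket emits up to rate tokens per second with capacity burst;
// callers do <-tokens before each outgoing request.
func tokenBucket(ctx context.Context, rate, burst int) <-chan struct{} {
	tokens := make(chan struct{}, burst)
	go func() {
		defer close(tokens)
		ticker := time.NewTicker(time.Second / time.Duration(rate))
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				select {
				case tokens <- struct{}{}: // add a token if there is room
				default: // bucket full: drop the tick
				}
			case <-ctx.Done():
				return
			}
		}
	}()
	return tokens
}
```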
Cheat-Sheet Patterns
- Worker Pool
```go
// Workers run until the returned CancelFunc fires; the pool then
// drains with no leaked goroutines.
func NewPool(size int) (*Pool, context.CancelFunc) { /* ... */ }
```
- Semaphore
```go
sem := make(chan struct{}, 8) // capacity = max concurrent holders
sem <- struct{}{}             // acquire (blocks once 8 slots are taken)
<-sem                         // release
```
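The same semaphore bounding a fan-out, for context (urls and fetch are placeholders):

```go
sem := make(chan struct{}, 8) // at most 8 fetches in flight
var wg sync.WaitGroup
for _, u := range urls {
	wg.Add(1)
	go func(u string) {
		defer wg.Done()
		sem <- struct{}{}        // acquire a slot
		defer func() { <-sem }() // release it on every exit path
		fetch(u)
	}(u)
}
wg.Wait()
```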
- Fan-Out / Fan-In with errgroup
```go
g, ctx := errgroup.WithContext(ctx)
for _, job := range jobs {
	j := job // copy: redundant since Go 1.22's per-iteration loop vars, but harmless
	g.Go(func() error { return process(ctx, j) })
}
if err := g.Wait(); err != nil { /* handle */ }
```
Reference Tools
- staticcheck.io – static analysis suite
gofmt, gofumpt – code formatting
- race detector – built-in
- pprof & trace – performance
- delve – debugging concurrent processes
- Uber’s atomic & zap for zero-alloc logging