Comprehensive Rules for building, operating, and scaling backend services that use message queues.
Your message queue implementations are eating up valuable development time. Context-switching between queue systems, debugging cryptic failures, and hand-rolling error-handling patterns are slowing your team's delivery velocity.
Building robust message queue systems in production means juggling several complex concerns at once: connection management, retries and error handling, idempotency, message ordering, and monitoring.
The result? Developers spend more time fighting infrastructure than building business logic.
These Cursor Rules transform your IDE into a message queue development powerhouse that knows how to build production-ready queue systems across Kafka, RabbitMQ, and AWS SQS. Instead of memorizing different APIs and patterns, you get consistent, battle-tested implementations that handle the complexity for you.
The rules encode advanced message queue patterns like idempotent consumers, proper error handling strategies, and monitoring best practices directly into your development workflow.
Eliminate Queue-Specific Learning Curves
Built-in Production Reliability
Consistent Error Handling Across Systems
Before: Manually researching Kafka producer configuration, figuring out proper partitioning strategy, implementing retry logic, setting up monitoring
With Rules: Ask for "user signup event publisher with Kafka" and get:
```typescript
import { Kafka } from 'kafkajs';

// Broker address is illustrative; read it from config/env in practice
const kafka = new Kafka({ clientId: 'user-service', brokers: ['localhost:9092'] });
const kafkaProducer = kafka.producer({ idempotent: true });

// Generated publisher with proper configuration
// (assumes kafkaProducer.connect() was awaited at start-up)
export async function publishUserSignupEvent(userId: string, email: string) {
  const message = {
    userId,
    email,
    timestamp: new Date().toISOString(),
    version: 'v1'
  };

  // Idempotent producer plus keyed messages: per-user ordering, safe retries
  await kafkaProducer.send({
    topic: 'user.signup.created.v1',
    messages: [{
      key: userId, // proper partitioning
      value: JSON.stringify(message),
      headers: { 'content-type': 'application/json' }
    }]
  });
}
```
Time Saved: 3-4 hours of research and implementation → 5 minutes of generation and customization
Before: Implementing RabbitMQ consumer, setting up proper topology, adding validation, implementing retry logic, configuring monitoring
With Rules: Request "invoice processing consumer with RabbitMQ and retry handling" and receive:
```typescript
import type { Channel, ConsumeMessage } from 'amqplib';
import { z } from 'zod';

const InvoiceSchema = z.object({ invoiceId: z.string(), amount: z.number() });

declare function processInvoicePayment(invoice: z.infer<typeof InvoiceSchema>): Promise<void>;

// Complete consumer with all production patterns
export class InvoiceConsumer {
  constructor(private readonly channel: Channel) {}

  async processMessage(message: ConsumeMessage) {
    try {
      // Schema validation before any business logic runs
      const invoice = InvoiceSchema.parse(JSON.parse(message.content.toString()));

      // Business logic here
      await processInvoicePayment(invoice);

      // Acknowledge only after the work is committed
      this.channel.ack(message);
    } catch (error) {
      // Retry strategy with exponential backoff (elided); dead-letter as a fallback
      await this.handleError(message, error);
    }
  }

  private async handleError(message: ConsumeMessage, _error: unknown) {
    this.channel.nack(message, false, false); // route to DLQ rather than requeue
  }
}
```
Time Saved: 2-3 days of implementation and testing → 30 minutes of setup and business logic
Before: Manually querying queue metrics, setting up monitoring dashboards, correlating logs across services
With Rules: Get monitoring setup with proper metrics collection:
```typescript
import * as prometheus from 'prom-client';

// Automatic metrics export
export const queueMetrics = {
  queueDepth: new prometheus.Gauge({
    name: 'queue_depth',
    help: 'Current queue depth',
    labelNames: ['queue_name']
  }),
  processingTime: new prometheus.Histogram({
    name: 'msg_processing_time',
    help: 'Message processing duration'
  })
};
```
Time Saved: 1-2 hours of dashboard setup → Instant monitoring with alerts
1. Copy the ruleset into a `.cursor-rules` file in your project root.
2. Ask Cursor: "Create a user notification publisher for AWS SQS with retry handling"
3. Request: "Build the corresponding consumer with proper error handling and monitoring"
4. Generate: "Create integration tests for this queue using testcontainers"
5. Ask for: "Add Grafana dashboard configuration for queue monitoring"
The payoff spans development velocity, code quality improvements, production reliability, and team productivity: your message queue development transforms from a complex infrastructure challenge into a streamlined, pattern-driven workflow. Stop reinventing queue patterns and start shipping features faster.
You are an expert in Node.js (TypeScript), Apache Kafka, RabbitMQ, AWS SQS, Docker + Kubernetes, Prometheus, and Grafana.
Key Principles
- Design for loose coupling: producers never depend on consumer code or availability.
- Prefer publish/subscribe or fan-out topologies over hard-coded direct queues.
- Treat every network/IO call as fallible; add time-outs, retries, circuit breakers.
- Make every consumer idempotent (safe to process the same message more than once); see the sketch after this list.
- "Fail fast, recover quickly": validate input immediately and nack/republish to DLQ with minimal processing.
- Keep messages small and self-contained (<= 1 MiB when possible) and versioned (add `v` field or semantic topic names).
- Infrastructure as code: queues, topics, DLQs, and alert rules live in code (Terraform, CDK, Helm, etc.).
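A minimal sketch of the idempotency principle above. The in-memory `processedIds` set and the `handleOnce` helper are illustrative stand-ins; production code would use a durable store (a DB table or Redis set) keyed by message id:

```typescript
// Illustrative only: an in-memory set; use a durable store in production
const processedIds = new Set<string>();

// Hypothetical helper: run the handler at most once per message id
export async function handleOnce(messageId: string, handler: () => Promise<void>): Promise<void> {
  if (processedIds.has(messageId)) return; // duplicate delivery: skip safely
  await handler();
  processedIds.add(messageId); // record only after the handler succeeds
}
```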
TypeScript (Node.js)
- Use `async`/`await` and native Promises; never block the event loop.
- Always type message payloads with `interface` and validate at runtime with `zod` or `io-ts`.
- Publish helpers live in `/src/queues/<queue-name>/publisher.ts`; consumers in `/src/queues/<queue-name>/consumer.ts`.
- Export pure helper functions; avoid classes for stateless logic. If stateful, wrap the connection/channel in a small class with a factory (`createKafkaProducer()`; sketched after this list).
- Use `const` for bindings, `camelCase` for variables, `UPPER_SNAKE_CASE` for env vars.
- Do not swallow errors; propagate (`throw`) or `nack` with reason.
- Lint with `@typescript-eslint`, format with Prettier, strict mode enabled (`"strict": true` in `tsconfig.json`).
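Putting the typing and factory rules together, a sketch (the schema shape, client id, and broker list are assumptions):

```typescript
import { Kafka, Producer } from 'kafkajs';
import { z } from 'zod';

// Runtime schema doubles as the compile-time payload type
export const UserSignupSchema = z.object({
  userId: z.string(),
  email: z.string().email(),
  version: z.literal('v1')
});
export type UserSignup = z.infer<typeof UserSignupSchema>;

// Stateful connection wrapped behind a small factory, per the rules above
export function createKafkaProducer(brokers: string[]): Producer {
  return new Kafka({ clientId: 'my-service', brokers }).producer({ idempotent: true });
}
```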
Error Handling and Validation
- Start every consumer with schema validation: reject → DLQ if invalid.
- Retry strategy:
• Immediate retry ≤ 3 attempts for transient errors (network, time-outs).
• Exponential backoff queue tiers: 1 min, 5 min, 30 min.
- Dead Letter Queue (DLQ): include original payload, headers, and stack trace in `x-error` header; retain ≥ 14 days.
- Use ACK/NACK semantics properly: ACK only after the business transaction is fully committed (DB, external calls).
- Implement circuit breaker on external dependencies to prevent queue storm (e.g., `opossum` library).
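A sketch of the `opossum` circuit-breaker rule; `callPaymentGateway` is a hypothetical external dependency and the thresholds are examples, not prescriptions:

```typescript
import CircuitBreaker from 'opossum';

declare function callPaymentGateway(payload: unknown): Promise<void>; // hypothetical dependency

const breaker = new CircuitBreaker(callPaymentGateway, {
  timeout: 3_000,                // treat calls slower than 3s as failures
  errorThresholdPercentage: 50,  // open the breaker when half the calls fail
  resetTimeout: 30_000           // probe the dependency again after 30s
});

// When the breaker is open, fire() rejects immediately, so the message
// flows into the retry/DLQ pipeline instead of piling up work.
export const processPayment = (payload: unknown) => breaker.fire(payload);
```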
Framework-Specific Rules
Apache Kafka (kafkajs)
- Topics: `<domain>.<entity>.<event>.v<major>` (e.g., `billing.invoice.created.v1`).
- Partitions: key on business entity (e.g., `invoiceId`) for order guarantee.
- Consumer group id = `<service-name>-v<serviceVersion>`.
- Enable idempotence & acks: `acks: -1`, `enableIdempotence: true`.
- Use `transactional` producer when publishing outbox events inside DB tx.
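A sketch of a transactional outbox publisher under these rules; the client id, broker address, and `transactionalId` are placeholders:

```typescript
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ clientId: 'billing-service', brokers: ['localhost:9092'] });

// Idempotent, transactional producer
const producer = kafka.producer({
  idempotent: true,
  maxInFlightRequests: 1,
  transactionalId: 'billing-outbox'
});

export async function publishInvoiceCreated(invoiceId: string, event: object) {
  const tx = await producer.transaction();
  try {
    await tx.send({
      topic: 'billing.invoice.created.v1',
      messages: [{ key: invoiceId, value: JSON.stringify(event) }] // keyed for ordering
    });
    await tx.commit();
  } catch (err) {
    await tx.abort();
    throw err;
  }
}
```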
RabbitMQ (amqplib)
- A topology file (`topology.ts`) declares exchanges, queues, and bindings on start-up (sketched after this list).
- Use topic exchange and routing keys (`*.created`, `*.deleted`).
- Set `prefetch` to match consumer concurrency to avoid memory explosions.
- Messages flagged `persistent: true` unless explicitly transient.
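A `topology.ts` sketch along these lines, with illustrative exchange and queue names:

```typescript
import { connect } from 'amqplib';

// Declare exchanges, queues, and bindings idempotently on start-up
export async function assertTopology(url: string) {
  const connection = await connect(url);
  const channel = await connection.createChannel();

  await channel.assertExchange('billing.events', 'topic', { durable: true });
  await channel.assertQueue('invoice-processor', { durable: true });
  await channel.bindQueue('invoice-processor', 'billing.events', 'invoice.*');

  // Cap in-flight messages to match consumer concurrency
  await channel.prefetch(10);
  return channel;
}
```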
AWS SQS
- FIFO queues for ordering & exactly-once; standard queues when throughput > 3k msg/s.
- Message group id = business entity id.
- Use `messageDeduplicationId` with a GUID for exactly-once.
- DLQ redrive policy: `maxReceiveCount: 5`.
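A FIFO publish sketch with the AWS SDK v3; the queue URL and entity naming are assumptions:

```typescript
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';
import { randomUUID } from 'node:crypto';

const sqs = new SQSClient({});

export async function sendOrderEvent(queueUrl: string, orderId: string, body: object) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(body),
    MessageGroupId: orderId,             // ordering within a single order
    MessageDeduplicationId: randomUUID() // exactly-once within the dedupe window
  }));
}
```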
Testing
- Use `testcontainers` to spin up Kafka/RabbitMQ in CI.
- Contract tests: publish fixture message → assert consumer writes correct DB state.
- Chaos test: kill consumer pods while producing; ensure no message loss.
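A minimal testcontainers setup (Jest-style hooks and the RabbitMQ image tag assumed):

```typescript
import { GenericContainer, StartedTestContainer } from 'testcontainers';

let rabbit: StartedTestContainer;

beforeAll(async () => {
  // Spin up a real broker for the test run instead of mocking
  rabbit = await new GenericContainer('rabbitmq:3-management')
    .withExposedPorts(5672)
    .start();
  process.env.AMQP_URL = `amqp://${rabbit.getHost()}:${rabbit.getMappedPort(5672)}`;
}, 60_000);

afterAll(async () => {
  await rabbit.stop();
});
```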
Performance & Monitoring
- Export metrics: `queue_depth`, `msg_processing_time`, `consumer_lag`, `dead_letter_rate`.
- Dashboards: Grafana panel per queue with 95th/99th percentiles.
- Alerts: lag > X messages for Y minutes OR DLQ spike > threshold.
- Scale consumers horizontally (Kubernetes HPA) based on `consumer_lag` metric.
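A sketch of how a consumer might record the `msg_processing_time` histogram with `prom-client`; the `queue_name` label is an assumption:

```typescript
import { Histogram } from 'prom-client';

const processingTime = new Histogram({
  name: 'msg_processing_time',
  help: 'Message processing duration in seconds',
  labelNames: ['queue_name']
});

// Wrap a handler so every invocation observes its elapsed time
export async function withTiming(queueName: string, handler: () => Promise<void>) {
  const end = processingTime.startTimer({ queue_name: queueName });
  try {
    await handler();
  } finally {
    end(); // records elapsed seconds even when the handler throws
  }
}
```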
Security
- Encrypt in transit: TLS for all brokers; require certificate validation.
- Encrypt at rest when the broker supports it (Kafka on encrypted EBS volumes, RabbitMQ via server config).
- AuthN/Z: SASL/SCRAM for Kafka, username/password or certs for RabbitMQ, IAM policies for SQS.
- Never log full payloads that contain PII; use hashing/redaction middleware.
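A redaction-middleware sketch per the PII rule; `PII_FIELDS` is an assumed field list:

```typescript
import { createHash } from 'node:crypto';

const PII_FIELDS = new Set(['email', 'name', 'phone']); // assumed; extend per domain

// Replace PII values with a truncated hash: logs stay correlatable, not readable
export function redactForLogging(payload: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(payload).map(([key, value]) =>
      PII_FIELDS.has(key)
        ? [key, createHash('sha256').update(String(value)).digest('hex').slice(0, 12)]
        : [key, value]
    )
  );
}
```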
Deployment & Ops
- Docker images: one per consumer group; tag with git SHA.
- Health probes: `/liveness` returns 200; `/readiness` returns 200 only when the broker connection and channel are open (see the sketch after this list).
- Rollback plan: consumers start from the latest offset (`--from-latest`) unless explicitly configured otherwise, to avoid replay floods.
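A probe-endpoint sketch for the health rules above; `isChannelOpen` stands in for however your service tracks broker connection state:

```typescript
import express from 'express';

export function startProbes(isChannelOpen: () => boolean, port = 8080) {
  const app = express();

  app.get('/liveness', (_req, res) => res.sendStatus(200));

  // Ready only while the broker connection and channel are usable
  app.get('/readiness', (_req, res) => res.sendStatus(isChannelOpen() ? 200 : 503));

  app.listen(port);
}
```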
Common Pitfalls & Guardrails
- Don’t batch unrelated messages; keep atomic per event.
- Avoid long-running handlers (> 30 s) — offload to worker queues.
- Ensure producer backpressure: block/slow when broker responds with `queue_full`.
- Always version schemas; never change meaning of existing fields.
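One way to honor producer backpressure with `amqplib`: `publish()` returns `false` when the client's write buffer is full, and the channel emits `'drain'` once it empties. A sketch:

```typescript
import type { Channel } from 'amqplib';

export function publishWithBackpressure(
  channel: Channel,
  exchange: string,
  routingKey: string,
  body: Buffer
): Promise<void> {
  return new Promise((resolve) => {
    // publish() returning false means the buffer is full: wait for 'drain'
    const ok = channel.publish(exchange, routingKey, body, { persistent: true });
    if (ok) resolve();
    else channel.once('drain', () => resolve());
  });
}
```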
Directory Snapshot
```
src/
  queues/
    user-signup/
      publisher.ts
      consumer.ts
    invoice-paid/
      publisher.ts
      consumer.ts
    shared/
      queue-interfaces.ts
      validation.ts
  config/
    kafka.ts
    rabbitmq.ts
tests/
  queues/
    user-signup.consumer.spec.ts
```