Opinionated coding rules for implementing robust, observable and high-performance data-management patterns (CQRS, event sourcing, saga) in Java/Spring-Boot microservices.
Stop fighting distributed data complexity. Start building systems that scale naturally.
You're building microservices, but your data architecture is becoming a maintenance disaster. Sound familiar?
The problem isn't your technical skills—it's that traditional database patterns don't work in distributed systems. You need battle-tested patterns that embrace distributed reality instead of fighting it.
This Cursor Rules configuration implements proven data management patterns specifically designed for Java microservices at scale. You get opinionated, production-ready implementations of:
Database-per-Service with Smart Data Sharing
// Before: Brittle cross-service database access
@Entity
public class Order {
    @ManyToOne
    @JoinColumn(name = "customer_id") // Tight coupling nightmare
    private Customer customer;        // entity owned by another service
}
// After: Clean API-based data access
public record CreateOrderCommand(
    CustomerId customerId,
    List<OrderItem> items
) {}

@EventHandler
public void handle(CustomerCreatedEvent event) {
    // Maintain a local projection of customer data
    customerProjection.save(event.toProjection());
}
CQRS with Optimized Read/Write Paths
// Command side: Optimized for writes
@CommandHandler
@Transactional
public void handle(PlaceOrderCommand command) {
    var order = Order.create(command);
    orderRepository.save(order);
    eventPublisher.publish(new OrderPlacedEvent(order));
}
// Query side: Optimized for reads with denormalized data
@QueryHandler
public OrderSummaryView getOrderSummary(OrderId orderId) {
    return orderSummaryRepository.findById(orderId)
        .map(this::enrichWithCustomerData)
        .orElseThrow(() -> new OrderNotFoundException(orderId));
}
Event Sourcing with Exactly-Once Publishing
@EventSourcingHandler
public void on(PaymentProcessedEvent event) {
    this.balance = this.balance.add(event.getAmount());
    this.lastProcessedAt = event.getTimestamp();
}

// Transactional outbox: the entry is saved in the same transaction
// as the aggregate, so publishing is exactly-once
@TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
public void saveToOutbox(OrderPlacedEvent event) {
    outboxRepository.save(new OutboxEntry(event));
}
- Eliminate Cross-Service Data Coupling
- Guarantee Data Consistency Without Distributed Transactions
- Optimize Performance for Real Usage Patterns
- Make Debugging Distributed Systems Manageable
Before these rules: Spend 2-3 days setting up database access, managing connection pools, handling cross-service transaction failures, and debugging inconsistent state.
With these rules:
# Generate service scaffold with customer projection
./gradlew generateService --name=loyalty --depends=customer

// Customer events automatically create local projections
@EventHandler
public void on(CustomerCreatedEvent event) {
    customerProjection.save(CustomerProjection.from(event));
}
Time saved: 80% reduction in setup time, zero cross-service coupling issues.
Before: Wrestling with distributed transactions, compensating actions, and partial failure scenarios that take weeks to get right.
With these rules:
@SagaCoordinator
public class OrderFulfillmentSaga {

    @SagaStep(compensation = "cancelPayment")
    public void processPayment(ProcessPaymentCommand cmd) {
        paymentService.process(cmd);
    }

    @SagaStep(compensation = "releaseInventory")
    public void reserveInventory(ReserveInventoryCommand cmd) {
        inventoryService.reserve(cmd);
    }

    // Compensation methods must be idempotent; the saga runtime may retry them
    @Compensate
    public void cancelPayment(String paymentId) {
        paymentService.cancel(paymentId);
    }
}
Impact: Saga patterns work out of the box with automatic compensation, state tracking, and retry logic.
Before: Dig through logs across multiple services, try to correlate timestamps, and pray you can reproduce the issue.
With these rules: Every operation includes full traceability:
// Automatic trace propagation through events
@KafkaListener(topics = "order-events")
public void handle(OrderPlacedEvent event,
                   @Header("trace-id") String traceId) {
    try (var scope = tracer.startTrace(traceId)) {
        // All downstream operations inherit trace context
        processOrder(event);
    }
}
Result: End-to-end tracing shows exactly how data flowed between services, when consistency was achieved, and where failures occurred.
# Clone the rules and apply to your Cursor workspace
curl -O https://raw.githubusercontent.com/your-repo/cursor-rules/.cursorrules
// Directory structure is automatically enforced
src/main/java/com/company/orderservice/
├── api/ # REST controllers and DTOs
├── command/ # Write-side aggregates and handlers
├── query/ # Read-side projections and repositories
├── saga/ # Workflow coordinators
└── event/ # Kafka producers and consumers
# docker-compose.yml - automatically generated
version: '3.8'
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: orderservice
  kafka:
    image: confluentinc/cp-kafka:latest
  debezium:
    image: debezium/connect:latest
    # Automatic CDC configuration for outbox pattern
Let Cursor generate the boilerplate:
// Type: "Create order command with CQRS pattern"
// Cursor generates complete command/query separation:
@RestController
@Validated
public class OrderController {

    @PostMapping("/orders")
    public ResponseEntity<OrderId> createOrder(
            @Valid @RequestBody CreateOrderRequest request) {
        var command = CreateOrderCommand.from(request);
        var orderId = commandBus.send(command);
        return ResponseEntity.accepted().body(orderId);
    }

    @GetMapping("/orders/{id}")
    public OrderSummaryView getOrder(@PathVariable OrderId id) {
        return queryBus.query(new GetOrderQuery(id));
    }
}
The rules automatically configure comprehensive monitoring:
# Metrics automatically exported
management:
  endpoints:
    web:
      exposure:
        include: health,metrics,prometheus
  metrics:
    tags:
      service.name: order-service
      service.version: ${app.version}
- Development Velocity
- Production Reliability
- Team Productivity
- Operational Excellence
These aren't just configuration files—they're battle-tested patterns that solve the hardest problems in distributed data management. Your microservices will finally work the way they were supposed to from the beginning.
Ready to stop fighting your data architecture and start building systems that scale? These Cursor Rules give you everything you need to implement production-grade microservices data management patterns in Java.
You are an expert in Java 17+, Spring Boot 3.x, Apache Kafka, PostgreSQL, MongoDB, Debezium CDC, Amazon Redshift, Snowflake, Docker, Kubernetes, Prometheus/Grafana, OpenTelemetry.
Key Principles
- Each microservice owns its database; expose data solely via APIs or immutable events.
- Design for eventual consistency; asynchronous messaging > distributed ACID transactions.
- Model around bounded contexts (DDD) and favour immutable event logs.
- Apply CQRS: optimise command (write) and query (read) paths independently.
- Automate infrastructure: containerise, version schemas (Flyway/Liquibase) and CI/CD pipelines.
Java
- Target Java 17 LTS, enable `-Werror` and SpotBugs; block builds on warnings.
- Prefer `record` for value objects; mark event payload classes `@Immutable`.
- Return `Optional<T>` instead of `null`; never leak JPA entities outside the service.
- Use Spring Data repositories; keep aggregate roots package-private.
- Organise command/query hierarchies with sealed interfaces.
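A minimal plain-Java sketch of the last two bullets (records as value objects, sealed command hierarchies); the type names are illustrative and no Spring is required:

```java
import java.util.List;

// Sealed hierarchy: the compiler knows every command variant, so a
// router cannot silently miss one when a new command type is added.
sealed interface OrderCommand permits PlaceOrder, CancelOrder {}

// Records give immutable value objects with equals/hashCode for free.
record PlaceOrder(String customerId, List<String> skus) implements OrderCommand {}
record CancelOrder(String orderId, String reason) implements OrderCommand {}

class OrderCommandRouter {
    // Pattern-matching instanceof (Java 16+) keeps the dispatch flat.
    static String route(OrderCommand cmd) {
        if (cmd instanceof PlaceOrder p) return "place:" + p.customerId();
        if (cmd instanceof CancelOrder c) return "cancel:" + c.orderId();
        throw new IllegalStateException("unreachable: sealed hierarchy is closed");
    }
}
```

On Java 21+, the same dispatch can be an exhaustive `switch` over the sealed type, which fails compilation when a new variant is left unhandled.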
Error Handling and Validation
- Validate inbound commands with Jakarta Bean Validation; respond with RFC-7807 problem-details.
- Implement sagas with durable state tables; store step status (`PENDING|DONE|COMPENSATED|FAILED`).
- All compensating actions must be idempotent; require an `Idempotency-Key` header.
- Kafka listeners: wrap in retry (3 × exponential back-off) and send to a DLQ after exhaustion.
- Start functions with guard clauses; keep the happy path last.
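The listener retry rule above, reduced to a plain-Java sketch: three attempts with exponential back-off, then dead-letter. The base delay is an illustrative default; only the attempt cap comes from the rule:

```java
import java.time.Duration;

// Retry policy sketch for Kafka listeners: 3 attempts, exponential
// back-off, then route the message to a dead-letter queue (DLQ).
class RetryPolicy {
    static final int MAX_ATTEMPTS = 3;
    static final Duration BASE_DELAY = Duration.ofMillis(500);

    // Guard clause first, happy path last (per the rule above).
    static Duration backoffFor(int attempt) {
        if (attempt < 1 || attempt > MAX_ATTEMPTS) {
            throw new IllegalArgumentException("attempt out of range: " + attempt);
        }
        // Doubles each attempt: 500 ms, 1 s, 2 s for attempts 1..3.
        return BASE_DELAY.multipliedBy(1L << (attempt - 1));
    }

    static boolean shouldDeadLetter(int attemptsUsed) {
        return attemptsUsed >= MAX_ATTEMPTS;
    }
}
```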
Spring Boot
- One Gradle module per microservice; enforce boundaries via ArchUnit tests.
- Expose REST via spring-doc OpenAPI; auto-generate Pact contracts.
- Annotate command handlers `@Transactional`; never span multiple aggregates.
- Publish events through transactional-outbox + Debezium for exactly-once publishing; consumers still de-duplicate by event id.
- Enable Micrometer + OpenTelemetry; tag metrics with `service.name`, `db.instance`.
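The transactional-outbox bullet can be pictured as a single table row written in the same local transaction as the aggregate; Debezium then streams the row to Kafka. A minimal sketch of the row shape (field names are illustrative):

```java
import java.time.Instant;
import java.util.UUID;

// One row of the outbox table. Written atomically with the aggregate,
// so the event exists if and only if the state change committed.
record OutboxEntry(
        UUID id,            // unique message id, used by consumers to de-duplicate
        String aggregateId, // becomes the Kafka key -> per-entity ordering
        String eventType,   // e.g. "OrderPlaced"
        String payload,     // serialized event, typically JSON
        Instant createdAt
) {
    static OutboxEntry of(String aggregateId, String eventType, String payload) {
        return new OutboxEntry(UUID.randomUUID(), aggregateId, eventType, payload, Instant.now());
    }
}
```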
Testing
- Command tests: `@DataJpaTest` + Testcontainers PostgreSQL; assert state & events.
- Query tests: in-memory DB, load projections, verify read models.
- Event-sourcing tests: replay full event log, compare to snapshots.
- Contract tests: Pact for REST, JSON-schema validation for Kafka topics.
- Chaos tests: inject broker restarts & network latency to validate consistency.
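The event-sourcing test style above, reduced to plain Java: replay the full log and compare the rebuilt state to a snapshot value. The account domain and the amounts are illustrative:

```java
import java.util.List;

// A tiny event log for one aggregate.
sealed interface AccountEvent permits Deposited, Withdrawn {}
record Deposited(long cents) implements AccountEvent {}
record Withdrawn(long cents) implements AccountEvent {}

class AccountProjection {
    // Replaying the full log must reproduce the snapshot exactly;
    // any divergence means a handler or the snapshot is wrong.
    static long replay(List<AccountEvent> log) {
        long balance = 0;
        for (AccountEvent e : log) {
            if (e instanceof Deposited d) balance += d.cents();
            else if (e instanceof Withdrawn w) balance -= w.cents();
        }
        return balance;
    }
}
```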
Performance
- Shard requests by `tenantId`; use sticky pods for affinity.
- Add read-replicas for query models; mark replicas `read-only`.
- Align Kafka partitions with `aggregateId` to guarantee per-entity ordering.
- Partition PostgreSQL event tables by time; schedule aggressive autovacuum.
- Cache hot projections in Redis (TTL); never cache writes.
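Why aligning Kafka partitions with `aggregateId` guarantees per-entity ordering: any stable hash of the key maps all events of one aggregate to the same partition, and Kafka orders messages within a partition. A sketch of that property (Kafka's real default partitioner uses murmur2 on the serialized key; `String.hashCode` here is a stand-in):

```java
// Stable key -> partition mapping: the same aggregateId always lands
// on the same partition, so its events are consumed in order.
class PartitionRouting {
    static int partitionFor(String aggregateId, int partitionCount) {
        if (partitionCount <= 0) {
            throw new IllegalArgumentException("partitionCount must be > 0");
        }
        // Math.floorMod avoids negative results for negative hash codes.
        return Math.floorMod(aggregateId.hashCode(), partitionCount);
    }
}
```

Note the corollary: changing the partition count re-shuffles keys, so resize topics only when reordering is acceptable.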
Security
- Mutual TLS between services; automate rotation with cert-manager.
- Encrypt data at rest (cloud KMS); sensitive columns via pgcrypto.
- Enforce PostgreSQL Row-Level Security for multi-tenant tables.
- Use OAuth2/JWT; propagate claims in Kafka headers for lineage.
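Propagating claims in Kafka headers can be as simple as copying selected JWT claims into record headers, whose values are byte arrays. The `auth.` prefix is an illustrative convention, not a standard:

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Copies claims into header name/value pairs so downstream consumers
// can attribute every event to a principal (data lineage).
class ClaimHeaders {
    static Map<String, byte[]> fromClaims(Map<String, String> claims) {
        var headers = new LinkedHashMap<String, byte[]>();
        claims.forEach((name, value) ->
                headers.put("auth." + name, value.getBytes(StandardCharsets.UTF_8)));
        return headers;
    }
}
```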
Observability
- Propagate a single `trace-id` through HTTP and Kafka headers.
- Emit domain metrics (`timer`, `histogram`) for every saga step.
- Dashboards: data freshness < 5 s, outbox lag, DLQ depth, replication lag.
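The single `trace-id` rule reduces to reuse-or-create: keep an incoming id from the HTTP or Kafka header when present, mint a fresh one otherwise. In practice OpenTelemetry propagators handle this; the sketch only shows the rule:

```java
import java.util.Optional;
import java.util.UUID;

// Reuse an incoming trace-id so one trace spans every hop; start a new
// trace only at the true entry point of the request.
class TraceIds {
    static String resolve(Optional<String> incomingHeader) {
        return incomingHeader
                .filter(id -> !id.isBlank())              // ignore empty headers
                .orElseGet(() -> UUID.randomUUID().toString());
    }
}
```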
Deployment & CI/CD
- Helm chart per service; secrets via sealed-secrets.
- Run migrations in `initContainers`; fail fast on errors.
- Use feature flags; block deployment if write steps are partial.
- Perform blue/green only when topic backlog = 0 to avoid reordering.
Directory Structure
- src
- main/java/com/company/service
- api (controllers, DTOs)
- command (aggregates, handlers)
- query (projections, repositories)
- saga (coordinators, compensations)
- event (Kafka producers/consumers, schemas)
- test
- unit
- integration
- contract