# CQRS and Event Sourcing
CQRS (Command Query Responsibility Segregation) and event sourcing are frequently discussed together but are independent patterns. You can use either one without the other. Understanding each separately is essential before combining them.
## CQRS: Separate Read and Write Models
Most applications use the same data model for reads and writes. The same orders table serves the API that creates orders and the API that lists them. This works until read and write requirements diverge significantly.
When they diverge: The write side needs normalized data with referential integrity. The write model for an order is the order record, its line items, payment reference, and shipping address – each in separate tables with foreign keys. The read side needs a denormalized view: order summary with customer name, item descriptions, total, and status in a single query. Joins are expensive at scale.
CQRS separates these into two models:
```
Commands (writes) → Write Model (normalized, optimized for consistency)
                         ↓ events
Queries (reads)  ← Read Model (denormalized, optimized for queries)
```

The write model handles commands: PlaceOrder, CancelOrder, UpdateShippingAddress. It enforces business rules and persists state.
The read model serves queries: GetOrderSummary, ListOrdersByCustomer, GetOrdersAwaitingShipment. It is a denormalized projection optimized for specific query patterns.
## Practical CQRS Without Event Sourcing
The simplest CQRS implementation uses a regular database for writes and a separate read-optimized store:
```python
# Write side: normalized PostgreSQL
class OrderCommandHandler:
    def handle_place_order(self, cmd):
        order = Order.create(cmd.customer_id, cmd.items)
        self.order_repo.save(order)  # Writes to PostgreSQL
        self.event_bus.publish(
            OrderPlaced(order.id, order.customer_id, order.items, order.total)
        )

# Read side: denormalized read store (could be Redis, Elasticsearch, another PG schema)
class OrderProjection:
    def on_order_placed(self, event):
        customer = self.customer_cache.get(event.customer_id)
        self.read_store.upsert("order_summaries", {
            "order_id": event.order_id,
            "customer_name": customer.name,
            "item_count": len(event.items),
            "total": event.total,
            "status": "placed",
        })
```

The read store might be Redis for fast key-value lookups, Elasticsearch for full-text search, or a separate PostgreSQL schema with denormalized tables.
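To complete the picture, the query side is a thin handler over the read store. A minimal sketch, assuming a hypothetical `get` lookup on the same `read_store` the projection writes to:

```python
# Query side: reads go straight to the denormalized store
# and never touch the normalized write schema.
class OrderQueryHandler:
    def __init__(self, read_store):
        self.read_store = read_store

    def get_order_summary(self, order_id):
        # One key-value lookup instead of a multi-table join.
        return self.read_store.get("order_summaries", order_id)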
## Event Sourcing: Store Events, Not State
Traditional systems store current state: the orders table contains the latest version of each order. Event sourcing stores the sequence of events that led to the current state.
Instead of an orders table with mutable rows, you have an append-only event stream:
Stream: order-12345
  1. OrderCreated      { customer_id: "cust-1", items: [...], total: 99.50 }
  2. PaymentAuthorized { charge_id: "ch_abc", amount: 99.50 }
  3. OrderShipped      { tracking: "1Z999AA10123456784" }
  4. OrderDelivered    { signed_by: "J. Smith" }

Current state is derived by replaying events:
```python
class Order:
    def apply(self, event):
        if isinstance(event, OrderCreated):
            self.status = "created"
            self.items = event.items
            self.total = event.total
        elif isinstance(event, PaymentAuthorized):
            self.status = "paid"
            self.charge_id = event.charge_id
        elif isinstance(event, OrderShipped):
            self.status = "shipped"
            self.tracking = event.tracking
        elif isinstance(event, OrderDelivered):
            self.status = "delivered"

    @classmethod
    def load(cls, events):
        order = cls()
        for event in events:
            order.apply(event)
        return order
```

## Event Store Design
An event store needs three capabilities: append events to a stream, read all events for a stream, and subscribe to new events.
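Those three capabilities map onto a small interface. Here is a minimal sketch of the surface a store adapter might expose; the names and signatures are illustrative, not taken from any particular library:

```python
from typing import Callable, Iterator, Protocol

class Event:
    """Simplified; real events carry a type, payload, metadata, and version."""

class EventStore(Protocol):
    def append(self, stream_id: str, events: list[Event],
               expected_version: int) -> None:
        """Append events; expected_version enables optimistic concurrency."""

    def read(self, stream_id: str, after_version: int = 0) -> Iterator[Event]:
        """Read one stream's events in order, optionally after a version."""

    def subscribe(self, handler: Callable[[Event], None]) -> None:
        """Call handler for every newly appended event, across all streams."""
```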
With EventStoreDB:
```bash
# Append an event
curl -X POST http://localhost:2113/streams/order-12345 \
  -H "Content-Type: application/vnd.eventstore.events+json" \
  -d '[{
    "eventId": "fbf4a1a0-b4b0-4f0c-8e2e-1c9e1f1f1f1f",
    "eventType": "OrderCreated",
    "data": { "customer_id": "cust-1", "total": 99.50 }
  }]'

# Read stream
curl http://localhost:2113/streams/order-12345
```

With Kafka as an event store (common but limited):
```
Topic: orders
Key: order-12345
Partitioning: by order ID (all events for one order on same partition, preserving order)
Retention: infinite (log.retention.ms=-1) or tiered storage
Compaction: DISABLED (you need every event, not just the latest per key)
```

Kafka works for event streaming but lacks native stream-per-aggregate reads. You read the entire partition and filter. For small-to-medium scale with Kafka already in your stack, this is acceptable. For event sourcing at scale, EventStoreDB or a purpose-built event store is better.
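To make that limitation concrete, here is roughly what loading one aggregate from Kafka looks like, sketched with the kafka-python client. The topic name, JSON payloads, and the assumption that you already know which partition the key hashed to are all illustrative:

```python
import json
from kafka import KafkaConsumer, TopicPartition  # pip install kafka-python

def load_order_events(order_id: str, partition: int) -> list[dict]:
    """Scan one partition and keep only this aggregate's events.

    Kafka has no "read one key" operation, so loading order-12345
    means reading every event that hashed to its partition and
    filtering. This is the cost of using Kafka as an event store.
    """
    consumer = KafkaConsumer(
        bootstrap_servers="localhost:9092",
        consumer_timeout_ms=1000,  # stop iterating once the log runs dry
    )
    tp = TopicPartition("orders", partition)
    consumer.assign([tp])
    consumer.seek_to_beginning(tp)

    events = [
        json.loads(msg.value)
        for msg in consumer
        if msg.key == order_id.encode()
    ]
    consumer.close()
    return events
```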
## Snapshots
Replaying hundreds of events per aggregate on every read is expensive. Snapshots cache the aggregate state at a point in time:
```python
class OrderRepository:
    def load(self, order_id):
        # Try loading snapshot first
        snapshot = self.snapshot_store.get(order_id)
        if snapshot:
            order = Order.from_snapshot(snapshot.state)
            # Replay only events after the snapshot
            events = self.event_store.read(order_id, after_version=snapshot.version)
        else:
            order = Order()
            events = self.event_store.read(order_id)
        for event in events:
            order.apply(event)
        return order

    def save_snapshot(self, order_id, order, version):
        self.snapshot_store.save(order_id, order.to_snapshot(), version)
```

Snapshot frequency depends on your event volume. A common strategy: snapshot every 100 events, or when load time exceeds a threshold.
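Wiring the every-100-events rule into the save path might look like the following sketch; the `save` signature and the cadence constant are illustrative:

```python
SNAPSHOT_EVERY = 100  # illustrative cadence; tune to your event volume

class SnapshottingOrderRepository(OrderRepository):
    def save(self, order_id, order, new_events, expected_version):
        self.event_store.append(order_id, new_events, expected_version)
        version = expected_version + len(new_events)
        # Snapshot whenever this batch crossed a 100-event boundary,
        # so replay cost stays bounded regardless of stream length.
        if version // SNAPSHOT_EVERY > expected_version // SNAPSHOT_EVERY:
            self.save_snapshot(order_id, order, version)
```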
## Projections
Projections transform event streams into read models. They are the bridge between event sourcing and CQRS.
```python
class OrderDashboardProjection:
    """Builds a real-time order dashboard from order events."""

    def handle(self, event):
        if isinstance(event, OrderCreated):
            self.db.execute("""
                INSERT INTO order_dashboard (order_id, customer_id, total, status, created_at)
                VALUES (%s, %s, %s, 'created', %s)
            """, (event.order_id, event.customer_id, event.total, event.timestamp))
        elif isinstance(event, OrderShipped):
            self.db.execute("""
                UPDATE order_dashboard SET status = 'shipped', tracking = %s
                WHERE order_id = %s
            """, (event.tracking, event.order_id))
```

Projections can be rebuilt from scratch by replaying all events. This lets you add new read models without schema migrations – just create a new projection and replay.
## Eventual Consistency Handling
With CQRS, the read model lags behind the write model. A user places an order and immediately queries – the order might not appear yet.
Practical solutions:
- Read-your-writes: After a write, return the result directly from the command handler, not the read model. The confirmation page uses command response data, not a query.
- Causal consistency tokens: The write side returns a token (event sequence number). The read query includes this token and waits until the projection has processed up to that point; see the sketch after this list.
- UI optimistic updates: The frontend shows the expected state immediately and reconciles when the read model catches up.
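The token approach from the second bullet can be sketched as follows. It assumes the write side hands back the appended event's sequence number as the token, and that the projection tracks the highest position it has applied (`last_applied_position` is an assumed name):

```python
import time

class OrderQueries:
    """Query handler that honors a causal consistency token."""

    def __init__(self, projection, read_store):
        self.projection = projection
        self.read_store = read_store

    def get_order_summary(self, order_id, token, timeout_s=2.0):
        # Wait (briefly) until the projection has applied events up to
        # the sequence number the write side returned with the command.
        deadline = time.monotonic() + timeout_s
        while self.projection.last_applied_position() < token:
            if time.monotonic() > deadline:
                raise TimeoutError("read model has not caught up; retry")
            time.sleep(0.05)
        return self.read_store.get("order_summaries", order_id)
```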
## When CQRS and Event Sourcing Are Overkill
CQRS is overkill when: Read and write models are identical, traffic is low, the team is small, or you are building a CRUD application with no complex query requirements.
Event sourcing is overkill when: You do not need audit trails, temporal queries, or event replay. The operational complexity of event stores, projections, and eventual consistency is significant. Most applications are better served by a traditional database with an audit log table.
Use event sourcing when you genuinely need to answer “what was the state of this entity at any point in time?” or when your domain is naturally event-driven (financial transactions, logistics tracking, collaborative editing). Do not adopt it because it sounds architecturally elegant.