
CQRS Pattern in Q01 Core APIs

What is CQRS?

CQRS (Command Query Responsibility Segregation) is an architectural pattern that separates read operations (queries) from write operations (commands) using different models and often different infrastructure.

Traditional approach:

┌──────────┐
│   API    │
└────┬─────┘
     │
     v
┌──────────┐
│ Database │
└──────────┘

CQRS approach:

┌───────────┐      ┌─────────────┐
│ Query API │      │ Command API │
└─────┬─────┘      └──────┬──────┘
      │                   │
      v                   v
┌──────────────┐  sync  ┌──────────────┐
│   Read DB    │<───────│   Write DB   │
│ (optimized)  │        │ (normalized) │
└──────────────┘        └──────────────┘

Why CQRS in Q01?

Problem: Different Optimization Needs

Reads and writes have fundamentally different requirements:

| Aspect       | Reads                    | Writes                             |
| ------------ | ------------------------ | ---------------------------------- |
| Frequency    | 80-90% of requests       | 10-20% of requests                 |
| Latency      | Must be fast             | Can be slightly slower             |
| Consistency  | Eventually consistent OK | Must be immediately consistent     |
| Caching      | Highly cacheable         | Cannot cache                       |
| Joins        | Many joins for rich data | Minimal joins for validation       |
| Scaling      | Read replicas, caching   | Transaction logs, write-ahead logs |
| Optimization | Denormalized views       | Normalized schema                  |

One model cannot optimize for both.

Solution: Separate Read and Write Services

Q01 implements CQRS at the service level:

          ┌──────────────┐
          │ CoreService  │ ← Single API surface
          │ (API Gateway)│
          └──────┬───────┘
                 │
        ┌────────┴────────┐
        │                 │
        v                 v
┌───────────────┐   ┌───────────────┐
│   CoreQuery   │   │   CoreWrite   │
│ (PHP/Symfony) │   │ (PHP/Symfony) │
│               │   │               │
│Read-optimized │   │Write-optimized│
└───────┬───────┘   └───────┬───────┘
        │                   │
        v                   v
┌───────────────┐      ┌────────────────┐
│ Read Database │ sync │ Write Database │
│   (can be     │<─────│  (normalized,  │
│ denormalized, │      │ transactional) │
│   cached)     │      │                │
└───────────────┘      └────────────────┘

Q01's CQRS Implementation

CoreService: Unified API Surface

Despite internal separation, clients see a single unified API:

# Read operation
GET /api/v4/core/PRD?source=productList

# Write operation
POST /api/v4/core/PRD

CoreService routes transparently:

  • HTTP method determines routing
  • Clients don't know (or care) about internal backends

Routing Logic

The gateway uses a simple rule-based routing mechanism:

┌──────────────────────────────────────┐
│      Incoming Request Analysis       │
└──────────────┬───────────────────────┘
               │
               v
       ┌───────────────┐
       │ HTTP Method?  │
       └───────┬───────┘
               │
       ┌───────┴────────┐
       │                │
       v                v
  ┌────────┐      ┌──────────┐
  │  GET   │      │POST/PUT/ │
  │        │      │PATCH/DEL │
  └────┬───┘      └─────┬────┘
       │                │
       v                v
┌─────────────┐  ┌─────────────┐
│  CoreQuery  │  │  CoreWrite  │
│ (Read API)  │  │ (Write API) │
└─────────────┘  └─────────────┘

Route Examples:
• GET /api/v4/core/PRD → CoreQuery
• GET /api/v4/core/PRD/123 → CoreQuery
• GET /api/v4/core/PRD/fields → CoreQuery
• POST /api/v4/core/PRD → CoreWrite
• PUT /api/v4/core/PRD/123 → CoreWrite
• PATCH /api/v4/core/PRD/123 → CoreWrite
• DELETE /api/v4/core/PRD/123 → CoreWrite
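The routing rule above amounts to a single method check. The following sketch illustrates it in Python (the gateway itself is PHP/Symfony; the function and constants here are hypothetical, not Q01's actual code):

```python
# Illustrative sketch of CoreService's method-based routing rule.
# Backend base URLs mirror the CoreQuery/CoreWrite services described below;
# the names are stand-ins, not the real gateway implementation.
READ_BACKEND = "http://corequery-service:8081"
WRITE_BACKEND = "http://corewrite-service:8082"

def route(method: str, path: str) -> str:
    """Return the backend base URL + path a request should be proxied to."""
    backend = READ_BACKEND if method.upper() == "GET" else WRITE_BACKEND
    return backend + path

# GET → CoreQuery; every other verb → CoreWrite
assert route("GET", "/api/v4/core/PRD").startswith(READ_BACKEND)
assert route("DELETE", "/api/v4/core/PRD/123").startswith(WRITE_BACKEND)
```

Because the decision depends only on the HTTP method, no special headers or request inspection are needed.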

Benefits:

  • ✅ Clients use standard REST verbs
  • ✅ No special headers or parameters needed
  • ✅ Backend changes transparent to clients
  • ✅ Can evolve read/write infrastructure independently

CoreQuery: Read-Optimized Service

Responsibilities

  1. Metadata-driven query construction from TB_COST, TB_OBJECT
  2. Field visibility filtering by COD_ON_OFF and center_dett
  3. Permission enforcement via peso and source
  4. Dynamic SQL generation with optimal joins
  5. Result formatting and projection

Optimizations

1. Database Views (via TB_OBJECT with TYPE_OBJ=1):

-- Instead of joining 5 tables on every query:
SELECT p.*, c.name as category, s.name as supplier, ...
FROM products p
JOIN categories c ON p.category_id = c.id
JOIN suppliers s ON p.supplier_id = s.id
-- (expensive for frequent reads)

-- Create database view via TB_OBJECT:
CREATE VIEW v_products_enriched AS
SELECT p.*, c.name as category, s.name as supplier, ...
FROM products p
LEFT JOIN categories c ON p.category_id = c.id
LEFT JOIN suppliers s ON p.supplier_id = s.id;

-- Query the view (fast):
SELECT * FROM v_products_enriched WHERE price > 100;

TB_OBJECT metadata with TYPE_OBJ=1 defines database views; CoreQuery uses them automatically.

For TYPE_OBJ=2 ("OGGETTI SELECT"), CoreQuery executes a nested select-of-selects without requiring a physical database view.

2. Projection (only fetch needed fields):

# Instead of SELECT * (slow, wasteful):
GET /api/v4/core/PRD

# Client requests specific fields (fast):
GET /api/v4/core/PRD?$select=XPRD01,XPRD02

# CoreQuery generates:
SELECT XPRD01, XPRD02 FROM TB_ANAG_PRD00 WHERE ...
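Projection can be sketched as below (illustrative Python; `KNOWN_FIELDS` stands in for the TB_COST metadata CoreQuery actually consults, and the function name is hypothetical). Validating requested fields against metadata also blocks injection through field names:

```python
# Sketch: turning a $select parameter into a projected SELECT statement.
# KNOWN_FIELDS is a stand-in for the field metadata in TB_COST.
KNOWN_FIELDS = {"XPRD01", "XPRD02", "XPRD06"}

def build_select(table, select_param=None):
    """Build a SELECT with only the requested (whitelisted) fields."""
    if not select_param:
        return f"SELECT * FROM {table}"
    fields = [f.strip() for f in select_param.split(",")]
    unknown = [f for f in fields if f not in KNOWN_FIELDS]
    if unknown:
        raise ValueError(f"unknown fields: {unknown}")
    return f"SELECT {', '.join(fields)} FROM {table}"

assert build_select("TB_ANAG_PRD00", "XPRD01,XPRD02") == \
    "SELECT XPRD01, XPRD02 FROM TB_ANAG_PRD00"
```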

3. Filtering at Database (not in application):

GET /api/v4/core/PRD?$filter=XPRD02 gt 100 AND XPRD06 eq 1

# CoreQuery generates WHERE clause:
SELECT * FROM TB_ANAG_PRD00 WHERE XPRD02 > 100 AND XPRD06 = 1

Note: Use filter operators (gt, lt, eq, ne, ge, le) instead of SQL symbols to prevent SQL injection.

Database applies filters efficiently with indexes.
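The operator translation can be sketched as follows (an illustrative Python sketch, not Q01's parser; note the values are bound as parameters rather than inlined, which is what actually blocks injection):

```python
# Sketch: mapping textual filter operators to SQL with bound parameters.
OPS = {"gt": ">", "lt": "<", "ge": ">=", "le": "<=", "eq": "=", "ne": "<>"}

def compile_filter(expr):
    """Compile e.g. 'XPRD02 gt 100 AND XPRD06 eq 1' into (sql, params).

    Field names would additionally be whitelisted against metadata,
    as in the projection example above.
    """
    clauses, params = [], []
    for part in expr.split(" AND "):
        field, op, value = part.split()
        clauses.append(f"{field} {OPS[op]} ?")  # placeholder, value not inlined
        params.append(value)
    return " AND ".join(clauses), params

sql, params = compile_filter("XPRD02 gt 100 AND XPRD06 eq 1")
assert sql == "XPRD02 > ? AND XPRD06 = ?"
assert params == ["100", "1"]
```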

4. Pagination (limit data transfer):

GET /api/v4/core/PRD?$offset=100&$num_rows=50

# CoreQuery generates:
SELECT * FROM TB_ANAG_PRD00 LIMIT 50 OFFSET 100
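A pagination translator would also clamp the page size so a single request cannot fetch unbounded rows (sketch; `MAX_PAGE` is a hypothetical cap, not a documented Q01 setting):

```python
# Sketch: translating $offset/$num_rows into a LIMIT/OFFSET clause.
MAX_PAGE = 500  # hypothetical upper bound on rows per request

def paginate(offset, num_rows):
    num_rows = max(1, min(num_rows, MAX_PAGE))  # clamp page size
    offset = max(0, offset)                     # no negative offsets
    return f"LIMIT {num_rows} OFFSET {offset}"

assert paginate(100, 50) == "LIMIT 50 OFFSET 100"
```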

5. Field-Level Caching (possible future enhancement):

  • Cache TB_COST metadata (rarely changes)
  • Cache TB_TYPE_FIELDS definitions
  • Reduce metadata queries significantly

6. Read Replicas (possible scaling):

  • Point CoreQuery to read-only database replica
  • Distribute read load across multiple databases
  • Write operations unaffected

Trade-Offs

Pros:

  • Fast reads (optimized queries, views, caching)
  • Scalable (read replicas, horizontal scaling)
  • Flexible (supports complex filters, projections)

Cons:

  • Eventually consistent - if using read replicas with replication lag
  • Cannot modify data (read-only service)

CoreWrite: Write-Optimized Service

Responsibilities

  1. Multi-layer validation (token, grant, required fields, types)
  2. Pre-insert functions (counters, UUIDs, sequences)
  3. Transaction management (ACID guarantees)
  4. Cascade operations (INSERT_CASCADE, UPDATE_CASCADE, DELETE_CASCADE)
  5. Outbox pattern (write-ahead log for event sourcing)
  6. RabbitMQ publishing (distribute change events)

Optimizations

1. Transactional Consistency:

// Everything in a single transaction
$this->db_data->beginTransaction();

try {
    // Main write
    $this->db_data->exec("INSERT INTO TB_ANAG_PRD00 (...) VALUES (...)");

    // Cascades (if metadata defines them)
    $this->db_data->exec("INSERT INTO TB_ANAG_PRDDET00 (...) VALUES (...)");

    // Outbox for event sourcing
    $this->db_data->exec("INSERT INTO TB_ANAG_OUTBOX00 (...) VALUES (...)");

    $this->db_data->commit();
} catch (\Throwable $e) {
    $this->db_data->rollBack();
    throw $e;
}

// ACID guarantee: all succeed or all fail

2. Outbox Pattern (at-least-once delivery):

┌──────────────────────────────────┐
│             Database             │
│                                  │
│  ┌──────────────┐                │
│  │TB_ANAG_PRD00 │  Main Data     │
│  └──────────────┘                │
│                                  │
│  ┌──────────────┐                │
│  │TB_ANAG_OUTBOX│  Write-Ahead   │
│  │      00      │  Log           │
│  └──────────────┘                │
│                                  │
│ Both written in same transaction │
└──────┬───────────────────────────┘
       │
       │ After commit
       v
┌──────────────┐
│   RabbitMQ   │  Async publish
└──────────────┘

If RabbitMQ is down:
- Transaction still commits
- Outbox record remains
- Background job retries publish

Guarantee: data is safe even if the message broker fails.
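The mechanics can be sketched end to end (illustrative Python with SQLite standing in for the real database; table and column names are simplified stand-ins for TB_ANAG_PRD00 / TB_ANAG_OUTBOX00):

```python
# Minimal transactional-outbox sketch: data row and outbox row commit together,
# and a relay job publishes unsent outbox rows afterwards (at-least-once).
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, "
           "published INTEGER DEFAULT 0)")

def create_product(name):
    with db:  # one transaction: both inserts commit, or both roll back
        cur = db.execute("INSERT INTO product (name) VALUES (?)", (name,))
        event = {"type": "product.created", "id": cur.lastrowid, "name": name}
        db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),))

def relay_outbox(publish):
    # A background job calls this repeatedly; if publish() raises (broker down),
    # the row stays unpublished and is retried on the next pass.
    rows = db.execute("SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        publish(payload)
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()

create_product("Widget")
sent = []
relay_outbox(sent.append)
assert json.loads(sent[0])["type"] == "product.created"
```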

3. Batch Operations:

POST /api/v4/core/PRD
[
  {"source": "...", "XPRD01": "Product 1", "XPRD02": 10},
  {"source": "...", "XPRD01": "Product 2", "XPRD02": 20},
  {"source": "...", "XPRD01": "Product 3", "XPRD02": 30}
]

# Single transaction for all three inserts
# Reduces roundtrips, improves throughput
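The single-transaction batch can be sketched as below (illustrative Python with SQLite standing in for the real database; `prd` is a simplified stand-in for TB_ANAG_PRD00):

```python
# Sketch: inserting a batch of records in one transaction, so all three rows
# commit together or the whole batch rolls back.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE prd (XPRD01 TEXT, XPRD02 INTEGER)")

rows = [("Product 1", 10), ("Product 2", 20), ("Product 3", 30)]
with db:  # single transaction for the whole batch
    db.executemany("INSERT INTO prd (XPRD01, XPRD02) VALUES (?, ?)", rows)

assert db.execute("SELECT COUNT(*) FROM prd").fetchone()[0] == 3
```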

4. Validation Caching:

  • Cache TB_COST metadata (validation rules)
  • Cache TB_COUNTER metadata (counter configurations)
  • Avoid redundant metadata queries

5. Connection Pooling:

  • Reuse database connections across requests
  • Reduce connection overhead

Trade-Offs

Pros:

  • Strong consistency (ACID transactions)
  • Guaranteed event delivery (outbox pattern)
  • Cascades handled automatically
  • Audit trail built-in

Cons:

  • Slower than reads (validation, transactions, outbox overhead)
  • Harder to scale horizontally (write bottleneck)
  • Cannot use caching (data must be fresh)

Consistency Model

Immediate Consistency (Within CoreWrite)

All changes in a single transaction are immediately consistent:

// Create order + order items in one transaction
$this->db_data->beginTransaction();
$this->db_data->exec("INSERT INTO TB_ANAG_ORD00 (...)");     // order
$this->db_data->exec("INSERT INTO TB_ANAG_ORDITEM00 (...)"); // item 1
$this->db_data->exec("INSERT INTO TB_ANAG_ORDITEM00 (...)"); // item 2
$this->db_data->commit();

// If any INSERT fails, ALL roll back - no partial state

Read-Your-Writes Consistency:

1. Client creates product via CoreWrite
2. CoreWrite commits transaction, returns success
3. Client immediately reads product via CoreQuery
4. CoreQuery returns the just-created product

✅ Guaranteed to see your own writes

Eventual Consistency (Across Services)

Changes propagated via RabbitMQ are eventually consistent:

1. CoreWrite creates order in database
2. CoreWrite publishes event to RabbitMQ
3. Inventory service subscribes to events
4. Inventory service receives event (async, some delay)
5. Inventory service reserves stock

There's a delay (typically < 100ms) between steps 1 and 5.

This is acceptable because:

  • Subscribers are separate microservices (not Core APIs)
  • Business workflows can tolerate brief delays
  • At-least-once delivery guarantees eventual consistency

Configuration

Environment variables control backend endpoints:

# Read API (CoreQuery)
R_API_HOST=corequery-service
R_API_PORT=8081
R_API_VERSION=v4

# Write API (CoreWrite)
W_API_HOST=corewrite-service
W_API_PORT=8082
W_API_VERSION=v4

CoreService uses these to construct backend URLs:

GET request → http://corequery-service:8081/api/v4/core/PRD
POST request → http://corewrite-service:8082/api/v4/core/PRD
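The URL construction can be sketched as follows (illustrative Python; the `backend_url` helper is hypothetical, only the R_API_*/W_API_* variable names come from the configuration above):

```python
# Sketch: building backend base URLs from the R_API_*/W_API_* environment
# variables. Defaults here are illustrative, not documented Q01 values.
import os

def backend_url(prefix):
    host = os.environ.get(f"{prefix}_API_HOST", "localhost")
    port = os.environ.get(f"{prefix}_API_PORT", "8080")
    version = os.environ.get(f"{prefix}_API_VERSION", "v4")
    return f"http://{host}:{port}/api/{version}"

os.environ.update({"R_API_HOST": "corequery-service", "R_API_PORT": "8081"})
assert backend_url("R") == "http://corequery-service:8081/api/v4"
```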

Can be scaled independently:

  • Deploy 5 CoreQuery instances for high read load
  • Deploy 2 CoreWrite instances for moderate write load
  • Scale each based on actual usage patterns

Benefits in Practice

Real-World Scenario

E-commerce platform:

  • 90% reads (browsing products, viewing orders)
  • 10% writes (checkout, update cart)

Traditional monolith:

All requests → Same server → Same database
Peak traffic: 10,000 req/s

Server must handle:
- 9,000 read req/s (fast queries)
- 1,000 write req/s (slow transactions)

Writes block reads. Reads slow down. Everything suffers.

Q01 with CQRS:

9,000 reads → CoreQuery → Read replica (optimized for reads)
1,000 writes → CoreWrite → Write master (optimized for writes)

CoreQuery:
- Uses denormalized views
- Caches metadata
- Horizontal scaling (3 instances = 3,000 req/s each)
- Fast, consistent performance

CoreWrite:
- Handles transactions safely
- Publishes events asynchronously
- Doesn't block reads
- Focused optimization

Result: substantially better performance from the same infrastructure, because reads and writes no longer contend for the same resources

Summary

Q01 implements CQRS at the service level:

| Aspect       | CoreQuery                          | CoreWrite                         |
| ------------ | ---------------------------------- | --------------------------------- |
| Purpose      | Read operations                    | Write operations                  |
| Optimization | Speed, scalability                 | Consistency, reliability          |
| Consistency  | Eventually consistent (acceptable) | Immediately consistent (required) |
| Caching      | Yes (metadata, results)            | No (must be fresh)                |
| Scaling      | Horizontal (read replicas)         | Vertical (better hardware)        |
| Routing      | GET requests                       | POST/PUT/PATCH/DELETE requests    |

Key benefits:

  1. Performance - reads don't block writes, writes don't slow reads
  2. Scalability - scale read and write infrastructure independently
  3. Flexibility - optimize each service for its specific needs
  4. Transparency - clients see unified API surface

Key trade-offs:

  1. ⚠️ Complexity - two services instead of one
  2. ⚠️ Consistency - eventual consistency for cross-service events
  3. ⚠️ Infrastructure - requires message queue (RabbitMQ)

For multi-tenant SaaS platforms with read-heavy workloads, CQRS is a net win.

Next Steps