Advanced Topics

Overview

This section covers advanced features, internal architecture, and specialized use cases of Q01 Core APIs. These topics go beyond basic CRUD operations to explore performance optimization, event-driven architecture, multi-database support, and integration patterns.

Topics Covered:

  1. Multi-Database Support - MySQL, Oracle, PostgreSQL
  2. Media Handling - Binary files, images, documents
  3. Outbox Subscribers - RabbitMQ event consumers
  4. Caching Strategies - Client and server-side caching
  5. Performance Optimization - Query optimization, indexing
  6. Webhook Integration - External system notifications
  7. Custom Functions - Pre-insert, post-update hooks
  8. Internal Architecture - CoreService routing to CoreQuery/CoreWrite
  9. Batch Processing - Large-scale data operations

Who Should Read This?

Platform Engineers:

  • Understanding internal architecture
  • Performance optimization
  • Multi-database configuration
  • Event-driven architecture

Backend Developers:

  • Implementing event subscribers
  • Custom validation functions
  • Webhook integrations
  • Caching strategies

System Architects:

  • Designing scalable integrations
  • Event sourcing patterns
  • Performance considerations
  • Database architecture

Multi-Database Support

Q01 Core APIs support multiple database engines through a PDO abstraction layer.

Supported Databases:

  • MySQL 5.7+
  • Oracle 11g+
  • PostgreSQL 9.6+

Key Features:

  • ✅ Single codebase for all databases
  • ✅ Database-specific optimizations
  • ✅ Automatic query adaptation
  • ✅ Connection pooling
  • ✅ Multi-database transactions

Configuration:

// Database configuration
'databases' => [
    'meta' => [
        'driver' => 'mysql',
        'host' => 'mysql-meta.example.com',
        'database' => 'q01_meta',
        'username' => 'q01_user',
        'password' => 'secure_password'
    ],
    'data' => [
        'driver' => 'oracle',
        'host' => 'oracle-data.example.com',
        'database' => 'q01_data',
        'username' => 'q01_user',
        'password' => 'secure_password'
    ],
    'auth' => [
        'driver' => 'postgresql',
        'host' => 'postgres-auth.example.com',
        'database' => 'q01_auth',
        'username' => 'q01_user',
        'password' => 'secure_password'
    ]
]

Database Separation:

  • Meta DB: TB_DIM, TB_COST, TB_VAR, TB_MENU (metadata tables)
  • Data DB: TB_ANAG_{DIM}00 (dimension data tables)
  • Auth DB: User credentials, JWT secrets

Media Handling

Special endpoints handle binary files (images, PDFs, documents).

Media Endpoints:

# Upload image
POST /api/v4/media/upload
Content-Type: multipart/form-data

# Get image (binary response)
GET /api/v4/media/{dimension}/{id}/{field}

# Get thumbnail
GET /api/v4/media/{dimension}/{id}/{field}?size=thumbnail
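
A minimal upload sketch using the Fetch API and FormData. The form field names (file, dimension, id, field) and the Bearer token header are assumptions for illustration, not documented parameter names; adapt them to the actual endpoint contract:

// Hypothetical upload helper - field names and auth header are assumptions.
async function uploadMedia(file, dimension, id, field, token) {
  const form = new FormData();
  form.append('file', file);            // binary payload
  form.append('dimension', dimension);  // e.g. 'PRD'
  form.append('id', id);
  form.append('field', field);

  const response = await fetch('/api/v4/media/upload', {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` }, // JWT auth assumed
    body: form // multipart/form-data boundary is set automatically
  });

  if (!response.ok) throw new Error(`Upload failed: ${response.status}`);
  return response.json();
}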

Features:

  • ✅ Image resize and thumbnails
  • ✅ MIME type validation
  • ✅ Virus scanning
  • ✅ CDN integration
  • ✅ Lazy loading support

Storage Options:

  • Local filesystem
  • AWS S3
  • Azure Blob Storage
  • Google Cloud Storage

Event-Driven Architecture

The Outbox Pattern with RabbitMQ enables event-driven microservices.

Event Flow:

Write Operation (POST/PUT/PATCH/DELETE)
  1. Update data table
  2. Insert OUTBOX event
  3. COMMIT transaction
        ↓
Background Publisher
        ↓
RabbitMQ Fanout Exchange (corewrite.fanout)
        ↓
Subscribers:
├── search-indexer (Elasticsearch)
├── cache-invalidator (Redis)
├── analytics-writer (Data warehouse)
├── notification-sender (Email/SMS)
└── webhook-dispatcher (External systems)

Subscriber Example:

const amqp = require('amqplib');

async function startSubscriber() {
  const connection = await amqp.connect('amqp://rabbitmq:5672');
  const channel = await connection.createChannel();

  await channel.assertExchange('corewrite.fanout', 'fanout', { durable: true });
  const queue = await channel.assertQueue('my-subscriber', { durable: true });
  await channel.bindQueue(queue.queue, 'corewrite.fanout', '');

  channel.consume(queue.queue, async (msg) => {
    if (!msg) return; // consumer was cancelled by the broker

    const event = JSON.parse(msg.content.toString());

    await handleEvent(event); // application-defined event handler
    channel.ack(msg);
  });
}

startSubscriber().catch(console.error);

Caching Strategies

Multiple caching layers optimize performance.

Caching Layers:

  1. Client-Side Cache - Browser cache, sessionStorage (see the sketch after this list)
  2. CDN Cache - Static assets, media files
  3. Application Cache - Redis, Memcached
  4. Database Cache - Query result cache
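
For layer 1, a minimal client-side sketch using sessionStorage. The TTL wrapper shown here is an assumption for illustration; adapt it to your expiry policy:

// Minimal sessionStorage cache with a TTL - a sketch, not a library API.
function cacheGet(key) {
  const raw = sessionStorage.getItem(key);
  if (!raw) return null;
  const { value, expires } = JSON.parse(raw);
  if (Date.now() > expires) {
    sessionStorage.removeItem(key); // drop stale entry
    return null;
  }
  return value;
}

function cacheSet(key, value, ttlSeconds = 60) {
  sessionStorage.setItem(key, JSON.stringify({
    value,
    expires: Date.now() + ttlSeconds * 1000
  }));
}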

Cache Invalidation:

Write Operation → OUTBOX Event → RabbitMQ → cache-invalidator subscriber → Redis DEL keys matching pattern
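
A sketch of such a subscriber, assuming the event carries the dimension and record id (the event fields and the key pattern are assumptions; check your actual payload schema):

const amqp = require('amqplib');
const { createClient } = require('redis');

async function startCacheInvalidator() {
  const redis = createClient({ url: 'redis://redis:6379' });
  await redis.connect();

  const connection = await amqp.connect('amqp://rabbitmq:5672');
  const channel = await connection.createChannel();
  await channel.assertExchange('corewrite.fanout', 'fanout', { durable: true });
  const queue = await channel.assertQueue('cache-invalidator', { durable: true });
  await channel.bindQueue(queue.queue, 'corewrite.fanout', '');

  channel.consume(queue.queue, async (msg) => {
    if (!msg) return;
    const event = JSON.parse(msg.content.toString());

    // Assumed event fields; confirm against the real OUTBOX payload.
    const pattern = `${event.dimension}:${event.record_id}*`;
    const keys = await redis.keys(pattern); // prefer SCAN in production
    if (keys.length > 0) await redis.del(keys);

    channel.ack(msg);
  });
}

startCacheInvalidator().catch(console.error);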

Example:

class CachedAPIClient {
  constructor(cache, ttl = 60) {
    this.cache = cache;
    this.ttl = ttl;
  }

  async getProduct(id) {
    const cacheKey = `product:${id}`;

    // Check cache
    const cached = await this.cache.get(cacheKey);
    if (cached) return JSON.parse(cached);

    // Fetch from API
    const product = await this.fetchProduct(id);

    // Store in cache
    await this.cache.set(cacheKey, JSON.stringify(product), this.ttl);

    return product;
  }
}

Performance Optimization

Query Optimization:

  • Database indexes on filter fields
  • Pagination with offset/limit
  • Field selection ($fields parameter; see the example after this list)
  • Eager loading related records
  • Query result caching
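
For example, a single request can combine field selection with pagination ($fields, $num_rows, and $offset follow the parameter names used elsewhere in this documentation; the field list is illustrative):

// Fetch one page of products, requesting only the fields the UI needs.
async function fetchProductPage(offset = 0, pageSize = 100) {
  const params = new URLSearchParams({
    '$fields': 'PRD_ID,XPRD01,XPRD02', // only these columns are returned
    '$num_rows': String(pageSize),
    '$offset': String(offset)
  });

  const response = await fetch(`/api/v4/core/PRD?${params}`);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}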

Indexing Strategy:

-- Tenant isolation (most important)
CREATE INDEX idx_prd_tenant ON TB_ANAG_PRD00(PRD_SOURCE, PRD_CENTRO_DETT, PRD_AMBIENTE);

-- Common filters
CREATE INDEX idx_prd_category ON TB_ANAG_PRD00(XPRD05);
CREATE INDEX idx_prd_active ON TB_ANAG_PRD00(XPRD06);

-- Timestamps for sorting
CREATE INDEX idx_prd_created ON TB_ANAG_PRD00(CREATED_AT);
CREATE INDEX idx_prd_updated ON TB_ANAG_PRD00(UPDATED_AT);

-- TREC for soft delete filtering
CREATE INDEX idx_prd_trec ON TB_ANAG_PRD00(TREC);

-- Composite index for common query
CREATE INDEX idx_prd_common ON TB_ANAG_PRD00(PRD_SOURCE, TREC, XPRD06, CREATED_AT);

Webhook Integration

Webhooks notify external systems when data changes.

Webhook Flow:

Write Operation → OUTBOX Event → RabbitMQ → webhook-dispatcher subscriber → HTTP POST to external webhook URL

Webhook Configuration:

CREATE TABLE WEBHOOKS (
    WEBHOOK_ID VARCHAR(36) PRIMARY KEY,
    SOURCE     VARCHAR(50),   -- Tenant
    EVENT_TYPE VARCHAR(100),  -- ProductCreated, OrderUpdated, etc.
    URL        VARCHAR(500),  -- External webhook URL
    SECRET     VARCHAR(255),  -- HMAC signature secret
    ACTIVE     BOOLEAN DEFAULT TRUE,
    CREATED_AT TIMESTAMP
);

INSERT INTO WEBHOOKS (WEBHOOK_ID, SOURCE, EVENT_TYPE, URL, SECRET)
VALUES (
    'uuid-...',
    'ecommerce',
    'ProductCreated',
    'https://external-system.com/webhooks/products',
    'webhook_secret_key'
);

Webhook Payload:

{
  "event_id": "uuid-...",
  "event_type": "ProductCreated",
  "timestamp": "2025-12-19T16:00:00Z",
  "source": "ecommerce",
  "data": {
    "PRD_ID": "123",
    "XPRD01": "Widget Pro",
    "XPRD02": 99.99
  },
  "signature": "hmac-sha256-signature"
}
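
On the receiving side, the consumer can check the payload against the shared secret. A sketch using Node's crypto module, assuming the signature is a hex-encoded HMAC-SHA256 of the raw request body (the exact signing scheme is an assumption; confirm it against your webhook configuration):

const crypto = require('crypto');

// Returns true if the signature matches the raw request body.
function verifyWebhookSignature(rawBody, signature, secret) {
  const expected = crypto
    .createHmac('sha256', secret)
    .update(rawBody)
    .digest('hex');

  // Constant-time comparison to avoid timing attacks.
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}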

Custom Functions

Extend Core APIs with custom logic.

Pre-Insert Functions:

  • {uuid} - Generate UUID
  • {md5} - Hash field value
  • {counter} - Sequential counter
  • {timestamp} - Current timestamp
  • {slugify} - URL-friendly slug
  • Custom functions via plugins

Post-Update Hooks:

  • Validation functions
  • Derived field calculation
  • External API calls
  • Notification triggers

Example:

// Custom pre-insert function
public function registerCustomFunction(string $name, callable $function): void {
$this->customFunctions[$name] = $function;
}

// Usage in TB_COST.VALUE_TEMP
// {custom:generateSKU}

// Function implementation
$this->registerCustomFunction('generateSKU', function($dimension, $data) {
$category = $data['XPRD05'];
$sequence = $this->getNextSequence($category);
return "{$category}-{$sequence}";
});

Internal Architecture

CoreService (Go/Buffalo) routes requests to backend services.

Routing Logic:

Client Request → CoreService (Port 8080)
        ↓
Route Analysis (HTTP Method + Named Route)
        ↓
GET → CoreQuery (PHP/Symfony) - Read operations
POST/PUT/PATCH/DELETE → CoreWrite (PHP/Symfony) - Write operations
        ↓
Response proxied back to client

Service Separation Benefits:

  • ✅ CQRS pattern (Command Query Responsibility Segregation)
  • ✅ Independent scaling (read vs write)
  • ✅ Read replicas for queries
  • ✅ Write optimization with transactions
  • ✅ Horizontal scaling

Environment Variables:

# CoreQuery backend
R_API_HOST=corequery.internal
R_API_PORT=8000
R_API_VERSION=v4

# CoreWrite backend
W_API_HOST=corewrite.internal
W_API_PORT=8001
W_API_VERSION=v4

Batch Processing

Batch processing handles large-scale operations involving millions of records.

Strategies:

  1. Parallel Requests - Multiple concurrent API calls, throttled (see the sketch after this list)
  2. Bulk Import - CSV/Excel import with validation
  3. Background Jobs - Asynchronous processing with queue
  4. Streaming - Process large datasets in chunks
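
For strategy 1, a minimal concurrency limiter; this is a sketch of the idea, which libraries such as p-limit provide ready-made:

// Run tasks with at most `limit` in flight at once.
async function runThrottled(tasks, limit = 5) {
  const results = [];
  let next = 0;

  async function worker() {
    while (next < tasks.length) {
      const index = next++;              // claim the next task synchronously
      results[index] = await tasks[index]();
    }
  }

  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}

// Usage: wrap each API call in a thunk so it starts only when scheduled.
// const tasks = products.map(p => () => client.createProduct(p));
// const results = await runThrottled(tasks, 5);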

Background Job Example:

class BulkImportJob {
  async importProducts(csvFile, batchSize = 100) {
    const products = await this.parseCSV(csvFile);
    const batches = this.chunk(products, batchSize);

    for (const [index, batch] of batches.entries()) {
      await this.processBatch(batch);
      await this.updateProgress(index + 1, batches.length); // batches completed so far
    }
  }

  async processBatch(batch) {
    // Failed creates resolve to { product, error } instead of rejecting,
    // so one failure does not abort the whole batch.
    const requests = batch.map(product =>
      this.createProduct(product).catch(error => ({
        product,
        error: error.message
      }))
    );

    const results = await Promise.all(requests);

    const failures = results.filter(r => r.error);
    if (failures.length > 0) {
      await this.logFailures(failures);
    }
  }
}

Best Practices

✅ DO:

Use event subscribers for async tasks:

// ✅ Good - decouple via events
// Core APIs handle write + outbox event
// Subscriber processes async (search index, email, etc.)

Cache frequently accessed data:

// ✅ Good - reduce API calls
const cached = cache.get(key);
if (cached) return cached;

Optimize database queries:

-- ✅ Good - indexed fields in WHERE
SELECT * FROM TB_ANAG_PRD00
WHERE PRD_SOURCE = 'tenant' -- Indexed
AND XPRD05 = 'category' -- Indexed
AND TREC != 'C' -- Indexed
LIMIT 100;

❌ DON'T:

Don't poll for changes:

// ❌ Bad - polling
setInterval(async () => {
const changes = await getChanges();
}, 1000);

// ✅ Good - event subscriber
subscribeToChanges((event) => {
handleChange(event);
});

Don't fetch large datasets without pagination:

# ❌ Bad - no limit
GET /api/v4/core/PRD

# ✅ Good - paginated
GET /api/v4/core/PRD?$num_rows=100&$offset=0

Summary

  • ✅ Multi-database support (MySQL, Oracle, PostgreSQL)
  • ✅ Media handling for binary files
  • ✅ Event-driven architecture with RabbitMQ
  • ✅ Multi-layer caching strategies
  • ✅ Query optimization and indexing
  • ✅ Webhook integration for external systems
  • ✅ Custom functions for business logic
  • ✅ Internal CQRS architecture
  • ✅ Batch processing for large-scale operations

Key Takeaways:

  1. Use event subscribers for async processing
  2. Implement caching at multiple layers
  3. Optimize queries with proper indexes
  4. Leverage CQRS pattern for scaling
  5. Use webhooks for external integrations
  6. Process large datasets in batches
  7. Monitor performance and optimize