Why Metadata-Driven Architecture?
The Problem with Hardcoded APIs
Traditional REST APIs hardcode entity structures in code:
// Traditional approach - hardcoded ❌
router.post('/api/products', async (req, res) => {
  const { name, price, category } = req.body;

  // Validation hardcoded
  if (!name) return res.status(400).json({error: "Name required"});
  if (price < 0) return res.status(400).json({error: "Invalid price"});

  // SQL hardcoded
  const result = await db.execute(
    'INSERT INTO products (name, price, category) VALUES (?, ?, ?)',
    [name, price, category]
  );
  res.json({id: result.insertId});
});
router.post('/api/customers', async (req, res) => {
  // Another hardcoded endpoint with different validation...
});

router.post('/api/orders', async (req, res) => {
  // Yet another hardcoded endpoint...
});
Problems:
- Adding a new field requires code deployment
- Adding a new entity type requires new endpoint code
- Changing validation rules requires code changes
- Permission changes scattered across codebase
- Inconsistent validation - each endpoint implements its own rules
- Copy-paste errors - repetitive code breeds bugs
- Testing complexity - every endpoint needs separate tests
The Metadata-Driven Solution
Q01 Core APIs use metadata as the source of truth:
// Q01 approach - metadata-driven ✅
router.post('/api/core/:dim', async (req, res) => {
  const dim = req.params.dim;

  // Load metadata for this dimension
  const metadata = await loadMetadata(dim);

  // Validate using metadata rules
  const errors = validateAgainstMetadata(req.body, metadata);
  if (errors.length > 0) return res.status(400).json({errors});

  // Build a parameterized INSERT from metadata
  const { sql, params } = buildInsertSQL(metadata, req.body);

  // Execute
  const result = await db.execute(sql, params);
  res.json({id: result.insertId});
});
// ONE endpoint handles ALL entity types!
// Adding new entity = insert metadata, not deploy code
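To make the flow above concrete, here is a minimal sketch of what a buildInsertSQL helper could look like. It assumes the loaded metadata exposes a tableName and a fields array of TB_COST rows keyed by COD_COST, and that it returns a { sql, params } pair for a parameterized execute - all illustrative assumptions, not the actual Q01 implementation.
// Illustrative sketch only - the metadata shape ({tableName, fields}) is assumed
function buildInsertSQL(metadata, body) {
  // Keep only the fields that exist in metadata and were actually sent
  const fields = metadata.fields.filter(f => body[f.COD_COST] !== undefined);

  const columns = fields.map(f => f.COD_COST);
  const placeholders = columns.map(() => '?').join(', ');
  const params = fields.map(f => body[f.COD_COST]);

  const sql = `INSERT INTO ${metadata.tableName} (${columns.join(', ')}) VALUES (${placeholders})`;

  // Values travel separately from the SQL text (parameterized statement)
  return { sql, params };
}
Request keys with no metadata definition are silently dropped here; whether to drop or reject unknown fields is a design choice.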
Advantages:
- ✅ Add fields by updating TB_COST - no code deployment
- ✅ Add entity types by updating TB_DIM - no new endpoints
- ✅ Change validation by updating metadata - no code changes
- ✅ Adjust permissions via TB_MENU - immediate effect
- ✅ Consistent validation - one validation engine for all entities
- ✅ No repetition - metadata reuse eliminates copy-paste
- ✅ Testing simplified - test metadata engine once
Real-World Benefits
1. Rapid Feature Development
Traditional:
Product Owner: "We need a new 'Suppliers' entity with 15 fields"
Developer: "That's a 2-week sprint - API endpoints, validation, tests, database migrations"
Q01:
Product Owner: "We need a new 'Suppliers' entity with 15 fields"
Developer: "Give me 30 minutes to create the metadata"
Time to market reduced from weeks to minutes.
2. Runtime Flexibility
Scenario: A customer needs an extra field "Tax ID" on products.
Traditional:
1. Create database migration
2. Update API code to accept tax_id
3. Add validation for tax_id
4. Update tests
5. Deploy new version
6. Wait for customer maintenance window
Q01:
1. INSERT INTO TB_COST (DIM_COD, COD_COST, TIPO_CAMPO, ...)
VALUES ('PRD', 'XPRD25', 'varchar(50)', ...)
2. Done - immediately available via API
Deployment eliminated. Changes take effect immediately.
3. Multi-Tenant Customization
Different tenants need different fields on the same entity.
Traditional:
- Create "custom_field_1", "custom_field_2" columns (ugly)
- Maintain separate code branches per tenant (nightmare)
- Use JSON columns (loses type safety and query performance)
Q01:
- Define tenant-specific fields in metadata with COD_AMBIENTE filtering
- Core APIs automatically show/hide fields based on tenant context
- Type-safe, performant, clean schema
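As a rough illustration, tenant filtering can be a single predicate over the metadata rows. The sketch below assumes each TB_COST row may carry a COD_AMBIENTE value and that an empty value means the field is global; both conventions are assumptions made for the example.
// Illustrative sketch - COD_AMBIENTE semantics (empty = visible to all tenants) are assumed
function fieldsForTenant(metadataFields, tenantCode) {
  return metadataFields.filter(f =>
    !f.COD_AMBIENTE || f.COD_AMBIENTE === tenantCode
  );
}

// Example: only fields defined for the session's tenant (plus global fields) survive
// const visibleFields = fieldsForTenant(metadata.fields, session.codAmbiente);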
4. Permission Complexity Made Simple
Scenario: Junior users can see prices but not costs. Senior users see both.
Traditional:
// Hardcoded permission checks scattered everywhere ❌
router.get('/api/products/:id', auth, async (req, res) => {
  const product = await db.getProduct(req.params.id);

  // Permission logic hardcoded in the application
  if (req.user.role !== 'senior') {
    delete product.cost;
    delete product.margin;
  }
  res.json(product);
});
Q01:
-- Metadata defines permissions ✅
-- In TB_COST, set COD_UTENTE for sensitive fields:
UPDATE TB_COST
SET COD_UTENTE = '50' -- Only users with peso >= 50 see this
WHERE COD_COST IN ('XPRD_COST', 'XPRD_MARGIN');
-- Core APIs automatically filter based on user's peso from JWT
-- No application code needed
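A possible shape for that filtering step is sketched below. It assumes the JWT middleware has already placed the user's peso on req.user, and that COD_UTENTE on a TB_COST row holds the minimum peso required to see that field; treat both as assumptions.
// Illustrative sketch - COD_UTENTE read as "minimum peso required" (assumption)
function filterByPeso(record, metadataFields, userPeso) {
  const visible = {};
  for (const field of metadataFields) {
    const minPeso = field.COD_UTENTE ? Number(field.COD_UTENTE) : 0;
    if (userPeso >= minPeso && field.COD_COST in record) {
      visible[field.COD_COST] = record[field.COD_COST];
    }
  }
  return visible;
}

// Example: a user with peso 30 never receives XPRD_COST or XPRD_MARGIN
// res.json(filterByPeso(row, metadata.fields, req.user.peso));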
5. Consistent Validation Everywhere
Traditional:
// Different endpoints implement validation differently ❌
// /api/products - uses Joi
router.post('/api/products', validate(productSchema), ...);
// /api/orders - uses express-validator
router.post('/api/orders', [
  body('amount').isNumeric(),
  body('customer').notEmpty()
], ...);

// /api/customers - manual validation
router.post('/api/customers', (req, res) => {
  if (!req.body.email) return res.status(400)...
});
Q01:
All validation defined in TB_COST:
- TIPO_CAMPO: Data type (int, varchar, date, AC|, VARS|)
- CHECK_CAMPO: Validation regex or enum
- OBBLIGATORIO: Required flag
- COD_ON_OFF: Context-specific required (N=required on create, M=required on edit)
One validation engine enforces all rules consistently.
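A stripped-down version of such an engine might look like the sketch below. The flag conventions ('S' meaning required for OBBLIGATORIO, CHECK_CAMPO holding a regex) are assumptions made for the example and may differ from the real metadata conventions.
// Illustrative sketch of a metadata-driven validator (field conventions assumed)
function validateAgainstMetadata(body, metadata) {
  const errors = [];
  for (const field of metadata.fields) {
    const value = body[field.COD_COST];
    const empty = value === undefined || value === null || value === '';

    // OBBLIGATORIO: required flag ('S' assumed to mean required)
    if (field.OBBLIGATORIO === 'S' && empty) {
      errors.push(`${field.COD_COST} is required`);
      continue;
    }
    if (empty) continue;

    // TIPO_CAMPO: basic type check
    if (field.TIPO_CAMPO === 'int' && !Number.isInteger(Number(value))) {
      errors.push(`${field.COD_COST} must be an integer`);
    }

    // CHECK_CAMPO: pattern check (assumed to contain a regex)
    if (field.CHECK_CAMPO && !new RegExp(field.CHECK_CAMPO).test(String(value))) {
      errors.push(`${field.COD_COST} does not match the expected format`);
    }
  }
  return errors;
}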
6. Field Visibility by Context
Scenario: Some fields visible in list view, others only in detail view.
Traditional:
// Hardcoded field selection ❌
router.get('/api/products', (req, res) => {
  // List view - hardcoded subset
  const sql = 'SELECT id, name, price FROM products';
});

router.get('/api/products/:id', (req, res) => {
  // Detail view - hardcoded full set
  const sql = 'SELECT id, name, price, description, cost, supplier FROM products WHERE id = ?';
});
Q01:
-- Metadata defines visibility ✅
-- COD_ON_OFF context flags (one character per context):
-- L = List view
-- D = Detail view
-- N = New form
-- M = Modify form
-- R = Search form
UPDATE TB_COST
SET COD_ON_OFF = 'LD' -- Show in list and detail
WHERE COD_COST IN ('XPRD01', 'XPRD02'); -- name, price
UPDATE TB_COST
SET COD_ON_OFF = 'D' -- Show ONLY in detail
WHERE COD_COST = 'XPRD04'; -- description
-- Core APIs automatically select fields based on center_dett parameter
GET /api/v4/core/PRD?center_dett=visualizza → Returns only 'L' fields
GET /api/v4/core/PRD/123?center_dett=dettaglio → Returns 'D' and 'L' fields
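One way to turn those flags into a SELECT list is sketched below; the center_dett-to-flag mapping mirrors the two requests above, and the helper names are assumptions for the example.
// Illustrative sketch - mapping of center_dett values to COD_ON_OFF flags is assumed
const CONTEXT_FLAGS = {
  visualizza: ['L'],       // list view
  dettaglio:  ['L', 'D'],  // detail view
};

function columnsForContext(metadataFields, centerDett) {
  const wanted = CONTEXT_FLAGS[centerDett] || ['L'];
  return metadataFields
    .filter(f => wanted.some(flag => (f.COD_ON_OFF || '').includes(flag)))
    .map(f => f.COD_COST);
}

// Example: build the projection for the requested context
// const columns = columnsForContext(metadata.fields, req.query.center_dett);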
7. Database Engine Abstraction
Traditional:
// Hardcoded for specific database ❌
const pgSql = `
  INSERT INTO products (id, name, created_at)
  VALUES (NEXTVAL('product_seq'), ?, NOW())  -- PostgreSQL-specific sequence
`;

// To support MySQL, a different statement is needed:
const mysqlSql = `
  INSERT INTO products (name, created_at)
  VALUES (?, NOW())  -- MySQL auto-increment
`;
Q01:
// Database abstraction via metadata ✅
// Core APIs detect database type and generate appropriate SQL:
//
// MySQL: Uses AUTO_INCREMENT
// Oracle: Uses sequences from getSequence()
//
// Application doesn't know or care which database is used
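The sketch below shows one way such engine detection could drive ID handling. The db.engine property, the sequenceName metadata attribute, and the shape of the getSequence() helper are all assumptions made for the example, not the actual Core API internals.
// Illustrative sketch - engine detection and helper signatures are assumed
async function getSequence(db, sequenceName) {
  // Assumed shape of the sequence helper referenced above (Oracle);
  // db.execute is assumed to return an array of rows for SELECTs
  const rows = await db.execute(`SELECT ${sequenceName}.NEXTVAL AS ID FROM DUAL`);
  return rows[0].ID;
}

async function buildInsertWithId(db, metadata, columns, params) {
  const placeholders = columns.map(() => '?').join(', ');

  if (db.engine === 'oracle') {
    // Oracle: fetch the next sequence value up front and insert it explicitly
    const nextId = await getSequence(db, metadata.sequenceName);
    return {
      sql: `INSERT INTO ${metadata.tableName} (ID, ${columns.join(', ')}) VALUES (?, ${placeholders})`,
      params: [nextId, ...params],
    };
  }

  // MySQL: rely on AUTO_INCREMENT and read insertId from the driver result
  return {
    sql: `INSERT INTO ${metadata.tableName} (${columns.join(', ')}) VALUES (${placeholders})`,
    params,
  };
}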
8. Cascading Operations Without Code
Scenario: Deleting an order should delete all line items.
Traditional:
// Hardcoded cascade logic ❌
async function deleteOrder(orderId) {
  await db.execute('DELETE FROM order_items WHERE order_id = ?', [orderId]);
  await db.execute('DELETE FROM order_notes WHERE order_id = ?', [orderId]);
  await db.execute('DELETE FROM order_shipments WHERE order_id = ?', [orderId]);
  await db.execute('DELETE FROM orders WHERE id = ?', [orderId]);
  // Miss one? Data orphaned. Change structure? Update code.
}
Q01:
// Metadata-driven cascades ✅
// Define DELETE_CASCADE in metadata:
{
  "DELETE_CASCADE": {
    "srcKey": "ORDER_ID",
    "applyTo": {
      "ORDERITEMS": {"destKey": "ORDER_ID"},
      "ORDERNOTES": {"destKey": "ORDER_ID"},
      "ORDERSHIPMENTS": {"destKey": "ORDER_ID"}
    }
  }
}
// Core APIs automatically cascade based on metadata
// Add new related table? Update metadata, not code.
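Executing that metadata could look roughly like the sketch below; the transactional db helpers (beginTransaction/commit/rollback) are assumptions, and the real Core APIs may process cascades differently.
// Illustrative sketch - the db transaction API is assumed
async function cascadeDelete(db, tableName, cascade, keyValue) {
  await db.beginTransaction();
  try {
    // Children first, exactly as declared in DELETE_CASCADE.applyTo
    for (const [childTable, rule] of Object.entries(cascade.applyTo)) {
      await db.execute(`DELETE FROM ${childTable} WHERE ${rule.destKey} = ?`, [keyValue]);
    }
    // Then the parent row
    await db.execute(`DELETE FROM ${tableName} WHERE ${cascade.srcKey} = ?`, [keyValue]);
    await db.commit();
  } catch (err) {
    await db.rollback();
    throw err;
  }
}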
9. Audit Trail Automation
Traditional:
// Manual audit logging ❌
async function updateProduct(id, data) {
  const old = await db.getProduct(id);
  await db.updateProduct(id, data);
  const newData = await db.getProduct(id);

  // Manual audit
  await auditLog.create({
    entity: 'products',
    entityId: id,
    before: old,
    after: newData,
    user: currentUser,
    timestamp: new Date()
  });
}
Q01:
// Automatic audit via outbox pattern ✅
// Every write automatically:
// 1. Stores change in TB_ANAG_OUTBOX00
// 2. Publishes to RabbitMQ
// 3. Includes full before/after state
// 4. Tags with user, tenant, timestamp
//
// No application code needed
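The essence of that outbox step is sketched below. Only the table name TB_ANAG_OUTBOX00 comes from the description above; the column names, the transaction helpers, and the separate relay that forwards rows to RabbitMQ are assumptions made for the example.
// Illustrative sketch - outbox columns and the db transaction API are assumed
async function writeWithOutbox(db, ctx, writeSql, writeParams, before, after) {
  await db.beginTransaction();
  try {
    // 1. The business write
    await db.execute(writeSql, writeParams);

    // 2. The audit record, in the SAME transaction, so it cannot be lost
    //    (MySQL-style NOW() used for brevity)
    await db.execute(
      'INSERT INTO TB_ANAG_OUTBOX00 (PAYLOAD, UTENTE, AMBIENTE, CREATED_AT) VALUES (?, ?, ?, NOW())',
      [JSON.stringify({ before, after }), ctx.user, ctx.tenant]
    );

    await db.commit();
  } catch (err) {
    await db.rollback();
    throw err;
  }
  // 3. A relay process reads TB_ANAG_OUTBOX00 and publishes each row to RabbitMQ
}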
10. Multi-Language Support
Traditional:
// Hardcoded labels ❌
const labels = {
  en: {productName: "Product Name", price: "Price"},
  it: {productName: "Nome Prodotto", price: "Prezzo"}
};
Q01:
-- Labels in metadata ✅
-- TB_COST.DESC_COST stores label
-- TB_LINGUA provides translations
-- Core APIs return localized labels based on session lingua
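A label lookup along those lines is sketched below. Only TB_COST.DESC_COST and the TB_LINGUA table name come from the description above; the TB_LINGUA column names (COD_COST, LINGUA, DESC_TRAD) and the rows-array return shape of db.execute are assumptions for the example.
// Illustrative sketch - TB_LINGUA column names are assumed
async function labelFor(db, codCost, lingua) {
  // Prefer a translation for the session language
  const translated = await db.execute(
    'SELECT DESC_TRAD FROM TB_LINGUA WHERE COD_COST = ? AND LINGUA = ?',
    [codCost, lingua]
  );
  if (translated.length > 0) return translated[0].DESC_TRAD;

  // Fall back to the default label stored on the metadata row
  const fallback = await db.execute(
    'SELECT DESC_COST FROM TB_COST WHERE COD_COST = ?',
    [codCost]
  );
  return fallback.length > 0 ? fallback[0].DESC_COST : codCost;
}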
Cost-Benefit Analysis
Initial Investment
- Learning curve - developers must understand metadata-driven concepts
- Metadata design - requires upfront thought about the data model
- Tool development - a UI for metadata management is needed
Long-Term Returns
- 90% reduction in API code - one endpoint handles all entities
- Zero-deployment changes - field/entity additions via metadata
- Consistent behavior - one validation/permission/audit engine
- Faster feature delivery - new entities in minutes, not sprints
- Easier maintenance - change rules in metadata, not scattered code
- Multi-tenant flexibility - per-tenant customization without code branches
ROI is positive after ~3 months for most teams.
When Metadata-Driven Makes Sense
✅ Good Fit:
- Multi-tenant SaaS platforms
- Configurable enterprise software
- Rapid prototyping requirements
- Frequent schema changes expected
- Complex permission models
- Audit trail requirements
- Multi-database support needed
❌ Not Ideal:
- Simple CRUD with stable schema
- Performance-critical real-time systems (metadata lookup overhead)
- Greenfield projects with unknown requirements (metadata design requires clarity)
Summary
Metadata-driven architecture provides:
- Agility - add features by configuring metadata, not deploying code
- Consistency - one engine enforces all validation/permissions/audit rules
- Flexibility - runtime changes without downtime
- Multi-tenancy - tenant-specific customization without code branches
- Maintainability - change behavior in metadata, not scattered code
The trade-off is upfront complexity in understanding metadata concepts. But once mastered, productivity and consistency gains are substantial.
Next Steps
- Understand the Architecture Overview - how Core APIs implement this
- Learn the CQRS Pattern - read/write separation
- Explore Core Concepts - master the metadata model