
Architecture Patterns

Real-world deployment patterns showing where ekoDB fits in your stack. Choose the pattern that matches your workload.


The Core Principle

Separate data processing from data serving. Your app should run fast while your backend handles the complexity of large datasets.

┌─────────────────────────────────────────────────────────────────────┐
│                        Data Tiering Model                           │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  Data Lake/Warehouse   →   Projection Layer    →    App Layer       │
│  (Snowflake, Databricks)   (PostgreSQL, ekoDB)      (ekoDB/Redis)   │
│                                                                     │
│  ▪ All data                ▪ Scoped subsets         ▪ Hot data      │
│  ▪ Batch processing        ▪ By user/group/project  ▪ Sub-ms reads  │
│  ▪ Analytics               ▪ Transformed for app    ▪ Sessions      │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

Not every app needs all three tiers. Match complexity to requirements.


Pattern 1: Simple CRUD

Best for: MVPs, internal tools, straightforward applications

┌──────────────┐         ┌──────────────┐
│              │         │              │
│    Client    │ ◄─────► │    ekoDB     │
│   (Web/App)  │         │  (Primary)   │
│              │         │              │
└──────────────┘         └──────────────┘

Configuration:

  • Storage Mode: fast or balanced
  • Durability: durable for production

When to use:

  • Data fits on a single node
  • Queries are straightforward
  • No complex analytics requirements
  • Team wants minimal operational overhead
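
In this pattern the client talks to a single document store and nothing else. The shape of that interaction can be sketched against an in-memory stand-in (the `MockClient` class and its `insert`/`find`/`update`/`remove` method names are illustrative only, not the real ekoDB SDK, and the real client would be promise-based):

```javascript
// In-memory stand-in for an ekoDB-style document client. The real SDK is
// asynchronous; a synchronous mock keeps the single-node CRUD shape visible.
class MockClient {
  constructor() { this.collections = new Map(); }
  coll(name) {
    if (!this.collections.has(name)) this.collections.set(name, new Map());
    return this.collections.get(name);
  }
  insert(name, id, doc) { this.coll(name).set(id, { ...doc }); return id; }
  find(name, id) { return this.coll(name).get(id) ?? null; }
  update(name, id, patch) {
    const doc = this.coll(name).get(id);
    if (doc) Object.assign(doc, patch);
    return doc;
  }
  remove(name, id) { return this.coll(name).delete(id); }
}

// Typical single-node CRUD flow for an internal tool.
const client = new MockClient();
client.insert("users", "u1", { name: "Ada", role: "admin" });
client.update("users", "u1", { role: "viewer" });
const user = client.find("users", "u1");
```

The point of the pattern is that this is the whole stack: no cache tier, no projection layer, one process to operate.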

Example: Internal Dashboard

┌─────────────────┐       ┌─────────────────┐
│  React Admin    │       │      ekoDB      │
│  Dashboard      │◄─────►│                 │
│                 │       │  ▪ Users        │
│  ▪ User mgmt    │ REST  │  ▪ Settings     │
│  ▪ Settings     │  +    │  ▪ Audit logs   │
│  ▪ Reports      │  WS   │  ▪ Reports      │
└─────────────────┘       └─────────────────┘

📚 See Basic Operations for CRUD examples and Storage Modes for configuration.


Pattern 2: Social Media Platform

Best for: High read volume, real-time feeds, user-generated content

┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│  ┌──────────┐    ┌──────────────┐    ┌──────────────┐    ┌───────┐  │
│  │ Warehouse│    │  PostgreSQL  │    │    ekoDB     │    │ Client│  │
│  │  (Spark) │───►│  (Primary)   │───►│  (Cache +    │◄──►│  Apps │  │
│  │          │    │              │    │  Real-time)  │    │       │  │
│  └──────────┘    └──────────────┘    └──────────────┘    └───────┘  │
│       │                                     │                       │
│       │ Analytics &                         │ Live feeds            │
│       └───────── ML Pipeline                └──── WebSocket         │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

Data Flow:

Layer         Technology        Purpose
Cold Storage  Spark/Databricks  Historical analytics, ML training
Primary       PostgreSQL        User accounts, relationships, content
Hot Layer     ekoDB             Feed cache, sessions, real-time notifications
Client        Web/Mobile        User-facing applications

ekoDB Role:

  • KV Store: User sessions, auth tokens (Fast mode)
  • Document Store: Feed cache by user/group (scoped projections)
  • WebSocket: Real-time notifications, typing indicators
  • Vector Search: Content recommendations

Example Configuration:

{
  "storage_mode": "fast",
  "durable_operations": false,
  "collections": {
    "sessions": { "ttl": "24h" },
    "feed_cache": { "ttl": "5m" },
    "notifications": { "ttl": "7d" }
  }
}

Why this works:

  • PostgreSQL handles writes and complex queries
  • ekoDB serves read-heavy feed requests at 200K+ ops/sec
  • Scoped caching: each user sees only their feed, not the entire dataset
  • WebSocket eliminates polling for real-time features
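
The scoped-caching point can be sketched as a read-through cache keyed per user, with a TTL mirroring the `feed_cache: "5m"` setting above (the `loadFeedFromPrimary` callback is a hypothetical stand-in for a PostgreSQL query; this is the cache-aside idea, not ekoDB internals):

```javascript
// Read-through feed cache: each user's feed lives under its own key,
// so a cache entry only ever holds that user's scoped subset of data.
const FEED_TTL_MS = 5 * 60 * 1000; // mirrors the "5m" feed_cache TTL above
const cache = new Map();           // key -> { value, expiresAt }

function getFeed(userId, loadFeedFromPrimary, now = Date.now()) {
  const key = `feed:${userId}`;
  const hit = cache.get(key);
  if (hit && hit.expiresAt > now) return { feed: hit.value, fromCache: true };
  const feed = loadFeedFromPrimary(userId); // e.g. a PostgreSQL query
  cache.set(key, { value: feed, expiresAt: now + FEED_TTL_MS });
  return { feed, fromCache: false };
}
```

On a hit the primary is never touched, which is what lets the read-heavy feed path run at cache speed while PostgreSQL only sees misses.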

📚 See Key-Value Store for session management and Basic Operations for real-time subscriptions.


Pattern 3: Financial Services

Best for: Transactions, audit compliance, strong consistency requirements

┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   ┌──────────────┐      ┌──────────────┐      ┌──────────────┐      │
│   │  PostgreSQL  │      │    ekoDB     │      │    ekoDB     │      │
│   │   (Ledger)   │─────►│ (Read Cache) │      │ (Audit Logs) │      │
│   │              │      │              │      │  Cold Mode   │      │
│   └──────┬───────┘      └──────┬───────┘      └──────┬───────┘      │
│          │                     │                     │              │
│          ▼                     ▼                     ▼              │
│   ┌─────────────────────────────────────────────────────┐           │
│   │                      API Gateway                    │           │
│   └─────────────────────────────────────────────────────┘           │
│                               │                                     │
│                               ▼                                     │
│                       ┌──────────────┐                              │
│                       │   Clients    │                              │
│                       └──────────────┘                              │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

Data Flow:

Layer       Technology  Mode            Purpose
Ledger      PostgreSQL  ACID            Core transactions, balances
Read Cache  ekoDB       Fast + Durable  Account lookups, balance checks
Audit       ekoDB       Cold + Durable  Immutable transaction history

Critical Configuration:

{
  "ledger_cache": {
    "storage_mode": "fast",
    "durable_operations": true
  },
  "audit_logs": {
    "storage_mode": "cold",
    "durable_operations": true
  }
}

Why Cold Mode for Audit:

  • Append-only storage optimized for write throughput
  • Compact binary format for efficient disk usage
  • Immutable once written
  • Regulatory compliance friendly

Security Configuration:

  • AES-256-GCM encryption at rest
  • TLS/SSL in transit (HTTPS/WSS only)
  • JWT authentication with short TTL
  • Field-level access control

Example: Transaction Flow

1. Client requests transfer
2. API validates via ekoDB (session check, rate limit)
3. PostgreSQL executes transaction (ACID)
4. PostgreSQL confirms commit
5. ekoDB cache invalidated/updated
6. ekoDB Cold logs audit entry (immutable)
7. Client receives confirmation
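
The ordering of those steps is the important part: the cache is invalidated and the audit entry appended only after the ledger commit succeeds. A sketch with in-memory stand-ins for each tier (none of these objects are real ekoDB or PostgreSQL APIs) makes the ordering testable:

```javascript
// In-memory stand-ins for the three tiers in the transfer flow above.
const ledger = new Map([["alice", 100], ["bob", 50]]); // PostgreSQL ledger
const balanceCache = new Map();                        // ekoDB read cache
const auditLog = [];                                   // ekoDB cold-mode audit

function transfer(from, to, amount) {
  // 2. Validate (session and rate-limit checks would also happen here)
  if ((ledger.get(from) ?? 0) < amount) throw new Error("insufficient funds");
  // 3-4. Ledger executes and commits the transaction
  ledger.set(from, ledger.get(from) - amount);
  ledger.set(to, (ledger.get(to) ?? 0) + amount);
  // 5. Invalidate the read cache only after the commit succeeds
  balanceCache.delete(from);
  balanceCache.delete(to);
  // 6. Append an immutable audit entry
  auditLog.push(Object.freeze({ from, to, amount, at: Date.now() }));
  // 7. Confirmation back to the client
  return { ok: true };
}
```

A failed validation returns before any state changes, so a rejected transfer never dirties the cache or the audit trail.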

📚 See Transactions for ACID operations and Security for encryption details.


Pattern 4: E-Commerce Platform

Best for: Product catalogs, shopping carts, search, recommendations

┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│  ┌──────────────┐         ┌──────────────────────────────────────┐  │
│  │  Warehouse   │         │                ekoDB                 │  │
│  │ (Analytics)  │         │  ┌────────────┐  ┌────────────────┐  │  │
│  │              │────────►│  │  Products  │  │ Sessions/Cart  │  │  │
│  │  ▪ Sales     │         │  │ (Catalog)  │  │   (KV TTL)     │  │  │
│  │  ▪ Inventory │         │  └────────────┘  └────────────────┘  │  │
│  │  ▪ Trends    │         │  ┌────────────┐  ┌────────────────┐  │  │
│  └──────────────┘         │  │   Search   │  │  Vectors for   │  │  │
│                           │  │ (Full-text)│  │ Recommendations│  │  │
│                           │  └────────────┘  └────────────────┘  │  │
│                           └──────────────────────────────────────┘  │
│                                             │                       │
│                                             ▼                       │
│                                     ┌──────────────┐                │
│                                     │  Storefront  │                │
│                                     └──────────────┘                │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

ekoDB handles everything in one binary:

Feature              Traditional Stack      ekoDB
Product catalog      PostgreSQL             Document collections
Product search       Elasticsearch          Built-in full-text
Recommendations      Pinecone + ML service  Built-in vector search
Shopping cart        Redis                  KV with TTL
Sessions             Redis                  KV with TTL
Real-time inventory  Kafka + Redis          WebSocket subscriptions

Example: Product Document

{
  "sku": "WIDGET-001",
  "name": "Premium Widget",
  "description": "High-quality widget for all your needs",
  "price": 29.99,
  "inventory": 150,
  "categories": ["widgets", "premium"],
  "embedding": [0.1, 0.2, 0.3, ...]  // 384-dim for recommendations
}

Search + Recommendations in one query:

// Full-text search
const results = await client.search("products", {
text: "premium widget",
fields: ["name", "description"],
fuzzy: true
});

// Similar products (vector search via search method)
const similar = await client.search("products", {
vector: currentProduct.embedding,
vector_field: "embedding",
vector_k: 5
});

Cart with TTL:

await client.kvSet(`cart:${userId}`, cartData, 86400); // TTL in seconds (24h)
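
The TTL semantics that `kvSet` relies on can be sketched with a timestamped map (a stand-in for the idea, not the real KV engine — real stores typically combine lazy expiry like this with background eviction):

```javascript
// Minimal TTL key-value sketch mirroring kvSet(key, value, ttlSeconds).
const kv = new Map(); // key -> { value, expiresAt }

function kvSet(key, value, ttlSeconds, now = Date.now()) {
  kv.set(key, { value, expiresAt: now + ttlSeconds * 1000 });
}

function kvGet(key, now = Date.now()) {
  const entry = kv.get(key);
  if (!entry) return null;
  if (entry.expiresAt <= now) { kv.delete(key); return null; } // lazy expiry
  return entry.value;
}

// A cart written with a 24h TTL is readable inside the window, gone after.
kvSet("cart:u42", { items: ["WIDGET-001"] }, 86400, 0);
```

Because abandoned carts simply age out, no cleanup job is needed, which is why this data is a good fit for fast + non-durable storage.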

Storage Mode Configuration:

{
  "products": {
    "storage_mode": "balanced",
    "durable_operations": true
  },
  "sessions_cart": {
    "storage_mode": "fast",
    "durable_operations": false
  },
  "order_history": {
    "storage_mode": "cold",
    "durable_operations": true
  },
  "inventory_updates": {
    "storage_mode": "fast",
    "durable_operations": true
  }
}

Data Type      Mode      Durability  Why
Products       balanced  true        Read-heavy but updates matter, large catalog
Sessions/Cart  fast      false       Ephemeral, TTL-based, regenerable
Order History  cold      true        Append-only, compliance, archival
Inventory      fast      true        Real-time updates, must persist

📚 See Storage Modes for detailed mode comparisons and Basic Operations for CRUD examples.


Pattern 5: IoT / Time-Series

Best for: Sensor data, logs, metrics, high write throughput

┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   ┌──────────────┐      ┌──────────────┐      ┌──────────────┐      │
│   │   Devices    │      │    ekoDB     │      │  Warehouse   │      │
│   │  (Sensors)   │─────►│  Cold Mode   │─────►│  (Archival)  │      │
│   │              │      │ (Ingestion)  │      │              │      │
│   └──────────────┘      └──────┬───────┘      └──────────────┘      │
│                                │                                    │
│    TCP/WebSocket               │                                    │
│    High throughput             ▼                                    │
│                         ┌──────────────┐                            │
│                         │    ekoDB     │                            │
│                         │  Fast Mode   │◄───── Dashboard/Alerts     │
│                         │ (Real-time)  │                            │
│                         └──────────────┘                            │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

Dual-Node Strategy:

Node       Mode                Purpose
Ingestion  Cold + Non-Durable  Maximum write throughput
Query      Fast + Durable      Real-time dashboards, alerts

Why Cold Mode for Ingestion:

  • Append-only storage optimized for sequential writes
  • Minimal disk I/O overhead
  • Automatic archival to warehouse via Ripple

Configuration:

{
  "ingestion_node": {
    "storage_mode": "cold",
    "durable_operations": false,
    "ripple": {
      "targets": ["query_node", "warehouse"]
    }
  },
  "query_node": {
    "storage_mode": "fast",
    "durable_operations": true
  }
}

Data Flow:

Sensor → ekoDB Cold (ingest) → Ripple → ekoDB Fast (query)
                             → Ripple → Snowflake (archive)
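
One way to picture the ingestion side is an append-only buffer that seals readings into fixed-size chunks and ships each sealed chunk downstream, the way the diagram's ingestion node feeds the query node and warehouse. The chunk size and `onFlush` callback below are illustrative, not ekoDB internals:

```javascript
// Append-only chunk buffer: sensors append, full chunks are flushed
// downstream (to the query node / warehouse in the flow above).
class ChunkBuffer {
  constructor(chunkSize, onFlush) {
    this.chunkSize = chunkSize;
    this.onFlush = onFlush;
    this.current = [];
  }
  append(reading) {
    this.current.push(reading);            // sequential, append-only write
    if (this.current.length >= this.chunkSize) this.flush();
  }
  flush() {
    if (this.current.length === 0) return;
    this.onFlush(this.current);            // ship the sealed chunk downstream
    this.current = [];
  }
}
```

Batching sequential appends like this is what cold mode's write-throughput advantage rests on: the disk only ever sees large sequential writes, never random updates.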

📚 See Ripples for multi-node sync and Storage Modes for cold mode tuning.


Pattern 6: AI / RAG Application

Best for: LLM applications, semantic search, chat with memory

┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   ┌──────────────┐      ┌──────────────────────────────────┐        │
│   │  Documents   │      │              ekoDB               │        │
│   │  (Upload)    │─────►│  ┌────────────┐  ┌────────────┐  │        │
│   │              │      │  │  Vectors   │  │    Chat    │  │        │
│   └──────────────┘      │  │ (Embeddings│  │  History   │  │        │
│                         │  │  + Search) │  │ (Branching)│  │        │
│   ┌──────────────┐      │  └────────────┘  └────────────┘  │        │
│   │     LLM      │◄────►│  ┌────────────┐  ┌────────────┐  │        │
│   │   (OpenAI/   │      │  │   KV for   │  │  Document  │  │        │
│   │    Claude)   │      │  │  Sessions  │  │  Metadata  │  │        │
│   └──────────────┘      │  └────────────┘  └────────────┘  │        │
│                         └──────────────────────────────────┘        │
│                                          │                          │
│                                          ▼                          │
│                                  ┌──────────────┐                   │
│                                  │   Chat UI    │                   │
│                                  └──────────────┘                   │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

ekoDB provides the complete RAG stack:

Component       Traditional        ekoDB
Vector store    Pinecone/Weaviate  Built-in vector search
Document store  PostgreSQL         Document collections
Chat history    Redis/PostgreSQL   Built-in chat with branching
Session state   Redis              KV with TTL
Metadata        PostgreSQL         Same document

RAG Workflow:

// 1. Store documents with embeddings
await client.insert("knowledge_base", {
content: "ekoDB is a multi-model database...",
embedding: await embed(content), // 384-dim
source: "documentation",
updated: new Date()
});

// 2. Semantic search for context
const context = await client.search("knowledge_base", {
vector: await embed(userQuery),
vector_field: "embedding",
vector_k: 5
});

// 3. Chat with memory (built-in branching)
const response = await client.chatMessage(sessionId, {
message: userQuery,
});
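
Conceptually, the `vector_k` query in step 2 ranks documents by similarity to the query embedding and keeps the top k. A minimal brute-force version with cosine similarity shows the idea (a sketch only — a real engine like ekoDB's would use an approximate-nearest-neighbor index rather than scanning every document):

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Brute-force top-k: score every document, sort descending, keep k.
function topK(query, docs, k) {
  return docs
    .map((doc) => ({ doc, score: cosine(query, doc.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The k results become the context window passed to the LLM in step 3.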

Chat Branching:

ekoDB's built-in chat system supports conversation branching:

Main conversation
├── Branch A (what-if scenario)
│   └── Continue exploring...
└── Branch B (alternative approach)
    └── Different direction...
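
The tree above can be modeled as messages that each point at a parent message: any message id then identifies a complete branch, and replaying it is a walk up the parent chain. This is a data-model sketch, not ekoDB's chat API — the function names are ours:

```javascript
// Conversation as a tree: each message points at its parent, so one
// message id identifies a full branch back to the root.
const messages = new Map(); // id -> { id, parentId, text }
let nextId = 1;

function addMessage(parentId, text) {
  const id = `m${nextId++}`;
  messages.set(id, { id, parentId, text });
  return id;
}

// Replay a branch from the root down to the given leaf.
function branchHistory(leafId) {
  const history = [];
  for (let id = leafId; id !== null; id = messages.get(id).parentId) {
    history.unshift(messages.get(id).text);
  }
  return history;
}

// Main conversation, then two branches forking off the same message.
const root = addMessage(null, "How should we tier the data?");
const branchA = addMessage(root, "What if we skip the warehouse?");
const branchB = addMessage(root, "What about a read replica instead?");
```

Forking is free: a new branch is just a new message pointing at an older parent, and the shared prefix is never copied.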

Storage Mode Configuration:

{
  "knowledge_base": {
    "storage_mode": "balanced",
    "durable_operations": true
  },
  "chat_history": {
    "storage_mode": "cold",
    "durable_operations": true
  },
  "sessions": {
    "storage_mode": "fast",
    "durable_operations": false
  },
  "embeddings_cache": {
    "storage_mode": "fast",
    "durable_operations": true
  }
}

Data Type       Mode      Durability  Why
Knowledge Base  balanced  true        Large documents, occasional updates, must persist
Chat History    cold      true        Append-only conversations, audit trail
Sessions        fast      false       Ephemeral user state, TTL-based
Embeddings      fast      true        Frequently accessed vectors, expensive to regenerate

📚 See Chat & RAG for conversation management and Vector Search for embedding operations.


Storage Mode Quick Reference

Choose the right combination for your data:

Mode      Durable  Best For                                    Trade-off                                          Throughput
fast      true     User data, account state, critical records  Highest durability + speed                         ~200K ops/sec
fast      false    Sessions, caches, ephemeral data            Maximum speed, data loss on crash OK               ~250K ops/sec
balanced  true     Large datasets, production workloads        Good balance for mixed read/write                  ~150K ops/sec
balanced  false    Development, bulk imports, migrations       Faster imports, not for production                 ~180K ops/sec
cold      true     Audit logs, compliance, archival            Append-only, immutable, significant space savings  ~100K writes/sec
cold      false    High-throughput ingestion, metrics, logs    Maximum write speed, some loss acceptable          ~120K writes/sec

When to use each mode:

  • fast: Data fits in memory, sub-millisecond reads required
  • balanced: Datasets larger than RAM, mixed read/write workloads
  • cold: Write-heavy, append-only, archival or compliance data

Durability guidelines:

  • durable: true: Financial data, user records, anything you can't regenerate
  • durable: false: Caches, sessions, computed data, anything with TTL
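
The two checklists above collapse into a small decision helper: append-only data goes cold, memory-resident data goes fast, everything else balanced, and durability is simply "can I regenerate this?". The trait names below are ours for illustration, not an ekoDB API:

```javascript
// Maps coarse data traits to a storage_mode + durable_operations pair,
// following the quick-reference guidelines above.
function pickStorage({ regenerable, appendOnly, fitsInMemory }) {
  const mode = appendOnly ? "cold" : fitsInMemory ? "fast" : "balanced";
  return { storage_mode: mode, durable_operations: !regenerable };
}

// Sessions: regenerable, in-memory -> fast + non-durable.
const sessions = pickStorage({ regenerable: true, appendOnly: false, fitsInMemory: true });
// Audit logs: irreplaceable, append-only -> cold + durable.
const audit = pickStorage({ regenerable: false, appendOnly: true, fitsInMemory: false });
```

Real deployments add nuance (a hot embeddings cache may be worth making durable purely because regeneration is expensive), but this captures the default.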

📚 See White Paper for configuration options and Performance Benchmarks for detailed metrics.


Decision Matrix

If you need...             Use Pattern      Storage Mode      Durability        Key Features
Simple app, minimal ops    1: Simple CRUD   fast or balanced  true              Single node, straightforward queries
High read volume, feeds    2: Social Media  fast              false for cache   WebSocket, TTL-based sessions
Strong consistency, audit  3: Financial     fast + cold       true always       Ledger cache + immutable audit logs
Search + recommendations   4: E-Commerce    balanced + fast   Mixed             Full-text + vectors + KV cart
High write throughput      5: IoT           cold              false for ingest  Append-only chunks, Ripple sync
LLM/RAG application        6: AI/RAG        balanced + cold   true              Vectors + chat history + sessions

Storage Mode by Data Type

Data Type           Recommended Mode  Durability  Examples
User accounts       fast              true        Profiles, preferences, auth tokens
Sessions/cache      fast              false       Login sessions, API cache, rate limits
Large documents     balanced          true        Product catalogs, knowledge bases
Bulk imports        balanced          false       Data migrations, batch uploads
Audit/compliance    cold              true        Transaction logs, access logs
High-volume writes  cold              false       Metrics, IoT sensors, clickstream

Anti-Patterns

Don't do this:

Anti-Pattern                      Why It Fails                       Instead
Everything in one collection      No isolation, hard to scale        Separate by access pattern
Non-durable for financial data    Data loss on crash                 Always durable: true
Warehouse queries from app        Slow, expensive                    Project to ekoDB first
Polling for real-time             Wasteful, laggy                    Use WebSocket subscriptions
Skipping the cache layer          Database overload at scale         Add ekoDB/Redis for hot data
cold mode for random reads        Optimized for sequential writes    Use balanced or fast
fast + non-durable for user data  Data loss on crash                 Use durable: true
Same mode for all collections     Missed optimization opportunities  Match mode to access pattern
balanced for ephemeral cache      Unnecessary disk I/O               Use fast + non-durable + TTL

Storage Mode Mistakes:

Wrong Choice                Problem                   Correct Choice
cold for user profiles      Slow random reads         fast or balanced
fast for 100GB+ datasets    Memory pressure           balanced or cold
non-durable for orders      Lost transactions         Always durable: true
durable for computed cache  Unnecessary I/O overhead  non-durable + regenerate

Migration Path

Start simple, add complexity as needed:

Stage 1: ekoDB only

Stage 2: ekoDB + separate cache (if needed)

Stage 3: PostgreSQL (writes) + ekoDB (reads/cache)

Stage 4: Warehouse + PostgreSQL + ekoDB (full tiering)

Most applications never need Stage 4. Don't prematurely optimize.

Storage Mode Migration:

Start: fast + durable (simple, safe default)

Scale: balanced + durable (dataset grows beyond RAM)

Optimize: Mixed modes per collection (different access patterns)

Enterprise: cold for audit + fast for cache + balanced for data

See Also

Core Documentation

Advanced Topics

Scaling & Operations