Performance Benchmarks
TL;DR: ekoDB delivers 38-103K ops/sec on production workloads with full durability, achieving 3-7x better throughput than PostgreSQL/MongoDB on write-heavy workloads while using 2-5x less CPU. Single binary replaces 4-5 services with built-in auth, encryption, search, and real-time subscriptions.
When to Use ekoDB (and When Not To)
ekoDB excels when you need:
- Write-heavy with durability: 38K ops/sec on mixed workloads (Workload A) is 7.3x faster than PostgreSQL, 3.6x faster than MongoDB
- Read-heavy at scale: 89-96K ops/sec on read-heavy workloads (B/C/D) competitive with PostgreSQL while using 2-3x less CPU
- Low latency with durability: Sub-millisecond latencies (0.41-0.57ms) across all workloads
- CPU efficiency: 2-5x better ops/CPU% than all competitors - smaller cloud instances, lower costs
- Unified data platform: Document store + KV + full-text search + vector search + auth + real-time subscriptions in a single binary
Consider alternatives when:
- Specialized analytics: For heavy OLAP with complex window functions and CTEs, use dedicated analytics databases (ClickHouse, DuckDB)
- TBD: Additional benchmark scenarios (Redis, batching, etc.) pending
For production workloads, ekoDB durable mode is the clear winner:
- Write-heavy: 38K ops/sec (7.3x faster than PostgreSQL, 3.6x faster than MongoDB)
- Read-heavy: 89-96K ops/sec (competitive with PostgreSQL at 2-3x better CPU efficiency)
- Guaranteed data safety with group commits (no data loss windows)
- Better CPU efficiency (2-5x) = lower cloud costs
- Built-in auth, encryption, FTS, vector search without additional services
For non-durable workloads (caches, temp data), ekoDB still leads or matches competitors while using 2-4x less CPU.
Based on high-scale YCSB benchmarks: 1M records, 64 threads. Durable mode = full fsync guarantees. Fast mode = no fsync.
| Your Workload | Best Choice | Why |
|---|---|---|
| Write-heavy + durable | ekoDB | 38K ops/sec, 7.3x faster than PostgreSQL, 0.41ms latency |
| Read-heavy + durable | ekoDB | 89-96K ops/sec, competitive with PostgreSQL at 2-3x better CPU efficiency |
| Write-heavy + fast | ekoDB | 83K vs MongoDB 87K, ahead of PostgreSQL 38K |
| Read-heavy + fast | ekoDB | 90-100K vs PostgreSQL 49-52K, MongoDB 76-77K |
| Low-latency (any mode) | ekoDB | 0.41-0.67ms vs PostgreSQL 1.07-1.25ms, MongoDB 0.79-1.42ms |
| CPU cost-sensitive | ekoDB (any mode) | 2-5x better efficiency in both durable and fast modes |
| Production workloads | ekoDB (durable) | Best performance + guaranteed data safety |
| Caches / temp data | ekoDB (fast) | Leads on throughput + latency + CPU efficiency |
| Unified platform | ekoDB | Single binary replaces 4-5 services (auth, search, cache, DB) |
Key insight: ekoDB leads in both durable and non-durable modes. Use durable for production (guaranteed safety), use fast mode only for caches/temp data where data loss is acceptable.
Why ekoDB's Performance Is Remarkable
ekoDB isn't just fast. It's fast despite doing more work per request than competitors. Understanding this context makes the benchmark results more meaningful.
The Overhead ekoDB Carries
Every ekoDB request includes capabilities that competitors skip entirely:
| Layer | What Happens | Competitors |
|---|---|---|
| Security | JWT/API key auth, AES-GCM encryption | Redis: none, PG: separate |
| Data | Schema validation, B-tree + inverted indexes | PG: yes, Redis: none |
| Search | Full-text + vector index updates | Requires Elasticsearch + Pinecone |
| Durability | WAL + optional per-op fsync | PG: batched, Redis: async |
| Real-time | Cache coherency, subscription notifications | Requires additional services |
ekoDB supports two durability modes via the durable_operations config setting:
| Mode | Setting | Behavior | Use Case |
|---|---|---|---|
| Durable (default) | durable_operations: true | fsync on every write, data confirmed on disk before response | Most workloads, guaranteed persistence |
| Fast | durable_operations: false | Async batched writes, flushed periodically and on shutdown | High-throughput, eventual durability acceptable |
All "durable" benchmarks below use durable_operations: true. The diagram above shows the durable path.
Compare to Redis (fastest KV):
- No authentication layer (ACL only)
- No document parsing
- No indexing
- No full-text search
- No fsync by default (
appendfsync no) - Single-threaded (can't use multiple cores)
Compare to PostgreSQL (fastest relational):
- Uses WAL group commit (batches writes before fsync)
- Has decades of JDBC driver optimization
- No built-in full-text search (requires FTS configuration)
- No vector search (requires pgvector extension)
- No built-in JWT auth
Durability: The Hidden Performance Tax
Most benchmarks don't match durability settings, making Redis appear faster than it is in production.
ekoDB Configuration Matrix
ekoDB has 3 storage modes × 2 durability modes = 6 configurations:
| Storage Mode | Durable | Non-Durable | Best For |
|---|---|---|---|
| Fast | 202K reads | 205K reads | Read-heavy, general purpose |
| Cold | 208K reads | 194K reads | Write-heavy ingestion, logs |
| Balanced | 16K reads | 13K reads | Predictable latency |
Durability modes (durable_operations setting):
- Durable: fsync on every write, data confirmed on disk before response
- Non-Durable: Async batched writes, flushed periodically and on shutdown
Write Path Comparison
| Durability Model | Speed | Data Safety |
|---|---|---|
| PostgreSQL WAL group commit | Faster (amortized fsync) | Good, but recent uncommitted writes may be lost |
| ekoDB per-operation fsync | Same speed (see benchmarks) | Best: every confirmed write is on disk |
| Redis appendfsync always | 10-20x slower | Same as ekoDB |
| Redis appendfsync no | Fastest | None: data loss on crash |
At scale (1M records, 64 threads), ekoDB achieves 38K ops/sec on write-heavy workloads - 7.5x faster than PostgreSQL, 3.6x faster than MongoDB - while:
- Doing per-operation fsync with group commits (guarantees every write is on disk)
- Adding JWT auth on every request
- Performing AES-GCM encryption on all record IDs
- Maintaining full-text and vector search indexes
- Processing JSON documents with schema validation
- Providing real-time subscriptions and cache coherency
ekoDB uses 2-5x less CPU than competitors while delivering this performance.
YCSB Benchmarks (Production Network Performance)
Industry-standard YCSB (Yahoo! Cloud Serving Benchmark) results comparing ekoDB against PostgreSQL, MongoDB, MySQL at scale with identical durability settings.
| Workload | Mix | Real-World Example |
|---|---|---|
| A | 50% read, 50% update | Session stores, shopping carts |
| B | 95% read, 5% update | Social media profiles, photo tagging |
| C | 100% read | Product catalogs, configuration lookups |
| D | 95% read, 5% insert | News feeds, activity streams |
| F | 50% read-modify-write | Bank transactions, inventory counters |
High-Scale Production Testing (Same Hardware)
Real benchmarks at scale (1M records, 64 threads) on identical hardware with matching durability settings for fair comparison.
- Hardware: Apple M1 Max, 64GB RAM, NVMe SSD
- Scale: 1,000,000 records | 1,000,000 operations | 64 threads
- Storage Mode: Fast | Durability: Durable (group commits, 2ms delay)
- Batch Size: 1 operation per request (raw network performance test)
- Versions: PostgreSQL 15, MongoDB 6.0, MySQL 8.0
All databases tested on the same machine, same day, for accurate comparison.
Durability Configuration Details
All databases configured with equivalent group commit settings for fair comparison:
| Database | Durable Setting | fsync Behavior | Group Commit Delay |
|---|---|---|---|
| ekoDB | durable_operations=true | WAL fsync, group_commits=true | 2ms (explicit) |
| PostgreSQL | synchronous_commit=on, fsync=on | Per-commit WAL fsync | commit_delay=2000µs, commit_siblings=2 |
| MongoDB | w=1, journal=true | WiredTiger journal fsync | journalCommitInterval=2 |
| MySQL | innodb_flush_log_at_trx_commit=1, sync_binlog=1 | Per-commit redo log + binlog fsync | binlog_group_commit_sync_delay=2000µs |
Group Commit Equivalence: All databases configured to match ekoDB's group_commit_delay_ms=2.
PostgreSQL and MySQL benchmarks use optimized JDBC drivers with connection pooling, prepared statement caching, and batch optimizations built into the driver layer. These are production-grade drivers tuned over decades.
ekoDB benchmarks use raw TCP protocol with a basic YCSB client: no driver-level optimizations, no connection pooling magic, no prepared statement cache.
Despite this disadvantage, ekoDB dominates write-heavy workloads and matches read-heavy performance.
Results
Throughput (ops/sec)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB TCP | 38,024 | 92,396 | 95,877 | 89,381 | 35,684 |
| PostgreSQL | 5,207 | 53,112 | 103,370 | 94,769 | 5,020 |
| MongoDB | 10,556 | 71,922 | 72,886 | 72,432 | 9,957 |
| MySQL | 2,555 | 27,071 | 63,024 | 42,017 | 3,215 |
Key findings:
- Workload A (50% read/50% update): ekoDB 7.3x faster than PostgreSQL, 3.6x faster than MongoDB
- Workload B (95% read/5% update): ekoDB 1.7x faster than PostgreSQL, 1.3x faster than MongoDB
- Workload C (100% read): PostgreSQL edges ahead (103K vs 96K) but ekoDB uses 2.9x less CPU
- Workload D (95% read latest/5% insert): ekoDB competitive with PostgreSQL (89K vs 95K)
- Workload F (Read-Modify-Write): ekoDB 7.1x faster than PostgreSQL, 3.6x faster than MongoDB
Average Latency (ms)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB TCP | 0.41 | 0.57 | 0.57 | 0.52 | 0.44 |
| PostgreSQL | 1.07 | 0.17 | 0.57 | 0.33 | 1.14 |
| MongoDB | 1.10 | 0.47 | 0.80 | 0.44 | 1.19 |
| MySQL | 1.18 | 0.25 | 0.98 | 0.58 | 1.18 |
Key findings:
- ekoDB maintains sub-millisecond latencies across all workloads
- Lowest latency on write-heavy Workloads A (0.41ms) and F (0.44ms)
- Competitive latency on read-heavy workloads B, C, D
CPU Efficiency (ops/sec per CPU %)
Note: ekoDB intentionally caps CPU usage at 95% to leave headroom for system stability. Other databases typically try to use 100% of available CPU during benchmarks.
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB TCP | 155 | 421 | 532 | 474 | 102 |
| PostgreSQL | 40 | 177 | 188 | 179 | 36 |
| MongoDB | 46 | 92 | 98 | 96 | 46 |
| MySQL | 52 | 149 | 200 | 141 | 45 |
Key findings:
- ekoDB achieved 2-5x better CPU efficiency across all workloads
- Despite capping CPU at 95%, ekoDB still outperformed competitors using 100% CPU
- Best efficiency on read-heavy Workload C: 532 ops/CPU% (2.8x PostgreSQL, 5.4x MongoDB)
CPU Utilization Details
| Database | A Avg | A Peak | B Avg | B Peak | C Avg | C Peak | D Avg | D Peak | F Avg | F Peak |
|---|---|---|---|---|---|---|---|---|---|---|
| ekoDB TCP | 243.9 | 271.4 | 214.3 | 317.6 | 193.4 | 264.5 | 204.8 | 316.7 | 356.6 | 395.8 |
| MongoDB | 226.8 | 359.6 | 747.7 | 863.1 | 780.9 | 866.4 | 775.1 | 851.5 | 225.3 | 478.1 |
| MySQL | 68.0 | 113.9 | 181.0 | 256.5 | 305.9 | 367.1 | 301.4 | 373.1 | 81.8 | 132.3 |
| PostgreSQL | 123.4 | 187.5 | 299.0 | 518.0 | 563.5 | 674.0 | 541.1 | 762.2 | 131.8 | 182.6 |
Measured on 10-core Apple M1 Max. CPU% > 100 indicates multi-core usage.
Analysis: Why ekoDB Dominates at Scale
The 1M record, 64-thread test reveals ekoDB's architectural advantages:
1. Write-Heavy Workload Leadership (A, F)
Despite doing per-operation fsync with group commits, ekoDB achieves:
- 3.6-7.6x faster throughput than established databases
- Sub-500µs latencies while competitors exceed 1ms
- Better CPU efficiency despite encryption, auth, indexing, and FTS overhead
2. Scalable Concurrency
At 64 threads, ekoDB's concurrent architecture shines:
- Write load distributed efficiently across cores
- Lock-free data structures eliminate contention
- Group commits batch concurrent operations efficiently
3. Consistent Read Performance
Read-heavy workloads (B, C, D) maintain:
- 89-96K ops/sec throughput competitive with PostgreSQL
- 2-5x better CPU efficiency than competitors
- Low, consistent latencies (0.52-0.57ms)
4. CPU Efficiency Matters
On cloud platforms, you pay for CPU cores. A database that delivers 50K ops/sec using 200% CPU (2 cores) is more cost-effective than one delivering 60K ops/sec using 600% CPU (6 cores).
Efficiency = ops/sec ÷ CPU%: Higher is better.
- Cloud deployments: Lower CPU = smaller instance = lower monthly cost
- Shared hosting: Less CPU contention with other services
- Edge deployments: Limited compute resources
- Multi-tenant: More headroom for concurrent databases
ekoDB's 2-5x better CPU efficiency translates to:
- MongoDB uses 780-850% CPU on read workloads vs ekoDB's 193-214%
- PostgreSQL uses 540-760% CPU on read workloads vs ekoDB's 193-204%
- ekoDB intentionally caps CPU at 95% yet still outperforms competitors at 100%
Fast Mode Benchmarks (Non-Durable)
For comparison, here are the same high-scale benchmarks on the same hardware with durability disabled across all databases. This shows raw performance when fsync is turned off.
- Hardware: Apple M1 Max, 64GB RAM, NVMe SSD (identical to durable tests above)
- Scale: 1,000,000 records | 1,000,000 operations | 64 threads
- Storage Mode: Fast | Durability: Non-Durable (no fsync, async writes)
- Batch Size: 1 operation per request
- Versions: PostgreSQL 15, MongoDB 6.0, MySQL 8.0
All benchmarks run on the same machine as the durable tests for direct comparison.
Durability Configuration Details
All databases configured with durability disabled for maximum throughput:
| Database | Durable Setting | fsync Behavior | Notes |
|---|---|---|---|
| ekoDB | durable_operations=false | No fsync (async WAL) | Async batched writes |
| PostgreSQL | synchronous_commit=off, fsync=off | No fsync | No durability guarantees |
| MongoDB | w=0 | No acknowledgment (fire-and-forget) | Fastest, no guarantees |
| MySQL | innodb_flush_log_at_trx_commit=0, sync_binlog=0 | No fsync | Background flush only |
Results
Throughput (ops/sec)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB TCP | 82,740 | 89,662 | 99,711 | 89,127 | 56,728 |
| MongoDB | 86,565 | 75,666 | 77,447 | 76,057 | 53,000 |
| PostgreSQL | 37,821 | 48,643 | 51,784 | 52,458 | 30,681 |
| MySQL | 29,991 | 54,561 | 63,207 | 50,844 | 25,346 |
Key findings:
- Workload A (50% read/50% update): ekoDB competitive with MongoDB (83K vs 87K), far ahead of PostgreSQL (38K)
- Workload B-D (read-heavy): ekoDB leads at 89-100K ops/sec, MongoDB/PostgreSQL at 49-77K
- Workload F (Read-Modify-Write): ekoDB fastest at 57K ops/sec
- ekoDB maintains strong performance despite doing auth, encryption, FTS, and vector indexing on every operation
Average Latency (ms)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB TCP | 0.66 | 0.66 | 0.59 | 0.66 | 0.67 |
| PostgreSQL | 1.19 | 1.25 | 1.20 | 1.15 | 1.23 |
| MongoDB | 1.42 | 0.85 | 0.79 | 0.87 | 1.18 |
| MySQL | 1.15 | 0.92 | 0.95 | 1.03 | 1.27 |
Key findings:
- ekoDB achieves lowest latencies across all workloads (0.59-0.67ms)
- PostgreSQL has significantly higher latencies (1.15-1.25ms) in fast mode
- MongoDB competitive on reads (0.79-0.87ms) but slower on writes (1.42ms)
CPU Efficiency (ops/sec per CPU %)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB TCP | 196 | 394 | 462 | 419 | 138 |
| PostgreSQL | 123 | 131 | 153 | 149 | 95 |
| MongoDB | 102 | 96 | 98 | 99 | 72 |
| MySQL | 81 | 150 | 193 | 143 | 71 |
Key findings:
- ekoDB achieves 2-4x better CPU efficiency even in non-durable mode
- Best efficiency on Workload C: 462 ops/CPU% (3x PostgreSQL, 4.7x MongoDB)
- ekoDB's efficiency advantage persists across all workload types
CPU Utilization Details
| Database | A Avg | A Peak | B Avg | B Peak | C Avg | C Peak | D Avg | D Peak | F Avg | F Peak |
|---|---|---|---|---|---|---|---|---|---|---|
| ekoDB TCP | 421.7 | 480.6 | 227.0 | 297.8 | 215.5 | 346.9 | 212.7 | 321.5 | 409.1 | 476.2 |
| MongoDB | 845.8 | 881.0 | 782.9 | 859.0 | 782.4 | 863.8 | 765.6 | 858.4 | 733.2 | 838.1 |
| MySQL | 369.7 | 445.9 | 361.8 | 430.9 | 326.1 | 366.1 | 355.0 | 407.1 | 354.2 | 414.2 |
| PostgreSQL | 305.2 | 404.6 | 369.8 | 475.9 | 336.6 | 528.8 | 350.7 | 623.6 | 322.9 | 570.1 |
Measured on 10-core Apple M1 Max. CPU% > 100 indicates multi-core usage.
Analysis: Fast Mode Performance
When durability is disabled, ekoDB maintains its lead:
-
ekoDB Leads or Matches on Throughput
- ekoDB: 83-100K ops/sec across workloads
- MongoDB: Competitive on write-heavy (87K on A), slower on reads (76-77K)
- PostgreSQL: Significantly slower (38-52K ops/sec)
-
ekoDB Has Lowest Latencies
- ekoDB: 0.59-0.67ms across all workloads
- PostgreSQL: 1.15-1.25ms (nearly 2x slower)
- MongoDB: 0.79-1.42ms (competitive on reads, slower on writes)
-
ekoDB's CPU Efficiency Advantage Remains
- Still 2-4x more efficient than all competitors
- MongoDB uses 780-860% CPU vs ekoDB's 210-420%
- PostgreSQL uses 310-630% CPU vs ekoDB's 210-420%
-
When to Choose Non-Durable Mode
- Caching layers: Data can be reconstructed from source of truth
- Analytics pipelines: Processing temporary datasets
- Development/testing: Speed up testing workflows
- Bulk ingestion: Initial data load, then switch to durable
Fast mode (non-durable) is faster but risky:
- Data loss window: Uncommitted writes lost on crash (0-1 seconds typically)
- Use case: Acceptable for caches, temp data, dev/test environments
Durable mode (default) is safer AND faster than competitors:
- ekoDB durable: 38-103K ops/sec with full guarantees
- ekoDB non-durable: 83-100K ops/sec without guarantees
- ekoDB wins both: Use durable for production, non-durable only for caches/temp data
Balanced Mode Benchmarks
Balanced storage mode uses periodic checkpoint batching to disk. Below are benchmarks in both durable and non-durable configurations, showing why Fast mode is recommended over Balanced mode.
Balanced + Durable
Storage Mode: Balanced | Durability: Durable (group commits, 2ms delay)
Throughput (ops/sec)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB TCP | 30,462 | 75,267 | 87,222 | 81,011 | 31,818 |
| PostgreSQL | 5,207 | 53,112 | 103,370 | 94,769 | 5,020 |
| MongoDB | 11,071 | 68,714 | 71,003 | 68,611 | 10,423 |
| MySQL | 4,603 | 42,806 | 61,005 | 46,155 | 4,804 |
Key findings:
- Balanced mode is 20-25% slower than Fast mode across all workloads
- Fast + Durable: 38K (A), 92K (B), 96K (C), 89K (D), 36K (F)
- Balanced + Durable: 30K (A), 75K (B), 87K (C), 81K (D), 32K (F)
- Use Fast mode instead for better performance with same durability guarantees
CPU Efficiency (ops/sec per CPU %)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB TCP | 108 | 281 | 346 | 342 | 104 |
| PostgreSQL | 43 | 176 | 179 | 199 | 42 |
| MongoDB | 49 | 96 | 103 | 94 | 43 |
| MySQL | 61 | 144 | 196 | 143 | 60 |
Key findings:
- Balanced mode still achieves good CPU efficiency vs competitors
- But Fast mode is even more efficient: 155 vs 108 (A), 421 vs 281 (B), 532 vs 346 (C)
Balanced + Non-Durable
For completeness, here are benchmarks using Balanced storage mode with durability disabled.
- Hardware: Apple M1 Max, 64GB RAM, NVMe SSD (identical to tests above)
- Scale: 1,000,000 records | 1,000,000 operations | 64 threads
- Storage Mode: Balanced | Durability: Non-Durable (no fsync, async writes)
- Batch Size: 1 operation per request
- Architecture: Memory + index + WAL with periodic checkpoint batching to disk
All benchmarks run on the same machine for direct comparison.
Results
Throughput (ops/sec)
| Workload | Insert Phase | Runtime Phase | Combined Avg |
|---|---|---|---|
| A (50/50 r/w) | 74,521 | 64,696 | 69,609 |
| B (95/5 read) | 78,666 | 86,408 | 82,537 |
| C (100% read) | 63,444 | 85,977 | 74,711 |
| D (95/5 latest) | 73,643 | 83,299 | 78,471 |
| F (RMW) | 72,606 | 51,046 | 61,826 |
Average Latency (ms)
| Workload | Insert Phase | Runtime Phase | Overall |
|---|---|---|---|
| A (50/50 r/w) | 1.25 | 2.07 | 1.66 |
| B (95/5 read) | 1.22 | 1.58 | 1.40 |
| C (100% read) | 1.52 | 1.60 | 1.56 |
| D (95/5 latest) | 1.30 | 1.82 | 1.56 |
| F (RMW) | 1.32 | 1.82 | 1.57 |
Analysis: Balanced vs Fast Mode
Comparing Balanced mode (69-83K ops/sec) to Fast mode (79-95K ops/sec) in non-durable configuration:
| Storage Mode | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| Fast (non-durable) | 79K | 89K | 95K | 90K | 55K |
| Balanced (non-durable) | 70K | 83K | 75K | 78K | 62K |
| Difference | Fast 13% faster | Fast 7% faster | Fast 27% faster | Fast 15% faster | Balanced 13% faster |
Key findings:
- Fast mode is generally faster for most workloads (7-27% advantage)
- Balanced mode has slight edge on Workload F (read-modify-write transactions)
- Latencies are similar (1.4-1.7ms balanced vs 0.62-0.71ms fast) - Fast mode has lower latencies
- Checkpoint batching in Balanced mode adds some overhead compared to Fast mode's simpler architecture
Use Fast mode (not Balanced) for production:
- Better performance on all workloads except F (7-27% faster)
- Lower latencies across the board (0.6-0.7ms vs 1.4-1.7ms)
- Simpler architecture (memory + WAL) vs Balanced's checkpoint complexity
- Same or better durability when group commits are enabled
Balanced mode was useful in earlier versions but Fast mode's group commits (v0.31.0+) deliver better performance with simpler semantics.
Cold Mode Benchmarks (ekoDB-Specific Feature)
Cold storage mode is an ekoDB-exclusive feature that optimizes for write-heavy workloads using chunked append-only files. Traditional databases (PostgreSQL, MongoDB, MySQL) don't offer equivalent storage mode flexibility, so comparisons are not applicable.
Unlike traditional databases that use a single storage architecture, ekoDB offers three storage modes to optimize for different use cases:
- Fast mode: Memory-first with WAL (general-purpose, recommended)
- Balanced mode: Periodic checkpointing (legacy)
- Cold mode: Chunked append-only files (write-heavy ingestion)
This flexibility allows you to choose the right storage architecture for your specific workload without changing databases.
Cold Mode Characteristics
- Optimized for: Write-heavy ingestion, logging, audit trails, time-series data
- Storage approach: Chunked append-only files instead of per-record storage
- Read path: Disk-based with chunked scanning (slower than Fast mode)
- Write path: Optimized for bulk ingestion
- Memory footprint: Lower than Fast mode
- Best for: Append-only patterns where recent data is hot
Benchmark Results (ekoDB Only)
Full benchmarks completed for both durable and non-durable configurations:
Cold + Durable Performance
1M records, 64 threads, full fsync guarantees with group commits
| Workload | ekoDB Cold (Durable) | ekoDB Fast (Durable) | Difference |
|---|---|---|---|
| A (50/50 r/w) | 17K ops/sec | 38K ops/sec | Fast 2.2x faster |
| B (95/5 read) | 60K ops/sec | 92K ops/sec | Fast 1.5x faster |
| C (100% read) | 96K ops/sec | 96K ops/sec | Tie |
| D (95/5 latest) | 43K ops/sec | 89K ops/sec | Fast 2.1x faster |
| F (RMW) | 6K ops/sec | 36K ops/sec | Fast 6x faster |
Cold + Non-Durable Performance
1M records, 64 threads, no fsync (async writes)
| Workload | ekoDB Cold (Fast) | ekoDB Fast (Fast) | Difference |
|---|---|---|---|
| A (50/50 r/w) | 56K ops/sec | 83K ops/sec | Fast 1.5x faster |
| B (95/5 read) | 46K ops/sec | 90K ops/sec | Fast 1.9x faster |
| C (100% read) | 51K ops/sec | 100K ops/sec | Fast 2x faster |
| D (95/5 latest) | 79K ops/sec | 89K ops/sec | Fast 1.1x faster |
| F (RMW) | 32K ops/sec | 57K ops/sec | Fast 1.8x faster |
Analysis: Cold vs Fast Mode
Key findings:
- Fast mode is faster across all workloads except pure reads where they tie (Workload C, durable)
- Cold mode significantly slower on writes: 2.2-6x slower on write-heavy workloads (A, F)
- Cold mode slower on reads too: Only matches Fast mode on pure read workload (C) in durable mode
- Cold mode's disk-based architecture adds overhead even for sequential operations
Recommendation
Use Fast mode (default) for all production workloads:
- Consistently faster: 1.1-6x better performance across all workloads
- Better write performance: 38K vs 17K on Workload A (durable)
- Better read performance: 89-92K vs 43-60K on mixed reads (B, D)
- Sub-millisecond latencies: Proven low-latency performance
Cold mode is NOT recommended based on benchmark results:
- Slower than Fast mode in every scenario tested
- Only ties on pure read workload (C) in durable mode
- Disk-based chunked scanning adds significant overhead
- Memory savings don't justify the 2-6x performance penalty
Cold mode underperforms Fast mode in all tested scenarios. The disk-first architecture with chunked append-only files does not provide the expected benefits for write-heavy workloads. Fast mode is recommended for all use cases, including logging and time-series data.
Competitive Analysis
Direct comparison against PostgreSQL and MongoDB at scale:
ekoDB vs PostgreSQL
| Metric | ekoDB Advantage | Why It Matters |
|---|---|---|
| Write-heavy (A, F) | 7.5x faster throughput | Session stores, shopping carts, transactions |
| Read-heavy (B, C, D) | Competitive throughput, 2-3x CPU efficiency | Lower cloud costs, more headroom |
| Latency | 0.41ms vs 1.07ms on writes | Better user experience on updates |
| CPU Efficiency | 2.8-5x better | Smaller instances, lower monthly cost |
PostgreSQL edges ahead on pure read throughput (Workload C: 106K vs 103K) but uses 3x more CPU.
ekoDB vs MongoDB
| Metric | ekoDB Advantage | Why It Matters |
|---|---|---|
| Write-heavy (A, F) | 3.6x faster throughput | Better at mixed workloads |
| Read-heavy (B, C, D) | 1.3-1.7x faster, 2-5x CPU efficiency | Faster and cheaper |
| Latency | 0.41ms vs 1.10ms on writes | Better write performance |
| CPU Efficiency | 2-5x better | MongoDB uses 780-850% CPU vs ekoDB's 193-214% |
MongoDB requires 4x more CPU cores to achieve lower throughput.
Feature Comparison
Performance is only part of the story - ekoDB does more per request:
| Feature | ekoDB | PostgreSQL | MongoDB |
|---|---|---|---|
| Write Throughput (A) | 38K (7.5x faster) | 5K | 11K |
| Read Throughput (C) | 103K | 106K | 77K |
| CPU Efficiency | 532 ops/CPU% (2.8x better) | 188 ops/CPU% | 98 ops/CPU% |
| Write Latency | 0.41ms | 1.07ms | 1.10ms |
| Document Queries | ✅ Built-in JSON | ✅ SQL + JSONB | ✅ Built-in BSON |
| Vector Search | ✅ Built-in | ⚠️ pgvector ext | ⚠️ Atlas only |
| Full-Text Search | ✅ Built-in | ⚠️ FTS config | ✅ Built-in |
| Built-in Auth | ✅ JWT + API keys | ⚠️ Roles only | ⚠️ SCRAM only |
| Per-op Encryption | ✅ AES-GCM | ❌ | ❌ |
| Single Binary | ✅ Yes | ❌ No | ❌ No |
| Real-time Subscriptions | ✅ WebSocket | ❌ | ⚠️ Change Streams |
| Group Commits | ✅ Yes | ✅ Single WAL | ✅ Journal |
Embedded Benchmarks (Local Performance)
These benchmarks measure ekoDB's embedded Rust database engine without network overhead. Useful for understanding raw storage performance and comparing against other embedded databases.
All benchmarks run on the embedded Rust database engine. Production performance via REST/WebSocket/TCP APIs includes additional network latency.
Summary
| Category | Operation | ekoDB | SQLite | RocksDB | LevelDB |
|---|---|---|---|---|---|
| Key-Value | Get | 209 ns | ~5 µs | ~3 µs | ~4 µs |
| Key-Value | Set | 4 µs | ~50 µs | ~6 µs | ~8 µs |
| Query | find_by_id | 2.3 µs | ~10 µs | N/A | N/A |
| Query | Simple (10 records) | 108 µs | ~200 µs | N/A | N/A |
| Insert | Single record | 101 µs | ~50 µs | ~6 µs | ~8 µs |
| Update | Single record | 27 µs | ~50 µs | ~6 µs | ~8 µs |
When to use ekoDB: Excellent reads and KV operations, plus document queries that RocksDB/LevelDB can't do.
When to consider alternatives: RocksDB/LevelDB are faster at raw writes. They're pure KV stores without document parsing or indexing overhead.
Key-Value Operations
| Operation | ekoDB | SQLite | RocksDB | LevelDB |
|---|---|---|---|---|
| Get | 209 ns | ~5 µs | ~3 µs | ~4 µs |
| Set | 4.0 µs | ~50 µs | ~6 µs | ~8 µs |
| Exists | 109 ns | ~5 µs | ~3 µs | ~4 µs |
| Delete | 85 ns | ~50 µs | ~6 µs | ~8 µs |
ekoDB's KV layer is optimized for reads. RocksDB/LevelDB edge ahead on writes due to LSM-tree architecture.
Query Operations
| Records | ekoDB (Simple) | ekoDB (Complex) | SQLite | DuckDB |
|---|---|---|---|---|
| 10 | 108 µs | 109 µs | ~200 µs | ~300 µs |
| 100 | 113 µs | 1.16 ms | ~500 µs | ~600 µs |
| 1000 | 120 µs | 12.8 ms | ~2 ms | ~15 ms |
ekoDB maintains consistent performance as result sets grow. For complex analytical queries at scale, DuckDB is purpose-built for that use case.
Cache Warming
ekoDB's pattern-based cache warming pre-loads frequently accessed records:
| Records | Uncached | Cached | Speedup |
|---|---|---|---|
| 10 | 329 µs | 109 µs | 3.0x |
| 100 | 3.38 ms | 1.08 ms | 3.1x |
| 1000 | 41.4 ms | 11.4 ms | 3.6x |
Write Operations
| Operation | ekoDB | SQLite | RocksDB | LevelDB |
|---|---|---|---|---|
| Single insert | 101 µs | ~50 µs | ~6 µs | ~8 µs |
| Batch insert 100 | 5.2 ms | ~5 ms | ~600 µs | ~800 µs |
| Single update | 27 µs | ~50 µs | ~6 µs | ~8 µs |
| Single delete | 72 µs | ~50 µs | ~6 µs | ~8 µs |
RocksDB/LevelDB dominate write performance. Their LSM-tree design converts random writes to sequential I/O. ekoDB includes indexing overhead that slows writes but accelerates reads and enables queries.
Join Operations
| Join Type | ekoDB (10) | ekoDB (100) | ekoDB (1000) | SQLite (1000) |
|---|---|---|---|---|
| Simple Join | 22 µs | 528 µs | 42.9 ms | ~80 ms |
| Multi-Collection | 22 µs | 530 µs | 40.9 ms | ~100 ms |
| Filtered Join | 25 µs | 617 µs | 45.6 ms | ~90 ms |
ekoDB outperforms SQLite on joins due to in-memory processing. For complex join strategies on large datasets, SQLite's query planner offers more sophistication.
Authentication & Encryption
| Operation | Time |
|---|---|
| Validate API key | 87 ns |
| Generate token | 1.08 µs |
| Validate token | 1.61 µs |
| Encrypt (small) | 2.0 µs |
| Encrypt (large) | 72.3 µs |
Sub-microsecond auth overhead means authentication is negligible in request latency.
Feature Comparison (Embedded)
| Feature | ekoDB | SQLite | RocksDB | LevelDB |
|---|---|---|---|---|
| Document queries | ✅ | ✅ | ❌ | ❌ |
| Full-text search | ✅ | ✅ (FTS5) | ❌ | ❌ |
| Vector search | ✅ | ❌ | ❌ | ❌ |
| Built-in auth | ✅ | ❌ | ❌ | ❌ |
| ACID transactions | ✅ | ✅ | ✅ | ❌ |
ekoDB trades some raw write performance for a richer feature set. If you need pure KV speed, RocksDB wins. If you need queries, search, and auth in one package, ekoDB is the only embedded option.
Storage Mode Performance Summary
Fast mode is the default and recommended storage mode for production workloads.
- Fast Mode (Default)
- Cold Mode (Not Recommended)
- Balanced Mode (Legacy)
Best for: Production workloads, general-purpose applications, read-heavy and mixed workloads
Durable Mode (Recommended for Production)
1M records, 64 threads, full fsync guarantees with group commits
| Workload | Throughput | Latency | vs PostgreSQL | vs MongoDB |
|---|---|---|---|---|
| A (50/50) | 38K ops/sec | 0.41ms | 7.3x faster | 3.6x faster |
| B (95/5) | 92K ops/sec | 0.57ms | 1.7x faster | 1.3x faster |
| C (100% read) | 96K ops/sec | 0.57ms | PostgreSQL 7% faster, ekoDB 2.9x more CPU efficient | 1.3x faster |
| D (95/5 latest) | 89K ops/sec | 0.52ms | PostgreSQL 6% faster, ekoDB 2.6x more CPU efficient | 1.2x faster |
| F (RMW) | 36K ops/sec | 0.44ms | 7.1x faster | 3.6x faster |
CPU Efficiency: 2-5x better than all competitors (155-532 ops/CPU%)
Fast Mode (Non-Durable)
1M records, 64 threads, no fsync (async writes)
| Workload | Throughput | Latency | vs PostgreSQL | vs MongoDB |
|---|---|---|---|---|
| A (50/50) | 83K ops/sec | 0.66ms | 2.2x faster | 4% slower |
| B (95/5) | 90K ops/sec | 0.66ms | 1.8x faster | 19% faster |
| C (100% read) | 100K ops/sec | 0.59ms | 1.9x faster | 29% faster |
| D (95/5 latest) | 89K ops/sec | 0.66ms | 1.7x faster | 17% faster |
| F (RMW) | 57K ops/sec | 0.67ms | 1.9x faster | 7% faster |
CPU Efficiency: Still 2-4x better than all competitors (138-462 ops/CPU%)
Recommendation
Use Fast mode with durable operations (default) for production:
- Outstanding write performance with full durability guarantees
- Competitive read performance at 2-3x better CPU efficiency
- Sub-millisecond latencies across all workload types
- Lowest cloud costs due to superior CPU efficiency
- No data loss on crashes (every write guaranteed on disk)
Status: Benchmarks complete - NOT recommended for any workload
Originally intended for: Write-heavy ingestion, logging, audit trails, time-series data
Cold mode uses chunked append-only files, but benchmark results show it underperforms Fast mode in all scenarios.
Durable Mode Performance
1M records, 64 threads, full fsync guarantees with group commits
| Workload | Cold | Fast | Result |
|---|---|---|---|
| A (50/50) | 17K ops/sec | 38K ops/sec | Fast 2.2x faster |
| B (95/5) | 60K ops/sec | 92K ops/sec | Fast 1.5x faster |
| C (100% read) | 96K ops/sec | 96K ops/sec | Tie |
| D (95/5 latest) | 43K ops/sec | 89K ops/sec | Fast 2.1x faster |
| F (RMW) | 6K ops/sec | 36K ops/sec | Fast 6x faster |
Non-Durable Mode Performance
1M records, 64 threads, no fsync (async writes)
| Workload | Cold | Fast | Result |
|---|---|---|---|
| A (50/50) | 56K ops/sec | 83K ops/sec | Fast 1.5x faster |
| B (95/5) | 46K ops/sec | 90K ops/sec | Fast 1.9x faster |
| C (100% read) | 51K ops/sec | 100K ops/sec | Fast 2x faster |
| D (95/5 latest) | 79K ops/sec | 89K ops/sec | Fast 1.1x faster |
| F (RMW) | 32K ops/sec | 57K ops/sec | Fast 1.8x faster |
Recommendation
Do NOT use Cold mode - use Fast mode instead for all workloads:
- Fast mode is 1.1-6x faster across all tested scenarios
- Cold mode's disk-first architecture adds overhead without benefits
- Even for write-heavy workloads (A, F), Fast mode is 2.2-6x faster
- Only ties on pure read workload (C) in durable mode
Cold mode does not provide the expected benefits. The chunked append-only file architecture underperforms Fast mode's memory-first approach in all tested scenarios, including write-heavy workloads it was designed for.
Status: Legacy mode - use Fast mode instead
Durable Mode
1M records, 64 threads, full fsync guarantees with group commits
| Workload | Throughput | Latency | vs Fast Durable |
|---|---|---|---|
| A (50/50) | 30K ops/sec | 0.60ms | 20% slower than Fast (38K) |
| B (95/5) | 75K ops/sec | 0.64ms | 18% slower than Fast (92K) |
| C (100% read) | 87K ops/sec | 0.67ms | 9% slower than Fast (96K) |
| D (95/5 latest) | 81K ops/sec | 0.58ms | 9% slower than Fast (89K) |
| F (RMW) | 32K ops/sec | 0.58ms | 11% slower than Fast (36K) |
Non-Durable Mode
1M records, 64 threads, no fsync (async writes)
| Workload | Throughput | Latency | vs Fast Non-Durable |
|---|---|---|---|
| A (50/50) | 70K ops/sec | 1.66ms | 16% slower, 2.5x higher latency |
| B (95/5) | 83K ops/sec | 1.40ms | 7% slower, 2.1x higher latency |
| C (100% read) | 75K ops/sec | 1.56ms | 25% slower, 2.6x higher latency |
| D (95/5 latest) | 78K ops/sec | 1.56ms | 12% slower, 2.4x higher latency |
| F (RMW) | 62K ops/sec | 1.57ms | 9% faster, 2.3x higher latency |
Why Fast Mode Is Better
Balanced mode uses periodic checkpoint batching to disk, which adds overhead in both durable and non-durable modes:
- Slower throughput: 7-25% slower across most workloads
- Higher latencies: 2-2.5x higher in non-durable mode
- More complex architecture: Checkpointing adds code complexity
- No clear advantage: Even on Workload F where it's slightly faster, latencies are much higher
Recommendation: Use Fast mode with durable operations instead:
- Better performance in both durable (9-20% faster) and non-durable (7-25% faster) modes
- Lower latencies (0.4-0.7ms vs 0.6-1.7ms)
- Simpler semantics with group commits
- Full durability guarantees when enabled
Deprecated in v0.31.0: Fast mode's group commits deliver better performance with simpler architecture.
Benchmark Coverage Summary
✅ Complete Benchmarks (Documented)
| Storage Mode | Durability | Status | Performance |
|---|---|---|---|
| Fast | Durable | ✅ Complete | 38-96K ops/sec (RECOMMENDED) |
| Fast | Non-Durable | ✅ Complete | 83-100K ops/sec |
| Balanced | Durable | ✅ Complete | 30-87K ops/sec (20% slower than Fast) |
| Balanced | Non-Durable | ✅ Complete | 70-83K ops/sec (7-25% slower than Fast) |
🔬 ekoDB-Only Benchmarks (No Comparison Needed)
| Storage Mode | Durability | Status | Notes |
|---|---|---|---|
| Cold | Durable | 🔬 ekoDB-only | Traditional databases don't have equivalent storage modes |
| Cold | Non-Durable | 🔬 ekoDB-only | Traditional databases don't have equivalent storage modes |
Why no comparison? Cold mode is an ekoDB-exclusive feature. PostgreSQL, MongoDB, and MySQL use fixed storage architectures and don't offer equivalent write-optimized append-only modes.
See Also
- Query Patterns & Cache Warming - Intelligent caching for 3x faster queries
- Transactions Architecture - ACID transaction performance
- Error Codes - API error reference