Performance Benchmarks
TL;DR: ekoDB delivers 37–127K ops/sec on production workloads with full durability and encryption. Both ekoDB (Key-Value) and ekoDB (Collections) score an A grade and lead every YCSB workload against 4 competitors (PostgreSQL, MongoDB, MySQL, Redis) — 3.7–7.9x faster on writes, 1.1–2.1x faster on reads — while using 2–5x less CPU. A single ~50MB Rust binary replaces 4–5 services with built-in auth, encryption, search, and real-time subscriptions.
When to Use ekoDB (and When Not To)
ekoDB excels when you need: write-heavy workloads with durability, read-heavy throughput, sub-millisecond latencies, CPU-efficient deployments, or a unified platform replacing multiple services (document store + KV + FTS + vector search + auth + real-time subs) in a single binary.
Consider alternatives when: you need specialized OLAP with complex window functions and CTEs (ClickHouse, DuckDB).
Based on YCSB benchmarks: 1M records, 64 threads, Fast storage mode. See Results Summary for numbers.
| Your Workload | Best Choice | Why |
|---|---|---|
| Write-heavy + durable | ekoDB | Fastest across all competitors with sub-millisecond latency |
| Read-heavy + durable | ekoDB | Leads all 4 competitors on every read workload |
| Non-durable (any mix) | ekoDB | Competitive or leading, while carrying auth/encryption overhead competitors skip |
| Low-latency (any mode) | ekoDB | Consistent sub-millisecond across all workloads |
| CPU cost-sensitive | ekoDB | Better efficiency across durable and non-durable modes |
| Unified platform | ekoDB | Single binary replaces 4–5 services (auth, search, cache, DB) |
Why ekoDB's Performance Is Remarkable
ekoDB isn't just fast. It's fast despite doing more work per request than competitors. Understanding this context makes the benchmark results more meaningful.
The Overhead ekoDB Carries
Every ekoDB request includes capabilities that competitors skip entirely:
| Layer | What Happens | Competitors |
|---|---|---|
| Security | JWT/API key auth, AES-GCM encryption on all stored data | Redis: none, PG: separate |
| Data | Schema validation, automatic indexing | PG: yes, Redis: none |
| Search | Full-text + vector search index updates | Requires Elasticsearch + Pinecone |
| Durability | Configurable persistence guarantees per operation | PG: batched, Redis: async |
| Real-time | Subscription notifications | Requires additional services |
ekoDB supports two durability modes via the durable_operations config setting:
| Mode | Setting | Behavior | Use Case |
|---|---|---|---|
| Durable (default) | durable_operations: true | Every confirmed write persisted to disk before response | Most workloads, guaranteed persistence |
| Non-Durable | durable_operations: false | Writes persisted asynchronously | High-throughput, eventual durability acceptable |
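As a sketch of how these modes might be selected, assuming a YAML-style configuration file (the file layout is illustrative; only the `durable_operations` key and its values come from the table above):

```yaml
# Hypothetical ekoDB configuration excerpt — layout is illustrative;
# only the durable_operations key is documented above.
durable_operations: true    # default: persist each confirmed write before responding
# durable_operations: false # async persistence: higher throughput, small crash-loss window
```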
Durability: The Hidden Performance Tax
Most benchmarks don't match durability settings, making some databases appear faster than they are in production. A database that skips durability guarantees will always benchmark faster — but that speed comes at the cost of data loss on crash. All ekoDB benchmarks use equivalent durability settings across databases for a fair comparison.
YCSB Results Summary
Industry-standard YCSB benchmarks: 1M records, 64 threads, Fast storage mode, durable writes with matching durability settings across all databases. Full tables and configuration details in Detailed Benchmarks below.
| Database | Score | Grade |
|---|---|---|
| ekoDB (Key-Value) | 84 | A |
| ekoDB (Collections) | 81 | A |
| PostgreSQL | 46 | C |
| MongoDB | 41 | C |
| Redis | 35 | D |
| MySQL | 29 | F |
Key findings:
- Write-heavy (A, F): ekoDB delivers 39K ops/sec — 7.3x faster than PostgreSQL, 3.7x faster than MongoDB
- Read-heavy (B, C, D): 112–127K ops/sec, leading all 4 competitors on every workload
- Latency: Sub-millisecond across all workloads (0.39–0.59ms)
- CPU efficiency: 2–5x better ops/CPU% than PostgreSQL and MongoDB
- Redis: Struggles with durable writes — 10ms+ latency, only 4–6K ops/sec on write-heavy workloads
Analysis: Why ekoDB Leads
The 1M record, 64-thread YCSB results reveal three architectural advantages:
1. Write Performance Under Durability
ekoDB maintains fast writes even with full persistence guarantees. Where competitors see throughput collapse when durability is enabled (Redis drops to single-digit K ops/sec), ekoDB's write path is designed for durable workloads from the ground up — not bolted on as an afterthought.
2. Scalable Concurrency
At 64 threads, ekoDB scales efficiently with minimal lock contention. Both the Key-Value and Collections engines distribute write load across cores, maintaining consistent performance as thread count increases.
3. CPU Efficiency
On cloud platforms, you pay for CPU cores. A database that delivers 50K ops/sec using 200% CPU (2 cores) is more cost-effective than one delivering 60K ops/sec using 600% CPU (6 cores).
Efficiency = ops/sec ÷ CPU%: Higher is better. Lower CPU per operation means smaller instances, lower cloud costs, and more headroom for co-located services.
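As a concrete check of this formula against the Fast + Durable tables below (Workload A, ekoDB Key-Value: 39,369 ops/sec at 158.4% average CPU):

```python
def efficiency(ops_per_sec: float, avg_cpu_percent: float) -> float:
    """Throughput delivered per percentage point of CPU consumed."""
    return ops_per_sec / avg_cpu_percent

# Workload A, Fast + Durable, ekoDB (Key-Value): 39,369 ops/sec at 158.4% avg CPU.
print(int(efficiency(39_369, 158.4)))  # → 248, matching the efficiency table
```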
ekoDB achieves these results while carrying significantly more overhead per request than every competitor — auth, encryption, indexing, and real-time subscriptions on every operation. A ~50MB Rust binary with zero garbage collection pauses.
Detailed YCSB Benchmarks by Storage Mode
Industry-standard YCSB (Yahoo! Cloud Serving Benchmark) results comparing ekoDB (Collections) and ekoDB (Key-Value) against PostgreSQL, MongoDB, MySQL, and Redis with matching durability settings.
| Workload | Mix | Real-World Example |
|---|---|---|
| A | 50% read, 50% update | Session stores, shopping carts |
| B | 95% read, 5% update | Social media profiles, photo tagging |
| C | 100% read | Product catalogs, configuration lookups |
| D | 95% read, 5% insert | News feeds, activity streams |
| F | 50% read, 50% read-modify-write | Bank transactions, inventory counters |
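A simplified sketch of how a YCSB-style client draws the next operation from these workload mixes. The proportions match the table; the operation names and flat random draw are illustrative, not the actual YCSB implementation (YCSB additionally controls record-selection distributions, e.g. zipfian):

```python
import random

# Operation mixes per YCSB workload, matching the table above.
WORKLOAD_MIX = {
    "A": {"read": 0.50, "update": 0.50},
    "B": {"read": 0.95, "update": 0.05},
    "C": {"read": 1.00},
    "D": {"read": 0.95, "insert": 0.05},
    "F": {"read": 0.50, "read_modify_write": 0.50},
}

def next_op(workload: str, rng: random.Random) -> str:
    """Pick the next operation type by cumulative probability."""
    draw, cumulative = rng.random(), 0.0
    for op, proportion in WORKLOAD_MIX[workload].items():
        cumulative += proportion
        if draw < cumulative:
            return op
    return op  # guard against floating-point rounding at the tail

rng = random.Random(42)
sample = [next_op("B", rng) for _ in range(10_000)]
print(sample.count("read") / len(sample))  # close to 0.95
```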
- Hardware: Apple M1 Max, 64GB RAM, NVMe SSD
- Scale: 1,000,000 records | 1,000,000 operations | 64 threads
- Batch Size: 1 operation per request (raw network performance test)
- Databases: 6 databases tested — ekoDB (Collections), ekoDB (Key-Value), PostgreSQL 15, MongoDB 6.0, MySQL 8.0, Redis 7.x
All databases tested on the same machine, same day, for accurate comparison per configuration.
PostgreSQL benchmarks use optimized JDBC drivers with connection pooling, prepared statement caching, and batch optimizations built into the driver layer. These are production-grade drivers tuned over decades.
ekoDB benchmarks use raw TCP protocol with a basic YCSB client: no driver-level optimizations, no connection pooling magic, no prepared statement cache.
Despite this disadvantage, ekoDB leads on every workload tested.
Fast Mode (Recommended)
Note: All benchmarks include AES-GCM encryption on all stored data.
Storage Mode: Fast | Durability: Durable (guaranteed persistence)
Durability Settings
| Database | Durable Setting | fsync Behavior | Group Commit Delay |
|---|---|---|---|
| ekoDB | durable_operations=true | WAL fsync, group_commits=true | 2ms (explicit) |
| PostgreSQL | synchronous_commit=on, fsync=on | Per-commit WAL fsync | commit_delay=2000us, commit_siblings=2 |
| MongoDB | w=1, journal=true | WiredTiger journal fsync | journalCommitInterval=2 |
| MySQL | innodb_flush_log_at_trx_commit=1, sync_binlog=1 | Per-commit redo log + binlog fsync | binlog_group_commit_sync_delay=2000us |
| Redis | appendfsync=always | Per-operation AOF fsync | N/A (no group commit) |
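For reference, the PostgreSQL row above corresponds to standard postgresql.conf directives; the values are taken from the table and shown here only to make the group-commit tuning concrete:

```ini
# postgresql.conf — durable-mode settings used in these benchmarks
synchronous_commit = on    # wait for WAL flush before acknowledging commit
fsync = on                 # force WAL writes to stable storage
commit_delay = 2000        # microseconds to wait for group commit
commit_siblings = 2        # minimum concurrent transactions before delaying
```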
This is the recommended production configuration. Every confirmed write is persisted to disk before the client receives a response — no data loss on crash, power failure, or unexpected shutdown. In real-world terms: a session store handling 39K mixed read/update operations per second, a product catalog serving 124K lookups per second, or a financial ledger processing 38K atomic read-modify-write transactions per second — all with full durability, encryption, and auth on every operation.
Throughput (ops/sec)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Key-Value) | 39,369 | 112,854 | 116,442 | 127,389 | 37,929 |
| ekoDB (Collections) | 36,964 | 115,128 | 123,655 | 118,315 | 36,937 |
| MongoDB | 10,511 | 72,327 | 77,095 | 76,017 | 10,130 |
| MySQL | 2,505 | 26,730 | 63,016 | 45,409 | 2,324 |
| PostgreSQL | 5,420 | 54,555 | 107,227 | 94,949 | 4,790 |
| Redis | 6,022 | 6,403 | 103,864 | 5,999 | 4,200 |
Key findings:
- ekoDB (Key-Value) leads Workloads A, D; ekoDB (Collections) leads B, C — both score A grade (84 and 81 out of 100)
- Workload A (50% read/50% update): ekoDB (Key-Value) 7.3x faster than PostgreSQL, 3.7x faster than MongoDB
- Workload B (95% read/5% update): ekoDB (Collections) 2.1x faster than PostgreSQL, 1.6x faster than MongoDB
- Workload C (100% read): ekoDB (Collections) 15% faster than PostgreSQL (124K vs 107K), 1.6x faster than MongoDB
- Workload D (95% read latest/5% insert): ekoDB (Key-Value) 127K — fastest across all databases and workloads, 1.3x faster than PostgreSQL, 1.7x faster than MongoDB
- Workload F (Read-Modify-Write): ekoDB (Key-Value) 7.9x faster than PostgreSQL, 3.7x faster than MongoDB
- Redis struggles with durable writes (appendfsync=always): 10ms+ latency on non-read workloads, only 4–6K ops/sec on A/D/F
Average Latency (ms)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Key-Value) | 0.53 | 0.46 | 0.59 | 0.40 | 0.43 |
| ekoDB (Collections) | 0.43 | 0.40 | 0.45 | 0.39 | 0.41 |
| MongoDB | 1.17 | 0.44 | 0.79 | 0.43 | 1.38 |
| MySQL | 1.54 | 0.25 | 0.94 | 0.58 | 1.87 |
| PostgreSQL | 1.01 | 0.17 | 0.56 | 0.31 | 1.16 |
| Redis | 10.60 | 9.96 | 0.61 | 10.12 | 10.10 |
Key findings:
- ekoDB maintains sub-millisecond latencies across all workloads (0.39–0.59ms)
- Lowest latency on write-heavy Workloads A (0.43ms) and F (0.41ms)
- PostgreSQL has lower latency on read-heavy workloads B (0.17ms) and C (0.56ms)
- Redis latency explodes on write-heavy workloads: 10.60ms on Workload A vs ekoDB's 0.43ms — a 24x difference due to per-operation AOF fsync
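These throughput and latency figures are mutually consistent; a rough Little's-law check (in-flight requests ≈ throughput × average latency) using the Workload D numbers:

```python
# Little's law: concurrency ≈ throughput × latency. With 64 client threads,
# the implied in-flight count should stay at or below 64.
throughput_ops = 127_389        # Workload D, ekoDB (Key-Value), ops/sec
avg_latency_s = 0.40 / 1_000    # 0.40 ms average latency
print(round(throughput_ops * avg_latency_s))  # ≈ 51 requests in flight
```

The implied ~51 in-flight requests against 64 client threads suggests the client spends a modest fraction of each cycle outside the server path, which is expected for a benchmark harness.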
CPU Efficiency (ops/sec per CPU %)
Note: ekoDB intentionally caps CPU usage at 95% to leave headroom for system stability. Other databases typically try to use 100% of available CPU during benchmarks.
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Key-Value) | 248 | 499 | 548 | 479 | 226 |
| ekoDB (Collections) | 145 | 357 | 447 | 412 | 109 |
| MongoDB | 46 | 103 | 111 | 116 | 50 |
| MySQL | 46 | 150 | 198 | 155 | 38 |
| PostgreSQL | 45 | 172 | 238 | 222 | 40 |
| Redis | 257 | 280 | 1,336 | 271 | 175 |
Key findings:
- ekoDB (Key-Value) achieves 2–5x better CPU efficiency than PostgreSQL and MongoDB across all workloads
- Best efficiency on Workload D: ekoDB (Key-Value) 479 ops/CPU% (2.2x PostgreSQL, 4.1x MongoDB)
- Redis achieves high CPU efficiency on Workload C (1,336 ops/CPU%) due to minimal CPU usage on pure reads, but collapses on write-heavy workloads
CPU Utilization Details
| Database | A Avg | A Peak | B Avg | B Peak | C Avg | C Peak | D Avg | D Peak | F Avg | F Peak |
|---|---|---|---|---|---|---|---|---|---|---|
| ekoDB (Key-Value) | 158.4 | 289.8 | 226.1 | 312.5 | 212.1 | 305.0 | 265.4 | 356.1 | 167.8 | 229.5 |
| ekoDB (Collections) | 254.8 | 346.6 | 322.0 | 431.4 | 276.4 | 383.1 | 287.1 | 371.0 | 337.3 | 430.4 |
| MongoDB | 224.0 | 344.4 | 697.2 | 819.8 | 690.7 | 780.6 | 651.3 | 763.7 | 201.8 | 552.1 |
| MySQL | 53.3 | 85.8 | 177.4 | 253.9 | 317.6 | 366.2 | 292.3 | 365.6 | 61.1 | 96.3 |
| PostgreSQL | 120.2 | 182.9 | 316.3 | 527.6 | 450.4 | 657.0 | 426.7 | 630.8 | 118.8 | 173.3 |
| Redis | 23.4 | 27.9 | 22.8 | 26.8 | 77.7 | 94.1 | 22.1 | 27.0 | 23.9 | 27.8 |
Measured on 10-core Apple M1 Max. CPU% > 100 indicates multi-core usage.
Storage Mode: Fast | Durability: Non-Durable (async persistence)
Durability Settings
| Database | Durability Setting | Guarantee |
|---|---|---|
| ekoDB | durable_operations=false | Writes persisted asynchronously |
| PostgreSQL | synchronous_commit=off, fsync=off | No persistence guarantee |
| MongoDB | w=0 | No write acknowledgment |
Non-durable mode disables persistence guarantees across all databases for a fair throughput ceiling comparison. This configuration suits ephemeral workloads — session caches, rate limiters, real-time analytics counters, and development environments where maximum throughput matters more than surviving a crash. Even in this mode, ekoDB still performs JWT auth, AES-GCM encryption, and full search indexing on every operation — overhead that PostgreSQL and MongoDB skip entirely.
Throughput (ops/sec)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Collections) | 87,827 | 117,536 | 121,788 | 118,203 | 66,124 |
| MongoDB | 88,472 | 79,517 | 81,024 | 81,380 | 60,449 |
| PostgreSQL | 80,425 | 98,561 | 103,670 | 105,943 | 57,707 |
Key findings:
- Workload A (50% read/50% update): Three-way race — MongoDB (88K), ekoDB (88K), PostgreSQL (80K). ekoDB and MongoDB essentially tied on session stores and shopping carts, with ekoDB carrying auth/encryption overhead that MongoDB skips
- Workloads B–D (read-heavy): ekoDB leads clearly at 118–122K ops/sec, 12–20% ahead of PostgreSQL (99–106K) and 45–50% ahead of MongoDB (79–81K). These are the most common production patterns — social media profiles, product catalogs, news feeds
- Workload F (Read-Modify-Write): ekoDB leads (66K) over MongoDB (60K) and PostgreSQL (58K) — important for financial transactions, inventory counters, and any atomic read-then-update pattern
Average Latency (ms)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Collections) | 0.56 | 0.47 | 0.46 | 0.47 | 0.51 |
| PostgreSQL | 0.54 | 0.54 | 0.58 | 0.54 | 0.60 |
| MongoDB | 1.39 | 0.79 | 0.74 | 0.78 | 1.02 |
Key findings:
- ekoDB and PostgreSQL deliver near-identical sub-millisecond latencies (0.46–0.56ms vs 0.54–0.60ms) — ekoDB edges ahead on read-heavy workloads (B/C/D), PostgreSQL marginally leads on Workload A
- MongoDB trails with significantly higher latencies on write-heavy workloads (1.02–1.39ms on A and F)
- The latency gap matters most for user-facing applications: both ekoDB and PostgreSQL deliver average responses of 600µs or less regardless of workload mix
CPU Efficiency (ops/sec per CPU %)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Collections) | 174 | 319 | 348 | 339 | 131 |
| PostgreSQL | 146 | 204 | 206 | 209 | 116 |
| MongoDB | 122 | 104 | 111 | 110 | 75 |
- ekoDB leads CPU efficiency across all workloads — 1.1–1.7x better than PostgreSQL, 1.4–3.1x better than MongoDB
- On cloud platforms, this efficiency advantage directly translates to lower compute costs: ekoDB delivers more throughput per CPU core than competitors while also handling auth, encryption, and search indexing
CPU Utilization Details
| Database | A Avg | A Peak | B Avg | B Peak | C Avg | C Peak | D Avg | D Peak | F Avg | F Peak |
|---|---|---|---|---|---|---|---|---|---|---|
| ekoDB (Collections) | 502.5 | 587.8 | 368.3 | 428.2 | 349.3 | 397.4 | 348.1 | 411.5 | 504.2 | 564.2 |
| MongoDB | 719.5 | 876.9 | 762.9 | 818.2 | 728.2 | 825.6 | 738.4 | 835.4 | 802.9 | 875.1 |
| PostgreSQL | 548.2 | 683.9 | 482.3 | 667.1 | 502.7 | 707.7 | 505.2 | 717.8 | 496.2 | 683.6 |
Measured on 10-core Apple M1 Max. CPU% > 100 indicates multi-core usage.
Non-durable mode is faster but risky:
- Data loss window: Uncommitted writes lost on crash (typically 0–1 seconds)
- Use case: Caches, temp data, dev/test environments, bulk ingestion
Durable mode (default) is recommended for production:
- In durable mode, ekoDB leads across all YCSB workloads with equivalent durability settings
- Use durable for production, non-durable only for caches/temp data
Cache Mode: ekoDB KV vs Redis
A head-to-head comparison of ekoDB's Key-Value engine against Redis in pure cache mode — no persistence, no durability overhead. This isolates raw in-memory throughput performance.
- Hardware: Apple M1 Max, 64GB RAM, NVMe SSD
- Scale: 1,000,000 records | 1,000,000 operations | 64 threads
- Storage Mode: Fast | Durability: Non-Durable (no fsync, no persistence)
- ekoDB: durable_operations=false (async WAL)
- Redis: appendonly=no (no persistence)
Throughput (ops/sec)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Key-Value) | 87,078 | 90,498 | 90,058 | 80,782 | 61,660 |
| Redis | 91,701 | 87,627 | 88,582 | 84,182 | 58,851 |
Average Latency (ms)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Key-Value) | 0.70 | 0.69 | 0.72 | 0.77 | 0.67 |
| Redis | 0.69 | 0.72 | 0.71 | 0.71 | 0.72 |
CPU Efficiency (ops/sec per CPU %)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Key-Value) | 419 | 550 | 588 | 432 | 305 |
| Redis | 1,189 | 1,139 | 1,141 | 1,087 | 766 |
Performance Scores
| Database | Score | Grade |
|---|---|---|
| Redis | 97 | A+ |
| ekoDB (Key-Value) | 83 | A |
ekoDB matches Redis on throughput (~82K avg ops/sec) and latency (~0.72ms avg) — despite carrying the full overhead described above on every operation. Redis wins decisively on CPU efficiency due to its single-threaded, minimal-overhead design.
If you're choosing between Redis and ekoDB for caching, ekoDB delivers equivalent speed plus built-in auth, encryption, search, and real-time subscriptions — without additional services.
Balanced Mode
Balanced storage mode is designed for datasets larger than available RAM, automatically managing which records stay in memory and which are stored on disk.
Storage Mode: Balanced | Durability: Durable (guaranteed persistence)
Balanced mode is designed for datasets that exceed available RAM — user profile stores, content management systems, and product catalogs where the total dataset is large but the active working set fits in memory. ekoDB leads on write-heavy and mixed workloads (A, B, F) with 2–3x better CPU efficiency than all competitors, while PostgreSQL edges ahead on pure-read workloads (C, D) where its mature query optimizer is most effective.
Throughput (ops/sec)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Collections) | 30,462 | 75,267 | 87,222 | 81,011 | 31,818 |
| PostgreSQL | 5,207 | 53,112 | 103,370 | 94,769 | 5,020 |
| MongoDB | 11,071 | 68,714 | 71,003 | 68,611 | 10,423 |
| MySQL | 4,603 | 42,806 | 61,005 | 46,155 | 4,804 |
Average Latency (ms)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Collections) | 0.60 | 0.64 | 0.67 | 0.58 | 0.58 |
| PostgreSQL | 1.06 | 0.17 | 0.59 | 0.34 | 1.17 |
| MongoDB | 1.17 | 0.49 | 0.87 | 0.51 | 1.17 |
| MySQL | 1.26 | 0.30 | 0.99 | 0.66 | 1.20 |
CPU Efficiency (ops/sec per CPU %)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Collections) | 108 | 281 | 346 | 342 | 104 |
| PostgreSQL | 43 | 176 | 179 | 199 | 42 |
| MongoDB | 49 | 96 | 103 | 94 | 43 |
| MySQL | 61 | 144 | 196 | 143 | 60 |
vs Fast + Durable: Balanced mode trails across all workloads — Fast + Durable delivers 39K (A), 115K (B), 124K (C), 127K (D), 38K (F). The gap is most pronounced on read-heavy workloads where Fast mode's in-memory primary storage eliminates disk access entirely.
Storage Mode: Balanced | Durability: Non-Durable (async persistence)
Non-durable Balanced mode targets development, staging, and batch-processing workloads where datasets exceed RAM and crash recovery is not critical. Without persistence overhead, all databases compete closer to their throughput ceilings.
Throughput (ops/sec)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Collections) | 51,760 | 51,560 | 59,354 | 80,626 | 42,746 |
| PostgreSQL | 52,059 | 73,665 | 85,690 | 93,318 | 55,894 |
| MongoDB | 82,563 | 75,483 | 77,036 | 75,982 | 51,634 |
| MySQL | 29,932 | 54,145 | 59,837 | 49,044 | 25,319 |
Average Latency (ms)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Collections) | 1.14 | 1.18 | 1.02 | 0.73 | 1.11 |
| PostgreSQL | 0.93 | 0.81 | 0.72 | 0.63 | 0.67 |
| MongoDB | 1.49 | 0.85 | 0.79 | 0.85 | 1.21 |
| MySQL | 1.20 | 0.94 | 0.99 | 1.05 | 1.29 |
CPU Efficiency (ops/sec per CPU %)
| Database | Workload A | Workload B | Workload C | Workload D | Workload F |
|---|---|---|---|---|---|
| ekoDB (Collections) | 141 | 199 | 245 | 329 | 119 |
| PostgreSQL | 131 | 178 | 154 | 168 | 123 |
| MongoDB | 122 | 100 | 105 | 99 | 71 |
| MySQL | 84 | 149 | 195 | 153 | 69 |
Note: In Balanced + Non-Durable mode, PostgreSQL leads on Workloads C, D, and F, while MongoDB leads on Workloads A and B. ekoDB maintains a CPU efficiency advantage across most workloads. For maximum non-durable throughput, use Fast mode (66–122K ops/sec vs Balanced 43–81K).
| Mode | Optimized For | Best For |
|---|---|---|
| Fast | Maximum throughput, in-memory performance | Low-latency reads, data fits in RAM |
| Balanced | Memory-efficient, larger-than-RAM datasets | Datasets larger than RAM, general-purpose |
| Cold | Sequential writes, minimal disk footprint | Bulk ingestion, append-heavy, cost-optimized storage |
Cold Mode
Cold storage mode is optimized for sequential write throughput and minimal disk footprint (10–20x less disk usage than Balanced mode). Best for bulk ingestion, append-heavy workloads, and cost-optimized storage. Traditional databases don't offer equivalent storage mode flexibility, so these are ekoDB-only comparisons against Fast mode.
Storage Mode: Cold | Durability: Durable (guaranteed persistence)
1M records, 64 threads, full durability guarantees
Cold mode trades read performance for minimal disk footprint and sequential write efficiency — ideal for audit logs, event sourcing, IoT telemetry, and bulk ingestion where data is written once and read infrequently. With durable writes, data safety is guaranteed even at reduced throughput.
| Workload | ekoDB Cold | ekoDB Fast | Result |
|---|---|---|---|
| A (50/50) | 17,013 | 38,024 | Fast 2.2x faster |
| B (95/5) | 60,103 | 92,396 | Fast 1.5x faster |
| C (100% read) | 95,675 | 97,628 | Roughly equal (1.02x) |
| D (95/5 latest) | 42,658 | 90,893 | Fast 2.1x faster |
| F (RMW) | 5,864 | 37,135 | Fast 6.3x faster |
Storage Mode: Cold | Durability: Non-Durable (async persistence)
1M records, 64 threads, async persistence
Without persistence overhead, Cold mode serves as a high-throughput ingestion pipeline — bulk loading sensor data, importing large datasets, or processing log streams where write speed matters and data can be re-derived from source if needed.
| Workload | ekoDB Cold | ekoDB Fast | Result |
|---|---|---|---|
| A (50/50) | 55,788 | 85,992 | Fast 1.5x faster |
| B (95/5) | 46,367 | 89,662 | Fast 1.9x faster |
| C (100% read) | 50,759 | 99,711 | Fast 2.0x faster |
| D (95/5 latest) | 79,051 | 89,127 | Fast 1.1x faster |
| F (RMW) | 32,326 | 56,728 | Fast 1.8x faster |
Use Cold mode for bulk data ingestion, ETL pipelines, append-only data models, and scenarios where storage cost matters more than read latency. For low-latency reads, use Fast or Balanced mode.
Feature Comparison
Beyond raw performance, ekoDB consolidates capabilities that typically require multiple external services into a single ~50MB binary. The table below shows what ships built-in versus what requires extensions or separate infrastructure.
| Feature | ekoDB | PostgreSQL | MongoDB | MySQL | Redis |
|---|---|---|---|---|---|
| Document Queries | ✅ Built-in JSON | ✅ SQL + JSONB | ✅ Built-in BSON | ⚠️ JSON columns | ⚠️ RedisJSON module |
| Vector Search | ✅ Built-in | ⚠️ pgvector ext | ⚠️ Atlas only | ❌ Needs Pinecone | ⚠️ RediSearch module |
| Full-Text Search | ✅ Built-in | ⚠️ FTS config | ⚠️ Atlas Search | ⚠️ FULLTEXT index | ⚠️ RediSearch module |
| Built-in Auth | ✅ JWT + API keys | ⚠️ Roles only | ⚠️ SCRAM only | ⚠️ Roles only | ⚠️ ACL only |
| Per-op Encryption | ✅ AES-GCM | ⚠️ pgcrypto ext | ⚠️ CSFLE client-side | ⚠️ TDE / app-level | ❌ TLS in-transit only |
| Single Binary | ✅ Yes | ❌ Multi-process | ❌ mongod + mongos | ❌ Multi-process | ✅ Yes |
| Real-time Subscriptions | ✅ WebSocket | ⚠️ LISTEN/NOTIFY | ⚠️ Change Streams | ❌ Needs external | ⚠️ Pub/Sub |
| Durable Writes | ✅ Per-operation | ✅ Per-commit | ✅ Journal | ✅ Per-commit | ⚠️ AOF fsync |
Embedded Benchmarks (Local Performance)
These benchmarks measure ekoDB's core Rust database engine running in-process — no network round-trips, no serialization, no TCP overhead. This represents the performance floor for applications that embed ekoDB directly as a library rather than connecting over the network, and provides a direct comparison against established embedded databases like SQLite, RocksDB, and LevelDB.
All benchmarks run on the embedded Rust database engine. Production performance via REST/WebSocket/TCP APIs includes additional network latency.
Summary
| Category | Operation | ekoDB | SQLite | RocksDB | LevelDB |
|---|---|---|---|---|---|
| Key-Value | Get | 185 ns | ~5 µs | ~3 µs | ~4 µs |
| Key-Value | Set | 3.5 µs | ~50 µs | ~6 µs | ~8 µs |
| Query | find_by_id | 2.3 µs | ~10 µs | N/A | N/A |
| Query | Simple (10 records) | 108 µs | ~200 µs | N/A | N/A |
| Insert | Single record | 101 µs | ~50 µs | ~6 µs | ~8 µs |
| Update | Single record | 27 µs | ~50 µs | ~6 µs | ~8 µs |
When to use ekoDB: Excellent reads and KV operations, plus document queries that RocksDB/LevelDB can't do. The KV layer delivers 185 ns Get and 3.5 µs Set with high-concurrency access.
When to consider alternatives: RocksDB/LevelDB are faster at raw writes. They're pure KV stores without document parsing or indexing overhead.
Key-Value Operations
| Operation | ekoDB | SQLite | RocksDB | LevelDB |
|---|---|---|---|---|
| Get | 185 ns | ~5 µs | ~3 µs | ~4 µs |
| Set | 3.5 µs | ~50 µs | ~6 µs | ~8 µs |
| Exists | 109 ns | ~5 µs | ~3 µs | ~4 µs |
| Delete | 85 ns | ~50 µs | ~6 µs | ~8 µs |
ekoDB's KV layer uses lock-free concurrent storage optimized for reads. RocksDB/LevelDB edge ahead on writes due to LSM-tree architecture.
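To put the Get number in perspective, a 185 ns lookup implies a rough single-threaded throughput ceiling (a back-of-the-envelope estimate that ignores batching and cache effects):

```python
get_latency_ns = 185  # ekoDB KV Get, from the table above
ceiling_mops = 1e9 / get_latency_ns / 1e6
print(f"~{ceiling_mops:.1f}M Gets/sec per thread")
```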
Query Operations
| Records | ekoDB (Simple) | ekoDB (Complex) | SQLite | DuckDB |
|---|---|---|---|---|
| 10 | 108 µs | 109 µs | ~200 µs | ~300 µs |
| 100 | 113 µs | 1.16 ms | ~500 µs | ~600 µs |
| 1000 | 120 µs | 12.8 ms | ~2 ms | ~15 ms |
ekoDB maintains consistent performance as result sets grow. For complex analytical queries, DuckDB is purpose-built for that use case.
Cache Warming
ekoDB's pattern-based cache warming pre-loads frequently accessed records:
| Records | Uncached | Cached | Speedup |
|---|---|---|---|
| 10 | 329 µs | 109 µs | 3.0x |
| 100 | 3.38 ms | 1.08 ms | 3.1x |
| 1000 | 41.4 ms | 11.4 ms | 3.6x |
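The speedup column is simply the ratio of the two timings; recomputing it from the rows above:

```python
# (uncached, cached) timings in seconds, per result-set size from the table
rows = {10: (329e-6, 109e-6), 100: (3.38e-3, 1.08e-3), 1000: (41.4e-3, 11.4e-3)}
for n, (uncached, cached) in rows.items():
    print(f"{n}: {uncached / cached:.1f}x")  # 3.0x, 3.1x, 3.6x
```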
Write Operations
| Operation | ekoDB | SQLite | RocksDB | LevelDB |
|---|---|---|---|---|
| Single insert | 101 µs | ~50 µs | ~6 µs | ~8 µs |
| Batch insert 100 | 5.2 ms | ~5 ms | ~600 µs | ~800 µs |
| Single update | 27 µs | ~50 µs | ~6 µs | ~8 µs |
| Single delete | 72 µs | ~50 µs | ~6 µs | ~8 µs |
RocksDB/LevelDB dominate write performance. Their LSM-tree design converts random writes to sequential I/O. ekoDB includes indexing overhead that slows writes but accelerates reads and enables queries.
Join Operations
| Join Type | ekoDB (10) | ekoDB (100) | ekoDB (1000) | SQLite (1000) |
|---|---|---|---|---|
| Simple Join | 22 µs | 528 µs | 42.9 ms | ~80 ms |
| Multi-Collection | 22 µs | 530 µs | 40.9 ms | ~100 ms |
| Filtered Join | 25 µs | 617 µs | 45.6 ms | ~90 ms |
ekoDB outperforms SQLite on joins due to in-memory processing. For complex join strategies on large datasets, SQLite's query planner offers more sophistication.
Authentication & Encryption
| Operation | Time |
|---|---|
| Validate API key | 87 ns |
| Generate token | 1.08 µs |
| Validate token | 1.61 µs |
| Encrypt (small) | 2.0 µs |
| Encrypt (large) | 72.3 µs |
Sub-microsecond auth overhead means authentication is negligible in request latency.
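That claim is easy to quantify: taking token validation at 1.61 µs against a representative 0.40 ms network request from the YCSB latency tables, auth accounts for well under one percent of request time:

```python
auth_s = 1.61e-6      # validate token, from the table above
request_s = 0.40e-3   # representative sub-millisecond network request latency
print(f"{auth_s / request_s:.1%} of request latency")  # → 0.4% of request latency
```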
Feature Comparison (Embedded)
| Feature | ekoDB | SQLite | RocksDB | LevelDB |
|---|---|---|---|---|
| Document queries | ✅ | ✅ | ❌ | ❌ |
| Full-text search | ✅ | ✅ (FTS5) | ❌ | ❌ |
| Vector search | ✅ | ❌ | ❌ | ❌ |
| Built-in auth | ✅ | ❌ | ❌ | ❌ |
| ACID transactions | ✅ | ✅ | ✅ | ❌ |
ekoDB trades some raw write performance for a richer feature set. If you need pure KV speed, RocksDB wins. If you need queries, search, and auth in one package, ekoDB is the only embedded option.
See Also
- Query Patterns & Cache Warming - Intelligent caching for 3x faster queries
- Transactions Architecture - ACID transaction performance
- Error Codes - API error reference