
Performance Benchmarks

TL;DR: ekoDB delivers 37–127K ops/sec on production workloads with full durability and encryption. Both ekoDB (Key-Value) and ekoDB (Collections) score A grade and lead every YCSB workload against 4 competitors (PostgreSQL, MongoDB, MySQL, Redis) — 3.7–7.3x faster on writes, 1.1–2.1x faster on reads — while using 2–5x less CPU. A single ~50MB Rust binary replaces 4–5 services with built-in auth, encryption, search, and real-time subscriptions.

When to Use ekoDB (and When Not To)

ekoDB excels when you need: write-heavy workloads with durability, read-heavy throughput, sub-millisecond latencies, CPU-efficient deployments, or a unified platform replacing multiple services (document store + KV + FTS + vector search + auth + real-time subs) in a single binary.

Consider alternatives when: you need specialized OLAP with complex window functions and CTEs (ClickHouse, DuckDB).

Quick Decision Guide

Based on YCSB benchmarks: 1M records, 64 threads, Fast storage mode. See Results Summary for numbers.

Your Workload | Best Choice | Why
Write-heavy + durable | ekoDB | Fastest across all competitors with sub-millisecond latency
Read-heavy + durable | ekoDB | Leads all 4 competitors on every read workload
Non-durable (any mix) | ekoDB | Competitive or leading, while carrying auth/encryption overhead competitors skip
Low-latency (any mode) | ekoDB | Consistent sub-millisecond across all workloads
CPU cost-sensitive | ekoDB | Better efficiency across durable and non-durable modes
Unified platform | ekoDB | Single binary replaces 4–5 services (auth, search, cache, DB)

Why ekoDB's Performance Is Remarkable

ekoDB isn't just fast. It's fast despite doing more work per request than competitors. Understanding this context makes the benchmark results more meaningful.

The Overhead ekoDB Carries

Every ekoDB request includes capabilities that competitors skip entirely:

Layer | What Happens | Competitors
Security | JWT/API key auth, AES-GCM encryption on all stored data | Redis: none, PG: separate
Data | Schema validation, automatic indexing | PG: yes, Redis: none
Search | Full-text + vector search index updates | Requires Elasticsearch + Pinecone
Durability | Configurable persistence guarantees per operation | PG: batched, Redis: async
Real-time | Subscription notifications | Requires additional services

Durability Configuration

ekoDB supports two durability modes via the durable_operations config setting:

Mode | Setting | Behavior | Use Case
Durable (default) | durable_operations: true | Every confirmed write persisted to disk before response | Most workloads, guaranteed persistence
Non-Durable | durable_operations: false | Writes persisted asynchronously | High-throughput, eventual durability acceptable
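For illustration, a sketch of how this could look in a server config file — the file layout and comments are hypothetical; only the durable_operations key and its two values come from the table above:

```toml
# Hypothetical config fragment -- key name from the docs, layout illustrative
durable_operations = true    # default: fsync every confirmed write before responding
# durable_operations = false # async persistence: faster, small data-loss window on crash
```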

Durability: The Hidden Performance Tax

Most benchmarks don't match durability settings, making some databases appear faster than they are in production. A database that skips durability guarantees will always benchmark faster — but that speed comes at the cost of data loss on crash. All ekoDB benchmarks use equivalent durability settings across databases for a fair comparison.


YCSB Results Summary

Industry-standard YCSB benchmarks: 1M records, 64 threads, Fast storage mode, durable writes with matching durability settings across all databases. Full tables and configuration details in Detailed Benchmarks below.

Database | Score | Grade
ekoDB (Key-Value) | 84 | A
ekoDB (Collections) | 81 | A
PostgreSQL | 46 | C
MongoDB | 41 | C
Redis | 35 | D
MySQL | 29 | F

Key findings:

  • Write-heavy (A, F): ekoDB delivers 39K ops/sec — 7.3x faster than PostgreSQL, 3.7x faster than MongoDB
  • Read-heavy (B, C, D): 112–127K ops/sec, leading all 4 competitors on every workload
  • Latency: Sub-millisecond across all workloads (0.39–0.59ms)
  • CPU efficiency: 2–5x better ops/CPU% than PostgreSQL and MongoDB
  • Redis: Struggles with durable writes — 10ms+ latency, only 4–6K ops/sec on write-heavy workloads

Analysis: Why ekoDB Leads

The 1M record, 64-thread YCSB results reveal three architectural advantages:

1. Write Performance Under Durability

ekoDB maintains fast writes even with full persistence guarantees. Where competitors see throughput collapse when durability is enabled (Redis drops to single-digit K ops/sec), ekoDB's write path is designed for durable workloads from the ground up — not bolted on as an afterthought.

2. Scalable Concurrency

At 64 threads, ekoDB scales efficiently with minimal lock contention. Both the Key-Value and Collections engines distribute write load across cores, maintaining consistent performance as thread count increases.

3. CPU Efficiency

Why CPU Efficiency Matters

On cloud platforms, you pay for CPU cores. A database that delivers 50K ops/sec using 200% CPU (2 cores) is more cost-effective than one delivering 60K ops/sec using 600% CPU (6 cores).

Efficiency = ops/sec ÷ CPU%: Higher is better. Lower CPU per operation means smaller instances, lower cloud costs, and more headroom for co-located services.
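A quick sanity check on the arithmetic, using the illustrative numbers from the paragraph above:

```python
def efficiency(ops_per_sec: float, cpu_percent: float) -> float:
    """Ops/sec delivered per percentage point of CPU used (higher is better)."""
    return ops_per_sec / cpu_percent

# The two illustrative databases: 50K ops/sec on 2 cores vs 60K ops/sec on 6 cores
db_small = efficiency(50_000, 200)  # -> 250.0 ops per CPU%
db_large = efficiency(60_000, 600)  # -> 100.0 ops per CPU%
assert db_small > db_large  # the nominally "slower" database is 2.5x more cost-effective
```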

The Bottom Line

ekoDB achieves these results while carrying significantly more overhead per request than every competitor — auth, encryption, indexing, and real-time subscriptions on every operation. A ~50MB Rust binary with zero garbage collection pauses.


Detailed YCSB Benchmarks by Storage Mode

Industry-standard YCSB (Yahoo! Cloud Serving Benchmark) results comparing both ekoDB engines (Collections and Key-Value) against PostgreSQL, MongoDB, MySQL, and Redis with matching durability settings.

YCSB Workloads Explained

Workload | Mix | Real-World Example
A | 50% read, 50% update | Session stores, shopping carts
B | 95% read, 5% update | Social media profiles, photo tagging
C | 100% read | Product catalogs, configuration lookups
D | 95% read, 5% insert | News feeds, activity streams
F | 50% read-modify-write | Bank transactions, inventory counters

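As a sketch, these mixes translate into the following operation counts at this benchmark's scale of 1,000,000 operations. (Note: the 50% plain-read half of Workload F comes from the standard YCSB definition; the table above lists only its read-modify-write share.)

```python
# YCSB operation mixes from the table above
MIXES = {
    "A": {"read": 0.50, "update": 0.50},
    "B": {"read": 0.95, "update": 0.05},
    "C": {"read": 1.00},
    "D": {"read": 0.95, "insert": 0.05},
    "F": {"read": 0.50, "read-modify-write": 0.50},
}

def op_counts(workload: str, total_ops: int = 1_000_000) -> dict:
    """Expected number of each operation type for a workload run."""
    return {op: round(frac * total_ops) for op, frac in MIXES[workload].items()}

print(op_counts("A"))  # {'read': 500000, 'update': 500000}
```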
Test Configuration
  • Hardware: Apple M1 Max, 64GB RAM, NVMe SSD
  • Scale: 1,000,000 records | 1,000,000 operations | 64 threads
  • Batch Size: 1 operation per request (raw network performance test)
  • Databases: 6 databases tested — ekoDB (Collections), ekoDB (Key-Value), PostgreSQL 15, MongoDB 6.0, MySQL 8.0, Redis 7.x

All databases tested on the same machine, same day, for accurate comparison per configuration.

Driver Asymmetry

PostgreSQL benchmarks use optimized JDBC drivers with connection pooling, prepared statement caching, and batch optimizations built into the driver layer. These are production-grade drivers tuned over decades.

ekoDB benchmarks use raw TCP protocol with a basic YCSB client: no driver-level optimizations, no connection pooling magic, no prepared statement cache.

Despite this disadvantage, ekoDB leads on every workload tested.


Note: All benchmarks include AES-GCM encryption on all stored data.

Storage Mode: Fast | Durability: Durable (guaranteed persistence)

Durability Settings

Database | Durable Setting | fsync Behavior | Group Commit Delay
ekoDB | durable_operations=true | WAL fsync, group_commits=true | 2ms (explicit)
PostgreSQL | synchronous_commit=on, fsync=on | Per-commit WAL fsync | commit_delay=2000us, commit_siblings=2
MongoDB | w=1, journal=true | WiredTiger journal fsync | journalCommitInterval=2
MySQL | innodb_flush_log_at_trx_commit=1, sync_binlog=1 | Per-commit redo log + binlog fsync | binlog_group_commit_sync_delay=2000us
Redis | appendfsync=always | Per-operation AOF fsync | N/A (no group commit)

This is the recommended production configuration. Every confirmed write is persisted to disk before the client receives a response — no data loss on crash, power failure, or unexpected shutdown. In real-world terms: a session store handling 39K mixed read/update operations per second, a product catalog serving 124K lookups per second, or a financial ledger processing 38K atomic read-modify-write transactions per second — all with full durability, encryption, and auth on every operation.

Throughput (ops/sec)

Database | Workload A | Workload B | Workload C | Workload D | Workload F
ekoDB (Key-Value) | 39,369 | 112,854 | 116,442 | 127,389 | 37,929
ekoDB (Collections) | 36,964 | 115,128 | 123,655 | 118,315 | 36,937
MongoDB | 10,511 | 72,327 | 77,095 | 76,017 | 10,130
MySQL | 2,505 | 26,730 | 63,016 | 45,409 | 2,324
PostgreSQL | 5,420 | 54,555 | 107,227 | 94,949 | 4,790
Redis | 6,022 | 6,403 | 103,864 | 5,999 | 4,200

Key findings:

  • ekoDB (Key-Value) leads Workloads A, D; ekoDB (Collections) leads B, C — both score A grade (84 and 81 out of 100)
  • Workload A (50% read/50% update): ekoDB (Key-Value) 7.3x faster than PostgreSQL, 3.7x faster than MongoDB
  • Workload B (95% read/5% update): ekoDB (Collections) 2.1x faster than PostgreSQL, 1.6x faster than MongoDB
  • Workload C (100% read): ekoDB (Collections) 15% faster than PostgreSQL (124K vs 107K), 1.6x faster than MongoDB
  • Workload D (95% read latest/5% insert): ekoDB (Key-Value) 127K — fastest across all databases and workloads, 1.3x faster than PostgreSQL, 1.7x faster than MongoDB
  • Workload F (Read-Modify-Write): ekoDB (Key-Value) 7.9x faster than PostgreSQL, 3.7x faster than MongoDB
  • Redis struggles with durable writes (appendfsync=always): 10ms+ latency on non-read workloads, only 4–6K ops/sec on A/D/F

Average Latency (ms)

Database | Workload A | Workload B | Workload C | Workload D | Workload F
ekoDB (Key-Value) | 0.53 | 0.46 | 0.59 | 0.40 | 0.43
ekoDB (Collections) | 0.43 | 0.40 | 0.45 | 0.39 | 0.41
MongoDB | 1.17 | 0.44 | 0.79 | 0.43 | 1.38
MySQL | 1.54 | 0.25 | 0.94 | 0.58 | 1.87
PostgreSQL | 1.01 | 0.17 | 0.56 | 0.31 | 1.16
Redis | 10.60 | 9.96 | 0.61 | 10.12 | 10.10

Key findings:

  • ekoDB maintains sub-millisecond latencies across all workloads (0.39–0.59ms)
  • Lowest latency on write-heavy Workloads A (0.43ms) and F (0.41ms)
  • PostgreSQL has lower latency on read-heavy workloads B (0.17ms) and C (0.56ms)
  • Redis latency explodes on write-heavy workloads: 10.60ms on Workload A vs ekoDB's 0.43ms — a 24x difference due to per-operation AOF fsync

CPU Efficiency (ops/sec per CPU %)

Note: ekoDB intentionally caps CPU usage at 95% to leave headroom for system stability. Other databases typically try to use 100% of available CPU during benchmarks.

Database | Workload A | Workload B | Workload C | Workload D | Workload F
ekoDB (Key-Value) | 248 | 499 | 548 | 479 | 226
ekoDB (Collections) | 145 | 357 | 447 | 412 | 109
MongoDB | 46 | 103 | 111 | 116 | 50
MySQL | 46 | 150 | 198 | 155 | 38
PostgreSQL | 45 | 172 | 238 | 222 | 40
Redis | 257 | 280 | 1,336 | 271 | 175

Key findings:

  • ekoDB (Key-Value) achieves 2–5x better CPU efficiency than PostgreSQL and MongoDB across all workloads
  • Best efficiency on Workload D: ekoDB (Key-Value) 479 ops/CPU% (2.2x PostgreSQL, 4.1x MongoDB)
  • Redis achieves high CPU efficiency on Workload C (1,336 ops/CPU%) due to minimal CPU usage on pure reads, but collapses on write-heavy workloads

CPU Utilization Details

Database | A Avg | A Peak | B Avg | B Peak | C Avg | C Peak | D Avg | D Peak | F Avg | F Peak
ekoDB (Key-Value) | 158.4 | 289.8 | 226.1 | 312.5 | 212.1 | 305.0 | 265.4 | 356.1 | 167.8 | 229.5
ekoDB (Collections) | 254.8 | 346.6 | 322.0 | 431.4 | 276.4 | 383.1 | 287.1 | 371.0 | 337.3 | 430.4
MongoDB | 224.0 | 344.4 | 697.2 | 819.8 | 690.7 | 780.6 | 651.3 | 763.7 | 201.8 | 552.1
MySQL | 53.3 | 85.8 | 177.4 | 253.9 | 317.6 | 366.2 | 292.3 | 365.6 | 61.1 | 96.3
PostgreSQL | 120.2 | 182.9 | 316.3 | 527.6 | 450.4 | 657.0 | 426.7 | 630.8 | 118.8 | 173.3
Redis | 23.4 | 27.9 | 22.8 | 26.8 | 77.7 | 94.1 | 22.1 | 27.0 | 23.9 | 27.8

Measured on 10-core Apple M1 Max. CPU% > 100 indicates multi-core usage.
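The efficiency table above is simply throughput divided by average CPU. Reproducing the Workload D column from the throughput and CPU tables (the published efficiency figures are truncated, so they differ by at most one unit):

```python
# Workload D, Fast + Durable: ops/sec and average CPU% from the tables above
throughput = {"ekoDB (Key-Value)": 127_389, "PostgreSQL": 94_949, "MongoDB": 76_017}
cpu_avg    = {"ekoDB (Key-Value)": 265.4,   "PostgreSQL": 426.7,  "MongoDB": 651.3}

for db in throughput:
    eff = throughput[db] / cpu_avg[db]
    print(f"{db}: {eff:.1f} ops per CPU%")
# ekoDB ~480.0, PostgreSQL ~222.5, MongoDB ~116.7 -- the table's 479 / 222 / 116
```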

Durability vs Performance Trade-off

Non-durable mode is faster but risky:

  • Data loss window: Uncommitted writes lost on crash (typically 0–1 seconds)
  • Use case: Caches, temp data, dev/test environments, bulk ingestion

Durable mode (default) is recommended for production:

  • In durable mode, ekoDB leads across all YCSB workloads with equivalent durability settings
  • Use durable for production, non-durable only for caches/temp data

Cache Mode: ekoDB KV vs Redis

A head-to-head comparison of ekoDB's Key-Value engine against Redis in pure cache mode — no persistence, no durability overhead. This isolates raw in-memory throughput performance.

Test Configuration
  • Hardware: Apple M1 Max, 64GB RAM, NVMe SSD
  • Scale: 1,000,000 records | 1,000,000 operations | 64 threads
  • Storage Mode: Fast | Durability: Non-Durable (no fsync, no persistence)
  • ekoDB: durable_operations=false (async WAL)
  • Redis: appendonly=no (no persistence)

Throughput (ops/sec)

Database | Workload A | Workload B | Workload C | Workload D | Workload F
ekoDB (Key-Value) | 87,078 | 90,498 | 90,058 | 80,782 | 61,660
Redis | 91,701 | 87,627 | 88,582 | 84,182 | 58,851

Average Latency (ms)

Database | Workload A | Workload B | Workload C | Workload D | Workload F
ekoDB (Key-Value) | 0.70 | 0.69 | 0.72 | 0.77 | 0.67
Redis | 0.69 | 0.72 | 0.71 | 0.71 | 0.72

CPU Efficiency (ops/sec per CPU %)

Database | Workload A | Workload B | Workload C | Workload D | Workload F
ekoDB (Key-Value) | 419 | 550 | 588 | 432 | 305
Redis | 1,189 | 1,139 | 1,141 | 1,087 | 766

Performance Scores

Database | Score | Grade
Redis | 97 | A+
ekoDB (Key-Value) | 83 | A

What This Means

ekoDB matches Redis on throughput (~82K avg ops/sec) and latency (~0.72ms avg) — despite carrying the full overhead described above on every operation. Redis wins decisively on CPU efficiency due to its single-threaded, minimal-overhead design.
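Both averages can be reproduced from the cache-mode tables above; the results land within rounding of the quoted figures:

```python
from statistics import mean

# Cache-mode (non-durable) throughput and latency across workloads A, B, C, D, F
eko_tput   = [87_078, 90_498, 90_058, 80_782, 61_660]
redis_tput = [91_701, 87_627, 88_582, 84_182, 58_851]
eko_lat    = [0.70, 0.69, 0.72, 0.77, 0.67]
redis_lat  = [0.69, 0.72, 0.71, 0.71, 0.72]

print(f"ekoDB: ~{mean(eko_tput):,.0f} ops/sec at {mean(eko_lat):.2f} ms avg")
print(f"Redis: ~{mean(redis_tput):,.0f} ops/sec at {mean(redis_lat):.2f} ms avg")
```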

If you're choosing between Redis and ekoDB for caching, ekoDB delivers equivalent speed plus built-in auth, encryption, search, and real-time subscriptions — without additional services.


Balanced Mode

Balanced storage mode is designed for datasets larger than available RAM, automatically managing which records stay in memory and which are stored on disk.

Storage Mode: Balanced | Durability: Durable (guaranteed persistence)

Balanced mode is designed for datasets that exceed available RAM — user profile stores, content management systems, and product catalogs where the total dataset is large but the active working set fits in memory. ekoDB leads on write-heavy and mixed workloads (A, B, F) with 2–3x better CPU efficiency than all competitors, while PostgreSQL edges ahead on pure-read workloads (C, D) where its mature query optimizer is most effective.

Throughput (ops/sec)

Database | Workload A | Workload B | Workload C | Workload D | Workload F
ekoDB (Collections) | 30,462 | 75,267 | 87,222 | 81,011 | 31,818
PostgreSQL | 5,207 | 53,112 | 103,370 | 94,769 | 5,020
MongoDB | 11,071 | 68,714 | 71,003 | 68,611 | 10,423
MySQL | 4,603 | 42,806 | 61,005 | 46,155 | 4,804

Average Latency (ms)

Database | Workload A | Workload B | Workload C | Workload D | Workload F
ekoDB (Collections) | 0.60 | 0.64 | 0.67 | 0.58 | 0.58
PostgreSQL | 1.06 | 0.17 | 0.59 | 0.34 | 1.17
MongoDB | 1.17 | 0.49 | 0.87 | 0.51 | 1.17
MySQL | 1.26 | 0.30 | 0.99 | 0.66 | 1.20

CPU Efficiency (ops/sec per CPU %)

Database | Workload A | Workload B | Workload C | Workload D | Workload F
ekoDB (Collections) | 108 | 281 | 346 | 342 | 104
PostgreSQL | 43 | 176 | 179 | 199 | 42
MongoDB | 49 | 96 | 103 | 94 | 43
MySQL | 61 | 144 | 196 | 143 | 60

vs Fast + Durable: Balanced mode trails across all workloads — Fast + Durable delivers 39K (A), 115K (B), 124K (C), 127K (D), 38K (F). The gap is most pronounced on read-heavy workloads where Fast mode's in-memory primary storage eliminates disk access entirely.

Storage Mode Guide

Mode | Optimized For | Best For
Fast | Maximum throughput, in-memory performance | Low-latency reads, data fits in RAM
Balanced | Memory-efficient, larger-than-RAM datasets | Datasets larger than RAM, general-purpose
Cold | Sequential writes, minimal disk footprint | Bulk ingestion, append-heavy, cost-optimized storage

Cold Mode

Cold storage mode is optimized for sequential write throughput and minimal disk footprint (10–20x less disk usage than Balanced mode). Best for bulk ingestion, append-heavy workloads, and cost-optimized storage. Traditional databases don't offer equivalent storage mode flexibility, so these are ekoDB-only comparisons against Fast mode.

1M records, 64 threads, full durability guarantees

Cold mode trades read performance for minimal disk footprint and sequential write efficiency — ideal for audit logs, event sourcing, IoT telemetry, and bulk ingestion where data is written once and read infrequently. With durable writes, data safety is guaranteed even at reduced throughput.

Workload | ekoDB Cold | ekoDB Fast | Result
A (50/50) | 17,013 | 38,024 | Fast 2.2x faster
B (95/5) | 60,103 | 92,396 | Fast 1.5x faster
C (100% read) | 95,675 | 97,628 | Fast 1.0x faster
D (95/5 latest) | 42,658 | 90,893 | Fast 2.1x faster
F (RMW) | 5,864 | 37,135 | Fast 6.3x faster

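The Result column is just the throughput ratio between the two modes:

```python
# Throughput (ops/sec) from the Cold vs Fast table above
cold = {"A": 17_013, "B": 60_103, "C": 95_675, "D": 42_658, "F": 5_864}
fast = {"A": 38_024, "B": 92_396, "C": 97_628, "D": 90_893, "F": 37_135}

for wl in cold:
    print(f"{wl}: Fast {fast[wl] / cold[wl]:.1f}x faster")
# A 2.2x, B 1.5x, C 1.0x, D 2.1x, F 6.3x -- matching the Result column
```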
Cold Mode Trade-offs

Use Cold mode for bulk data ingestion, ETL pipelines, append-only data models, and scenarios where storage cost matters more than read latency. For low-latency reads, use Fast or Balanced mode.


Feature Comparison

Beyond raw performance, ekoDB consolidates capabilities that typically require multiple external services into a single ~50MB binary. The table below shows what ships built-in versus what requires extensions or separate infrastructure.

Feature | ekoDB | PostgreSQL | MongoDB | MySQL | Redis
Document Queries | ✅ Built-in JSON | ✅ SQL + JSONB | ✅ Built-in BSON | ⚠️ JSON columns | ⚠️ RedisJSON module
Vector Search | ✅ Built-in | ⚠️ pgvector ext | ⚠️ Atlas only | ❌ Needs Pinecone | ⚠️ RediSearch module
Full-Text Search | ✅ Built-in | ⚠️ FTS config | ⚠️ Atlas Search | ⚠️ FULLTEXT index | ⚠️ RediSearch module
Built-in Auth | ✅ JWT + API keys | ⚠️ Roles only | ⚠️ SCRAM only | ⚠️ Roles only | ⚠️ ACL only
Per-op Encryption | ✅ AES-GCM | ⚠️ pgcrypto ext | ⚠️ CSFLE client-side | ⚠️ TDE / app-level | ❌ TLS in-transit only
Single Binary | ✅ Yes | ❌ Multi-process | ❌ mongod + mongos | ❌ Multi-process | ✅ Yes
Real-time Subscriptions | ✅ WebSocket | ⚠️ LISTEN/NOTIFY | ⚠️ Change Streams | ❌ Needs external | ⚠️ Pub/Sub
Durable Writes | ✅ Per-operation | ✅ Per-commit | ✅ Journal | ✅ Per-commit | ⚠️ AOF fsync

Embedded Benchmarks (Local Performance)

These benchmarks measure ekoDB's core Rust database engine running in-process — no network round-trips, no serialization, no TCP overhead. This represents the performance floor for applications that embed ekoDB directly as a library rather than connecting over the network, and provides a direct comparison against established embedded databases like SQLite, RocksDB, and LevelDB.

Benchmark Environment

All benchmarks run on the embedded Rust database engine. Production performance via REST/WebSocket/TCP APIs includes additional network latency.

Summary

Category | Operation | ekoDB | SQLite | RocksDB | LevelDB
Key-Value | Get | 185 ns | ~5 µs | ~3 µs | ~4 µs
Key-Value | Set | 3.5 µs | ~50 µs | ~6 µs | ~8 µs
Query | find_by_id | 2.3 µs | ~10 µs | N/A | N/A
Query | Simple (10 records) | 108 µs | ~200 µs | N/A | N/A
Insert | Single record | 101 µs | ~50 µs | ~6 µs | ~8 µs
Update | Single record | 27 µs | ~50 µs | ~6 µs | ~8 µs

When to use ekoDB: Excellent reads and KV operations, plus document queries that RocksDB/LevelDB can't do. The KV layer delivers 185 ns Get and 3.5 µs Set with high-concurrency access.

When to consider alternatives: RocksDB/LevelDB are faster at raw writes. They're pure KV stores without document parsing or indexing overhead.
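As a back-of-envelope conversion, a per-operation latency implies a single-threaded throughput ceiling of 1/latency (real throughput scales further with concurrency and batching, so this is only a lower-bound intuition):

```python
def implied_ceiling(latency_ns: float) -> float:
    """Single-threaded ops/sec upper bound implied by a per-operation latency."""
    return 1e9 / latency_ns

print(f"Get (185 ns): ~{implied_ceiling(185):,.0f} ops/sec")   # ~5.4M per thread
print(f"Set (3.5 µs): ~{implied_ceiling(3_500):,.0f} ops/sec") # ~286K per thread
```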

Key-Value Operations

Operation | ekoDB | SQLite | RocksDB | LevelDB
Get | 185 ns | ~5 µs | ~3 µs | ~4 µs
Set | 3.5 µs | ~50 µs | ~6 µs | ~8 µs
Exists | 109 ns | ~5 µs | ~3 µs | ~4 µs
Delete | 85 ns | ~50 µs | ~6 µs | ~8 µs

ekoDB's KV layer uses lock-free concurrent storage optimized for reads. RocksDB/LevelDB edge ahead on writes due to LSM-tree architecture.

Query Operations

Records | ekoDB (Simple) | ekoDB (Complex) | SQLite | DuckDB
10 | 108 µs | 109 µs | ~200 µs | ~300 µs
100 | 113 µs | 1.16 ms | ~500 µs | ~600 µs
1000 | 120 µs | 12.8 ms | ~2 ms | ~15 ms

ekoDB maintains consistent performance as result sets grow. For complex analytical queries, DuckDB is purpose-built for that use case.

Cache Warming

ekoDB's pattern-based cache warming pre-loads frequently accessed records:

Records | Uncached | Cached | Speedup
10 | 329 µs | 109 µs | 3.0x
100 | 3.38 ms | 1.08 ms | 3.1x
1000 | 41.4 ms | 11.4 ms | 3.6x
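The Speedup column is uncached time divided by cached time:

```python
# Times from the cache-warming table above, normalized to microseconds
uncached_us = {10: 329, 100: 3_380, 1000: 41_400}
cached_us   = {10: 109, 100: 1_080, 1000: 11_400}

for n in uncached_us:
    print(f"{n} records: {uncached_us[n] / cached_us[n]:.1f}x speedup")
# 3.0x, 3.1x, 3.6x -- matching the table
```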

Write Operations

Operation | ekoDB | SQLite | RocksDB | LevelDB
Single insert | 101 µs | ~50 µs | ~6 µs | ~8 µs
Batch insert 100 | 5.2 ms | ~5 ms | ~600 µs | ~800 µs
Single update | 27 µs | ~50 µs | ~6 µs | ~8 µs
Single delete | 72 µs | ~50 µs | ~6 µs | ~8 µs

RocksDB/LevelDB dominate write performance. Their LSM-tree design converts random writes to sequential I/O. ekoDB includes indexing overhead that slows writes but accelerates reads and enables queries.

Join Operations

Join Type | ekoDB (10) | ekoDB (100) | ekoDB (1000) | SQLite (1000)
Simple Join | 22 µs | 528 µs | 42.9 ms | ~80 ms
Multi-Collection | 22 µs | 530 µs | 40.9 ms | ~100 ms
Filtered Join | 25 µs | 617 µs | 45.6 ms | ~90 ms

ekoDB outperforms SQLite on joins due to in-memory processing. For complex join strategies on large datasets, SQLite's query planner offers more sophistication.

Authentication & Encryption

Operation | Time
Validate API key | 87 ns
Generate token | 1.08 µs
Validate token | 1.61 µs
Encrypt (small) | 2.0 µs
Encrypt (large) | 72.3 µs

With API key validation at 87 ns and token operations under 2 µs, authentication adds negligible overhead to request latency.
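To put those numbers in context against the network-level results earlier, here is the token-validation cost as a share of Workload A's 0.43 ms average latency from the Fast + Durable tables (an illustrative comparison, since embedded and networked figures come from different benchmark setups):

```python
token_validate_us  = 1.61   # per-request token validation, from the table above
request_latency_us = 430.0  # 0.43 ms Workload A average latency (Fast + Durable)

share = token_validate_us / request_latency_us
print(f"auth is ~{share:.1%} of a typical request")  # ~0.4%
```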

Feature Comparison (Embedded)

Feature | ekoDB | SQLite | RocksDB | LevelDB
Document queries | ✅ | ⚠️ JSON functions | ❌ | ❌
Full-text search | ✅ | ✅ (FTS5) | ❌ | ❌
Vector search | ✅ | ❌ | ❌ | ❌
Built-in auth | ✅ | ❌ | ❌ | ❌
ACID transactions | ✅ | ✅ | ⚠️ | ❌

ekoDB trades some raw write performance for a richer feature set. If you need pure KV speed, RocksDB wins. If you need queries, search, and auth in one package, ekoDB is the only embedded option.


See Also