Real-Time Subscriptions
ekoDB supports real-time push notifications when data changes. Subscribe to a collection and receive events for every insert, update, or delete — no polling required.
Two transport options:
- WebSocket (`/api/ws`) — bidirectional, supports multiple subscriptions per connection, best for applications that also perform CRUD over the same connection
- SSE (`/api/subscribe/{collection}`) — server-push only, one subscription per connection, simpler to integrate from browsers
Both deliver the same MutationNotification payload and support filtered subscriptions.
WebSocket Subscriptions
Connecting
Connect to the WebSocket endpoint with a Bearer token:
wss://{EKODB_HOST}/api/ws
Authorization: Bearer {TOKEN}
The server assigns each connection a unique ID and sends a heartbeat ping every 15 seconds to keep the connection alive across proxies and NATs.
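One way a client can use the heartbeat is as a liveness signal: if no ping arrives for a while, treat the connection as dead and reconnect. A minimal staleness check, sketched in TypeScript — the two-interval grace period is our assumption, not a documented contract; only the 15-second interval comes from the spec above:

```typescript
// Server heartbeat interval from the docs above.
const HEARTBEAT_MS = 15_000;

// Consider the connection stale when no ping has arrived within two
// heartbeat intervals (grace period chosen here for illustration).
function isStale(lastPingMs: number, nowMs: number, intervalMs = HEARTBEAT_MS): boolean {
  return nowMs - lastPingMs > 2 * intervalMs;
}
```

A client would record the timestamp of each received ping and run this check on a timer, tearing down and redialing the socket when it returns true.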
Subscribe to a Collection
Send a JSON message to subscribe:
{
"type": "Subscribe",
"messageId": "sub-1",
"payload": {
"collection": "orders"
}
}
Response:
{
"type": "Success",
"messageId": "sub-1",
"payload": {
"data": {
"subscribed": true,
"collection": "orders",
"subscription_id": "sub_abc123"
}
}
}
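A small helper can produce subscribe frames like the one above; a TypeScript sketch (the helper and type names are ours — only the wire shape comes from the messages shown):

```typescript
// Wire shape of a Subscribe message, per the examples above.
type SubscribeMessage = {
  type: "Subscribe";
  messageId: string;
  payload: { collection: string; filter_field?: string; filter_value?: string };
};

// Build a Subscribe message for a collection, with an optional
// field/value filter. messageId lets you correlate the Success reply.
function buildSubscribe(
  collection: string,
  messageId: string,
  filter?: { field: string; value: string }
): SubscribeMessage {
  return {
    type: "Subscribe",
    messageId,
    payload: {
      collection,
      ...(filter ? { filter_field: filter.field, filter_value: filter.value } : {}),
    },
  };
}
```

You would `JSON.stringify` the result and send it over the open WebSocket, then match the server's `Success` response by its `messageId`.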
Filtered Subscriptions
Reduce noise by subscribing only to mutations where a specific field matches a value:
{
"type": "Subscribe",
"messageId": "sub-2",
"payload": {
"collection": "orders",
"filter_field": "status",
"filter_value": "pending"
}
}
Only mutations where the status field equals "pending" are delivered. Delete events always pass through regardless of filter (to notify you when matching records are removed).
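The filter rule can be mirrored client-side, which is handy in tests or when multiplexing one unfiltered subscription locally. A sketch under the stated semantics — we assume `batch_delete` bypasses the filter the same way `delete` does:

```typescript
// Minimal mutation shape: delete events carry no record payload.
type Mutation = { event: string; records: Record<string, unknown> | null };

// Mirror of the server's filter rule: delete events always pass;
// other events pass only when the record's filter field matches.
function matchesFilter(m: Mutation, field: string, value: unknown): boolean {
  if (m.event === "delete" || m.event === "batch_delete") return true; // deletes bypass the filter
  return m.records !== null && m.records[field] === value;
}
```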
Receiving Mutations
When subscribed data changes, the server pushes a MutationNotification:
{
"type": "MutationNotification",
"payload": {
"collection": "orders",
"event": "insert",
"record_ids": ["rec_abc123"],
"records": {
"id": "rec_abc123",
"product": "Widget",
"status": "pending",
"quantity": 5
},
"timestamp": "2026-04-11T14:30:45.123456Z"
}
}
Event types:
| Event | Description |
|---|---|
| insert | Single record inserted |
| update | Single record updated |
| delete | Single record deleted |
| batch_insert | Multiple records inserted |
| batch_update | Multiple records updated |
| batch_delete | Multiple records deleted |
For insert and update events, records contains the full record data. For delete events, records is null — only record_ids are provided.
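A notification handler therefore needs to branch on the event type before touching `records`. A TypeScript sketch of that dispatch (the `describe` function is illustrative, not part of any client library):

```typescript
// Shape of a MutationNotification payload, per the example above.
type Notification = {
  collection: string;
  event: string;
  record_ids: string[];
  records: Record<string, unknown> | null;
};

// Deletes carry only record_ids; inserts/updates carry record data,
// so only the delete branch may safely ignore `records`.
function describe(n: Notification): string {
  if (n.event === "delete" || n.event === "batch_delete") {
    return `${n.collection}: removed ${n.record_ids.length} record(s)`;
  }
  return `${n.collection}: ${n.event} of ${n.record_ids.join(", ")}`;
}
```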
Schema Change Notifications
When a collection's schema changes, the server pushes a SchemaChanged event to all subscribers of that collection:
{
"type": "SchemaChanged",
"payload": {
"collection": "orders",
"version": 3,
"primary_key_alias": "id"
}
}
Client libraries use this to invalidate their schema cache automatically.
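The invalidation logic can be sketched as a version-keyed cache: a `SchemaChanged` event with a newer version evicts the entry so the next schema read refetches. This is an illustrative sketch of the idea, not the actual client-library implementation:

```typescript
// Minimal schema cache keyed by collection name.
class SchemaCache {
  private versions = new Map<string, number>();

  remember(collection: string, version: number): void {
    this.versions.set(collection, version);
  }

  // Returns true when a cached entry was stale and got evicted;
  // an already-evicted or up-to-date entry is left alone.
  onSchemaChanged(collection: string, version: number): boolean {
    const cached = this.versions.get(collection);
    if (cached !== undefined && cached < version) {
      this.versions.delete(collection);
      return true;
    }
    return false;
  }
}
```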
Unsubscribe
{
"type": "Unsubscribe",
"payload": {
"collection": "orders"
}
}
All subscriptions are automatically cleaned up when the WebSocket connection closes.
Limits
| Setting | Default |
|---|---|
| Max subscriptions per connection | 100 |
| Event buffer per connection | 256 |
| Rate limit | 100 messages/second |
| Heartbeat interval | 15 seconds |
If a subscriber falls behind (buffer full), events are dropped rather than blocking the mutation path. This is by design — ekoDB prioritizes write throughput over guaranteed delivery to slow subscribers.
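The drop-on-full behavior amounts to a bounded buffer with a non-blocking send. A TypeScript sketch of the pattern (class and method names are ours; the capacity of 256 is the documented default):

```typescript
// Best-effort bounded buffer: trySend drops the event (and counts it)
// instead of blocking when the buffer is at capacity.
class EventBuffer<T> {
  private queue: T[] = [];
  public dropped = 0;

  constructor(private capacity: number = 256) {}

  trySend(event: T): boolean {
    if (this.queue.length >= this.capacity) {
      this.dropped += 1; // never block the mutation path
      return false;
    }
    this.queue.push(event);
    return true;
  }

  // Consumer side: take everything buffered so far.
  drain(): T[] {
    const out = this.queue;
    this.queue = [];
    return out;
  }
}
```

The design trade-off is exactly the one stated above: a slow consumer loses events, but producers (the mutation path) never wait.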
SSE Subscriptions
For simpler integrations (especially browsers), use Server-Sent Events:
curl -N https://{EKODB_HOST}/api/subscribe/orders \
-H "Authorization: Bearer {TOKEN}"
With filtering:
curl -N "https://{EKODB_HOST}/api/subscribe/orders?filter_field=status&filter_value=pending" \
-H "Authorization: Bearer {TOKEN}"
Events received:
event: subscribed
data: {"subscription_id":"sse_abc123","collection":"orders"}
event: mutation
data: {"collection":"orders","event":"insert","record_ids":["rec_abc123"],"timestamp":"2026-04-11T14:30:45Z"}
event: schema_changed
data: {"collection":"orders","version":3,"primary_key_alias":"id"}
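If you consume the stream without an `EventSource` (e.g. via `fetch`), each frame's `event:` and `data:` lines must be parsed by hand. A simplified parser for the frames shown above — it assumes single-line `data:` payloads, which is what ekoDB emits in these examples:

```typescript
// Parse one SSE frame ("event:" and "data:" lines) into a typed event.
function parseSSEFrame(frame: string): { event: string; data: unknown } {
  let event = "message"; // SSE default when no event: line is present
  let data = "";
  for (const line of frame.split("\n")) {
    if (line.startsWith("event:")) event = line.slice(6).trim();
    else if (line.startsWith("data:")) data += line.slice(5).trim();
  }
  return { event, data: JSON.parse(data) };
}
```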
Browser Integration
// The native browser EventSource API does NOT support custom request
// headers. Because the ekoDB SSE endpoint requires `Authorization:
// Bearer <token>`, use a polyfill such as `event-source-polyfill` that
// lets you pass headers (or proxy the request through your own backend
// that injects the header server-side).
import { EventSourcePolyfill } from "event-source-polyfill";
const source = new EventSourcePolyfill(
"https://my-db.ekodb.net/api/subscribe/orders",
{ headers: { Authorization: "Bearer " + token } }
);
source.addEventListener("mutation", (event) => {
const notification = JSON.parse(event.data);
console.log(notification.event, notification.record_ids);
});
source.addEventListener("schema_changed", (event) => {
const schema = JSON.parse(event.data);
console.log("Schema updated to version", schema.version);
});
source.addEventListener("error", () => {
console.log("Connection lost, reconnecting...");
});
:::info SSE vs WebSocket
SSE is simpler but limited to one subscription per connection. If you need to subscribe to multiple collections or perform CRUD over the same connection, use WebSocket instead.
:::
Client Library Usage
All client libraries provide a high-level subscribe API that handles the WebSocket connection, message routing, and reconnection.
- 🦀 Rust
- 🐍 Python
- 📘 TypeScript
- 🔷 Go
- 🟣 Kotlin
use ekodb_client::Client;
let client = Client::builder()
.base_url("https://my-db.ekodb.net")
.api_key("API_KEY")
.build()?;
let mut ws = client.websocket("wss://my-db.ekodb.net/api/ws").await?;
// Subscribe to all mutations on "orders"
let mut rx = ws.subscribe("orders", None, None).await?;
// Subscribe with filter
let mut rx_pending = ws.subscribe("orders", Some("status"), Some("pending")).await?;
// Receive mutations
while let Some(notification) = rx.recv().await {
println!("{}: {} records", notification.event, notification.record_ids.len());
}
from ekodb_client import Client
client = Client.new("https://my-db.ekodb.net", "API_KEY")
ws = await client.websocket("wss://my-db.ekodb.net/api/ws")
# Subscribe to all mutations
receiver = await ws.subscribe("orders")
# Subscribe with filter
receiver_pending = await ws.subscribe("orders", filter_field="status", filter_value="pending")
# Receive mutations
while True:
notification = await receiver.recv()
if notification is None:
break
print(f"{notification['event']}: {notification['record_ids']}")
import { EkoDBClient } from '@ekodb/ekodb-client';
const client = new EkoDBClient({
baseURL: "https://my-db.ekodb.net",
apiKey: "API_KEY"
});
await client.init();
// Subscribe to all mutations
const stream = await client.subscribe("orders");
// Subscribe with filter
const filteredStream = await client.subscribe("orders", {
filterField: "status",
filterValue: "pending"
});
// Receive mutations
stream.on("mutation", (notification) => {
console.log(`${notification.event}: ${notification.recordIds}`);
});
stream.on("error", (error) => {
console.error("Subscription error:", error);
});
// Clean up
stream.close();
import (
	"fmt"

	ekodb "github.com/ekoDB/ekodb-client-go"
)
client, _ := ekodb.NewClient("https://my-db.ekodb.net", "API_KEY")
ws, _ := client.WebSocket("wss://my-db.ekodb.net/api/ws")
// Subscribe to all mutations
ch, _ := ws.Subscribe("orders")
// Subscribe with filter
chPending, _ := ws.Subscribe("orders", ekodb.SubscribeOptions{
FilterField: "status",
FilterValue: "pending",
})
// Receive mutations
for notification := range ch {
fmt.Printf("%s: %v\n", notification.Event, notification.RecordIDs)
}
import io.ekodb.client.EkoDBClient
val client = EkoDBClient("https://my-db.ekodb.net", "API_KEY")
val ws = client.webSocket("wss://my-db.ekodb.net/api/ws")
// Subscribe to all mutations
val flow = ws.subscribe("orders")
// Subscribe with filter
val filteredFlow = ws.subscribe("orders", SubscribeOptions(
filterField = "status",
filterValue = "pending"
))
// Receive mutations
flow.collect { notification ->
println("${notification.event}: ${notification.recordIds}")
}
Architecture Notes
Non-blocking broadcasts. Mutations are never slowed down by subscribers. The server uses try_send — if a subscriber's buffer is full, the event is dropped and a warning is logged. This means real-time notifications are best-effort; for guaranteed delivery of every change, use Ripples instead.
One channel per connection. All subscriptions on a single WebSocket connection share one internal channel (capacity 256). This keeps memory usage constant regardless of how many collections you subscribe to.
Automatic cleanup. When a WebSocket disconnects, all subscriptions are removed immediately. No stale subscriptions accumulate.
Permission enforcement. Subscriptions require read permission on the collection. The server checks permissions at subscribe time and returns a 403 error if denied.
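The subscribe-time check can be pictured as a simple permission lookup that yields an HTTP-style status. An illustrative sketch — the permission-string format (`read:{collection}`) is our assumption, not ekoDB's actual internal representation:

```typescript
// Subscribe-time authorization: read permission on the collection is
// required; otherwise the server rejects the subscription with 403.
function authorizeSubscribe(granted: ReadonlySet<string>, collection: string): number {
  return granted.has(`read:${collection}`) ? 200 : 403;
}
```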