
Distribution

Raft consensus. Same-type replication. Configurable consistency.

Multi-instance data replication using Raft consensus. Supports eventual and strong consistency with quorum writes, anti-entropy repair, and automatic failover.

Raft consensus protocol
Eventual + Strong consistency modes
Quorum writes (configurable)
Anti-entropy repair
Same-type replication only
Requires Discovery component

The Distribution component provides multi-instance data replication for WeftKit databases using the Raft consensus protocol. When you need your data available across multiple nodes — for high availability, read scaling, or geographic distribution — Distribution manages the entire replication lifecycle.

Distribution only replicates same-type databases: a Relational instance replicates to other Relational instances. This architectural constraint ensures query language compatibility and type-safe replication semantics. Discovery is a hard dependency — Distribution requires a running Discovery instance to locate peers.

Internals

How It Works

Step-by-step walkthrough of the internal architecture.

1

Peer Discovery

Distribution queries the Discovery component to locate all members of its replication group. No manual IP configuration needed.
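
A rough sketch of what such a lookup could look like over HTTP, assuming a hypothetical /groups/{group}/members path and JSON member records (neither is WeftKit's documented Discovery API):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Peer is a hypothetical member record; the real Discovery
    // response shape may differ.
    type Peer struct {
        ID      string `json:"id"`
        Address string `json:"address"`
    }

    // lookupPeers asks a Discovery-style HTTP endpoint for all members
    // of a replication group. The URL layout is an assumption.
    func lookupPeers(discoveryEndpoint, group string) ([]Peer, error) {
        resp, err := http.Get(fmt.Sprintf("%s/groups/%s/members", discoveryEndpoint, group))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        var peers []Peer
        if err := json.NewDecoder(resp.Body).Decode(&peers); err != nil {
            return nil, err
        }
        return peers, nil
    }

    func main() {
        // Endpoint and group name taken from the configuration example below.
        peers, err := lookupPeers("http://localhost:7474", "relational-primary")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        for _, p := range peers {
            fmt.Printf("peer %s at %s\n", p.ID, p.Address)
        }
    }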

2

Raft Leader Election

A single leader is elected using randomized election timeouts (150–300ms). The leader handles all writes and coordinates log replication.
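
The sketch below illustrates the randomized-timeout idea in isolation: a follower re-arms a timeout in the 150–300 ms range on every heartbeat and starts an election when it fires. The channel-based heartbeat is an illustrative stand-in, not WeftKit's internals.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // randomElectionTimeout picks a value uniformly in [150ms, 300ms).
    // Randomization makes it unlikely that two followers time out at the
    // same instant and split the vote.
    func randomElectionTimeout() time.Duration {
        return 150*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond
    }

    // runFollower waits for leader heartbeats; if none arrives before the
    // randomized timeout fires, the node would become a candidate.
    func runFollower(heartbeat <-chan struct{}) {
        for {
            select {
            case <-heartbeat:
                // Leader is alive; loop and re-arm the timeout.
            case <-time.After(randomElectionTimeout()):
                fmt.Println("election timeout elapsed: becoming candidate")
                return
            }
        }
    }

    func main() {
        heartbeat := make(chan struct{})
        go func() {
            // Simulate three heartbeats, then a silent (failed) leader.
            for i := 0; i < 3; i++ {
                time.Sleep(50 * time.Millisecond)
                heartbeat <- struct{}{}
            }
        }()
        runFollower(heartbeat)
    }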

3

Log Replication

Write operations are appended to the leader's Raft log and replicated to followers before acknowledgment. Quorum acknowledgment ensures durability.
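
Concretely, a majority quorum for N replicas is floor(N/2) + 1, so a five-node group commits once three nodes (leader included) have persisted the entry and can tolerate two failures. A minimal sketch of the acknowledgment count, with follower responses stubbed in; this is not WeftKit code:

    package main

    import "fmt"

    // majority returns the quorum size for a replication group of n
    // members: floor(n/2) + 1.
    func majority(n int) int {
        return n/2 + 1
    }

    // replicateEntry appends an entry on the leader and counts follower
    // acknowledgments; the write is durable once a majority (leader
    // included) has persisted it.
    func replicateEntry(entry string, followerAcks []bool) bool {
        acks := 1 // the leader has already appended the entry locally
        for _, ok := range followerAcks {
            if ok {
                acks++
            }
        }
        return acks >= majority(len(followerAcks)+1)
    }

    func main() {
        // Five-node group: leader + 4 followers, 2 of which respond in time.
        fmt.Println("quorum size:", majority(5)) // 3
        fmt.Println("committed:", replicateEntry("SET k v", []bool{true, true, false, false})) // true
    }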

4

Consistency Enforcement

Strong consistency: only the leader serves reads. Eventual consistency: followers serve reads from their local state for lower latency.
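
The routing decision implied by the two modes can be sketched as follows; the node and mode types here are illustrative only:

    package main

    import "fmt"

    type Consistency int

    const (
        Strong Consistency = iota
        Eventual
    )

    type Node struct {
        ID       string
        IsLeader bool
    }

    // pickReadTarget routes a read according to the configured mode:
    // strong reads must go to the leader, eventual reads may be served
    // by any replica (here: the local node) for lower latency.
    func pickReadTarget(mode Consistency, local Node, leader Node) Node {
        if mode == Strong {
            return leader
        }
        return local
    }

    func main() {
        leader := Node{ID: "node-a", IsLeader: true}
        local := Node{ID: "node-c"}

        fmt.Println("strong read served by:", pickReadTarget(Strong, local, leader).ID)     // node-a
        fmt.Println("eventual read served by:", pickReadTarget(Eventual, local, leader).ID) // node-c
    }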

5

Anti-Entropy Repair

Background Merkle-tree-based digest comparison detects and repairs divergent replicas automatically without operator intervention.
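
A simplified sketch of digest-based divergence detection: each replica hashes its key ranges and only ranges whose digests differ need repair. A flat per-range digest map stands in here for a full Merkle tree, which would additionally let matching subtrees be skipped with a single root comparison:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "sort"
    )

    // rangeDigest hashes the key/value pairs of one key range into a
    // fixed-size digest. Keys are sorted so the digest is deterministic.
    func rangeDigest(pairs map[string]string) [32]byte {
        keys := make([]string, 0, len(pairs))
        for k := range pairs {
            keys = append(keys, k)
        }
        sort.Strings(keys)

        h := sha256.New()
        for _, k := range keys {
            h.Write([]byte(k))
            h.Write([]byte(pairs[k]))
        }
        var d [32]byte
        copy(d[:], h.Sum(nil))
        return d
    }

    // divergentRanges compares per-range digests from two replicas and
    // returns the ranges whose contents disagree and need repair.
    func divergentRanges(local, remote map[string][32]byte) []string {
        var out []string
        for r, d := range local {
            if remote[r] != d {
                out = append(out, r)
            }
        }
        return out
    }

    func main() {
        local := map[string][32]byte{
            "a-m": rangeDigest(map[string]string{"apple": "1", "mango": "2"}),
            "n-z": rangeDigest(map[string]string{"pear": "3"}),
        }
        remote := map[string][32]byte{
            "a-m": rangeDigest(map[string]string{"apple": "1", "mango": "2"}),
            "n-z": rangeDigest(map[string]string{"pear": "stale"}),
        }
        fmt.Println("ranges needing repair:", divergentRanges(local, remote))
    }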

Applications

Use Cases

Common deployment patterns and scenarios.

High Availability

Automatic failover when the leader node becomes unavailable. With quorum writes, acknowledged data survives the failover.

Read Scaling

Route read queries to follower replicas for horizontal read throughput scaling.

Multi-Region Deployment

Deploy replicas across availability zones or data centers for geographic redundancy.

Zero-Downtime Upgrades

Upgrade one replica at a time while the cluster continues serving traffic.

Configuration

Configuration Reference

TOML configuration options for this component.

[distribution]
# Requires a running Discovery instance
discovery_endpoint = "http://localhost:7474"

# Replication group name (same-type DBs only)
group = "relational-primary"

# Consistency mode: "strong" | "eventual"
consistency = "strong"

# Quorum size (majority by default)
quorum = "majority"

# Election timeout range (ms)
election_timeout_min_ms = 150
election_timeout_max_ms = 300