Ecosystem

Pool Manager

One million connections. Zero compromise.

io_uring-based connection proxy handling 1M+ concurrent connections with intelligent query routing, read/write splitting, connection multiplexing, and a built-in query cache.

io_uring async I/O (Linux)
4 routing algorithms
Read/write splitting
Query result caching
< 1 μs connection acquisition
> 500K queries/sec proxy throughput

Pool Manager is the connection proxy that sits in front of one or more Standalone servers and handles the hard problem of massive connection concurrency. Built on Linux's io_uring async I/O interface, it achieves over 1 million concurrent client connections while adding sub-microsecond overhead to each query.

Beyond raw connection scaling, Pool Manager adds intelligent routing, read/write splitting, connection multiplexing, and a transparent query result cache. All of this happens at the proxy layer, so your application code and database server remain unchanged.

Internals

How It Works

Step-by-step walkthrough of the internal architecture.

1

io_uring Listener

Pool Manager uses Linux's io_uring interface for asynchronous I/O. Operations are submitted as entries on shared ring buffers and completions are harvested in batches, amortizing away the per-operation syscall overhead of traditional readiness-based I/O.

2

Connection Multiplexing

Many client connections are multiplexed over a smaller pool of server connections. A connection to the Pool Manager does not require a corresponding server connection until a query is issued.
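The borrowing behavior can be sketched as follows. This is a minimal illustration, not Pool Manager's implementation: the `ServerPool` class and its `borrow` helper are hypothetical names, and real multiplexing is asynchronous rather than this sequential loop.

```python
from contextlib import contextmanager
from queue import Queue

# Sketch: many clients share a small pool of server connections.
# A server connection is borrowed only for the duration of a query,
# so idle clients hold no server-side resources.

class ServerPool:
    def __init__(self, size):
        self._idle = Queue()
        for conn_id in range(size):
            self._idle.put(conn_id)

    @contextmanager
    def borrow(self):
        conn = self._idle.get()   # blocks until a server conn is free
        try:
            yield conn
        finally:
            self._idle.put(conn)  # returned as soon as the result is sent

pool = ServerPool(size=2)         # 2 server connections...
served = []
for client in range(5):           # ...serve 5 clients in turn
    with pool.borrow() as conn:
        served.append((client, conn))

print(served)  # every client was served; only 2 server conns ever existed
```

Because each connection is held only while a query is in flight, the ratio of client connections to server connections can be very large for workloads with idle time between queries.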

3

Query Routing

Queries are inspected (for SQL-speaking protocols) to determine read vs. write intent. Reads are routed to read replicas or follower nodes based on the selected routing algorithm.
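A simplified sketch of the read/write classification and routing decision is below. The function names (`is_read`, `route`) and the keyword-based classifier are illustrative assumptions; a real protocol-aware inspector must also handle comments, CTEs, and prepared statements.

```python
# Sketch: classify a query by its leading keyword, then route reads to
# replicas (round-robin here) and writes to the primary.

READ_KEYWORDS = {"SELECT", "SHOW", "EXPLAIN"}

def is_read(query: str) -> bool:
    first = query.lstrip().split(None, 1)[0].upper()
    return first in READ_KEYWORDS

def route(query: str, primary: str, replicas: list, cursor: list) -> str:
    """Round-robin reads across replicas; writes always hit the primary."""
    if is_read(query) and replicas:
        target = replicas[cursor[0] % len(replicas)]
        cursor[0] += 1
        return target
    return primary

cursor = [0]  # round-robin position
replicas = ["replica-a", "replica-b"]
print(route("SELECT * FROM users", "primary", replicas, cursor))   # replica-a
print(route("UPDATE users SET x = 1", "primary", replicas, cursor))  # primary
```

Swapping the round-robin selection for least-connections, random, or hash-based selection changes only the replica-picking line, which is why the routing algorithm is a single configuration knob.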

4

Query Cache

SELECT results are cached with a configurable TTL. The cache key is the query text plus its bound parameters, and entries are invalidated when a write touches a table they read from.
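The caching and invalidation rules can be sketched as a small TTL map. The `QueryCache` class and its single-table tagging are assumptions for illustration; tracking which tables a query reads is itself a non-trivial parsing problem.

```python
import time

# Sketch: a TTL cache keyed on (query text, parameters), where each entry
# is tagged with the table it reads so writes can invalidate it.

class QueryCache:
    def __init__(self, ttl_ms):
        self.ttl = ttl_ms / 1000.0
        self._entries = {}  # (sql, params) -> (result, expires_at, table)

    def get(self, sql, params):
        entry = self._entries.get((sql, params))
        if entry and entry[1] > time.monotonic():
            return entry[0]
        self._entries.pop((sql, params), None)  # expired or missing
        return None

    def put(self, sql, params, result, table):
        expires = time.monotonic() + self.ttl
        self._entries[(sql, params)] = (result, expires, table)

    def invalidate(self, table):
        # a write to `table` drops every cached result that reads it
        self._entries = {k: v for k, v in self._entries.items()
                         if v[2] != table}

cache = QueryCache(ttl_ms=5000)
cache.put("SELECT * FROM users WHERE id = ?", (7,), {"id": 7}, table="users")
print(cache.get("SELECT * FROM users WHERE id = ?", (7,)))  # hit: {'id': 7}
cache.invalidate("users")
print(cache.get("SELECT * FROM users WHERE id = ?", (7,)))  # None after write
```

Including the parameters in the key is what keeps two executions of the same prepared statement with different bindings from colliding.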

5

Backend Health

Pool Manager polls backend health from Discovery. Unhealthy backends are removed from the routing pool automatically. Connections are drained gracefully.
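The health-driven pool membership described above might look like the following. The `RoutingPool` class and the shape of the health statuses polled from Discovery are assumptions; the point is that an unhealthy backend leaves the routable set but drains rather than dropping in-flight work.

```python
# Sketch: backends reported unhealthy are moved from the routable set to a
# draining set, where in-flight queries finish before connections close.

class RoutingPool:
    def __init__(self, backends):
        self.healthy = set(backends)
        self.draining = set()

    def apply_health(self, statuses):
        """statuses: backend -> bool, e.g. as polled from Discovery."""
        for backend, ok in statuses.items():
            if ok:
                self.healthy.add(backend)
                self.draining.discard(backend)   # recovered
            elif backend in self.healthy:
                self.healthy.discard(backend)
                self.draining.add(backend)       # drain, don't kill

pool = RoutingPool(["127.0.0.1:20000", "127.0.0.1:20001"])
pool.apply_health({"127.0.0.1:20001": False})
print(sorted(pool.healthy))   # only the healthy backend stays routable
print(sorted(pool.draining))  # the unhealthy one drains gracefully
```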

Applications

Use Cases

Common deployment patterns and scenarios.

Massive Concurrency

Handle 1M+ concurrent client connections to a database that supports far fewer native connections.

Read Scaling

Automatically route read queries to replica nodes without application-level read/write splitting logic.

Connection Storm Protection

Absorb connection storms from application servers without overwhelming the database with connection overhead.

Query Caching

Cache frequent read results at the proxy layer for zero-DB-hit latency on hot queries.

Configuration

Configuration Reference

TOML configuration options for this component.

[pool]
# Standalone backend addresses
backends = [
  "127.0.0.1:20000",
  "127.0.0.1:20001",
]

# Client listener
bind = "0.0.0.0:6432"

# Max client connections
max_client_connections = 1048576  # 1M

# Max server connections per backend
max_server_connections = 1000

# Routing algorithm: "round_robin" | "least_conn" | "random" | "hash"
routing = "least_conn"

# Query cache TTL (0 = disabled)
cache_ttl_ms = 5000
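The four `routing` values in the config can be sketched as selection functions. These are illustrative implementations under assumed inputs (an in-flight query count per backend for least_conn, a client-derived key for hash), not Pool Manager's internals.

```python
import hashlib
import random

# Sketch: one selection function per routing algorithm named in [pool].

def round_robin(backends, cursor):
    target = backends[cursor[0] % len(backends)]
    cursor[0] += 1
    return target

def least_conn(backends, active):
    # pick the backend with the fewest in-flight queries
    return min(backends, key=lambda b: active.get(b, 0))

def random_pick(backends, rng=random):
    return rng.choice(backends)

def hashed(backends, key):
    # stable mapping: the same key always lands on the same backend
    digest = hashlib.sha256(key.encode()).digest()
    return backends[int.from_bytes(digest[:8], "big") % len(backends)]

backends = ["127.0.0.1:20000", "127.0.0.1:20001"]
print(least_conn(backends, {"127.0.0.1:20000": 12,
                            "127.0.0.1:20001": 3}))   # 127.0.0.1:20001
print(hashed(backends, "client-42") == hashed(backends, "client-42"))  # True
```

least_conn adapts to uneven query cost, while hash keeps a given client (or key) pinned to one backend, which helps cache locality on the server side.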