An in‑JVM key‑value store with enough Raft consensus to feel distributed, without the cloud bill.
- Raft Consensus: real leader elections (no emperor’s new clothes), AppendEntries, terms, and quorum-based commits.
- Heartbeat Scheduler: the leader’s metronome beats every `beatTime` ms to keep followers from staging a coup.
- Persistent Metadata: `currentTerm` & `votedFor` on disk (`metaX.txt`), so crashes don’t invent ghost votes.
- Durable Command Log: every `PUT` is immortalized in `log0X.txt`, replayed on restart, because amnesia is cheating.
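A minimal sketch of the durable-log idea, assuming a newline-delimited JSON format; the real `FileLogger`, file layout, and JSON shape may differ:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class CommandLogSketch {
    private final Path logFile;

    public CommandLogSketch(Path logFile) {
        this.logFile = logFile;
    }

    // Append one PUT as a JSON line (hand-rolled JSON to avoid a library dependency).
    public void append(String key, String value) throws IOException {
        String line = String.format(
                "{\"op\":\"PUT\",\"key\":\"%s\",\"value\":\"%s\"}%n", key, value);
        Files.writeString(logFile, line,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Replay on restart: rebuild the in-memory map by re-applying every line.
    public Map<String, String> replay() throws IOException {
        Map<String, String> kv = new HashMap<>();
        if (!Files.exists(logFile)) return kv;
        for (String line : Files.readAllLines(logFile)) {
            if (line.isBlank()) continue;
            kv.put(extract(line, "key"), extract(line, "value"));
        }
        return kv;
    }

    // Naive field extraction; a real implementation would use a JSON parser.
    private static String extract(String json, String field) {
        String marker = "\"" + field + "\":\"";
        int start = json.indexOf(marker) + marker.length();
        return json.substring(start, json.indexOf('"', start));
    }
}
```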
| Operation | Routing | Consistency |
|---|---|---|
| PUT | Proxy → Leader | Linearizable (strict order, no surprises) |
| GET | Proxy → Random Follower | Eventual (fast reads, occasional staleness) |
- TCP+JSON interface because HTTP is so 2005.
- Leader election: followers time out, become candidates, collect virtual ballots; one winner per term.
- Log replication: leader maintains `nextIndex`/`matchIndex`, retries on failure, holds grudges.
- Commit: majority ACK → leader advances `commitIndex` → state machine applies.
- Election safety: one vote per term, enforced by persisted `votedFor`.
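The commit rule boils down to “find the highest index a majority has replicated.” A sketch of that calculation, using the Raft paper’s `matchIndex`/`commitIndex` names rather than this repo’s actual fields:

```java
import java.util.Arrays;

public class CommitIndexSketch {
    // matchIndex[i] = highest log index known to be replicated on follower i.
    // (The paper also requires log[N].term == currentTerm; omitted for brevity.)
    static int newCommitIndex(int[] matchIndex, int leaderLastIndex, int currentCommit) {
        int[] all = Arrays.copyOf(matchIndex, matchIndex.length + 1);
        all[matchIndex.length] = leaderLastIndex;   // the leader counts itself
        Arrays.sort(all);
        // Largest index that a strict majority of nodes has reached.
        int majorityIndex = all[(all.length - 1) / 2];
        return Math.max(currentCommit, majorityIndex);
    }

    public static void main(String[] args) {
        // Followers at 4 and 2, leader at 5: a majority (2 of 3) has index >= 4.
        System.out.println(newCommitIndex(new int[]{4, 2}, 5, 3));   // prints 4
    }
}
```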
Client ↔ Proxy ↔ Raft Nodes (threads in one JVM, dramatic flair)
- Proxy (`MultiThreadProxy`)
  - Accepts client TCP connections.
  - Routes `PUT` → leader queue; `GET` → random follower queue (spin the wheel).
- ClusterRegistry
  - The cluster’s phone book: roles, request queues, heartbeat queues.
  - APIs: `getLeaderQueue()`, `getRandomFollowerQueue()`, `getAllPeersQueues()`, etc.
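A sketch of the routing decision, reusing the `getLeaderQueue()` / `getRandomFollowerQueue()` names from above; the queue types, request format, and PUT-vs-GET check are assumptions, not the repo’s actual signatures:

```java
import java.util.concurrent.BlockingQueue;

public class RoutingSketch {
    // Minimal stand-in for the real ClusterRegistry.
    interface Registry {
        BlockingQueue<String> getLeaderQueue();          // assumed signature
        BlockingQueue<String> getRandomFollowerQueue();  // assumed signature
    }

    // PUTs must be linearizable, so they go to the leader; GETs trade
    // freshness for speed and land on a random follower.
    static void route(Registry registry, String jsonRequest) throws InterruptedException {
        if (jsonRequest.contains("\"value\"")) {         // crude PUT-vs-GET check
            registry.getLeaderQueue().put(jsonRequest);
        } else {
            registry.getRandomFollowerQueue().put(jsonRequest);
        }
    }
}
```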
- Follower / Leader (`CurrState` implementations)
  - Follower: resets election timer on heartbeat, replies to GETs, votes once per term, applies commits.
  - Leader: schedules heartbeats, handles PUTs, replicates logs, tracks quorum, commits & applies.
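“Votes once per term” is the heart of election safety. A hedged sketch of how a follower might handle a RequestVote, using the paper’s field names and omitting the log up-to-date check:

```java
public class VoteSketch {
    private int currentTerm = 0;
    private String votedFor = null;   // persisted (metaX.txt) before replying, in the real thing

    // Returns true if this node grants its vote for the given term.
    public synchronized boolean handleRequestVote(int candidateTerm, String candidateId) {
        if (candidateTerm < currentTerm) {
            return false;                       // stale candidate, ignore
        }
        if (candidateTerm > currentTerm) {
            currentTerm = candidateTerm;        // newer term: adopt it, forget the old vote
            votedFor = null;
        }
        if (votedFor == null || votedFor.equals(candidateId)) {
            votedFor = candidateId;             // grant (and remember) the vote
            return true;
        }
        return false;                           // already voted for someone else this term
    }
}
```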
- Heartbeat Scheduler
  - Leader’s `ScheduledExecutorService` fires AppendEntries (no entries) every `beatTime` ms.
  - Followers reset `deadline` on a valid heartbeat; no heartbeat = panic election.
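A sketch of that heartbeat loop: a `ScheduledExecutorService` pushing empty AppendEntries messages at a fixed rate. The queue type and message shape are assumptions:

```java
import java.util.concurrent.*;

public class HeartbeatSketch {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Fire an empty AppendEntries to every peer queue every beatTime ms.
    public void start(long beatTimeMs, Iterable<BlockingQueue<String>> peerQueues) {
        scheduler.scheduleAtFixedRate(() -> {
            // "I'm alive, don't start an election."
            String heartbeat = "{\"type\":\"AppendEntries\",\"entries\":[]}";
            for (BlockingQueue<String> q : peerQueues) {
                q.offer(heartbeat);
            }
        }, 0, beatTimeMs, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();   // e.g. when stepping down to follower
    }
}
```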
- Persistence (`FileLogger`)
  - Command log: JSON lines in `log0X.txt`.
  - Metadata: JSON in `metaX.txt`, so `currentTerm` & `votedFor` survive reboots.
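A sketch of the metadata half, assuming `metaX.txt` holds a single JSON object; the actual `FileLogger` serialization may differ:

```java
import java.io.IOException;
import java.nio.file.*;

public class MetadataSketch {
    private final Path metaFile;

    public MetadataSketch(Path metaFile) {
        this.metaFile = metaFile;
    }

    // Overwrite the file with the latest term and vote. A real implementation
    // would also flush/fsync before answering any RPC that depends on it.
    public void save(int currentTerm, String votedFor) throws IOException {
        String json = String.format(
                "{\"currentTerm\":%d,\"votedFor\":%s}",
                currentTerm, votedFor == null ? "null" : "\"" + votedFor + "\"");
        Files.writeString(metaFile, json,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING,
                StandardOpenOption.WRITE);
    }
}
```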
| Message | JSON |
|---|---|
| SET | `{ "key": "snack", "value": "chips" }` |
| GET | `{ "key": "snack" }` |
| OK | `{ "key":"snack","value":"chips","status":"ok" }` |
| Missing | `{ "key":"snack","value":null,"status":"error" }` |
| Idle timeout | `{ "close":"ok" }` |

```bash
# Compile
mvn clean compile

# Start server (Proxy spawns Raft nodes)
mvn exec:java -Dexec.mainClass=org.example.ServerMain

# Start a client
mvn exec:java -Dexec.mainClass=org.example.ClientMain
```

Then throw JSON at it and watch the consensus circus.
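If you’d rather skip `ClientMain` and throw the JSON by hand, here is a hedged sketch of a bare-bones client; the port and newline framing are guesses, so check `ClientMain` for the real defaults:

```java
import java.io.*;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ManualClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9090);   // port is an assumption
             PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {

            out.println("{ \"key\": \"snack\", \"value\": \"chips\" }");   // SET
            System.out.println(in.readLine());                             // ok response

            out.println("{ \"key\": \"snack\" }");                         // GET
            System.out.println(in.readLine());
        }
    }
}
```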
- No snapshots: the log grows forever; bring a bigger disk.
- Single‑JVM: no real network faults, but you get threads flaming out.
- Dynamic membership: someday you’ll add nodes without restarting everything.
MIT License: hack it, break it, impress your peers with your consensus cred.