We would like to thank Keone and the entire Monad team for their valuable discussions and insightful feedback.
Introduction
The Monad blockchain is designed to tackle the scalability and performance limitations of existing systems like Ethereum, maximizing throughput and efficiency while preserving decentralization and security. Its architecture is composed of several integrated components: the Monad Client, which handles consensus and execution; MonadBFT, a consensus mechanism derived from HotStuff; the execution model, which leverages parallelism and speculative execution; and MonadDB, a state database purpose-built for Monad. Additional innovations such as RaptorCast and a local mempool design further enhance performance and reliability. Together, these elements position Monad as a next-generation blockchain capable of supporting decentralized EVM applications with low latency and strong guarantees of safety and liveness. Below, we provide a technical overview of each of these components.
Monad Architecture Overview
The Monad Client:
The Monad architecture is built around a modular node design that orchestrates transaction processing, consensus, state management, and networking. Validators run the Monad Client, whose consensus component is written in Rust and whose execution component is written in C/C++ for performance. Similar to Ethereum, the Monad client is split into two layers:
Consensus Layer: Establishes transaction ordering and ensures network-wide agreement using MonadBFT, a fast Byzantine Fault Tolerant (BFT) protocol achieving ~800ms finality.
Execution Layer: Verifies and executes transactions, updating the blockchain state in parallel for efficiency.
Consensus: MonadBFT
MonadBFT is a modern BFT consensus mechanism from the HotStuff family. It combines the following properties:
Pipelined consensus (enables low block times - 400 ms)
Resistance to tail forks
Linear communication complexity (enables a larger, more decentralized network)
Two-round finality
One-round speculative finality (speculative, but very unlikely to revert)
Pipelined Structure
Traditional HotStuff requires 3 phases to finalize a block, each happening one after the other:
Proposal: The leader (or proposer) creates a block proposal with transaction data and sends it to all validators.
Voting: Validators evaluate the block and return a signed vote (accept/reject the block).
Certification: The leader aggregates the votes. If at least two-thirds of validators sign off, their signatures are bundled into a Quorum Certificate (QC), which serves as cryptographic proof that a supermajority (≥2/3) has agreed to the block.
This sequential process delays block finalization. MonadBFT requires only two phases, which speeds up finality, and it pipelines those phases across rounds: while block k is proposed, block k–1 is voted on and block k–2 is finalized. This overlap reduces latency.
On Monad, at any round, validators propose a new block, vote on the previous, and finalize the one before that.
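To make the overlap concrete, here is a minimal sketch of the pipelining idea (our illustration, not Monad's code; the round loop and print statements are purely didactic):

```rust
// Illustrative sketch of pipelined BFT rounds (not Monad's actual code).
// At round k: propose block k, vote on block k-1, finalize block k-2.
fn main() {
    let rounds = 5u64;
    for k in 0..rounds {
        println!("round {k}: propose block {k}");
        if k >= 1 {
            println!("round {k}: vote on block {}", k - 1);
        }
        if k >= 2 {
            println!("round {k}: finalize block {}", k - 2);
        }
    }
}
```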
Comparison: HotStuff vs MonadBFT
The Monad documentation includes a clear infographic illustrating MonadBFT’s pipelined approach, showing how each round overlaps proposal, voting, and finalization to achieve sub-second finality.
Although pipelining increases block frequency and lowers latency, it introduces a problem that no previous pipelined consensus algorithm had addressed: tail-forking.
Tail-forking is best explained with an example. Suppose the next few leaders are Alice, Bob, and Charlie. In pipelined consensus, as mentioned before, second-stage communication about Alice's block piggybacks on top of Bob's proposal for a new block.
Historically, this meant that if Bob missed or mistimed his chance to produce a block, Alice's proposal would also not end up going through; it would be "tail-forked" out and the next validator would rewrite the history Alice was trying to propose.
MonadBFT has tail-fork resistance because of a sophisticated fallback plan in the event of a missed round. Briefly: when a round is missed, the network collaborates to communicate enough information about what was previously seen to ensure that Alice's original proposal ultimately gets restored. For more details, see this blog post explaining the problem and the solution.
Leader Election
MonadBFT employs a stake-weighted, deterministic leader schedule within fixed 50,000-block epochs (~5.5 hours) to ensure fairness and predictability:
Stake-Weighted Determinism: At epoch start, validator stake weights are locked. Each validator uses a cryptographic random function to generate an identical leader schedule, assigning slots proportional to stake (a validator with 3% of stake gets 3% of slots; see the sketch after this list). The leader changes with every new block, and all validators know exactly who the leader is for each slot across the entire epoch.
Security and Liveness: If a leader fails to propose a block within ~0.4s, validators broadcast signed timeout messages. When ≥2/3 stake submits timeouts, these form a Timeout Certificate (TC), sent to the next leader. The new leader includes the TC in its proposal, signaling the failure and ensuring chain continuity with the highest known block.
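As a rough illustration of the stake-weighted schedule described above, here is a toy sketch (our own: the function names, the stake values, and the use of Rust's std hasher in place of a real cryptographic random function are all assumptions for illustration):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy deterministic, stake-weighted leader schedule (illustrative only;
// Monad's real schedule uses its own cryptographic randomness).
// Every validator runs this with the same epoch seed and stake snapshot,
// so all nodes derive an identical schedule.
fn leader_for_slot(seed: u64, slot: u64, stakes: &[(&str, u64)]) -> String {
    let total: u64 = stakes.iter().map(|(_, s)| s).sum();
    let mut h = DefaultHasher::new();
    (seed, slot).hash(&mut h);
    let mut pick = h.finish() % total; // pseudo-random point in [0, total)
    for (name, stake) in stakes {
        if pick < *stake {
            return name.to_string();
        }
        pick -= stake;
    }
    unreachable!()
}

fn main() {
    // A validator with 3% of stake receives ~3% of slots over the epoch.
    let stakes = [("alice", 50), ("bob", 30), ("carol", 20)];
    for slot in 0..5 {
        println!("slot {slot}: leader = {}", leader_for_slot(42, slot, &stakes));
    }
}
```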
Linear Communication
Unlike older BFT protocols with quadratic (O(n²)) message complexity, MonadBFT scales linearly (O(n)). Validators send a fixed number of messages per round to the current or next leader, reducing bandwidth and CPU costs. This enables 100–200+ validators to operate on modest hardware and bandwidth without overloading the network.
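To see why this matters at scale, here is a toy comparison of per-round message counts (the linear constant of 2n is our illustrative assumption, not a measured figure):

```rust
// Why message complexity matters at scale: quadratic vs linear
// per-round message counts for an n-validator network (toy arithmetic).
fn main() {
    for n in [50u64, 100, 200] {
        let quadratic = n * n; // all-to-all broadcast (older BFT protocols)
        let linear = 2 * n;    // votes/timeouts sent to the current or next leader
        println!("n = {n}: O(n^2) ≈ {quadratic} msgs, O(n) ≈ {linear} msgs");
    }
}
```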
Fault Tolerance
MonadBFT retains liveness with up to 1/3 of validator stake offline, and retains safety (no conflicting finalized blocks or invalid state transitions) as long as more than 2/3 of validator stake is honest, i.e. less than 1/3 is malicious.
Block Propagation with RaptorCast
To support fast consensus, Monad uses RaptorCast for efficient block propagation. Instead of broadcasting entire blocks, RaptorCast splits blocks into erasure-coded chunks distributed via a two-level broadcast tree:
The leader sends each chunk to one validator (level 1 nodes), who forwards that chunk to all others (level 2 nodes).
Validators can reconstruct the block from any sufficiently large subset of chunks, roughly equal in total size to the original block. Extra chunks provide resilience against packet loss or faulty nodes.
This distribution results in both low latency (two hops per chunk) and low bandwidth utilization, unlike slower gossip protocols.
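A toy simulation of the two-level tree may help (our sketch; real RaptorCast applies Raptor erasure coding so that any sufficiently large subset of chunks reconstructs the block, which we do not model here):

```rust
// Toy simulation of RaptorCast-style two-level chunk distribution
// (illustrative only; erasure coding is omitted).
fn main() {
    let validators = 4;
    let chunks: Vec<&str> = vec!["c0", "c1", "c2", "c3"]; // one chunk per level-1 node
    let mut inbox = vec![Vec::new(); validators];

    // Hop 1: the leader sends chunk i to validator i.
    for (i, c) in chunks.iter().enumerate() {
        inbox[i].push(*c);
    }
    // Hop 2: each level-1 validator forwards its chunk to every other validator.
    for (i, c) in chunks.iter().enumerate() {
        for v in 0..validators {
            if v != i {
                inbox[v].push(*c);
            }
        }
    }
    // After two hops, every validator holds enough chunks to rebuild the block.
    for (v, chs) in inbox.iter().enumerate() {
        println!("validator {v} holds {chs:?}");
    }
}
```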
If a validator lags, it syncs missing blocks from peers, updating its state via MonadDB (see State Management section below). With consensus efficiently establishing transaction order, Monad's execution model builds on this foundation to process those transactions at high speed.
Execution Model
Monad’s execution model overcomes Ethereum’s single-threaded limitation (10–30 TPS) by leveraging modern multi-core CPUs for parallel and speculative transaction processing, as enabled by the decoupled consensus described above.
Asynchronous Execution
After consensus, transactions are executed asynchronously during the 0.4 s block window. This decoupling allows consensus to proceed without waiting for execution, maximizing CPU utilization.
Optimistic Parallel Execution
With Optimistic Parallel Execution, Monad tries to speed up blockchain transaction processing by running transactions at the same time (in parallel) whenever possible, rather than one by one. Here’s a simple explanation of how it works:
Run Everything in Parallel First:
Monad executes all transactions in a block simultaneously, assuming no conflicts, and creates a PendingResult for each, recording the inputs (state read, like pre-transaction account balances) and outputs (new state, like updated balances).
Check and Commit Results One by One:
After the parallel execution, Monad checks each PendingResult in order (serially).
If a transaction’s inputs are still valid (they match the current blockchain state), Monad applies the outputs to update the blockchain.
If the inputs are invalid (because another transaction changed the state), Monad re-executes that transaction with the updated state to get the correct result.
This saves time because many transactions don’t conflict, so running them in parallel is faster. Even when transactions conflict (for example: two transfers from the same account), Monad only re-executes the ones that fail the input check, which is usually fast because the data is already in memory.
Here’s a simple example with 4 transactions in a block:
Tom swaps USDC for MON on Uniswap Pool A: This modifies the state of Uniswap Pool A (USDC and MON balances in the pool) and Tom’s balances (decreases USDC, increases MON).
Jordan mints an NFT: This interacts with an NFT contract, creating a new token and assigning it to Jordan.
Alice transfers MON to Eve: This decreases Alice’s MON balance and increases Eve’s MON balance.
Paul also swaps USDC for MON on Uniswap Pool A: This modifies Uniswap Pool A’s state again and Paul’s balances (decreases USDC, increases MON).
How Monad Processes These Transactions
Monad assumes all transactions can run simultaneously and corrects conflicts afterward:
Step 1: Parallel Execution
Monad executes all 4 transactions at the same time, assuming the initial blockchain state is consistent for each. It produces a PendingResult for each transaction, recording:
The Inputs: The state read (Uniswap Pool A’s balances, Alice’s MON balance, etc).
The Outputs: The new state after the transaction (updated pool balances, updated account balances, etc).
For example:
Tom’s swap: Reads Pool A’s current USDC and MON balances, calculates the swap (Tom sends 100 USDC, receives X MON based on the pool’s pricing), and outputs new pool balances and Tom’s updated balances.
Jordan’s NFT mint: Reads the NFT contract’s state, creates a new token, and outputs the updated NFT contract state and Jordan’s ownership.
Alice’s transfer: Reads Alice’s MON balance, subtracts the transfer amount, adds it to Eve’s balance, and outputs the new balances.
Paul’s swap: Reads Pool A’s current balances (same as Tom’s initial read), calculates the swap, and outputs new pool balances and Paul’s updated balances.
Step 2: Serial Commitment
Monad commits the PendingResults one by one in the order they appear in the block (Tom, Jordan, Alice, Paul). It checks whether each transaction’s inputs still match the current blockchain state. If they do, the outputs are applied; if not, the transaction is re-executed.
Let’s walk through the commitment process:
Tom’s swap (Transaction 0):
Monad checks the PendingResult. The inputs (Pool A’s initial USDC and MON balances) match the blockchain’s current state because no prior transaction has modified Pool A.
Monad commits the outputs: Pool A’s USDC balance increases (from Tom’s USDC), MON balance decreases (Tom receives MON), and Tom’s balances update (less USDC, more MON).
New state: Pool A’s balances are updated, Tom’s balances are updated.
Jordan’s NFT mint (Transaction 1):
The inputs (NFT contract state) are unaffected by Tom’s swap, so they match the current state.
Monad commits the outputs: A new NFT is created, and Jordan is recorded as its owner.
New state: NFT contract state is updated, Jordan owns the new NFT.
Alice’s transfer (Transaction 2):
The inputs (Alice’s MON balance) are unaffected by Tom’s swap or Jordan’s mint, so they match the current state.
Monad commits the outputs: Alice’s MON balance decreases and Eve’s increases.
New state: Alice’s and Eve’s MON balances are updated.
Paul’s swap (Transaction 3):
The inputs (Pool A’s USDC and MON balances) were based on the initial state, but Tom’s swap (committed in step 1) changed Pool A’s balances.
Since the inputs no longer match the current state, Monad re-executes Paul’s swap using the updated Pool A state (post-Tom’s swap).
Re-execution calculates the swap with the new pool balances, producing updated outputs: Pool A’s USDC balance increases further, MON balance decreases further, and Paul’s balances update (less USDC, more MON).
New state: Pool A’s balances are updated again, Paul’s balances are updated.
Step 3: Final State
After committing all transactions, the blockchain reflects:
Uniswap Pool A: Updated balances reflecting Tom’s and Paul’s swaps (more USDC, less MON).
Tom: Less USDC, more MON based on his swap.
Jordan: Owns a newly minted NFT.
Alice: Less MON after transferring to Eve.
Eve: More MON from Alice’s transfer.
Paul: Less USDC, more MON based on his swap (calculated with the updated pool state).
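The worked example above maps naturally to code. Below is a minimal Rust sketch of optimistic execution followed by serial commitment (our simplification, not Monad's engine: we model "parallel" execution as running every transaction against the same pre-block state, track only pool and account balances, and drop the NFT mint for brevity). Note how Paul's swap fails the input check and is re-executed:

```rust
use std::collections::HashMap;

type State = HashMap<String, i64>;
// A transaction reads the state and returns (inputs it read, outputs it wrote).
type Tx = fn(&State) -> (State, State);

// Step 1: execute all txs against the initial state, producing PendingResults.
// Step 2: commit serially; re-execute any tx whose inputs went stale.
fn run_block(state: &mut State, txs: &[Tx]) {
    let pending: Vec<(State, State)> = txs.iter().map(|tx| tx(state)).collect();
    for (i, (inputs, outputs)) in pending.into_iter().enumerate() {
        let stale = inputs.iter().any(|(k, v)| state.get(k) != Some(v));
        let outputs = if stale {
            println!("tx {i}: inputs changed, re-executing");
            txs[i](state).1 // re-run against the updated state
        } else {
            outputs
        };
        state.extend(outputs); // apply the write set
    }
}

// Constant-product-style swap against pool A (toy pricing).
fn swap(s: &State, usdc_in: i64) -> (State, State) {
    let (u, m) = (s["poolA.usdc"], s["poolA.mon"]);
    let mon_out = m * usdc_in / (u + usdc_in);
    (
        State::from([("poolA.usdc".into(), u), ("poolA.mon".into(), m)]),
        State::from([("poolA.usdc".into(), u + usdc_in), ("poolA.mon".into(), m - mon_out)]),
    )
}

fn main() {
    let mut state: State = HashMap::from([
        ("poolA.usdc".into(), 1_000),
        ("poolA.mon".into(), 1_000),
        ("alice.mon".into(), 50),
        ("eve.mon".into(), 0),
    ]);

    // Tom and Paul each swap 100 USDC into pool A; Alice pays Eve 10 MON.
    let tom: Tx = |s| swap(s, 100);
    let paul: Tx = |s| swap(s, 100);
    let alice: Tx = |s| {
        let (a, e) = (s["alice.mon"], s["eve.mon"]);
        (
            State::from([("alice.mon".into(), a), ("eve.mon".into(), e)]),
            State::from([("alice.mon".into(), a - 10), ("eve.mon".into(), e + 10)]),
        )
    };

    // Paul's inputs go stale once Tom's swap commits, so Paul re-executes.
    run_block(&mut state, &[tom, alice, paul]);
    println!("{state:?}");
}
```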
Speculative Execution
Monad enhances speed via speculative execution, where nodes process transactions in a proposed block before full consensus:
Consensus orders transactions: Validators collectively decide on the exact list and order of transactions in each block. This ordering is secured through the MonadBFT consensus protocol.
Delayed State Update: Nodes receive the ordered list but delay final state commitment.
Speculative Execution: Transactions are executed immediately on a speculative basis.
State Commitment: State updates are committed after two consensus rounds (~800ms), once the block is finalized.
Rollback if Needed: If the block proposal ends up not being finalized, speculative results are discarded, and nodes revert to the last finalized state.
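A minimal sketch of the commit-or-rollback decision described in this list (our illustration; the snapshot type and block-outcome flag are stand-ins):

```rust
// Illustrative sketch of speculative execution with rollback (our
// simplification, not Monad's code): execute on proposal, keep the
// result pending, commit on finalization or discard on failure.
#[derive(Clone, Debug)]
struct StateSnapshot(u64); // stand-in for the full state

fn execute_block(s: &StateSnapshot) -> StateSnapshot {
    StateSnapshot(s.0 + 1) // pretend the block advanced the state
}

fn main() {
    let finalized = StateSnapshot(100);

    // Block proposed: execute immediately, but only speculatively.
    let speculative = execute_block(&finalized);

    // ~2 consensus rounds later (~800ms): finalize or roll back.
    let block_finalized = true; // outcome of MonadBFT voting
    let current = if block_finalized {
        speculative // commit the speculative result
    } else {
        finalized.clone() // discard it; fall back to the last finalized state
    };
    println!("current state: {current:?}");
}
```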
In summary, Optimistic Parallel Execution is about how transactions get processed (running many in parallel to speed up the process) while Speculative Execution handles when processing begins, starting right after a block is proposed but before full network confirmation. This parallel and speculative processing relies heavily on efficient state management, which is handled by MonadDB.
State Management: MonadDB
MonadDB improves blockchain performance by natively implementing a Merkle Patricia Trie (MPT) for state storage, unlike Ethereum and other blockchains that layer the MPT on slower, generic databases like LevelDB. This custom design reduces disk access, speeds up reads and writes, and supports concurrent data requests, enabling Monad’s parallel transaction processing. For new nodes, MonadDB uses statesync to download recent state snapshots, avoiding the need to replay all transactions. These features make Monad fast, decentralized, and compatible with existing systems.
Key Features
Native Merkle Patricia Trie: MonadDB stores blockchain state (such as accounts and balances) in an MPT built directly into its custom database, eliminating overhead from generic databases. This reduces delays, minimizes disk I/O, and supports multiple simultaneous data requests, improving efficiency.
Fewer Disk Reads for Speed: By leveraging in-memory caching and optimized data layouts, MonadDB minimizes SSD access, speeding up transaction processing and state queries like account balance checks.
Handling Multiple Data Requests at Once: MonadDB handles multiple state queries at once, supporting Monad’s parallel execution model and ensuring scalability under high transaction volumes.
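As a loose analogy for serving many state reads at once, here is a sketch using shared-memory readers (our illustration; MonadDB actually achieves concurrency through asynchronous I/O against its on-disk trie, not thread-per-query locking):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

// Toy illustration of serving many state reads concurrently.
fn main() {
    let state = Arc::new(RwLock::new(HashMap::from([
        ("alice".to_string(), 50u64),
        ("bob".to_string(), 75u64),
    ])));

    // Several readers query balances at the same time; a read lock lets
    // them proceed in parallel without blocking one another.
    let handles: Vec<_> = ["alice", "bob"]
        .into_iter()
        .map(|who| {
            let state = Arc::clone(&state);
            thread::spawn(move || {
                let balance = *state.read().unwrap().get(who).unwrap();
                println!("{who}: {balance}");
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```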
Role in Execution
MonadDB integrates with Monad’s execution model:
Block Proposal and Consensus: Validators propose blocks with a fixed transaction order, which MonadBFT confirms without updating the blockchain state.
Parallel Execution: After consensus, transactions are executed in parallel, with MonadDB handling state reads and tentative writes.
Serial Commitment: Transactions are committed one-by-one in the confirmed order. If a transaction’s read state is altered by an earlier transaction, it is re-executed to resolve the conflict.
State Finalization: Once a block is finalized, state changes are saved to MonadDB’s native Merkle Patricia Trie, creating a new Merkle root for data integrity.
Speculative Execution: MonadDB allows nodes to process transactions before final consensus, discarding any changes if the block isn’t finalized to ensure accuracy.
Node Synchronization and Trust Trade-Off
MonadDB enables rapid node synchronization by downloading the current state trie, similar to how git fetch updates a repository without replaying its full commit history. However, there is an important trust trade-off:
Instead of independently verifying every transaction and block from the beginning of the blockchain, this approach relies on a trusted source (such as a reputable peer, snapshot provider, or archive node) to supply the latest state. The downloaded state can be cryptographically verified against the on-chain Merkle root, ensuring its integrity.
It still relies on the rest of the network to have validated all state transitions correctly up to that point. If an invalid or malicious transaction was ever accepted by the network, you would not detect it without replaying and verifying the entire transaction history yourself. The trade-off is faster syncing at the cost of partial reliance on external trust.
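Conceptually, the integrity check works like this sketch (our illustration; a real client verifies a Merkle Patricia Trie root computed with keccak hashing, not the toy fold over std's hasher used here):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy integrity check for a downloaded state snapshot (illustrative only).
fn root_of(entries: &[(&str, u64)]) -> u64 {
    let mut h = DefaultHasher::new();
    for e in entries {
        e.hash(&mut h); // fold every (account, balance) pair into one digest
    }
    h.finish()
}

fn main() {
    let downloaded = [("alice", 50u64), ("bob", 75u64)];
    // What consensus committed to (derived from the same data here,
    // purely for illustration).
    let on_chain_root = root_of(&downloaded);

    // A node that fetched the snapshot recomputes the root and compares:
    assert_eq!(root_of(&downloaded), on_chain_root);
    println!("snapshot matches the on-chain root");
}
```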
Monad transparently addresses this trade-off:
The team acknowledges that efficient state sync is a necessity for a fast, operable network (just as many blockchains, including Ethereum and Solana, offer state snapshots for speed).
At the same time, they make clear that if you want to verify every single transaction and the integrity of the full ledger, you would need to sync from genesis and replay every block locally, which is slower but equivalent to traditional “trustless” node operation.
Transaction Management and Networking
Monad optimizes transaction submission and propagation to minimize latency and congestion, complementing MonadBFT and RaptorCast.
Localized Mempools
Unlike global mempools, Monad uses local mempools for efficiency:
Users submit transactions to an RPC node, which validates and forwards them to the next few scheduled MonadBFT leaders.
Each leader maintains a local mempool, selects transactions from it (prioritizing those with higher gas fees), and includes them in its block proposal.
If a transaction isn’t included within a few blocks, the RPC node resends it to new leaders.
Once a block is proposed, RaptorCast broadcasts it to validators, who vote via MonadBFT and execute transactions (often speculatively).
This targeted forwarding reduces network congestion, ensuring fast and reliable transaction inclusion.
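A toy model of that flow (our sketch; the leader schedule, forwarding window, and inclusion slot are invented for illustration):

```rust
// Toy model of the local-mempool flow (our sketch, not Monad's code):
// an RPC node forwards a transaction to the next few scheduled leaders
// and resends it if it is not included within a few blocks.
fn main() {
    let schedule = ["val-3", "val-7", "val-1", "val-9", "val-3"]; // upcoming leaders
    let window = 3; // forward to this many upcoming leaders at a time
    let includes_at = 4; // pretend the leader of slot 4 includes the tx

    let mut slot = 0;
    loop {
        for leader in schedule.iter().skip(slot).take(window) {
            println!("forwarding tx to {leader}");
        }
        if slot + window > includes_at {
            println!("tx included by {}", schedule[includes_at]);
            break;
        }
        println!("not included yet; resending to the next leaders");
        slot += window;
    }
}
```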
Conclusion
Overall, Monad's architecture demonstrates how a blockchain can achieve high performance without sacrificing safety. By using MonadBFT, parallel execution, and an optimized database, Monad speeds up block finalization and transaction processing while keeping results deterministic and consistent. Features like RaptorCast networking and local mempools further cut down latency and network overhead. There are trade-offs, especially around fast syncing and trust assumptions, but Monad is clear about them and offers flexible options for node operators. Taken together, these choices make Monad a strong foundation for building decentralized EVM applications, delivering the low latency and strong guarantees promised in its design.