This is a research article co-authored by @mostlyblocks (former Head of Research at Chorus One) and @ThogardPvP (CEO of @0xFastLane).
This article provides an accessible first perspective on validator timing games on Monad. Practically speaking, validators may intentionally build and gossip their block as late as possible to capture additional value from the incrementally larger set of transactions accruing over slot time.
While timing games have been practiced for multiple years on Ethereum, Monad’s performance-first architecture introduces additional risk considerations. The article recaps timing games, outlines Monad’s architecture, including a recent overhaul of MonadBFT consensus, and describes how sophisticated validators may approach timing games on Monad.
We can loosely define a decentralized network as a set of geographically distributed nodes. More rigorously, we can approximate such a network by its (stake-weighted) network graph, which has a center from which the expected latency to the set of nodes is minimized. As not all nodes are equally close to this center, the network must build in appropriate latency leeway so that distant nodes can participate - this is a design choice.
In practice, a fast blockchain can be built by co-locating all nodes, but such a structure comes with resiliency trade-offs (e.g. the datacenter shuts down). A more resilient, decentralized architecture like Monad must allow sufficient consensus time for a node that is e.g. located in New Zealand to participate.
Competitive validators may take advantage of this latency leeway to improve the expected value of their blocks by performing two successive actions. First, they minimize their latency to peers, and second, they artificially delay their block proposal. This is referred to as a “timing game”.
To understand why this is profitable, note that as the slot progresses, users keep sending transactions, and therefore, the proposer has a larger set of transactions to select the final block from. A more nuanced reason is that as the slot progresses, traders have access to more information, and in the case of arbitrage between the chain and another venue (e.g. the most liquid CEX), the expected opportunity value grows with the square root of the elapsed block time.
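To make the square-root intuition concrete: under a stylized model (not Monad-specific) in which the reference price on the external venue follows a driftless Brownian motion with volatility $\sigma$, the expected size of the price dislocation accumulated over a delay $\Delta$ is

$$\mathbb{E}\left[\,\lvert P_{t+\Delta} - P_t \rvert\,\right] = \sigma \sqrt{\frac{2\Delta}{\pi}} \;\propto\; \sqrt{\Delta},$$

so doubling the proposal delay increases the expected arbitrage opportunity by a factor of $\sqrt{2} \approx 1.41$, not 2.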
Timing games exist on a risk reward curve - the longer the validator delays, the higher the risk of an unwanted timeout resulting in a zero- or negative- payoff. For this reason, an informed validator must quantify the payoff of each unit of delay versus the marginal risk it runs. In summary, a sophisticated approach to timing games hinges on a strong understanding of the network’s topology and transaction arrival dynamics (i.e. the block value over slot time).
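As a minimal sketch of this trade-off, the snippet below maximizes expected payoff over delay, assuming a hypothetical square-root value curve and a logistic timeout probability; `block_value`, `timeout_prob`, and every constant are illustrative assumptions, not measured Monad parameters.

```python
# Hypothetical sketch: choosing a proposal delay that maximizes expected
# block value. All curves and constants are illustrative assumptions.
import math

def block_value(delay_ms: float, base: float = 1.0, k: float = 0.05) -> float:
    """Assumed block value: base fees plus sqrt-of-time arbitrage growth."""
    return base + k * math.sqrt(delay_ms)

def timeout_prob(delay_ms: float, budget_ms: float = 400.0, steep: float = 40.0) -> float:
    """Assumed timeout risk: negligible early, rising sharply near the latency budget."""
    return 1.0 / (1.0 + math.exp(-(delay_ms - budget_ms) / steep))

def expected_payoff(delay_ms: float) -> float:
    # A timeout yields a zero payoff; otherwise the proposer captures the block value.
    return (1.0 - timeout_prob(delay_ms)) * block_value(delay_ms)

best = max(range(0, 600, 10), key=expected_payoff)
print(f"optimal delay ~ {best} ms, expected payoff ~ {expected_payoff(best):.3f}")
```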
Monad's consensus mechanism is an improved version of HotStuff called "MonadBFT." HotStuff is a leader-based Byzantine fault-tolerant consensus protocol designed for high throughput and low latency. We will first examine parallelization, before describing how blocks are gossiped to the network.
In MonadBFT, the leader for a slot executes the transactions of the preceding slot while composing the list of transactions it will include in its block, which will then be executed by the upcoming leader. Intuitively speaking, this means that during its slot, a validator fulfills two tasks: it simulates and settles the transactions passed to it by the validator preceding it in an optimized manner (execution), and packs a block to pass to the validator succeeding it (consensus).
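A toy illustration of this pipelining, with slot numbers invented purely for the example:

```python
# Illustrative only: how consensus (block packing) and execution interleave
# across slots under MonadBFT's deferred-execution pipeline.
for slot in range(3, 6):
    leader = f"validator_{slot}"
    print(f"slot {slot}: {leader} executes the transactions of block {slot - 1} "
          f"and packs block {slot}, which the leader of slot {slot + 1} will execute")
```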
An implication is that any transaction with a state dependency does not yield an execution guarantee until settlement time, which takes place one slot after the transaction has been included. For a typical user, this will not usually be noticeable. For traders competing for limited opportunities, the main implication is a less informed priority gas auction, and if carrying inventory risk (e.g. arbitrageurs), an additional risk margin applied over the other leg(s) of the trade.
Once a validator has decided on its block, it sends it out alongside either a quorum certificate (QC) or a timeout certificate (TC) for the previous block. Specifically, a QC attests that the validator has built on the preceding block n, and that this block has been attested to as valid by 2/3 of the network (i.e. an aggregated signature). Conversely, a TC attests that the preceding validator has missed its slot and that the validator has built on the preceding block n-1; a valid TC must also be signed by 2/3 of the network.
The latest iteration of MonadBFT couples any timeout vote with a quorum vote for the most recent block a validator has attested to, which means that any block that garners a supermajority of votes eventually finalizes. This stands in contrast to previous versions, where a successful timeout quorum would have resulted in a reorg of the block proposed by the validator preceding the timed-out leader (a "tail fork").
Timing games on Monad are a race between the QC (validators vote on a block) and TC (validators vote on a timeout) paths. The QC path is faster, with linear overhead (one message per peer), while the TC path is slower: validators must multicast timeout votes and aggregate them into a TC (quadratic overhead).
"Linear overhead" refers to communication complexity that grows proportionally with network size - one leader sending a single message to each of N validators. "Quadratic overhead" occurs when each validator must communicate with many others, causing message volume to grow with the square of the network size as validators both send and receive messages from their peers.
A validator playing timing games must ensure its block reaches a supermajority of the network before a timeout vote does, and therefore, there is a strong incentive to optimize connectivity with the closest 2/3 of peers plus a risk margin for latency or timeouts. Put another way, validators in distant locations might timeout more frequently due to slower block receipt, issuing TCs even as QCs form elsewhere. The upshot is an increased incentive to centralize as timing games spread.
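A minimal sketch of that connectivity objective, with made-up latencies and stakes: find the smallest latency radius within which a proposer can reach two-thirds of total stake.

```python
# Sketch: smallest latency at which a proposer reaches a 2/3 stake
# supermajority, given per-peer latencies. All numbers are invented.
peers = [  # (one-way latency in ms, stake)
    (5, 10), (12, 25), (20, 15), (45, 20), (80, 10), (150, 20),
]
total_stake = sum(stake for _, stake in peers)

reached = 0
for latency, stake in sorted(peers):
    reached += stake
    if reached * 3 >= total_stake * 2:
        print(f"2/3 of stake reachable within ~{latency} ms")
        break
```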
The extent to which a proposer may risk a timeout depends on the marginal value of any delay. This is an empirical question, as traders will time their transactions such that they arrive late but still have a high inclusion chance (i.e. they model the network), whereas retail transaction arrival times can be thought of as unpredictable. What can be stated is that due to Monad's shorter block times, the expected value of cross-venue arbitrage is significantly lower than on e.g. Ethereum.
A validator playing timing games benefits from capturing transactions that would have otherwise accrued to a later slot; if all validators play timing games, this results in a competitive outcome with an increased risk profile. Economic incentives are likely to encourage competitive validators to engage in timing games in any scenario, as a decentralized network must accommodate distant participants, and therefore, a delay margin is available for well-networked nodes.
Under previous versions of the MonadBFT design, an elevated timeout risk translated into an increased risk of reorgs. This would have reduced block value upfront, as traders holding inventory risk (e.g. arbitrageurs) adjust their risk margin to the compound risk of not executing due to pipelined consensus plus the reorg risk. In this context, a validator playing timing games on Monad would have reduced the value of the preceding proposer's block as well.
This is not the case anymore under the current iteration, as timeout votes now carry a quorum vote for the latest block a validator has attested to. This additional information must be gossiped around the network with quadratic overhead (i.e. the message gets larger, increasing arrival latency), and therefore, likely adds more room for timing games.
In summary, timing games on Monad are possible and will favor sophisticated validators. While profitable for this subset, they in expectation reduce the block value of the proposers succeeding them via excess transaction capture. Timing games are geographically centralizing: they reduce the network visibility of nodes in distant locations, which manifests as inaccurate timeout votes and a loss of influence to high-performing, low-latency hubs.
The financial industry has long grappled with the inefficiencies of traditional systems, namely settlement delays, restricted market access, and capital inefficiencies. Decentralized finance (DeFi) would eventually emerge as a response, but early implementations often fell short, plagued by issues like over-collateralization and limited composability.
Recognizing these challenges, Injective introduced iAssets - programmable financial primitives that facilitate enhanced liquidity allocation, position-based exposure, and cross-market composability. Unlike their static predecessors, iAssets are dynamic, on-chain instruments, with second-order utility and no pre-funding constraints. With iAssets, Injective aims to move blockchain-based stocks, commodities, and more beyond proof-of-concept, ushering in the era of Stocks 3.0.
Traditional Finance (Stocks 1.0)
Traditional financial systems operate within structured, yet inflexible frameworks, characterized by delayed settlements (typically T+2), stringent access barriers, and segregated liquidity. The opacity of processes such as prime brokerage and rehypothecation further compound systemic risks, creating inefficiencies and restricting market participation to predominantly institutional actors.
Early DeFi & Synthetic Assets (Stocks 2.0)
The initial wave of DeFi introduced tokenized and synthetic assets, allowing for asset programmability and a more open financial environment. However, these models often required excessive collateralization (often surpassing 150%), leading to substantial capital inefficiencies. Liquidity pools were isolated, limiting the effective deployment of capital, and creating vulnerabilities such as liquidation cascades during market volatility.
Understanding the shortcomings of both traditional systems and early blockchain solutions, Injective's iAssets introduce significant innovations to further the utility of on-chain assets. Key advancements include dynamic liquidity allocation, position-based exposure, and cross-market composability without pre-funding constraints.
These characteristics mark a distinct shift from representational to programmable finance. Rather than merely mirroring the value of off-chain assets, iAssets transform them into composable building blocks—financial primitives that can be deployed across lending protocols, used as collateral, integrated into structured products, or programmed into hedging strategies. The result is a framework that not only preserves the core utility of traditional assets, but enhances them with real-time liquidity, seamless market integration, and systemic transparency.
In this light, iAssets are not just an iteration on previous tokenization efforts, they are a redefinition of what it means to own and utilize assets in a digitally native financial system.
Injective's iAssets are realized through a robust and meticulously designed technical infrastructure. At its core lies Injective's modular architecture, which has been developed over several years to support high-performance decentralized financial applications.
Exchange Module and On-Chain CLOB
The Exchange Module serves as the foundation for iAssets, providing a fully decentralized, on-chain central limit order book (CLOB). Unlike traditional automated market maker (AMM) models, the CLOB facilitates tighter spreads and more efficient price discovery. This architecture allows professional institutions to dynamically manage liquidity, ensuring that iAssets benefit from deep and responsive markets.
Moreover, the Exchange Module plays a pivotal role in optimizing liquidity across the Injective ecosystem. By enabling a shared liquidity environment, it allows for seamless capital flow between various financial applications, including trading platforms and structured financial products. This interconnectedness ensures that liquidity is not siloed, and instead dynamically allocated based on real-time market demands.
And iAssets haven’t wasted any time in picking up steam. Injective now hosts all Mag 7 stocks, which have done a cumulative $165M+ in trading volume since launch. iAssets as a whole have seen over $465M in trading, laying the foundation for a burgeoning asset category and aggressive innovation. One asset in particular has taken center stage: TRADFI, which achieved approximately $14 million in trading volume on its first day of listing.
Modular Design and Multi-VM Support
Injective's architecture is composed of interoperable modules, each serving a specific function within the ecosystem. This modularity gives developers access to a robust set of pre-built components, such as the Oracle Module, RWA Module, automatic smart contracts and more, without the need to build from scratch. Furthermore, Injective supports multiple virtual machines (VMs), enhancing the flexibility and scalability of applications built on the network.
To learn more about Injective modules, click here.
And Injective isn’t stopping there. The team is actively working on several initiatives aimed at enhancing capital efficiency and utility, notably, their Liquidity Availability Framework.
Liquidity Availability Framework
One of the key developments is the introduction of a "Liquidity Availability" framework. This initiative seeks to optimize capital utilization by allowing liquidity to move dynamically between applications based on demand. While underutilization is a notable concern, the primary objective of liquidity availability is to address limitations brought about by application-specific liquidity, and ensure that liquidity is allocated more efficiently across the ecosystem.
Want to learn more? Check out Injective’s research paper on Liquidity Availability here.
Injective’s iAssets represent a pivotal advancement in the evolution of financial markets, transitioning from static representations to dynamic, programmable financial primitives. By addressing the limitations of both traditional finance and early decentralized finance models, iAssets offer enhanced capital efficiency, real-time liquidity, and seamless composability across financial applications.
Leveraging Injective's robust modular architecture and on-chain central limit order book, iAssets facilitate a more integrated and efficient financial ecosystem. This infrastructure not only accelerates development timelines but also fosters innovation, enabling complex financial instruments to be constructed with greater ease and reliability.
As the financial industry continues to evolve, Injective seeks to provide the foundational infrastructure necessary for the next generation of programmable finance.
Want to learn more about iAssets? Check out the iAssets research paper here.
As a leading institutional staking provider, Chorus One is proud to support the Injective ecosystem and its innovative iAssets framework. By operating a highly secure and reliable validator node on Injective, Chorus One ensures network stability and contributes to the seamless functioning of the Injective ecosystem.
Solana employs a Proof-of-Stake (PoS) consensus algorithm where a designated leader, chosen for each slot, produces a new block. The leader schedule is randomly determined before the start of each epoch, a fixed period comprising multiple slots. Leaders generate a Proof-of-History (PoH) sequence (a series of cryptographic hashes where each hash is computed from the previous hash and a counter) to order transactions and prove the passage of time. Transactions are bundled into entries within a block, timestamped by the PoH sequence, and the block is finalized when the leader completes its slot, allowing the next leader to build upon it. A slot is optimistically confirmed when validators representing two-thirds of the total stake vote on it or its descendants, signaling broad network agreement.
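A toy version of the PoH hash chain, simplified in that the real recorder also mixes transaction hashes into the sequence:

```python
# Toy Proof-of-History chain: each hash commits to the previous hash and a
# counter, so the sequence length is a verifiable proxy for elapsed time.
import hashlib

def poh_chain(seed: bytes, ticks: int) -> list:
    state, chain = seed, []
    for counter in range(ticks):
        state = hashlib.sha256(state + counter.to_bytes(8, "little")).digest()
        chain.append(state)
    return chain

chain = poh_chain(b"genesis", 5)
print(chain[-1].hex())  # anyone can recompute and verify this endpoint
```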
When forks arise (situations where multiple conflicting chains exist), validators must decide which fork to support through their votes. Each vote commits a validator to a fork, and lockout rules restrict them from voting on conflicting forks for a duration. Solana’s fork choice rule governs when a validator can switch to another fork, ensuring the network converges on a single, canonical chain.
In the Agave client, the Tower struct, defined within the core/src/consensus.rs file, serves as the central data structure for managing a validator’s voting state. It plays a pivotal role in tracking the validator’s voting history, enforcing lockout rules to prevent conflicting votes, and facilitating decisions about which fork to follow in the presence of chain splits. Within the Tower, a vote_state field (implemented as a VoteState object from the solana-vote-program crate) maintains a detailed record of all votes cast by the validator along with their associated lockout periods, ensuring adherence to consensus rules. The struct also keeps track of the most recent vote through its last_vote field, capturing the latest transaction submitted by the validator. Additionally, the Tower includes threshold_depth and threshold_size parameters, which define the criteria for confirming slots; by default, these are set to a depth of 8 slots and a stake threshold of two-thirds, respectively, determining the level of agreement required from the network.
When a validator needs to vote on a slot, it relies on the Tower’s record_bank_vote method to execute the process seamlessly. This method begins by extracting the slot number and its corresponding hash from a Bank object, which represents a snapshot of the ledger at that specific slot. It then constructs a Vote object encapsulating the slot and hash, formalizing the validator’s intent. Finally, it invokes record_bank_vote_and_update_lockouts, which in turn delegates to process_next_vote_slot to update the vote_state, ensuring that the new vote is recorded and lockout rules are applied accordingly.
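A hedged Python sketch of the structures and flow described above; the real implementation is Rust in Agave's core/src/consensus.rs, and this simplification omits lockout expiry and vote-state pruning.

```python
# Simplified analogue of Agave's Tower / VoteState; field names follow the
# article, everything else is an illustration rather than the real logic.
from dataclasses import dataclass, field

@dataclass
class Lockout:
    slot: int
    confirmation_count: int = 1

    def lockout_slots(self) -> int:
        return 2 ** self.confirmation_count  # the 2^N rule, explained below

@dataclass
class TowerSketch:
    vote_state: list = field(default_factory=list)  # list of Lockout entries
    last_vote: int | None = None
    threshold_depth: int = 8        # default confirmation depth
    threshold_size: float = 2 / 3   # default stake threshold

    def record_bank_vote(self, slot: int) -> None:
        # Analogue of record_bank_vote -> record_bank_vote_and_update_lockouts
        # -> process_next_vote_slot: deepen prior lockouts, push the new vote.
        for prior in self.vote_state:
            prior.confirmation_count += 1
        self.vote_state.append(Lockout(slot))
        self.last_vote = slot
```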
Once a validator casts a vote on a slot in Solana, it becomes temporarily barred from voting on conflicting slots, a restriction governed by the 2^N lockout rule, which imposes an exponentially increasing duration with each subsequent vote. Initially, after a validator’s first vote ($N=1$), the lockout period spans $2^1 = 2$ slots, but with a second vote ($N=2$), the lockout for the first vote extends to $2^2 = 4$ slots, while the second vote introduces a new lockout of $2^1 = 2$ slots. More generally, after $N$ votes, the earliest vote in the sequence is locked out for $2^N$ slots, ensuring the validator remains committed to its chosen fork.
This mechanism is implemented within the process_next_vote_slot function, which is invoked by record_bank_vote to update the validator's voting state. At the heart of this exponential lockout is the confirmation_count field, where each new vote increments the confirmation_count of prior votes. The confirmation_count is part of the Lockout struct, and it is used to determine the number of slots for which a vote is locked out, namely $2^\text{confirmation\_count}$.
To illustrate, consider a validator voting on slots 2 and 3 of a fork. With its first vote on slot 2, the confirmation_count is set to 1, resulting in a lockout of $2^1 = 2$ slots, meaning the validator is barred from voting on a conflicting fork until after slot 4. When it votes on slot 3, the confirmation_count for slot 2 increases to 2, extending its lockout to $2^2 = 4$ slots, or until slot 6, while slot 3 starts with a confirmation_count of 1, locking out until slot 5. Consequently, the validator cannot vote on a conflicting fork, such as slot 4 on a different chain, until after slot 6.
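The worked example can be reproduced in a few lines:

```python
# Reproducing the worked example: votes on slots 2 and 3 of the same fork.
votes = {}  # slot -> confirmation_count

for new_slot in (2, 3):
    for slot in votes:
        votes[slot] += 1  # each new vote deepens prior lockouts
    votes[new_slot] = 1

for slot, count in votes.items():
    print(f"slot {slot}: locked out for 2^{count} = {2**count} slots, "
          f"until slot {slot + 2**count}")
# -> slot 2 locked until slot 6, slot 3 locked until slot 5
```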
Fig. 1: Graphical representation of a validator V switching forks, with the corresponding $2^N$ lockout rule.
Through this mechanism, Solana ensures that validators remain committed to their chosen fork for the duration of the lockout period, preventing double-voting or equivocation that could destabilize the network.
Validators in Solana’s consensus process sometimes miss voting on slots due to network delays or operational issues. As noted in SIMD-0033 (Timely Vote Credits), missing votes or submitting them late reduces rewards, prompting some validators to engage in backfilling: retroactively voting on missed slots in a later transaction. For instance, a validator skipping slots 4 to 6 might vote on them at slot 7, claiming credits to maintain staking rewards and network contribution. However, Solana enforces strict ordering: the check_and_filter_proposed_vote_state function ensures slots in a vote transaction exceed the last voted slot in the VoteState, rejecting earlier slots with a VoteError::SlotsNotOrdered. This consensus-level check, executed on-chain, means that backfilling must advance the voting sequence; including slots 4 to 6 at slot 7 is only valid if the last voted slot was 3 or earlier (see Fig. 1).
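A sketch of that ordering rule (the actual check lives in the vote program and is more involved):

```python
# Sketch of the rule enforced by check_and_filter_proposed_vote_state:
# every slot in a proposed vote must exceed the last slot already voted on.
def slots_ordered(proposed: list, last_voted_slot: int) -> bool:
    previous = last_voted_slot
    for slot in proposed:
        if slot <= previous:
            return False  # would be rejected with VoteError::SlotsNotOrdered
        previous = slot
    return True

print(slots_ordered([4, 5, 6], last_voted_slot=1))  # True: valid backfill
print(slots_ordered([4, 5, 6], last_voted_slot=5))  # False: rejected
```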
It is worth mentioning that, although this practice appears healthy, improper backfilling can disrupt the network. When a validator backfills by voting on missed slots, each new vote extends the lockout period of earlier votes through Solana’s $2^N$ rule, deepening its commitment to the chosen fork. If these slots belong to a stale fork, the prolonged lockouts may prevent the validator from switching to the main chain, potentially delaying consensus if many validators are similarly affected, thus hindering the network’s ability to confirm new blocks.
To address this, Ashwin Sekar from Anza proposed SIMD-0218: Intermediate Vote Credits (IVC), which integrates backfilling into the protocol by crediting intermediate slots at the landing slot’s time. This controlled approach is meant to eliminate risky backfilling mods, ensuring liveness and fairness while allowing credit recovery.
Detecting backfilling in Solana poses a challenge because, as previously discussed, voting on past slots is not prohibited and can occur under normal circumstances, such as network delays or validator restarts. However, fluctuations in network-wide TVC effectiveness offer a lens to identify potential backfilling. A query obtained using Flipside reveals that TVC Effectiveness fluctuates over time, with a notable network-wide dip during epoch 786.
While a more significant dip occurred between Epochs 760 and 770, we focused on the more recent period due to the availability of granular internal TVC data, acknowledging that further investigation into historical valleys is warranted to fully understand network voting dynamics.
For this analysis, we focused on the date range between 2025-05-11 and 2025-05-15.
Fig. 2: Network TVC Effectiveness obtained using Flipside, cf. here
Since Dune is the only data provider granting easy access to on-chain voting data, we developed a dashboard with the goal of detecting potential instances of the backfilling practice. Specifically, we developed two methods; however, both remain probabilistic, as backfilling cannot be definitively confirmed without direct access to the validator's intent.
The first method, dubbed the "Simple Mod", examines voting behaviour focusing on the relationship between the number of vote transactions signed by a validator in a single slot and the slot distance between the landing slot of those transactions and the slot of the recent block hash they reference. For example, if a validator submits 10 vote transactions in slot 110 with a recent block hash from slot 108, the distance is only 2 slots, significantly less than the number of transactions. This pattern suggests backfilling because the validator is likely catching up on missed slots in a burst: the short distance indicates the transactions were created in quick succession at slot 108, possibly to retroactively vote on slots 99 to 108, a common backfilling strategy to claim credits for earlier missed votes in a single submission, rather than voting incrementally as the chain progresses.
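A hedged sketch of this heuristic; the tuple layout and the threshold comparison are our illustration of the dashboard logic, not the exact Dune query:

```python
# "Simple Mod" sketch: flag a (validator, slot) pair when the number of vote
# transactions landing in one slot exceeds the distance to the referenced
# recent block hash. Data shapes are invented for illustration.
from collections import Counter

def simple_mod_flags(votes):
    """votes: iterable of (validator, landing_slot, blockhash_slot) tuples,
    one per vote transaction."""
    counts = Counter(votes)
    return [
        (validator, landing)
        for (validator, landing, bh_slot), n in counts.items()
        if n > landing - bh_slot  # more votes than slots elapsed -> burst
    ]

# The example from the text: 10 vote transactions landing in slot 110,
# all referencing the block hash of slot 108 (distance 2 < 10 votes).
sample = [("validatorA", 110, 108)] * 10
print(simple_mod_flags(sample))  # -> [('validatorA', 110)]
```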
Fig. 3: Number of possible backfills detected using the “Simple Mod” model, cf. here
Figure 3 shows the hourly aggregation of possible backfill detections using the “Simple Mod” method. There are two main peaks in the data: the first occurs at 00:00 on May 11, 2025, likely triggered by a widespread network issue or a collective validator restart, as the validators identified during this spike do not reappear in subsequent instances, suggesting a one-off event rather than sustained backfilling. The second peak, around 11:00 AM on May 13, captures a more persistent trend, involving the most frequently detected accounts across the dataset. By examining the frequency of these detections, we identified three validators consistently engaging in potential backfilling, indicating an active practice of retroactively voting on missed slots to maximize credits, alongside one validator exhibiting milder behavior, with fewer instances that suggest a less aggressive approach to catching up on voting gaps.
The second method, dubbed the "Elaborate Mod", takes a broader perspective, analyzing voting patterns to identify validators that consistently submit an unusually high number of vote transactions in single slots across multiple instances. We aggregated vote transactions hourly, flagging validators that submit more than 4 distinct vote transactions in a slot. We chose this threshold because, while a leader might include multiple vote transactions from a validator due to network latency or validator restarts, exceeding 4 votes in a single slot is unlikely under normal conditions where validators typically vote once per slot to advance the chain. We further refined the detection by requiring this behaviour to occur in over 10 distinct hourly intervals, reflecting that such frequent high-volume voting is less likely to stem from typical network operations. This pattern could indicate backfilling because validators engaging in this practice often batch votes for multiple missed slots into a single slot’s transactions, aiming to retroactively fill voting gaps.
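Sketched in the same spirit, the thresholds below come from the text, while the data shapes and field names are invented for illustration:

```python
# "Elaborate Mod" sketch: >4 distinct vote transactions per slot, recurring
# in >10 distinct hourly intervals for the same validator.
from collections import defaultdict

def elaborate_mod_flags(votes, per_slot=4, min_hours=10):
    """votes: iterable of (validator, hour, slot, tx_signature) tuples."""
    sigs_per_slot = defaultdict(set)
    for validator, hour, slot, sig in votes:
        sigs_per_slot[(validator, hour, slot)].add(sig)

    flagged_hours = defaultdict(set)
    for (validator, hour, _slot), sigs in sigs_per_slot.items():
        if len(sigs) > per_slot:  # unusually many distinct votes in one slot
            flagged_hours[validator].add(hour)

    # require the behaviour to recur across many hourly intervals
    return [v for v, hours in flagged_hours.items() if len(hours) > min_hours]
```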
Fig. 4: Number of possible backfills detected using the “Elaborate Mod” model, cf. here
Figure 4 presents the hourly aggregation of possible backfill detections using the "Elaborate Mod" method, revealing a distinct pattern that complements the Simple Mod analysis. Unlike the dual peaks observed previously, this method identifies a single prominent peak around 11:00 AM on May 13, 2025, which aligns precisely with the second peak detected by the “Simple Mod”. The absence of the earlier peak from May 11 at 00:00 underscores the Elaborate Mod’s design, which reduces sensitivity to false positives caused by transient events, such as validator restarts or network-wide issues, focusing instead on sustained high-volume voting patterns indicative of deliberate backfilling. Notably, the Elaborate Mod detects a larger cohort of 120 validators engaging in potential backfilling, reflecting its broader scope in capturing consistent voting anomalies over time. Among these, the most prominent backfiller mirrors the primary validator identified by the “Simple Mod”.
Fig. 5: Vote credits earned out of the theoretical maximum, higher is better. Better vote effectiveness translates directly to higher APY for delegators. To make very subtle differences between validators more pronounced, we show the effectiveness over a 24h time window. This means that short events such as 5 minutes of downtime, can show up as a 24-hour long dip. Note the log scale!
Having identified potential backfillers, we now turn to assessing whether this practice might destabilize Solana’s consensus mechanism. As previously noted, improper backfilling can extend the lockout periods of stake committed to a stale fork, potentially slowing consensus by delaying validators’ ability to switch to the main chain. Figure 5 leverages internal data to provide a granular view of TVC effectiveness, tracking the performance of various validators. The purple validator, flagged as a backfiller by the “Elaborate Mod”, consistently outperforms others in TVC effectiveness under normal conditions, likely due to its aggressive credit recovery through backfilling. However, during vote-related disruptions (such as those coinciding with the May 13 peak) its effectiveness drops more sharply than that of vanilla validators, suggesting prolonged adherence to a wrong fork. This heightened sensitivity indicates that backfilling, while beneficial for credit accumulation, may amplify the risk of consensus delays if many validators on a stale fork face extended lockouts, raising questions about the broader safety of Solana’s consensus mechanism despite the validator’s overall performance advantage.
In this analysis, we focused on the practice of modifying the consensus client to backfill missed vote opportunities. Improper backfilling can exacerbate vote lagging, strain network resources, and extend lockouts on stale forks, potentially delaying consensus.
We developed two methods to potentially detect validators involved in this practice, the “Simple Mod” and “Elaborate Mod”. Our data analysis from May 11 to May 15, 2025, highlighted periods of potential backfilling. The Simple Mod identified bursts of vote transactions exceeding the slot distance to their recent block hash. At the same time, the Elaborate Mod flagged validators consistently submitting high vote counts across multiple instances, detecting 120 validators with one primary backfiller overlapping between methods. Analysis of TVC effectiveness showed that while backfillers often outperform in credits, they face sharper drops during vote-related disruptions. This suggests prolonged adherence to wrong forks that could hinder consensus if widespread. The introduction of SIMD-0218 (Intermediate Vote Credits) offers a promising solution by formalizing backfilling within the protocol, mitigating risks like vote lagging while ensuring fair credit recovery. Nonetheless, the interplay between backfilling and consensus stability raises ongoing questions about Solana’s long-term resilience, warranting further investigation into network-wide voting patterns and their impact on liveness and fairness.
The Pectra upgrade brings substantial changes to the staking economy of Ethereum. An overlooked and interesting consequence of these changes is the impact on slashing risk on Ethereum, which has been dramatically reduced with the introduction of Pectra.
Ethereum's Beacon Chain introduced slashing measures to prevent malicious validators from attacking the network. Slashing is triggered when a validator violates the Casper Finality Gadget (Casper FFG) rules or the LMD GHOST consensus rules. The former is needed to guarantee Ethereum’s economic finality; the latter is needed to solve the nothing-at-stake problem.
Precisely, validators are required to submit attestations which correctly identify the Casper FFG source, target and head of the chain. Violating the Casper FFG rule means that a validator makes two differing attestations for the same target checkpoint, or an attestation whose source and target votes "surround" those in another attestation from the same validator.
Additionally, if selected as a proposer, the validator would also be required to propose the block. Violating the LMD GHOST rules means that a validator proposes more than one distinct block at the same height, or attests to different head blocks, with the same source and target checkpoints.
In order to trigger a slashing event, a validator must either attempt to attack the chain or, more commonly, simply run a misconfigured setup. The slashing amount corresponds to

$$\text{initial slash} = \frac{\text{effective balance}}{\text{min slashing penalty quotient}},$$

where the numerator corresponds to the effective balance of a validator (i.e. the total amount of ETH a validator is staking that is effectively used within the consensus), while the denominator is a quotient dependent on the Beacon chain state.
Prior to Pectra, the effective balance of a node was capped at 32 ETH, though the quotient changed over time: it was 128 at Beacon Chain genesis, 64 in Altair, and 32 in Bellatrix. With the Bellatrix quotient of 32, a slashing event cost 32/32 = 1 ETH.
Once the validator is slashed, it is forced to enter the exit queue, and withdrawals are delayed by around 36 days (8192 epochs).
There is a second penalty that is applied, which corresponds to a correlation penalty - i.e. it is proportional to the stake that is committing the offence - and can be written as

$$\text{correlation penalty} = \frac{3 \times \text{effective balance} \times \text{slashed balance}}{\text{total balance}},$$
where, in addition to the effective balance, we now also have the total stake that is committing an offence (the balance to be slashed) and the total stake. The slashing multiplier 3 was previously equal to 2 in Altair.
During the whole exit process, the validator also incurs additional penalties for missing attestations. Since the attestation penalties are very minimal compared to the previous one, for simplicity’s sake, we will focus on the major slash.
The Pectra upgrade, amongst many other things, changes the maximum effective balance of nodes from 32 ETH to 2048 ETH. With regards to slashing, the minimum slashing quotient is set to 4096, while the slashing multiplier remains the same as the Bellatrix one.
Another important point to note is that the ‘effective balance’, which was once a fixed number (32), is now variable. This provides node operators with a choice on how much ETH to allocate to each node; the amount of ETH allocated to each node will alter the slashing risk calculus. For example, if a validator chooses to continue with 32 ETH on their node, the slashing amount incurred will be 32/4096 = 0.008 ETH. In contrast, if they consolidate to the maximum limit of 2048 ETH, the slashing they incur will be significantly more, 2048/4096 = 0.5 ETH.
Also, for the correlation penalty, we have 3 * 32 / 1,069,029 = 0.00009 ETH for a small validator, and 3 * 2048 * 2048 / (32 * 1,069,029) = 0.36783 ETH for a validator with the max allowed effective balance. Here, 1,069,029 is the number of active validators at the time of writing.
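These figures can be reproduced directly from the formulas above, assuming a lone slashed validator and a total stake of 32 ETH per active validator:

```python
# Reproducing the article's Pectra-era numbers.
MIN_SLASHING_PENALTY_QUOTIENT = 4096
PROPORTIONAL_SLASHING_MULTIPLIER = 3
TOTAL_STAKE = 32 * 1_069_029  # ETH, from the active validator count above

def initial_slash(effective_balance: float) -> float:
    return effective_balance / MIN_SLASHING_PENALTY_QUOTIENT

def correlation_penalty(effective_balance: float, slashed_stake: float) -> float:
    return (PROPORTIONAL_SLASHING_MULTIPLIER * effective_balance
            * slashed_stake / TOTAL_STAKE)

for eb in (32, 2048):
    print(f"{eb:>5} ETH node: initial {initial_slash(eb):.4f} ETH, "
          f"correlation {correlation_penalty(eb, eb):.5f} ETH")
# ->    32 ETH node: initial 0.0078 ETH, correlation 0.00009 ETH
# ->  2048 ETH node: initial 0.5000 ETH, correlation 0.36783 ETH
```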
In these risk calculations, we are primarily attributing slashing to misconfigurations. Hence, having more nodes does not exponentially scale up the incidence of slashing, as it is highly unlikely that in a 64-node setup, all 64 nodes are misconfigured.
The ideal set-up for any validator greatly depends on their priorities. The prime benefit of Pectra is that you can customize your set-up for different objectives. Below is a diagram of how the slashing amount scales as the ETH staked on a particular validator key increases, with 32 ETH nodes incurring a slashing penalty of 0.0078 ETH and 2048 ETH nodes incurring 0.5 ETH.
The only variable cost here is the cost of the infrastructure itself. But this isn’t substantial unless we are talking about many 1000s of validator keys. In most setups, one could run 100s of validator keys on a Kubernetes pod, and many pods could point at one beacon node. So, if optimized correctly, the infrastructure costs of running many 32 ETH validator keys might not be substantially higher than with fully consolidated sets.
A fair question to ask at this point would be, why consolidate at all? One strong advantage of partial consolidation is quick withdrawal. Prior to Pectra, all withdrawals happened on the consensus layer and required the exit of the 32 ETH validator, with the churn limit for such exits calculated as the maximum between the minimum churn limit per epoch (set by default to 4) and the number of active validators divided by 65,536. At the time of writing, this amounts to around 16 validators per epoch (i.e. every 6m 24s).
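The churn-limit arithmetic, for reference:

```python
# The pre-Pectra exit churn limit described above.
MIN_PER_EPOCH_CHURN_LIMIT = 4
CHURN_LIMIT_QUOTIENT = 65_536

def exit_churn_limit(active_validators: int) -> int:
    return max(MIN_PER_EPOCH_CHURN_LIMIT, active_validators // CHURN_LIMIT_QUOTIENT)

print(exit_churn_limit(1_069_029))  # -> 16 validators per ~6.4 minute epoch
```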
However, with Pectra you now have the option to partially withdraw funds from a validator and have the funds available to you almost instantly (within a few blocks), as opposed to exiting your validator, waiting for it to go through the exit queue, and waiting an additional 27h in withdrawal delay to get your ETH back. You can think of all balances above 32 ETH as balances that are available for instant unstaking.
This is now another option you have available with Pectra. For a higher slashing risk, you can have greatly increased liquidity on your staked ETH.
The Pectra upgrade transformed the ETH staking industry from a monolithic experience to one with a great degree of variability and choice. In the near future, validators and liquid staking providers will have bespoke offerings and vaults, allowing users to choose between maximizing ARR, maximizing liquidity, and minimizing staking risk. All of these different validation strategies will be directly linked to how node operators configure their setups and how effectively they maintain their ETH nodes.
In our previous article about Pectra, we mentioned that for our customers who want a 0x02 validator, Chorus One will implement a custom effective balance limit for 0x02 validators, set at 1910 ETH. This accounts for around 2 years of compounding rewards at a rate of 3.5% annualized, before reaching the 2048 ETH cap, allowing for sustained reward optimization.
Chorus One is actively researching various validation strategies now available with Pectra and is bringing this variety right to our delegators.