Crypto Research You Can Trust

The crypto space is evolving rapidly and can be daunting for investors. That is why we have a dedicated team of researchers who turn complex technical advancements in the field into easily understandable research reports. These reports are highly valued by institutions and investors who want to stay up to date with the latest developments in the crypto world.
Timing Games on Solana: Validator Incentives, Network Impacts, and Agave's Hidden Inefficiencies
Our team at Chorus One has been closely following the recent discussions around timing games on Solana, and we decided to run experiments to better understand the implications. We’ve documented our findings in this research article.

All Research

Bectra Upgrade: What It Means for BERA Stakers
The Berachain ecosystem is about to transform significantly with the upcoming Bectra upgrade, which introduces multiple critical EIPs adapted to Berachain’s unique architecture.
June 3, 2025
5 min read

Written by @ericonomic & @FSobrini

The Berachain ecosystem is about to transform significantly with the upcoming Bectra upgrade, which introduces multiple critical EIPs adapted to Berachain’s unique architecture. Perhaps the most revolutionary is EIP-7702, which enables regular addresses to adopt programmable functionality.

This major milestone also introduces a game-changing feature BERA stakers have been waiting for: the ability to unstake their tokens from validators. Until now, once BERA was staked with a validator, it remained indefinitely locked. Bectra changes this fundamental dynamic, bringing new flexibility to the Berachain staking landscape.

Bectra is the Berachain version of Pectra, the latest Ethereum hard fork that introduced significant improvements to validator flexibility and execution layer functionality. For those interested in a deeper technical analysis of the upgrade, we recommend reading Chorus One’s comprehensive breakdown on Pectra: The Pectra Upgrade: The Next Evolution of Ethereum Staking.

Practical Changes

Although EIP-7702 (account abstraction) may be Bectra’s most consequential change overall, the change that most sets Berachain apart from Pectra is the ability for BERA stakers to withdraw their staked assets from validators. This fundamentally alters the validator-staker relationship: validators must now continuously earn their delegations through performance and service quality.

How Unstaking Works

As a user who has staked BERA with a validator, it's important to understand that you don't directly control the unstaking process. Here's what you need to know:

  1. Contact Your Validator: If you want to unstake your BERA, contact the validator you staked with; they will put you in touch with the Withdrawal Address Owner to request a withdrawal.
  2. Validator Controls the Process: Only the validator (or the entity controlling the validator's Withdrawal Credential Address) can initiate the unstaking process.
  3. Withdrawal Fee: Be aware that every withdrawal requires a fee, which the validator may pass on to stakers or absorb as part of their service offering.
  4. Waiting Period: After your validator initiates your withdrawal, there's approximately a 27-hour (256 epochs) waiting period before the tokens become available.
  5. Receiving Your Tokens: The unstaked BERA will be returned to the validator's Withdrawal Credential Address, not directly to you. The Withdrawal Address Owner will need to transfer your tokens to you in a separate transaction.

Important Considerations:

  • Trust Relationship: The unstaking process highlights the importance of staking with trusted validators who have clear policies for handling withdrawal requests.
  • No Direct Control: As a regular user, you cannot directly unstake your BERA from the protocol; you must work through your validator.
  • Potential Delays: Your unstaking timeline depends on how quickly your validator processes your request (in many cases this depends on the Foundation or a custodian), in addition to the protocol's 27-hour waiting period.
  • Minimum Stake Requirements: Validators must maintain at least 250,000 BERA staked. If many users request withdrawals at once, the validator might need to process them in batches to maintain this minimum. If a validator's stake falls below 250,000 BERA, they will be removed from the active validator set and will no longer be able to produce blocks or earn rewards. This means they would cease to function as a validator until they stake the minimum amount again.

Redelegation Process

There is currently no direct "redelegation" mechanism in the protocol. If you want to move your stake from one validator to another:

  1. You must first request your current validator to unstake your BERA
  2. Wait for the validator to process your request and for the unstaking period to complete
  3. Once you receive your BERA tokens, you can stake them with a different validator

During this process, you won't earn staking rewards while your tokens are in the unstaking phase.

Staking Process

The staking process itself remains unchanged with the Bectra upgrade. Users can still stake their BERA with any validator, and the validator continues to receive all rewards at their Withdrawal Credential Address.

If there are multiple stakers delegating to a single validator, the protocol does not automatically distribute rewards to individual stakers; this must be handled through off-chain agreements with the validator or through third-party liquid staking solutions.

Implications for BERA Stakers

The introduction of validator stake withdrawals transforms the staking landscape on Berachain in several important ways:

New Staking Dynamics

For BERA stakers, the ability to unstake creates:

  • Freedom of movement: Switch validators freely without being tied to your original decision.
  • Strategic control: Adjust your delegation based on performance, risk tolerance or evolving network conditions.
  • Lower opportunity cost: You can now reallocate your capital when attractive opportunities arise elsewhere.

Competitive Validator Landscape

This new mobility creates a more competitive environment where:

  • Validators must continuously demonstrate value to retain delegations
  • Performance metrics like USD value per BGT emitted, PoL ARR and commission rates become critical differentiators
  • Poor-performing validators (lower BERA staking ARR) face stake migration to competitors

This competitive pressure should drive validators to optimize operations, potentially leading to better returns for stakers and improved network performance.

New Responsibilities

With greater flexibility comes increased responsibility:

  • Active monitoring of validator performance in terms of BERA staking ARR becomes necessary
  • Ongoing research replaces one-time validator selection decisions
  • Understanding the trade-offs between staying with a single validator versus frequently moving your BERA to seek better returns
  • Knowledge of unstaking mechanisms and timeframes

This creates an opportunity for stakers to be more strategic with their decisions and potentially increase returns by selecting the best-performing validators.

Why Stake with Chorus One

As the Bectra upgrade introduces more competition among validators, choosing the right validator becomes increasingly important. Chorus One stands out as a premier choice for several compelling reasons:

Industry-Leading ARR

Chorus One consistently delivers some of the highest ARRs in the Berachain ecosystem thanks to BeraBoost, our built-in algorithm that directs BGT emissions to the highest-yielding Reward Vaults, maximizing returns for our stakers.

Currently, our achieved ARR for BERA stakers has ranged between 4.30% and 6.70%, with longer staking durations yielding higher returns due to compounding effects. For comparison, most BERA LSTs offer an ARR between 4.5% and 4.7%. Our ARR pickup comes from our proprietary algorithm (i.e., BeraBoost) as well as active DeFi participation, which also captures part of the inflation directed to ecosystem participants.

Additionally, in terms of incentives captured per BGT emitted (a key metric reflecting a validator’s revenue efficiency), our validator consistently outperforms the average by $0.5 to $1.

Unmatched Reliability and Experience

As a world-leading staking provider and node operator since 2018, Chorus One brings extensive experience to Berachain validation. Our infrastructure features:

  • Maximum uptime and performance
  • Redundant systems and 24/7 monitoring
  • Battle-tested experience across multiple proof-of-stake networks

Transparent Operations & Dedicated Support

We believe in complete transparency with our stakers, with publicly available performance metrics and regular operational updates. Our team is always available to assist with any questions about staking with Chorus One.

Conclusion

The Bectra upgrade represents a significant evolution for Berachain, giving stakers the freedom to unstake and move their BERA between validators. This new flexibility creates both opportunities and responsibilities, as stakers can now be more strategic with their delegations.

In this more competitive landscape, Chorus One stands ready to earn your delegation through superior performance, reliability, and service. Our commitment to maximizing returns for our stakers, combined with our extensive experience, makes us an ideal partner for your BERA staking journey.

Stake with Chorus One today and experience the difference that professional validation can make for your BERA holdings.

Timing Games on Monad
This is a research article co-authored by @mostlyblocks (former Head of Research at Chorus One) and @ThogardPvP (CEO of @0xFastLane).
May 26, 2025
5 min read

This article provides an accessible first perspective on validator timing games on Monad. Practically speaking, validators may intentionally build and gossip their block as late as possible to capture additional value from the incrementally larger set of transactions accruing over slot time.

While timing games have been practiced for multiple years on Ethereum, Monad’s performance-first architecture introduces additional risk considerations. The article recaps timing games, outlines Monad’s architecture, including a recent overhaul of MonadBFT consensus, and describes how sophisticated validators may approach timing games on Monad.

Timing Games & Why They Work

We can loosely define a decentralized network as a set of geographically distributed nodes. More rigorously, we can approximate such a network by its (stake-weighted) network graph, which has a center from which the expected latency to the set of nodes is minimized. As not all nodes are equally close to this center, the network must grant appropriate latency leeway so that distant nodes can participate - this is a design choice.

In practice, a fast blockchain can be built by co-locating all nodes, but such a structure comes with resiliency trade-offs (e.g. the datacenter shuts down). A more resilient, decentralized architecture like Monad must allow sufficient consensus time for a node that is e.g. located in New Zealand to participate.

Competitive validators may take advantage of this latency leeway to improve the expected value of their blocks by performing two successive actions. First, they minimize their latency to peers, and second, they artificially delay their block proposal. This is referred to as a “timing game”.

To understand why this is profitable, note that as the slot progresses, users keep sending transactions, and therefore the proposer has a larger pool of transactions from which to select the final block. A more nuanced reason is that as the slot progresses, traders have access to more information, and in the case of arbitrage between the chain and another venue (e.g. the most liquid CEX), the expected opportunity value grows with the square root of elapsed slot time.
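
As a stylized illustration (our notation, not from the article): if the reference price on the external venue follows a random walk with volatility $\sigma$ per unit time, the expected magnitude of the price dislocation after waiting $t$ scales as $\sigma\sqrt{t}$. The expected block value from cross-venue arbitrage then behaves roughly as $V(t) \approx V_0 + c\,\sigma\sqrt{t}$, where $V_0$ is the delay-independent block value and $c$ a capture constant, so the marginal value of each extra unit of delay diminishes the longer the proposer waits.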

Timing games sit on a risk-reward curve - the longer the validator delays, the higher the risk of an unwanted timeout resulting in a zero or negative payoff. For this reason, an informed validator must quantify the payoff of each unit of delay against the marginal risk it runs. In summary, a sophisticated approach to timing games hinges on a strong understanding of the network’s topology and transaction arrival dynamics (i.e. the block value over slot time).
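
To make this trade-off concrete (again a stylized formulation of our own): let $p(t)$ be the probability of timing out after a delay of $t$ and $L$ the loss incurred on a timeout. The proposer’s expected payoff is $\Pi(t) = (1 - p(t))\,V(t) - p(t)\,L$, and the optimal delay $t^*$ satisfies $\Pi'(t^*) = 0$, i.e. the point where the surviving marginal block value $(1 - p(t))\,V'(t)$ is exactly offset by the marginal timeout cost $p'(t)\,(V(t) + L)$.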

Parallelization, Gossiping, and Reorgs in MonadBFT

Monad's consensus mechanism is an improved version of HotStuff called "MonadBFT." HotStuff is a leader-based Byzantine fault-tolerant consensus protocol designed for high throughput and low latency. We will first examine parallelization, before describing how blocks are gossiped to the network.

In MonadBFT, the leader for a slot executes the transactions of the preceding slot while composing the list of transactions it will include in its block, which will then be executed by the upcoming leader. Intuitively speaking, this means that during its slot, a validator fulfills two tasks: it simulates and settles the transactions passed to it by the validator preceding it in an optimized manner (execution), and packs a block to pass to the validator succeeding it (consensus).

An implication is that any transaction with a state dependency does not yield an execution guarantee until settlement time, which takes place one slot after the transaction has been included. For a typical user, this will not usually be noticeable. For traders competing for limited opportunities, the main implication is a less informed priority gas auction, and if carrying inventory risk (e.g. arbitrageurs), an additional risk margin applied over the other leg(s) of the trade.
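
A minimal sketch of this two-task pipeline, assuming hypothetical stand-in functions (our simplification, not Monad client code):

```rust
use std::thread;

// Hypothetical stand-in: simulate and settle the transactions handed over
// by the preceding leader (the "execution" duty).
fn execute_previous_block(prev_txs: &[&str]) {
    for tx in prev_txs {
        println!("executing {tx}");
    }
}

// Hypothetical stand-in: pick the transactions for this slot's block; they
// will only be executed by the succeeding leader (the "consensus" duty).
fn pack_next_block(mempool: &[&str]) -> Vec<String> {
    mempool.iter().map(|tx| tx.to_string()).collect()
}

fn main() {
    let prev_txs = ["tx_a", "tx_b"];
    let mempool = ["tx_c", "tx_d"];
    // Scoped threads let both duties run concurrently within the slot.
    let block = thread::scope(|s| {
        s.spawn(|| execute_previous_block(&prev_txs));
        pack_next_block(&mempool)
    });
    println!("block handed to the next leader: {block:?}");
}
```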

Once a validator has decided on its block, it sends it out alongside either a quorum certificate (QC) or a timeout certificate (TC) for the previous block. Specifically, a QC attests that the validator has built on the preceding block n, and that this block has been attested to as valid by 2/3 of the network (i.e. an aggregated signature). Conversely, a TC attests that the preceding validator has missed its slot and that the validator has built on the preceding block n-1; a valid TC must also be signed by 2/3 of the network.

The latest iteration of MonadBFT couples any timeout vote with a quorum vote for the most recent block a validator has attested to, which means that any block that garners a supermajority of votes eventually finalizes. This stands in contrast to previous versions, where a successful timeout quorum would have resulted in a reorg of the proposer that preceded the timed-out leader (“tail fork”).
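
As a rough sketch of the two certificate shapes just described (hypothetical field names; the actual MonadBFT types are more involved):

```rust
// Hypothetical sketch of the certificate a proposer gossips with its block.
enum Certificate {
    // Attests the proposer built on preceding block `parent_block`, which
    // 2/3 of the network attested as valid via one aggregated signature.
    Quorum { parent_block: u64, aggregated_signature: Vec<u8> },
    // Attests the previous leader missed its slot, so the proposer built on
    // block n-1; also requires signatures from 2/3 of the network.
    Timeout { missed_slot: u64, parent_block: u64, aggregated_signature: Vec<u8> },
}

fn main() {
    let cert = Certificate::Quorum { parent_block: 41, aggregated_signature: vec![0u8; 96] };
    match cert {
        Certificate::Quorum { parent_block, .. } => println!("QC over block {parent_block}"),
        Certificate::Timeout { missed_slot, .. } => println!("TC for missed slot {missed_slot}"),
    }
}
```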

Timing Games on Monad

Timing games on Monad are a race between the QC (validators vote on a block) and TC (validators vote on a timeout) paths. The QC path is faster, with linear overhead (one message per peer), while the TC path is slower: validators must multicast timeout votes and aggregate them into a TC (quadratic overhead).

"Linear overhead" refers to communication complexity that grows proportionally with network size - one leader sending a single message to each of N validators. "Quadratic overhead" occurs when each validator must communicate with many others, causing message volume to grow with the square of the network size as validators both send and receive messages from their peers.

A validator playing timing games must ensure its block reaches a supermajority of the network before a timeout vote does; there is therefore a strong incentive to optimize connectivity with the closest 2/3 of peers, plus a risk margin for latency or timeouts. Put another way, validators in distant locations might time out more frequently due to slower block receipt, issuing TCs even as QCs form elsewhere. The upshot is an increased incentive to centralize as timing games spread.

The extent to which a proposer may risk a timeout depends on the marginal value of any delay. This is an empirical question, as traders will time their transactions to arrive late while still retaining a high inclusion chance (i.e. they model the network), whereas retail transaction arrival times can be thought of as unpredictable. What can be stated is that, due to Monad’s shorter block times, the expected value of cross-venue arbitrage per block is significantly lower than on e.g. Ethereum.

A validator playing timing games benefits from capturing transactions that would have otherwise accrued to a later slot; if all validators play timing games, this results in a competitive outcome with an increased risk profile. Economic incentives are likely to encourage competitive validators to engage in timing games in any scenario, as a decentralized network must accommodate distant participants, and therefore, a delay margin is available for well-networked nodes.

Under previous versions of the MonadBFT design, an elevated timeout risk manifested as an increased risk of reorgs. This would have reduced block value upfront, as traders holding inventory risk (e.g. arbitrageurs) adjust their risk margin for the compound risk of not executing due to pipelined consensus plus the reorg risk. In this context, a validator playing timing games on Monad would have reduced the block value of the preceding proposer as well.

This is not the case anymore under the current iteration, as timeout votes now carry a quorum vote for the latest block a validator has attested to. This additional information must be gossiped around the network with quadratic overhead (i.e. the message gets larger, increasing arrival latency), and therefore, likely adds more room for timing games.

In summary, timing games on Monad are possible and will favor sophisticated validators. While profitable for this subset, they reduce, in expectation, the block value of the proposers succeeding them via excess transaction capture. Timing games are also geographically centralizing: they reduce the network visibility of nodes in distant locations, which shows up as inaccurate timeout votes and a loss of influence to high-performing, low-latency hubs.

Injective iAssets: The Evolution of Stocks 3.0
A deep dive into Injective's iAssets - programmable financial primitives that facilitate enhanced liquidity allocation, position-based exposure, and cross-market composability.
May 23, 2025
5 min read

The financial industry has long grappled with the inefficiencies of traditional systems, namely settlement delays, restricted market access, and capital inefficiencies. Decentralized finance (DeFi) would eventually emerge as a response, but early implementations often fell short, plagued by issues like over-collateralization and limited composability. 

Recognizing these challenges, Injective introduced iAssets - programmable financial primitives that facilitate enhanced liquidity allocation, position-based exposure, and cross-market composability. Unlike their static predecessors, iAssets are dynamic, on-chain instruments, with second-order utility and no pre-funding constraints. With iAssets, Injective aims to move blockchain-based stocks, commodities, and more beyond proof-of-concept, ushering in the era of Stocks 3.0.

The Evolution of Financial Assets – TradFi, Early DeFi, and the Emergence of iAssets

Traditional Finance (Stocks 1.0)

Traditional financial systems operate within structured, yet inflexible frameworks, characterized by delayed settlements (typically T+2), stringent access barriers, and segregated liquidity. The opacity of processes such as prime brokerage and rehypothecation further compound systemic risks, creating inefficiencies and restricting market participation to predominantly institutional actors.​

Early DeFi & Synthetic Assets (Stocks 2.0)

The initial wave of DeFi introduced tokenized and synthetic assets, allowing for asset programmability and a more open financial environment. However, these models often required excessive collateralization (often surpassing 150%), leading to substantial capital inefficiencies. Liquidity pools were isolated, limiting the effective deployment of capital, and creating vulnerabilities such as liquidation cascades during market volatility.​

Injective's iAssets (Stocks 3.0)

Understanding the shortcomings of both traditional systems and early blockchain solutions, Injective's iAssets introduce significant innovations to further the utility of on-chain assets. Key advancements include:​

  • Programmability: iAssets embed on-chain logic, enabling sophisticated asset interactions.​

  • Composability: iAssets are fully integrated across multiple financial applications within the Injective ecosystem.​

  • Capital Efficiency: Eliminating the need for excessive collateralization or pre-funded positions.​

  • Dynamic Liquidity Management: Real-time liquidity provisioning aligned with market demands.​

These characteristics mark a distinct shift from representational to programmable finance. Rather than merely mirroring the value of off-chain assets, iAssets transform them into composable building blocks—financial primitives that can be deployed across lending protocols, used as collateral, integrated into structured products, or programmed into hedging strategies. The result is a framework that not only preserves the core utility of traditional assets, but enhances them with real-time liquidity, seamless market integration, and systemic transparency.

In this light, iAssets are not just an iteration on previous tokenization efforts, they are a redefinition of what it means to own and utilize assets in a digitally native financial system.​

Injective’s Modular Architecture – The Backbone of iAssets

Injective's iAssets are realized through a robust and meticulously designed technical infrastructure. At its core lies Injective's modular architecture, which has been developed over several years to support high-performance decentralized financial applications.​

Exchange Module and On-Chain CLOB

The Exchange Module serves as the foundation for iAssets, providing a fully decentralized, on-chain central limit order book (CLOB). Unlike traditional automated market maker (AMM) models, the CLOB facilitates tighter spreads and more efficient price discovery. This architecture allows for professional institutions to dynamically manage liquidity, ensuring that iAssets benefit from deep and responsive markets.​

Moreover, the Exchange Module plays a pivotal role in optimizing liquidity across the Injective ecosystem. By enabling a shared liquidity environment, it allows for seamless capital flow between various financial applications, including trading platforms, and structured financial products. This interconnectedness ensures that liquidity is not siloed, and instead dynamically allocated based on real-time market demands.

And iAssets haven’t wasted any time picking up steam. Injective now hosts all Mag 7 stocks, which have done a cumulative $165M+ in trading volume since launch. iAssets as a whole have seen over $465M in trading, laying the foundation for a burgeoning asset category and aggressive innovation. And if that wasn’t enough - one asset in particular takes center stage: TRADFI, which achieved approximately $14 million in trading volume on its first day of listing.

Modular Design and Multi-VM Support

Injective's architecture is composed of interoperable modules, each serving a specific function within the ecosystem. This modularity gives developers access to a robust set of pre-built components, such as the Oracle Module, the RWA Module, automatic smart contracts, and more, without the need to build from scratch. Furthermore, Injective supports multiple virtual machines (VMs), enhancing the flexibility and scalability of applications built on the network.

To learn more about Injective modules, click here. 

Future Developments and Innovations in iAssets

And Injective isn’t stopping there. The team is actively working on several initiatives aimed at enhancing capital efficiency and utility, notably, their Liquidity Availability Framework.

Liquidity Availability Framework

One of the key developments is the introduction of a "Liquidity Availability" framework. This initiative seeks to optimize capital utilization by allowing liquidity to move dynamically between applications based on demand. While underutilization is a notable concern, the primary objective of liquidity availability is to address limitations brought about by application-specific liquidity, and ensure that liquidity is allocated more efficiently across the ecosystem. 

Want to learn more? Check out Injective’s research paper on Liquidity Availability here. 

Redefining On-Chain Asset Utility

Injective’s iAssets represent a pivotal advancement in the evolution of financial markets, transitioning from static representations to dynamic, programmable financial primitives. By addressing the limitations of both traditional finance and early decentralized finance models, iAssets offer enhanced capital efficiency, real-time liquidity, and seamless composability across financial applications.​

Leveraging Injective's robust modular architecture and on-chain central limit order book, iAssets facilitate a more integrated and efficient financial ecosystem. This infrastructure not only accelerates development timelines but also fosters innovation, enabling complex financial instruments to be constructed with greater ease and reliability.​

As the financial industry continues to evolve, Injective seeks to provide the foundational infrastructure necessary for the next generation of programmable finance. 

Want to learn more about iAssets? Check out the iAssets research paper here. 

Chorus One: Empowering the iAssets Ecosystem

As a leading institutional staking provider, Chorus One is proud to support the Injective ecosystem and its innovative iAssets framework. By operating a highly secure and reliable validator node on Injective, Chorus One ensures network stability and contributes to the seamless functioning of the Injective ecosystem. 

Stake your INJ with Chorus One today.

Impact of Vote Backfilling on Solana
Consensus is the backbone of any blockchain. Its implementation ensures that liveness (meaning the chain can continuously produce new blocks) and finality (meaning transactions are permanently confirmed) are well-defined. This allows the chain to progress with the agreement of a supermajority of stake-weighted validators.
May 20, 2025
5 min read

Solana employs a Proof-of-Stake (PoS) consensus algorithm where a designated leader, chosen for each slot, produces a new block. The leader schedule is randomly determined before the start of each epoch, a fixed period comprising multiple slots. Leaders generate a Proof-of-History (PoH) sequence (a series of cryptographic hashes where each hash is computed from the previous hash and a counter) to order transactions and prove the passage of time. Transactions are bundled into entries within a block, timestamped by the PoH sequence, and the block is finalized when the leader completes its slot, allowing the next leader to build upon it. A slot is optimistically confirmed when validators representing two-thirds of the total stake vote on it or its descendants, signaling broad network agreement.
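
A minimal sketch of such a hash chain, following the description above (illustrative only; Agave’s actual PoH implementation is more elaborate):

```rust
// Requires the `sha2` crate.
use sha2::{Digest, Sha256};

// One PoH tick: the next hash is computed from the previous hash and a
// counter, so the sequence both orders events and proves elapsed time.
fn poh_tick(prev_hash: &[u8; 32], counter: u64) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(prev_hash);
    hasher.update(counter.to_le_bytes());
    hasher.finalize().into()
}

fn main() {
    let mut hash = [0u8; 32]; // hypothetical genesis seed
    for counter in 0..5u64 {
        hash = poh_tick(&hash, counter);
        println!("tick {counter}: {hash:02x?}");
    }
}
```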

When forks arise (situations where multiple conflicting chains exist), validators must decide which fork to support through their votes. Each vote commits a validator to a fork, and lockout rules restrict them from voting on conflicting forks for a duration. Solana’s fork choice rule governs when a validator can switch to another fork, ensuring the network converges on a single, canonical chain.

The Tower Struct: Managing Validator Votes

In the Agave client, the Tower struct, defined within the core/src/consensus.rs file, serves as the central data structure for managing a validator’s voting state. It plays a pivotal role in tracking the validator’s voting history, enforcing lockout rules to prevent conflicting votes, and facilitating decisions about which fork to follow in the presence of chain splits. Within the Tower, a vote_state field (implemented as a VoteState object from the solana-vote-program crate) maintains a detailed record of all votes cast by the validator along with their associated lockout periods, ensuring adherence to consensus rules. The struct also keeps track of the most recent vote through its last_vote field, capturing the latest transaction submitted by the validator. Additionally, the Tower includes threshold_depth and threshold_size parameters, which define the criteria for confirming slots; by default, these are set to a depth of 8 slots and a stake threshold of two-thirds, respectively, determining the level of agreement required from the network.

When a validator needs to vote on a slot, it relies on the Tower’s record_bank_vote method to execute the process seamlessly. This method begins by extracting the slot number and its corresponding hash from a Bank object, which represents a snapshot of the ledger at that specific slot. It then constructs a Vote object encapsulating the slot and hash, formalizing the validator’s intent. Finally, it invokes record_bank_vote_and_update_lockouts, which in turn delegates to process_next_vote_slot to update the vote_state, ensuring that the new vote is recorded and lockout rules are applied accordingly.
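
A simplified sketch of the fields just described (hypothetical shapes for illustration; the real definitions live in core/src/consensus.rs and the solana-vote-program crate):

```rust
// Stand-in for the Lockout entries tracked inside VoteState.
struct Lockout {
    slot: u64,
    confirmation_count: u32, // lockout lasts 2^confirmation_count slots
}

// Heavily simplified Tower: the real struct wraps a full VoteState and a
// vote transaction rather than these bare types.
struct Tower {
    vote_state: Vec<Lockout>, // votes cast, with their lockout periods
    last_vote: Option<u64>,   // most recent slot voted on
    threshold_depth: usize,   // default: 8 slots
    threshold_size: f64,      // default: 2/3 of stake
}

fn main() {
    let tower = Tower {
        vote_state: vec![Lockout { slot: 2, confirmation_count: 1 }],
        last_vote: Some(2),
        threshold_depth: 8,
        threshold_size: 2.0 / 3.0,
    };
    println!("last vote: {:?}, depth: {}", tower.last_vote, tower.threshold_depth);
}
```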

Lockout Rules: The $2^N$ Mechanism

Once a validator casts a vote on a slot in Solana, it becomes temporarily barred from voting on conflicting slots, a restriction governed by the 2^N lockout rule, which imposes an exponentially increasing duration with each subsequent vote. Initially, after a validator’s first vote ($N=1$), the lockout period spans $2^1 = 2$ slots, but with a second vote ($N=2$), the lockout for the first vote extends to $2^2 = 4$ slots, while the second vote introduces a new lockout of $2^1 = 2$ slots. More generally, after $N$ votes, the earliest vote in the sequence is locked out for $2^N$ slots, ensuring the validator remains committed to its chosen fork.

This mechanism is implemented within the process_next_vote_slot function, which is invoked by record_bank_vote to update the validator’s voting state. At the heart of this exponential lockout is the confirmation_count field: each new vote increments the confirmation_count of prior votes. The confirmation_count is part of the Lockout struct, and it determines the number of slots for which a vote is locked out, computed as $2^{\text{confirmation\_count}}$.

To illustrate, consider a validator voting on slots 2 and 3 of a fork. With its first vote on slot 2, the confirmation_count is set to 1, resulting in a lockout of $2^1 = 2$ slots, meaning the validator is barred from voting on a conflicting fork until after slot 4. When it votes on slot 3, the confirmation_count for slot 2 increases to 2, extending its lockout to $2^2 = 4$ slots, or until slot 6, while slot 3 starts with a confirmation_count of 1, locking out until slot 5. Consequently, the validator cannot vote on a conflicting fork, such as slot 4 on a different chain, until after slot 6.
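
The worked example above can be reproduced with a small simulation (our sketch; it mirrors the doubling behaviour but omits the expiry logic that pops lapsed lockouts):

```rust
struct Lockout {
    slot: u64,
    confirmation_count: u32,
}

impl Lockout {
    // A vote on `slot` is locked out until slot + 2^confirmation_count.
    fn locked_out_until(&self) -> u64 {
        self.slot + (1u64 << self.confirmation_count)
    }
}

fn record_vote(tower: &mut Vec<Lockout>, slot: u64) {
    // Each new vote increments the confirmation_count of prior votes...
    for prior in tower.iter_mut() {
        prior.confirmation_count += 1;
    }
    // ...and enters the tower with a confirmation_count of 1.
    tower.push(Lockout { slot, confirmation_count: 1 });
}

fn main() {
    let mut tower = Vec::new();
    record_vote(&mut tower, 2); // slot 2: locked out until slot 4
    record_vote(&mut tower, 3); // slot 2 extends to slot 6; slot 3 until slot 5
    for l in &tower {
        println!("slot {} locked out until slot {}", l.slot, l.locked_out_until());
    }
}
```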

Fig. 1: Graphical representation of a validator V switching forks, with the corresponding $2^N$ lockout rule.

Through this mechanism, Solana ensures that validators remain committed to their chosen fork for the duration of the lockout period, preventing double-voting or equivocation that could destabilize the network.

Vote Backfilling and Intermediate Vote Credits

Validators in Solana’s consensus process sometimes miss voting on slots due to network delays or operational issues. As noted in SIMD-0033 (Timely Vote Credits, TVC), missing votes or submitting them late reduces rewards, prompting some validators to engage in backfilling: retroactively voting on missed slots in a later transaction. For instance, a validator skipping slots 4 to 6 might vote on them at slot 7, claiming credits to maintain staking rewards and network contribution. However, Solana enforces strict ordering: the check_and_filter_proposed_vote_state function ensures slots in a vote transaction exceed the last voted slot in the VoteState, rejecting earlier slots with a VoteError::SlotsNotOrdered. This consensus-level check, executed on-chain, means that backfilling must still advance the voting sequence; including slots 4 to 6 at slot 7 is only possible if the last voted slot was 1 or earlier (see Fig. 1).
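
The ordering rule can be sketched as follows (a hedged reconstruction of the behaviour described above, not the actual implementation of check_and_filter_proposed_vote_state):

```rust
#[derive(Debug, PartialEq)]
enum VoteError {
    SlotsNotOrdered,
}

// Every slot in a proposed vote must be strictly greater than the last
// slot already recorded in the VoteState (and strictly increasing).
fn check_proposed_slots(last_voted_slot: Option<u64>, proposed: &[u64]) -> Result<(), VoteError> {
    let mut prev = last_voted_slot;
    for &slot in proposed {
        if prev.map_or(false, |p| slot <= p) {
            return Err(VoteError::SlotsNotOrdered);
        }
        prev = Some(slot);
    }
    Ok(())
}

fn main() {
    // Backfilling slots 4..=6 at slot 7 is accepted only if the last voted
    // slot precedes them...
    assert!(check_proposed_slots(Some(1), &[4, 5, 6]).is_ok());
    // ...and rejected once a later slot has already been voted on.
    assert_eq!(
        check_proposed_slots(Some(5), &[4, 5, 6]),
        Err(VoteError::SlotsNotOrdered)
    );
    println!("ordering checks behave as described");
}
```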

It is worth mentioning that, although this practice may appear healthy, improper backfilling can disrupt the network. When a validator backfills by voting on missed slots, each new vote extends the lockout period of earlier votes through Solana’s $2^N$ rule, deepening its commitment to the chosen fork. If these slots belong to a stale fork, the prolonged lockouts may prevent the validator from switching to the main chain, potentially delaying consensus if many validators are similarly affected and thus hindering the network’s ability to confirm new blocks.

To address this, Ashwin Sekar from Anza proposed SIMD-0218: Intermediate Vote Credits (IVC), which integrates backfilling into the protocol by crediting intermediate slots at the landing slot’s time. This controlled approach is meant to eliminate risky backfilling mods, ensuring liveness and fairness while allowing credit recovery.

Backfilling Detection

Detecting backfilling in Solana poses a challenge because, as previously discussed, voting on past slots is not prohibited and can occur under normal circumstances, such as network delays or validator restarts. However, fluctuations in network-wide TVC effectiveness offer a lens to identify potential backfilling. A query run on Flipside reveals that TVC effectiveness fluctuates over time, with a notable network-wide dip during epoch 786.

While a more significant dip occurred between Epochs 760 and 770, we focused on the more recent period due to the availability of granular internal TVC data, acknowledging that further investigation into historical valleys is warranted to fully understand network voting dynamics.

For this analysis, we focused on the date range between 2025-05-11 and 2025-05-15.

Fig. 2: Network TVC effectiveness obtained using Flipside, cf. here

Since Dune is the only data provider granting easy access to on-chain voting data, we developed a dashboard with the goal of detecting potential instances of backfilling. Specifically, we developed two methods; however, both remain probabilistic, as backfilling cannot be definitively confirmed without direct access to the validator's intent.

The first method, dubbed the "Simple Mod", examines voting behaviour, focusing on the relationship between the number of vote transactions signed by a validator in a single slot and the slot distance between the landing slot of those transactions and the slot of the recent block hash they reference. For example, if a validator submits 10 vote transactions in slot 110 with a recent block hash from slot 108, the distance is only 2 slots, significantly less than the number of transactions. This pattern suggests backfilling because the validator is likely catching up on missed slots in a burst: the short distance indicates the transactions were created in quick succession around slot 108, possibly to retroactively vote on slots 99 to 108 - a common backfilling strategy to claim credits for earlier missed votes in a single submission, rather than voting incrementally as the chain progresses.
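
In code form, the heuristic reduces to a single comparison (our sketch of the dashboard logic, with hypothetical field names):

```rust
// Hypothetical shape of one validator's vote transactions landing in a slot.
struct VoteBatch {
    landing_slot: u64,
    blockhash_slot: u64, // slot of the recent block hash the votes reference
    num_vote_txs: u64,   // vote transactions signed by the validator in this slot
}

// Flag when the burst of votes exceeds the slot distance: the votes were
// created in quick succession, suggesting retroactive catch-up voting.
fn simple_mod_flags(batch: &VoteBatch) -> bool {
    let distance = batch.landing_slot.saturating_sub(batch.blockhash_slot);
    batch.num_vote_txs > distance
}

fn main() {
    // The example from the text: 10 vote transactions landing in slot 110
    // with a recent block hash from slot 108 (distance 2) is flagged.
    let batch = VoteBatch { landing_slot: 110, blockhash_slot: 108, num_vote_txs: 10 };
    println!("possible backfill: {}", simple_mod_flags(&batch));
}
```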

Fig. 3: Number of possible backfills detected using the “Simple Mod” model, cf. here

Figure 3 shows the hourly aggregation of possible backfill detections using the “Simple Mod” method. We mainly have two peaks in the data: the first occurs at 00:00 on May 11, 2025, likely triggered by a widespread network issue or a collective validator restart, as the validators identified during this spike do not reappear in subsequent instances, suggesting a one-off event rather than sustained backfilling. The second peak, around 11:00 AM on May 13, captures a more persistent trend, involving the most frequently detected accounts across the dataset. By examining the frequency of these detections, we identified three validators consistently engaging in potential backfilling, indicating an active practice of retroactively voting on missed slots to maximize credits, alongside one validator exhibiting milder behavior, with fewer instances that suggest a less aggressive approach to catching up on voting gaps.

The second method, dubbed the "Elaborate Mod", takes a broader perspective, analyzing voting patterns to identify validators that consistently submit an unusually high number of vote transactions in single slots across multiple instances. We aggregated vote transactions hourly, flagging validators that submit more than 4 distinct vote transactions in a slot. We chose this threshold because, while a leader might include multiple vote transactions from a validator due to network latency or validator restarts, exceeding 4 votes in a single slot is unlikely under normal conditions where validators typically vote once per slot to advance the chain. We further refined the detection by requiring this behaviour to occur in over 10 distinct hourly intervals, reflecting that such frequent high-volume voting is less likely to stem from typical network operations. This pattern could indicate backfilling because validators engaging in this practice often batch votes for multiple missed slots into a single slot’s transactions, aiming to retroactively fill voting gaps.
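
A sketch of this aggregation (our reconstruction, with the thresholds stated above):

```rust
use std::collections::{HashMap, HashSet};

// Thresholds from the text: more than 4 distinct vote transactions in a
// slot, recurring in more than 10 distinct hourly intervals.
const MAX_VOTES_PER_SLOT: u64 = 4;
const MIN_HOURLY_INTERVALS: usize = 10;

// (validator, hour, slot) -> number of vote transactions observed there.
fn elaborate_mod_flags(counts: &HashMap<(String, u64, u64), u64>) -> Vec<String> {
    let mut hours: HashMap<&str, HashSet<u64>> = HashMap::new();
    // Collect, per validator, the distinct hours containing a high-volume slot.
    for ((validator, hour, _slot), n) in counts {
        if *n > MAX_VOTES_PER_SLOT {
            hours.entry(validator).or_default().insert(*hour);
        }
    }
    // Flag only validators whose behaviour recurs across many hourly windows.
    hours
        .into_iter()
        .filter(|(_, h)| h.len() > MIN_HOURLY_INTERVALS)
        .map(|(v, _)| v.to_string())
        .collect()
}

fn main() {
    let mut counts = HashMap::new();
    // Hypothetical validator landing 6 votes in one slot across 11 hours.
    for hour in 0..11u64 {
        counts.insert(("V1".to_string(), hour, 1_000 + hour), 6u64);
    }
    println!("flagged: {:?}", elaborate_mod_flags(&counts));
}
```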

Fig. 4: Number of possible backfills detected using the “Elaborate Mod” model, cf. here

Figure 4 presents the hourly aggregation of possible backfill detections using the "Elaborate Mod" method, revealing a distinct pattern that complements the Simple Mod analysis. Unlike the dual peaks observed previously, this method identifies a single prominent peak around 11:00 AM on May 13, 2025, which aligns precisely with the second peak detected by the “Simple Mod”. The absence of the earlier peak from May 11 at 00:00 underscores the Elaborate Mod’s design, which reduces sensitivity to false positives caused by transient events, such as validator restarts or network-wide issues, focusing instead on sustained high-volume voting patterns indicative of deliberate backfilling. Notably, the Elaborate Mod detects a larger cohort of 120 validators engaging in potential backfilling, reflecting its broader scope in capturing consistent voting anomalies over time. Among these, the most prominent backfiller mirrors the primary validator identified by the “Simple Mod”.

Fig. 5: Vote credits earned out of the theoretical maximum, higher is better. Better vote effectiveness translates directly to higher APY for delegators. To make very subtle differences between validators more pronounced, we show the effectiveness over a 24h time window. This means that short events such as 5 minutes of downtime, can show up as a 24-hour long dip. Note the log scale!

Having identified potential backfillers, we now turn to assessing whether this practice might destabilize Solana’s consensus mechanism. As previously noted, improper backfilling can extend the lockout periods of stake committed to a stale fork, potentially slowing consensus by delaying validators’ ability to switch to the main chain. Figure 5 leverages internal data to provide a granular view of TVC effectiveness, tracking the performance of various validators. The purple validator, flagged as a backfiller by the “Elaborate Mod”, consistently outperforms others in TVC effectiveness under normal conditions, likely due to its aggressive credit recovery through backfilling. However, during vote-related disruptions (such as those coinciding with the May 13 peak) its effectiveness drops more sharply than that of vanilla validators, suggesting prolonged adherence to a wrong fork. This heightened sensitivity indicates that backfilling, while beneficial for credit accumulation, may amplify the risk of consensus delays if many validators on a stale fork face extended lockouts, raising questions about the broader safety of Solana’s consensus mechanism despite the validator’s overall performance advantage.

Conclusion

In this analysis, we focused on consensus modifications that backfill missed vote opportunities. Improper backfilling can exacerbate vote lagging, strain network resources, and extend lockouts on stale forks, potentially delaying consensus.

We developed two methods to potentially detect validators involved in this practice, the “Simple Mod” and “Elaborate Mod”. Our data analysis from May 11 to May 15, 2025, highlighted periods of potential backfilling. The Simple Mod identified bursts of vote transactions exceeding the slot distance to their recent block hash. At the same time, the Elaborate Mod flagged validators consistently submitting high vote counts across multiple instances, detecting 120 validators with one primary backfiller overlapping between methods. Analysis of TVC effectiveness showed that while backfillers often outperform in credits, they face sharper drops during vote-related disruptions. This suggests prolonged adherence to wrong forks that could hinder consensus if widespread. The introduction of SIMD-0218 (Intermediate Vote Credits) offers a promising solution by formalizing backfilling within the protocol, mitigating risks like vote lagging while ensuring fair credit recovery. Nonetheless, the interplay between backfilling and consensus stability raises ongoing questions about Solana’s long-term resilience, warranting further investigation into network-wide voting patterns and their impact on liveness and fairness.


All Reports

Treasury 3.0: How Digital Asset Treasuries Are Turning Crypto into Yield
August 18, 2025
BeraBoost: Maximizing Chorus One Delegator Rewards
February 6, 2025
Quarterly Network Insights: Q1 2024
June 13, 2024
Optimal Risk and Reward on EigenLayer: A first look
April 17, 2024
MEV-Boost Withdrawal Bug
March 11, 2024
Quarterly Network Insights: Q4 2023
February 28, 2024
Governance in Cosmos: 2023
January 29, 2024
The cost of artificial latency in the PBS context
December 15, 2023
Quarterly Network Insights: Q3 2023
November 7, 2023
MEV on the dYdX v4 chain
August 14, 2023
Quarterly Network Insights: Q2 2023
August 1, 2023
Quarterly Network Insights: Q1 2023
May 4, 2023
Breaking Bots: An alternative way to capture MEV on Solana
January 1, 2023
Governance in Cosmos: 2022
December 31, 2022
Annual Staking Review: 2022
December 31, 2022
Quarterly Network Insights: Q3 2022
September 30, 2022
Quarterly Network Insights: Q2 2022
June 30, 2022
Quarterly Network Insights: Q1 2022
March 31, 2022
Annual Staking Review: 2021
December 31, 2021

Want to get in touch with our research team?
