Blog

A primer on proposer preconfirms
We explore what preconfirmations are, why they matter, and how they’re set to transform the blockchain landscape.
September 9, 2024
5 min read

In the blockchain industry, where the balance between decentralization and efficiency often teeters on a knife's edge, innovations that address these challenges are paramount. Among these innovations, preconfirmations stand out as a powerful tool designed to enhance transaction speed, security, and reliability. Here, we’ll delve into what preconfirmations (henceforth referred to as “preconfirms”) are, why they matter, and how they’re set to transform the blockchain landscape.

Preconfirms are not a new concept.

The idea of providing a credible heads-up or confirmation that a transaction has occurred is deeply ingrained in our daily lives. Whether it's receiving an order confirmation from Amazon, verifying a credit card payment, or processing transactions in blockchain networks, this concept is familiar and widely used. In the blockchain world, centralized sequencers like those in Arbitrum function similarly, offering guarantees that your transaction will be included in the block.

However, these guarantees are not without limitations. True finality is only achieved when the transaction is settled on Ethereum. The reliance on centralized sequencers in Layer 2 (L2) networks, which are responsible for verifying, ordering, and batching transactions before they are committed to the main blockchain (Layer 1), presents significant challenges. They can become single points of failure, leading to increased risks of transaction censorship and bottlenecks in the process.

This is where preconfirms come into play. Preconfirms were introduced to address these challenges, providing a more secure and efficient way to ensure transaction integrity in decentralized networks.

Builders, Sequencers, Proposers: Who’s Who

Before jumping into the preconfirms trenches, let’s start by clarifying some key terms that will appear throughout this article (and are essential to the broader topic).

Builders: In the context of Ethereum and Proposer-Builder Separation (PBS), builders are responsible for selecting and ordering transactions in a block. This is a specialized role whose goal is to create the most valuable block possible for the proposer; in practice, block building is highly concentrated among a handful of entities. Blocks are submitted to relays, which act as mediators between builders and proposers.

Proposers: The proposer validates the contents of the most valuable block submitted by the block builders and proposes it to the network to be included as the new head of the blockchain. In this landscape, proposers are the validators in the Proof-of-Stake consensus protocol and are rewarded for proposing blocks (the winning builder also pays the proposer for the right to have its block proposed).

Sequencers: Sequencers are akin to air traffic controllers, particularly within Layer 2 rollup networks. They are responsible for coordinating and ordering transactions between the rollup and the Layer 1 chain (such as Ethereum) for final settlement. Because they have exclusive rights to transaction ordering, they also benefit from transaction fees and MEV. The rollups they sequence usually rely on ZK or optimistic security guarantees.

The solution: Preconfirmations

Now that we’ve set the stage, let’s dive into the concept of preconfirms.

At their core, preconfirms can provide two guarantees (a short sketch follows the list below):

  • Inclusion Guarantees: Assurance that a transaction will be included in the next block.
  • Execution Guarantees: Assurance that a transaction will successfully execute, especially in competitive environments where multiple users are vying for the same resources, such as in trading scenarios.
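The sketch below shows what a signed preconfirmation promise could look like. It is purely hypothetical: the field names, the two guarantee types, and the signing scheme are illustrative assumptions, not the format of any live preconfirmation protocol.

```python
# Hypothetical illustration only: field names and the signing scheme are assumptions,
# not the wire format of any live preconfirmation protocol.
from dataclasses import dataclass
from enum import Enum
import hashlib

class GuaranteeType(Enum):
    INCLUSION = "inclusion"   # promise: the transaction will be in the target block
    EXECUTION = "execution"   # promise: the transaction will be included AND will not revert

@dataclass
class Preconfirmation:
    tx_hash: str              # hash of the user's signed transaction
    target_slot: int          # slot whose block the promise applies to
    guarantee: GuaranteeType  # inclusion-only, or inclusion plus execution
    issuer: str               # builder, proposer, or gateway making the promise
    signature: bytes          # issuer's signature over digest()

    def digest(self) -> bytes:
        """The message the issuer signs; the user keeps it as evidence if the promise is broken."""
        payload = f"{self.tx_hash}:{self.target_slot}:{self.guarantee.value}:{self.issuer}"
        return hashlib.sha256(payload.encode()).digest()
```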

These two guarantees matter, particularly for:

Speed: Traditional block confirmations can take several seconds, whereas preconfirms can provide a credible assurance much faster. This speed is particularly beneficial for "based rollups" that batch user transactions and commit them to Ethereum, resulting in faster transaction confirmations. Teams such as @taikoxyz and @Spire_Labs are building based rollups.

Censorship Resistance: A proposer can request the inclusion of a transaction that some builders might not want to include.

Trading Use Cases: Traders may preconfirm transactions if it allows them to execute ahead of competitors.

Preconfirmations on Ethereum: A Closer Look

Now, zooming in on Ethereum.

The following chart describes the overall Proposer-builder separation and transaction pipeline on Ethereum.

Within the Ethereum network, preconfirms can be issued by different actors, depending on the specific needs of the network:

  1. Builder-issued Preconfirms

Builder preconfirms suit the trading use case best. These offer low-latency guarantees and are effective in networks where a small number of builders dominate block-building. Builders can opt into proposer support, which enhances the strength of the guarantee.

However, since block building is dominated by only a few builders, successfully onboarding these players is key.

  2. Proposer-issued Preconfirms

Proposers provide stronger inclusion guarantees than builders because they have the final say on which transactions are included in the block. This method is particularly useful for "based rollups," where Layer 1 validators act as sequencers.

Yet, maintaining strong guarantees is a key challenge for proposer preconfirms.

The question of which solution will ultimately win remains uncertain, as multiple factors will play a crucial role in determining the outcome. We can speculate on the success of builder opt-ins for builder preconfirms, the growing traction of based rollups, and the effectiveness of proposer declaration implementations. The balance between user demand for inclusion versus execution guarantees will also be pivotal. Furthermore, the introduction of multiple concurrent proposers on the Ethereum roadmap could significantly impact the direction of transaction confirmation solutions. Ultimately, the interplay of these elements will shape the future landscape of blockchain transaction processing.

Commit-Boost

Commit-Boost is an MEV-Boost-like sidecar for preconfirms.

Commit-Boost facilitates communication between builders and proposers, enhancing the preconfirmation process. It’s designed to replace the existing MEV-Boost infrastructure, addressing performance issues and extending its capabilities to include preconfirms.

Currently in testnet, Commit-Boost is being developed as neutral, non-venture-backed software for Ethereum, with the ambition of fully integrating preconfirms into its framework. Chorus One is currently running Commit-Boost on testnet.

Recap - The preconfirmation design space
  1. Who chooses which transactions to preconfirm.
    • This could be the builder, the proposer, or a sophisticated third party (“a gateway”) chosen by the proposer.
  2. Where in the block the preconfirmed transactions are included.
    • Granular control over placement can be interesting for traders even without execution preconfs.
  3. Whether only inclusion is guaranteed, or execution as well.
    • Without an execution guarantee, an included transaction could still fail, e.g. if it tries to trade on an opportunity that has disappeared.
  4. How much collateral the builder or proposer puts up, and in what form (see the sketch after this list).
    • Preconfers must be disincentivized from reneging on their promised preconfs for these to be credible.
    • E.g. this could be a Symbiotic or EigenLayer service, and proposed collateral requirements range from 1 ETH to 1000 ETH.
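As a rough sketch of how the collateral point could work: once the target block is produced, anyone can check the promise against it and slash the issuer's collateral if it was broken. The function below is purely illustrative; its names and the all-or-nothing slashing rule are assumptions, and real designs (for example on EigenLayer or Symbiotic) will differ.

```python
# Minimal sketch of the incentive check described in point 4 above. Names and the
# all-or-nothing slashing rule are assumptions, not any protocol's actual rules.

def settle_preconfirmation(promise, produced_block, collateral_eth: float) -> float:
    """Return how much of the issuer's collateral to slash once the target slot is produced.

    promise: object with tx_hash, target_slot, and guarantee ("inclusion" or "execution")
    produced_block: object with slot, tx_hashes (list), and reverted (set of tx hashes)
    """
    if produced_block.slot != promise.target_slot:
        return 0.0  # not the block the promise was about

    if promise.tx_hash not in produced_block.tx_hashes:
        return collateral_eth  # inclusion promise broken: slash the collateral

    if promise.guarantee == "execution" and promise.tx_hash in produced_block.reverted:
        return collateral_eth  # included but reverted: execution promise broken

    return 0.0  # promise honored; collateral stays intact
```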

Final Word

Chorus One has been deeply involved with preconfirms from the very beginning, pioneering some of the first-ever preconfirms using Bolt during the ZuBerlin and Helder testnets. We’re fully immersed in optimizing the Proposer-Builder Separation (PBS) pipeline and are excited about the major developments currently unfolding in this space. Stay tuned for an upcoming special episode of the Chorus One Podcast, where we’ll dive deeper into this topic.

If you’re interested in learning more, feel free to reach out to us at research@chorus.one.

About Chorus One

Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.

An introduction to oracle extractable value (OEV)
This is a joint research article written by Chorus One and Superscrypt, explaining OEV, and how it can be best captured.
August 30, 2024
5 min read

This is a joint research article written by Chorus One and Superscrypt.

Blockchain transactions are public and viewable even before they get written to the block. This has led to maximal extractable value (‘MEV’), i.e. where actors frontrun and backrun visible transactions to extract profit for themselves.

The MEV space is constantly evolving as competition intensifies and new avenues to extract value are always emerging. In this article we explore one such avenue - Oracle Extractable Value, where MEV can be extracted even before transactions hit the mempool.

This is particularly relevant for borrowing & lending protocols which rely on data feeds from oracles to make decisions on whether to liquidate positions or not. Read on to find out more.

Introduction

Value is in a constant state of being created, destroyed, won, or lost in any financialized system, and blockchains are no exception. User transactions are not isolated from their surroundings, but instead embedded within complex interactions that determine their final payoff.

Not all transaction costs are as explicit as gas fees. Fundamentally, the total value that can be captured from a transaction includes the payoff of downstream trades preceding or succeeding it. These can be benign in nature, for example, an arbitrage transaction to bring prices back in line with the market, or impose hidden taxes in the case of front running. Overall, maximal extractable value (or “MEV”) is the value that can be captured from strategically including and ordering transactions such that the aggregate block value is maximized.

If not extracted or monetized, value is simply lost. Presently, the actualization of MEV on Ethereum reflects a complex supply chain (“PBS”) where several actors such as wallets, searchers, block builders and validators fill specialized roles. There are returns on sophistication for all participants in this value chain, most explicitly for builders which are tasked with creating optimal blocks. Validators can play sophisticated timing games which result in additional MEV capture; for example, Chorus One has run an advanced timing games setup since early 2023, and published extensively on it. In the PBS context, the best proxy for the total MEV extracted is the final bid a builder gets to submit during the block auction.

Such returns on sophistication extend to the concept of Oracle Extractable Value (OEV), which is a type of MEV that has historically gone uncaptured by protocols. This article will explain OEV, and how it can be best captured.

Oracles

Oracles are one of crypto's critical infrastructure components: they are the choreographers that orchestrate and synchronize the off-chain world with the blockchain’s immutable ledger. Their influence is immense: they inform all the prices you see and interact with on-chain. Markets are constantly changing, and protocols and applications rely on secure oracle feed updates to provide DeFi services to millions of crypto users worldwide.

The current status quo is that third-party oracle networks serve as intermediaries that feed external data to smart contracts. They operate separately from the blockchains they serve, which maintains the core goal of chain consensus but introduces some limitations, including questions around fair sequencing, required payments from protocols and apps, and multiple sources of data in a decentralized world.

In practical terms, the data from oracles represents a great resource for value extraction. The market shift an oracle price update causes can be anticipated and traded profitably, by back-running any resulting arbitrage opportunities or (more prominently) by capturing resulting liquidations. This is Oracle Extractable Value. But how is it captured, and more importantly, who profits from it?

A potential approach to understand the value in OEV (using AAVE data).
Oracle Extractable Value (OEV)

In MEV, searchers (which are essentially trading bots that run on-chain) profit from oracle updates by backrunning them in a free-for-all priority gas auction. Value is distributed between the searchers, who find opportunities particularly in the lending markets for liquidations, and the block proposers that include their prices in the ledger. Oracles themselves have not historically been a part of this equation.

OEV changes this flow by atomically coupling the backrun trade with the oracle update. This allows the oracle to capture value, by either acting as the searcher itself or auctioning off the extraction rights.

How OEV created in DeFi can be captured by MEV searchers before the dApp gets access to it.

OEV primarily impacts lending markets, where liquidations directly result from oracle updates. By bundling an oracle update with a liquidation transaction, the value capture becomes exclusive, preventing front-running since both actions are combined into a single atomic event. However, arbitrage can still occur before the oracle update through statistical methods, as traders act on the true price seen in other markets.
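The sketch below illustrates this atomic coupling at a conceptual level. Everything in it is hypothetical: `build_oev_bundle`, the `lending_pool` and `position` objects, and the health-factor threshold are stand-ins, not the interface of Oval, API3, or Flashbots.

```python
# Conceptual sketch of coupling an oracle price update with the liquidations it unlocks.
# All names here are hypothetical stand-ins, not the API of any real OEV product.

def build_oev_bundle(oracle_update_tx, lending_pool, new_price: float) -> list:
    """Return an ordered, atomic bundle: the price update first, then the liquidations it enables."""
    bundle = [oracle_update_tx]  # the update must land first so the liquidations become valid

    for position in lending_pool.positions:
        # A position becomes liquidatable once the new price pushes its health factor below 1.
        if position.health_factor(new_price) < 1.0:
            bundle.append(position.build_liquidation_tx())

    # The bundle is submitted as one unit: either every transaction lands in the block,
    # in this order, or none do. That atomicity is what prevents front-running, since
    # nothing can be squeezed in between the update and the liquidation.
    return bundle
```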

Current landscape

UMA and Oval:

  • UMA has developed a middleware product called Oval (in collaboration with Flashbots), which aims to redistribute value more fairly within the DeFi space.
  • Oval works by wrapping data and conducting an order flow auction where participants bid for the right to use the data, with proceeds shared among protocols like Aave, UMA, and Chainlink.
  • This means that Oval inserts an auction mechanism and lets the market decide what a particular price update is worth (see the sketch after this list).
  • This system helps DeFi protocols like Aave capture value that would otherwise go to liquidators or validators, potentially increasing their revenue.
  • Recently, Oval announced they had successfully completed the “world’s first OEV capture”, through a series of liquidations on the platform Morpho Labs. They even claim a 20% APY boost on some pairs on Morpho.
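To make the order flow auction idea tangible, here is a toy model. The revenue split, the recipients, and the function names are illustrative assumptions, not Oval's actual mechanism or parameters.

```python
# Toy model of an order-flow auction over a price update. The revenue split and all
# names are illustrative assumptions, not Oval's actual mechanism or parameters.

def run_update_auction(bids: dict, revenue_share: dict):
    """bids: searcher -> bid in ETH; revenue_share: recipient -> fraction (should sum to 1)."""
    if not bids:
        return None, {}

    winner = max(bids, key=bids.get)  # highest bidder wins the right to backrun the update
    proceeds = bids[winner]
    payouts = {name: proceeds * share for name, share in revenue_share.items()}
    return winner, payouts

# Example: auction proceeds shared among the lending protocol, the oracle, and the data provider.
winner, payouts = run_update_auction(
    bids={"searcher_a": 0.8, "searcher_b": 1.1},
    revenue_share={"protocol": 0.6, "oracle": 0.25, "data_provider": 0.15},
)
print(winner, payouts)  # searcher_b wins and pays 1.1 ETH, split 60/25/15
```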

API3 and OEV Network:

  • API3 launched the OEV Network as a L2 solution, which uses ZK-rollups to capture and redistribute OEV within the DeFi ecosystem.
  • The network functions as an order flow auction platform where the rights to execute specific data feed updates are sold to the highest bidder.
  • This is a different extraction mechanism, as it turns the fixed liquidation bonus into a dynamic market-driven variable through competition.
  • This approach aims to enhance the revenue streams of DeFi protocols and promote a more balanced ecosystem for data providers and users.
  • API3’s solution also incentivizes API providers by distributing a portion of the captured OEV, thus encouraging direct participation and somewhat disrupting the dominance of third-party oracles.

Warlock

  • Warlock is an upcoming OEV solution that will combine an oracle update sourced from multiple nodes with centralized backrun transactions.
  • The oracle update will feature increasing ZK trust guarantees over time, starting with computation consistency across oracle nodes.
  • Centralizing the backrun allows for lower latency updates, precludes searcher congestion, and protects against information leakage as the searcher entity retains exclusivity, i.e. does not need to obscure alpha. Warlock will service liquidations with internal inventory.
  • The upshot is that lending markets can offer more margin due to less volatility exposure via lower latency. The relative upside will scale with the sophistication of the searcher entity and the impact of congestion on auction-type OEV.
  • Overall, the Warlock team estimates that a 10-20% upside will accrue to lending markets initially, with further upside as value capture improves.

Where could this go?

The upshot of this MEV capture is that oracles have a new dimension to compete on. OEV revenue can be shared with dApps by providing oracle updates free of charge, or by outright subsidizing integrations. Ultimately, protocols with OEV integration will thus be able to bid more competitively for users.

OEV solutions share the same basic idea - shifting the value extraction from oracle updates to the oracle layer, by coupling the price feed update with backrun searcher transactions.

There are several ways of approaching this - an OEV solution may integrate with an existing oracle via an official integration, or through third party infrastructure. These solutions may also be purpose built and provide their own price update.

Heuristically, the key components of an OEV solution are the oracle update and the MEV transaction - these can be either centralized or decentralized.

We would expect purpose-built oracles or “official” extensions to existing oracles to perform better, since they avoid the extra latency of running third-party logic on top of the upstream oracle. They would also be much more attractive from a risk perspective: with third-party infrastructure, an upstream update could unexpectedly break the integration.

The practical case is that a centralized auction can make the most sense in latency-sensitive use cases. For example, it may allow a protocol to offer more leverage, as the risk of being stranded with bad debt due to stale price updates is minimized. By contrast, a decentralized auction likely yields the highest aggregate value in use cases where latency is less sensitive, i.e. where margin requirements are higher.

Mechanisms and Implications of OEV
  1. Atomic Liquidations
    • In a network supply chain, several blockchain actors can benefit from the informational advantage that they possess.
    • Entities with privileged access to oracle data can leverage this information for liquidations or arbitrage.
    • This can create unfair advantages and centralize power among those with early data access.
  2. A new dimension to compete on
    • OEV can lead to substantial profit opportunities, with estimated profits in the millions of dollars. This is especially true in highly volatile markets.
    • OEV enables oracles to distribute atomic backrun rights to searchers, capturing significant value.
    • Ecosystems that distribute value in proportion to the contributions (of users, developers, and validators) are likely to thrive.
  3. Potential Risks and Concerns
    • If not managed properly, OEV can undermine the fairness and integrity of decentralized systems. Although the oracle’s role remains the same, OEV opens the door to competition on the value oracles can extract and pass on to dApps.
    • Some oracles like Chainlink have moved to reduce OEV and mitigate its impact, by refusing to endorse any third-party OEV solution. However, canonical OEV integrations are important as third party integrations bring idiosyncratic risk.
    • In traditional finance, market makers currently make all of the money from order flow. In crypto, there is a chance that value can be shared with users.
  4. Mitigation Strategies
    • Decentralization of Oracles: Using multiple independent oracles to aggregate data can reduce the risk of any single point of control.
    • Cryptographic Techniques: Techniques like zero-knowledge proofs can help ensure data integrity and fair dissemination without revealing the actual data prematurely.
    • Incentive Structures: Designing incentive structures that discourage exploitative behavior and promote fair access to data. Ultimately, the goal is a competitive market between oracles, where they compete with how much value can pass downstream.

Key Insights
  • Revenue Enhancement: By capturing OEV, projects can significantly enhance the revenue streams for DeFi protocols. For example, UMA’s Oval estimates that Aave missed out on about $62 million in revenue over three years due to not capturing OEV. By enabling these protocols to capture such value, they can reduce unnecessary payouts to liquidators and validators, redirecting this value to improve their own financial health.
  • Decentralization and Security: API3’s use of ZK-rollups and the integration with Polygon CDK provides a robust, secure, and scalable solution for capturing OEV. This approach not only ensures transparency and accountability but also aligns with the principles of decentralization by preventing a single point of failure and enabling more participants to benefit from the system. An aspect of this is also addressed by oracle-agnostic solutions and order flow auctions.
  • Incentives for API Providers: Both API3 and UMA’s solutions include mechanisms to incentivize API providers. API3, in particular, allows API providers to claim ownership of their data in Web3, providing a viable business model that promotes direct participation and reduces reliance on third-party oracles.
  • Impact on Users and Developers: For users and developers of DeFi applications, these innovations should be largely invisible yet beneficial. They help ensure that DeFi protocols operate more efficiently and profitably, potentially leading to lower costs and better services for end-users.
  • Adoption by Oracles and Protocols: Ultimately, the oracles have a part to play in the expansion and acceleration of OEV extraction, either themselves or, more realistically, by partnering with third-party solutions. In recent weeks, UMA has launched OEV capture for Redstone oracle feeds, whilst Pyth Network announced their pilot for a new OEV capture solution. Protocols might also want to strike a balance between a new revenue stream (for the protocol, liquidity pools, liquidity providers…) and the negative externalities on their user base.

OEV is still in its early stages, with much development ahead. We're excited to see how this space evolves and will continue to monitor its progress closely as new opportunities and innovations emerge.

About Chorus One

Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.

Hex Trust x Chorus One: Institutional-grade staking
Hex Trust partners with Chorus One to enhance their robust custody offerings and provide more clients with access to advanced staking solutions.
August 27, 2024
5 min read

We're thrilled to partner with Hex Trust, a leading licensed digital asset custodian. This collaboration combines Chorus One's institutional-grade staking infrastructure with Hex Trust's robust custody services, enhancing Hex Trust's offerings and providing more clients with advanced staking solutions.

"Chorus One is excited to collaborate with Hex Trust to expand staking services. This partnership aligns perfectly with our commitment to making staking accessible, secure, and fully compliant for institutional clients." — Brian Crain, CEO of Chorus One

Why Did Hex Trust Choose Chorus One?

Chorus One has maintained a proven track record as a leader in institutional-grade staking. With the largest network support in the industry and an ISO 27001:2022 certification, we are well-positioned to support Hex Trust in delivering high-quality staking services to its clients. This partnership combines an APAC-based licensed custodian with a leading staking provider to deliver compliant and secure staking options across the region.

Benefits of Staking for Institutions

Staking in Proof-of-Stake (PoS) blockchains presents a compelling opportunity for institutions like Hex Trust. It provides a secure and predictable way to generate rewards, leveraging the native token inflation and transaction fees of the blockchain. This results in a consistent revenue stream that is less volatile than traditional crypto trading.

Moreover, by participating in staking, institutions not only earn rewards but also contribute to the overall security and governance of the network. This active involvement helps strengthen the network's reliability and promotes the long-term growth of the Web3 ecosystem, aligning with the broader goals of financial innovation and digital asset adoption.

About Hex Trust

Established in 2018, Hex Trust is a fully licensed digital asset custodian dedicated to providing comprehensive services for protocols, foundations, financial institutions, and the Web3 ecosystem. Hex Trust offers a suite of services including custody, DeFi, brokerage, and more, all built on a regulated infrastructure. For more information, visit hextrust.com or follow Hex Trust on LinkedIn, X (formerly Twitter), and Telegram.

Hex Trust Disclaimer: Products or services mentioned in this material are subject to legal and regulatory requirements in applicable jurisdictions and may not be available in all jurisdictions.

About Chorus One

Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.

This partnership marks a significant step in our shared mission to make staking more accessible and secure for institutional clients. We look forward to the continued growth and success of this collaboration.

Metrics that Matter: Evaluating Chorus One’s winning Solana performance
Evaluating Solana Validator performance metrics and Chorus One's performance in July 2024
August 21, 2024
5 min read
Key Takeaways
  • Chorus One processes 11.4% more transactions per second than the average Solana validator, enhancing network throughput.
  • With a skip rate of 2.03%, Chorus One outperforms both the network average (5.19%) and the superminority (5.68%).
  • Chorus One's blocks contain 7.8% more transactions on average compared to other validators.
  • Chorus One achieves top performance through advanced hardware, zero-downtime deployments, strategic data center locations, and continuous monitoring.
  • If all validators performed like Chorus One, Solana’s overall transaction capacity could increase by 11.4%.

--

There are many aspects to validator performance on Solana, and different metrics are important to different people. For users of the Solana network, throughput (transactions per second) and latency (how quickly a transaction lands) are key metrics. In this article we’ll dive into two factors that affect those: skip rate and block size. We’ll explain how Chorus One is able to outperform both network average and the superminority on these metrics. If all validators performed as well as Chorus One on these metrics, Solana would be able to process 11.4% more transactions per second.

Throughput

As a Solana user, when you submit a transaction, you want it to be included in the chain as quickly as possible, as cheaply as possible. When the chain can process only a limited amount of transactions per second, that means that only users who are willing to pay high priority fees can get their transaction included. When the chain can process more transactions per second, transaction processing capacity becomes less scarce, and transaction fees go down. Solana’s throughput is determined by the validators that make up the network, so for good network performance, it is important to delegate to a validator that performs well.

Time period and comparison

For this article we look at the month of July 2024. All metrics are reported over the period from midnight July 1st until midnight August 1st in the UTC time zone. (Slot 274965076 until 280826904, for those who want to reproduce our findings.)

In this article we contrast Chorus One against two groups of validators: the entire network (including Chorus One), and the superminority. The superminority is the smallest set of validators that together control more than one third of the stake. We use the superminority from epoch 650, the final epoch in July. It consists of the top 19 validators by stake.
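The definition above translates directly into a short computation. The sketch below is illustrative: it assumes you already have a mapping from validator identity to active stake, however you obtained it.

```python
# Computing the superminority exactly as defined above: sort validators by stake and take
# the smallest prefix that together controls more than one third of the total stake.
# The stake mapping is assumed to be obtained elsewhere (e.g. from a snapshot of epoch 650).

def superminority(stakes: dict) -> list:
    """stakes: validator identity -> active stake. Returns the superminority, largest first."""
    total = sum(stakes.values())
    members, cumulative = [], 0.0
    for validator, stake in sorted(stakes.items(), key=lambda kv: kv[1], reverse=True):
        members.append(validator)
        cumulative += stake
        if cumulative > total / 3:
            break
    return members

# With the epoch 650 stake distribution, this yields the 19 validators referenced above.
```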

Skip rate

In the Solana network, validators periodically have a duty to produce blocks. Before the start of the epoch, the protocol sets the leader schedule, which determines when every validator has to produce a block. Validators with more stake get assigned more blocks to produce.

If all goes well, when a validator’s turn comes to be the leader, the validator produces a block. The chain grows by one block, and users’ transactions get included. When things don’t go well, the leader fails to produce a block, or the block may not be accepted by the other validators. When the leader fails to extend the chain, this is called a skip, and the fraction of blocks skipped out of blocks assigned in some period of time is called the skip rate. Skips are bad for users of the network, because during a skip, no transactions get processed. Skips lower the throughput of the chain, and delay when transactions get processed. A lower skip rate is therefore better.

A validator can skip for multiple reasons. Of course a validator that is offline will be unable to produce a block, but even when it is online and produces a block, that can still result in a skip. For example, the validator could have been slightly late, and the network has already moved on, assuming the validator skipped its duty. Many of the factors that affect skip rate are directly or indirectly under the validator’s control, but some amount of skipping is inevitable in a decentralized network. During times of high activity, skip rate is generally higher network-wide than during quiet periods. Therefore, the skip rate is not meaningful in isolation, but comparing skip rate between validators is one way to judge their performance.

Over July 2024, Chorus One achieved a skip rate of 2.03%, while the network-wide skip rate was 5.19%. This means that average Solana validators fail to produce their blocks more than 2.5 times as often as Chorus One.

Maybe network average is not a fair comparison though? It may be the case that a few bad validators are pulling up the average. So let’s look at the superminority, the top validators by stake. This relatively small set of validators has the responsibility to produce one third of the blocks, so its influence on the chain’s throughput is large. Over July 2024, the superminority together achieved a skip rate of 5.68%, which is even worse than network average. Superminority validators fail to produce their blocks almost 3× as often as Chorus One.

The Solana network is effectively leaving 3.3% of its blocks on the table by keeping stake delegated to validators with high skip rates.

Block size

Aside from skip rate, a major factor for throughput is the number of transactions that every block contains. When blocks can fit more transactions, the throughput of the chain goes up. When validators are able to build larger blocks, fewer user transactions have to be postponed to the next block, so latency goes down. Furthermore, more capacity means lower transaction costs.

Over July 2024, blocks produced by Chorus One contained on average 1696.2 transactions. (This includes vote transactions that contribute to Solana’s consensus mechanism.) The network-wide average over this period was a mere 1573.3 per block. This means that Chorus One includes 7.8% more transactions per block than average validators.

Again, let’s compare this to the validators with the greatest responsibility and disproportionate impact on chain-wide throughput: the superminority. Here we see that with 1640.6 transactions per block, the superminority does outperform the network average, but nonetheless Chorus One outperforms the superminority by 3.4%.

This means that the Solana network is effectively leaving a 7.8% throughput boost on the table by keeping stake delegated to low-performing validators. This number covers produced blocks only; we don't count skips as zero transactions per block. The 7.8% boost therefore comes on top of the 3.3% skip rate boost. Combined, this means that Chorus One achieves 11.4% more transactions per second than average validators.
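For readers who want to check how these headline numbers combine, the arithmetic below reproduces them from the figures quoted in this article (the combination is multiplicative, not additive).

```python
# Reproducing the headline numbers from the figures quoted above.

chorus_skip, network_skip = 0.0203, 0.0519       # skip rates, July 2024
chorus_txs, network_txs = 1696.2, 1573.3         # average transactions per produced block

# If every validator skipped as rarely as Chorus One, the share of leader slots that
# actually produce a block rises from 94.81% to 97.97%:
block_boost = (1 - chorus_skip) / (1 - network_skip)   # ~1.033, i.e. ~3.3% more blocks

# Larger blocks multiply on top of that:
tx_boost = chorus_txs / network_txs                    # ~1.078, i.e. ~7.8% more txs per block

combined = block_boost * tx_boost                      # ~1.114, i.e. ~11.4% more txs per second
print(f"{block_boost - 1:.1%} more blocks, {tx_boost - 1:.1%} larger blocks, {combined - 1:.1%} combined")
```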

How Chorus One achieves top performance

Why is Chorus One able to process 11.4% more transactions per second than other validators? As is often the case with performance optimization, there is no single trick, but if you stack enough small optimizations, the combined result can be substantial. A few of the techniques we use:

  • We use the best hardware available on the market. Solana is very sensitive to single-core CPU performance, and with the current rate of innovation in the hardware world, a CPU that was top of the line 18 months ago no longer cuts it to be a top-tier validator today. Chorus One is always using the latest generation CPUs to ensure maximum performance.
  • We deploy with zero downtime. Occasionally we need to restart a validator client (for example to update after a new version is released) or an entire machine (for example, to apply security updates). This process can take many minutes, during which the validator cannot vote or produce blocks. This amount of downtime is unacceptable to us, so we run multiple Solana instances, on different machines. When we need to restart one instance, a different instance takes over validator duties, ensuring that we don’t skip a single block. This redundancy also enables us to maintain uptime in the case of hardware or network failures, which is something that node operators who save costs by running only a single node are unable to do.
  • We use the best locations. We work with multiple hardware providers and data centers, who offer ample bandwidth, to find the location where Solana performs best. While doing so, we have to keep decentralization of the network in mind. Being close to peers is good for performance, but we don’t want to run from a data center where too many other validators are already located; the network has to remain resilient against disasters in that location. Our secondary instance (for failover) is always located in a different country than our primary one. Operating multiple nodes in multiple locations enables us to measure which locations perform best, and enables us to respond quickly to changes in network conditions.
  • We continuously monitor our nodes, and our 24/7 on-call rotation can respond in minutes when something is amiss. As a professional node operator, we have a team of platform engineers working tirelessly to keep our nodes running smoothly.

Final Word

In this article we highlighted two key Solana performance metrics that matter for users of the network: skip rate and block size. Lower skip rates and larger block sizes mean that users can get their transactions included faster and for a lower fee. These two metrics contribute to how many transactions per second Solana can process. Through multiple optimizations and operational practices, Chorus One achieves 11.4% more transactions per second than the network average. If all delegators would delegate to validators who perform as well as Chorus One, Solana would be able to process 11.4% more transactions per second.

About Chorus One

Chorus One is a leading institutional staking provider, securing over $3 billion in assets across 60+ Proof-of-Stake networks. Since 2018, Chorus One has been a trusted partner for institutions, offering enterprise-grade solutions, industry-leading research, and investments in cutting-edge protocols.

