Solana processes thousands of transactions per second, which creates intense competition for transaction inclusion in the limited space of a slot. The high throughput and low block time (~400ms) require transactions to be propagated, prioritized, and included in real-time.
High throughput on Solana comes with another advantage: low transaction costs. Transaction fees have been minimal, at just 0.000005 SOL per signature. While this benefits everyone, it comes with a minor trade-off—it makes spam inexpensive.
For end-users, spam means slower transaction finalization, higher costs, and unreliable performance. Spam has even halted the network, as during the bot-driven DDoS in 2021 and the Candy Machine NFT mint in 2022.
Against this backdrop, Solana introduced significant updates in 2022: stake-weighted quality of service (swQoS) and priority fees. Both are designed to ensure the network prioritizes higher-value transactions, albeit through different approaches.
Another piece of infrastructure that can help reduce transaction latency is Jito MEV. It enables users to send tips to validators in exchange for ensuring that transaction bundles are prioritized and processed by them.
This article will explore these solutions, break down their features, and assess their effectiveness in transaction landing latency.
Let’s start with a basic building block—a transaction.
Solana has two types of transactions: voting and non-voting (regular). Voting transactions achieve consensus, while non-voting transactions change the state of the network's accounts.
A Solana transaction consists of several components that define how data is structured and processed on the blockchain¹: an array of signatures, and a message made up of a header, a list of account addresses, a recent blockhash, and one or more instructions.
A single transaction can have multiple accounts, instructions, and signatures.
Below is an example of a non-voting transaction, including the components mentioned above:
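Since the original example isn't reproduced here, below is a simplified sketch of the structure, mirroring the JSON a getTransaction RPC call returns. All values are placeholders, not real on-chain data:

```python
# A simplified, illustrative view of a non-voting (regular) Solana
# transaction, mirroring the JSON returned by the getTransaction RPC call.
# All values below are placeholders, not real on-chain data.
example_transaction = {
    "signatures": [
        "5uH7...sig1",          # one signature per required signer
    ],
    "message": {
        "header": {
            "numRequiredSignatures": 1,
            "numReadonlySignedAccounts": 0,
            "numReadonlyUnsignedAccounts": 1,
        },
        "accountKeys": [        # every account the transaction touches
            "FeePayer111...",
            "Recipient111...",
            "11111111111111111111111111111111",  # System Program
        ],
        "recentBlockhash": "9sHc...hash",  # also used for expiry (150 slots)
        "instructions": [
            {
                "programIdIndex": 2,   # index into accountKeys
                "accounts": [0, 1],    # fee payer -> recipient
                "data": "3Bxs...",     # base58-encoded instruction data
            }
        ],
    },
}

print(example_transaction["message"]["recentBlockhash"])
```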
On Solana, a transaction can be initiated by a user or a smart contract (a program). Once initiated, the transaction is sent to an RPC node, which acts as a bridge between users, applications, and the blockchain.
The RPC node forwards the transaction to the current leader—a validator responsible for building the next block. Solana uses a leader schedule, where validators take turns proposing blocks. During their turn, the leader collects transactions and produces four consecutive blocks before passing the role to the next validator.
Validators and RPC nodes are two types of nodes on Solana. Validators actively participate in consensus by voting, while RPC nodes do not. Aside from this, their structure is effectively the same.
So, why are RPCs needed? They offload non-consensus tasks from validators, allowing validators to focus on voting. Meanwhile, RPC nodes handle interactions with applications and wallets, such as fetching balances, submitting transactions, and providing blockchain data.
The main difference is that validators are staked, securing the network, while RPC nodes are not.
After reaching the validator, transactions are processed in a few stages²³: the Fetch Stage receives incoming packets; SigVerify checks their signatures; the Banking Stage executes them and updates account state; and the Broadcast Stage propagates the resulting entries, as shreds, to the rest of the network.
The transaction is considered confirmed if it is voted for by ⅔ of the total network stake. It is finalized after 31 blocks.
In this setup, all validators and RPC nodes compete for the same limited bandwidth to send transactions to leaders. This creates inefficiencies, as any node can overwhelm the leader by spamming more transactions than the leader can handle.
To improve network resilience and enhance user experience, Solana introduced QUIC, swQoS, and priority fees, as outlined in this December 2022 post:
With the adoption of the QUIC protocol, trusted connections between nodes are required to send transactions. The swQoS system prioritizes these connections based on stake. In this framework, non-staked RPC nodes have limited opportunities to send transactions directly to the leader. Instead, they primarily rely on staked validators to forward their transactions.
Technically, a validator must configure swQoS individually for each RPC node, establishing a trusted peer relationship. When this service is enabled, any packets the RPC node sends are treated as though they originate from the validator configuring swQoS.
Validators are allocated a portion of the leader’s bandwidth proportional to their stake. For example, a validator holding 1% of the total stake can send up to 1% of the transaction packets during each leader’s slot.
From the leader’s perspective, 80% of available connections are reserved for staked nodes, while the remaining 20% are allocated to RPC nodes. To qualify as a staked node, a validator must maintain a minimum stake of 15,000 SOL.
While swQoS does not guarantee immediate inclusion of all transactions, it significantly increases the likelihood of inclusion for transactions submitted through nodes connected to high-stake validators.
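To make the arithmetic concrete, here is a minimal sketch of the bandwidth share a staked node might claim under swQoS. The 80/20 split and the 15,000 SOL minimum come from the text above; treating a validator's share of the staked pool as exactly proportional to stake is a simplification of the real connection logic:

```python
STAKED_POOL_SHARE = 0.80   # leader connections reserved for staked nodes
RPC_POOL_SHARE = 0.20      # remainder shared by non-staked RPC nodes
MIN_STAKE_SOL = 15_000     # minimum stake to qualify as a staked node

def swqos_bandwidth_share(stake_sol: float, total_stake_sol: float) -> float:
    """Rough share of the leader's inbound transaction bandwidth a
    validator can use, proportional to its stake (simplified model)."""
    if stake_sol < MIN_STAKE_SOL:
        return 0.0  # must compete for the shared 20% RPC pool instead
    return STAKED_POOL_SHARE * (stake_sol / total_stake_sol)

# A validator holding 1% of total stake gets roughly 0.8% of the leader's
# connections under this simplified model.
print(f"{swqos_bandwidth_share(3_920_000, 392_000_000):.4%}")
```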
Priority fees serve the same role as swQoS by increasing the chances of transaction inclusion, though they use a completely different mechanism.
There are two types of fees on Solana⁵: a base fee of 5,000 lamports (0.000005 SOL) per signature, and an optional priority fee, priced per compute unit, which a user can add to raise a transaction's priority.
Of the total fees from a transaction, 50% is burned, while 50% is received by the leader processing the transaction. A proposal to award the validator 100% of the priority fee has been passed and is expected to be activated in 2025 (see SIMD-0096).
Priority fees help validators prioritize transactions, particularly during high congestion periods when many transactions compete for the leader's bandwidth. Since fees are collected before transactions are executed, even failed transactions pay them.
During the banking stage of Solana’s transaction processing, transactions are non-deterministically assigned to queues within different execution threads. Within each queue, transactions are ranked by their priority fee and arrival time⁶. While a higher priority fee doesn’t guarantee that a transaction will be executed first, it does increase its chances.
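A minimal sketch of that within-queue ranking: transactions sorted by priority fee, with arrival time as the tie-breaker. The real scheduler (thread assignment, account-lock conflicts) is more involved; this only illustrates the ordering rule:

```python
from dataclasses import dataclass, field
import itertools

_arrival_counter = itertools.count()

@dataclass
class QueuedTx:
    signature: str
    priority_fee: int  # micro-lamports per compute unit
    arrival: int = field(default_factory=lambda: next(_arrival_counter))

def rank_queue(queue: list[QueuedTx]) -> list[QueuedTx]:
    # Higher priority fee first; earlier arrival breaks ties.
    return sorted(queue, key=lambda tx: (-tx.priority_fee, tx.arrival))

queue = [QueuedTx("txA", 100), QueuedTx("txB", 5_000), QueuedTx("txC", 5_000)]
print([tx.signature for tx in rank_queue(queue)])  # ['txB', 'txC', 'txA']
```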
The final puzzle of transaction prioritization is Jito. This modified Solana client allows searchers to send tips to validators in exchange for including groups of transactions, known as bundles, in the next block.
It could be argued that the Jito infrastructure prioritizes transactions using a tipping mechanism, as users can send a single transaction with a tip to improve its chances of landing fast.
For a deeper explanation of how Jito works, check out our previous article on the Paladin bot, which provides more details.
We now have a clearer understanding of how all three solutions contribute to transaction inclusion and prioritization. But how do they affect latency? Let’s find out.
Methodology
To calculate the time to inclusion of a transaction, we measure the difference between the time it is included in a block and the time it is generated. On Solana, the generation time can be determined from the timestamp of the transaction’s recent blockhash. Transactions with a recent blockhash older than 150 slots—approximately 90 seconds—expire.
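A minimal sketch of the measurement, assuming we already have timestamps for the slot that minted the transaction's recent blockhash and for the block that included it (the helper inputs are hypothetical):

```python
MAX_BLOCKHASH_AGE_SLOTS = 150  # transactions expire after ~150 slots

def time_to_inclusion(blockhash_slot_ts: float, inclusion_block_ts: float) -> float:
    """Approximate generation-to-inclusion latency in seconds.
    blockhash_slot_ts: timestamp of the slot that produced the tx's
    recent blockhash (our proxy for generation time).
    inclusion_block_ts: timestamp of the block that included the tx."""
    return inclusion_block_ts - blockhash_slot_ts

# e.g., blockhash minted at t=0.0s, tx included at t=17.3s
print(time_to_inclusion(0.0, 17.3))  # 17.3
```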
The latest blockhash is assigned to the transaction before it is signed, so the measured latency also captures the delay between fetching the blockhash and submitting the transaction. Transactions signed by bots, which keep this delay near zero, will therefore appear to be included faster than transactions generated by normal users. This method is not perfect, but it still allows us to collect valuable information about latency and user topology.
Other factors beyond the swQoS and priority fees, such as the geographical proximity of nodes to the leader or validator and RPC performance, also impact inclusion times—we are not fully accounting for those.
To reduce the possible biases, we consider only slots proposed by our main identity from November 18th to November 25th, 2024.
Time to Inclusion
The time to inclusion across all transactions, without any filtering, has a trimodal distribution, suggesting at least three transaction types. The highest peak is at 63 seconds, followed by another at 17 seconds and a smaller one at 5 seconds.
The peaks at 17 and 63 seconds are likely from regular users. This double peak could occur because general users don't set maxRetries to zero when generating the transaction. The peak at around 5s is probably related to bots, for which the delay between generating and signing a transaction is effectively zero.
We can classify users based on their 95th percentile time to inclusion:
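Since the exact thresholds aren't reproduced here, the cutoffs in this sketch are purely hypothetical; it only illustrates the mechanics of a p95-based classification:

```python
import numpy as np

# Hypothetical cutoffs in seconds -- the article's actual thresholds
# are not reproduced here.
FAST_P95_S = 10.0
SLOW_P95_S = 40.0

def classify_user(inclusion_times_s: list[float]) -> str:
    p95 = np.percentile(inclusion_times_s, 95)
    if p95 < FAST_P95_S:
        return "fast"
    if p95 < SLOW_P95_S:
        return "normal"
    return "slow"

print(classify_user([2.1, 3.5, 4.0, 5.2, 6.8]))  # fast
print(classify_user([15.0, 30.0, 55.0, 62.0]))   # slow
```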
Most users fall into the “normal” and “slow” classifications. Only a small fraction of submitted transactions originate from “fast” users.
Let’s now break down transactions by source.
Priority Fee
Transactions can be categorized based on their priority fee (PF) relative to the PF distribution in the corresponding slot. Specifically, we can compare the PF with the 95th percentile (95p) of the distribution:
The size of the priority fee generally doesn’t influence a transaction’s time to inclusion. There isn’t a clear threshold where transactions with higher PF are consistently included more quickly. The result remains stable even when accounting for PF per compute unit.
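A sketch of that comparison, measuring a transaction's PF against the 95th percentile of its slot. The "high"/"low" labels are our own shorthand, not the article's exact buckets:

```python
import numpy as np

def categorize_pf(tx_priority_fee: int, slot_priority_fees: list[int]) -> str:
    """Label a transaction's priority fee relative to the 95th percentile
    of all priority fees observed in its slot (simplified sketch)."""
    p95 = np.percentile(slot_priority_fees, 95)
    return "high" if tx_priority_fee >= p95 else "low"

slot_fees = [0, 0, 100, 1_000, 5_000, 200_000]
print(categorize_pf(150_000, slot_fees))  # 'low' -- below this slot's p95
```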
Jito Tippers
We can restrict the analysis to users sending transactions via the Jito MEV infrastructure, excluding addresses of known swQoS consumers. Interestingly, most Jito transactions originate from “slow” users.
We categorize tippers by the size of their tips in the block, analogously to what we did for PF:
When we compute the probability density function (PDF) of time to inclusion based on this classification, we find that the tip size doesn’t significantly impact the time to inclusion, suggesting that to build a successful MEV bot, one doesn’t have to pay more in tips!
Within the Jito framework, a bundle can consist of a single transaction that carries the tip itself, or of multiple transactions where the tip is paid in a separate transaction.
In both cases, the time it takes for the entire bundle to be included is determined by the inclusion time of the tip transaction. However, when a tip is paid in a separate transaction, we don’t track the other leg. This reduction in volume explains why the PDF of tippers differs from that of Jito consumers.
swQoS
It’s impossible to fully disentangle transaction time to inclusion from swQoS for general users, meaning some transactions in the analysis may still utilize swQoS. However, we can classify users based on addresses associated with our swQoS clients.
When we do this and apply the defined user topology classification, it becomes clear that swQoS consumers experience significantly reduced times to inclusion.
The peak around 60 seconds is much smaller for swQoS consumers, indicating they are far less likely to face such high inclusion times.
The highest impact of using swQoS is seen in the reduction of the time to inclusion for “slow” users. By computing the cumulative distribution function (CDF) for this time, we observe a 30% probability of these transactions being included in less than 13 seconds.
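A sketch of how such a CDF value is read off an empirical sample; the synthetic data below is illustrative, not the measured distribution:

```python
import numpy as np

def empirical_cdf_at(samples_s: np.ndarray, t: float) -> float:
    """P(time to inclusion <= t), estimated from observed samples."""
    return float(np.mean(samples_s <= t))

# Illustrative synthetic sample -- not the article's data.
rng = np.random.default_rng(0)
times = rng.lognormal(mean=3.0, sigma=0.8, size=10_000)
print(f"P(inclusion < 13s) = {empirical_cdf_at(times, 13.0):.0%}")  # ~29%
```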
When comparing the corresponding CDFs:
“Normal” users also benefit from swQoS. There's an additional peak in the PDF for these users between 9s and 13s, showing that some “normal” users process transactions in less than 20s. Additionally, another peak appears around 40s, indicating that part of the slower users now see their 95th percentile falling in the left tail of the “normal” range. This suggests that the overall spread of the time-to-inclusion distribution is reduced.
There is no statistically significant difference between the analyzed samples for “fast” users. However, some Jito consumers may also use swQoS, which complicates the ability to draw definitive conclusions.
Despite this, the improvements for “slow” and “normal” users highlight swQoS's positive impact on transaction inclusion times. If swQoS explains the PDF shape for “fast” users, it increases the likelihood of inclusion within 10s from ~30% to ~100%, a 3x improvement. A similar 3x improvement is observed for “slow” users being included within 13s.
Transaction inclusion is arguably Solana's most pressing challenge today. Efforts to address this have been made at the core protocol level with swQoS and priority fees and through third-party solutions like Jito (remembering that the main Jito use is MEV).
Solana’s latest motto is to increase throughput and reduce latency. In this article, we have examined how these three solutions improve landing time. Or, more simply, do they actually reduce latency? We found out that:
Among the three, swQoS is the most reliable for reducing latency. Jito and priority fees can be used when the time to inclusion is less important.
References:
Due to the unique architecture of blockchains, block proposers can insert, censor, or sort user transactions in a way that extracts value from each block before it's added to the blockchain.
These manipulations, called MEV or Maximal Extractable Value, come in various forms. The most common are arbitrage¹, liquidations², NFT mints³, and sandwiching⁴. Arbitrage involves exploiting price differences for the same asset across markets. Liquidations occur in lending protocols when a borrower’s collateral drops in value, allowing others to buy it at a discount. NFT mints can be profitable when high-demand NFTs are resold after minting.
Most types of MEV can benefit the ecosystem by helping with price discovery (arbitrage) or preventing lending protocols from accruing bad debt (liquidations). However, sandwiching is different. It involves an attacker front-running a user’s trade on a DEX and selling immediately for a profit. This harms the ecosystem by forcing users to pay a consistently worse price.
Solana's MEV landscape differs from Ethereum's due to its high speed, low latency, lack of a public mempool, and unique transaction processing. Without a public mempool for viewing unconfirmed transactions, MEV searchers (actors specializing in finding MEV opportunities⁵) send transactions to RPC nodes directly, which then forward them to validators. This setup enables searchers to work with RPC providers to submit a specifically ordered selection of transactions.
Moreover, the searchers don't know the leader's geographical location, so they send multiple transactions through various RPC nodes to improve their chances of being first. This spams the network as they compete to extract MEV—if you're first, you win.
Jito
A key addition to the Solana MEV landscape is Jito, which released a fork of the Solana Labs client. At a high level, the Jito client enables searchers to tip validators to include a bundle of transactions in the order that extracts the most value for the searcher. The validators can then share the revenue from the tips with their delegators.
These revenues are substantial. Currently, the Jito-Solana client operates on 80% of validators and generates thousands of SOL daily in tips from searchers. However, searchers keep a portion of each tip, so the total tip amounts don’t reveal the full MEV picture. Moreover, the atomic arbitrage market is considerable, and as we’ll explore later, Jito's tips don’t give an accurate estimate of the atomic MEV extracted.
Jito⁶ introduced a few new concepts to the Solana MEV landscape: bundles (groups of transactions executed sequentially and atomically), tips (payments to validators for including a bundle), the block engine (which simulates bundles and auctions blockspace to the most profitable combinations), and relayers (proxies that filter and forward packets to validators).
There’s more to the current MEV landscape on Solana, particularly concerning spam transactions, which largely result from unsuccessful arbitrage attempts, and the various mitigation strategies (such as priority fees, stake-weighted quality of service, and co-location of searchers and nodes). However, since these details are not central to the focus of this article, we will set them aside for now.
It's still early for Solana MEV, and until recently, Jito was the only major solution focused on boosting rewards for delegators. Following the same open-source principles, the Paladin team introduced a validator-level bot⁷ and an accompanying token that accrues value from the MEV collected by the bot.
The main idea behind Paladin is this: validators run an open-source bot that captures benign MEV, such as atomic arbitrage, directly at the validator level and share it with their delegators, while committing to avoid toxic MEV such as sandwiching; staked PAL holders can slash validators that break this commitment.
Paladin’s success, therefore, depends on validators choosing honesty over toxic MEV extraction by running the Paladin bot.
Bots like Paladin⁸ operate at the validator level, enabling them to capitalize on opportunities that arise after Jito bundles and other transactions are sent to the validator for inclusion in a block.
In this scenario, once the bot assesses the impact of the transactions and bundles, it inserts its transactions into the block. The bot doesn’t front-run the submitted transactions but leverages the price changes that result after each shred is executed.
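A highly simplified sketch of that backrun logic on a constant-product pool; the names and structure are illustrative, not Paladin's implementation:

```python
def pool_price(reserve_x: float, reserve_y: float) -> float:
    return reserve_y / reserve_x

def backrun_opportunity(reserve_x: float, reserve_y: float,
                        reference_price: float, fee: float = 0.003) -> float | None:
    """After the pending transactions execute, compare the pool price to a
    reference (e.g., another pool). If the gap exceeds the fee, there is an
    arb to insert *after* the pending flow -- no front-running involved."""
    price = pool_price(reserve_x, reserve_y)
    edge = abs(price - reference_price) / reference_price
    return edge if edge > fee else None

# Pool left off-price by pending swaps: 100 SOL / 24,000 USDC vs a $235 reference
print(backrun_opportunity(100.0, 24_000.0, 235.0))  # ~0.021 -> profitable edge
```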
Paladin can also extract MEV through DEX-CEX arbitrage and optimize routes for swaps made via DEX aggregators. However, these features are currently not used in practice, so we only briefly mention them. Since the bot is a public good, the community can contribute by adding features like NFT minting or liquidation support in the future.
The PAL token is where 10% of the value extracted by the bot in SOL gets accumulated. Paladin will go live at TGE, which will airdrop the entire supply of 1 billion PAL in the following proportions:
At the architecture level, the MEV extracted by the bot is sent to a smart contract, which then distributes it as follows:
The crucial part of the Paladin architecture is slashing. If a validator misbehaves and extracts MEV through sandwiching, staked PAL holders (other validators and their delegators) can vote to slash the rogue validator. The slashing happens if a majority of more than 50% is reached and holds at that level for a week. The slashed PAL is burned.
Other actions that could lead to slashing include not running Paladin, using closed-source upgrades, or not participating in slashing votes. This isn't an exhaustive list, as PAL stakers can vote to slash for other reasons at their discretion. While sandwiching is easy to spot, other "misbehaviors" may not be as obvious and would require monitoring tools, potentially leading to enforcement issues.
Unstaking PAL is capped at 5% per withdrawal, with a one-month cooldown before the next withdrawal can be made.
There are several controversies about Paladin⁹. Here are common criticisms:
Validators Profit Unfairly
This is not true. Palidators (validators running Paladin) receive 90% of the MEV extracted by the bot, which they can redistribute to their delegators while keeping their standard commission. The remaining 10% goes to the PAL token, with 7.5% each going to validators and their stakers. This setup ensures validators don't take a larger share of MEV profits. If a validator doesn’t share the captured MEV, delegators can switch to one with a healthy long-term track record, like Chorus One.
Run Paladin or Die
Validators must run Paladin and avoid toxic MEV extraction or any actions that could undermine their reputation for honesty. Slashing can also occur if validators run closed-source software on top of Paladin. This doesn't mean market participants can't enhance the bot. On the contrary, they are encouraged to do so and can be rewarded in PAL if their improvements are openly available to others.
No Development Post-TGE
After the PAL airdrop, the Paladin team will no longer develop the bot¹⁰. All maintenance and strategy updates will be the community's responsibility from then on. This includes adding new liquidity pools or tokens to identify emerging MEV opportunities. While a fund has been set aside for future development, it is uncertain how long it will last. Development may stall if the incentives dry up.
With the knowledge of how Paladin works, let’s evaluate its target market and assess its performance based on our collected data.
Atomic Arbitrage Market
We will start by analyzing Jito tips paid for atomic arbitrage and compare them to the overall atomic arb market to see how much of the atomic opportunities have been captured through Jito.
We will use data from mid-August 2024¹¹ onward, when the share of Jito tips related to atomic arbitrage rose significantly. We exclude earlier data to avoid bias. Interestingly, this spike happened despite a drop in the total MEV extracted through atomic arbs, indicating increased competition among searchers, who are now willing to give up more of their profits as Jito tips.
Even though tips from atomic arbs have increased compared to the total arb MEV market, they still make up only a small percentage of the total Jito tips paid.
Only 4.25% of the tips searchers paid during the sampled period were from atomic arbs (SOL 10,316 out of SOL 242,754). At a SOL price of $150, this is $1,547,400, while the total atomic MEV extraction reached $6,567,554.
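The arithmetic behind these figures, reproduced for clarity:

```python
atomic_arb_tips_sol = 10_316
total_tips_sol = 242_754
sol_price_usd = 150
total_atomic_mev_usd = 6_567_554

tip_share = atomic_arb_tips_sol / total_tips_sol        # share of all Jito tips
arb_tips_usd = atomic_arb_tips_sol * sol_price_usd      # tips from atomic arbs, USD
jito_capture = arb_tips_usd / total_atomic_mev_usd      # share of arb MEV via Jito

print(f"{tip_share:.2%}, ${arb_tips_usd:,}, {jito_capture:.1%}")
# 4.25%, $1,547,400, 23.6%
```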
So, only about 23% of the total atomic arbitrage opportunities were shared through Jito! Some striking examples include:
This shows that most on-chain arbitrage MEV is being captured outside of Jito. Unfortunately, this also leads to a high number of failed transactions.
During one of the measured five-day periods, over 1 million arbitrage transactions were made, with 519k of them submitted through the Jupiter aggregator [source]. This led to a significant number of failed transactions because multiple searchers chased the same opportunities: only the first transaction to land captures each one, while the rest fail on-chain and still pay fees.
The above data shows that Paladin can tap into a sizable on-chain arbitrage market by finding opportunities more efficiently and avoiding failed transactions. This approach would benefit validators by filling blocks with successful transactions and improving the ecosystem by reducing congestion.
The annual atomic arbitrage market is around $42.4 million. With 392 million SOL staked [source] ($58.9 billion at $150 per SOL), this could add about 0.07% APY to validator performance.
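The back-of-the-envelope behind that APY figure:

```python
annual_atomic_arb_usd = 42_400_000
staked_value_usd = 392_000_000 * 150  # ~$58.8B of staked SOL at $150
print(f"{annual_atomic_arb_usd / staked_value_usd:.2%}")  # ~0.07%
```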
Let's dive deeper into the data to see how much market the bot can take.
Distribution and Dataset
The distribution of atomic arb MEV in USD per slot for the data collection period (15 August to 10 October 2024) looks as follows:
The median value is $0.00105 per slot, with atomic arbitrage opportunities occurring in 51.6% of slots.
Paladin operated on our main validator with a 1.15m SOL stake for a week between 4 October and 11 October. Let’s see the atomic arbitrage market opportunities during the bot's operation period:
The median value is $0.00898 per slot, and atomic arb opportunities are present in 59.47% of slots.
The KS (Kolmogorov-Smirnov) test shows that the two datasets differ, with a positive shift in the distribution indicating higher values in the second dataset. Paladin therefore operated in a more favorable environment, with larger and more frequent MEV extraction opportunities than during the broader measurement period. This is especially clear when you look at the size of Jito tips during our timeframe.
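A sketch of this comparison using SciPy's two-sample KS test; the synthetic arrays stand in for the per-slot MEV values of the two periods, with medians chosen to roughly match the reported $0.00105 and $0.00898:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-ins for per-slot atomic-arb MEV (USD) in the two periods.
baseline_usd = rng.lognormal(mean=-6.9, sigma=2.0, size=50_000)
paladin_week_usd = rng.lognormal(mean=-4.7, sigma=2.0, size=10_000)

stat, p_value = stats.ks_2samp(baseline_usd, paladin_week_usd)
print(f"KS statistic={stat:.3f}, p={p_value:.2e}")
# A small p-value rejects the hypothesis that both samples come from the
# same distribution; comparing medians shows the direction of the shift.
print(np.median(baseline_usd), np.median(paladin_week_usd))
```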
Now, let's look at how Paladin performed in these circumstances.
The median arb profit is $0 per slot, with opportunities taken only in 29.64% of slots.
Here’s a more detailed summary of all three distributions:
As we can see, Paladin underperformed, capturing significantly less MEV and earning less per slot. The bot only managed to capture 15.84% of the total available atomic arbitrage opportunities.
In some of the most striking examples, the bot extracted only 0.00004 SOL (here and here), while the actual extractable value was $127.59, as seen in Tx1, Tx2, Tx3, Tx4, and Tx5.
The reason for failing to extract MEV from the opportunities in the linked transactions is that Paladin doesn’t support the traded token ($MODENG). This is a problem since memecoins are currently driving network activity and will likely contribute the largest share of MEV. These tokens emerge rapidly, requiring frequent updates to routing. One of Paladin's top priorities should be quickly adapting to capture MEV from new memecoins as they arise, and the lack of team involvement in the process is problematic in this context.
Estimated Returns
Now, let’s run a simulation to estimate the returns under different scenarios based on a stake share of 0.3% (Chorus One's share), 1%, and 10%. The returns are capped at 15.8%, which is the portion of opportunities Paladin captured in our data.
The median value for 0.3% of the total stake is around $20k, which matches the annualized value of what Chorus One earned. This increases to about $65k for a validator with 1% of the total stake and exceeds $700k for a hypothetical validator with 10%.
We also ran a simulation to estimate how much Paladin’s performance could improve if it captured 80% of available opportunities for a validator the size of Chorus One across different adoption levels—1%, 10%, 25%, and 50% of total stake using Paladin. At an estimated 1% adoption, our validator earns an additional 0.01% APY from the bot, while the total potential atomic arbitrage could generate 0.07% of the total stake.
The simulation assumes:
And in a more tangible form:
As we see, Paladin could generate a median of an additional 0.29% in APY for a validator with 0.3% of the total stake once adoption reaches 50%.
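For illustration, here is a minimal Monte Carlo sketch of this kind of estimate. Every parameter is an assumption stated in the code; this is not the article's actual simulation:

```python
import numpy as np

SLOTS_PER_YEAR = int(365 * 24 * 3600 / 0.4)  # ~78.8M slots at ~400ms each
rng = np.random.default_rng(42)

def simulate_annual_usd(stake_share: float, capture_rate: float,
                        per_slot_mev_usd: np.ndarray,
                        n_runs: int = 200) -> np.ndarray:
    """Bootstrap a validator's annual arb revenue: resample observed
    per-slot values for its leader slots (approx. stake share of all
    slots) and keep only the fraction of opportunities the bot captures."""
    leader_slots = int(SLOTS_PER_YEAR * stake_share)
    totals = np.empty(n_runs)
    for i in range(n_runs):
        sample = rng.choice(per_slot_mev_usd, size=leader_slots, replace=True)
        totals[i] = capture_rate * sample.sum()
    return totals

# Illustrative heavy-tailed per-slot MEV distribution (USD) -- a stand-in
# for the measured data, with a median near the reported ~$0.001 per slot.
per_slot = rng.lognormal(mean=-6.9, sigma=3.5, size=100_000)
runs = simulate_annual_usd(stake_share=0.003, capture_rate=0.158,
                           per_slot_mev_usd=per_slot)
print(f"median annual revenue ~ ${np.median(runs):,.0f}")
```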
We've been in touch with the Paladin team, who confirmed that a new version of the bot, P3, is in the works. This version will pivot from focusing on the atomic arbitrage market, which they no longer see as substantial enough to prioritize.
Maintenance
The bot has been stable without major issues, but Paladin requires patches to update strategies and fix smaller bugs. Maintaining the bot is also time-consuming for the engineering team, as each patch requires a restart and the process is more complex than anticipated, adding extra overhead. This is similar to a problem we faced with our Breaking Bots: maintenance and strategy-update costs were high. Eventually, we concluded that the effort was not quite worth it. With Paladin, however, a whole community could tackle this problem, so things may look different.
Paladin has great potential to boost earnings for validators and stakers by tapping into new opportunities, but it's still in the early stages of development. While our analysis shows that Paladin currently captures only around 15.84% of available atomic arbitrage opportunities, this will likely improve as the bot becomes more optimized and widely adopted. The upside is promising—the total atomic arbitrage market could add 0.07% to a validator’s APY. While capturing all of it is unlikely, even a share of this can lead to solid gains.
That said, there are challenges to address. The bot’s development will shift to the community after the token TGE, raising questions about whether there will be enough resources and motivation for continuous updates. Additionally, maintaining the bot on the validator side can be tricky, as each patch requires a restart, making it time-consuming for validators to run.
At Chorus One, we believe that the long-term health of the Solana ecosystem is paramount. Paladin builds on the same core principles as Jito—to mitigate the toxic MEV and democratize good MEV.
We developed Breaking Bots with these ideas in mind, and we see Paladin as an extension of our efforts. Two solutions are better than one, and Paladin offers an interesting alternative to what exists today. Supporting multiple approaches is a cornerstone of decentralized systems, and we welcome new ideas that build resilience.
While we don't agree with all of Paladin's choices, especially regarding the team's lack of future bot development, we believe its success will benefit the entire ecosystem, and that's why we support it.
That being said, if the core principles Paladin is built on change, or the maintenance costs outweigh the benefits in the mid-term, we will reevaluate our position.
References:
1 You can find an interesting overview of arbitrage MEV here.
2 A detailed analysis of liquidations in DeFi is available in this paper.
3 More about the NFT MEV here.
4 Chorus One also published an analysis of Solana sandwiching here.
5 An in-depth write-up on searchers by Blockworks is here.
6 Information based on Jito documentation.
7 At Chorus One, in our “Breaking Bots” paper, we proposed a similar solution. The implementation details are available on GitHub.
8 Information based on a series of blog posts by the Paladin team.
9 Some of the examples are available here and here.
10 Per the blogpost: We’re not a Foundation or Labs — we don’t run any part of Paladin, we don’t develop it, we don’t maintain it…
11 The data used in this section is available here and can be retrieved using these queries.
In the blockchain industry, where the balance between decentralization and efficiency often teeters on a knife's edge, innovations that address these challenges are paramount. Among these innovations, preconfirmations stand out as a powerful tool designed to enhance transaction speed, security, and reliability. Here, we’ll delve into what preconfirmations (henceforth referred to as “preconfirms”) are, why they matter, and how they’re set to transform the blockchain landscape.
The idea of providing a credible heads-up or confirmation that a transaction has occurred is deeply ingrained in our daily lives. Whether it's receiving an order confirmation from Amazon, verifying a credit card payment, or processing transactions in blockchain networks, this concept is familiar and widely used. In the blockchain world, centralized sequencers like those in Arbitrum function similarly, offering guarantees that your transaction will be included in the block.
However, these guarantees are not without limitations. True finality is only achieved when the transaction is settled on Ethereum. The reliance on centralized sequencers in Layer 2 (L2) networks, which are responsible for verifying, ordering, and batching transactions before they are committed to the main blockchain (Layer 1), presents significant challenges. They can become single points of failure, leading to increased risks of transaction censorship and bottlenecks in the process.
This is where preconfirms come into play. Preconfirms were introduced to address these challenges, providing a more secure and efficient way to ensure transaction integrity in decentralized networks.
Before jumping into the preconfirms trenches, let’s start by clarifying some key terms that will appear throughout this article (and are essential to the broader topic).
Builders: In the context of Ethereum and PBS, builders are responsible for selecting and ordering transactions in a block. This is a specialized role aimed at creating the highest-value block for the proposer, and block building is highly concentrated among a few entities. Blocks are submitted to relays, which act as mediators between builders and proposers.
Proposers: The role of the proposer is to validate the contents of the most valuable block submitted by the block builders, and to propose this block to the network to be included as the new head of the blockchain. In this landscape, proposers are the validators in the Proof-of-Stake consensus protocol; they are rewarded for proposing blocks (and the winning builder pays the proposer for the right to have its block proposed).
Sequencers: Sequencers are akin to air traffic controllers, particularly within Layer 2 Rollup networks. They are responsible for coordinating and ordering transactions between the Rollup and the Layer 1 chain (such as Ethereum) for final settlement. Because they have exclusive rights to the ordering of transactions, they also benefit from transaction fees and MEV. Usually, they have ZK or optimistic security guarantees.
Now that we’ve set the stage, let’s dive into the concept of preconfirms.
At their core, preconfirms can provide two guarantees: an inclusion guarantee (the transaction will make it into an upcoming block) and an execution guarantee (the transaction will execute with a particular outcome against a known state).
These two guarantees matter. Particularly for:
Speed: Traditional block confirmations can take several seconds, whereas preconfirms can provide a credible assurance much faster. This speed is particularly beneficial for "based rollups" that batch user transactions and commit them to Ethereum, resulting in faster transaction confirmations. @taikoxyz and @Spire_Labs are teams building based rollups.
Censorship Resistance: A proposer can request the inclusion of a transaction that some builders might not want to include.
Trading Use Cases: Traders may preconfirm transactions if it allows them to execute ahead of competitors.
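To make these commitments concrete, here is a toy sketch of what a signed preconfirmation could carry. The structure and field names are illustrative assumptions, not any team's actual wire format:

```python
from dataclasses import dataclass
import hashlib
import time

@dataclass(frozen=True)
class Preconfirmation:
    """Illustrative inclusion commitment: a builder or proposer promises
    that tx_hash will appear in the block for target_slot."""
    tx_hash: str
    target_slot: int
    issued_at: float
    committer_pubkey: str
    signature: str  # signature over (tx_hash, target_slot)

def issue_preconf(tx_bytes: bytes, target_slot: int, pubkey: str, sign) -> Preconfirmation:
    tx_hash = hashlib.sha256(tx_bytes).hexdigest()
    payload = f"{tx_hash}:{target_slot}".encode()
    return Preconfirmation(tx_hash, target_slot, time.time(), pubkey, sign(payload))

# `sign` would be the committer's real signing function; a stub here.
preconf = issue_preconf(b"raw tx", 9_000_001, "0xProposerKey", lambda p: "0xSig")
print(preconf.tx_hash[:16], preconf.target_slot)
```

In practice, the strength of such a commitment depends on who signs it (a builder or a proposer) and on what backs a broken promise.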
Now, zooming in on Ethereum.
The following chart describes the overall Proposer-builder separation and transaction pipeline on Ethereum.
Within the Ethereum network, preconfirms can be implemented in three distinct scenarios, depending on the specific needs of the network:
Builder preconfirms suit the trading use case best. These offer low-latency guarantees and are effective in networks where a small number of builders dominate block-building. Builders can opt into proposer support, which enhances the strength of the guarantee.
However, since only a few builders dominate block-building, successfully onboarding these players is key.
Proposers provide stronger inclusion guarantees than builders because they have the final say on which transactions are included in the block. This method is particularly useful for "based rollups," where Layer 1 validators act as sequencers.
Yet maintaining strong guarantees is the key challenge for proposer preconfirms.
The question of which solution will ultimately win remains uncertain, as multiple factors will play a crucial role in determining the outcome. We can speculate on the success of builder opt-ins for builder preconfirms, the growing traction of based rollups, and the effectiveness of proposer declaration implementations. The balance between user demand for inclusion versus execution guarantees will also be pivotal. Furthermore, the introduction of multiple concurrent proposers on the Ethereum roadmap could significantly impact the direction of transaction confirmation solutions. Ultimately, the interplay of these elements will shape the future landscape of blockchain transaction processing.
Commit-boost is an MEV-Boost-like sidecar for preconfirms.
Commit-boost facilitates communication between builders and proposers, enhancing the preconfirmation process. It’s designed to replace the existing MEV-boost infrastructure, addressing performance issues and extending its capabilities to include preconfirms.
Currently in testnet, commit-boost is being developed as neutral, non-venture-backed software for Ethereum, with the ambition of fully integrating preconfirms into its framework. Chorus One is currently running commit-boost on testnet.
Chorus One has been deeply involved with preconfirms from the very beginning, pioneering some of the first-ever preconfirms using Bolt during the ZuBerlin and Helder testnets. We’re fully immersed in optimizing the Proposer-Builder Separation (PBS) pipeline and are excited about the major developments currently unfolding in this space. Stay tuned for an upcoming special episode of the Chorus One Podcast, where we’ll dive more into this topic.
If you’re interested in learning more, feel free to reach out to us at research@chorus.one.
This is a joint research article written by Chorus One and Superscrypt
Blockchain transactions are public and viewable even before they get written to a block. This has led to maximal extractable value (‘MEV’), i.e. actors frontrunning and backrunning visible transactions to extract profit for themselves.
The MEV space is constantly evolving as competition intensifies and new avenues to extract value are always emerging. In this article we explore one such avenue - Oracle Extractable Value, where MEV can be extracted even before transactions hit the mempool.
This is particularly relevant for borrowing & lending protocols which rely on data feeds from oracles to make decisions on whether to liquidate positions or not. Read on to find out more.
Value is in a constant state of being created, destroyed, won or lost in any financialized system, and blockchains are no exception. User transactions are not isolated to their surroundings, but instead embedded within complex interactions that determine their final payoff.
Not all transaction costs are as explicit as gas fees. Fundamentally, the total value that can be captured from a transaction includes the payoff of trades preceding or succeeding it. These can be benign in nature, for example an arbitrage transaction that brings prices back in line with the market, or impose hidden taxes in the case of front-running. Overall, maximal extractable value (or “MEV”) is the value that can be captured by strategically including and ordering transactions such that the aggregate block value is maximized.
If not extracted or monetized, value is simply lost. Presently, the actualization of MEV on Ethereum reflects a complex supply chain (“PBS”) where several actors such as wallets, searchers, block builders and validators fill specialized roles. There are returns on sophistication for all participants in this value chain, most explicitly for builders which are tasked with creating optimal blocks. Validators can play sophisticated timing games which result in additional MEV capture; for example, Chorus One has run an advanced timing games setup since early 2023, and published extensively on it. In the PBS context, the best proxy for the total MEV extracted is the final bid a builder gets to submit during the block auction.
Such returns on sophistication extend to the concept of Oracle Extractable Value (OEV), which is a type of MEV that has historically gone uncaptured by protocols. This article will explain OEV, and how it can be best captured.
Oracles are one of crypto's critical infrastructure components: they are the choreographers that orchestrate and synchronize the off-chain world with the blockchain’s immutable ledger. Their influence is immense: they inform all the prices you see and interact with on-chain. Markets are constantly changing, and protocols and applications rely on secure oracle feed updates to provide DeFi services to millions of crypto users worldwide.
The current status quo is that third-party oracle networks serve as intermediaries that feed external data to smart contracts. They operate separately from the blockchains they serve, which preserves the core goal of chain consensus but introduces limitations around fair sequencing, the payments required from protocols and apps, and fragmented data sources in a decentralized world.
In practical terms, the data from oracles represents a great resource for value extraction. The market shift an oracle price update causes can be anticipated and traded profitably, by back-running any resulting arbitrage opportunities or (more prominently) by capturing resulting liquidations. This is Oracle Extractable Value. But how is it captured, and more importantly, who profits from it?
In MEV, searchers (which are essentially trading bots that run on-chain) profit from oracle updates by backrunning them in a free-for-all priority gas auction. Value is distributed between the searchers, who find opportunities particularly in the lending markets for liquidations, and the block proposers that include their prices in the ledger. Oracles themselves have not historically been a part of this equation.
OEV changes this flow by atomically coupling the backrun trade with the oracle update. This allows the oracle to capture value, by either acting as the searcher itself or auctioning off the extraction rights.
How OEV created in DeFi can be captured by MEV searchers before the dApp gets access to it.
OEV primarily impacts lending markets, where liquidations directly result from oracle updates. By bundling an oracle update with a liquidation transaction, the value capture becomes exclusive, preventing front-running since both actions are combined into a single atomic event. However, arbitrage can still occur before the oracle update through statistical methods, as traders act on the true price seen in other markets.
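A conceptual sketch of that atomic coupling: the oracle update and the liquidation travel as one bundle, so nothing can be inserted between them. The types and names below are illustrative, not a real searcher API:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    description: str
    payload: bytes

def build_oev_bundle(oracle_update: Tx, liquidation: Tx) -> list[Tx]:
    """Order matters: the price update lands first, then the liquidation it
    unlocks. Executed atomically, the pair cannot be front-run."""
    return [oracle_update, liquidation]

bundle = build_oev_bundle(
    Tx("push ETH/USD = 2,310 to the feed", b"..."),
    Tx("liquidate undercollateralized position #42", b"..."),
)
for i, tx in enumerate(bundle):
    print(i, tx.description)
```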
UMA and Oval:
API3 and OEV Network:
Warlock
The upshot of this MEV capture is that oracles have a new dimension to compete on. OEV revenue can be shared with dApps by providing oracle updates free of charge, or by outright subsidizing integrations. Ultimately, protocols with OEV integration will thus be able to bid more competitively for users.
OEV solutions share the same basic idea - shifting the value extraction from oracle updates to the oracle layer, by coupling the price feed update with backrun searcher transactions.
There are several ways of approaching this - an OEV solution may integrate with an existing oracle via an official integration, or through third party infrastructure. These solutions may also be purpose built and provide their own price update.
Heuristically, the key components of an OEV solution are the oracle update and the MEV transaction - these can be either centralized or decentralized.
We would expect purpose-built or “official” extensions to existing oracles to perform better, due to lower latency than what would be required to run third-party logic on top of the upstream oracle. These would also be much more attractive from a risk perspective, as with third-party infrastructure, upstream updates could spontaneously break integrations.
In practice, a centralized auction can make the most sense in latency-sensitive use cases. For example, it may allow a protocol to offer more leverage, as the risk of being stranded with bad debt due to stale price updates is minimized. By contrast, a decentralized auction likely yields the highest aggregate value in use cases where latency is less sensitive, i.e. where margin requirements are higher.
OEV is still in its early stages, with much development ahead. We're excited to see how this space evolves and will continue to monitor its progress closely as new opportunities and innovations emerge.
About Chorus One
Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.