Bitcoin has firmly established itself as digital gold, the apex store of value in the cryptocurrency ecosystem. Adoption has reached Wall Street: banks are expanding their crypto services and offering direct BTC exposure via ETFs. With this level of institutional integration, the next pressing question becomes: how do you generate yield on BTC holdings? Making things more interesting, institutions will focus on solutions that optimize for Security, Yield and Liquidity.
This poses a fundamental challenge for any Bitcoin L2 solution (and staking): since Bitcoin lacks native yield (unless you run a miner), and serves primarily as a store of value, any yield generated in another asset faces selling pressure if the ultimate goal is to accumulate more BTC.
When Bitcoiners participate in any ecosystem – whether it's an L2, DeFi protocol, or alternative chain – their end goal remains simple: stack more sats. This creates inherent selling pressure for any token used to pay staking rewards or security budgets. While teams are developing interesting utility for alternative tokens, the reality is that without a thriving ecosystem, sustainable yield remains a pipe dream. Teams are mainly forced to bootstrap network effects via points or other incentives.
This brings us to a critical point: Bitcoin L2s' main competition isn't other Bitcoin L2s or BTCfi, but established ecosystems like Solana and Ethereum. The sustainability of yield within a Bitcoin L2 cannot be achieved until a sufficiently robust ecosystem exists within that L2 – and this remains the central challenge. Interesting new ZK rollup providers like Alpen Labs and Starknet claim they can import network effects by offering EVM compatibility on Bitcoin while enhancing security. As Bitcoin's tenure as a store of value grows and it increasingly resembles gold, monetisation schemes for the asset will become more common.
However, we need to face reality – with 86% of VC funding for these L2s allocated post-2024, we're still years away from maturity. Is it too late for Bitcoin L2s to catch up?
Security alone is no longer a sufficient differentiator. Solana and Ethereum have proven resilient enough to earn institutional trust, while Bitcoin L2s must justify their additional complexity, particularly around smart contract risk when interacting with UTXOs.
Being EVM-compatible does not automatically create network effects. It might help bring developers / dapps over, but creating a winning ecosystem flywheel will only become tougher with time. In fact, the winners of this cycle have differentiated with a product-first approach (Hyperliquid, Pumpdotfun, Ethena…), not their VM or tech. As such, providing extra BTC economic security or alignment won't be enough without a killer product in the long run.
Incremental security improvements alone aren't the most compelling selling point – we've seen re-staking initiatives like Eigenlayer struggle with this exact issue. AVS aren't generally willing to pay extra for security (especially since they’ve had it for free); selling security is hard. We’ve seen the same promise of cryptoeconomic security fail before with Cosmos ICS and Polkadot Parachains.
That said, Bitcoin L2s do have a compelling security advantage. They inherit Bitcoin's massive $1.2T+ security budget (hashrate), far exceeding what Solana or Ethereum can offer. For institutions prioritizing safety over yield size, this edge might matter – even if yields are somewhat lower. Bitcoin Timestamping could create a completely new market. Can L2s tap into this extra economic security and liquidity while 10x’ing product experience? Again, if your security is higher but the product is not great, it won’t matter.
BTC whales aren't primarily interested in bridging assets; they want to accumulate more Bitcoin. This raises an important question: from their perspective, is there a meaningful difference between locking BTC in an L2 versus in Solana?
Perceived risk is the key factor here. An institution might actually prefer Coinbase custody over a decentralized signer set where they might not know the operators, weighing legal risk against technical risk. This perception is heavily influenced by user experience – if a product isn't intuitive, the risk is perceived as higher. A degen whale, on the other hand, might be comfortable with bridging into Solayer to farm the airdrop or with 'staking' into Bitlayer for yield.
At Chorus One we’ve classified every staking offering to better inform our institutional clients who are interested in putting their BTC to work, following the guidance of our friends at Bitcoin Layers.
Want to dive deeper into the staking offerings available through Bitcoin Layers? Shoot our analyst Luis Nuñez (and author of this paper) a DM on X!
Since risk is a matter of perception, and depending on your yield, security, and liquidity preferences, your ideal option might look like this:
And still be super convenient. We're in an interesting period where Bitcoin TVL, or BTCfi, is increasing dramatically (led by Babylon), while the % of BTC that has remained idle for at least 1 year keeps rising, now at 60%. This tells us that Bitcoin dominance is growing thanks to institutional adoption, but that there are no compelling yield solutions yet to activate that BTC.
Institutions have historically preferred lending BTC over exploring L2/DeFi solutions, primarily due to familiarity (Coinbase, Cantor). According to Binance, only 0.79% of BTC is locked in DeFi, meaning that DeFi lending (e.g. Aave) is not as popular. Even so, wrapped BTC in DeFi is still around 5 times larger than the amount of BTC in staking protocols.
Staking in Bitcoin Layers requires significant education. L2s like Stacks and CoreDAO use their proximity to miners to secure the system and tap into the liquidity by providing incentives for contribution or merge mining. More TradFi-like operations might be an interesting differentiator for a BTC L2. We've seen significant institutional engagement in basis trades in the past, earning up to 5% yield with Deribit and other brokers.
However, lending's reputation has suffered severely post-2022. The collapses of BlockFi, Celsius, and Voyager exposed substantial custodial and counterparty risks, damaging institutional trust. As mentioned, Bitcoin L2s like Stacks offer an alternative by avoiding traditional custody while giving other parties, like miners, a role in providing yield via staking. For those with a more passive appetite, staking can be the ideal route to yield. Today, however, staking solutions are early and offer just points with the promise of a future airdrop, with the exception of CoreDAO.
Staking in Bitcoin L2s is very different. Typically, we see a multi-sig of operators that order L2 transactions and timestamp a hashed representation of the block into Bitcoin. This allows for state recreation of the L2 at any point in time if the L2 is compromised. Essentially, these use Bitcoin for DA (Data Availability). This means that consensus is still dependent on the multi-sig operators, so these could still collude. Innovations with ZK (Alpen Labs, Citrea), UTXO-to-Smart Contract (Arch, Stacks) and BitVM (BoB) are all trying to improve these security guarantees.
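To make the data-availability idea concrete, here is a minimal, hypothetical sketch of the commitment step: hash an L2 block and wrap the digest in an OP_RETURN payload that the operator multi-sig would broadcast in a Bitcoin transaction. This illustrates the general pattern only, not any specific L2's implementation.

```python
import hashlib
import json

def commit_l2_block(l2_block: dict) -> bytes:
    """Hash a hypothetical L2 block so its digest can be timestamped on Bitcoin.

    Anyone holding the full L2 data can later recompute this hash and check it
    against the on-chain commitment, which is what allows the L2 state to be
    recreated if the operator set misbehaves.
    """
    serialized = json.dumps(l2_block, sort_keys=True).encode()
    return hashlib.sha256(serialized).digest()

def op_return_script(digest: bytes) -> bytes:
    """Build a minimal OP_RETURN script (0x6a = OP_RETURN) carrying the digest."""
    assert len(digest) <= 75, "simple pushes only; relay policy caps OP_RETURN data at 80 bytes"
    return bytes([0x6a, len(digest)]) + digest

if __name__ == "__main__":
    block = {"height": 1042, "prev_state_root": "ab12...", "txs": ["tx1", "tx2"]}
    digest = commit_l2_block(block)
    print("commitment:", digest.hex())
    print("script:    ", op_return_script(digest).hex())
```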
In Ethereum, leading L2s typically have a single sequencer (vs. a multi-sig) to settle transactions to the L1. Critically however, Ethereum L1 has the capability to do fraud proofs allowing for block reorgs if there's a malicious transaction. In Bitcoin, the L1 doesn’t have verification capabilities, so this is not possible… until BitVM?
BitVM aims to allow fraud proofs on the Bitcoin L1. BitVM potentially offers a 10x improvement in security for Bitcoin L2s, but it comes with significant operational challenges.
BitVM is a magnificent project where leaders from every ecosystem are collaborating to make it a reality. We’ve seen potentially drastic improvements between BitVM1 and BitVM2:
BitVM allows fraud proofs to happen through a sequence of standard Bitcoin transactions with carefully crafted scripts. At its core, verification in BitVM works because:
1. Program Decomposition
Before any transactions occur, the program to be verified (like a SNARK verifier) is split into sub-programs that fit in a Bitcoin block:
2. Operator Claim
The operator executes the entire program off-chain and claims:
They commit to all these values using cryptographic commitments in their on-chain transactions.
3. Challenge Initiation
When a challenger believes the operator is lying:
4. The Critical On-Chain Execution
Here's where Bitcoin nodes perform the actual verification:
The challenger creates a "Disprove" transaction that:
5. Bitcoin Consensus in Action
When nodes process this transaction:
The Bitcoin network reaches consensus on this result just like it does with any transaction's validity. The technology enables Bitcoin-native verification of arbitrary computations without changing Bitcoin's consensus rules. This opens the door for more sophisticated smart contracts secured directly by Bitcoin, but implementation hurdles are substantial since operators need to front the liquidity and face several risks:
As such, incentives to operate the bridge will be quite attractive to mitigate the risks. If we’re able to mitigate these, security will be significantly enhanced and might even provide interoperability between different layers, which could unlock interesting use-cases while retaining the Bitcoin proximity. Will this proximity allow for the creation of killer products and real yields?
For a Bitcoin L2 to succeed, it must offer products unavailable elsewhere or provide substantially better user experiences. The previously mentioned Bitcoin proximity has to be exploited for differentiation.
The jury is still out on whether ZK rollup initiatives can bootstrap meaningful network effects. These rollups will ultimately need a killer app to thrive, or to port one over from the EVM with the promise of Bitcoin liquidity. Otherwise, why would dapps choose to settle on Bitcoin?
The winning strategy for Bitcoin L2s involves:
Below, we’ll dive into some of my top institutional picks, a few of which we’ve invested in.
Babylon’s main value-add is to provide Bitcoin economic security. As we’ve mentioned several times, this offering alone will not be enough, and the team is well aware. Personally, I'm bullish on the app-chain approach, following models like Avalanche or Cosmos, but simply using BTC for the initial bootstrap of security and liquidity.
While the app-chain thesis represents the endgame, reaching network effects requires 10x the effort since everything is naturally fragmented. Success demands an extremely robust supporting framework – something only Cosmos has arguably achieved with sufficient decentralization (and suffered its consequences). Avalanche provides the centralized support needed to unify a fragmented ecosystem.
The ideal endgame resembles apps in the App Store – distinct from each other but with clear commonalities. In this analogy, Bitcoin serves as the iPhone – the trusted foundation for distribution.
Mezo (investor)
Mezo's approach with mUSD is particularly interesting as it reduces token selling pressure if mUSD gains significant utility. Their focus on "real world" applications could drive mainstream adoption, with Bitcoin-backed loans as the centerpiece. Offering fixed rates as low as 1% unlocks interesting DeFi use cases around looping with reduced risk, while undercutting costs compared to Coinbase + Morpho BTC lending offerings (at around 5%).
Plasma (investor)
Purpose built for stablecoin usage. Zero-fee USDT transfers, parallel execution and strong distribution strategies position Plasma well in the ecosystem. Other features include confidential transactions and high customization around gas and fees.
Arch is following the MegaEth approach to curate a mafia ecosystem, a parallel execution environment, and close ties to Solana. In Arch, users send assets directly to smart contracts using native Bitcoin transactions.
Stacks has a very interesting setup since there's no selling pressure for stakers (they earn BTC rather than STX). As the oldest and most recognized Bitcoin L2 brand, they have significant advantages. While Clarity presents challenges, this may be changing with innovations like smart-contract-to-Bitcoin-transaction capabilities and support for other programming languages in development. StackingDAO (investor) is the leading LST in the ecosystem and provides interesting yield opportunities in both liquid STX and liquid sBTC.
Looking to stake your STX? Click here!
BOB (Building on Bitcoin)
BoB is at the forefront of BitVM development (target mainnet in 2025) and is looking to use Babylon for security bootstrapping. The team is doing a fantastic job at exploiting the BTC proximity with BitVM while developing institutional grade products.
CoreDAO features strong LST adoption tailored for institutions and is the only staking yield mechanism that's live and returns actual $. CoreDAO Ventures is doing a great job at backing teams early in their development.
Botanix is the leading multi-sig setup with its Spiderchain, where each BTC bridged to the chain is managed by a new, randomized multi-sig, increasing robustness by providing 'forward security'. Interestingly, Botanix will not have their own token (at least initially) and will only use BTC and pBTC, meaning rewards and fees will be in BTC.
For retail users, four standout solutions I like:
Bitcoin L2s face significant challenges in their quest for adoption and sustainability. The inherent tension between Bitcoin's store-of-value proposition and the yield-generating mechanisms of L2s creates fundamental hurdles. However, projects that can offer unique capabilities, seamless user experiences, and compelling institutional cases have the potential to overcome these obstacles and carve out valuable niches in the expanding Bitcoin ecosystem.
The key to success lies not in merely replicating what Ethereum or Solana already offer, but in leveraging Bitcoin's unique strengths to create complementary solutions that expand the utility of the world's leading cryptocurrency without compromising its fundamental value proposition. Adoption is one killer product away.
Want to learn more about yield opportunities on Bitcoin? Reach out to us at research@chorus.one and let’s chat!
In the world of blockchain technology, where every millisecond counts, the speed of light isn’t just a scientific constant—it’s a hard limit that defines the boundaries of performance. As Kevin Bowers highlighted in his article Jump Vs. the Speed of Light, the ultimate bottleneck for globally distributed systems, like those used in trading and blockchain, is the physical constraint of how fast information can travel.
To put this into perspective, light travels at approximately 299,792 km/s in a vacuum, but in fiber optic cables (the backbone of internet communication), it slows to about 200,000 km/s due to the medium's refractive index. This might sound fast, but when you consider the distances involved in a global network, delays become significant. For example:
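As an illustration, the one-way fiber delay between two points is roughly the route distance divided by 200,000 km/s; the city pairs and distances below are rough great-circle figures chosen for this sketch.

```python
# Rough one-way and round-trip latency estimates over fiber.
SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 of c, as noted above

routes_km = {
    "New York - London": 5_570,
    "New York - Singapore": 15_300,
    "Frankfurt - Tokyo": 9_350,
}

for route, km in routes_km.items():
    one_way_ms = km / SPEED_IN_FIBER_KM_S * 1_000
    print(f"{route}: ~{one_way_ms:.0f} ms one-way, ~{2 * one_way_ms:.0f} ms round-trip")
```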
For applications like high-frequency trading or blockchain consensus mechanisms, this delay is simply too long. In decentralized systems, the problem worsens because nodes must exchange multiple messages to reach agreement (e.g., propagating a block and confirming it). Each round-trip adds to the latency, making the speed of light a "frustrating constraint" when near-instant coordination is the goal.
Beyond the physical delay imposed by the speed of light, blockchain networks face an additional challenge rooted in information theory: the Shannon Capacity Theorem. This theorem defines the maximum rate at which data can be reliably transmitted over a communication channel. It's expressed as:

C = B × log₂(1 + S/N)

where C is the channel capacity (bits per second), B is the bandwidth (in hertz), and S/N is the signal-to-noise ratio. In simpler terms, the theorem tells us that even with a perfect, lightspeed connection, there's a ceiling on how much data a network can handle, determined by its bandwidth and the quality of the signal.
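As a quick sanity check of the formula, here is the capacity of a hypothetical 1 GHz channel with a 30 dB signal-to-noise ratio (both figures are arbitrary illustrations):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """C = B * log2(1 + S/N): the maximum reliable data rate of a channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr_linear = 10 ** (30 / 10)  # 30 dB -> S/N = 1000
capacity = shannon_capacity_bps(1e9, snr_linear)
print(f"{capacity / 1e9:.2f} Gbit/s")  # ~9.97 Gbit/s
```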
For blockchain systems, this is a critical limitation because they rely on broadcasting large volumes of transaction data to many nodes simultaneously. So, even if we could magically eliminate latency, the Shannon Capacity Theorem reminds us that the network’s ability to move data is still finite. For blockchains aiming for mass adoption—like Solana, which targets thousands of transactions per second—this dual constraint of light speed and channel capacity is a formidable hurdle.
In a computing landscape where recent technological advances have prioritized fitting more cores into a CPU rather than making them faster, and where the speed of light emerges as the ultimate bottleneck, the Jump team refuses to settle for off-the-shelf solutions or the short-term fix of buying more hardware. Instead, it reimagines existing solutions to extract maximum performance from the network layer, optimizing data transmission, reducing latency, and enhancing reliability to combat the "noise" of packet loss, congestion, and global delays.
The Firedancer project is about tailoring this concept for a blockchain world where every microsecond matters, breaking the paralysis in decision-making that arises when systems have many unoptimized components.
Firedancer is a high-performance validator client for the Solana blockchain, written in C and developed by Jump Crypto, a division of Jump Trading focused on advancing blockchain technologies. Unlike traditional validator clients that rely on generic software stacks and incremental hardware upgrades, Firedancer is a ground-up reengineering of how a blockchain node operates. Its mission is to push the Solana network to the very limits of what's physically possible, addressing the dual constraints of light speed and channel capacity head-on.
At its core, Firedancer is designed to optimize every layer of the system, from data transmission to transaction processing. It proposes a major rewrite of the three functional components of the Agave client: networking, runtime, and consensus mechanism.
Firedancer is a big project, and for this reason it is being developed incrementally. The first Firedancer validator is nicknamed Frankendancer. It is Firedancer’s networking layer grafted onto the Agave runtime and consensus code. Precisely, Frankendancer has implemented the following parts:
All other functionality is retained by Agave, including the runtime itself which tracks account state and executes transactions.
In this article, we'll dive into on-chain data to compare the performance of the Agave client with Frankendancer. Through data-driven analysis, we quantify whether these advancements can be seen on-chain in Solana's performance; not all improvements will be visible through this lens.
You can walk through all the data used in this analysis via our dedicated dashboard.
While signature verification and block distribution engines are difficult to track using on-chain data, studying the dynamical behaviour of transactions can provide useful information about QUIC implementation and block packing logic.
Transactions on Solana are encoded and sent in QUIC streams into validators from clients, cfr. here. QUIC is relevant during the FetchStage, where incoming packets are batched (up to 128 per batch) and prepared for further processing. It operates at the kernel level, ensuring efficient network input handling. This makes QUIC a relevant piece of the Transaction Processing Unit (TPU) on Solana, which represents the logic of the validator responsible for block production. Improving QUIC means ultimately having control on transaction propagation. In this section we are going to compare the Agave QUIC implementation with the Frankendancer fd_quic—the C implementation of QUIC by Jump Crypto.
The first difference relies on connection management. Agave utilizes a connection cache to manage connections, implemented via the solana_connection_cache module, meaning there is a lookup mechanism for reusing or tracking existing connections. It also employs an AsyncTaskSemaphore to limit the number of asynchronous tasks (set to a maximum of 2000 tasks by default). This semaphore ensures that the system does not spawn excessive tasks, providing a basic form of concurrency control.
Frankendancer implements a more explicit and granular connection management system using a free list (state->free_conn_list) and a connection map (fd_quic_conn_map) based on connection IDs. This allows precise tracking and allocation of connection resources. It also leverages receive-side scaling and kernel bypass technologies like XDP/AF_XDP to distribute incoming traffic across CPU cores with minimal overhead, enhancing scalability and performance, cfr. here. It does not rely on semaphores for task limiting; instead, it uses a service queue (svc_queue) with scheduling logic (fd_quic_svc_schedule) to manage connection lifecycle events, indicating a more sophisticated event-driven approach.
Frankendancer also implements a stream handling pipeline. Precisely, fd_quic provides explicit stream management with functions like fd_quic_conn_new_stream() for creation, fd_quic_stream_send() for sending data, and fd_quic_tx_stream_free() for cleanup. Streams are tracked using a fd_quic_stream_map indexed by stream IDs.
Finally, for packet processing, Agave's approach focuses on basic packet sending and receiving, with asynchronous methods like send_data_async() and send_data_batch_async().
Frankendancer implements detailed packet processing with specific handlers for different packet types: fd_quic_handle_v1_initial(), fd_quic_handle_v1_handshake(), fd_quic_handle_v1_retry(), and fd_quic_handle_v1_one_rtt(). These functions parse and process packets according to their QUIC protocol roles.
Differences in QUIC implementation can be seen on-chain at transactions level. Indeed, a more "sophisticated" version of QUIC means better handling of packets and ultimately more availability for optimization when sending them to the block packing logic.
After the FetchStage and the SigVerifyStage—which verifies the cryptographic signatures of transactions to ensure they are valid and authorized—there is the Banking stage. Here verified transactions are processed.
At the core of the Banking stage is the scheduler. It represents a critical component of any validator client, as it determines the order and priority of transaction processing for block producers.
Agave implements a central scheduler introduced in v2.18. Its main purpose is to loop and constantly check the incoming queue of transactions and process them as they arrive, routing them to an appropriate thread for further processing. It prioritizes transactions according to their priority fees and requested compute units.
The scheduler is responsible for pulling transactions from the receiver channel, and sending them to the appropriate worker thread based on priority and conflict resolution. The scheduler maintains a view of which account locks are in-use by which threads, and is able to determine which threads a transaction can be queued on. Each worker thread will process batches of transactions, in the received order, and send a message back to the scheduler upon completion of each batch. These messages back to the scheduler allow the scheduler to update its view of the locks, and thus determine which future transactions can be scheduled, cfr. here.
Frankendancer implements its own scheduler in fd_pack. Within fd_pack, transactions are prioritized based on their reward-to-compute ratio—calculated as fees (in lamports) divided by estimated CUs—favoring those offering higher rewards per resource consumed. This prioritization happens within treaps, a blend of binary search trees and heaps, providing O(log n) access to the highest-priority transactions. Three treaps—pending (regular transactions), pending_votes (votes), and pending_bundles (bundled transactions)—segregate types, with votes balanced via reserved capacity and bundles ordered using a mathematical encoding of rewards to enforce FIFO sequencing without altering the treap’s comparison logic.
Scheduling, driven by fd_pack_schedule_next_microblock, pulls transactions from these treaps to build microblocks for banking tiles, respecting limits on CUs, bytes, and microblock counts. It ensures votes get fair representation while filling remaining space with high-priority non-votes, tracking usage via cumulative_block_cost and data_bytes_consumed.
To resolve conflicts, it uses bitsets—a container that represents a fixed-size sequence of bits—which are like quick-reference maps. Bitsets—rw_bitset (read/write) and w_bitset (write-only)—map account usage to bits, enabling O(1) intersection checks against global bitset_rw_in_use and bitset_w_in_use. Overlaps signal conflicts (e.g., write-write or read-write clashes), skipping the transaction. For heavily contested accounts (exceeding PENALTY_TREAP_THRESHOLD of 64 references), fd_pack diverts transactions to penalty treaps, delaying them until the account frees up, then promoting the best candidate back to pending upon microblock completion. A slow-path check via acct_in_use—a map of account locks per bank tile—ensures precision when bitsets flag potential issues.
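To make the two ideas above concrete, here is a deliberately simplified Python model of reward-per-CU ordering plus account-conflict skipping. It is a toy sketch, not the fd_pack C implementation; penalty treaps, vote reservations, and bundles are omitted.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Tx:
    neg_priority: float          # heapq is a min-heap, so store negative reward/CU
    sig: str = field(compare=False)
    fee_lamports: int = field(compare=False)
    cus: int = field(compare=False)
    writes: frozenset = field(compare=False)
    reads: frozenset = field(compare=False)

def schedule_microblock(pending: list, cu_limit: int) -> list:
    """Greedy toy scheduler: highest reward-per-CU first, skip account conflicts."""
    heap = [Tx(-tx["fee"] / tx["cus"], tx["sig"], tx["fee"], tx["cus"],
               frozenset(tx["writes"]), frozenset(tx["reads"])) for tx in pending]
    heapq.heapify(heap)

    scheduled, cu_used = [], 0
    w_in_use, rw_in_use = set(), set()   # rough analogue of the w/rw bitsets
    while heap and cu_used < cu_limit:
        tx = heapq.heappop(heap)
        # write-write or read-write overlap means a conflict: skip it here
        # (the real fd_pack would park it in a penalty treap and retry later)
        if tx.writes & rw_in_use or (tx.writes | tx.reads) & w_in_use:
            continue
        scheduled.append(tx.sig)
        cu_used += tx.cus
        w_in_use |= tx.writes
        rw_in_use |= tx.writes | tx.reads
    return scheduled

txs = [
    {"sig": "A", "fee": 50_000, "cus": 200_000, "writes": {"amm"}, "reads": {"usdc"}},
    {"sig": "B", "fee": 5_000,  "cus": 10_000,  "writes": {"amm"}, "reads": set()},
    {"sig": "C", "fee": 2_000,  "cus": 5_000,   "writes": {"nft"}, "reads": set()},
]
print(schedule_microblock(txs, cu_limit=1_400_000))  # ['B', 'C']; A conflicts with B on 'amm'
```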
Vote fees on Solana are a vital economic element of its consensus mechanism, ensuring network security and encouraging validator participation. In Solana’s delegated Proof of Stake (dPoS) system, each active validator submits one vote transaction per slot to confirm the leader’s proposed block, with an optimal delay of one slot. Delays, however, can shift votes into subsequent slots, causing the number of vote transactions per slot to exceed the active validator count. Under the current implementation, vote transactions compete with regular transactions for Compute Unit (CU) allocation within a block, influencing resource distribution.
Data reveals that the Frankendancer client includes more vote transactions than the Agave client, resulting in greater CU allocation to votes. To evaluate this difference, a dynamic Kolmogorov-Smirnov (KS) test can be applied. This non-parametric test compares two distributions by calculating the maximum difference between their Cumulative Distribution Functions (CDFs), assessing whether they originate from the same population. Unlike parametric tests with specific distributional assumptions, the KS-test’s flexibility suits diverse datasets, making it ideal for detecting behavioral shifts in dynamic systems. The test yields a p-value, where a low value (less than 0.05) indicates a significant difference between distributions.
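For reference, the core of the test is a single scipy call; the arrays below are placeholders standing in for the per-block CU series, and the article applies the test over rolling windows of blocks rather than once.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Placeholder samples standing in for per-block CUs allocated to votes,
# one array per client; the real analysis uses the on-chain block data.
agave_vote_cus = rng.normal(loc=6.0e6, scale=0.8e6, size=2_000)
frankendancer_vote_cus = rng.normal(loc=6.5e6, scale=0.8e6, size=2_000)

stat, p_value = ks_2samp(agave_vote_cus, frankendancer_vote_cus)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}")
if p_value < 0.05:
    print("The two CU distributions differ significantly.")
```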
When comparing CU usage for non-vote transactions between Agave (Version 2.1.14) and Frankendancer (Version 0.406.20113), the KS-test shows that Agave’s CDF frequently lies below Frankendancer’s (visualized as blue dots). This suggests that Agave blocks tend to allocate more CUs to non-vote transactions compared to Frankendancer. Specifically, the probability of observing a block with lower CU usage for non-votes is higher in Frankendancer relative to Agave.
Interestingly, this does not correspond to a lower overall count of non-vote transactions; Frankendancer appears to outperform Agave in including non-vote transactions as well. Together, these findings imply that Frankendancer validators achieve higher rewards, driven by increased vote transaction inclusion and efficient CU utilization for non-vote transactions.
Frankendancer's ability to include more vote transactions may be due to the fact that on Agave there is a maximum number of QUIC connections that can be established between a client (identified by IP Address and Node Pubkey) and the server, ensuring network stability. The number of streams a client can open per connection is directly tied to their stake. Higher-stake validators can open more streams, allowing them to process more transactions concurrently, cfr. here. During high network load, lower-stake validators might face throttling, potentially missing vote opportunities, while higher-stake validators, with better bandwidth, can maintain consistent voting, indirectly affecting their influence in consensus. Frankendancer doesn't seem to suffer from the same restriction.
Although the inclusion of vote transactions plays a relevant role in Solana consensus, there are two other metrics worth exploring: Skip Rate and Validator Uptime.
Skip Rate reflects a validator's availability to correctly propose a block when selected as leader. A high skip rate means lower total rewards, mainly due to missed MEV and Priority Fee opportunities. Missing a high number of slots also reduces total TPS, worsening the final UX.
Validator Uptime impacts vote latency and consequently final staking rewards. This metric is estimated via Timely Vote Credit (TVC), which indirectly measures the distance a validator takes to land its votes. A 100% effectiveness on TVC means that validators land their votes in less than 2 slots.
As we can see, there are no major differences before epoch 755. Data shows a recently elevated Skip Rate for Frankendancer and a corresponding low TVC effectiveness. However, it is worth noting that, since these metrics are based on averages, and considering that a smaller share of stake is running Frankendancer, small fluctuations in Frankendancer's performance need more time to be reabsorbed.
The scheduler plays a critical role in optimizing transaction processing during block production. Its primary task is to balance transaction prioritization—based on priority fees and compute units—with conflict resolution, ensuring that transactions modifying the same account are processed without inconsistencies. The scheduler orders transactions by priority, then groups them into conflict-free batches for parallel execution by worker threads, aiming to maximize throughput while maintaining state coherence. This balancing act often results in deviations from the ideal priority order due to conflicts.
To evaluate this efficiency, we introduced a dissipation metric, D, that quantifies the distance between a transaction's optimal position o(i) — based on priority and dependent on the scheduler — and its actual position in the block a(i), defined as

D = (1/N) Σᵢ |a(i) − o(i)|

where N is the number of transactions in the considered block.
This metric reveals how well the scheduler adheres to the priority order amidst conflict constraints. A lower dissipation score indicates better alignment with the ideal order. It is clear that the dissipation D has an intrinsic component that accounts for account congestion and for the time dependency of transaction arrival. In an ideal case, these factors should be equal for all schedulers.
Given the intrinsic nature of the dissipation, the numerical value of this estimator doesn't carry much relevance. However, when comparing the results for two types of scheduler, we can gather information on which one resolves conflicts better. Indeed, a higher value of the dissipation estimator indicates a preference for conflict resolution over transaction prioritization.
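A toy implementation of the metric, under the averaging normalization assumed above (the priorities and block ordering are illustrative):

```python
def dissipation(block_txs: list[dict]) -> float:
    """Toy version of the dissipation metric D described above.

    block_txs lists transactions in the order they appear in the block, each
    with a 'priority' value. o(i) is the position the transaction would occupy
    if the block were sorted purely by priority; a(i) is its actual position.
    D averages |a(i) - o(i)| over the N transactions.
    """
    n = len(block_txs)
    if n == 0:
        return 0.0
    actual = {tx["sig"]: i for i, tx in enumerate(block_txs)}
    ideal = sorted(block_txs, key=lambda tx: tx["priority"], reverse=True)
    optimal = {tx["sig"]: i for i, tx in enumerate(ideal)}
    return sum(abs(actual[s] - optimal[s]) for s in actual) / n

block = [
    {"sig": "t1", "priority": 10},
    {"sig": "t2", "priority": 50},   # out of order vs. pure priority
    {"sig": "t3", "priority": 30},
]
print(dissipation(block))  # ~1.33: every transaction is displaced from its ideal slot
```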
Comparing the Frankendancer and Agave schedulers highlights that dissipation is higher for Frankendancer, independently of the version. This is clearer when looking at the dynamical KS test. Only in very few instances did the Agave scheduler show a higher dissipation with statistically significant evidence.
Whether the resolution of conflicts—and hence parallelization—is due to the scheduler implementation or to the QUIC implementation is hard to tell from these data. Indeed, better conflict resolution can also be achieved simply by having more transactions to select from.
Finally, comparing the percentiles of Priority Fees per transaction also hints at a different conflict resolution in Frankendancer. Indeed, despite the overall number of transactions (both vote and non-vote) and extracted value being higher than Agave's, the median PF is lower.
In this article we provide a detailed comparison of the Agave and Frankendancer validator clients on the Solana blockchain, focusing on on-chain performance metrics to quantify their differences. Frankendancer, the initial iteration of Jump Crypto’s Firedancer project, integrates an advanced networking layer—including a high-performance QUIC implementation and kernel bypass—onto Agave’s runtime and consensus code. This hybrid approach aims to optimize transaction processing, and the data reveals its impact.
On-chain data shows Frankendancer includes more vote transactions per block than Agave, resulting in greater compute unit (CU) allocation to votes, a critical factor in Solana’s consensus mechanism. This efficiency ties to Frankendancer’s QUIC and scheduler enhancements. Its fd_quic implementation, with granular connection management and kernel bypass, processes packets more effectively than Agave’s simpler, semaphore-limited approach, enabling better transaction propagation.
The scheduler, fd_pack, prioritizes transactions by reward-to-compute ratio using treaps, contrasting Agave’s priority formula based on fees and compute requests. To quantify how well each scheduler adheres to ideal priority order amidst conflicts we developed a dissipation metric. Frankendancer’s higher dissipation, confirmed by KS-test significance, shows it prioritizes conflict resolution over strict prioritization, boosting parallel execution and throughput. This is further highlighted by Frankendancer’s median priority fees being lower.
A lower median for Priority Fees and higher extracted value indicates more efficient transaction processing. For validators and delegators, this translates to increased revenue. For users, it means a better overall experience. Additionally, more votes for validators and delegators lead to higher revenues from SOL issuance, while for users, this results in a more stable consensus.
The analysis, supported by the Flipside Crypto dashboard, underscores Frankendancer’s data-driven edge in transaction processing, CU efficiency, and reward potential.
A huge thanks to Amin, Cooper, Hannes, Jacob, Michael, Norbert, Omer, and Teemu for sharing their feedback on the model and the article (this doesn’t mean they agree with the presented numbers!).
Zero-knowledge proofs are entering a period of rapid growth and widespread adoption. The core technology has been battle-tested, and we have begun to see the emergence of new services and more advanced use cases. These include outsourcing of proof computation from centralized servers, which opens the door to new revenue-generating opportunities for crypto infrastructure providers.
How significant could this revenue become? This article explores the proving ecosystem and estimates the market size in the coming years. But first, let’s start by revisiting the fundamentals.
ZK proofs are cryptographic tools that prove a computation's results are correct without revealing the underlying data or re-running the computation.
There are two main types of zk proofs:
A zk proof needs to be generated and verified. Typically, a prover submits the proof and the computation result to a verifier contract, which outputs a "yes" or "no" to confirm validity. While verification is easy and cheap, generating proofs is compute-intensive.
Proving is expensive because it needs significant computing power to 1) translate programs into polynomials and 2) run the programs expressed as polynomials, which requires performing complex mathematical operations.
This section overviews the current zk landscape, focusing on project types and their influence on proof generation demand.
Demand Side
Supply Side
For the privacy-focused rollup Aztec, only one proof per transaction will be generated in the browser, as depicted in the proving tree below. A similar dynamic is expected with other projects.
Monetization strategies will include fees and token incentives.
The primary revenue model will rely on charging base fees. These should cover the compute costs of proof generation. Prioritization of proving work will likely require paying optional priority fees.
The demand side and proving marketplaces will offer native token incentives to provers. These incentives are expected to be substantial and initially exceed the market size of proving fees.
To understand the proving market, we can draw analogies with the proof-of-stake (PoS) and proof-of-work (PoW) markets. Let’s examine how these comparisons hold up.
At the beginning of 2025, the PoS market is worth $16.3 billion, with the overall crypto market cap around $3.2 trillion. Assuming validators earn 5% of staking rewards, the staking market would represent approximately $815 million. This excludes priority fees and MEV rewards, which can be a significant part of validator revenues.
PoS characteristics have some similarities to zk-proving:
The PoW market can be roughly gauged using Bitcoin’s inflation rate, which is expected to be 0.84% in 2025. With a $2 trillion BTC market cap, this amounts to around $16.8 billion annually, excluding priority fees.
Both zk-proving and PoW rely on hardware, but they take different approaches. While PoW uses a “winner-takes-all” model, zk-proving creates a steady stream of proofs, resulting in more predictable earnings. This makes zk-proving less dependent on highly specialized hardware compared to Bitcoin mining.
The adoption of specialized hardware, like ASICs and FPGAs, for zk-proving will largely depend on the crypto market’s volume. Higher volumes are likely to encourage more investment in these technologies.
With these dynamics in mind, we can explore the revenue potential zk-proving represents.
Our analysis will be based on the Analyzing and Benchmarking ZK-Rollups paper, which benchmarks zkSync and Polygon zkEVM on various metrics, including proving time.
While the paper benchmarks zkSync Era and Polygon zkEVM, our analysis will focus on zkSync due to its more significant transaction volumes (230M per year vs. 5.5M for Polygon zkEVM). At higher transaction volumes, Polygon zkEVM has comparable costs to zkSync ($0.004 per transaction).
Approach
Results
A single Nvidia L4 GPU can prove a batch of ~4,000 transactions on zkSync in 9.5 hours. Given that zkSync submits a new batch to L1 every 10 minutes, around 57 NVIDIA L4 GPUs are required to keep up with this pace.
Proof Generation Cost
Knowing the compute time, we can calculate proving costs per batch, proof, and transaction:
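The arithmetic behind those figures can be sketched as follows. The GPU hourly rate is a placeholder assumption; the actual inputs are in the spreadsheet referenced below.

```python
# Inputs from the benchmark discussed above
proving_hours_per_batch = 9.5    # one NVIDIA L4 GPU proving one zkSync batch
txs_per_batch = 4_000
batch_interval_minutes = 10      # zkSync submits a new batch to L1 every 10 minutes

# Assumption: placeholder hourly cost of one L4 GPU; swap in your own figure.
gpu_hourly_usd = 0.60

gpus_needed = proving_hours_per_batch * 60 / batch_interval_minutes
cost_per_batch = proving_hours_per_batch * gpu_hourly_usd
cost_per_tx = cost_per_batch / txs_per_batch

print(f"GPUs to keep pace: {gpus_needed:.0f}")      # ~57
print(f"Cost per batch:    ${cost_per_batch:.2f}")
print(f"Cost per tx:       ${cost_per_tx:.4f}")
```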
The above calculations can be followed in detail in Proving Market Estimate (rows 1-29).
Proving costs depend on the efficiency of hardware and proof systems. The hardware costs can be optimized by, for example, using bare metal machines.
2024: Current Costs
2025: Optimizations Begin
2030: Proving costs fall to $0.001 per transaction across all rollups.
2024: Real Data
The number of transactions generated by rollups and other demand sources:
2025: Market Takes Off
The proving market begins to gain momentum. Estimated number of transactions: ~4.4B, including:
2030: zk-Proving at Scale
Proving will have reached widespread adoption. Estimated number of transactions: ~600B
We estimate the proving surplus based on previously estimated proving costs. This surplus is revenue from base and priority fees minus hardware costs. As the market matures, base fees and proving costs decrease, but priority fees will be a significant revenue driver.
Token incentives add a further value boost. While it's difficult to foresee the size of these investments, the estimate is based on information collected from the projects.
2024: Early Market
2025: Expanding Demand
The total market is projected at $97M, including:
2030: Almost a Two-Billion-Dollar Market
The total zk-proving market opportunity is estimated at $1.34B.
A detailed analysis supporting the calculations is available in Proving Market Estimate (rows 32-57).
The estimates with so many variables and for such a long term will always have a margin of error. To support the main conclusion, we include a sensitivity analysis that presents other potential outcomes in 2025 and 2030 based on different transaction volumes and proving surplus. For the sake of simplicity, we left the proving costs intact at $0.059 and $0.001 per transaction in 2025 and 2030, respectively.
In 2025, the most pessimistic scenario estimates a total market value of just $12.5M, with less than a 10% proving surplus and 2B transactions. Conversely, the ultra-optimistic scenario imagines the market at $55M, based on a 50% surplus and 6B transactions.
In 2030, if things don’t go well, we could see a proving market of roughly $300M, from 10% proving surplus and 300B transactions. The best outcome assumes a $1.7B market based on a 90% surplus and 900B transactions.
Estimating so far into the future comes with inherent uncertainties. Below are potential error factors categorized into downside and upside scenarios:
Downside
Upside
After PoW and PoS, zk is the next-generation crypto technology that complements its predecessors. Comparing proving revenue opportunities with PoW or PoS is tricky because they serve different purposes. Still, for context:
We estimated that the zk-proving market could grow to $97M by 2025 and $1.34B by 2030. While these estimates are more of an educated guess, they’re meant to point out the trends and factors anyone interested in this space should monitor. These factors include:
Let’s revisit these forecasts a year from now.
A special thanks to Vishal from Multicoin and Max from Anza for their insights and discussions on this proposal.
Token emission mechanisms play a critical role in the economic security and long-term sustainability of blockchain networks. In the case of Solana, the current fixed emission schedule operates independently of network dynamics, potentially leading to inefficiencies in staking participation, liquidity allocation, and overall network incentives. This proposal introduces a market-based emission mechanism that dynamically adjusts SOL issuance in response to fluctuations in staking participation.
The rationale for this adjustment is twofold: first, to enhance network security by ensuring that validator incentives remain sufficient under varying staking conditions, and second, to foster a more efficient allocation of capital within the Solana ecosystem, particularly in the DeFi sector. By linking token issuance to staking participation, the proposed model aims to mitigate the adverse effects of fixed inflation, such as excessive dilution of non-staking participants and unnecessary selling pressure on SOL.
SIMD228 introduces a dynamic adjustment mechanism based on staking participation. The model replaces the fixed emissions schedule with a function that responds to the fraction of the total SOL supply staked. The equation describing the new issuance rate is:

i(s) = r × [ 1 − √s + c × max(0, 1 − √(2s)) ]

where r is the current emission curve, s is the fraction of total SOL supply staked, and c is a constant (approximately π) that boosts issuance when the stake rate falls below 0.5.
The issuance rate becomes more aggressive at around 0.5 of the total supply staked to encourage a dynamic equilibrium around that point. Indeed, the multiplier of the current emission curve r shifts from ~0.70 at 0.4 to ~0.29 at 0.5. This means that, at a fraction of the total supply staked of 0.4, the new model mints ~70% of current inflation; at 0.5, just ~29%.
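As a numerical check, the multipliers quoted above follow from the issuance formula with the constant c set to π (our reconstruction from the 0.70 and 0.29 figures):

```python
import math

def simd228_multiplier(s: float, c: float = math.pi) -> float:
    """Multiplier applied to the current emission curve r under SIMD-228."""
    return 1 - math.sqrt(s) + c * max(0.0, 1 - math.sqrt(2 * s))

for s in (0.4, 0.5, 0.65):
    print(f"s = {s:.2f}: issuance = {simd228_multiplier(s):.2f} * r")
# s = 0.40: ~0.70 * r
# s = 0.50: ~0.29 * r
# s = 0.65: ~0.19 * r
```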
The corresponding APR from staking is represented below.
Although the 0.5 threshold may seem arbitrary, only data can adequately assess the right stake rate to target. We don't have data we can use to understand how SOL stakers respond to issuance, since the current issuance is stake-insensitive.
However, other ecosystems trigger inflation based on staking participation to target a fixed staking rate, balancing network usage and chain security. A prominent example is the Cosmos Hub. Yet although Cosmos aims for 67% of the total supply staked, actual user behavior depends on network usage: the Hub - meant to be a hub for security - has a current bonded amount of 57%. Ethereum's issuance also depends on the amount of staked ETH, which at the time of writing stands at 27.57%.
Some may argue that having an issuance rate that fluctuates can make returns on staked assets unpredictable. However, we believe that is just a matter of where the new equilibrium will be.
That said, it is hard to tell if 50% of the total supply staked is what Solana needs to grow with a healthy ecosystem. Thus, we believe 50% is no better or worse than any other sensible number to trigger "aggressiveness", since security is based on price. Indeed, a chain can be considered secure if the cost to attack it (Ca) is greater than the profit (P):

Ca > P
Currently, Solana's inflation schedule follows an exponential decay model, where the inflation rate decreases annually by a fixed disinflation rate (15%) until it reaches the long-term target of 1.5%. This model was adopted in February 2021, with the inflation rate reaching 4.6% as of February 2025 (cfr. ref). To achieve smooth disinflation, current issuance decreases by ~0.0889% per epoch until it reaches its long-term target.
The current curve is not sensitive to any shift in stake behavior; the only change happens at the APR level. Indeed, the APR for a perfectly performing validator is approximately

APR ≈ r / s

where r is the inflation rate and s is the fraction of the total supply staked.
The current curve does not encourage any particular staking dynamic; its sole aim is to dilute the value of those who do not stake, independently of the fraction of the total supply staked. Any change in dynamics is therefore driven by time rather than by network needs.
It’s clear that to understand if Solana needs a change in the inflation model, we must assess how the current model influences the network's activity.
Our first consideration regards the dynamic evolution of the stake ratio. As we can see, despite decreasing over time, it shows periods of stillness, moving sideways and remaining confined to specific regions. Examples of this are the epoch ranges [400, 550] and [650, 740], where the stake rate stays between [0.7, 0.75] and [0.65, 0.70], respectively, cfr. here. Notably, assuming epochs last ~2 days, both periods are roughly a year long.
Despite the prolonged static behavior, the stake rate shows a strong correlation with inflation (correlation coefficient of 0.78), meaning that when inflation decreases, the stake rate also decreases.
Since inflation is insensitive to the staking rate, the slow change in inflation triggers a shift in the fraction of total SOL staked. This indicates a willingness of users to move stake only when dilution for non-stakers decreases.
To assess if non-staked SOL moves into DeFi, we can study the elasticity of total value locked (TVL) with respect to the stake rate (SR):

E = %ΔTVL / %ΔSR

where %ΔTVL is the percentage change in TVL in DeFi, and %ΔSR is the percentage change in the stake rate.
Elasticity E, in this context, measures the responsiveness of the change in TVL in DeFi, to change in the staking participation. A negative value indicates that TVL and SR move in the opposite direction, meaning that the decrease in SR correlates with liquidity moving into DeFi.
We need to see an increase in TVL and a decrease in SR to have a hint that non-staked SOL moves into DeFi. However, the elasticity indicates a low correlation between the staking rate and TVL.
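For completeness, the elasticity can be computed point by point from the two series; the numbers below are placeholders rather than the on-chain data behind the charts.

```python
def elasticity(tvl_series: list[float], sr_series: list[float]) -> list[float]:
    """Point-by-point elasticity E = %dTVL / %dSR between consecutive observations."""
    out = []
    for i in range(1, len(tvl_series)):
        pct_d_tvl = (tvl_series[i] - tvl_series[i - 1]) / tvl_series[i - 1]
        pct_d_sr = (sr_series[i] - sr_series[i - 1]) / sr_series[i - 1]
        out.append(pct_d_tvl / pct_d_sr if pct_d_sr != 0 else float("nan"))
    return out

# Placeholder series: DeFi TVL (USD) and fraction of SOL supply staked
tvl = [4.0e9, 4.2e9, 4.3e9, 4.3e9]
sr = [0.70, 0.69, 0.68, 0.675]
print([round(e, 2) for e in elasticity(tvl, sr)])
```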
If we compare DeFi TVL on Solana and Ethereum, we see that Solana still has a lot of room to grow.
The difference between TVL stems from the lack of adoption of lending protocols on Solana, while DEXes are catching up.
If we focus on DEXs' TVL per traded volume, we see that Solana is a more efficient network when it comes to trading activity.
On the contrary, TVL per active user is still low, indicating users’ preference for low-TVL interactions, like trading.
This indicates users’ willingness to use the chain and points to a high potential for growth. However, combining this observation with the staking behavior, there could be friction in depositing capital into DeFi due to dilution.
This may be because DeFi yields are still low compared with staking. For example:
The reason why SOL is needed for DeFi growth is its low volatility compared to other assets prone to price discovery (i.e. non-stablecoins). This property has several implications, like:
This makes SOL indispensable for growing DeFi activity, especially for DEX liquidity provision for new projects launching their tokens.
In this section we have seen how DeFi grows because of externally injected capital. Further, the majority of TVL is locked in DEXs - with possibly fictitious TVL in illiquid memecoins. Solana has 33% of its TVL in DEXs, Ethereum just ~8%. Ethereum has 24% of its TVL in lending, Solana 13%. As a comparison, at the time of writing, AAVE alone holds 2% of the ETH supply, while Kamino + Solend hold just 0.2% of the SOL supply.
A key argument in favor of SIMD-228, as articulated by Anza researcher Max Resnick in his recent X article, is that inflation within the Solana network functions as a "leaky bucket," resulting in substantial financial inefficiencies. The theory contends that the current excessive issuance of SOL, approximately 28M SOL per year valued at $4.7B at current prices, leads to significant losses for SOL holders, including stakers, due to the siphoning of funds by governments and intermediaries.
Specifically, the theory highlights U.S. tax policies that treat staking rewards as ordinary income, subjecting them to a top tax rate of 37%—considerably higher than the 20% long-term capital gains rate—creating a "leaky bucket" effect that erodes value. Additionally, Resnick points to the role of powerful centralized exchanges like Binance and Coinbase, which leverage their market dominance to impose high commissions, such as 8%, on staking rewards, further draining resources from the network. The conclusion is that, by reducing inflation through SIMD-228, Solana could save between $100M and $400M annually, depending on the degree of leakage, thereby aligning with the network's ethos of optimization.
The current snapshot seems to point to an overpayment for security. Indeed, the current SOL staked value amounts to ~$53B, which is securing a TVL of ~$15B. Since the cost to control Solana is 66% of SOL staked, we have ~$35B securing ~$15B. However, it's a common misconception that the current 4.6% inflation is what determines the overpayment, leading to ~28M SOL minted per year, or $4B at today's prices. This has nothing to do with security overpayment, and other ecosystems like Ethereum print ~$8B to secure the network.
Our task is then to assess under which condition the overpayment statement holds. To assess if the current curve is prone to overpayment of security, we need to study the evolution of the parameters involved. This is not an easy task and each model is prone to interpretation. However, based on the above data, we can build a simple dynamical model to quantify the “overpaying” claim.
The model is meant to be a toy model showing how the current curve (pre-SIMD228) can guarantee the security of the chain or overpay for security, depending on different growth assumptions. The main idea is to assess security via the condition described in Eq. (3), where the profit is estimated by assuming an attacker can drain the whole TVL. In this way, the chain can be considered secure provided that

0.66 × (value of staked SOL) / TVL > 1

which defines the security ratio.
In our model we consider the stake rate decreasing by 0.05 every 150 epochs, based on the observations in the previous section. We further consider an amount of burnt SOL per epoch of 1,800 SOL (cfr. Solana Transaction Analysis Dashboard), and a minimum stake rate of 0.33.
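Below is a condensed sketch of the toy model as we read it. All starting values and growth rates are placeholder parameters that the reader can change, mirroring what the dashboard allows.

```python
EPOCHS_PER_YEAR = 365 // 2          # ~2-day epochs
DISINFLATION_PER_EPOCH = 0.000889   # ~0.0889% decrease per epoch (current curve)

def simulate(years: float,
             sol_supply: float = 590e6,     # placeholder starting values
             stake_rate: float = 0.65,
             inflation: float = 0.046,
             sol_price: float = 140.0,
             tvl: float = 15e9,
             price_growth_per_epoch: float = 0.001,
             tvl_growth_per_epoch: float = 0.002,
             burn_per_epoch: float = 1_800.0):
    """Toy model: track the security ratio 0.66 * staked value / TVL epoch by epoch."""
    security_ratio = 0.66 * stake_rate * sol_supply * sol_price / tvl
    for epoch in range(1, int(years * EPOCHS_PER_YEAR) + 1):
        # current (pre-SIMD-228) curve: smooth decay toward the 1.5% terminal rate
        inflation = max(0.015, inflation * (1 - DISINFLATION_PER_EPOCH))
        sol_supply += sol_supply * inflation / EPOCHS_PER_YEAR - burn_per_epoch
        # stake rate drifts down by 0.05 every 150 epochs, floored at 0.33
        if epoch % 150 == 0:
            stake_rate = max(0.33, stake_rate - 0.05)
        sol_price *= 1 + price_growth_per_epoch
        tvl *= 1 + tvl_growth_per_epoch
        security_ratio = 0.66 * stake_rate * sol_supply * sol_price / tvl
        if security_ratio < 1:
            return epoch / EPOCHS_PER_YEAR, security_ratio
    return None, security_ratio

breach_year, ratio = simulate(years=10)
print("secure over the whole horizon" if breach_year is None
      else f"security ratio drops below 1 after ~{breach_year:.1f} years")
```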
The first case we want to study is when TVL grows faster than SOL price. We assumed the following growth rates:
The dynamical evolution obtained as an outcome of these assumptions is depicted below.
We can see how, with the assumed growth rates, the current inflation curve guarantees a secure chain for up to 2.5 years. Notably, this happens at a stake rate of 0.5 and a SOL price slightly below SOL's ATH. This corresponds to an inflation rate of 3% and an APR of 6.12%. After this point, the curve does not dilute non-staked capital enough to bring the chain back to a secure level.
Of course, changing the growth rates for DeFi's TVL and SOL price changes the outcome, and we don't have a crystal ball to say what will happen 10 years from now. For example, just assuming SOL price growth higher than TVL growth, the current curve results in 10 years of overpayment for security.
This model shows how the current overpayment for security can drastically change over time, based on different growth assumptions. To enable the reader to draw their conclusions, we have built a dashboard that allows users to modify our assumptions and analyze the impact of adjusting various growth parameters. The dashboard is available here.
Solana requires beefy machines to run well. This is because there is a dilution of stake for non-optimally performing validators, decreasing their APR in favor of top-performing validators (see, e.g., here and here).
For example, let's consider that 60% of the stake has an uptime of 99.8% — i.e., 60% of the stake has a TVC effectiveness of 99.8% — while 40% of the stake has an uptime of 95%. When accounting for APR share, we have a multiplier of

0.6 × 0.998 / (0.6 × 0.998 + 0.4 × 0.95) ≈ 0.611

meaning the 60% of the stake at 99.8% uptime takes 61.1% of the total APR (i.e., of inflation) at the expense of the non-optimally performing 40% of the stake.
Despite this being in line with Solana's needs for top-performing validators, such a mechanism implies higher costs for validators. These machines are relatively expensive, ranging from $900/month to $1,500/month. To ensure that a validator can continue to validate when a machine fails or needs to reboot, a professional node operator needs two machines per validator identity. Furthermore, Solana uses a lot of network bandwidth. The costs vary by vendor and location, estimated at $100–200 per month. On top of that, there are voting costs of around 2 SOL a day. Assuming a SOL price of $160, this corresponds to an overall cost of between $128,800 and $137,200 a year. This is without accounting for engineering costs!
Assuming an 8% commission on staking rewards, a validator with 0.1% of stake needs — at 1 SOL = $160 — an APR ranging between 2.60% and 2.77% to break even. However, at the current staking rate, SIMD228 pushes the APR to 1.40%, making 1,193 out of 1,317 validators unprofitable from inflation alone. Clearly, lowering the SOL price changes the APR needed to break even!
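Retracing the break-even arithmetic: the only extra input is the total amount of SOL staked, which we set to roughly 387M SOL (an assumption chosen so the calculation reproduces the quoted range).

```python
SOL_PRICE = 160.0
TOTAL_STAKED_SOL = 387e6        # assumption: ~387M SOL staked network-wide
VALIDATOR_STAKE_SHARE = 0.001   # 0.1% of total stake
COMMISSION = 0.08
ANNUAL_COSTS_USD = (128_800, 137_200)  # cost range from the previous paragraph

for cost in ANNUAL_COSTS_USD:
    # validator revenue = commission * delegated stake * APR * price -> solve for APR
    break_even_apr = cost / (COMMISSION * TOTAL_STAKED_SOL * VALIDATOR_STAKE_SHARE * SOL_PRICE)
    print(f"annual cost ${cost:,}: break-even APR ~ {break_even_apr:.2%}")
```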
It is worth noting that, if SIMD228 is implemented in a year from now, assuming a stake rate of 0.5, the current curve would produce an APR of ~6%. At the same level of stake rate, the proposed curve would produce an APR of ~2%.
If we analyze the distribution of commissions, dividing validators into cohorts, we see that 50% of validators with less than 0.05% of stake have commissions higher than zero, and 40% of validators with a stake share between 0.05% and 0.5% have commissions higher than zero. Here the cohorts are defined as
If we look at the cohorts' dynamical evolution, we can see how the median of Cohort 3 started to adopt 0 commissions around epoch 600 (Apr 9, 2024), while Cohort 4 only recently started to opt for this solution. Cohorts 1 and 2 are more stable over time. This is a clear sign that commissions are set based on market conditions, probably indicating that they are zero when the value extracted from MEV and fees is enough to guarantee profitability.
This ties validator revenues to MEV and network fees, making the fraction of total supply staked a parameter highly dependent on the market and broader network activity.
Indeed, these add extra revenue for stakers, and so there is no need for higher inflation. However, this assumes fairness in the MEV and fee share, but we know these are long-tailed distributions. This property implies that having a higher stake unfairly exposes bigger validators to a higher likelihood of being the leader for juicier blocks.
By considering the distribution of MEV and fees from the start of the year, we can run simulations to see the effect of stake share on this “Market APR.”
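The logic of those simulations can be sketched as follows. The per-slot reward distribution here is a placeholder long-tailed lognormal rather than the observed MEV and fee data, and the total-stake figure is the same assumption used earlier.

```python
import numpy as np

rng = np.random.default_rng(7)

SLOTS_PER_YEAR = int(365 * 24 * 3600 / 0.4)   # ~0.4 s per slot
TOTAL_STAKED_SOL = 387e6                       # assumption, as in the break-even sketch

def simulate_market_apr(stake_share: float, n_runs: int = 300) -> np.ndarray:
    """Monte Carlo sketch: a validator leads a slot with probability equal to its
    stake share; per-slot MEV + fees are drawn from a placeholder long-tailed
    (lognormal) distribution, not the observed on-chain one."""
    aprs = np.empty(n_runs)
    stake_sol = stake_share * TOTAL_STAKED_SOL
    for i in range(n_runs):
        n_leader_slots = rng.binomial(SLOTS_PER_YEAR, stake_share)
        rewards_sol = rng.lognormal(mean=-3.5, sigma=1.5, size=n_leader_slots).sum()
        aprs[i] = rewards_sol / stake_sol
    return aprs

for share in (0.0005, 0.005):   # 0.05% vs 0.5% of total stake
    apr = simulate_market_apr(share)
    print(f"stake share {share:.2%}: median APR {np.median(apr):.2%}, "
          f"5th percentile {np.percentile(apr, 5):.2%}")
```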
From the plot above, it's clear that a low stake share has higher variance and a lower median, incurring a non-null probability of ending the year with a low Market APR. Considering that most of the revenue comes from MEV and most of it is shared with delegators, the dynamics around it could enhance centralization. Other possibilities are
It is also worth noting that the simulations above are highly optimistic since they include MEV and PF from the January “craziness”. By excluding those very profitable days, we have a smaller Market APR.
This is likely still optimistic, since at the time of writing - epoch 747 - the APR from fees and MEV is 0.79% and 1%, respectively. If we run simulations considering just data from the end of February, the overall market APR decreases further.
Low-stake validators notoriously cut costs on machines, running Solana on underperforming infrastructure. This results in lower Timely Vote Credit (TVC) effectiveness and a higher skip rate. The former reduces their network APR, raising the APR they need to be profitable; the latter reduces the Market APR they can extract, exacerbating the "MEV unfairness" between stake shares.
Another risk we see is that the Market APR per staked SOL will increase sharply if there is a shock in the fraction of SOL staked. While the MEV and fees a validator earns depend on its share of block proposals, and thus on its share of staked SOL, the gain per staked SOL depends on the total amount of SOL staked. In other words, a 1% share of staked SOL, S1, produces on average M from MEV and fees. If the fraction of SOL staked goes down, the same 1% share, now S2 SOL with S2 < S1, still makes M from MEV. Since M/S1 < M/S2, revenue per staked SOL increases. This behaviour is depicted in the image below for a fixed share of staked SOL of 1%.
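A toy numerical example of this effect (the supply figure is approximate and the revenue figure is invented; only the direction of the relationship matters):

```python
# Toy example: a validator holding a fixed 1% share of the staked supply.
# Its MEV + fee revenue M tracks its share of leader slots (1% either way),
# but the SOL it has at stake shrinks if the overall staking fraction drops.

TOTAL_SUPPLY = 590e6     # SOL, approximate circulating supply
M = 10_000               # yearly MEV + fee revenue in SOL for a 1% share (invented)

for staked_fraction in (0.65, 0.40):
    validator_stake = 0.01 * staked_fraction * TOTAL_SUPPLY
    print(f"staked fraction {staked_fraction:.0%}: market APR = {M / validator_stake:.2%}")
```

The same M spread over a smaller stake yields a higher Market APR per staked SOL.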
Although this seems to be a point in favour of aggressively lowering issuance, we think that, combined with the risk of encouraging "bad" MEV to increase proceeds, it may lead to more staked capital being used to front-run users.
This makes Solana vulnerable to dilution from bad actors, since the APR for staked SOL coming from market activity would be drastically higher than the APR from inflation. This is risky because it becomes possible to make relatively more profit from MEV. Put simply, larger actors can accumulate SOL at discounts created by others unstaking.
However, it is worth mentioning that, assuming a period of ~200 epochs to see SIMD228 implemented and a stake rate of 0.4, the proposed curve produces an APR of 6.47%, meaning that the effect induced by MEV is mitigated.
Introducing a market-based emission mechanism for Solana represents a fundamental shift from a fixed issuance schedule to a dynamic, staking-sensitive model. This proposal aims to align SOL issuance with actual network conditions, optimizing security incentives, removing unnecessary inflationary pressure, and fostering ecosystem growth. By adjusting emissions based on the fraction of total SOL supply staked, the model seeks to maintain an equilibrium that balances validator rewards with broader economic activity within the Solana ecosystem.
The analysis highlights key insights regarding the rigidity of the current staking rate, its correlation with inflation, and the limited elasticity between staking and Total Value Locked (TVL) in DeFi. The findings suggest that Solana's existing inflation structure primarily dilutes non-stakers rather than dynamically responding to network needs. Moreover, despite the increasing role of MEV and transaction fees in validator revenues, the distribution remains skewed, raising concerns about potential centralization effects under the new regime.
While the proposal addresses inefficiencies in capital allocation, its impact on validator sustainability remains a critical concern. The simulations indicate that under SIMD-228, a significant fraction of validators may become unprofitable, making revenue generation more dependent on MEV and network fees. This shift introduces new risks, including possible off-chain agreements to manipulate MEV distribution or incentives for adverse behaviors.
In conclusion, while SIMD-228 introduces a more responsive and theoretically efficient emission mechanism, its broader implications for validator economics, staking participation, and DeFi liquidity require further empirical validation. Although we believe that dynamic inflation tied to the fraction of the total supply staked is better aligned with network needs, we advocate a less aggressive reduction in order to make overall validator profitability less dependent on market conditions, reducing security risks. This less aggressive reduction may be achieved if SIMD228 takes around a year to be implemented.
Solana processes thousands of transactions per second, which creates intense competition for transaction inclusion in the limited space of a slot. The high throughput and low block time (~400ms) require transactions to be propagated, prioritized, and included in real-time.
High throughput on Solana comes with another advantage: low transaction costs. Transaction fees have been minimal, at just 0.000005 SOL per signature. While this benefits everyone, it comes with a minor trade-off—it makes spam inexpensive.
For end-users, spam means slower transaction finalization, higher costs, and unreliable performance. Spam can even halt the network, as happened with a DDoS attack in 2021 and an NFT mint in 2022.
Against this backdrop, Solana introduced significant updates in 2022: stake-weighted quality of service (swQoS) and priority fees. Both are designed to ensure the network prioritizes higher-value transactions, albeit through different approaches.
Another piece of infrastructure that can help reduce transaction latency is Jito MEV. It enables users to send tips to validators in exchange for ensuring that transaction bundles are prioritized and processed by them.
This article will explore these solutions, break down their features, and assess their effectiveness in transaction landing latency.
Let’s start with a basic building block—a transaction.
Solana has two types of transactions: voting and non-voting (regular). Voting transactions achieve consensus, while non-voting transactions change the state of the network's accounts.
A Solana transaction consists of several components that define how data is structured and processed on the blockchain¹:
A single transaction can have multiple accounts, instructions, and signatures.
Below is an example of a non-voting transaction, including the components mentioned above:
On Solana, a transaction can be initiated by a user or a smart contract (a program). Once initiated, the transaction is sent to an RPC node, which acts as a bridge between users, applications, and the blockchain.
The RPC node forwards the transaction to the current leader—a validator responsible for building the next block. Solana uses a leader schedule, where validators take turns proposing blocks. During their turn, the leader collects transactions and produces four consecutive blocks before passing the role to the next validator.
Validators and RPC nodes are two types of nodes on Solana. Validators actively participate in consensus by voting, while RPC nodes do not. Aside from this, their structure is effectively the same.
So, why are RPCs needed? They offload non-consensus tasks from validators, allowing validators to focus on voting. Meanwhile, RPC nodes handle interactions with applications and wallets, such as fetching balances, submitting transactions, and providing blockchain data.
The main difference is that validators are staked, securing the network, while RPC nodes are not.
After reaching the validator, the transactions are processed in a few stages, which include²³:
The transaction is considered confirmed if it is voted for by ⅔ of the total network stake. It is finalized after 31 blocks.
In this setup, all validators and RPC nodes compete for the same limited bandwidth to send transactions to leaders. This creates inefficiencies, as any node can overwhelm the leader by spamming more transactions than the leader can handle.
To improve network resilience and enhance user experience, Solana introduced QUIC, swQoS, and priority fees, as outlined in this December 2022 post:
With the adoption of the QUIC protocol, trusted connections between nodes are required to send transactions. The swQoS system prioritizes these connections based on stake. In this framework, non-staked RPC nodes have limited opportunities to send transactions directly to the leader. Instead, they primarily rely on staked validators to forward their transactions.
Technically, a validator must configure swQoS individually for each RPC node, establishing a trusted peer relationship. When this service is enabled, any packets the RPC node sends are treated as though they originate from the validator configuring swQoS.
Validators are allocated a portion of the leader’s bandwidth proportional to their stake. For example, a validator holding 1% of the total stake can send up to 1% of the transaction packets during each leader’s slot.
From the leader’s perspective, 80% of available connections are reserved for staked nodes, while the remaining 20% are allocated to RPC nodes. To qualify as a staked node, a validator must maintain a minimum stake of 15,000 SOL.
While swQoS does not guarantee immediate inclusion of all transactions, it significantly increases the likelihood of inclusion for transactions submitted through nodes connected to high-stake validators.
Priority fees serve the same role as swQoS by increasing the chances of transaction inclusion, though they use a completely different mechanism.
There are two types of fees on Solana⁵:
Of the total fees from a transaction, 50% is burned, while 50% is received by the leader processing the transaction. A proposal to award the validator 100% of the priority fee has been passed and is expected to be activated in 2025 (see SIMD-0096).
Priority fees help validators prioritize transactions, particularly during high congestion periods when many transactions compete for the leader's bandwidth. Since fees are collected before transactions are executed, even failed transactions pay them.
During the banking stage of Solana’s transaction processing, transactions are non-deterministically assigned to queues within different execution threads. Within each queue, transactions are ranked by their priority fee and arrival time⁶. While a higher priority fee doesn’t guarantee that a transaction will be executed first, it does increase its chances.
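To illustrate the ordering rule, here is a simplified sketch of one such queue. This is not the actual scheduler implementation, and the fee units are illustrative; it only shows ranking by priority fee with arrival time as a tiebreaker.

```python
import heapq
from dataclasses import dataclass, field

# Simplified sketch of ranking within one banking-stage queue:
# higher priority fee first, earlier arrival breaking ties.
@dataclass(order=True)
class QueuedTx:
    sort_key: tuple = field(init=False, repr=False)
    signature: str = field(compare=False)
    priority_fee: int = field(compare=False)   # illustrative units
    arrival_ns: int = field(compare=False)

    def __post_init__(self):
        # heapq is a min-heap, so negate the fee to pop the highest fee first.
        self.sort_key = (-self.priority_fee, self.arrival_ns)

queue: list[QueuedTx] = []
heapq.heappush(queue, QueuedTx("tx_a", priority_fee=100, arrival_ns=2))
heapq.heappush(queue, QueuedTx("tx_b", priority_fee=500, arrival_ns=5))
heapq.heappush(queue, QueuedTx("tx_c", priority_fee=500, arrival_ns=1))

while queue:
    print(heapq.heappop(queue).signature)   # tx_c, tx_b, tx_a
```

Popping the queue yields tx_c, then tx_b, then tx_a: the two high-fee transactions come first, ordered by arrival.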
The final puzzle of transaction prioritization is Jito. This modified Solana client allows searchers to send tips to validators in exchange for including groups of transactions, known as bundles, in the next block.
It could be argued that the Jito infrastructure prioritizes transactions using a tipping mechanism, as users can send a single transaction with a tip to improve its chances of landing fast.
For a deeper explanation of how Jito works, check out our previous article on the Paladin bot, which provides more details.
We now have a clearer understanding of how all three solutions contribute to transaction inclusion and prioritization. But how do they affect latency? Let’s find out.
Methodology
To calculate the time to inclusion of a transaction, we measure the difference between the time it is included in a block and the time it is generated. On Solana, the generation time can be determined from the timestamp of the transaction’s recent blockhash. Transactions with a recent blockhash older than 150 slots—approximately 90 seconds—expire.
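In code, the measurement reduces to a difference of two block times. The sketch below uses stubbed lookups in place of the real RPC queries; the blockhash and slot values are made up.

```python
def time_to_inclusion(tx, get_block_time, slot_of_blockhash):
    """Latency = block time of the including slot minus block time of the slot
    whose blockhash the transaction references as its recent blockhash."""
    generated_at = get_block_time(slot_of_blockhash(tx["recent_blockhash"]))
    included_at = get_block_time(tx["slot"])
    return included_at - generated_at

# Toy usage with stubbed lookups (real values would come from RPC calls):
block_times = {100: 1_732_000_000, 140: 1_732_000_016}   # slot -> unix timestamp
blockhash_slots = {"9xQ...": 100}                         # recent blockhash -> slot
tx = {"recent_blockhash": "9xQ...", "slot": 140}
print(time_to_inclusion(tx, block_times.get, blockhash_slots.get))   # 16 seconds
```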
The latest blockhash is assigned to the transaction before it is signed, so transactions signed by bots, which sign almost immediately after fetching the blockhash, will show shorter measured times to inclusion than transactions generated by normal users. This method is not perfect, but it still allows us to collect valuable information about latency and user topology.
Other factors beyond swQoS and priority fees, such as the geographical proximity of nodes to the leader and validator/RPC performance, also impact inclusion times; we do not fully account for those.
To reduce the possible biases, we consider only slots proposed by our main identity from November 18th to November 25th, 2024.
Time to Inclusion
The time to inclusion across all transactions, without any filtering, has a trimodal distribution suggesting at least three transaction types. The highest peak is at 63 seconds, followed by another at 17 seconds, and a smaller one is at 5 seconds.
The peaks at 17 and 63 seconds are likely from regular users. This double peak could occur because general users don't set maxRetries to zero when generating the transaction. The peak at around 5 seconds is probably related to bots, for which the delay between generating and signing a transaction is effectively zero.
We can classify users based on their 95th percentile time to inclusion:
Most users fall into the “normal” and “slow” classifications. Only a small fraction of submitted transactions originate from “fast” users.
Let’s now break down transactions by source.
Priority Fee
Transactions can be categorized based on their priority fee (PF) with respect to the PF distribution in the corresponding slot. Precisely, we can compare the PF with the 95th percentile (95p) of the distribution:
The size of the priority fee generally doesn’t influence a transaction’s time to inclusion. There isn’t a clear threshold where transactions with higher PF are consistently included more quickly. The result remains stable even when accounting for PF per compute unit.
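A minimal sketch of the categorization described above; the category labels and fee values are illustrative.

```python
import numpy as np

# Sketch: label a transaction's priority fee relative to the 95th percentile (95p)
# of priority fees within the same slot. Category names are illustrative.
def categorize_pf(tx_fee: float, slot_fees: np.ndarray) -> str:
    p95 = np.percentile(slot_fees, 95)
    if tx_fee == 0:
        return "no PF"
    return "above 95p" if tx_fee >= p95 else "below 95p"

slot_fees = np.array([0, 0, 10, 50, 100, 1_000, 20_000])   # lamports, made up
print(categorize_pf(20_000, slot_fees), "/", categorize_pf(10, slot_fees))
```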
Jito Tippers
We can restrict the analysis to users sending transactions via the Jito MEV infrastructure, excluding addresses of known swQoS consumers. Interestingly, most Jito transactions originate from “slow” users.
We categorize tippers by the size of their tips within the block, analogously to what we did for PF:
When we compute the probability density function (PDF) of time to inclusion based on this classification, we find that the tip size doesn’t significantly impact the time to inclusion, suggesting that to build a successful MEV bot, one doesn’t have to pay more in tips!
Within the Jito framework, a bundle can consist of:
In both cases, the time it takes for the entire bundle to be included is determined by the inclusion time of the tip transaction. However, when a tip is paid in a separate transaction, we don’t track the other leg. This reduction in volume explains why the PDF of tippers differs from that of Jito consumers.
swQoS
It’s impossible to fully disentangle transaction time to inclusion from swQoS for general users, meaning some transactions in the analysis may still utilize swQoS. However, we can classify users based on addresses associated with our swQoS clients.
When we do this and apply the defined user topology classification, it becomes clear that swQoS consumers experience significantly reduced times to inclusion.
The peak around 60 seconds is much smaller for swQoS consumers, indicating they are far less likely to face such high inclusion times.
The highest impact of using swQoS is seen in the reduction of the time to inclusion for “slow” users. By computing the cumulative distribution function (CDF) for this time, we observe a 30% probability of these transactions being included in less than 13 seconds.
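The corresponding empirical CDF computation is straightforward; the sample values below are placeholders.

```python
import numpy as np

def prob_included_within(latencies_s: np.ndarray, threshold_s: float) -> float:
    """Empirical CDF of time to inclusion, evaluated at `threshold_s`."""
    return float(np.mean(latencies_s <= threshold_s))

# Placeholder samples; fed the measured latencies for the "slow" swQoS cohort,
# this returns ~0.30 at a 13-second threshold.
latencies = np.array([4.1, 9.7, 12.5, 18.0, 35.2, 61.0])
print(prob_included_within(latencies, 13.0))
```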
When comparing the corresponding CDFs:
'Normal' users also benefit from swQoS. There is an additional peak in the PDF for these users between 9s and 13s, showing that some "normal" users see their transactions processed in less than 20s. Another peak appears around 40s, indicating that part of the slower users now have their 95th percentile falling in the left tail of the "normal" users' distribution. This suggests that the overall spread of the time-to-inclusion distribution is reduced.
There is no statistically significant difference between the analyzed samples for “fast” users. However, some Jito consumers may also use swQoS, which complicates the ability to draw definitive conclusions.
Despite this, the improvements for "slow" and "normal" users highlight swQoS's positive impact on transaction inclusion times. If swQoS explains the PDF shape for "fast" users, it increases the likelihood of inclusion within 10s from ~30% to ~100%, a 3x improvement. A similar 3x improvement is observed for "slow" users being included within 13s.
Transaction inclusion is arguably Solana's most pressing challenge today. Efforts to address this have been made at the core protocol level with swQoS and priority fees and through third-party solutions like Jito (remembering that the main Jito use is MEV).
Solana’s latest motto is to increase throughput and reduce latency. In this article, we have examined how these three solutions improve landing time. Or, more simply, do they actually reduce latency? We found out that:
Among the three, swQoS is the most reliable for reducing latency. Jito and priority fees can be used when the time to inclusion is less important.
References:
About Chorus One
Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.
Due to the unique architecture of blockchains, block proposers can insert, censor, or sort user transactions in a way that extracts value from each block before it's added to the blockchain.
These manipulations, called MEV or Maximum Extractable Value, come in various forms. The most common are arbitrage¹, liquidations², NFT mints³, and sandwiching⁴. Arbitrage involves exploiting price differences for the same asset across markets. Liquidations occur in lending protocols when a borrower’s collateral drops in value, allowing others to buy it at a discount. NFT mints can be profitable when high-demand NFTs are resold after minting.
Most types of MEV can benefit the ecosystem by helping with price discovery (arbitrage) or preventing lending protocols from accruing bad debt (liquidations). However, sandwiching is different. It involves an attacker front-running a user’s trade on a DEX and selling immediately for a profit. This harms the ecosystem by forcing users to pay a consistently worse price.
Solana's MEV landscape differs from Ethereum's due to its high speed, low latency, lack of a public mempool, and unique transaction processing. Without a public mempool for viewing unconfirmed transactions, MEV searchers (actors specializing in finding MEV opportunities⁵) send transactions to RPC nodes directly, which then forward them to validators. This setup enables searchers to work with RPC providers to submit a specifically ordered selection of transactions.
Moreover, the searchers don't know the leader's geographical location, so they send multiple transactions through various RPC nodes to improve their chances of being first. This spams the network as they compete to extract MEV—if you're first, you win.
Jito
A key addition to the Solana MEV landscape is Jito, which released a fork of the Solana Labs client. At a high level, the Jito client enables searchers to tip validators to include a bundle of transactions in the order that extracts the most value for the searcher. Validators can then share the revenue from the tips with their delegators.
These revenues are substantial. Currently, the Jito-Solana client operates on 80% of validators and generates thousands of SOL daily in tips from searchers. However, searchers keep a portion of each tip, so the total tip amounts don’t reveal the full MEV picture. Moreover, the atomic arbitrage market is considerable, and as we’ll explore later, Jito's tips don’t give an accurate estimate of the atomic MEV extracted.
Jito⁶ introduced a few new concepts to the Solana MEV landscape:
There’s more to the current MEV landscape on Solana, particularly concerning spam transactions, which largely result from unsuccessful arbitrage attempts, and the various mitigation strategies (such as priority fees, stake-weighted quality of service, and co-location of searchers and nodes). However, since these details are not central to the focus of this article, we will set them aside for now.
It's still early for Solana MEV, and until recently, Jito was the only major solution focused on boosting rewards for delegators. Following the same open-source principles, the Paladin team introduced a validator-level bot⁷ and an accompanying token that accrues value from the MEV collected by the bot.
The main idea behind Paladin is this:
Paladin’s success, therefore, depends on validators choosing honesty over toxic MEV extraction by running the Paladin bot.
Bots like Paladin⁸ operate at the validator level, enabling them to capitalize on opportunities that arise after Jito bundles and other transactions are sent to the validator for inclusion in a block.
In this scenario, once the bot assesses the impact of the transactions and bundles, it inserts its transactions into the block. The bot doesn’t front-run the submitted transactions but leverages the price changes that result after each shred is executed.
Paladin can also extract MEV through DEX-CEX arbitrage and optimize routes for swaps made via DEX aggregators. However, these features are currently not used in practice, so we only briefly mention them. Since the bot is a public good, the community can contribute by adding features like NFT minting or liquidation support in the future.
The PAL token is where 10% of the value extracted by the bot (in SOL) accumulates. Paladin will go live at its token generation event (TGE), at which the entire supply of 1 billion PAL will be airdropped in the following proportions:
At the architecture level, the MEV extracted by the bot is sent to a smart contract, which then distributes it as follows:
The crucial part of the Paladin architecture is slashing. If a validator misbehaves and extracts MEV through sandwiching, staked PAL holders (other validators and their delegators) can vote to slash the rogue validator. The slashing happens if a majority of more than 50% is reached and maintained for a week. The slashed PAL is burned.
Other actions that could lead to slashing include not running Paladin, using closed-source upgrades, or not participating in slashing votes. This isn't an exhaustive list, as PAL stakers can vote to slash for other reasons at their discretion. While sandwiching is easy to spot, other "misbehaviors" may not be as obvious and would require monitoring tools, potentially leading to enforcement issues.
Unstaking PAL is capped at 5%, with a cooldown period of one month before the next withdrawal can be made.
There are several controversies about Paladin⁹. Here are common criticisms:
Validators Profit Unfairly
This is not true. Palidators (validators running Paladin) receive 90% of the MEV extracted by the bot, which they can redistribute to their delegators while keeping their standard commission. The remaining 10% goes to the PAL token, with 7.5% each going to validators and their stakers. This setup ensures validators don't take a larger share of MEV profits. If a validator doesn’t share the captured MEV, delegators can switch to one with a healthy long-term track record, like Chorus One.
Run Paladin or Die
Validators must run Paladin and avoid toxic MEV extraction or any actions that could undermine their reputation for honesty. Slashing can also occur if validators run closed-source software on top of Paladin. This doesn't mean market participants can't enhance the bot. On the contrary, they are encouraged to do so and can be rewarded in PAL if their improvements are openly available to others.
No Development Post-TGE
After the PAL airdrop, the Paladin team will no longer develop the bot¹⁰. All maintenance and strategy updates will be the community's responsibility from then on. This includes adding new liquidity pools or tokens to identify emerging MEV opportunities. While a fund has been set aside for future development, it is uncertain how long it will last. Development may stall if the incentives dry up.
With the knowledge of how Paladin works, let’s evaluate its target market and assess its performance based on our collected data.
Atomic Arbitrage Market
We will start by analyzing Jito tips paid for atomic arbitrage and compare them to the overall atomic arb market to see how much of the atomic opportunities have been captured through Jito.
We will use data from mid-August 2024¹¹ onward, when the share of Jito tips related to atomic arbitrage rose significantly. We exclude earlier data to avoid bias. Interestingly, this spike happened despite the drop in the total MEV extracted through atomic arbs, indicating increased competition among searchers now willing to share more Jito tips.
Even though tips from atomic arbs have increased compared to the total arb MEV market, they still make up only a small percentage of the total Jito tips paid.
Only 4.25% of the tips searchers paid during the sampled period were from atomic arbs (SOL 10,316 out of SOL 242,754). At a SOL price of $150, this is $1,547,400, while the total atomic MEV extraction reached $6,567,554.
So, only about 23% of the total atomic arbitrage opportunities were shared through Jito! Some striking examples include:
This shows that most on-chain arbitrage MEV is being captured outside of Jito. Unfortunately, this also leads to a high number of failed transactions.
During one of the measured five-day periods, over 1 million arbitrage transactions were made, with 519k of them submitted through the Jupiter aggregator [source]. This led to a significant number of failed transactions because:
The above data shows that Paladin can tap into a sizable on-chain arbitrage market by finding opportunities more efficiently and avoiding failed transactions. This approach would benefit validators by filling blocks with successful transactions and improving the ecosystem by reducing congestion.
The annual atomic arbitrage market is around $42.4 million. With 392 million SOL staked [source] ($58.9 billion at $150 per SOL), this could add about 0.07% APY to validator performance.
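As a quick back-of-the-envelope check of that figure:

```python
# Arithmetic behind the ~0.07% APY estimate quoted above.
ANNUAL_ATOMIC_ARB_USD = 42.4e6
STAKED_SOL = 392e6
SOL_PRICE = 150

staked_usd = STAKED_SOL * SOL_PRICE                       # ~$58.8B
print(f"{ANNUAL_ATOMIC_ARB_USD / staked_usd:.3%}")        # ~0.072%
```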
Let's dive deeper into the data to see how much market the bot can take.
Distribution and Dataset
The distribution of atomic arb MEV in USD per slot for the data collection period (15 August to 10 October 2024) looks as follows:
The median value is $0.00105 per slot, with atomic arbitrage opportunities occurring in 51.6% of slots.
Paladin operated on our main validator with a 1.15m SOL stake for a week between 4 October and 11 October. Let’s see the atomic arbitrage market opportunities during the bot's operation period:
The median value is $0.00898 per slot, with atomic arb opportunities present in 59.47% of slots.
A Kolmogorov-Smirnov (KS) test shows the two datasets differ, with a positive shift in the distribution indicating higher values in the second dataset. Paladin therefore operated in a more favorable environment, with larger and more frequent MEV extraction opportunities than in the broader measurement period. This is especially clear when you look at the size of Jito tips during our timeframe.
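For reference, this kind of comparison can be run as a two-sample KS test; the series below are placeholders standing in for the two per-slot datasets, not our measured data.

```python
import numpy as np
from scipy import stats

# Placeholder per-slot atomic-arb values (USD) for the full collection period
# and for the week Paladin was running; the second series is shifted upward.
rng = np.random.default_rng(1)
baseline = rng.lognormal(-6.9, 2.5, size=50_000)
paladin_week = rng.lognormal(-4.7, 2.5, size=10_000)

ks_stat, p_value = stats.ks_2samp(baseline, paladin_week)
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.2e}")
print("medians:", np.median(baseline), np.median(paladin_week))
```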
Now, let's look at how Paladin performed in these circumstances.
The median arb profit is $0 per slot, with opportunities taken only in 29.64% of slots.
Here’s a more detailed summary of all three distributions:
As we can see, Paladin underperformed, capturing significantly less MEV and earning less per slot. The bot only managed to capture 15.84% of the total available atomic arbitrage opportunities.
In some of the most striking examples, the bot extracted only 0.00004 SOL (here and here), while the actual extractable value was $127.59, as seen in Tx1, Tx2, Tx3, Tx4, and Tx5.
The reason Paladin failed to extract MEV from the opportunities in the linked transactions is that it doesn't support the traded token ($MODENG). This is a problem since memecoins are currently driving network activity and will likely contribute the largest share of MEV. These tokens emerge rapidly, requiring frequent routing updates. One of Paladin's top priorities should be adapting quickly to capture MEV from new memecoins as they arise, and the team's lack of involvement in this process is problematic in that context.
Estimated Returns
Now, let’s run a simulation to estimate the returns under different scenarios based on a stake share of 0.3% (Chorus One's share), 1%, and 10%. The returns are capped at 15.8%, which is the portion of opportunities Paladin captured in our data.
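For transparency, here is a rough sketch of how such a scenario simulation can be structured: bootstrap a year of leader slots for a given stake share and apply the observed capture rate. The per-slot values and simulation counts are placeholders, not our exact methodology.

```python
import numpy as np

# Rough sketch: bootstrap a year of leader slots for a given stake share, apply the
# observed 15.84% capture rate, and convert to USD. `per_slot_arb_usd` is a
# placeholder for the measured per-slot atomic-arb values.
rng = np.random.default_rng(42)
per_slot_arb_usd = rng.lognormal(-6.9, 3.5, size=100_000)   # long-tailed placeholder

SLOTS_PER_YEAR = int(365 * 24 * 3600 / 0.4)   # ~400ms slots
CAPTURE_RATE = 0.1584

def yearly_capture_usd(stake_share: float, n_sims: int = 200) -> np.ndarray:
    totals = np.empty(n_sims)
    for i in range(n_sims):
        leader_slots = rng.binomial(SLOTS_PER_YEAR, stake_share)
        totals[i] = rng.choice(per_slot_arb_usd, size=leader_slots).sum() * CAPTURE_RATE
    return totals

for share in (0.003, 0.01):
    print(f"{share:.1%} stake: median ~${np.median(yearly_capture_usd(share)):,.0f}/yr")
```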
The median value for 0.3% of the total stake is around $20k, which matches the annualized value of what Chorus One earned. This increases to about $65k for a validator with 1% of the total stake and exceeds $700k for a hypothetical validator with 10%.
We also ran a simulation to estimate how much Paladin's performance could improve if it captured 80% of available opportunities, for a validator the size of Chorus One, across different adoption levels of 1%, 10%, 25%, and 50% of total stake running Paladin. At an estimated 1% adoption, our validator earns an additional 0.01% APY from the bot, while the total potential atomic arbitrage could add about 0.07% APY across the total stake.
The simulation assumes:
And in a more tangible form:
As we can see, Paladin could generate a median of an additional 0.29% in APY for a validator with 0.3% of the total stake once adoption reaches 50%.
We've been in touch with the Paladin team, who confirmed that a new version of the bot, P3, is in the works. This version will pivot from focusing on the atomic arbitrage market, which they no longer see as substantial enough to prioritize.
Maintenance
The bot has been stable without major issues, but Paladin requires patches to update strategies and fix smaller bugs. Maintaining the bot is also time-consuming for the engineering team, as each patch requires a restart, and the process is more complex than anticipated, adding extra overhead. This is a problem similar to what we faced with our Breaking Bots: maintenance and strategy-update costs were high, and we eventually concluded that the effort was not quite worth it. With Paladin, however, a whole community could tackle this problem, so things may look different.
Paladin has great potential to boost earnings for validators and stakers by tapping into new opportunities, but it's still in the early stages of development. While our analysis shows that Paladin currently captures only around 15.84% of available atomic arbitrage opportunities, this will likely improve as the bot becomes more optimized and widely adopted. The upside is promising—the total atomic arbitrage market could add 0.07% to a validator’s APY. While capturing all of it is unlikely, even a share of this can lead to solid gains.
That said, there are challenges to address. The bot’s development will shift to the community after the token TGE, raising questions about whether there will be enough resources and motivation for continuous updates. Additionally, maintaining the bot on the validator side can be tricky, as each patch requires a restart, making it time-consuming for validators to run.
At Chorus One, we believe that the long-term health of the Solana ecosystem is paramount. Paladin builds on the same core principles as Jito—to mitigate the toxic MEV and democratize good MEV.
We developed Breaking Bots with these ideas in mind, and we see Paladin as an extension of our efforts. Two solutions are better than one, and Paladin offers an interesting alternative to what exists today. Supporting multiple approaches is a cornerstone of decentralized systems, and we welcome new ideas that build resilience.
While we don't agree with all of Paladin's choices, especially regarding the team's lack of future bot development, we believe its success will benefit the entire ecosystem, and that's why we support it.
That being said, if the core principles Paladin is built on change, or the maintenance costs outweigh the benefits in the mid-term, we will reevaluate our position.
References:
1 You can find an interesting overview of arbitrage MEV here.
2 A detailed analysis of liquidations in DeFi is available in this paper.
3 More about the NFT MEV here.
4 Chorus One also provided an analysis of Solana sandwiching here.
5 An in-depth write-up on searchers by Blockworks is here.
6 Information based on Jito documentation.
7 At Chorus One, in our “Breaking Bots” paper, we proposed a similar solution. The implementation details are available on GitHub.
8 Information based on a series of blog posts by the Paladin team.
9 Some of the examples available here, here,
10 Per the blogpost: We’re not a Foundation or Labs — we don’t run any part of Paladin, we don’t develop it, we don’t maintain it…
11 The data used in this section is available here and can be retrieved using these queries.
About Chorus One
Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.
In the blockchain industry, where the balance between decentralization and efficiency often teeters on a knife's edge, innovations that address these challenges are paramount. Among these innovations, preconfirmations stand out as a powerful tool designed to enhance transaction speed, security, and reliability. Here, we’ll delve into what preconfirmations (henceforth referred to as “preconfirms” ) are, why they matter, and how they’re set to transform the blockchain landscape.
The idea of providing a credible heads-up or confirmation that a transaction has occurred is deeply ingrained in our daily lives. Whether it's receiving an order confirmation from Amazon, verifying a credit card payment, or processing transactions in blockchain networks, this concept is familiar and widely used. In the blockchain world, centralized sequencers like those in Arbitrum function similarly, offering guarantees that your transaction will be included in the block.
However, these guarantees are not without limitations. True finality is only achieved when the transaction is settled on Ethereum. The reliance on centralized sequencers in Layer 2 (L2) networks, which are responsible for verifying, ordering, and batching transactions before they are committed to the main blockchain (Layer 1), presents significant challenges. They can become single points of failure, leading to increased risks of transaction censorship and bottlenecks in the process.
This is where preconfirms come into play. Preconfirms were introduced to address these challenges, providing a more secure and efficient way to ensure transaction integrity in decentralized networks.
Before jumping into the preconfirms trenches, let’s start by clarifying some key terms that will appear throughout this article (and are essential to the broader topic).
Builders: In the context of Ethereum and PBS, builders are responsible for selecting and ordering transactions in a block. This is a specialized role aimed at creating the most valuable block for the proposer, and the builder market is highly centralized. Blocks are submitted to relays, which act as mediators between builders and proposers.
Proposers: The role of the proposer is to validate the contents of the most valuable block submitted by the block builders and to propose this block to the network as the new head of the blockchain. In this landscape, proposers are the validators in the Proof-of-Stake consensus protocol and are rewarded for proposing blocks (the builder also pays the proposer for having its block selected).
Sequencers: Sequencers are akin to air traffic controllers, particularly within Layer 2 Rollup networks. They are responsible for coordinating and ordering transactions between the Rollup and the Layer 1 chain (such as Ethereum) for final settlement. Because they have exclusive rights to the ordering of transactions, they also benefit from transaction fees and MEV. Usually, they have ZK or optimistic security guarantees.
Now that we’ve set the stage, let’s dive into the concept of preconfirms.
At their core, preconfirms can provide two guarantees:
These two guarantees matter, particularly for:
Speed: Traditional block confirmations can take several seconds, whereas preconfirms can provide a credible assurance much faster. This speed is particularly beneficial for "based rollups" that batch user transactions and commit them to Ethereum, resulting in faster transaction confirmations. @taikoxyz and @Spire_Labs are teams building based rollups.
Censorship Resistance: A proposer can request the inclusion of a transaction that some builders might not want to include.
Trading Use Cases: Traders may preconfirm transactions if it allows them to execute ahead of competitors.
Now, zooming in on Ethereum.
The following chart describes the overall Proposer-builder separation and transaction pipeline on Ethereum.
Within the Ethereum network, preconfirms can be implemented in three distinct scenarios, depending on the specific needs of the network:
Builder preconfirms suit the trading use case best. These offer low-latency guarantees and are effective in networks where a small number of builders dominate block-building. Builders can opt into proposer support, which enhances the strength of the guarantee.
However, since there are only a few dominant builders, successfully onboarding these players is key.
Proposers provide stronger inclusion guarantees than builders because they have the final say on which transactions are included in the block. This method is particularly useful for "based rollups," where Layer 1 validators act as sequencers.
Yet maintaining strong guarantees is a key challenge for proposer preconfirms.
The question of which solution will ultimately win remains uncertain, as multiple factors will play a crucial role in determining the outcome. We can speculate on the success of builder opt-ins for builder preconfirms, the growing traction of based rollups, and the effectiveness of proposer declaration implementations. The balance between user demand for inclusion versus execution guarantees will also be pivotal. Furthermore, the introduction of multiple concurrent proposers on the Ethereum roadmap could significantly impact the direction of transaction confirmation solutions. Ultimately, the interplay of these elements will shape the future landscape of blockchain transaction processing.
Commit-boost is an MEV-boost-like sidecar for preconfirms.
Commit-boost facilitates communication between builders and proposers, enhancing the preconfirmation process. It’s designed to replace the existing MEV-boost infrastructure, addressing performance issues and extending its capabilities to include preconfirms.
Currently in testnet, commit-boost is being developed as neutral, non-venture-backed software for Ethereum, with the ambition of fully integrating preconfirms into its framework. Chorus One is currently running commit-boost on testnet.
Chorus One has been deeply involved with preconfirms from the very beginning, pioneering some of the first-ever preconfirms using Bolt during the ZuBerlin and Helder testnets. We’re fully immersed in optimizing the Proposer-Builder Separation (PBS) pipeline and are excited about the major developments currently unfolding in this space. Stay tuned for an upcoming special episode of the Chorus One Podcast, where we’ll dive more into this topic.
If you’re interested in learning more, feel free to reach out to us at research@chorus.one.
About Chorus One
Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.
This is a joint research article written by Chorus One and Superscrypt
Blockchain transactions are public and viewable even before they get written to the block. This has led to maximal extractable value (‘MEV’), i.e. where actors frontrun and backrun visible transactions to extract profit for themselves.
The MEV space is constantly evolving as competition intensifies and new avenues to extract value are always emerging. In this article we explore one such avenue - Oracle Extractable Value, where MEV can be extracted even before transactions hit the mempool.
This is particularly relevant for borrowing & lending protocols which rely on data feeds from oracles to make decisions on whether to liquidate positions or not. Read on to find out more.
Value is in a constant state of being created, destroyed, won or lost in any financialized system, and blockchains are no exception. User transactions are not isolated to their surroundings, but instead embedded within complex interactions that determine their final payoff.
Not all transaction costs are as explicit as gas fees. Fundamentally, the total value that can be captured from a transaction includes the payoff of downstream trades preceding or succeeding it. These can be benign in nature, for example, an arbitrage transaction to bring prices back in line with the market, or impose hidden taxes in the case of front running. Overall, maximal extractable value (or “MEV”) is the value that can be captured from strategically including and ordering transactions such that the aggregate block value is maximized.
If not extracted or monetized, value is simply lost. Presently, the actualization of MEV on Ethereum reflects a complex supply chain (“PBS”) where several actors such as wallets, searchers, block builders and validators fill specialized roles. There are returns on sophistication for all participants in this value chain, most explicitly for builders which are tasked with creating optimal blocks. Validators can play sophisticated timing games which result in additional MEV capture; for example, Chorus One has run an advanced timing games setup since early 2023, and published extensively on it. In the PBS context, the best proxy for the total MEV extracted is the final bid a builder gets to submit during the block auction.
Such returns on sophistication extend to the concept of Oracle Extractable Value (OEV), which is a type of MEV that has historically gone uncaptured by protocols. This article will explain OEV, and how it can be best captured.
Oracles are one of crypto's critical infrastructure components: they are the choreographers that orchestrate and synchronize the off-chain world with the blockchain’s immutable ledger. Their influence is immense: they inform all the prices you see and interact with on-chain. Markets are constantly changing, and protocols and applications rely on secure oracle feed updates to provide DeFi services to millions of crypto users worldwide.
The current status quo is that third-party oracle networks serve as intermediaries that feed external data to smart contracts. They operate separately from the blockchains they serve, which preserves the core goal of chain consensus but introduces limitations, such as questions around fair sequencing, the payments required from protocols and apps, and reconciling multiple sources of data in a decentralized world.
In practical terms, the data from oracles represents a great resource for value extraction. The market shift an oracle price update causes can be anticipated and traded profitably, by back-running any resulting arbitrage opportunities or (more prominently) by capturing resulting liquidations. This is Oracle Extractable Value. But how is it captured, and more importantly, who profits from it?
In MEV, searchers (which are essentially trading bots that run on-chain) profit from oracle updates by backrunning them in a free-for-all priority gas auction. Value is distributed between the searchers, who find opportunities particularly in the lending markets for liquidations, and the block proposers that include their prices in the ledger. Oracles themselves have not historically been a part of this equation.
OEV changes this flow by atomically coupling the backrun trade with the oracle update. This allows the oracle to capture value, by either acting as the searcher itself or auctioning off the extraction rights.
How OEV created in DeFi can be captured by MEV searchers before the dApp gets access to it.
OEV primarily impacts lending markets, where liquidations directly result from oracle updates. By bundling an oracle update with a liquidation transaction, the value capture becomes exclusive, preventing front-running since both actions are combined into a single atomic event. However, arbitrage can still occur before the oracle update through statistical methods, as traders act on the true price seen in other markets.
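Conceptually, the coupling can be sketched as follows. This is illustrative pseudocode under our own naming, not any specific protocol's implementation.

```python
# Illustrative sketch of the atomic coupling: the oracle update and the winning
# searcher's liquidation land together in one bundle, so the capture cannot be
# front-run. None of these names correspond to a specific protocol's API.

def build_oev_bundle(price_update_tx, auction_bids):
    # The oracle (or an auction run on its behalf) picks the highest bidder.
    winning_bid = max(auction_bids, key=lambda bid: bid.payment_to_oracle)
    # The bundle executes atomically: either all three transactions land, or none do.
    return [
        price_update_tx,          # 1. push the fresh price on-chain
        winning_bid.backrun_tx,   # 2. the liquidation the new price enables
        winning_bid.payment_tx,   # 3. payment routed back to the oracle / dApp
    ]
```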
UMA and Oval:
API3 and OEV Network:
Warlock
The upshot of this MEV capture is that oracles have a new dimension to compete on. OEV revenue can be shared with dApps by providing oracle updates free of charge, or by outright subsidizing integrations. Ultimately, protocols with OEV integration will thus be able to bid more competitively for users.
OEV solutions share the same basic idea - shifting the value extraction from oracle updates to the oracle layer, by coupling the price feed update with backrun searcher transactions.
There are several ways of approaching this - an OEV solution may integrate with an existing oracle via an official integration, or through third party infrastructure. These solutions may also be purpose built and provide their own price update.
Heuristically, the key components of an OEV solution are the oracle update and the MEV transaction - these can be either centralized or decentralized.
We would expect purpose-built or "official" extensions to existing oracles to perform better, due to lower latency than running third-party logic on top of the upstream oracle. They would also be more attractive from a risk perspective, as third-party infrastructure updates could spontaneously break integrations.
In practice, a centralized auction can make the most sense in latency-sensitive use cases. For example, it may allow a protocol to offer more leverage, as the risk of being stranded with bad debt due to stale price updates is minimized. By contrast, a decentralized auction likely yields the highest aggregate value in use cases where latency is less critical, i.e. where margin requirements are higher.
OEV is still in its early stages, with much development ahead. We're excited to see how this space evolves and will continue to monitor its progress closely as new opportunities and innovations emerge.
About Chorus One
Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.