The financial industry has long grappled with the shortcomings of traditional systems, namely settlement delays, restricted market access, and capital inefficiency. Decentralized finance (DeFi) emerged as a response, but early implementations often fell short, plagued by issues like over-collateralization and limited composability.
Recognizing these challenges, Injective introduced iAssets: programmable financial primitives that facilitate enhanced liquidity allocation, position-based exposure, and cross-market composability. Unlike their static predecessors, iAssets are dynamic on-chain instruments with second-order utility and no pre-funding constraints. With iAssets, Injective aims to move blockchain-based stocks, commodities, and more beyond proof-of-concept, ushering in the era of Stocks 3.0.
Traditional Finance (Stocks 1.0)
Traditional financial systems operate within structured, yet inflexible frameworks, characterized by delayed settlements (typically T+2), stringent access barriers, and segregated liquidity. The opacity of processes such as prime brokerage and rehypothecation further compound systemic risks, creating inefficiencies and restricting market participation to predominantly institutional actors.
Early DeFi & Synthetic Assets (Stocks 2.0)
The initial wave of DeFi introduced tokenized and synthetic assets, allowing for asset programmability and a more open financial environment. However, these models typically required excessive collateralization (often surpassing 150%), leading to substantial capital inefficiencies. Liquidity pools were isolated, limiting the effective deployment of capital and creating vulnerabilities such as liquidation cascades during market volatility.
Understanding the shortcomings of both traditional systems and early blockchain solutions, Injective's iAssets introduce significant innovations to further the utility of on-chain assets. Key advancements include:
These characteristics mark a distinct shift from representational to programmable finance. Rather than merely mirroring the value of off-chain assets, iAssets transform them into composable building blocks—financial primitives that can be deployed across lending protocols, used as collateral, integrated into structured products, or programmed into hedging strategies. The result is a framework that not only preserves the core utility of traditional assets, but enhances them with real-time liquidity, seamless market integration, and systemic transparency.
In this light, iAssets are not just an iteration on previous tokenization efforts; they are a redefinition of what it means to own and utilize assets in a digitally native financial system.
Injective's iAssets are realized through a robust and meticulously designed technical infrastructure. At its core lies Injective's modular architecture, which has been developed over several years to support high-performance decentralized financial applications.
Exchange Module and On-Chain CLOB
The Exchange Module serves as the foundation for iAssets, providing a fully decentralized, on-chain central limit order book (CLOB). Unlike traditional automated market maker (AMM) models, the CLOB facilitates tighter spreads and more efficient price discovery. This architecture allows professional institutions to dynamically manage liquidity, ensuring that iAssets benefit from deep and responsive markets.
Moreover, the Exchange Module plays a pivotal role in optimizing liquidity across the Injective ecosystem. By enabling a shared liquidity environment, it allows seamless capital flow between various financial applications, including trading platforms and structured financial products. This interconnectedness ensures that liquidity is not siloed but instead dynamically allocated based on real-time market demands.
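For intuition on why a CLOB can offer tighter spreads than an AMM, here is a minimal price-time-priority matching sketch in Python. It is a generic illustration only; Injective's Exchange Module is a far more sophisticated on-chain system, and all names and logic below are assumptions for demonstration.

```python
import heapq

class Clob:
    """Minimal price-time-priority limit order book (generic sketch)."""
    def __init__(self):
        self.bids = []   # max-heap of (-price, arrival_seq, price, qty)
        self.asks = []   # min-heap of ( price, arrival_seq, price, qty)
        self.seq = 0

    def limit(self, side: str, price: float, qty: float):
        opp = self.asks if side == "buy" else self.bids
        while opp and qty > 0:
            key, seq, best, rest = opp[0]
            if (side == "buy" and price < best) or (side == "sell" and price > best):
                break                                    # no longer crossing
            fill = min(qty, rest)
            print(f"trade: {fill} @ {best}")             # price set by the resting order
            qty -= fill
            if fill == rest:
                heapq.heappop(opp)
            else:
                heapq.heapreplace(opp, (key, seq, best, rest - fill))
        if qty > 0:                                      # remainder rests on the book
            self.seq += 1
            book = self.bids if side == "buy" else self.asks
            heapq.heappush(book, (-price if side == "buy" else price, self.seq, price, qty))

book = Clob()
book.limit("sell", 101.0, 5)
book.limit("sell", 100.5, 5)
book.limit("buy", 101.0, 7)    # fills 5 @ 100.5, then 2 @ 101.0
```

Because incoming orders execute at the resting order's price, makers compete by quoting closer to fair value, which is the spread-tightening dynamic described above.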
And iAssets haven’t wasted any time in picking up steam. Injective now hosts all Mag 7 stocks, which have done a cumulative $165M+ in trading volume since launch. iAssets as a whole have seen over $465M in trading, laying the foundation for a burgeoning asset category and aggressive innovation. And if that wasn’t enough, one asset in particular takes center stage: TRADFI, which achieved approximately $14 million in trading volume on its first day of listing.
Modular Design and Multi-VM Support
Injective's architecture is composed of interoperable modules, each serving a specific function within the ecosystem. This modularity gives developers access to a robust set of pre-built components, such as the Oracle Module, RWA Module, automatic smart contracts and more, without the need to build from scratch. Furthermore, Injective supports multiple virtual machines (VMs), enhancing the flexibility and scalability of applications built on the network.
To learn more about Injective modules, click here.
And Injective isn’t stopping there. The team is actively working on several initiatives aimed at enhancing capital efficiency and utility, notably, their Liquidity Availability Framework.
Liquidity Availability Framework
One of the key developments is the introduction of a "Liquidity Availability" framework. This initiative seeks to optimize capital utilization by allowing liquidity to move dynamically between applications based on demand. While underutilization is a notable concern, the primary objective of liquidity availability is to address limitations brought about by application-specific liquidity, and ensure that liquidity is allocated more efficiently across the ecosystem.
Want to learn more? Check out Injective’s research paper on Liquidity Availability here.
Injective’s iAssets represent a pivotal advancement in the evolution of financial markets, transitioning from static representations to dynamic, programmable financial primitives. By addressing the limitations of both traditional finance and early decentralized finance models, iAssets offer enhanced capital efficiency, real-time liquidity, and seamless composability across financial applications.
Leveraging Injective's robust modular architecture and on-chain central limit order book, iAssets facilitate a more integrated and efficient financial ecosystem. This infrastructure not only accelerates development timelines but also fosters innovation, enabling complex financial instruments to be constructed with greater ease and reliability.
As the financial industry continues to evolve, Injective seeks to provide the foundational infrastructure necessary for the next generation of programmable finance.
Want to learn more about iAssets? Check out the iAssets research paper here.
As a leading institutional staking provider, Chorus One is proud to support the Injective ecosystem and its innovative iAssets framework. By operating a highly secure and reliable validator node on Injective, Chorus One ensures network stability and contributes to the seamless functioning of the Injective ecosystem.
Bitcoin has firmly established itself as digital gold, the apex store of value in the cryptocurrency ecosystem. Adoption has reached Wall Street, banks are expanding their crypto services and offering direct BTC exposure via ETFs. With this level of institutional integration, the next pressing question becomes: how to generate yield on BTC holdings? Making things more interesting, institutions will focus on solutions that optimize for Security, Yield and Liquidity.
This poses a fundamental challenge for any Bitcoin L2 solution (and staking): since Bitcoin lacks native yield (unless you run a miner), and serves primarily as a store of value, any yield generated in another asset faces selling pressure if the ultimate goal is to accumulate more BTC.
When Bitcoiners participate in any ecosystem – whether it's an L2, DeFi protocol, or alternative chain – their end goal remains simple: stack more sats. This creates inherent selling pressure for any token used to pay staking rewards or security budgets. While teams are developing interesting utility for alternative tokens, the reality is that without a thriving ecosystem, sustainable yield remains a pipe dream. Teams are mainly forced to bootstrap network effects via points or other incentives.
This brings us to a critical point: Bitcoin L2s' main competition isn't other Bitcoin L2s or BTCfi, but established ecosystems like Solana and Ethereum. The sustainability of yield within a Bitcoin L2 cannot be achieved until a sufficiently robust ecosystem exists within that L2 – and this remains the central challenge. Interesting new ZK rollup providers like Alpen Labs and Starknet claim they can import network effects by offering EVM compatibility on Bitcoin while enhancing security. As Bitcoin's tenure as a store of value grows, increasingly resembling gold, monetisation schemes for the asset will become more common.
However, we need to face reality – with 86% of VC funding for these L2s allocated post-2024, we're still years away from maturity. Is it too late for Bitcoin L2s to catch up?
Security alone is no longer a sufficient differentiator. Solana and Ethereum have proven resilient enough to earn institutional trust, while Bitcoin L2s must justify their additional complexity, particularly around smart contract risk when interacting with UTXOs.
Being EVM-compatible does not automatically create network effects. It might help bring developers and dapps over, but creating a winning ecosystem flywheel will only become tougher with time. In fact, the winners of this cycle have differentiated with a product-first approach (Hyperliquid, Pumpdotfun, Ethena…), not with a VM or tech stack. As such, providing extra BTC economic security or alignment won't be enough without a killer product in the long run.
Incremental security improvements alone aren't the most compelling selling point – we've seen re-staking initiatives like Eigenlayer struggle with this exact issue. AVSs generally aren't willing to pay extra for security (especially since they've had it for free); selling security is hard. We've seen the same promise of cryptoeconomic security fail before with Cosmos ICS and Polkadot Parachains.
That said, Bitcoin L2s do have a compelling security advantage. They inherit Bitcoin's massive $1.2T+ security budget (hashrate), far exceeding what Solana or Ethereum can offer. For institutions prioritizing safety over yield size, this edge might matter – even if yields are somewhat lower. Bitcoin Timestamping could create a completely new market. Can L2s tap into this extra economic security and liquidity while 10x’ing product experience? Again, if your security is higher but the product is not great, it won’t matter.
BTC whales aren't primarily interested in bridging assets; they want to accumulate more Bitcoin. This raises an important question: from their perspective, is there a meaningful difference between locking BTC in an L2 versus in Solana?
Perceived risk is the key factor here. An institution might actually prefer Coinbase custody over a decentralized signer set where they might not know the operators, weighing legal risk against technical risk. This perception is heavily influenced by user experience – if a product isn't intuitive, the risk is perceived as higher. A degen whale, on the other hand, might be comfortable bridging into Solayer to farm the airdrop or ‘staking’ into Bitlayer for yield.
At Chorus One we’ve classified every staking offering to better inform our institutional clients who are interested in putting their BTC to work, following the guidance of our friends at Bitcoin Layers.
Want to dive deeper into the staking offerings available through Bitcoin Layers? Shoot our analyst Luis Nuñez (and author of this paper) a DM on X!
Since risk is a matter of perception, your ideal option depends on your yield, security, and liquidity preferences, and might look like this:
And still be super convenient. We're in an interesting period where Bitcoin TVL or BTCfi is increasing dramatically (led by Babylon), while the percentage of BTC that has remained idle for at least one year keeps rising, now at 60%. This tells us that Bitcoin dominance is growing thanks to institutional adoption, but that there are no compelling yield solutions yet to activate that BTC.
Institutions have historically preferred lending BTC over exploring L2/DeFi solutions, primarily due to familiarity (Coinbase, Cantor). According to Binance, only 0.79% of BTC is locked in DeFi, meaning that DeFi lending (e.g. Aave) is not as popular. Even so, wrapped BTC in DeFi is still around 5 times larger than the amount of BTC in staking protocols.
Staking in Bitcoin Layers requires significant education. L2s like Stacks and CoreDAO use their proximity to miners to secure the system and tap into liquidity by providing incentives for contribution or merge mining. Operations more akin to TradFi might be an interesting differentiator for a BTC L2. We've seen significant institutional engagement in basis trades in the past, earning up to 5% yield with Deribit and other brokers.
However, lending's reputation has suffered severely post-2022. The collapses of BlockFi, Celsius, and Voyager exposed substantial custodial and counterparty risks, damaging institutional trust. As mentioned, Bitcoin L2s like Stacks offer an alternative by avoiding traditional custody while giving other parties, like miners, a role in providing yield via staking. For those with a more passive appetite, staking can be the ideal route to yield. Today, however, staking solutions are early and offer just points with the promise of a future airdrop, with the exception of CoreDAO.
Staking in Bitcoin L2s is very different. Typically, we see a multi-sig of operators that order L2 transactions and timestamp a hashed representation of the block into Bitcoin. This allows for state recreation of the L2 at any point in time if the L2 is compromised. Essentially, these use Bitcoin for DA (Data Availability). This means that consensus is still dependent on the multi-sig operators, so these could still collude. Innovations with ZK (Alpen Labs, Citrea), UTXO-to-Smart Contract (Arch, Stacks) and BitVM (BoB) are all trying to improve these security guarantees.
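As a rough illustration of the data-availability pattern just described, the sketch below hashes an L2 block and wraps the digest in an OP_RETURN-style payload. This is a conceptual toy; each L2 uses its own commitment format and posting logic.

```python
import hashlib

def commit_l2_block(l2_block: bytes) -> bytes:
    """Only this 32-byte digest of the L2 block lands on Bitcoin."""
    return hashlib.sha256(l2_block).digest()

def op_return_payload(commitment: bytes) -> bytes:
    """OP_RETURN-style script: 0x6a = OP_RETURN, 0x20 = push 32 bytes."""
    assert len(commitment) == 32
    return bytes([0x6A, 0x20]) + commitment

# Mock L2 block: anyone holding the L2 data can recreate state and
# check it against this on-chain commitment.
block = b"l2-height:42|state_root:...|txs:..."
print(op_return_payload(commit_l2_block(block)).hex())
```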
In Ethereum, leading L2s typically have a single sequencer (vs. a multi-sig) to settle transactions to the L1. Critically however, Ethereum L1 has the capability to do fraud proofs allowing for block reorgs if there's a malicious transaction. In Bitcoin, the L1 doesn’t have verification capabilities, so this is not possible… until BitVM?
BitVM aims to allow fraud proofs on the Bitcoin L1. BitVM potentially offers a 10x improvement in security for Bitcoin L2s, but it comes with significant operational challenges.
BitVM is a magnificent project where leaders from every ecosystem are collaborating to make it a reality. We’ve seen potentially drastic improvements between BitVM1 and BitVM2:
BitVM allows fraud proofs to happen through a sequence of standard Bitcoin transactions with carefully crafted scripts. At its core, verification in BitVM works because:
1. Program Decomposition
Before any transactions occur, the program to be verified (like a SNARK verifier) is split into sub-programs that fit in a Bitcoin block:
2. Operator Claim
The operator executes the entire program off-chain and claims:
They commit to all these values using cryptographic commitments in their on-chain transactions.
3. Challenge Initiation
When a challenger believes the operator is lying:
4. The Critical On-Chain Execution
Here's where Bitcoin nodes perform the actual verification:
The challenger creates a "Disprove" transaction that:
5. Bitcoin Consensus in Action
When nodes process this transaction:
The Bitcoin network reaches consensus on this result just like it does with any transaction's validity. The technology enables Bitcoin-native verification of arbitrary computations without changing Bitcoin's consensus rules. This opens the door for more sophisticated smart contracts secured directly by Bitcoin, but implementation hurdles are substantial since operators need to front the liquidity and face several risks:
As such, incentives to operate the bridge will be quite attractive to mitigate the risks. If we’re able to mitigate these, security will be significantly enhanced and might even provide interoperability between different layers, which could unlock interesting use-cases while retaining the Bitcoin proximity. Will this proximity allow for the creation of killer products and real yields?
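To make the five steps above concrete, here is a deliberately toy Python sketch of the claim/challenge/disprove flow. The step functions and hash commitments are hypothetical stand-ins; real BitVM uses Bitcoin Script, bit commitments, and pre-signed transaction graphs.

```python
import hashlib

def h(x: int) -> bytes:
    return hashlib.sha256(str(x).encode()).digest()

# 1. Decomposition: a toy "program" split into small steps
STEPS = [lambda s: s + 1, lambda s: s * 3, lambda s: s - 5]

def operator_claim(state: int, cheat_at: int | None = None):
    """2. Operator runs off-chain and commits to every intermediate state."""
    states = [state]
    for i, step in enumerate(STEPS):
        state = step(state) + (999 if i == cheat_at else 0)  # optional lie
        states.append(state)
    return states, [h(s) for s in states]                    # commitments

def disprove(claimed_states):
    """3.-5. Challenger finds one step whose re-execution contradicts the
    claim; only that single step must be checked on-chain."""
    for i, step in enumerate(STEPS):
        if step(claimed_states[i]) != claimed_states[i + 1]:
            return i          # target of the "Disprove" transaction
    return None               # claim is internally consistent

states, commitments = operator_claim(7, cheat_at=1)
print("faulty step:", disprove(states))   # -> 1
```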
For a Bitcoin L2 to succeed, it must offer products unavailable elsewhere or provide substantially better user experiences. The previously mentioned Bitcoin proximity has to be exploited for differentiation.
The jury is still out on whether ZK rollup initiatives can bootstrap meaningful network effects. These rollups will ultimately need a killer app to thrive, or to port one over from the EVM with the promise of Bitcoin liquidity. Otherwise, why would dapps choose to settle on Bitcoin?
The winning strategy for Bitcoin L2s involves:
Below, we’ll dive into some of my top institutional picks, a few of which we’ve invested in.
Babylon’s main value-add is to provide Bitcoin economic security. As we’ve mentioned several times, this offering alone will not be enough, and the team is well aware. Personally, I'm bullish on the app-chain approach, following models like Avalanche or Cosmos, but simply using BTC for the initial bootstrap of security and liquidity.
While the app-chain thesis represents the endgame, reaching network effects requires 10x the effort since everything is naturally fragmented. Success demands an extremely robust supporting framework – something only Cosmos has arguably achieved with sufficient decentralization (and suffered its consequences). Avalanche provides the centralized support needed to unify a fragmented ecosystem.
The ideal endgame resembles apps in the App Store – distinct from each other but with clear commonalities. In this analogy, Bitcoin serves as the iPhone – the trusted foundation for distribution.
Mezo (investor)
Mezo's approach with mUSD is particularly interesting as it reduces token selling pressure if mUSD gains significant utility. Their focus on "real world" applications could drive mainstream adoption, with Bitcoin-backed loans as the centerpiece. Offering fixed rates as low as 1% unlocks interesting DeFi use cases around looping with reduced risk, while undercutting costs compared to Coinbase + Morpho BTC lending offerings (at around 5%).
Plasma (investor)
Purpose-built for stablecoin usage. Zero-fee USDT transfers, parallel execution and strong distribution strategies position Plasma well in the ecosystem. Other features include confidential transactions and high customization around gas and fees.
Arch is following the MegaEth approach of curating a mafia ecosystem, a parallel execution environment, and close ties to Solana. In Arch, users send assets directly to smart contracts using native Bitcoin transactions.
Stacks has a very interesting setup since there's no selling pressure from stakers (they earn BTC rather than STX). As the oldest and most recognized Bitcoin L2 brand, they have significant advantages. While Clarity presents challenges, this may be changing with innovations in development, such as smart-contract-to-Bitcoin-transaction capabilities and support for other programming languages. StackingDAO (investor) is the leading LST in the ecosystem and provides interesting yield opportunities in both liquid STX and liquid sBTC.
Looking to stake your STX? Click here!
BOB (Building on Bitcoin)
BoB is at the forefront of BitVM development (target mainnet in 2025) and is looking to use Babylon for security bootstrapping. The team is doing a fantastic job at exploiting the BTC proximity with BitVM while developing institutional grade products.
CoreDAO features strong LST adoption tailored for institutions and is the only staking yield mechanism that's live and returns actual $. CoreDAO Ventures is doing a great job at backing teams early in their development.
Botanix is the leading multi-sig setup with their Spiderchain, where BTC bridged to the chain is managed by new, randomized multi-sigs, increasing robustness by providing ‘forward security’. Interestingly, Botanix will not have their own token (at least initially) and will only use BTC and pBTC, meaning rewards and fees will be in BTC.
For retail users, four standout solutions I like:
Bitcoin L2s face significant challenges in their quest for adoption and sustainability. The inherent tension between Bitcoin's store-of-value proposition and the yield-generating mechanisms of L2s creates fundamental hurdles. However, projects that can offer unique capabilities, seamless user experiences, and compelling institutional cases have the potential to overcome these obstacles and carve out valuable niches in the expanding Bitcoin ecosystem.
The key to success lies not in merely replicating what Ethereum or Solana already offer, but in leveraging Bitcoin's unique strengths to create complementary solutions that expand the utility of the world's leading cryptocurrency without compromising its fundamental value proposition. Adoption is one killer product away.
Want to learn more about yield opportunities on Bitcoin? Reach out to us at research@chorus.one and let’s chat!
In the world of blockchain technology, where every millisecond counts, the speed of light isn’t just a scientific constant—it’s a hard limit that defines the boundaries of performance. As Kevin Bowers highlighted in his article Jump Vs. the Speed of Light, the ultimate bottleneck for globally distributed systems, like those used in trading and blockchain, is the physical constraint of how fast information can travel.
To put this into perspective, light travels at approximately 299,792 km/s in a vacuum, but in fiber optic cables (the backbone of internet communication), it slows to about 200,000 km/s due to the medium's refractive index. This might sound fast, but when you consider the distances involved in a global network, delays become significant. For example:
For applications like high-frequency trading or blockchain consensus mechanisms, this delay is simply too long. In decentralized systems, the problem worsens because nodes must exchange multiple messages to reach agreement (e.g., propagating a block and confirming it). Each round-trip adds to the latency, making the speed of light a "frustrating constraint" when near-instant coordination is the goal.
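To make the numbers concrete, here is a quick back-of-the-envelope calculation; the distances are rough great-circle figures, and real network paths are longer.

```python
FIBER_KM_PER_S = 200_000  # approximate speed of light in fiber

def one_way_ms(distance_km: float) -> float:
    return distance_km / FIBER_KM_PER_S * 1_000

for route, km in [("New York -> London", 5_600),
                  ("New York -> Singapore", 15_300)]:
    ow = one_way_ms(km)
    # a consensus exchange needing 3 round-trips pays 6x the one-way delay
    print(f"{route}: one-way {ow:.0f} ms, 3 round-trips {6 * ow:.0f} ms")
```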
Beyond the physical delay imposed by the speed of light, blockchain networks face an additional challenge rooted in information theory: the Shannon Capacity Theorem. This theorem defines the maximum rate at which data can be reliably transmitted over a communication channel. It’s expressed as:

C = B \log_2\left(1 + \frac{S}{N}\right)
where C is the channel capacity (bits per second), B is the bandwidth (in hertz), and S/N is the signal-to-noise ratio. In simpler terms, the theorem tells us that even with a perfect, lightspeed connection, there’s a ceiling on how much data a network can handle, determined by its bandwidth and the quality of the signal.
For blockchain systems, this is a critical limitation because they rely on broadcasting large volumes of transaction data to many nodes simultaneously. So, even if we could magically eliminate latency, the Shannon Capacity Theorem reminds us that the network’s ability to move data is still finite. For blockchains aiming for mass adoption—like Solana, which targets thousands of transactions per second—this dual constraint of light speed and channel capacity is a formidable hurdle.
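Plugging assumed example figures into the formula shows how hard this ceiling is. The bandwidth and signal-to-noise values below are illustrative, not measurements of any real network.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """C = B * log2(1 + S/N)"""
    return bandwidth_hz * math.log2(1 + snr_linear)

# a 100 MHz channel at 30 dB SNR (S/N = 1000)
c = shannon_capacity_bps(100e6, 1_000)
print(f"ceiling: {c / 1e6:.0f} Mbit/s, no matter how low the latency")
```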
In a computing landscape where recent technological advances have prioritized fitting more cores into a CPU rather than making them faster, and where the speed of light emerges as the ultimate bottleneck, the Jump team refuses to settle for off-the-shelf solutions or the short-term fix of buying more hardware. Instead, it reimagines existing solutions to extract maximum performance from the network layer, optimizing data transmission, reducing latency, and enhancing reliability to combat the "noise" of packet loss, congestion, and global delays.
The Firedancer project is about tailoring this concept for a blockchain world where every microsecond matters, breaking the paralysis in decision-making that arises when systems have many unoptimized components.
Firedancer is a high-performance validator client for the Solana blockchain, written in C and developed by Jump Crypto, a division of Jump Trading focused on advancing blockchain technologies. Unlike traditional validator clients that rely on generic software stacks and incremental hardware upgrades, Firedancer is a ground-up reengineering of how a blockchain node operates. Its mission is to push the Solana network to the very limits of what’s physically possible, addressing the dual constraints of light speed and channel capacity head-on.
At its core, Firedancer is designed to optimize every layer of the system, from data transmission to transaction processing. It proposes a major rewrite of the three functional components of the Agave client: networking, runtime, and consensus mechanism.
Firedancer is a big project, and for this reason it is being developed incrementally. The first Firedancer validator is nicknamed Frankendancer: Firedancer's networking layer grafted onto the Agave runtime and consensus code. Specifically, Frankendancer has implemented the following parts:
All other functionality is retained by Agave, including the runtime itself which tracks account state and executes transactions.
In this article, we'll dive into on-chain data to compare the performance of the Agave client with Frankendancer. Through data-driven analysis, we quantify whether these advancements are visible on-chain in Solana's performance. Improvements that do not surface on-chain will not be captured by this analysis.
You can walk through all the data used in this analysis via our dedicated dashboard.
While signature verification and block distribution engines are difficult to track using on-chain data, studying the dynamic behaviour of transactions can provide useful information about the QUIC implementation and block packing logic.
Transactions on Solana are encoded and sent in QUIC streams into validators from clients, cf. here. QUIC is relevant during the FetchStage, where incoming packets are batched (up to 128 per batch) and prepared for further processing. It operates at the kernel level, ensuring efficient network input handling. This makes QUIC a relevant piece of the Transaction Processing Unit (TPU) on Solana, which represents the validator logic responsible for block production. Improving QUIC ultimately means having control over transaction propagation. In this section we compare the Agave QUIC implementation with Frankendancer's fd_quic, the C implementation of QUIC by Jump Crypto.
The first difference lies in connection management. Agave utilizes a connection cache to manage connections, implemented via the solana_connection_cache module, meaning there is a lookup mechanism for reusing or tracking existing connections. It also employs an AsyncTaskSemaphore to limit the number of asynchronous tasks (set to a maximum of 2000 tasks by default). This semaphore ensures that the system does not spawn excessive tasks, providing a basic form of concurrency control.
Frankendancer implements a more explicit and granular connection management system using a free list (state->free_conn_list) and a connection map (fd_quic_conn_map) based on connection IDs. This allows precise tracking and allocation of connection resources. It also leverages receive-side scaling and kernel bypass technologies like XDP/AF_XDP to distribute incoming traffic across CPU cores with minimal overhead, enhancing scalability and performance, cf. here. It does not rely on semaphores for task limiting; instead, it uses a service queue (svc_queue) with scheduling logic (fd_quic_svc_schedule) to manage connection lifecycle events, indicating a more sophisticated event-driven approach.
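The free-list plus connection-map pattern is easy to picture with a small sketch. The Python below is a toy under assumed names (ConnPool, open, close), not fd_quic's actual C data structures: slots are pre-allocated and recycled, and live connections are found by ID in O(1), so the hot path never allocates.

```python
from collections import deque

class ConnPool:
    def __init__(self, max_conns: int):
        self.free_list = deque(range(max_conns))  # pre-allocated slot indices
        self.conn_map = {}                        # conn_id -> slot

    def open(self, conn_id: int):
        if not self.free_list:
            return None                           # at capacity: reject, don't allocate
        slot = self.free_list.popleft()
        self.conn_map[conn_id] = slot
        return slot

    def close(self, conn_id: int):
        self.free_list.append(self.conn_map.pop(conn_id))  # recycle the slot

pool = ConnPool(max_conns=2)
print(pool.open(101), pool.open(202), pool.open(303))      # 0 1 None
```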
Frankendancer also implements a stream handling pipeline. Specifically, fd_quic provides explicit stream management with functions like fd_quic_conn_new_stream() for creation, fd_quic_stream_send() for sending data, and fd_quic_tx_stream_free() for cleanup. Streams are tracked using a fd_quic_stream_map indexed by stream IDs.
Finally, for packet processing, Agave's approach focuses on basic packet sending and receiving, with asynchronous methods like send_data_async() and send_data_batch_async().
Frankendancer implements detailed packet processing with specific handlers for different packet types: fd_quic_handle_v1_initial(), fd_quic_handle_v1_handshake(), fd_quic_handle_v1_retry(), and fd_quic_handle_v1_one_rtt(). These functions parse and process packets according to their QUIC protocol roles.
Differences in QUIC implementation can be seen on-chain at the transaction level. Indeed, a more sophisticated version of QUIC means better packet handling and ultimately more room for optimization when passing transactions to the block packing logic.
After the FetchStage and the SigVerifyStage—which verifies the cryptographic signatures of transactions to ensure they are valid and authorized—there is the Banking stage. Here verified transactions are processed.
At the core of the Banking stage is the scheduler. It represents a critical component of any validator client, as it determines the order and priority of transaction processing for block producers.
Agave implements a central scheduler introduced in v1.18. Its main purpose is to loop and constantly check the incoming queue of transactions and process them as they arrive, routing them to an appropriate thread for further processing. It prioritizes transactions according to a priority formula based on fees and requested compute units.
The scheduler is responsible for pulling transactions from the receiver channel, and sending them to the appropriate worker thread based on priority and conflict resolution. The scheduler maintains a view of which account locks are in-use by which threads, and is able to determine which threads a transaction can be queued on. Each worker thread will process batches of transactions, in the received order, and send a message back to the scheduler upon completion of each batch. These messages back to the scheduler allow the scheduler to update its view of the locks, and thus determine which future transactions can be scheduled, cf. here.
Frankendancer implements its own scheduler in fd_pack. Within fd_pack, transactions are prioritized based on their reward-to-compute ratio—calculated as fees (in lamports) divided by estimated CUs—favoring those offering higher rewards per resource consumed. This prioritization happens within treaps, a blend of binary search trees and heaps, providing O(log n) access to the highest-priority transactions. Three treaps—pending (regular transactions), pending_votes (votes), and pending_bundles (bundled transactions)—segregate types, with votes balanced via reserved capacity and bundles ordered using a mathematical encoding of rewards to enforce FIFO sequencing without altering the treap’s comparison logic.
Scheduling, driven by fd_pack_schedule_next_microblock, pulls transactions from these treaps to build microblocks for banking tiles, respecting limits on CUs, bytes, and microblock counts. It ensures votes get fair representation while filling remaining space with high-priority non-votes, tracking usage via cumulative_block_cost and data_bytes_consumed.
To resolve conflicts, it uses bitsets—a container that represents a fixed-size sequence of bits—which are like quick-reference maps. Bitsets—rw_bitset (read/write) and w_bitset (write-only)—map account usage to bits, enabling O(1) intersection checks against global bitset_rw_in_use and bitset_w_in_use. Overlaps signal conflicts (e.g., write-write or read-write clashes), skipping the transaction. For heavily contested accounts (exceeding PENALTY_TREAP_THRESHOLD of 64 references), fd_pack diverts transactions to penalty treaps, delaying them until the account frees up, then promoting the best candidate back to pending upon microblock completion. A slow-path check via acct_in_use—a map of account locks per bank tile—ensures precision when bitsets flag potential issues.
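The bitset trick itself fits in a few lines. Below is an illustration using Python integers as bitsets; fd_pack's real implementation maps accounts to bits differently and adds the penalty-treap slow path described above.

```python
def bit(account_idx: int) -> int:
    return 1 << account_idx

def conflicts(tx_rw: int, tx_w: int, rw_in_use: int, w_in_use: int) -> bool:
    """O(1) check: a write clashing with any in-use account, or any
    access clashing with an in-flight write, blocks scheduling."""
    return bool(tx_w & rw_in_use) or bool(tx_rw & w_in_use)

rw_in_use = w_in_use = 0

# tx A reads account 1, writes account 3 -> no conflict, schedule it
a_rw, a_w = bit(1) | bit(3), bit(3)
assert not conflicts(a_rw, a_w, rw_in_use, w_in_use)
rw_in_use |= a_rw
w_in_use |= a_w

# tx B reads account 3 -> read-write clash with tx A, skip it
print(conflicts(bit(3), 0, rw_in_use, w_in_use))   # True
```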
Vote fees on Solana are a vital economic element of its consensus mechanism, ensuring network security and encouraging validator participation. In Solana’s delegated Proof of Stake (dPoS) system, each active validator submits one vote transaction per slot to confirm the leader’s proposed block, with an optimal delay of one slot. Delays, however, can shift votes into subsequent slots, causing the number of vote transactions per slot to exceed the active validator count. Under the current implementation, vote transactions compete with regular transactions for Compute Unit (CU) allocation within a block, influencing resource distribution.
Data reveals that the Frankendancer client includes more vote transactions than the Agave client, resulting in greater CU allocation to votes. To evaluate this difference, a dynamic Kolmogorov-Smirnov (KS) test can be applied. This non-parametric test compares two distributions by calculating the maximum difference between their Cumulative Distribution Functions (CDFs), assessing whether they originate from the same population. Unlike parametric tests with specific distributional assumptions, the KS-test’s flexibility suits diverse datasets, making it ideal for detecting behavioral shifts in dynamic systems. The test yields a p-value, where a low value (less than 0.05) indicates a significant difference between distributions.
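In practice the test is a one-liner per block window. The sketch below runs a two-sample KS test on synthetic per-block CU samples (random placeholders, not the real chain data behind our dashboard):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# hypothetical per-block CU-usage samples for each client
agave_cus = rng.normal(36e6, 4e6, 500)
frank_cus = rng.normal(34e6, 4e6, 500)

stat, p_value = ks_2samp(agave_cus, frank_cus)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}")
if p_value < 0.05:
    print("the two CU distributions differ significantly")
```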
When comparing CU usage for non-vote transactions between Agave (Version 2.1.14) and Frankendancer (Version 0.406.20113), the KS-test shows that Agave's CDF frequently lies below Frankendancer's. This suggests that Agave blocks tend to allocate more CUs to non-vote transactions compared to Frankendancer. Specifically, the probability of observing a block with lower CU usage for non-votes is higher in Frankendancer relative to Agave.
Interestingly, this does not correspond to a lower overall count of non-vote transactions; Frankendancer appears to outperform Agave in including non-vote transactions as well. Together, these findings imply that Frankendancer validators achieve higher rewards, driven by increased vote transaction inclusion and efficient CU utilization for non-vote transactions.
Why Frankendancer is able to include more vote transactions may come down to the fact that on Agave there is a maximum number of QUIC connections that can be established between a client (identified by IP address and node pubkey) and the server, ensuring network stability. The number of streams a client can open per connection is directly tied to their stake: higher-stake validators can open more streams, allowing them to process more transactions concurrently, cf. here. During high network load, lower-stake validators might face throttling, potentially missing vote opportunities, while higher-stake validators, with better bandwidth, can maintain consistent voting, indirectly affecting their influence in consensus. Frankendancer doesn't seem to suffer from the same restriction.
Although the inclusion of vote transactions plays a relevant role in Solana consensus, there are two other metrics worth exploring: Skip Rate and Validator Uptime.
Skip Rate measures a validator's ability to correctly propose a block when selected as leader. A high skip rate means lower total rewards, mainly due to missed MEV and priority fee opportunities. Missing a high number of slots also reduces total TPS, worsening the final UX.
Validator Uptime impacts vote latency and consequently final staking rewards. This metric is estimated via Timely Vote Credits (TVC), which indirectly measure how many slots a validator takes to land its votes. 100% TVC effectiveness means that validators land their votes in less than 2 slots.
As we can see, there are no major differences before epoch 755. Data shows a recently elevated Skip Rate for Frankendancer and a correspondingly low TVC effectiveness. However, it is worth noting that, since these metrics are based on averages, and a smaller share of stake is running Frankendancer, small fluctuations in Frankendancer's performance need more time to be reabsorbed.
The scheduler plays a critical role in optimizing transaction processing during block production. Its primary task is to balance transaction prioritization—based on priority fees and compute units—with conflict resolution, ensuring that transactions modifying the same account are processed without inconsistencies. The scheduler orders transactions by priority, then groups them into conflict-free batches for parallel execution by worker threads, aiming to maximize throughput while maintaining state coherence. This balancing act often results in deviations from the ideal priority order due to conflicts.
To evaluate this efficiency, we introduced a dissipation metric, D, that quantifies the distance between a transaction's optimal position o(i), based on priority and dependent on the scheduler, and its actual position in the block a(i), defined as

D = \frac{1}{N} \sum_{i=1}^{N} \left| o(i) - a(i) \right|

where N is the number of transactions in the considered block.
This metric reveals how well the scheduler adheres to the priority order amidst conflict constraints. A lower dissipation score indicates better alignment with the ideal order. Note that the dissipation D has an intrinsic component that accounts for account congestion and for the time-dependency of transaction arrival. In an ideal case, these factors should be equal for all schedulers.
Given the intrinsic nature of the dissipation, the numerical value of this estimator doesn't carry much relevance on its own. However, when comparing the results for two types of scheduler we can gather information on which one better resolves conflicts. Indeed, a higher value of the dissipation estimator indicates a preference for conflict resolution over strict transaction prioritization.
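For reference, the metric is straightforward to compute from a block's priority ordering; the snippet below follows our definition of D, with made-up priorities in place of real block data.

```python
def dissipation(priorities: list) -> float:
    """D = (1/N) * sum_i |o(i) - a(i)|: a(i) is the landed position i,
    o(i) the position the same transaction takes when sorted by priority."""
    n = len(priorities)
    by_priority = sorted(range(n), key=lambda i: -priorities[i])
    optimal = {tx: pos for pos, tx in enumerate(by_priority)}
    return sum(abs(optimal[i] - i) for i in range(n)) / n

print(dissipation([9, 7, 5, 3, 1]))   # 0.0, perfectly priority-ordered
print(dissipation([3, 9, 1, 7, 5]))   # 2.0, heavily reordered block
```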
Comparing the Frankendancer and Agave schedulers highlights how dissipation is higher for Frankendancer, independently of version. This is clearer in the dynamic KS test: only in very few instances did the Agave scheduler show a higher dissipation with statistically significant evidence.
Whether the improved conflict resolution (and hence parallelization) is due to the scheduler implementation or to the QUIC implementation is hard to tell from these data; better conflict resolution can also be achieved simply by having more transactions to select from.
Finally, comparing the percentiles of priority fees also hints at different conflict resolution in Frankendancer: despite the overall number of transactions (both vote and non-vote) and the extracted value being higher than Agave's, the median priority fee is lower.
In this article we provide a detailed comparison of the Agave and Frankendancer validator clients on the Solana blockchain, focusing on on-chain performance metrics to quantify their differences. Frankendancer, the initial iteration of Jump Crypto’s Firedancer project, integrates an advanced networking layer—including a high-performance QUIC implementation and kernel bypass—onto Agave’s runtime and consensus code. This hybrid approach aims to optimize transaction processing, and the data reveals its impact.
On-chain data shows Frankendancer includes more vote transactions per block than Agave, resulting in greater compute unit (CU) allocation to votes, a critical factor in Solana’s consensus mechanism. This efficiency ties to Frankendancer’s QUIC and scheduler enhancements. Its fd_quic implementation, with granular connection management and kernel bypass, processes packets more effectively than Agave’s simpler, semaphore-limited approach, enabling better transaction propagation.
The scheduler, fd_pack, prioritizes transactions by reward-to-compute ratio using treaps, contrasting Agave’s priority formula based on fees and compute requests. To quantify how well each scheduler adheres to ideal priority order amidst conflicts we developed a dissipation metric. Frankendancer’s higher dissipation, confirmed by KS-test significance, shows it prioritizes conflict resolution over strict prioritization, boosting parallel execution and throughput. This is further highlighted by Frankendancer’s median priority fees being lower.
A lower median for Priority Fees and higher extracted value indicates more efficient transaction processing. For validators and delegators, this translates to increased revenue. For users, it means a better overall experience. Additionally, more votes for validators and delegators lead to higher revenues from SOL issuance, while for users, this results in a more stable consensus.
The analysis, supported by the Flipside Crypto dashboard, underscores Frankendancer’s data-driven edge in transaction processing, CU efficiency, and reward potential.
A huge thanks to Amin, Cooper, Hannes, Jacob, Michael, Norbert, Omer, and Teemu for sharing their feedback on the model and the article (this doesn’t mean they agree with the presented numbers!).
Zero-knowledge proofs are entering a period of rapid growth and widespread adoption. The core technology has been battle-tested, and we have begun to see the emergence of new services and more advanced use cases. These include outsourcing of proof computation from centralized servers, which opens the door to new revenue-generating opportunities for crypto infrastructure providers.
How significant could this revenue become? This article explores the proving ecosystem and estimates the market size in the coming years. But first, let’s start by revisiting the fundamentals.
ZK proofs are cryptographic tools that prove a computation's results are correct without revealing the underlying data or re-running the computation.
There are two main types of zk proofs:
A zk proof needs to be generated and verified. Typically, a prover sends the proof and the computation result to a verifier contract, which outputs a "yes" or "no" to confirm validity. While verification is easy and cheap, generating proofs is compute-intensive.
Proving is expensive because it needs significant computing power to 1) translate programs into polynomials and 2) run the programs expressed as polynomials, which requires performing complex mathematical operations.
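The cost asymmetry is easiest to feel with the polynomial machinery itself. The toy below is not a zk proof, but it shows the same shape: producing the full polynomial product is quadratic work, while checking it at one random field point is linear and, by the Schwartz-Zippel lemma, a wrong claim survives only with negligible probability.

```python
import random

P = 2**61 - 1                      # a Mersenne prime field

def poly_mul(f, g):
    """The 'prover's' heavy work: h = f * g, O(len(f) * len(g))."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % P
    return h

def evaluate(poly, x):
    acc = 0
    for c in reversed(poly):       # Horner's rule, O(len(poly))
        acc = (acc * x + c) % P
    return acc

f = [random.randrange(P) for _ in range(1024)]
g = [random.randrange(P) for _ in range(1024)]
h = poly_mul(f, g)                 # ~1M field multiplications

r = random.randrange(P)            # the 'verifier' checks one random point
assert evaluate(f, r) * evaluate(g, r) % P == evaluate(h, r)
print("claim accepted after 3 cheap evaluations")
```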
This section overviews the current zk landscape, focusing on project types and their influence on proof generation demand.
Demand Side
Supply Side
For the privacy-focused rollup Aztec, only one proof per transaction will be generated in the browser, as depicted in the proving tree below. A similar dynamic is expected with other projects.
Monetization strategies will include fees and token incentives.
The primary revenue model will rely on charging base fees. These should cover the compute costs of proof generation. Prioritization of proving work will likely require paying optional priority fees.
The demand side and proving marketplaces will offer native token incentives to provers. These incentives are expected to be substantial and initially exceed the market size of proving fees.
To understand the proving market, we can draw analogies with the proof-of-stake (PoS) and proof-of-work (PoW) markets. Let’s examine how these comparisons hold up.
At the beginning of 2025, the PoS market is worth $16.3 billion, with the overall crypto market cap around $3.2 trillion. Assuming validators earn 5% of staking rewards, the staking market would represent approximately $815 million. This excludes priority fees and MEV rewards, which can be a significant part of validator revenues.
PoS characteristics have some similarities to zk-proving:
The PoW market can be roughly gauged using Bitcoin’s inflation rate, which is expected to be 0.84% in 2025. With a $2 trillion BTC market cap, this amounts to around $16.8 billion annually, excluding priority fees.
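Both comparisons reduce to one-line arithmetic; here it is spelled out with the article's figures:

```python
pos_staking_rewards = 16.3e9   # annual PoS staking rewards, USD
validator_share = 0.05         # assumed validator cut of rewards
print(f"PoS operator market ~ ${pos_staking_rewards * validator_share / 1e6:.0f}M")

btc_market_cap = 2e12          # USD
btc_inflation_2025 = 0.0084    # expected issuance rate
print(f"PoW (BTC) issuance ~ ${btc_market_cap * btc_inflation_2025 / 1e9:.1f}B")
```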
Both zk-proving and PoW rely on hardware, but they take different approaches. While PoW uses a “winner-takes-all” model, zk-proving creates a steady stream of proofs, resulting in more predictable earnings. This makes zk-proving less dependent on highly specialized hardware compared to Bitcoin mining.
The adoption of specialized hardware, like ASICs and FPGAs, for zk-proving will largely depend on the crypto market’s volume. Higher volumes are likely to encourage more investment in these technologies.
With these dynamics in mind, we can explore the revenue potential zk-proving represents.
Our analysis will be based on the Analyzing and Benchmarking ZK-Rollups paper, which benchmarks zkSync and Polygon zkEVM on various metrics, including proving time.
While the paper benchmarks zkSync Era and Polygon zkEVM, our analysis will focus on zkSync due to its more significant transaction volumes (230M per year vs. 5.5M for Polygon zkEVM). At higher transaction volumes, Polygon zkEVM has comparable costs to zkSync ($0.004 per transaction).
Approach
Results
A single Nvidia L4 GPU can prove a batch of ~4,000 transactions on zkSync in 9.5 hours. Given that zkSync submits a new batch to L1 every 10 minutes, around 57 NVIDIA L4 GPUs are required to keep up with this pace.
Proof Generation Cost
Knowing the compute time, we can calculate proving costs per batch, proof, and transaction:
The above calculations can be followed in detail in Proving Market Estimate (rows 1-29).
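For a feel of the arithmetic, the GPU count follows directly from the batch cadence, and a cost per transaction can be sketched once you assume an hourly GPU price. The $0.70/h figure below is an assumption for illustration; the spreadsheet's cost model includes more than raw compute.

```python
proof_time_h = 9.5           # one NVIDIA L4 proves a ~4,000-tx batch
batch_interval_min = 10      # zkSync posts a new batch every 10 minutes
txs_per_batch = 4_000

gpus = proof_time_h * 60 / batch_interval_min
print(f"GPUs needed to keep pace: {gpus:.0f}")        # ~57

l4_usd_per_h = 0.70          # assumed on-demand price, varies by provider
cost_per_batch = proof_time_h * l4_usd_per_h
print(f"cost/batch ${cost_per_batch:.2f}, cost/tx ${cost_per_batch / txs_per_batch:.5f}")
```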
Proving costs depend on the efficiency of hardware and proof systems. The hardware costs can be optimized by, for example, using bare metal machines.
2024: Current Costs
2025: Optimizations Begin
2030: Proving costs fall to $0.001 per transaction across all rollups.
2024: Real Data
The number of transactions generated by rollups and other demand sources:
2025: Market Takes Off
The proving market begins to gain momentum. Estimated number of transactions: ~4.4B, including:
2030: zk-Proving at Scale
Proving will have reached widespread adoption. Estimated number of transactions: ~600B
We estimate the proving surplus based on previously estimated proving costs. This surplus is revenue from base and priority fees minus hardware costs. As the market matures, base fees and proving costs decrease, but priority fees will be a significant revenue driver.
Token incentives add a further value boost. While it's difficult to foresee the size of these investments, the estimate is based on information collected from the projects.
2024: Early Market
2025: Expanding Demand
The total market is projected at $97M, including:
2030: Almost a Two-Billion-Dollar Market
The total zk-proving market opportunity is estimated at $1.34B.
A detailed analysis supporting the calculations is available in Proving Market Estimate (rows 32-57).
Estimates involving so many variables over such a long horizon will always carry a margin of error. To support the main conclusion, we include a sensitivity analysis that presents other potential outcomes in 2025 and 2030 based on different transaction volumes and proving surplus. For simplicity, we kept proving costs fixed at $0.059 and $0.001 per transaction in 2025 and 2030, respectively.
In 2025, the most pessimistic scenario estimates a total market value of just $12.5M, with less than a 10% proving surplus and 2B transactions. Conversely, the ultra-optimistic scenario imagines the market at $55M, based on a 50% surplus and 6B transactions.
In 2030, if things don’t go well, we could see a proving market of roughly $300M, from 10% proving surplus and 300B transactions. The best outcome assumes a $1.7B market based on a 90% surplus and 900B transactions.
Estimating so far into the future comes with inherent uncertainties. Below are potential error factors categorized into downside and upside scenarios:
Downside
Upside
After PoW and PoS, zk is the next-generation crypto technology that complements its predecessors. Comparing proving revenue opportunities with PoW or PoS is tricky because they serve different purposes. Still, for context:
We estimated that the zk-proving market could grow to $97M by 2025 and $1.34B by 2030. While these estimates are more of an educated guess, they’re meant to point out the trends and factors anyone interested in this space should monitor. These factors include:
Let’s revisit these forecasts a year from now.