
Blog

Core Research
Is The Speed of Light Too Slow?
Blockchain systems face two hard limits: the speed at which light travels through a given medium, and the Shannon Capacity Theorem. Firedancer, by Jump Crypto, re-engineers Solana validators to test these limits. In this article, we'll dive deep into the first version of Firedancer, dubbed Frankendancer, which merges a custom networking stack with Agave's runtime.
March 24, 2025
5 min read

A Data-Driven Analysis of Frankendancer

TL;DR:

  • Blockchain systems like Solana face two hard limits: the speed at which light travels through a medium, which delays data transmission, and the Shannon Capacity Theorem, which caps throughput even at maximum speed.
  • Firedancer, built by Jump Crypto, re-engineers Solana’s validator client to test these limits.
  • Frankendancer, the first version of Firedancer, merges a custom networking stack with Agave’s runtime.
  • The improvements introduced by Frankendancer include an advanced QUIC setup and a custom scheduler with different ordering logic.
  • Data analysis shows that Frankendancer not only includes more vote transactions per block than Agave but also handles more non-vote transactions.
  • The rebuilt scheduler favors conflict resolution over strict priority ordering for better parallel execution. This is reflected in more optimally packed, more valuable blocks.
  • Despite a lower median priority fee, Frankendancer achieves higher throughput, hinting at superior transaction handling.
  • There are minor gaps in skip rate and vote latency, likely due to Frankendancer’s smaller stake.
  • Full data insights available at Flipside Crypto Dashboard.

Introduction

In the world of blockchain technology, where every millisecond counts, the speed of light isn’t just a scientific constant—it’s a hard limit that defines the boundaries of performance. As Kevin Bowers highlighted in his article Jump Vs. the Speed of Light, the ultimate bottleneck for globally distributed systems, like those used in trading and blockchain, is the physical constraint of how fast information can travel. 

To put this into perspective, light travels at approximately 299,792 km/s in a vacuum, but in fiber optic cables (the backbone of internet communication), it slows to about 200,000 km/s due to the medium's refractive index. This might sound fast, but when you consider the distances involved in a global network, delays become significant. For example:

  • A round-trip signal between New York and Singapore, roughly 15,300 km apart as the crow flies (and longer via actual fiber routes), takes about 200 ms. That’s 200 ms of pure latency, before accounting for processing, queuing, or network congestion.

For applications like high-frequency trading or blockchain consensus mechanisms, this delay is simply too long. In decentralized systems, the problem worsens because nodes must exchange multiple messages to reach agreement (e.g., propagating a block and confirming it). Each round-trip adds to the latency, making the speed of light a "frustrating constraint" when near-instant coordination is the goal.
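
As a rough sanity check on the numbers above, here is a small back-of-the-envelope calculation in Python; the distances and speeds are the ones quoted in the text, and the real-world figure is higher because fiber routes are longer than the great-circle distance.

# Back-of-the-envelope latency for the New York-Singapore example above.
speed_in_fiber_km_s = 200_000          # ~2/3 of c, due to the fiber's refractive index
great_circle_ny_sg_km = 15_300         # straight-line distance; real fiber routes are longer

round_trip_ms = 2 * great_circle_ny_sg_km / speed_in_fiber_km_s * 1_000
print(f"Idealized round trip: {round_trip_ms:.0f} ms")   # ~153 ms; ~200 ms over real routes

# Consensus protocols typically need several message exchanges, multiplying the cost.
for round_trips in (1, 2, 3):
    print(f"{round_trips} round trip(s): ~{round_trips * round_trip_ms:.0f} ms of pure latency")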

The Shannon Capacity Theorem: Another Layer of Limitation

Beyond the physical delay imposed by the speed of light, blockchain networks face an additional challenge rooted in information theory: the Shannon Capacity Theorem. This theorem defines the maximum rate at which data can be reliably transmitted over a communication channel. It’s expressed as:

C = B × log₂(1 + S/N)

where C is the channel capacity (bits per second), B is the bandwidth (in hertz), and S/N is the signal-to-noise ratio. In simpler terms, the theorem tells us that even with a perfect, lightspeed connection, there’s a ceiling on how much data a network can handle, determined by its bandwidth and the quality of the signal.

For blockchain systems, this is a critical limitation because they rely on broadcasting large volumes of transaction data to many nodes simultaneously. So, even if we could magically eliminate latency, the Shannon Capacity Theorem reminds us that the network’s ability to move data is still finite. For blockchains aiming for mass adoption—like Solana, which targets thousands of transactions per second—this dual constraint of light speed and channel capacity is a formidable hurdle.
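
To make the formula concrete, here is a minimal sketch of the capacity calculation; the bandwidth and signal-to-noise figures are illustrative assumptions, not measurements of any real link.

import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers (assumptions, not measurements): a 1 GHz channel at 30 dB SNR.
snr_linear = 10 ** (30 / 10)                      # 30 dB is a factor of 1000
capacity = shannon_capacity_bps(1e9, snr_linear)
print(f"Capacity ceiling: {capacity / 1e9:.2f} Gbit/s")   # ~9.97 Gbit/s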

Firedancer: A Vision for Blockchain Performance

In a computing landscape where recent technological advances have prioritized fitting more cores into a CPU rather than making them faster, and where the speed of light emerges as the ultimate bottleneck, the Jump team refuses to settle for off-the-shelf solutions or the short-term fix of buying more hardware. Instead, it reimagines existing solutions to extract maximum performance from the network layer, optimizing data transmission, reducing latency, and enhancing reliability to combat the "noise" of packet loss, congestion, and global delays.

The Firedancer project is about tailoring this concept for a blockchain world where every microsecond matters, breaking the paralysis in decision-making that arises when systems have many unoptimized components.

Firedancer is a high-performance validator client for the Solana blockchain, written in C and developed by Jump Crypto, a division of Jump Trading focused on advancing blockchain technologies. Unlike traditional validator clients that rely on generic software stacks and incremental hardware upgrades, Firedancer is a ground-up reengineering of how a blockchain node operates. Its mission is to push the Solana network to the very limits of what’s physically possible, addressing the dual constraints of light speed and channel capacity head-on.

At its core, Firedancer is designed to optimize every layer of the system, from data transmission to transaction processing. It proposes a major rewrite of the three functional components of the Agave client: networking, runtime, and consensus mechanism.

Frankendancer

Firedancer is a big project, and for this reason it is being developed incrementally. The first Firedancer validator is nicknamed Frankendancer. It is Firedancer’s networking layer grafted onto the Agave runtime and consensus code. Specifically, Frankendancer has implemented the following parts:

  • The QUIC and UDP ingress networking pieces, using high performance kernel bypass networking.
  • The block distribution engine and egress networking, also using kernel bypass. The engine contains a full reimplementation of erasure coding and the Solana turbine protocol for packet routing.
  • Signature verification with a custom AVX512 ED25519 implementation.
  • The block packing logic.

All other functionality is retained by Agave, including the runtime itself which tracks account state and executes transactions.

In this article, we’ll dive into on-chain data to compare the performance of the Agave client with Frankendancer. Through data-driven analysis, we quantify whether these advancements are visible on-chain in Solana’s performance. Since the analysis relies only on on-chain data, not all improvements will be visible through it.

You can walk through all the data used in this analysis via our dedicated dashboard.

What to Look for

While signature verification and block distribution engines are difficult to track using on-chain data, studying the dynamic behaviour of transactions can provide useful information about the QUIC implementation and the block packing logic.

QUIC Implementation

Transactions on Solana are encoded and sent in QUIC streams from clients to validators, cf. here. QUIC is relevant during the FetchStage, where incoming packets are batched (up to 128 per batch) and prepared for further processing. It operates at the kernel level, ensuring efficient network input handling. This makes QUIC a relevant piece of the Transaction Processing Unit (TPU) on Solana, which represents the logic of the validator responsible for block production. Improving QUIC ultimately means having control over transaction propagation. In this section we compare the Agave QUIC implementation with Frankendancer's fd_quic—the C implementation of QUIC by Jump Crypto.

Fig. 1: Validator TPU. Source from Anza documentation.

The first difference lies in connection management. Agave utilizes a connection cache to manage connections, implemented via the solana_connection_cache module, meaning there is a lookup mechanism for reusing or tracking existing connections. It also employs an AsyncTaskSemaphore to limit the number of asynchronous tasks (set to a maximum of 2000 tasks by default). This semaphore ensures that the system does not spawn excessive tasks, providing a basic form of concurrency control.

Frankendancer implements a more explicit and granular connection management system using a free list (state->free_conn_list) and a connection map (fd_quic_conn_map) based on connection IDs. This allows precise tracking and allocation of connection resources. It also leverages receive-side scaling and kernel bypass technologies like XDP/AF_XDP to distribute incoming traffic across CPU cores with minimal overhead, enhancing scalability and performance, cf. here. It does not rely on semaphores for task limiting; instead, it uses a service queue (svc_queue) with scheduling logic (fd_quic_svc_schedule) to manage connection lifecycle events, indicating a more sophisticated event-driven approach.

Frankendancer also implements a stream handling pipeline. Specifically, fd_quic provides explicit stream management with functions like fd_quic_conn_new_stream() for creation, fd_quic_stream_send() for sending data, and fd_quic_tx_stream_free() for cleanup. Streams are tracked using a fd_quic_stream_map indexed by stream IDs.
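
As a rough illustration of the free-list-plus-connection-map pattern described above (and not of fd_quic's actual C data structures), a toy version in Python might look like this:

from collections import deque

class ConnectionPool:
    """Toy free list + connection map keyed by connection ID.

    This mirrors the allocation pattern described above (pre-allocated slots with
    O(1) reuse and lookup), not the actual fd_quic data structures.
    """

    def __init__(self, max_conns: int):
        self.free_list = deque(range(max_conns))   # pre-allocated slots awaiting use
        self.conn_map = {}                         # connection ID -> slot index

    def open(self, conn_id: bytes):
        if not self.free_list:
            return None                            # pool exhausted: reject instead of allocating
        slot = self.free_list.popleft()
        self.conn_map[conn_id] = slot
        return slot

    def close(self, conn_id: bytes) -> None:
        slot = self.conn_map.pop(conn_id, None)
        if slot is not None:
            self.free_list.append(slot)            # hand the slot back for reuse

pool = ConnectionPool(max_conns=4)
print(pool.open(b"conn-1"))    # 0
pool.close(b"conn-1")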

Finally, for packet processing, Agave's approach focuses on basic packet sending and receiving, with asynchronous methods like send_data_async() and send_data_batch_async().

Frankendancer implements detailed packet processing with specific handlers for different packet types: fd_quic_handle_v1_initial(), fd_quic_handle_v1_handshake(), fd_quic_handle_v1_retry(), and fd_quic_handle_v1_one_rtt(). These functions parse and process packets according to their QUIC protocol roles.

Differences in the QUIC implementation can be seen on-chain at the transaction level. Indeed, a more "sophisticated" version of QUIC means better handling of packets and, ultimately, more room for optimization when handing them to the block packing logic.

Block Packing Logic

After the FetchStage and the SigVerifyStage—which verifies the cryptographic signatures of transactions to ensure they are valid and authorized—there is the Banking stage. Here verified transactions are processed. 

Fig. 2: Validator TPU with a focus on Banking Stage. Source from Anza blog.

At the core of the Banking stage is the scheduler. It represents a critical component of any validator client, as it determines the order and priority of transaction processing for block producers. 

Agave implements a central scheduler, introduced in v1.18. Its main purpose is to loop and constantly check the incoming queue of transactions, processing them as they arrive and routing them to an appropriate thread for further processing. It prioritizes transactions according to a formula based on the fees paid and the compute units requested.

The scheduler is responsible for pulling transactions from the receiver channel, and sending them to the appropriate worker thread based on priority and conflict resolution. The scheduler maintains a view of which account locks are in-use by which threads, and is able to determine which threads a transaction can be queued on. Each worker thread will process batches of transactions, in the received order, and send a message back to the scheduler upon completion of each batch. These messages back to the scheduler allow the scheduler to update its view of the locks, and thus determine which future transactions can be scheduled, cf. here.
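
A heavily simplified sketch of the idea described above, priority ordering combined with per-account lock tracking, might look like the following; it abstracts away Agave's actual data structures, batching, and worker feedback loop.

import heapq

def schedule(transactions, num_threads=4):
    """Greedy sketch: pop by priority, route to a worker whose lock view allows it.

    Each transaction is a dict with 'id', 'priority', and 'accounts' (the account
    keys it locks). Real schedulers distinguish read and write locks, batch work,
    and release locks when workers report completion; this only shows the core
    priority-plus-conflict check.
    """
    heap = [(-tx["priority"], tx["id"], tx) for tx in transactions]
    heapq.heapify(heap)

    locks = {}                                   # account -> worker thread holding it
    queues = [[] for _ in range(num_threads)]

    while heap:
        _, _, tx = heapq.heappop(heap)
        holders = {locks[a] for a in tx["accounts"] if a in locks}
        if len(holders) > 1:
            continue                             # accounts span two workers: skip for now
        thread = holders.pop() if holders else min(range(num_threads), key=lambda t: len(queues[t]))
        queues[thread].append(tx["id"])
        for account in tx["accounts"]:
            locks[account] = thread
    return queues

txs = [
    {"id": "t1", "priority": 900, "accounts": {"alice", "amm"}},
    {"id": "t2", "priority": 800, "accounts": {"bob"}},
    {"id": "t3", "priority": 700, "accounts": {"alice"}},
]
print(schedule(txs, num_threads=2))              # [['t1', 't3'], ['t2']]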

Frankendancer implements its own scheduler in fd_pack. Within fd_pack, transactions are prioritized based on their reward-to-compute ratio—calculated as fees (in lamports) divided by estimated CUs—favoring those offering higher rewards per resource consumed. This prioritization happens within treaps, a blend of binary search trees and heaps, providing O(log n) access to the highest-priority transactions. Three treaps—pending (regular transactions), pending_votes (votes), and pending_bundles (bundled transactions)—segregate types, with votes balanced via reserved capacity and bundles ordered using a mathematical encoding of rewards to enforce FIFO sequencing without altering the treap’s comparison logic.

Scheduling, driven by fd_pack_schedule_next_microblock, pulls transactions from these treaps to build microblocks for banking tiles, respecting limits on CUs, bytes, and microblock counts. It ensures votes get fair representation while filling remaining space with high-priority non-votes, tracking usage via cumulative_block_cost and data_bytes_consumed.

To resolve conflicts, it uses bitsets—a container that represents a fixed-size sequence of bits—which are like quick-reference maps. Bitsets—rw_bitset (read/write) and w_bitset (write-only)—map account usage to bits, enabling O(1) intersection checks against global bitset_rw_in_use and bitset_w_in_use. Overlaps signal conflicts (e.g., write-write or read-write clashes), skipping the transaction. For heavily contested accounts (exceeding PENALTY_TREAP_THRESHOLD of 64 references), fd_pack diverts transactions to penalty treaps, delaying them until the account frees up, then promoting the best candidate back to pending upon microblock completion. A slow-path check via acct_in_use—a map of account locks per bank tile—ensures precision when bitsets flag potential issues.
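
The bitset idea can be illustrated with a short sketch; this is a conceptual toy (hash-based bit mapping, single global bitsets), not the fd_pack C implementation, which also keeps the exact per-account maps mentioned above.

def account_bit(account: str, width: int = 64) -> int:
    """Map an account key to one bit; hash-based, so collisions are possible."""
    return 1 << (hash(account) % width)

def conflicts(tx_reads, tx_writes, rw_in_use, w_in_use):
    """Cheap bitset pre-check: does the transaction clash with accounts already in use?"""
    read_bits = 0
    write_bits = 0
    for account in tx_reads:
        read_bits |= account_bit(account)
    for account in tx_writes:
        write_bits |= account_bit(account)
    # A write clashes with any prior use; a read clashes only with prior writes.
    return bool(write_bits & rw_in_use) or bool(read_bits & w_in_use)

# Global bitsets of accounts referenced by already-scheduled microblocks.
rw_in_use = account_bit("token_program")   # someone is reading token_program
w_in_use = 0

print(conflicts(["token_program"], [], rw_in_use, w_in_use))   # False: read vs. read is fine
print(conflicts([], ["token_program"], rw_in_use, w_in_use))   # True: write vs. in-use account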

Data Walkthrough

Transactions & Extracted Value

Vote fees on Solana are a vital economic element of its consensus mechanism, ensuring network security and encouraging validator participation. In Solana’s delegated Proof of Stake (dPoS) system, each active validator submits one vote transaction per slot to confirm the leader’s proposed block, with an optimal delay of one slot. Delays, however, can shift votes into subsequent slots, causing the number of vote transactions per slot to exceed the active validator count. Under the current implementation, vote transactions compete with regular transactions for Compute Unit (CU) allocation within a block, influencing resource distribution.

Fig. 3: Relevant percentiles of vote transactions included in a block, divided by software version. The percentiles are computed using hourly data. Source from our dedicated dashboard.

Data reveals that the Frankendancer client includes more vote transactions than the Agave client, resulting in greater CU allocation to votes. To evaluate this difference, a dynamic Kolmogorov-Smirnov (KS) test can be applied. This non-parametric test compares two distributions by calculating the maximum difference between their Cumulative Distribution Functions (CDFs), assessing whether they originate from the same population. Unlike parametric tests with specific distributional assumptions, the KS-test’s flexibility suits diverse datasets, making it ideal for detecting behavioral shifts in dynamic systems. The test yields a p-value, where a low value (less than 0.05) indicates a significant difference between distributions.
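
For readers who want to reproduce this kind of test, a minimal sketch using scipy looks as follows; the hourly CU samples here are synthetic placeholders standing in for the on-chain data grouped by client version.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic placeholders for hourly samples of CUs used by non-vote transactions,
# one array per client version; the real analysis uses on-chain block data.
agave_cu = rng.normal(loc=40e6, scale=5e6, size=500)
frankendancer_cu = rng.normal(loc=38e6, scale=5e6, size=500)

stat, p_value = ks_2samp(agave_cu, frankendancer_cu)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.2e}")
if p_value < 0.05:
    print("The two CU distributions differ significantly for this window.")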

Fig. 4: Distribution of p-value from a dynamical KS-test computed from the usage of CU from non-vote transactions. The CDFs are computed using hourly data. Source from our dedicated dashboard.

When comparing CU usage for non-vote transactions between Agave (Version 2.1.14) and Frankendancer (Version 0.406.20113), the KS-test shows that Agave’s CDF frequently lies below Frankendancer’s (visualized as blue dots). This suggests that Agave blocks tend to allocate more CUs to non-vote transactions compared to Frankendancer. Specifically, the probability of observing a block with lower CU usage for non-votes is higher in Frankendancer relative to Agave.

Fig. 5: Relevant percentiles for non-vote transactions included in a block (top row) and fee collected by validators (bottom row) divided by software version. The percentiles are computed using hourly data. Source from our dedicated dashboard.

Interestingly, this does not correspond to a lower overall count of non-vote transactions; Frankendancer appears to outperform Agave in including non-vote transactions as well. Together, these findings imply that Frankendancer validators achieve higher rewards, driven by increased vote transaction inclusion and efficient CU utilization for non-vote transactions.

The reason Frankendancer is able to include more vote transactions may be that on Agave there is a maximum number of QUIC connections that can be established between a client (identified by IP address and node pubkey) and the server, to preserve network stability. The number of streams a client can open per connection is directly tied to its stake. Higher-stake validators can open more streams, allowing them to process more transactions concurrently, cf. here. During high network load, lower-stake validators might face throttling, potentially missing vote opportunities, while higher-stake validators, with better bandwidth, can maintain consistent voting, indirectly affecting their influence in consensus. Frankendancer doesn't seem to suffer from the same restriction.

Skip Rate and Validator Uptime

Although the inclusion of vote transactions plays a relevant role in Solana consensus, there are two other metrics worth exploring: Skip Rate and Validator Uptime.

Skip Rate measures how often a validator fails to produce a block when selected as leader. A high skip rate means lower total rewards, mainly due to missed MEV and priority fee opportunities. Missing a high number of slots also reduces total TPS, worsening the end-user experience.

Validator Uptime impacts vote latency and consequently final staking rewards. This metric is estimated via Timely Vote Credits (TVC), which indirectly measure how long a validator takes to land its votes. A 100% TVC effectiveness means that validators land their votes in less than 2 slots.

Fig. 6: Skip Rate (upper panel) and TVC effectiveness (lower panel) divided by software version. Source from our dedicated dashboard.

As we can see, there are no major differences before epoch 755. Data shows a recent elevated Skip Rate for Frankendancer and a correspondingly low TVC effectiveness. However, it is worth noting that, since these metrics are based on averages, and considering the smaller stake running Frankendancer, small fluctuations in Frankendancer's performance need more time to be reabsorbed.

Scheduler Dissipation

The scheduler plays a critical role in optimizing transaction processing during block production. Its primary task is to balance transaction prioritization—based on priority fees and compute units—with conflict resolution, ensuring that transactions modifying the same account are processed without inconsistencies. The scheduler orders transactions by priority, then groups them into conflict-free batches for parallel execution by worker threads, aiming to maximize throughput while maintaining state coherence. This balancing act often results in deviations from the ideal priority order due to conflicts. 

To evaluate this efficiency, we introduced a dissipation metric, D, that quantifies the distance between a transaction’s optimal position o(i)—based on priority and dependent on the scheduler—and its actual position in the block a(i), defined as

where N is the number of transactions in the considered block.

This metric reveals how well the scheduler adheres to the priority order amidst conflict constraints. A lower dissipation score indicates better alignment with the ideal order. Note that the dissipation D has an intrinsic component that accounts for account congestion and for the time dependence of transaction arrivals. In an ideal case, these factors should be equal for all schedulers.

Given this intrinsic component, the numerical value of the estimator doesn't carry much relevance on its own. However, comparing the results for two types of scheduler tells us which one resolves conflicts better. Indeed, a higher value of the dissipation estimator indicates a preference towards conflict resolution rather than strict transaction prioritization.
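
Since the exact formula is not reproduced here, the sketch below assumes one plausible form, the mean absolute displacement between a transaction's priority-ranked position and its actual position in the block; the article's normalization may differ.

def dissipation(transactions):
    """Assumed form: mean |actual position - priority-ranked position| over a block.

    `transactions` is a list of (tx_id, priority) in the order they appear in the
    block. The article's exact normalization may differ; this captures the idea of
    measuring how far the scheduler strays from the ideal priority ordering.
    """
    n = len(transactions)
    ideal = sorted(range(n), key=lambda i: -transactions[i][1])
    optimal_pos = {idx: rank for rank, idx in enumerate(ideal)}
    return sum(abs(i - optimal_pos[i]) for i in range(n)) / n

block = [("tx_a", 900), ("tx_b", 1200), ("tx_c", 100), ("tx_d", 500)]
print(dissipation(block))   # 1.0 here; 0.0 would mean a perfectly priority-ordered block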

Fig. 7: Relevant percentiles for the scheduler dissipation estimator divided by software version. The percentiles are computed using hourly data. Source from our dedicated dashboard.

Comparing the Frankendancer and Agave schedulers highlights how dissipation is higher for Frankendancer, independently of the version. This is clearer in the dynamical KS test: only in very few instances did the Agave scheduler show a higher dissipation with statistically significant evidence.

Fig. 8: Distribution of p-value from a dynamical KS-test computed from the scheduler dissipation estimator divided by software versions. The CDFs are computed using hourly data. Source from our dedicated dashboard.

Whether the better conflict resolution—and hence parallelization—is due to the scheduler implementation or to the QUIC implementation is hard to tell from these data. Indeed, better conflict resolution can also be achieved simply by having more transactions to select from.

Fig. 9: Relevant percentiles for transaction priority fees (PF) divided by software version. The percentiles are computed using hourly data. Source from our dedicated dashboard.

Finally, comparing the percentiles of priority fees also hints at a different conflict resolution in Frankendancer. Indeed, despite the overall number of transactions (both vote and non-vote) and the extracted value being higher than Agave's, the median PF is lower.

Conclusions

In this article we provide a detailed comparison of the Agave and Frankendancer validator clients on the Solana blockchain, focusing on on-chain performance metrics to quantify their differences. Frankendancer, the initial iteration of Jump Crypto’s Firedancer project, integrates an advanced networking layer—including a high-performance QUIC implementation and kernel bypass—onto Agave’s runtime and consensus code. This hybrid approach aims to optimize transaction processing, and the data reveals its impact.

On-chain data shows Frankendancer includes more vote transactions per block than Agave, resulting in greater compute unit (CU) allocation to votes, a critical factor in Solana’s consensus mechanism. This efficiency ties to Frankendancer’s QUIC and scheduler enhancements. Its fd_quic implementation, with granular connection management and kernel bypass, processes packets more effectively than Agave’s simpler, semaphore-limited approach, enabling better transaction propagation.

The scheduler, fd_pack, prioritizes transactions by reward-to-compute ratio using treaps, contrasting Agave’s priority formula based on fees and compute requests. To quantify how well each scheduler adheres to ideal priority order amidst conflicts we developed a dissipation metric. Frankendancer’s higher dissipation, confirmed by KS-test significance, shows it prioritizes conflict resolution over strict prioritization, boosting parallel execution and throughput. This is further highlighted by Frankendancer’s median priority fees being lower.

A lower median for Priority Fees and higher extracted value indicates more efficient transaction processing. For validators and delegators, this translates to increased revenue. For users, it means a better overall experience. Additionally, more votes for validators and delegators lead to higher revenues from SOL issuance, while for users, this results in a more stable consensus.

The analysis, supported by the Flipside Crypto dashboard, underscores Frankendancer’s data-driven edge in transaction processing, CU efficiency, and reward potential.

Networks
Nillion Mainnet Goes Live
With the launch of the Nillion mainnet, let's take a deep-dive into current challenges surrounding private data exchange and how Nillion addresses these issues
March 24, 2025
5 min read

Nillion has officially launched its mainnet, ushering in a new era of private, decentralized computation. Chorus One has supported the network since early days, including the Genesis Sprint and Catalyst Convergence phases. With the mainnet launch, we are now proud to join the network as a Genesis Validator, and support $NIL staking from day one!

If you're looking for a trusted validator, backed by a team of 35+ engineers committed to delivering a best-in-class staking experience, select the Chorus One validator and start staking with us today!

Redefining Data Privacy in the Age of AI

The rapid expansion of AI-driven applications and platforms has revolutionized everything from email composition to the rise of virtual influencers. AI has permeated countless aspects of our daily lives, offering unprecedented convenience and capabilities. However, with this explosive growth comes an increasingly urgent question: How can we enjoy the benefits of AI without compromising our privacy? This concern extends beyond AI to other domains where sensitive data exchange is critical, such as healthcare, identity verification, and trading. While privacy is often viewed as an impediment to these use cases, Nillion posits that it can actually be an enabler. In this article, we'll delve into the current challenges surrounding private data exchange, how Nillion addresses these issues, and explore the potential it unlocks.

The Value of Data and the Privacy Paradox

Privacy in blockchain technology is not a novel concept. Over the years, several protocols have emerged, offering solutions like private transactions and obfuscation of user identities. However, privacy extends far beyond financial transactions. It could be argued that privacy has the potential to unlock a multitude of non-financial use cases—if only we could compute on private data without compromising its confidentiality. Feeding private data into generative AI platforms or allowing them to train on user-generated content raises significant privacy concerns.

Data Categories and Privacy Concerns

Every day, we unknowingly share fragments of our data through various channels. This data can be categorized into three broad types:

  • Public Data: Instagram posts, blogs, tweets, Google reviews, Reddit comments, real estate listings.
  • Partially Private Data: Blockchain transactions, deleted tweets, search history, advertising cookies.
  • Private Data: Transaction data, text messages, voicemails, medical records, personal photos, location data.

The publicly shared data has fueled the growth of social media and the internet, generating billions of dollars in economic value and creating jobs. Companies have capitalized on this data to improve algorithms and enhance targeted advertising, leading to a concentration of data within a few powerful entities, as evidenced by scandals like Cambridge Analytica. Users, often unaware of the implications, continue to feed these data monopolies, further entrenching their dominance. With the rise of AI wearables, the potential for privacy invasion only increases.

As awareness of the importance of privacy grows, it becomes clear that while people are generally comfortable with their data being used, they want its contents to remain confidential. This desire for privacy presents a significant challenge: how can we allow services to use data without revealing the underlying information? Traditional encryption methods require decryption before computation, which introduces security vulnerabilities and increases the risk of data misuse.

Another critical issue is the concentration of sensitive data. Ideally, high-value data should be decentralized to avoid central points of failure, but sharing data across multiple parties or nodes raises concerns about efficiency and consistent security standards.

This is where Nillion comes in. While blockchains have decentralized transactions, Nillion seeks to decentralize high-value data itself.

What is Nillion?

Nillion is a secure computation network designed to decentralize trust for high-value data. It addresses privacy challenges by leveraging Privacy-Enhancing Technologies (PETs), particularly Multi-Party Computation (MPC). These PETs enable users to securely store high-value data on Nillion's peer-to-peer network of nodes and allow computations to be executed on the masked data itself. This approach eliminates the need to decrypt data prior to computation, thereby enhancing the security of sensitive information.
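
As a generic illustration of the MPC idea (computing on shares so that no single node ever sees the underlying values), here is a toy additive secret-sharing example; it is not Nillion's protocol or API, just the basic building block.

import secrets

PRIME = 2**61 - 1   # toy prime field

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into additive shares; fewer than all n shares reveal nothing."""
    parts = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    parts.append((secret - sum(parts)) % PRIME)
    return parts

def add_shared(a: list[int], b: list[int]) -> list[int]:
    """Each node adds its own shares locally, never seeing the underlying values."""
    return [(x + y) % PRIME for x, y in zip(a, b)]

def reconstruct(parts: list[int]) -> int:
    return sum(parts) % PRIME

salary_a, salary_b = 70_000, 55_000
total = reconstruct(add_shared(share(salary_a, 3), share(salary_b, 3)))
print(total)   # 125000, computed without any single node seeing either input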

The Nillion network enables computations on hidden data, unlocking new possibilities across various sectors. Early adopters in the Nillion community are already building tools for private predictive AI, secure storage and compute solutions for healthcare, password management, and trading data. Developers can create applications and services that utilize PETs like MPC to perform blind computations on private user data without revealing it to the network or other users.

The Nillion Network operates through two interdependent layers:

  • Coordination Layer: Governed by the NilChain, a Cosmos-based network that coordinates payments for storage operations and blind computations performed on the network.
  • Orchestration Layer: Powered by Petnet, this layer harnesses PETs like MPC to protect data at rest and enable blind computations on that data.

When decentralized applications (dApps) or other blockchain networks require privacy-enhanced data (e.g., blind computations), they must pay in $NIL, the network's native token. The Coordination Layer's nodes manage the payments between the dApp and the Petnet, while infrastructure providers on the Petnet are rewarded in $NIL for securely storing data and performing computations.

The Coordination Layer functions as a Cosmos chain, with infrastructure providers staking $NIL to secure the network, just like in other Cosmos-based chains. This dual-layer architecture ensures that Nillion can scale effectively while maintaining robust security and privacy standards.

Clustering on the Petnet

At the heart of Nillion's architecture is the concept of clustering. Each cluster consists of a variable number of nodes tailored to meet specific security, cost, and performance requirements. Unlike traditional blockchains, Nillion's compute network does not rely on a global shared state, allowing it to scale both vertically and horizontally. As demand for storage or compute power increases, clusters can scale up their infrastructure or new clusters of nodes can be added.

Clusters can be specialized to handle different types of requests, such as provisioning large amounts of storage for secrets or utilizing specific hardware to accelerate particular computations. This flexibility enables the Nillion network to adapt to various use cases and workloads.

The Role of $NIL

$NIL is the governance and staking token of the Nillion network, playing a crucial role in securing and managing the network. Its primary functions include:

  1. Securing the Coordination Layer: Staking $NIL accrues voting power, which is used to secure the network and determine the active set of validators through a Delegated Proof of Stake mechanism.
  2. Managing Network Resources: Users pay $NIL tokens to access the Coordination Layer or request blind computations, enabling efficient resource management.
  3. Economics of Petnet Clusters: Infrastructure providers earn $NIL for facilitating blind computations and securely storing data.
  4. Network Governance: $NIL holders can stake their tokens to vote on on-chain proposals within the Coordination Layer or delegate their voting power to others.

Use Cases for Nillion

Nillion's advanced data privacy capabilities open up a wide range of potential use cases, both within and beyond the crypto space:

  • Private Order Books: A privacy-enhanced order book could mitigate the effects of Maximal Extractable Value (MEV) and reduce front-running in DeFi.
  • Governance: Decentralized Autonomous Organizations (DAOs) and delegators could benefit from provable privacy for their votes.
  • Messaging: On-chain messaging, particularly in decentralized social media, could be a significant use case with Nillion's privacy features.
  • Decentralized Storage: Storing sensitive documents or information in a centralized entity carries risks. Nillion's decentralized infrastructure with complete encryption could transform how such data is managed.
  • Medical Data: Privacy-enhanced infrastructure could streamline the storage, transfer, and usage of medical data, ensuring confidentiality.
  • Advertising: Advertisers currently exploit user data for behavioral trends without compensating the data providers. Nillion's privacy solutions could create a more equitable model.

Staking Your $NIL with Chorus One

Chorus One is a genesis validator on the Nillion mainnet, and is officially supporting $NIL staking. To stake your $NIL with us, select the Chorus One validator at the link below, and begin staking with us today!

👉Stake Your $NIL

Networks
Network Offboarding Announcement
In light of current market conditions and lower network activity, we have made the decision to offboard a few of our supported networks. This change allows us to streamline our focus and dedicate more resources to networks that offer stronger long-term growth potential and user adoption.
March 21, 2025
5 min read

At Chorus One, we aim to provide users with a best-in-class experience across a wide variety of networks. To maintain this standard, we periodically assess our supported networks for current and future viability. In light of market conditions and lower network activity, we have made the decision to stop supporting the networks below at the end of this month. These include:

These changes are part of an ongoing effort to streamline our focus and dedicate resources to networks with stronger long-term growth potential. 

Why The Change? 

We are proud to have supported these networks and their users. However, there are a few trends we have observed that have led to our decision: 

  1. Market Conditions: The volatility and price movement of the affected networks’ tokens have impacted their sustainability from a node operation perspective. In uncertain market conditions, it’s crucial for us to prioritize networks that show resilience and consistent growth.
  2. Low Network Activity: Despite their early potential, the applications and user adoption on these networks have not reached the levels necessary to justify continued support. In our commitment to delivering the best experience to our users, we believe it’s important to focus on networks with higher engagement and vibrant ecosystems.

What does this mean for you?

If you’re currently staking tokens on any of these networks, we kindly ask that you migrate them to a different validator by March 31, 2025. After this date, staking rewards from our public nodes will no longer be guaranteed. Please ensure your tokens are unstaked or re-delegated before then.

To view all currently supported networks, node addresses, and APY, click here. 

Looking Forward

This decision allows us to allocate more resources and attention to the networks that show the most promise in terms of activity, user growth, and long-term sustainability. As we continue to grow and evolve, we remain committed to offering the best staking services and supporting the most innovative and active networks in the industry.

Need help?

If you have any questions or need assistance with unstaking your tokens, our support team is here to help. Feel free to reach out to us via support@chorus.one.

About Chorus One

Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.

Core Research
The Economics of ZK-Proving: Market Size and Future Projections
Zero-knowledge proofs are entering a period of rapid growth and widespread adoption. The core technology has been battle-tested, and we have begun to see the emergence of new services and more advanced use cases. These include outsourcing of proof computation from centralized servers, which opens the door to new revenue-generating opportunities for crypto infrastructure providers.
March 13, 2025
5 min read

A huge thanks to Amin, Cooper, Hannes, Jacob, Michael, Norbert, Omer, and Teemu for sharing their feedback on the model and the article (this doesn’t mean they agree with the presented numbers!).

Zero-knowledge proofs are entering a period of rapid growth and widespread adoption. The core technology has been battle-tested, and we have begun to see the emergence of new services and more advanced use cases. These include outsourcing of proof computation from centralized servers, which opens the door to new revenue-generating opportunities for crypto infrastructure providers.

How significant could this revenue become? This article explores the proving ecosystem and estimates the market size in the coming years. But first, let’s start by revisiting the fundamentals.

Proving ABC

ZK proofs are cryptographic tools that prove a computation's results are correct without revealing the underlying data or re-running the computation. 

There are two main types of zk proofs:

  1. Elliptic Curve-based SNARKs: Slow to generate but have a fixed proof size, regardless of computation size.
  2. Hash-based STARKs: Can be faster to generate but produce larger proofs, making verification on L1s costly.

A zk proof needs to be generated and verified. Typically, a prover sends the proof and the computation result to a verifier contract, which outputs a "yes" or "no" to confirm validity. While verification is easy and cheap, generating proofs is compute-intensive.

Proving is expensive because it needs significant computing power to 1) translate programs into polynomials and 2) run the programs expressed as polynomials, which requires performing complex mathematical operations.

ZK Ecosystem

This section overviews the current zk landscape, focusing on project types and their influence on proof generation demand.

Demand Side

  • zk-Rollups: The demand for proving currently comes mostly from zk-rollups. In 2024, the main zk-rollups (zkSync Era, Linea, Starknet, and Scroll) generated 580M transactions. Each transaction requires multiple proofs to be generated.
  • zkVMs: Developers can write zk circuits on their own using domain-specific languages, or use a zkVM to abstract away the zero-knowledge part and write applications in a high-level language like Rust. This democratizes access to zk-proofs as devs no longer need to learn domain-specific languages to write verifiable code. zkVMs will not drive demand by themselves but will instead facilitate demand coming from rollups, apps, and infra projects.
  • Apps and Infrastructure: Any apps and infra projects using zk, including privacy apps, oracles, bridges, or zkTLS.
  • Aggregators minimize verification costs by batching multiple proofs from various sources. Instead of sending proofs directly to an L1, rollups, apps, or zkVMs can route them to an aggregator. The aggregator validates these proofs off-chain and submits a single consolidated proof to the L1. Since L1 verification incurs high gas costs on Ethereum (400-500k for SNARKs, up to 5 million for STARKs), it is the most expensive aspect of the current zk pipeline. 

Supply Side

  • Infrastructure Providers: The main limitation in proof generation is hardware. Thus, anyone with powerful hardware will be incentivized to generate proofs. In blockchain, companies with extensive hardware expertise operate validators, making zk-proving a natural next step for them.
  • Centralized Proving: The demand side can independently generate proofs, e.g., at the sequencer level for a rollup, or outsource them. Currently, rollups utilize centralized provers, but there is an incentive to offload proving to improve decentralization and liveness.
  • Client-Side Proving (on user device): Shifting proving to user browsers reduces trust assumptions in zk applications by eliminating the need to send user data to proving servers. Performance constraints currently limit proof generation on consumer devices and will likely remain so for some time.

For the privacy-focused rollup Aztec, only one proof per transaction will be generated in the browser, as depicted in the proving tree below. A similar dynamic is expected with other projects.

  • Hardware and Accelerators: Companies build specialized hardware and software-based hardware accelerator platforms. While these projects do not directly generate proof demand, they enhance proof delivery speed.
  • Proving Marketplaces: Networks that connect proof demand with computing power. They will not generate proofs by themselves.

Monetization

Monetization strategies will include fees and token incentives.

The primary revenue model will rely on charging base fees. These should cover the compute costs of proof generation. Prioritization of proving work will likely require paying optional priority fees.

The demand side and proving marketplaces will offer native token incentives to provers. These incentives are expected to be substantial and initially exceed the market size of proving fees.

Proving Market Opportunity

Market Dynamics

To understand the proving market, we can draw analogies with the proof-of-stake (PoS) and proof-of-work (PoW) markets. Let’s examine how these comparisons hold up.

At the beginning of 2025, the PoS market is worth $16.3 billion, with the overall crypto market cap around $3.2 trillion. Assuming validators earn 5% of staking rewards, the staking market would represent approximately $815 million. This excludes priority fees and MEV rewards, which can be a significant part of validator revenues. 

PoS characteristics have some similarities to zk-proving:

  • Both prioritize accuracy, speed, and reliability in computation.
  • They could use similar economic tools, such as posting bonds and slashing.

The PoW market can be roughly gauged using Bitcoin’s inflation rate, which is expected to be 0.84% in 2025. With a $2 trillion BTC market cap, this amounts to around $16.8 billion annually, excluding priority fees.
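
The arithmetic behind the PoS and PoW figures quoted above is straightforward; the validator share and inflation rate are the assumptions stated in the text.

# Reproducing the rough market-size arithmetic quoted above.
pos_rewards = 16.3e9            # annual PoS staking rewards (the "$16.3 billion" figure)
validator_share = 0.05          # assumed validator take of staking rewards
print(f"PoS validator revenue: ~${pos_rewards * validator_share / 1e6:.0f}M")         # ~$815M

btc_market_cap = 2e12
btc_inflation_2025 = 0.0084     # expected issuance rate
print(f"PoW issuance to miners: ~${btc_market_cap * btc_inflation_2025 / 1e9:.1f}B")  # ~$16.8B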

Both zk-proving and PoW rely on hardware, but they take different approaches. While PoW uses a “winner-takes-all” model, zk-proving creates a steady stream of proofs, resulting in more predictable earnings. This makes zk-proving less dependent on highly specialized hardware compared to Bitcoin mining.

The adoption of specialized hardware, like ASICs and FPGAs, for zk-proving will largely depend on the crypto market’s volume. Higher volumes are likely to encourage more investment in these technologies.

With these dynamics in mind, we can explore the revenue potential zk-proving represents.

Methodology

Our analysis will be based on the Analyzing and Benchmarking ZK-Rollups paper, which benchmarks zkSync and Polygon zkEVM on various metrics, including proving time.

While the paper benchmarks zkSync Era and Polygon zkEVM, our analysis will focus on zkSync due to its more significant transaction volumes (230M per year vs. 5.5M for Polygon zkEVM). At higher transaction volumes, Polygon zkEVM has comparable costs to zkSync ($0.004 per transaction).

Approach

  • Measure the proving time of groups of different transaction types (e.g., ERC token transfers, ETH transfers, contract deployments, hash function computations) in various quantities. This data is based on the benchmarks available in the paper.
  • Create a batch of roughly 4,000 transactions, which matches the average batch on zkSync.
  • Calculate the proving time for the batch, including the STARK to Groth16 compression time. 
  • To calculate the costs, use cloud-based hardware offering:
    1. Hardware: 32 vCPUs, 1 NVIDIA L4 GPU.
    2. Cloud Cost: $1.87/hour.

Results

A single Nvidia L4 GPU can prove a batch of ~4,000 transactions on zkSync in 9.5 hours. Given that zkSync submits a new batch to L1 every 10 minutes, around 57 NVIDIA L4 GPUs are required to keep up with this pace.

Proof Generation Cost

Knowing the compute time, we can calculate proving costs per batch, proof, and transaction:

  • Batch Size: 3,985 transactions.
  • Cost per batch: $17.97.
  • Cost per proof: $0.0423.
  • Cost per transaction: $0.0045.

The above calculations can be followed in detail in Proving Market Estimate (rows 1-29).
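
The per-batch and per-transaction figures follow from the benchmark inputs above; the snippet below reproduces the arithmetic (the small gap versus the reported $17.97 per batch presumably comes from rounding or minor overheads).

# Reproducing the proving-cost arithmetic from the benchmark figures above.
proving_hours_per_batch = 9.5      # one NVIDIA L4 GPU, ~4,000-transaction zkSync batch
cloud_cost_per_hour = 1.87         # USD for 32 vCPUs + 1 L4 GPU
batch_size = 3_985                 # transactions per batch
batch_interval_min = 10            # zkSync posts a new batch to L1 roughly every 10 minutes

cost_per_batch = proving_hours_per_batch * cloud_cost_per_hour
gpus_needed = proving_hours_per_batch * 60 / batch_interval_min

print(f"Cost per batch: ${cost_per_batch:.2f}")                      # ~$17.8 (article: $17.97)
print(f"Cost per transaction: ${cost_per_batch / batch_size:.4f}")   # ~$0.0045
print(f"GPUs needed to keep pace: {gpus_needed:.0f}")                # 57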

Proving Costs Estimates

Proving costs depend on the efficiency of hardware and proof systems. The hardware costs can be optimized by, for example, using bare metal machines.

2024: Current Costs

  • zkSync: $0.0045 per transaction.
  • Other zk-Rollups: Since smaller and less optimized rollups have higher costs, a 40% premium is applied. This brings their proving cost to $0.0063 per transaction.

2025: Optimizations Begin

  • zkSync: Proving costs remain at $0.0045 per transaction.
  • Other zk-Rollups: Optimizations reduce costs down to $0.0059 per transaction.

2030:  Proving costs fall to $0.001 per transaction across all rollups.

Transaction Volume Estimates

2024: Real Data

The number of transactions generated by rollups and other demand sources:

  • zk-Rollups: Virtually the only demand driver, with 580M transactions. No rollup opened its proving to external provers in 2024, but this will change starting in 2025.
  • Optimistic Rollups: None added zk-proving in 2024, but transaction volumes are a baseline for future estimates: 2.3B transactions.
  • Apps and Infrastructure: negligible.

2025: Market Takes Off

The proving market begins to gain momentum. Estimated number of transactions: ~4.4B, including: 

  • zk-Rollups: The primary driver with 2.46B transactions.
  • Apps and Infra: Demand starts to grow with 490M transactions.
  • Aggregators: Smaller share. For simplicity, one batch equals one transaction in this analysis. Add 12M transactions.
  • Other Blockchains: Aleo, now on mainnet, will contribute significantly. With zk-compression on Solana and Celestia’s zk initiatives in the early stages, the impact is 366M transactions.
  • Multi-proofs: Optimism implements zk-proofs to improve finality time, adding 1.09B transactions.

2030: zk-Proving at Scale

Proving will have reached widespread adoption. Estimated number of transactions: ~600B

  • zk-Rollups transactions volume grows to 17B.
  • Optimistic Rollups will switch to validity proofs, increasing transaction volumes and driving demand for 69B transactions.
  • Apps and Infra: New ideas and legacy solutions add 15B transactions.
  • Aggregators are crucial but do not drive significant transaction volumes with 151M.
  • Other Blockchains: Solana, Celestia, and various L1 platforms have significantly advanced their zk efforts. Ethereum Beam Chain is live, bringing the total transaction count to 108B.
  • Unknown Opportunities: zk-proving expands into the real world, with use cases like Worldcoin adding 76B transactions.
  • Multi-proofs: At least one redundant proof system will be integrated across almost all ecosystem projects, adding 315B transactions.
  • Client-side Proving: Required by privacy-preserving solutions, it subtracts around 3.5B transactions from the market.

Market size estimate

We estimate the proving surplus based on previously estimated proving costs. This surplus is revenue from base and priority fees minus hardware costs. As the market matures, base fees and proving costs decrease, but priority fees will be a significant revenue driver. 

Token incentives add a further value boost. While it’s difficult to foresee the size of these incentives, the estimate is based on the information collected from the projects.

2024: Early Market

  • zk-Rollups processed 590M transactions for $3.26M in hardware costs.
  • There are no token incentives or proving fees.

2025: Expanding Demand

The total market is projected at $97M, including: 

  • The total cost for all zk-proofs of $24M.
  • A 30% proving surplus results in a market size of $32M.
  • Projects offer significant token incentives alongside regular fees, boosting the market size by an additional $65M.

2030: Almost a Two-Billion-Dollar Market

The total zk-proving market opportunity is estimated at $1.34B.

  • Proving costs are $813M.
  • With priority fees increasing, the proving surplus rises to 60%, bringing the market to $1.3B.
  • As the market matures, token incentives decrease, adding only $40M.

A detailed analysis supporting the calculations is available in Proving Market Estimate (rows 32-57).
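
The headline numbers combine proving costs, the assumed surplus, and token incentives; the sketch below reproduces that arithmetic under the assumption that "surplus" grosses up costs by the stated percentage.

def market_size(proving_costs: float, surplus: float, token_incentives: float) -> float:
    """Fee revenue modeled as costs grossed up by the proving surplus, plus incentives."""
    return proving_costs * (1 + surplus) + token_incentives

estimate_2025 = market_size(proving_costs=24e6, surplus=0.30, token_incentives=65e6)
estimate_2030 = market_size(proving_costs=813e6, surplus=0.60, token_incentives=40e6)

print(f"2025: ~${estimate_2025 / 1e6:.0f}M")    # ~$96M, in line with the ~$97M headline
print(f"2030: ~${estimate_2030 / 1e9:.2f}B")    # ~$1.34B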

Sensitivity Analysis

Estimates with so many variables and over such a long time horizon will always have a margin of error. To support the main conclusion, we include a sensitivity analysis that presents other potential outcomes in 2025 and 2030 based on different transaction volumes and proving surpluses. For simplicity, we left the proving costs unchanged at $0.0059 and $0.001 per transaction in 2025 and 2030, respectively.

In 2025, the most pessimistic scenario estimates a total market value of just $12.5M, with less than a 10% proving surplus and 2B transactions. Conversely, the ultra-optimistic scenario imagines the market at $55M, based on a 50% surplus and 6B transactions.

In 2030, if things don’t go well, we could see a proving market of roughly $300M, from 10% proving surplus and 300B transactions. The best outcome assumes a $1.7B market based on a 90% surplus and 900B transactions.

Risks

Estimating so far into the future comes with inherent uncertainties. Below are potential error factors categorized into downside and upside scenarios:

Downside 

  1. Broader blockchain adoption may not occur as quickly as anticipated, slowing transaction growth across the ecosystem participants.
  2. The dynamics of priority fee markets may not follow the same path as those of today’s blockchains, which can lead to overestimating the proving surplus.
  3. Multi-proofs significantly increase transaction volumes in the estimates. However, projects might stick with single proving systems supported by Trusted Execution Environments (TEEs), which offer similar functionality on a hardware rather than software level.
  4. Without major security breaches, optimistic rollups may not feel pressure to switch to zk-proving beyond adding a single proof system for reduced finality.
  5. Advancements in proving tech could drastically reduce costs, leading to commoditization. Profit margins will be compressed as proving services become broadly available at lower prices.

Upside

  1. Breakthroughs in software, especially in apps and zkVMs, could accelerate adoption across and beyond blockchains, leading to faster growth than projected.
  2. Priority fees significantly boost revenue for validators on Ethereum and Solana. If zk-proving follows suit, proving fees could exceed the estimates.

Conclusions

After PoW and PoS, zk is the next-generation crypto technology that complements its predecessors. Comparing proving revenue opportunities with PoW or PoS is tricky because they serve different purposes. Still, for context:

  • The PoS market is valued at $16.3B, with roughly $800M going to validators (excluding priority fees and MEV rewards).
  • The PoW opportunity is about $16.8B annually, excluding priority fees. Of course, Bitcoin mining’s cost structure and competition differ significantly from zk-proving or PoS.

We estimated that the zk-proving market could grow to $97M by 2025 and $1.34B by 2030. While these estimates are more of an educated guess, they’re meant to point out the trends and factors anyone interested in this space should monitor. These factors include:

  • Proof generation costs, driven by advancements in software and hardware.
  • Demand for zk-proofs represented in transaction volumes.
  • Base and priority fees, which influence the economic incentives for proving.

Let’s revisit these forecasts a year from now.
