Core Research
Transaction Latency on Solana: Do swQoS, Priority Fees, and Jito Tips Make Your Transactions Land Faster?
Exploring Solana's updates—stake-weighted QoS, priority fees, and Jito MEV—and their impact on transaction prioritization and landing latency.
December 3, 2024
5 min read
TL;DR
  • Transaction inclusion is one of the biggest challenges for Solana today.
  • We’ve explored how different solutions impact transaction inclusion times:
    • Priority Fees minimally affect landing times.
    • Jito Tips also have negligible latency benefits.
    • swQoS is the most effective at reducing latency for all transaction types.
Introduction

Solana processes thousands of transactions per second, which creates intense competition for transaction inclusion in the limited space of a slot. The high throughput and low block time (~400ms) require transactions to be propagated, prioritized, and included in real-time.

High throughput on Solana comes with another advantage: low transaction costs. Transaction fees have been minimal, at just 0.000005 SOL per signature. While this benefits everyone, it comes with a minor trade-off—it makes spam inexpensive.

For end-users, spam means slower transaction finalization, higher costs, and unreliable performance. It can even halt the network with a DDoS attack, as in 2021, or with an NFT mint, as in 2022.

Against this backdrop, Solana introduced significant updates in 2022: stake-weighted quality of service (swQoS) and priority fees. Both are designed to ensure the network prioritizes higher-value transactions, albeit through different approaches.

Another piece of infrastructure that can help reduce transaction latency is Jito MEV. It enables users to send tips to validators in exchange for ensuring that transaction bundles are prioritized and processed by them.

This article will explore these solutions, break down their features, and assess their effectiveness in transaction landing latency.

Solana Transaction Anatomy

Let’s start with a basic building block—a transaction.

Solana has two types of transactions: voting and non-voting (regular). Voting transactions achieve consensus, while non-voting transactions change the state of the network's accounts.

A Solana transaction consists of several components that define how data is structured and processed on the blockchain¹:

  1. Accounts, which represent the state of Solana.
  2. Instructions, which define the operations to be executed in the transaction (e.g. transfer SOL, read account state, write account state).
  3. Message, which includes:
    • Information on accounts involved in the transaction.
    • The most recent blockhash at the time when the transaction was created.
  4. Signatures, which cryptographically guarantee the authenticity of the transaction.

A single transaction can have multiple accounts, instructions, and signatures.
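As a rough sketch, the anatomy above can be modeled as plain data structures. The field names below are illustrative only and do not match the actual Solana SDK types:

```python
from dataclasses import dataclass, field

# Illustrative field names only -- not the actual Solana SDK types.

@dataclass
class Instruction:
    program_id: str        # the program to invoke
    accounts: list         # accounts read from / written to
    data: bytes            # the operation, e.g. "transfer SOL"

@dataclass
class Message:
    account_keys: list     # all accounts involved in the transaction
    recent_blockhash: str  # most recent blockhash at creation time
    instructions: list = field(default_factory=list)

@dataclass
class Transaction:
    signatures: list       # one signature per required signer
    message: Message = None

# A single transaction can hold multiple accounts, instructions,
# and signatures:
msg = Message(
    account_keys=["payer", "recipient", "system_program"],
    recent_blockhash="9sHcv6xwn9YkB8nx",  # placeholder value
    instructions=[
        Instruction("system_program", ["payer", "recipient"], b"transfer"),
    ],
)
tx = Transaction(signatures=["payer_signature"], message=msg)
```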

Below is an example of a non-voting transaction, including the components mentioned above:

On Solana, a transaction can be initiated by a user or a smart contract (a program). Once initiated, the transaction is sent to an RPC node, which acts as a bridge between users, applications, and the blockchain.

The RPC node forwards the transaction to the current leader—a validator responsible for building the next block. Solana uses a leader schedule, where validators take turns proposing blocks. During their turn, the leader collects transactions and produces four consecutive blocks before passing the role to the next validator.

Validators and RPC nodes are two types of nodes on Solana. Validators actively participate in consensus by voting, while RPC nodes do not. Aside from this, their structure is effectively the same.

So, why are RPCs needed? They offload non-consensus tasks from validators, allowing validators to focus on voting. Meanwhile, RPC nodes handle interactions with applications and wallets, such as fetching balances, submitting transactions, and providing blockchain data.

The main difference is that validators are staked, securing the network, while RPC nodes are not.

Transaction flow on Solana

After reaching the validator, a transaction is processed in several stages²³:

  1. Fetch, where the validator parses incoming packets for valid transactions.
  2. Verification, where the transaction signatures are verified.
  3. Banking, the core stage of the entire pipeline, where:
    • Accounts are locked to ensure no conflicting transactions execute simultaneously.
    • The transaction is executed in SVM, which processes program instructions and updates account states.
    • The results of the transaction (success or failure) are validated.
    • Transactions are processed in parallel across two voting and four non-voting threads using Solana's runtime.
    • Based on the leader schedule, a validator handles transactions as follows:
      • Process, if it is the leader.
      • Hold if it is two slots away from being the leader.
      • Forward to the current leader and next two leaders if more than two slots away.
  4. Proof of History, where transaction data and previous hash are hashed to determine the transaction ordering.
  5. Broadcast, where valid transactions are shared with the network in groups called shreds.
Based on the Solana documentation
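The process/hold/forward rule in the banking stage can be sketched as a simple decision function. This is an illustration of the rule described above, not the actual validator code:

```python
def handle_transactions(slots_to_leadership):
    """Sketch of the process/hold/forward rule described above.

    `slots_to_leadership` is how many slots away this validator is
    from its next leader slot (0 = currently leader). Illustrative
    only; the real logic lives in the validator's banking stage.
    """
    if slots_to_leadership == 0:
        return "process"   # we are the leader: execute now
    elif slots_to_leadership <= 2:
        return "hold"      # about to lead: keep packets local
    else:
        return "forward"   # send to the current and next two leaders
```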

The transaction is considered confirmed if it is voted for by ⅔ of the total network stake. It is finalized after 31 blocks.

In this setup, all validators and RPC nodes compete for the same limited bandwidth to send transactions to leaders. This creates inefficiencies, as any node can overwhelm the leader by spamming more transactions than the leader can handle.

To improve network resilience and enhance user experience, Solana introduced QUIC, swQoS, and priority fees, as outlined in this December 2022 post:

  • QUIC Protocol: The QUIC protocol transfers data between two network nodes. It enables parallelization of data streams and requires establishing a connection between validators and RPC nodes.
  • Stake-weighted Quality of Service: swQoS is a mechanism that prioritizes network traffic based on the stake held by validators. It ensures that validators with more stake can send more transactions to the leader.
  • Priority Fees: In addition to the base fee of 5,000 lamports per signature, users can include an additional priority fee to speed up the inclusion of their transactions.

Stake-weighted QoS⁴

With the adoption of the QUIC protocol, trusted connections between nodes are required to send transactions. The swQoS system prioritizes these connections based on stake. In this framework, non-staked RPC nodes have limited opportunities to send transactions directly to the leader. Instead, they primarily rely on staked validators to forward their transactions.

Technically, a validator must configure swQoS individually for each RPC node, establishing a trusted peer relationship. When this service is enabled, any packets the RPC node sends are treated as though they originate from the validator configuring swQoS.

Validators are allocated a portion of the leader’s bandwidth proportional to their stake. For example, a validator holding 1% of the total stake can send up to 1% of the transaction packets during each leader’s slot.

From the leader’s perspective, 80% of available connections are reserved for staked nodes, while the remaining 20% are allocated to RPC nodes. To qualify as a staked node, a validator must maintain a minimum stake of 15,000 SOL.

While swQoS does not guarantee immediate inclusion of all transactions, it significantly increases the likelihood of inclusion for transactions submitted through nodes connected to high-stake validators.
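A minimal sketch of the stake-weighted allocation described above, using hypothetical capacity numbers (the real limits also depend on per-connection QUIC caps):

```python
def swqos_packet_budget(validator_stake, total_stake, leader_capacity):
    """Stake-proportional share of a leader's transaction capacity.

    A validator with 1% of the total stake may send up to 1% of the
    packets during a leader's slot. Simplified sketch; real limits
    also depend on per-connection QUIC caps.
    """
    return leader_capacity * validator_stake / total_stake

def connection_split(total_connections):
    """80% of leader connections go to staked nodes, 20% to RPC nodes."""
    staked = total_connections * 80 // 100
    return staked, total_connections - staked

# Hypothetical numbers: 1% of total stake, 10,000-packet leader capacity.
budget = swqos_packet_budget(validator_stake=1, total_stake=100,
                             leader_capacity=10_000)
```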

Priority Fees

Priority fees serve the same role as swQoS by increasing the chances of transaction inclusion, though they use a completely different mechanism.

There are two types of fees on Solana⁵:

  • Base Fee: This is a fixed 5,000 lamports (0.000005 SOL) per signature, typically one per transaction. It does not depend on the compute units (CUs) required to process the transaction.
  • Priority Fee: This fee is optional and specified by users for each transaction. It depends on the CUs requested by the transaction. CUs are used to estimate the computational cost, similar to gas on Ethereum.

Of the total fees from a transaction, 50% is burned, while 50% is received by the leader processing the transaction. A proposal to award the validator 100% of the priority fee has been passed and is expected to be activated in 2025 (see SIMD-0096).
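The fee model above can be sketched as follows. The compute-unit price is expressed in micro-lamports per CU; the example numbers are hypothetical:

```python
BASE_FEE_LAMPORTS = 5_000  # fixed base fee per signature

def transaction_fee(num_signatures, cu_price_micro_lamports, cu_limit):
    """Total fee = base fee per signature + optional priority fee.

    The priority fee is the compute-unit price (in micro-lamports
    per CU) times the requested CU limit.
    """
    base = num_signatures * BASE_FEE_LAMPORTS
    priority = cu_price_micro_lamports * cu_limit / 1_000_000
    return base + priority

def fee_split(total_fee):
    """Current rule: 50% burned, 50% to the leader. Once SIMD-0096
    activates, 100% of the priority fee will go to the validator."""
    return total_fee / 2, total_fee / 2  # (burned, to_leader)

# Hypothetical transaction: 1 signature, 10,000 micro-lamports/CU,
# 200,000 CU limit -> 5,000 + 2,000 = 7,000 lamports total.
fee = transaction_fee(1, cu_price_micro_lamports=10_000, cu_limit=200_000)
```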

Priority fees help validators prioritize transactions, particularly during high congestion periods when many transactions compete for the leader's bandwidth. Since fees are collected before transactions are executed, even failed transactions pay them.

During the banking stage of Solana’s transaction processing, transactions are non-deterministically assigned to queues within different execution threads. Within each queue, transactions are ranked by their priority fee and arrival time⁶. While a higher priority fee doesn’t guarantee that a transaction will be executed first, it does increase its chances.
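The per-queue ordering can be modeled as a priority queue keyed on (priority fee, arrival order). This is a toy illustration only; the actual scheduler assigns transactions to threads non-deterministically:

```python
import heapq
import itertools

class BankingQueue:
    """Toy model of one banking-thread queue: higher priority fee
    pops first, earlier arrival breaks ties. The real scheduler
    assigns transactions to threads non-deterministically."""

    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()

    def push(self, tx_id, priority_fee):
        # Negate the fee so the largest fee is popped first; the
        # arrival counter breaks ties in favor of earlier packets.
        heapq.heappush(self._heap, (-priority_fee, next(self._arrival), tx_id))

    def pop(self):
        _, _, tx_id = heapq.heappop(self._heap)
        return tx_id

q = BankingQueue()
q.push("cheap", 100)
q.push("expensive", 10_000)
q.push("also_cheap", 100)  # same fee as "cheap", arrived later
```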

Jito MEV

The final piece of the transaction prioritization puzzle is Jito. This modified Solana client allows searchers to send tips to validators in exchange for including groups of transactions, known as bundles, in the next block.

It could be argued that the Jito infrastructure prioritizes transactions using a tipping mechanism, as users can send a single transaction with a tip to improve its chances of landing fast.

For a deeper explanation of how Jito works, check out our previous article on the Paladin bot, which provides more details.

swQoS, Priority Fees, and Jito Tips in Action

We now have a clearer understanding of how all three solutions contribute to transaction inclusion and prioritization. But how do they affect latency? Let’s find out.

Methodology

To calculate the time to inclusion of a transaction, we measure the difference between the time it is included in a block and the time it is generated. On Solana, the generation time can be determined from the timestamp of the transaction’s recent blockhash. Transactions with a recent blockhash older than 150 slots—approximately 60 to 90 seconds—expire.

The latest blockhash is assigned to the transaction before it is signed, so transactions signed by bots will be included faster than transactions generated by normal users. This method is not perfect, but still allows us to collect valuable information about latency and user topology.
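A minimal sketch of this measurement, assuming Unix timestamps in seconds and the 150-slot blockhash expiry mentioned above:

```python
def time_to_inclusion(block_time, blockhash_time):
    """Latency proxy used in the methodology: the time the
    transaction lands in a block minus the timestamp of its
    recent blockhash. Both are Unix timestamps in seconds."""
    return block_time - blockhash_time

def is_expired(inclusion_slot, blockhash_slot, max_age_slots=150):
    """A transaction whose recent blockhash is older than 150 slots
    can no longer be included."""
    return inclusion_slot - blockhash_slot > max_age_slots
```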

Other factors beyond the swQoS and priority fees, such as the geographical proximity of nodes to the leader or validator and RPC performance, also impact inclusion times—we are not fully accounting for those.

To reduce the possible biases, we consider only slots proposed by our main identity from November 18th to November 25th, 2024.

Time to Inclusion

The time to inclusion across all transactions, without any filtering, has a trimodal distribution, suggesting at least three transaction types. The highest peak is at 63 seconds, followed by another at 17 seconds and a smaller one at 5 seconds.

The peaks at 17 and 63 seconds are likely from regular users. This double peak could occur because general users don't set maxRetries to zero when generating the transaction. The peak at around 5 seconds is probably related to bots, where the delay between generating and signing a transaction is close to zero.

We can classify users based on their 95th percentile time to inclusion:

  • Fast: Less than 10 seconds.
  • Normal: Between 10 and 40 seconds.
  • Slow: Greater than 40 seconds.
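This classification can be expressed directly. The handling of the exact 10- and 40-second boundaries is our assumption, since the article does not specify it:

```python
def classify_user(p95_inclusion_seconds):
    """User topology based on the 95th percentile of a user's
    time to inclusion (boundary handling is our assumption)."""
    if p95_inclusion_seconds < 10:
        return "fast"
    elif p95_inclusion_seconds <= 40:
        return "normal"
    return "slow"
```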

Most users fall into the “normal” and “slow” classifications. Only a small fraction of submitted transactions originate from “fast” users.

Let’s now break down transactions by source.

Priority Fee

Transactions can be categorized based on their priority fee (PF) with respect to the PF distribution in the corresponding slot. Precisely, we can compare the PF with the 95th percentile (95p) of the distribution:

  • Cheap: PF is less than 10% of the 95p PF within the block.
  • Normal: PF is between 10% and 50% of the 95p PF within the block.
  • Expensive: PF is greater than 50% of the 95p PF within the block.
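The percentile-based bucketing above, sketched in code; the same scheme is applied to Jito tips later in the article. The zero-percentile guard is our addition:

```python
def classify_fee(fee, p95_fee_in_block):
    """Bucket a priority fee (or a Jito tip) relative to the 95th
    percentile of the distribution within its block."""
    if p95_fee_in_block == 0:
        return "cheap"  # degenerate case: no priced transactions (our guard)
    ratio = fee / p95_fee_in_block
    if ratio < 0.10:
        return "cheap"
    elif ratio <= 0.50:
        return "normal"
    return "expensive"
```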

The size of the priority fee generally doesn’t influence a transaction’s time to inclusion. There isn’t a clear threshold where transactions with higher PF are consistently included more quickly. The result remains stable even when accounting for PF per compute unit.

Jito Tippers

We can restrict the analysis to users sending transactions via the Jito MEV infrastructure, excluding addresses of known swQoS consumers. Interestingly, most Jito transactions originate from “slow” users.

We categorize tippers by the size of their tips in the block, analogously to what we did for PF:

  • Cheap: Tip is less than 10% of 95p in the block.
  • Normal: Tip is between 10% and 50% of 95p in the block.
  • Expensive: Tip exceeds 50% of 95p in the block.

When we compute the probability density function (PDF) of time to inclusion based on this classification, we find that the tip size doesn’t significantly impact the time to inclusion, suggesting that to build a successful MEV bot, one doesn’t have to pay more in tips!

Within the Jito framework, a bundle can consist of:

  • A single transaction that includes the user operations along with the Jito tip (e.g. here).
  • Separate transactions, where one of them is a payment to the Jito program (e.g. here).

In both cases, the time it takes for the entire bundle to be included is determined by the inclusion time of the tip transaction. However, when a tip is paid in a separate transaction, we don’t track the other leg. This reduction in volume explains why the PDF of tippers differs from that of Jito consumers.

swQoS

It’s impossible to fully disentangle transaction time to inclusion from swQoS for general users, meaning some transactions in the analysis may still utilize swQoS. However, we can classify users based on addresses associated with our swQoS clients.

When we do this and apply the defined user topology classification, it becomes clear that swQoS consumers experience significantly reduced times to inclusion.

The peak around 60 seconds is much smaller for swQoS consumers, indicating they are far less likely to face such high inclusion times.

The highest impact of using swQoS is seen in the reduction of the time to inclusion for “slow” users. By computing the cumulative distribution function (CDF) for this time, we observe a 30% probability of these transactions being included in less than 13 seconds.

When comparing the corresponding CDFs:

  • For Jito, “slow” users have only a 10% probability of being included in less than 13 seconds and 60% in less than 50 seconds.
  • For swQoS, “slow” users have a 25% probability of inclusion within 13 seconds and 86% within 50 seconds.

“Normal” users also benefit from swQoS. There is an additional peak in the PDF for these users between 9 and 13 seconds, showing that some “normal” users land transactions in under 20 seconds. Another peak appears around 40 seconds, indicating that some of the slower users now see their 95th percentile fall in the left tail of the “normal” range. This suggests that the overall spread of the time-to-inclusion distribution is reduced.

There is no statistically significant difference between the analyzed samples for “fast” users. However, some Jito consumers may also use swQoS, which complicates the ability to draw definitive conclusions.

Despite this, the improvements for “slow” and “normal” users highlight swQoS's positive impact on transaction inclusion times. If swQoS explains the PDF shape for “fast” users, it increases the likelihood of inclusion within 10 seconds from ~30% to ~100%, a 3x improvement. A similar 3x improvement is observed for “slow” users being included within 13 seconds.

Summary

Transaction inclusion is arguably Solana's most pressing challenge today. Efforts to address this have been made at the core protocol level with swQoS and priority fees, and through third-party solutions like Jito (keeping in mind that Jito's primary use case is MEV).

Solana’s latest motto is to increase throughput and reduce latency. In this article, we have examined how these three solutions improve landing time. Or, more simply, do they actually reduce latency? We found out that:

  • swQoS delivers the highest latency reductions. This is evident for all types of transactions, but especially for “slow” ones. For “fast” transactions, swQoS and Jito have similar performance, though comparing them can be tricky as overlapping use is hard to detect.
  • Priority fees, while helpful for transaction inclusion, have a minuscule effect on landing times.
  • Jito tips’ impact on latency is insignificant, especially for slow transactions.

Among the three, swQoS is the most reliable for reducing latency. Jito and priority fees can be used when the time to inclusion is less important.

References:

  1. Solana documentation: transactions
  2. Solana documentation: TPU
  3. Helius, Stake-weighted Quality of Service, Everything You Need to Know
  4. Solana documentation: swQoS
  5. Solana documentation: fees
  6. Umbra Research, Lifecycle of a Solana Transaction

About Chorus One

Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.

Core Research
Opinion
Paladin’s Quest for Fair MEV: Evaluating the Bot and the Atomic Arbitrage Market
Evaluation of the Bot and the Atomic Arbs Market
October 24, 2024
5 min read
TL;DR
  • Unaligned MEV is a significant long-term threat to Solana's growth.
  • Efforts are underway to democratize MEV, with Jito being the most well-known solution.
  • A new player, Paladin, an atomic arbitrage bot, has recently emerged.
  • We explain Paladin’s architecture together with its associated token.
  • The atomic arbitrage market is estimated at $42 million, which could boost validator APY by 0.07%.
  • Paladin captured 16% of atomic arbitrages in our slots, adding 0.01% in annualized APY.
  • We project APY could increase to 0.03% if Paladin runs on 50% of validators, assuming market conditions stay the same.

MEV on Solana

Due to the unique architecture of blockchains, block proposers can insert, censor, or sort user transactions in a way that extracts value from each block before it's added to the blockchain.

These manipulations, called MEV or Maximal Extractable Value, come in various forms. The most common are arbitrage¹, liquidations², NFT mints³, and sandwiching⁴. Arbitrage involves exploiting price differences for the same asset across markets. Liquidations occur in lending protocols when a borrower’s collateral drops in value, allowing others to buy it at a discount. NFT mints can be profitable when high-demand NFTs are resold after minting.

Most types of MEV can benefit the ecosystem by helping with price discovery (arbitrage) or preventing lending protocols from accruing bad debt (liquidations). However, sandwiching is different. It involves an attacker front-running a user’s trade on a DEX and selling immediately for a profit. This harms the ecosystem by forcing users to pay a consistently worse price.

Solana’s Characteristics

Solana's MEV landscape differs from Ethereum's due to its high speed, low latency, lack of a public mempool, and unique transaction processing. Without a public mempool for viewing unconfirmed transactions, MEV searchers (actors specializing in finding MEV opportunities⁵) send transactions to RPC nodes directly, which then forward them to validators. This setup enables searchers to work with RPC providers to submit a specifically ordered selection of transactions.

Moreover, the searchers don't know the leader's geographical location, so they send multiple transactions through various RPC nodes to improve their chances of being first. This spams the network as they compete to extract MEV—if you're first, you win.

Jito

A key addition to the Solana MEV landscape is Jito, which released a fork of the Solana Labs client. On a high level, the Jito client enables searchers to tip validators to include a bundle of transactions in the order that extracts the most value for the searcher. The validators can then share the revenue from the tips with their delegators.

These revenues are substantial. Currently, the Jito-Solana client operates on 80% of validators and generates thousands of SOL daily in tips from searchers. However, searchers keep a portion of each tip, so the total tip amounts don’t reveal the full MEV picture. Moreover, the atomic arbitrage market is considerable, and as we’ll explore later, Jito's tips don’t give an accurate estimate of the atomic MEV extracted.

Share of tips paid by searchers to validators and Jito per day. Source: https://dune.com/ilemi/jitosol

Jito⁶ introduced a few new concepts to the Solana MEV landscape:

  • Bundles: a list of transactions searchers create and send to the Block Engine. The bundle is executed sequentially and atomically, with either all transactions being executed or none.
  • Block Engine: receives transactions from relayers and shares them with searchers. Searchers use these transactions to create bundles that extract MEV and then forward the bundles back to Block Engine. The Block Engine simulates these bundles to determine which are the most profitable and then sends those selected bundles to validators.
  • Relayer: receives transactions from RPC nodes, validators, and other sources, filters them, checks signatures, and forwards them to validators and the block engine.

There’s more to the current MEV landscape on Solana, particularly concerning spam transactions, which largely result from unsuccessful arbitrage attempts, and the various mitigation strategies (such as priority fees, stake-weighted quality of service, and co-location of searchers and nodes). However, since these details are not central to the focus of this article, we will set them aside for now.

Enter Paladin

It's still early for Solana MEV, and until recently, Jito was the only major solution focused on boosting rewards for delegators. Following the same open-source principles, the Paladin team introduced a validator-level bot⁷ and an accompanying token that accrues value from the MEV collected by the bot.

The Bot

The main idea behind Paladin is this:

  • The bot funnels MEV rewards to the token airdropped to validators and stakers.
  • The token accrues value from the extracted MEV.
  • Validators stake the token and, with the risk of slashing, have less incentive to sandwich.

Paladin’s success, therefore, depends on validators choosing honesty over toxic MEV extraction by running the Paladin bot.

Bots like Paladin⁸ operate at the validator level, enabling them to capitalize on opportunities that arise after Jito bundles and other transactions are sent to the validator for inclusion in a block.

In this scenario, once the bot assesses the impact of the transactions and bundles, it inserts its transactions into the block. The bot doesn’t front-run the submitted transactions but leverages the price changes that result after each shred is executed.

Paladin can also extract MEV through DEX-CEX arbitrage and optimize routes for swaps made via DEX aggregators. However, these features are currently not used in practice, so we only briefly mention them. Since the bot is a public good, the community can contribute by adding features like NFT minting or liquidation support in the future.

The Token

The PAL token is where 10% of the value extracted by the bot in SOL gets accumulated. Paladin will go live at TGE, which will airdrop the entire supply of 1 billion PAL in the following proportions:

  • 50% to validators and their delegates.
  • 23% to the Solana ecosystem that contributed to Paladin's development.
  • 20% to the Paladin team.
  • 7% to a fund for Paladin's future development.

At the architecture level, the MEV extracted by the bot is sent to a smart contract, which then distributes it as follows:

The crucial part of the Paladin architecture is slashing. If a validator misbehaves and extracts MEV through sandwiching, staked PAL holders (other validators and their delegators) can vote to slash the rogue validator. Slashing takes effect if more than 50% of the staked PAL votes in favor and support stays above that threshold for a week. The slashed PAL is burned.

Other actions that could lead to slashing include not running Paladin, using closed-source upgrades, or not participating in slashing votes. This isn't an exhaustive list, as PAL stakers can vote to slash for other reasons at their discretion. While sandwiching is easy to spot, other "misbehaviors" may not be as obvious and would require monitoring tools, potentially leading to enforcement issues.

Unstaking PAL is capped at 5% per withdrawal, with a one-month cooldown period before the next withdrawal can be made.

Controversies

There are several controversies about Paladin⁹. Here are common criticisms:

Validators Profit Unfairly

This is not true. Palidators (validators running Paladin) receive 90% of the MEV extracted by the bot, which they can redistribute to their delegators while keeping their standard commission. The remaining 10% goes to the PAL token, with 7.5% each going to validators and their stakers. This setup ensures validators don't take a larger share of MEV profits. If a validator doesn’t share the captured MEV, delegators can switch to one with a healthy long-term track record, like Chorus One.

Run Paladin or Die

Validators must run Paladin and avoid toxic MEV extraction or any actions that could undermine their reputation for honesty. Slashing can also occur if validators run closed-source software on top of Paladin. This doesn't mean market participants can't enhance the bot. On the contrary, they are encouraged to do so and can be rewarded in PAL if their improvements are openly available to others.

No Development Post-TGE

After the PAL airdrop, the Paladin team will no longer develop the bot¹⁰. All maintenance and strategy updates will be the community's responsibility from then on. This includes adding new liquidity pools or tokens to identify emerging MEV opportunities. While a fund has been set aside for future development, it is uncertain how long it will last. Development may stall if the incentives dry up.

Paladin’s Opportunity

With the knowledge of how Paladin works, let’s evaluate its target market and assess its performance based on our collected data.

Atomic Arbitrage Market

We will start by analyzing Jito tips paid for atomic arbitrage and compare them to the overall atomic arb market to see how much of the atomic opportunities have been captured through Jito.

We will use data from mid-August 2024¹¹ onward, when the share of Jito tips related to atomic arbitrage rose significantly. We exclude earlier data to avoid bias. Interestingly, this spike happened despite the drop in the total MEV extracted through atomic arbs, indicating increased competition among searchers now willing to share more Jito tips.

Source

Even though tips from atomic arbs have increased compared to the total arb MEV market, they still make up only a small percentage of the total Jito tips paid.

Source

Only 4.25% of the tips searchers paid during the sampled period were from atomic arbs (SOL 10,316 out of SOL 242,754). At a SOL price of $150, this is $1,547,400, while the total atomic MEV extraction reached $6,567,554.
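Reproducing the arithmetic behind these figures (all inputs are taken from the article; the $150 SOL price is the article's assumption):

```python
# All inputs below are the figures quoted in the article.
SOL_PRICE_USD = 150

atomic_arb_tips_sol = 10_316
total_tips_sol = 242_754
total_atomic_mev_usd = 6_567_554

tip_share = atomic_arb_tips_sol / total_tips_sol       # ~4.25% of all tips
arb_tips_usd = atomic_arb_tips_sol * SOL_PRICE_USD     # $1,547,400
shared_via_jito = arb_tips_usd / total_atomic_mev_usd  # ~23.6% of atomic MEV
```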

Source

So, only about 23% of the total atomic arbitrage opportunities were shared through Jito! Some striking examples include:

  • From September 25 to September 29, this bot extracted $24k using Jupiter aggregator but tipped only 0.1 SOL to Jito.
  • Over the same period, another bot extracted $24.2k using the Jupiter aggregator without tipping anything.

This shows that most on-chain arbitrage MEV is being captured outside of Jito. Unfortunately, this also leads to a high number of failed transactions.

During one of the measured five-day periods, over 1 million arbitrage transactions were made, with 519k of them submitted through the Jupiter aggregator [source]. This led to a significant number of failed transactions because:

  • Searchers are flooding transactions to the leader.
  • Jupiter tries routing through all possible paths, causing unsuccessful paths to end as failed transactions.
Source

The above data shows that Paladin can tap into a sizable on-chain arbitrage market by finding opportunities more efficiently and avoiding failed transactions. This approach would benefit validators by filling blocks with successful transactions and improving the ecosystem by reducing congestion.

Bot’s Performance

The annual atomic arbitrage market is around $42.4 million. With 392 million SOL staked [source] ($58.9 billion at $150 per SOL), this could add about 0.07% APY to validator performance.
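The back-of-the-envelope APY estimate works out as follows (the article rounds the total stake to $58.9 billion; 392 million SOL at $150 gives $58.8 billion):

```python
# Figures from the article; the staked amount is approximate.
annual_atomic_arb_usd = 42_400_000
staked_sol = 392_000_000
sol_price_usd = 150

total_stake_usd = staked_sol * sol_price_usd          # ~$58.8B
apy_uplift = annual_atomic_arb_usd / total_stake_usd  # ~0.07% APY
```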

Let's dive deeper into the data to see how much market the bot can take.

Distribution and Dataset

The distribution of atomic arb MEV in USD per slot for the data collection period (15 August to 10 October 2024) looks as follows:

The median value is $0.00105 per slot, with atomic arbitrage opportunities occurring in 51.6% of slots.

Paladin operated on our main validator with a 1.15m SOL stake for a week between 4 October and 11 October. Let’s see the atomic arbitrage market opportunities during the bot's operation period:

The median value is $0.00898 per slot, with atomic arbitrage opportunities present in 59.47% of slots.

A Kolmogorov–Smirnov (KS) test shows that the two datasets differ significantly, with a positive shift in the distribution indicating higher values in the second dataset. Therefore, Paladin operated in a more favorable environment, with larger and more frequent MEV extraction opportunities than during the broader measurement period. This is especially clear when you look at the size of Jito tips during our timeframe.

Source

Now, let's look at how Paladin performed in these circumstances.

The median arb profit is $0 per slot, with opportunities taken only in 29.64% of slots.

Here’s a more detailed summary of all three distributions:

As we can see, Paladin underperformed, capturing significantly less MEV and earning less per slot. The bot only managed to capture 15.84% of the total available atomic arbitrage opportunities.

In some of the most striking examples, the bot extracted only 0.00004 SOL (here and here), while the actual extractable value was $127.59, as seen in Tx1, Tx2, Tx3, Tx4, and Tx5.

The reason for failing to extract MEV from the opportunities in the linked transactions is that Paladin doesn't support the traded token ($MODENG). This is a problem, since memecoins are currently driving network activity and will likely contribute the largest share of MEV. These tokens emerge rapidly, requiring frequent updates to routing. One of Paladin's top priorities should be quickly adapting to capture MEV from new memecoins as they arise, and the lack of team involvement in the process is problematic in this context.

Estimated Returns

Now, let’s run a simulation to estimate the returns under different scenarios based on a stake share of 0.3% (Chorus One's share), 1%, and 10%. The returns are capped at 15.8%, which is the portion of opportunities Paladin captured in our data.

The median value for 0.3% of the total stake is around $20k, which matches the annualized value of what Chorus One earned. This increases to about $65k for a validator with 1% of the total stake and exceeds $700k for a hypothetical validator with 10%.

We also ran a simulation to estimate how much Paladin’s performance could improve if it captured 80% of available opportunities for a validator the size of Chorus One across different adoption levels—1%, 10%, 25%, and 50% of total stake using Paladin. At an estimated 1% adoption, our validator earns an additional 0.01% APY from the bot, while the total potential atomic arbitrage could generate 0.07% of the total stake.

The simulation assumes:

  • The MEV landscape remains constant.
  • The bot will catch more opportunities as adoption grows, but the APY is capped at 0.035% in the 50% adoption scenario.
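As a rough sketch of the scaling logic (not the actual resampling simulation), the estimates above follow from pro-rata scaling of opportunities with stake share. The anchor figures below come from the numbers quoted in this section; everything else is illustrative:

```python
# Sketch of the linear scaling behind the estimates above. We anchor on the
# ~$20k/year figure for a 0.3% stake share at a 15.8% capture rate and scale
# pro rata; the real simulation resamples per-slot opportunity data instead.

BASE_SHARE = 0.003        # Chorus One's approximate stake share
BASE_ANNUAL_USD = 20_000  # median annual revenue at that share (from the data)
BASE_CAPTURE = 0.158      # fraction of opportunities Paladin captured

def annual_revenue_usd(stake_share, capture_rate=BASE_CAPTURE):
    """Pro-rata estimate: opportunities scale with leader-slot share,
    which equals the validator's stake share."""
    return BASE_ANNUAL_USD * (stake_share / BASE_SHARE) * (capture_rate / BASE_CAPTURE)

for share in (0.003, 0.01, 0.10):
    print(f"stake {share:5.1%}: ~${annual_revenue_usd(share):,.0f}/year")
# Improved capture changes the picture: 80% capture at the base share
print(f"0.3% stake, 80% capture: ~${annual_revenue_usd(0.003, 0.80):,.0f}/year")
```

Because the real simulation resamples the per-slot distribution, its tail estimates deviate from this straight-line approximation.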

And in a more tangible form:

As we see, Paladin could generate an additional median 0.29% APY for a validator with 0.3% of the total stake once adoption reaches 50%.

We've been in touch with the Paladin team, who confirmed that a new version of the bot, P3, is in the works. This version will pivot from focusing on the atomic arbitrage market, which they no longer see as substantial enough to prioritize.

Maintenance

The bot has been stable without major issues, but Paladin requires patches to update strategies and fix smaller bugs. Maintaining the bot is also time-consuming for the engineering team, as each patch requires a restart and the process is more complex than anticipated, adding extra overhead. This is similar to a problem we faced with our Breaking Bots: maintenance and strategy-update costs were high. Eventually, we concluded that the effort was not quite worth it. With Paladin, however, a whole community could tackle this problem, so things may look different.

Conclusion

Paladin has great potential to boost earnings for validators and stakers by tapping into new opportunities, but it's still in the early stages of development. While our analysis shows that Paladin currently captures only around 15.84% of available atomic arbitrage opportunities, this will likely improve as the bot becomes more optimized and widely adopted. The upside is promising—the total atomic arbitrage market could add 0.07% to a validator’s APY. While capturing all of it is unlikely, even a share of this can lead to solid gains.

That said, there are challenges to address. The bot’s development will shift to the community after the token TGE, raising questions about whether there will be enough resources and motivation for continuous updates. Additionally, maintaining the bot on the validator side can be tricky, as each patch requires a restart, making it time-consuming for validators to run.

Chorus One’s Perspective

At Chorus One, we believe that the long-term health of the Solana ecosystem is paramount. Paladin builds on the same core principles as Jito: mitigating toxic MEV and democratizing good MEV.

We developed Breaking Bots with these ideas in mind, and we see Paladin as an extension of our efforts. Two solutions are better than one, and Paladin offers an interesting alternative to what exists today. Supporting multiple approaches is a cornerstone of decentralized systems, and we welcome new ideas that build resilience.

While we don't agree with all of Paladin's choices, especially regarding the team's lack of future bot development, we believe its success will benefit the entire ecosystem, and that's why we support it.

That being said, if the core principles Paladin is built on change, or the maintenance costs outweigh the benefits in the mid-term, we will reevaluate our position.

References:

1 You can find an interesting overview of arbitrage MEV here.

2 A detailed analysis of liquidations in DeFi is available in this paper.

3 More about the NFT MEV here.

4 Chorus One also provided an analysis of Solana sandwiching here.

5 An in-depth write-up on searchers by Blockworks is here.

6 Information based on Jito documentation.

7 At Chorus One, in our “Breaking Bots” paper, we proposed a similar solution. The implementation details are available on GitHub.

8 Information based on a series of blog posts by the Paladin team.

9 Some of the examples are available here and here.

10 Per the blogpost: We’re not a Foundation or Labs — we don’t run any part of Paladin, we don’t develop it, we don’t maintain it…

11 The data used in this section is available here and can be retrieved using these queries.

About Chorus One

Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.

Core Research
A primer on proposer preconfirms
We explore what preconfirmations are, why they matter, and how they’re set to transform the blockchain landscape.
September 9, 2024
5 min read

In the blockchain industry, where the balance between decentralization and efficiency often teeters on a knife's edge, innovations that address these challenges are paramount. Among these innovations, preconfirmations stand out as a powerful tool designed to enhance transaction speed, security, and reliability. Here, we'll delve into what preconfirmations (henceforth "preconfirms") are, why they matter, and how they're set to transform the blockchain landscape.

Preconfirms are not a new concept.

The idea of providing a credible heads-up or confirmation that a transaction has occurred is deeply ingrained in our daily lives. Whether it's receiving an order confirmation from Amazon, verifying a credit card payment, or processing transactions in blockchain networks, this concept is familiar and widely used. In the blockchain world, centralized sequencers like those in Arbitrum function similarly, offering guarantees that your transaction will be included in the block.

However, these guarantees are not without limitations. True finality is only achieved when the transaction is settled on Ethereum. The reliance on centralized sequencers in Layer 2 (L2) networks, which are responsible for verifying, ordering, and batching transactions before they are committed to the main blockchain (Layer 1), presents significant challenges. They can become single points of failure, leading to increased risks of transaction censorship and bottlenecks in the process.

This is where preconfirms come into play. Preconfirms were introduced to address these challenges, providing a more secure and efficient way to ensure transaction integrity in decentralized networks.

Builders, Sequencers, Proposers: Who’s Who

Before jumping into the preconfirms trenches, let’s start by clarifying some key terms that will appear throughout this article (and are essential to the broader topic).

Builders: In the context of Ethereum and PBS, builders are responsible for selecting and ordering transactions in a block. This is a specialized role whose goal is to create the highest-value block for the proposer, and block building today is concentrated among a small number of entities. Blocks are submitted to relays, which act as mediators between builders and proposers.

Proposers: The role of the proposer is to validate the contents of the most valuable block submitted by the block builders, and to propose this block to the network to be included as the new head of the blockchain. In this landscape, proposers are the validators in the Proof-of-Stake consensus protocol, and they get rewarded for proposing blocks (the winning builder also pays the proposer for the right to have its block proposed).

Sequencers: Sequencers are akin to air traffic controllers, particularly within Layer 2 Rollup networks. They are responsible for coordinating and ordering transactions between the Rollup and the Layer 1 chain (such as Ethereum) for final settlement. Because they have exclusive rights to the ordering of transactions, they also benefit from transaction fees and MEV.  Usually, they have ZK or optimistic security guarantees.

The solution: Preconfirmations

Now that we’ve set the stage, let’s dive into the concept of preconfirms.

At their core, preconfirms can provide two guarantees:

  • Inclusion Guarantees: Assurance that a transaction will be included in the next block.
  • Execution Guarantees: Assurance that a transaction will successfully execute, especially in competitive environments where multiple users are vying for the same resources, such as in trading scenarios.

These two guarantees matter. Particularly for:

Speed: Traditional block confirmations can take several seconds, whereas preconfirms can provide a credible assurance much faster. This speed is particularly beneficial for "based rollups" that batch user transactions and commit them to Ethereum, resulting in faster transaction confirmations.  @taikoxyz and @Spire_Labs are teams building based rollups.

Censorship Resistance: A proposer can request the inclusion of a transaction that some builders might not want to include.

Trading Use Cases: Traders may preconfirm transactions if it allows them to execute ahead of competitors.

Preconfirmations on Ethereum: A Closer Look

Now, zooming in on Ethereum.

The following chart describes the overall Proposer-builder separation and transaction pipeline on Ethereum.

Within the Ethereum network, preconfirms can be implemented in three distinct scenarios, depending on the specific needs of the network:

  1. Builder-issued Preconfirms

Builder preconfirms suit the trading use case best. These offer low-latency guarantees and are effective in networks where a small number of builders dominate block-building. Builders can opt into proposer support, which enhances the strength of the guarantee.

Since a few builders dominate block building, successfully onboarding these players is key.

  2. Proposer-issued Preconfirms

Proposers provide stronger inclusion guarantees than builders because they have the final say on which transactions are included in the block. This method is particularly useful for "based rollups," where Layer 1 validators act as sequencers.

Yet, maintaining strong guarantees is a key challenge for proposer preconfirms.

The question of which solution will ultimately win remains uncertain, as multiple factors will play a crucial role in determining the outcome. We can speculate on the success of builder opt-ins for builder preconfirms, the growing traction of based rollups, and the effectiveness of proposer declaration implementations. The balance between user demand for inclusion versus execution guarantees will also be pivotal. Furthermore, the introduction of multiple concurrent proposers on the Ethereum roadmap could significantly impact the direction of transaction confirmation solutions. Ultimately, the interplay of these elements will shape the future landscape of blockchain transaction processing.

Commit-Boost

Commit-Boost is an MEV-Boost-like sidecar for preconfirms.

Commit-boost facilitates communication between builders and proposers, enhancing the preconfirmation process. It’s designed to replace the existing MEV-boost infrastructure, addressing performance issues and extending its capabilities to include preconfirms.

Currently in testnet, Commit-Boost is being developed as neutral, non-venture-backed software for Ethereum, with the ambition of fully integrating preconfirms into its framework. Chorus One is currently running Commit-Boost on testnet.

Recap - The preconfirmation design space
  1. Who chooses which transactions to preconfirm.
    1. This could be the builder, the proposer, or a sophisticated third party (“a gateway”) chosen by the proposer.
  2. Where in the block the preconfirmed transactions are included.
    1. Granular control over placement can be interesting for traders even without execution preconfs.
  3. Whether only inclusion or additionally execution is guaranteed.
    1. Without an execution guarantee, an included transaction could still fail, e.g. if it tries to trade on an opportunity that has disappeared.
  4. How much collateral the builder or proposer puts up, and in what form.
    1. Preconfers must be disincentivized from reneging on their promised preconfs for these to be credible.
    2. E.g. This could be a Symbiotic or Eigenlayer service, and proposed collateral requirements range from 1 ETH to 1000 ETH.
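To make the design space above concrete, here is a minimal data model of a preconfirmation commitment. All names, fields, and the collateral figure are illustrative assumptions, not any live protocol's schema:

```python
# Toy model of a preconf commitment across the four design axes:
# who issues it, where the tx goes, what is guaranteed, and what backs it.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Issuer(Enum):
    BUILDER = "builder"
    PROPOSER = "proposer"
    GATEWAY = "gateway"        # sophisticated third party delegated by the proposer

class Guarantee(Enum):
    INCLUSION = "inclusion"    # tx will be in the block
    EXECUTION = "execution"    # tx will be in the block AND succeed

@dataclass
class Preconf:
    tx_hash: str
    slot: int
    issuer: Issuer
    guarantee: Guarantee
    position: Optional[int]    # optional placement constraint within the block
    collateral_eth: float      # stake backing the promise

def settle(p: Preconf, included: bool, succeeded: bool) -> float:
    """Collateral slashed if the preconfer reneged on its promise."""
    broken = (not included) or (p.guarantee is Guarantee.EXECUTION and not succeeded)
    return p.collateral_eth if broken else 0.0

p = Preconf("0xabc", slot=123, issuer=Issuer.PROPOSER,
            guarantee=Guarantee.EXECUTION, position=0, collateral_eth=1.0)
print(settle(p, included=True, succeeded=False))  # execution promise broken -> slashed
```

Note how the slashing condition differs by guarantee type: an inclusion-only preconf is kept even if the transaction reverts, while an execution preconf is not.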

Final Word

Chorus One has been deeply involved with preconfirms from the very beginning, pioneering some of the first-ever preconfirms using Bolt during the ZuBerlin and Helder testnets. We’re fully immersed in optimizing the Proposer-Builder Separation (PBS) pipeline and are excited about the major developments currently unfolding in this space. Stay tuned for an upcoming special episode of the Chorus One Podcast, where we’ll dive more into this topic.

If you’re interested in learning more, feel free to reach out to us at research@chorus.one.


Core Research
An introduction to oracle extractable value (OEV)
This is a joint research article written by Chorus One and Superscrypt, explaining OEV, and how it can be best captured.
August 30, 2024
5 min read

This is a joint research article written by Chorus One and Superscrypt

Blockchain transactions are public and viewable even before they get written to the block. This has led to maximal extractable value (‘MEV’), i.e. where actors frontrun and backrun visible transactions to extract profit for themselves.

The MEV space is constantly evolving as competition intensifies and new avenues to extract value are always emerging. In this article we explore one such avenue - Oracle Extractable Value, where MEV can be extracted even before transactions hit the mempool.

This is particularly relevant for borrowing & lending protocols which rely on data feeds from oracles to make decisions on whether to liquidate positions or not. Read on to find out more.

Introduction

Value is in a constant state of being created, destroyed, won or lost in any financialized system, and blockchains are no exception. User transactions are not isolated to their surroundings, but instead embedded within complex interactions that determine their final payoff.

Not all transaction costs are as explicit as gas fees. Fundamentally, the total value that can be captured from a transaction includes the payoff of trades preceding or succeeding it. These can be benign in nature, for example, an arbitrage transaction that brings prices back in line with the market, or impose hidden taxes, as in the case of front running. Overall, maximal extractable value (or "MEV") is the value that can be captured by strategically including and ordering transactions such that the aggregate block value is maximized.

If not extracted or monetized, value is simply lost. Presently, the actualization of MEV on Ethereum reflects a complex supply chain (“PBS”) where several actors such as wallets, searchers, block builders and validators fill specialized roles. There are returns on sophistication for all participants in this value chain, most explicitly for builders which are tasked with creating optimal blocks. Validators can play sophisticated timing games which result in additional MEV capture; for example, Chorus One has run an advanced timing games setup since early 2023, and published extensively on it. In the PBS context, the best proxy for the total MEV extracted is the final bid a builder gets to submit during the block auction.

Such returns on sophistication extend to the concept of Oracle Extractable Value (OEV), which is a type of MEV that has historically gone uncaptured by protocols. This article will explain OEV, and how it can be best captured.

Oracles

Oracles are one of crypto's critical infrastructure components: they are the choreographers that orchestrate and synchronize the off-chain world with the blockchain’s immutable ledger. Their influence is immense: they inform all the prices you see and interact with on-chain. Markets are constantly changing, and protocols and applications rely on secure oracle feed updates to provide DeFi services to millions of crypto users worldwide.

The current status-quo is that third-party oracle networks serve as intermediaries that feed external data to smart contracts. They operate separately from the blockchains they serve, which maintains the core goal of chain consensus but introduces some limitations, including concepts such as fair sequencing, required payments from protocols and apps, and multiple sources of data in a decentralized world.

In practical terms, the data from oracles represents a great resource for value extraction. The market shift an oracle price update causes can be anticipated and traded profitably, by back-running any resulting arbitrage opportunities or (more prominently) by capturing resulting liquidations. This is Oracle Extractable Value. But how is it captured, and more importantly, who profits from it?

A potential approach to understand the value in OEV (using AAVE data).
Oracle Extractable Value (OEV)

In MEV, searchers (which are essentially trading bots that run on-chain) profit from oracle updates by backrunning them in a free-for-all priority gas auction. Value is distributed between the searchers, who find opportunities particularly in the lending markets for liquidations, and the block proposers that include their prices in the ledger. Oracles themselves have not historically been a part of this equation.

OEV changes this flow by atomically coupling the backrun trade with the oracle update. This allows the oracle to capture value, by either acting as the searcher itself or auctioning off the extraction rights.

How OEV created in DeFi can be captured by MEV searchers before the dApp gets access to it.

OEV primarily impacts lending markets, where liquidations directly result from oracle updates. By bundling an oracle update with a liquidation transaction, the value capture becomes exclusive, preventing front-running since both actions are combined into a single atomic event. However, arbitrage can still occur before the oracle update through statistical methods, as traders act on the true price seen in other markets.
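The atomic coupling described above can be sketched as a toy auction plus bundle execution. All names, prices, and the liquidation rule here are illustrative stand-ins, not any protocol's actual logic:

```python
# Toy sketch: the oracle update and the winning searcher's liquidation execute
# as one bundle, so nothing can be inserted between them.

def run_oev_auction(bids):
    """Sell the right to backrun the price update to the highest bidder."""
    return max(bids, key=lambda b: b["bid"])

def execute_bundle(market, new_price, liquidation):
    """Atomically apply the oracle update, then the coupled liquidation."""
    market["oracle_price"] = new_price            # step 1: price update
    pos = market["positions"][liquidation["position"]]
    # step 2: liquidate only if the position is now underwater
    if pos["collateral"] * new_price < pos["debt"] * market["liq_threshold"]:
        pos["liquidated"] = True
    return market

market = {
    "oracle_price": 100.0,
    "liq_threshold": 1.25,
    "positions": {"alice": {"collateral": 10.0, "debt": 700.0, "liquidated": False}},
}
winner = run_oev_auction([
    {"searcher": "A", "bid": 0.4, "position": "alice"},
    {"searcher": "B", "bid": 0.7, "position": "alice"},
])
execute_bundle(market, new_price=80.0, liquidation=winner)
print(winner["searcher"], market["positions"]["alice"]["liquidated"])
```

The auction proceeds (the winning bid) are what the oracle can redistribute to the protocol, which is the core of the OEV designs discussed next.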

Current landscape

UMA and Oval:

  • UMA has developed a middleware product called Oval (in collaboration with Flashbots), which aims to redistribute value more fairly within the DeFi space.
  • Oval works by wrapping data and conducting an order flow auction where participants bid for the right to use the data, with proceeds shared among protocols like Aave, UMA, and Chainlink.
  • This means that Oval inserts an auction mechanism and lets the market decide what a particular price update is worth.
  • This system helps DeFi protocols like Aave capture value that would otherwise go to liquidators or validators, potentially increasing their revenue.
  • Recently, Oval announced they had successfully completed the “world’s first OEV capture”, through a series of liquidations on the platform Morpho Labs. They even claim a 20% APY boost on some pairs on Morpho.

API3 and OEV Network:

  • API3 launched the OEV Network as a L2 solution, which uses ZK-rollups to capture and redistribute OEV within the DeFi ecosystem.
  • The network functions as an order flow auction platform where the rights to execute specific data feed updates are sold to the highest bidder.
  • This is a different extraction mechanism, as it turns the fixed liquidation bonus into a dynamic market-driven variable through competition.
  • This approach aims to enhance the revenue streams of DeFi protocols and promote a more balanced ecosystem for data providers and users.
  • API3’s solution also incentivizes API providers by distributing a portion of the captured OEV, thus encouraging direct participation and somewhat disrupting the dominance of third-party oracles​.

Warlock

  • Warlock is an upcoming OEV solution that will combine an oracle update sourced from multiple nodes with centralized backrun transactions.
  • The oracle update will feature increasing ZK trust guarantees over time, starting with computation consistency across oracle nodes.
  • Centralizing the backrun allows for lower latency updates, precludes searcher congestion, and protects against information leakage as the searcher entity retains exclusivity, i.e. does not need to obscure alpha. Warlock will service liquidations with internal inventory.
  • The upshot is that lending markets can offer more margin due to less volatility exposure via lower latency. The relative upside will scale with the sophistication of the searcher entity and the impact of congestion on auction-type OEV.
  • Overall, the warlock team estimates that a 10-20% upside will accrue to lending markets initially, with a future upside as value capture improves.

Where could this go?

The upshot of this MEV capture is that oracles have a new dimension to compete on. OEV revenue can be shared with dApps by providing oracle updates free of charge, or by outright subsidizing integrations. Ultimately, protocols with OEV integration will thus be able to bid more competitively for users.

OEV solutions share the same basic idea - shifting the value extraction from oracle updates to the oracle layer, by coupling the price feed update with backrun searcher transactions.

There are several ways of approaching this - an OEV solution may integrate with an existing oracle via an official integration, or through third party infrastructure. These solutions may also be purpose built and provide their own price update.

Heuristically, the key components of an OEV solution are the oracle update and the MEV transaction - these can be either centralized or decentralized.

We would expect purpose-built or "official" extensions to existing oracles to perform better, as they avoid the extra latency of running third-party logic on top of the upstream oracle. They are also more attractive from a risk perspective: with third-party infrastructure, upstream updates could spontaneously break integrations.

In practice, a centralized auction can make the most sense in latency-sensitive use cases. For example, it may allow a protocol to offer more leverage, as the risk of being stranded with bad debt due to stale price updates is minimized. By contrast, a decentralized auction likely yields the highest aggregate value in use cases where latency is less sensitive, i.e. where margin requirements are higher.

Mechanisms and Implications of OEV
  1. Atomic Liquidations
    • In a network supply chain, several blockchain actors can benefit from the information arbitrage that they possess.
    • Entities with privileged access to oracle data can leverage this information for liquidation or arbitrage.
    • This can create unfair advantages and centralize power among those with early data access.
  2. A new dimension to compete on
    • OEV can lead to substantial profit opportunities, with estimated profits in the millions of dollars. This is especially true in highly volatile markets.
    • OEV enables oracles to distribute atomic backrun rights to searchers, capturing significant value.
    • Ecosystems that distribute value in proportion to the contributions (of users, developers, and validators) are likely to thrive.
  3. Potential Risks and Concerns
    • If not managed properly, OEV can undermine the fairness and integrity of decentralized systems. Although the size of the oracle remains the same, it opens the door to competition on the value they can extract and pass onto dApps.
    • Some oracles like Chainlink have moved to reduce OEV and mitigate its impact, by refusing to endorse any third-party OEV solution. However, canonical OEV integrations are important as third party integrations bring idiosyncratic risk.
    • In traditional finance, market makers currently make all of the money from order flow. In crypto, there is a chance that value can be shared with users.
  4. Mitigation Strategies
    • Decentralization of Oracles: Using multiple independent oracles to aggregate data can reduce the risk of any single point of control.
    • Cryptographic Techniques: Techniques like zero-knowledge proofs can help ensure data integrity and fair dissemination without revealing the actual data prematurely.
    • Incentive Structures: Designing incentive structures that discourage exploitative behavior and promote fair access to data. Ultimately, the goal is a competitive market between oracles, where they compete with how much value can pass downstream.

Key Insights
  • Revenue Enhancement: By capturing OEV, projects can significantly enhance the revenue streams for DeFi protocols. For example, UMA’s Oval estimates that Aave missed out on about $62 million in revenue over three years due to not capturing OEV. By enabling these protocols to capture such value, they can reduce unnecessary payouts to liquidators and validators, redirecting this value to improve their own financial health.
  • Decentralization and Security: API3’s use of ZK-rollups and the integration with Polygon CDK provides a robust, secure, and scalable solution for capturing OEV. This approach not only ensures transparency and accountability but also aligns with the principles of decentralization by preventing a single point of failure and enabling more participants to benefit from the system. An aspect of this is also addressed by oracle-agnostic solutions and order flow auctions.
  • Incentives for API Providers: Both API3 and UMA’s solutions include mechanisms to incentivize API providers. API3, in particular, allows API providers to claim ownership of their data in Web3, providing a viable business model that promotes direct participation and reduces reliance on third-party oracles.
  • Impact on Users and Developers: For users and developers of DeFi applications, these innovations should be largely invisible yet beneficial. They help ensure that DeFi protocols operate more efficiently and profitably, potentially leading to lower costs and better services for end-users.
  • Adoption by Oracles and Protocols: Ultimately, the oracles have a part to play in the expansion and acceleration of OEV extraction, through themselves or more realistically, by partnering with third-party solutions. In the last weeks, UMA has launched OEV capture for Redstone oracle feeds, whilst Pyth Network announced their pilot for a new OEV capture solution. Protocols might also want to strike a balance between a new revenue stream ( for the protocol, liquidity pools, liquidity providers…) and the negative externalities of their user base.

OEV is still in its early stages, with much development ahead. We're excited to see how this space evolves and will continue to monitor its progress closely as new opportunities and innovations emerge.


Core Research
The evolution of shared security
We examine the various approaches to shared security, including Restaking, Bitcoin Staking, Rollups (L2's), and Inter-chain security (Cosmos)
June 28, 2024
5 min read

This article is extracted from the Q1 2024 Quarterly Insights. To read the full report, please visit https://chorus.one/reports-research/quarterly-network-insights-q1-2024

Authors: Michael Moser, Umberto Natale, Gabriella Sofia, Thalita Franklin, Luis Nuñez Clavijo

On PoS networks, the financial aspect of staking is equivalent to the computational power committed on PoW networks. If we were to make an analogy with PoW, shared security could be compared to “merge mining”, a mechanism that allows a miner to mine a block in one blockchain, by solving the cryptographic challenge on another chain.

As a generalization, shared security technologies involve at least one security provider chain and at least one security consumer chain. To guarantee security, the shared security solution must allow misbehavior on either the provider or consumer chains to be penalized, for example by slashing the capital used to secure the provider chain. Different approaches are being used to optimize for the specific needs of each ecosystem. We will review the approaches that are most advanced in terms of development, and highlight the incentives and risks associated with the adoption of those technologies.

Although one may argue that Ethereum has pioneered the concept of shared security with L2s - like Arbitrum and Optimism, other blockchains have been exploring “the appchain thesis” and experimenting with more customized solutions:

  • On Avalanche, validators of the Primary Chain need to stake AVAX and they can participate on "Subnets" - a dynamic set of validators working together to achieve consensus on the state of a set of blockchains. Each blockchain is validated by exactly one Subnet. A Subnet can validate arbitrarily many blockchains. A node may be a member of arbitrarily many Subnets.
  • On Polkadot, validators are staked on the Relay Chain in DOT and validate for the Relay Chain. Parachain auctions are held on the Polkadot Relay Chain to determine which blockchain will connect to the parachain slot. Parachains connected to the Polkadot Relay Chain all share in the security of the Relay Chain.
  • On Cosmos, the Interchain Security stack allows for new L1 chains to rent security from the Cosmos Hub as a way to lower the barrier to economic security. This is accomplished by the validator set of the Cosmos Hub running the consumer chain's nodes as well, and being subject to penalties (“slashing”) of the stake deposited on the Hub.
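The common mechanic across these designs can be sketched as a toy model: stake bonded on a provider chain backs one or more consumer chains, and evidence of misbehavior on a consumer chain slashes that same stake. The 5% penalty rate below is an arbitrary placeholder, not any chain's real parameter:

```python
# Toy model of shared security: provider-chain stake secures consumer chains,
# and consumer-chain misbehavior slashes the provider-chain deposit.

SLASH_FRACTION = 0.05  # assumed penalty rate, purely illustrative

class SharedSecurity:
    def __init__(self):
        self.stake = {}          # validator -> stake bonded on the provider chain
        self.consumers = {}      # consumer chain -> set of opted-in validators

    def bond(self, validator, amount):
        self.stake[validator] = self.stake.get(validator, 0.0) + amount

    def opt_in(self, validator, chain):
        self.consumers.setdefault(chain, set()).add(validator)

    def report_misbehavior(self, validator, chain):
        """Evidence from the consumer chain slashes provider-chain stake."""
        if validator in self.consumers.get(chain, set()):
            penalty = self.stake[validator] * SLASH_FRACTION
            self.stake[validator] -= penalty
            return penalty
        return 0.0

hub = SharedSecurity()
hub.bond("val1", 1_000.0)
hub.opt_in("val1", "consumer-chain-a")
slashed = hub.report_misbehavior("val1", "consumer-chain-a")
print(slashed, hub.stake["val1"])   # 50.0 950.0
```

The key property is that the penalty lands on capital the validator already has at risk on the provider chain, which is what lets consumer chains borrow its economic security.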

The motivation behind shared security is twofold:

  • It reduces the complexity for launching new chains, repurposing battle-tested security from well-established chains and decreasing or even removing the need for building a validator set from scratch, and;
  • It improves capital efficiency, allowing individuals to participate and be rewarded in multiple PoS chains, without the need to deploy additional capital.

Rollups

Rollup solutions are the main contenders for Layer 2 ("L2") scalability on Ethereum's (the "L1") path to modularity. This strategy allows execution, in terms of computation and memory, to be processed off the main chain. The settlement properties of the state are kept on the L1 chain, which pools the security of the ecosystem through its validator base, and state is "rolled" up from the L2 in batches (thus the name "rollup").

This aggregation of transactions helps to minimize execution costs for each individual transaction. To maintain ordered control of the state and incoming transactions, rollups can use different architectures: historically we’ve seen a growing trend of optimistic (e.g. Arbitrum, OP, Base) and zero-knowledge (“ZK”, e.g. Starknet, Scroll) rollups, both of which have achieved limited levels of maturity in their proving mechanisms.

New architectures or upgraded versions of past ideas have also taken flight in recent months. Validiums have been brought back to the spotlight with new developments such as X Layer, and a particular flavor deemed “Optimium” (which uses the OP Stack) now powers contenders such as Mantle, Mode Network, Metis, etc. The innovation, however, continues to thrive. The idea of “Based rollups” was first introduced in March by lead EF researcher Justin Drake: a simple design that allows L2 sequencing to be driven by L1 validators in their proposed blocks, thus deepening the shared security model between the layers.

It is safe to say that the rollup ecosystem continues to be the leading product in the shared security environment, with a TVL of $45.49 billion (counting canonically bridged, externally bridged, and natively minted tokens). In the last 180 days, transactions per second on the rollups have dwarfed activity on Ethereum mainnet, and the number of active users (counting distinct wallets) has risen meteorically in comparison to the L1.

EigenLayer

The idea behind shared security has captured extraordinary attention with EigenLayer, the restaking protocol built on Ethereum that has become a leading narrative within the network’s large staking community. In fact, restaking may well become a larger sector than the entire industry of single-asset staking. Driven by growing demand from stakers (seeking increased returns on their investments) and developers (sourcing security), the industry is witnessing an unprecedented shake-up, with capital flowing to secure multiple chains in aggregate. Concretely, EigenLayer’s TVL has reached the 5 million ETH milestone at the time of writing.

Since we first identified restaking as a fundamental trend in our Q1 2023 edition, we’ve discussed EigenLayer at length and become deeply invested in the future success of the protocol: our research has focused on finding optimal risk-reward baskets for AVSs - total risk is not simply a combination of linear risks, but needs to take correlations into account.

As a result of our experience on the Holesky testnet and as mainnet operators for several AVSs, we publicized our approach to AVS selection. The thesis is straightforward: to identify and onboard the AVSs that have chances of being break-out winners, while filtering out the long tail of AVSs that merely introduce complexity and risk.

Much of what’s left to flesh out has to do with reward mechanisms and slashing conditions in these restaking protocols. As EigenLayer and other shared security models evolve and reach maturity, more information surfaces. Most recently, the Eigen Labs team presented their (at least partial) solution to the slashing dilemma: $EIGEN. Current staking tokens have limitations in a model such as the AVS standard, due to the attributable nature of slashing conditions on Ethereum. In other words, ETH can only secure work that is provable on-chain. And since AVSs are by definition exogenous to the protocol, they are not attributable to capital on Ethereum.

Enter $EIGEN, the nominal “universal intersubjective work token” that intends to address agreed faults that are not internally provable. The slashing agreements under this classification should not be handled through the ETH restaked pool (as they necessitate a governance mechanism to determine their validity) but through this second token, thus fulfilling the dual staking promise the team had previously outlined. Currently, EigenDA is in the first phase of implementing this dual-quorum solution, and users can restake and delegate both ETH and EIGEN to EigenDA operators.

ICS: replicated and mesh security

Replicated security went live on the Cosmos Hub in March 2023 as the initial version of the Interchain Security protocol (“ICS”). Through this system, other Cosmos chains can apply to get the entire security of the Cosmos Hub validator set. This is accomplished by the validator set of the Cosmos Hub running the consumer chain's nodes as well, and being subject to slashing for downtime or double signing. Inter-Blockchain Communication (“IBC”) is utilized to relay updates of validator stake from the provider to the consumer chain so that the consumer chain knows which validators can produce blocks.

Currently, all Cosmos Hub validators secure the consumer chains. Under discussion is the “opt-in security” or ICS v2, an evolution of the above, that allows validators to choose to secure specific consumer chains or not. Another long-awaited feature is the ability for a consumer chain to get security from multiple provider chains. Both, however, introduce security and scaling issues. For example, the validator set of a consumer chain secured by multiple providers can have poor performance, since it will grow too large.

Mesh Security, presented by Sunny Aggarwal, co-founder of Osmosis, in September 2022, solves most of the concerns around Replicated Security. The main insight is that instead of using the validator set of a provider chain to secure a consumer chain, delegators on one blockchain can be allowed to restake their staked assets to secure another Cosmos chain, and vice versa.

With Mesh Security, operators can choose whether to run a Cosmos chain and enable features to accept staked assets from another Cosmos chain, thereby increasing the economic security of the first one. This approach allows one chain to provide and consume security simultaneously.

BabylonChain

BabylonChain uses Bitcoin’s economic value to secure PoS chains. Specifically, Bitcoin has several properties that make it particularly well-suited for economic security purposes, most prominently its large market cap; beyond this, the fact that it is unencumbered, less volatile, and generally idle and fairly distributed.

Staking is not a native feature of the Bitcoin blockchain. Babylon implements a remote staking mechanism on top of Bitcoin’s UTXO model, which allows the recipient of a transaction to spend a specific amount of coins specified by the sender. In this way, a staking contract can be generated that allows for four operations: staking, slashing, unbonding, and claiming coins after they have been unbonded. 


Blocks are first processed natively on the PoS chain using BabylonChain for security, and then, in a second round, validators provide finality by signing again using so-called extractable one-time signatures (EOTS). The central feature of this key type is that when a signer signs two messages using the same private key, the key is leaked.

Therefore, if a validator signs two conflicting blocks, the corresponding private key is leaked, allowing anybody to slash the staked BTC through a burn transaction.
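The key-extraction property can be sketched with a toy Schnorr-style example. All parameters below (the group order, keys, and hashes) are hypothetical illustrations, not Babylon's actual scheme:

```python
# Toy sketch of extractable one-time signatures (EOTS); illustrative only.
# Signing: s_i = k + H(m_i) * x (mod q). Reusing the nonce k across two
# messages lets anyone solve the two equations for the private key x.
q = 101   # toy prime group order (real schemes use ~256-bit groups)
x = 42    # private key
k = 7     # one-time nonce, reused for two conflicting blocks

def sign(h: int) -> int:
    """Toy signature over a message hash h."""
    return (k + h * x) % q

h1, h2 = 11, 29                 # hashes of two conflicting blocks
s1, s2 = sign(h1), sign(h2)

# Key extraction: x = (s1 - s2) / (h1 - h2) (mod q)
recovered = (s1 - s2) * pow(h1 - h2, -1, q) % q
assert recovered == x
```

Once `x` is public, anyone can produce the burn transaction that slashes the stake, which is exactly the deterrent the protocol relies on.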

Separately, BabylonChain protects against so-called long-range attacks via timestamping, where the PoS chain’s block hashes are committed to the Bitcoin chain. Such an attack would occur when a staker unbonds but is still able to vote on blocks, i.e. could attack the chain costlessly. Through timestamping, the set of stakers on Bitcoin is synchronized with the blocks of the PoS chain, precluding a long-range attack.

No one-size-fits-all approach

When exploring the evolution of different solutions for shared security, it becomes clear that it improves one dimension of security in PoS chains: the financial commitment behind a network, resulting in a higher cost of corruption, i.e. the minimum cost incurred by any adversary to successfully execute a safety or liveness attack on the protocol. As a natural challenge to modularity, some networks are designed from the start around letting different projects leverage a shared validator set; that is the case for Avalanche and Polkadot, for example. On the other side, there are solutions built as an additional layer on top of existing networks, like EigenLayer and Babylon. And there is the Cosmos ICS, which leverages IBC and is modular enough not to form part of either of the previous two groups.

In the set of analyzed projects, two categories emerged: restaking and checkpointing. The former aims to unlock liquidity in the ecosystems, while the latter works as an additional layer of security for a protocol, without directly changing the dynamics for stakers or node operators. Those projects also have secondary effects on the networks. For example, restaking reduces the need for scaling the validator set in the Cosmos ecosystem, while checkpointing has the potential to minimize the unbonding period for stakers.

Shared security can also change the economic incentives to operate a network. Particularly related to restaking, the final rewards for validating multiple networks are expected to be higher than validating only one. However, as always, return scales with risk. Shared security can compromise on the decentralization dimension of security, opening the doors to higher levels of contagiousness during stress scenarios, and it also adds new implementation and smart contract risk.


About Chorus One

Chorus One is one of the biggest institutional staking providers globally, operating infrastructure for 50+ Proof-of-Stake networks, including Ethereum, Cosmos, Solana, Avalanche, and Near, amongst others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and also invest in some of the most cutting-edge protocols through Chorus Ventures. We are a team of over 50 passionate individuals spread throughout the globe who believe in the transformative power of blockchain technology.

Core Research
Ethena: Delving into the Mechanics and Risks of USDe
An in-depth analysis of the risks and opportunities of Ethena Labs
June 17, 2024
5 min read

This article is extracted from the Q1 2024 Quarterly Insights. To read the full report, please visit https://chorus.one/reports-research/quarterly-network-insights-q1-2024

Ethena is a project that has recently captured significant attention, driven not only by its fundraising announcement in February but also by the early-April launch of its governance token, $ENA. However, it is its product, USDe, that lies at the heart of ongoing debates and discussions. Described by the Ethena team as a 'synthetic dollar', a concept originally proposed by BitMEX, USDe has emerged as a focal point of discussion within the crypto community. While USDe may indeed be perceived as an innovative product, it is essential to acknowledge that all innovation carries inherent risks that must be carefully evaluated. This piece explains how Ethena operates, including the mechanisms behind USDe and sUSDe, while also examining market dynamics and potential vulnerabilities in black swan scenarios. The goal is to give readers comprehensive insights into Ethena’s mechanisms.

Getting Started with the Fundamentals

When reviewing the official documentation, one will find the following passages:

Ethena is a synthetic dollar protocol built on Ethereum that provides a crypto-native solution for money not reliant on traditional banking system infrastructure, alongside a globally accessible dollar denominated instrument - the 'Internet Bond'.

and

Ethena's synthetic dollar, USDe, provides the crypto-native, scalable solution for money achieved by delta-hedging Ethereum and Bitcoin collateral. USDe is fully-backed (subject to the discussion in the Risks section regarding events potentially resulting in loss of backing) and free to compose throughout DeFi.

Understanding USDe isn't necessarily straightforward for everyone, as it necessitates some basic understanding of trading strategies and derivative products. What Ethena is doing with USDe is a cash and carry trade, which is a concept very well known in TradFi.

In this specific scenario, Ethena's objective in executing a cash and carry trade is to use spot assets as collateral to open a short position with a perpetual futures contract linked to the same underlying assets. That way, the position is delta-hedged and Ethena capitalizes on positive funding rates, ultimately distributing profits between USDe stakers (those who hold sUSDe tokens) and an insurance fund.

For those not familiar with the concept of perpetual futures contracts and delta hedging/delta neutral strategies, let’s define the concepts.

Perpetual futures contracts were popularized by BitMEX and are crypto derivatives that allow users to trade long or short positions, optionally with leverage. The concept is similar to traditional futures contracts but without an expiration date or settlement. Traders can maintain their positions indefinitely, with a funding mechanism ensuring that the contract's price stays closely tied to the spot price of the underlying asset.

  • If the contract price exceeds the spot price due to more long positions than short, longs pay a funding rate to shorts, incentivizing adjustments that bring the price closer to the spot level.
  • Conversely, an excess of short positions forces shorts to pay a funding rate to longs, ensuring convergence of the perpetual price to the spot price.
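These two cases can be summarized with a small helper. The function below is a hypothetical illustration of the sign convention, not exchange code:

```python
# Illustrative sketch of the funding mechanism described above.
# funding_payment is a hypothetical helper, not an exchange's formula.
def funding_payment(position_size: float, funding_rate: float) -> float:
    """Funding cash flow for one funding interval.

    position_size > 0 for a long, < 0 for a short.
    A positive result means the position RECEIVES funding.
    """
    return -position_size * funding_rate

# Positive funding: longs pay shorts (a 100-unit short receives 1.0).
assert funding_payment(-100.0, 0.01) == 1.0
# Negative funding: shorts pay longs (a 100-unit long receives 1.0).
assert funding_payment(100.0, -0.01) == 1.0
```

This is why Ethena's short perpetual leg earns funding whenever rates are positive, the yield source discussed throughout the article.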

A delta-neutral strategy aims to minimize directional risk by keeping a position's delta at zero. To achieve delta neutrality, traders typically offset the delta of one position with the delta of another position in such a way that any gains or losses from price movements are balanced out.

This strategy is popular among professional traders and market makers to hedge against market direction. Ethena uses this strategy to keep USDe stable around $1 without being affected by market movements.

Let’s take a look at a concrete example:

Let’s take the example of stETH. We assume stETH is trading at par (1 stETH = 1 ETH) with the price of ETH at $3,000. If the price of ETH increases by 10% from $3,000 to $3,300, here's what will happen:

  • For the first leg, the collateral (long stETH position), the P&L would be $300 + staking yield.
  • For the second leg, the short perpetual ETH position, the P&L would be -$300 + funding rate.

Note: If the stETH/ETH pair experiences a depeg, it could potentially result in a liquidation event, which may cause USDe to no longer be backed by $1 worth of collateral.

Therefore, the total P&L of the position would be:

Total P&L = $300 + staking yield - $300 + funding rate = staking yield + funding rate
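The two legs can be checked numerically. The price figures come from the example above; the staking yield and funding amounts are placeholder assumptions, not market data:

```python
# Delta-hedged P&L from the stETH example above. Price figures are from
# the text; staking_yield and funding are hypothetical placeholders.
spot_entry, spot_exit = 3000.0, 3300.0

collateral_pnl = spot_exit - spot_entry      # long stETH leg: +$300
perp_pnl = -(spot_exit - spot_entry)         # short ETH perp leg: -$300

staking_yield = 5.0   # yield accrued on stETH over the period (assumed)
funding = 8.0         # funding received on the short (assumed)

total_pnl = (collateral_pnl + staking_yield) + (perp_pnl + funding)

# The price legs cancel exactly; only the two yield terms remain.
assert total_pnl == staking_yield + funding
```

With the price legs cancelling, the position's value is insulated from ETH price moves, which is what keeps USDe's backing stable.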

The generalized formula would be:

Total P&L = (Δa + Σ p_k) + (Γb + f)

Δ = rate of change of position a
a = collateral (long spot position)
p = additional yield parameters on asset a (example: staking yield)
Γ = rate of change of position b
b = short perpetual position
f = funding rate

To conclude this part, we can argue that USDe is not a stablecoin. Ethena’s USDe represents a tokenized, delta-hedged strategy. It’s a pioneering concept that offers decentralized access to a hedge fund’s strategy.

Core Protocol Components

A. The USDe total supply

There are exclusively two ways to acquire USDe, depending on whether one is a whitelisted participant (a market maker for example) or not. The methods vary as follows:

1) Minting: A whitelisted entity decides to mint USDe by selecting a backing asset (like stETH) and entering the amount to use for minting. Then, the backing asset is swapped against the agreed amount of USDe that is newly minted.

Note: This method is exclusively available for whitelisted entities.

2) Buying through a liquidity pool: A user can buy USDe via the Ethena dApp, exchanging various stablecoins for USDe in liquidity pools on protocols such as Curve. This transaction, done via the Ethena UI, is routed with MEV protection through CowSwap.

At the time of writing, the total supply of USDe is 2,317,686,500 USDe in circulation. The evolution of the cumulative supply can be seen on the dashboard below:

Source: Ethena Labs on May 16th

As we can see, USDe experienced steady growth from February until early April, then stagnated for most of April and May.

The largest daily inflow occurred on April 2nd, with 232,176,843 USDe minted. This corresponds to the launch of the $ENA governance token and its associated airdrop.

Source: https://dune.com/kambenbrik/ethena-usde

Conversely, the largest outflow occurred on April 13th, with 19,514,466 USDe removed from circulation. This happened during a sell-off triggered by the Bitcoin halving and the fact that funding turned negative during that short period.

To redeem USDe, only addresses whitelisted by the Ethena Protocol are eligible. These whitelisted addresses typically belong to entities such as market makers or arbitrageurs. For non-whitelisted addresses, the only way to exit is by selling USDe in liquidity pools, which can lead to a depegging event, similar to what occurred mid-April 2024 and May 2024.

In these specific scenarios, whitelisted addresses capitalize on this arbitrage opportunity by buying USDe on-chain and redeeming the collateral to realize profits.

B. Ethena’s collateral

Whitelisted addresses have the ability to generate USDe by providing a range of collateral options, including BTC, ETH, ETH LSTs, or USDT. Below is the current allocation of collateral held by Ethena:

This allocation is split across CEXs for executing the cash and carry trade, with some portion remaining unallocated.

Source: Ethena Labs on May 16th

The purpose of USDT is to purchase collateral and establish a delta-hedged position. However, there is currently a lack of publicly available information regarding the frequency of swaps, the trading process, and allocation specifics. Similar to a traditional hedge fund, this aspect appears to be at the discretion of the team, which makes this process opaque.

C. USDe, sUSDe and Insurance Fund

USDe can be seen as a claim on Ethena’s collateral. Users provide collateral (BTC, ETH, etc.) and receive USDe in exchange, while Ethena delta-hedges that collateral to ensure that 1 USDe should be worth $1 of Ethena collateral (net of execution costs). Therefore, USDe can be seen as a debt or 'repayment commitment' from Ethena Labs, wherein USDe holders can redeem Ethena’s collateral.

However, even if considered a debt, holding USDe does not offer any yield. To earn yield on USDe, users can either:

  • Provide USDe liquidity in DeFi
  • Stake their USDe into sUSDe

In the second case, USDe has to be staked in order to receive the yield which comes from two sources:

  • Staking yield (when applied, such as stETH)
  • Funding rate

Yield is not paid directly to sUSDe holders; rather, it accumulates within the staking contract, resulting in the "value" of sUSDe rising over time. The relationship between sUSDe and USDe is as follows:

sUSDe:USDe ratio = (Total USDe staked + total protocol yield deposited) / Total sUSDe supply

At the time of writing, 1 sUSDe = 1.058 USDe

Surprisingly, the data shows that only a small portion of USDe holders stake their USDe to earn a yield.

The 370,127,486 sUSDe in circulation represent 391,594,880 USDe at a ratio of 1.058.

Out of the 2,317,686,500 USDe in circulation, only 391,594,880 are staked and generating yield, just 16.8% of the supply. Why wouldn't the remaining 83.2% stake to earn the yield? Because of the Sats Campaign.
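The figures quoted above can be checked with a short computation (a sketch, not Ethena's contract logic):

```python
# Reproducing the staking figures quoted in the text.
total_usde_supply = 2_317_686_500
susde_supply = 370_127_486
usde_staked = 391_594_880      # USDe represented by the sUSDe supply

ratio = usde_staked / susde_supply             # value of 1 sUSDe in USDe
staked_share = usde_staked / total_usde_supply # fraction of supply staked

assert abs(ratio - 1.058) < 1e-3               # matches 1 sUSDe = 1.058 USDe
assert 0.16 < staked_share < 0.17              # roughly 16-17% of supply
```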

Ethena is currently running a Sats campaign that incentivizes USDe holders not to stake: holders earn Sats, which translate into additional ENA incentives, by locking USDe, holding it, or providing USDe liquidity across various protocols.

Therefore, Ethena is using the ENA tokens as incentives to prevent USDe holders from staking it. Why is that? Because of the Insurance Fund.

The Insurance Fund is a safety measure created by the Ethena team to maintain a reserve for events such as negative funding rates (which we will discuss later in this article). The Insurance Fund can be tracked at the following address.

This represents a total of more than $39 million. Part of Ethena’s strategy is to use ENA to incentivize USDe holders not to stake, in order to build up the insurance fund and prepare for adverse scenarios. This sets the stage for the next part, where we discuss some of the protocol's intrinsic risks.

Note: Since the publication of this article, the number of sUSDe in circulation has significantly increased. This is due to the fact that the insurance fund now has a fairly large treasury, as well as the increase in the caps for sUSDe on Pendle.

Intrinsic risks of the protocol

A. Negative funding rates

One of the most well-known risks of Ethena’s architecture is probably the risk of funding rates turning negative. As explained in the first part, Ethena is taking a short perpetual position to delta-hedge the spot collateral. If the funding rates turn negative (indicating more people are on the short side than the long side), there is a risk that the protocol starts losing money.

There are two mechanisms in place to mitigate losses coming from negative funding rates:

  • The staking yield generated by the assets. As of now, the collateral yield accounts for 0.66% of the Collateral Notional. With a total value of $2.3 billion, this represents around $15.18 million annually.
  • The Insurance Fund: As previously mentioned, it currently holds approximately $39 million and receives daily yields from those who are not staking USDe.

The Insurance Fund steps in when the negative funding cost exceeds the collateral yield.
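A back-of-the-envelope sketch of this break-even logic, using the numbers from the text; `insurance_fund_draw` is a hypothetical simplification, not Ethena's actual accounting:

```python
# Simplified model of the negative-funding mitigation described above.
collateral_notional = 2_300_000_000   # $2.3 billion (from the text)
collateral_yield_rate = 0.0066        # 0.66% of notional, annualized

annual_collateral_yield = collateral_notional * collateral_yield_rate
assert round(annual_collateral_yield) == 15_180_000   # ~ $15.18M per year

def insurance_fund_draw(funding_rate_annualized: float) -> float:
    """Annual shortfall the insurance fund must cover (simplified).

    The fund only steps in when the funding cost on the short leg
    exceeds the collateral yield.
    """
    funding_cost = max(0.0, -funding_rate_annualized) * collateral_notional
    return max(0.0, funding_cost - annual_collateral_yield)

# Funding at -1%/yr costs $23M, exceeding the ~$15.18M collateral yield:
assert abs(insurance_fund_draw(-0.01) - 7_820_000) < 1.0
# Positive funding requires no draw at all:
assert insurance_fund_draw(0.05) == 0.0
```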

Based on Ethena’s analysis, there has only been one quarter in the last three years where the average combined yield was negative, and that data was polluted by the ETH PoW arbitrage period, a one-off event that dragged funding deeply negative.

However, it’s important to mention that past data is not necessarily a representation of the future. As of May 13, 2024, Ethena represents 14% of the total Open Interest on ETH, and approximately 5% of the total open interest on BTC.


If Ethena continues to grow, it may come to represent too large a share of total open interest, with the market knowing that it sits on the short side. This would naturally push funding rates down and make negative funding rates more frequent, as the protocol becomes too large for the market.

If this scenario happens, Ethena will be forced at some point to cap USDe supply in order to adapt to the total open interest. Otherwise, Ethena would shoot itself in the foot.

B. The Liquidity Crunch

This is somewhat related to the negative funding rates mentioned earlier. When negative funding rates occur, there is a sell-off, as shown here:

Source: https://www.coinglass.com/funding/BTC

We can observe that negative funding rates became more frequent on some exchanges between mid-April and mid-May. This translated into periods of USDe depegs, with an inflow of USDe likely explained by whitelisted entities taking advantage of the depeg, and a USDe total supply that was barely growing.

The only way for non-whitelisted holders to exit USDe is to sell on the market, which creates a depeg. Whitelisted entities then capture this: if a depeg happens, they buy USDe at a discount and redeem it for collateral, reducing the USDe circulating supply and pocketing the difference.

This is an easy way for whitelisted entities to capture profits.

Example:

With negative funding rates, some holders decide to exit USDe and sell on a DEX, and USDe now trades at $0.80. Whitelisted actors buy USDe at $0.80, redeem it for $1 worth of BTC or ETH, then sell the collateral to capture $0.20 of profit per USDe (net of execution costs).
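This arbitrage can be sketched with a hypothetical helper (execution costs default to zero for simplicity):

```python
# Sketch of the whitelisted-redemption arbitrage in the example above.
def redemption_arb_profit(usde_amount: float, market_price: float,
                          execution_cost: float = 0.0) -> float:
    """Buy depegged USDe on the market, redeem $1 of collateral per USDe."""
    cost = usde_amount * market_price       # paid to buy USDe on a DEX
    proceeds = usde_amount * 1.0            # collateral redeemed at $1/USDe
    return proceeds - cost - execution_cost

# Buying 1,000 USDe at $0.80 and redeeming nets $200 before costs:
assert redemption_arb_profit(1_000, 0.80) == 200.0
```

Each round of this trade also burns the repurchased USDe, shrinking the circulating supply until the peg is restored.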

Things become more complex with ETH LSTs; this is where a liquidity crunch can happen. Ethena currently has 14% of its total collateral in ETH LSTs, which at the time of writing represents around $324 million. It is not detailed which assets are held within the LSTs category, so we will assume it is mostly stETH.

Let’s now imagine a scenario where all native assets such as ETH and BTC have been redeemed by whitelisted actors, and Ethena now only has ETH LSTs as collateral.

Funding rates turn negative again, there is a sell-off of USDe, and whitelisted actors start redeeming USDe against ETH LSTs. Several scenarios can unfold; we present the main ones below:

Scenario 1: Whitelisted entities are directly selling the ETH LSTs on the market, capturing some profits but also reducing the arbitrage opportunity if more and more actors do so, as the ETH/ETH LSTs pair will start depegging.

This scenario can happen initially, and some traders will take advantage of the ETH/stETH depeg to buy stETH at a discount and unstake to get ETH. This will start impacting the exit/unstaking queue, leading to negative consequences in other scenarios.

Scenario 2: Whitelisted entities decide to unstake the ETH LSTs to get ETH and simultaneously open a short perp position on ETH to delta hedge and mitigate the risk associated with the token price.

They then wait for the exit queue to end, get the native ETH, close the short perp position, and profit.

If funding rates are negative, a whitelisted actor might not engage in this arbitrage, because profitability depends on how negative the funding rates are and how long the exit queue is.

If the exit queue is too long and funding rates too negative to make the trade profitable, then actors who don’t want exposure to the asset price won’t take it. This would leave USDe depegged and trigger a bank run, with more and more people selling their USDe on the market.

  • They face duration risk: if the exit queue to unstake is too long, they won’t take that trade because they don’t want to wait that long to receive native ETH.
  • If USDe behaves like a falling knife, they might also refrain from taking that trade because they don’t want to buy USDe and redeem it, knowing that USDe sell-offs keep happening and the discount will be larger.
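The Scenario 2 trade-off can be sketched as a simple profitability check. The function, rates, and thresholds below are hypothetical illustrations:

```python
# Hedged sketch of the Scenario 2 decision: is buying depegged USDe,
# unstaking the LST, and hedging through the exit queue worth it?
def hedged_unstake_profitable(usde_discount: float,
                              daily_hedge_cost: float,
                              queue_days: int,
                              execution_cost: float = 0.0) -> bool:
    """usde_discount: e.g. 0.05 if USDe trades at $0.95.
    daily_hedge_cost: funding paid per day on the short hedge while
    waiting in the exit queue (a cost when funding is negative for shorts).
    """
    hedge_cost = daily_hedge_cost * queue_days
    return usde_discount - hedge_cost - execution_cost > 0

# A 5% discount with a 7-day queue at 0.2%/day hedge cost is worth taking:
assert hedged_unstake_profitable(0.05, 0.002, 7)
# The same discount with a 40-day queue is not:
assert not hedged_unstake_profitable(0.05, 0.002, 40)
```

This is the mechanism behind both bullets above: a long queue or a deep negative funding rate kills the arbitrage, leaving no one to restore the peg.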

If USDe starts depegging and remains that way, Ethena’s insurance fund will also take a significant hit, mostly due to the negative funding rates and the fact that a portion of the insurance fund is in USDe. 


Of course, all these scenarios would only occur in a very extreme event. However, if such a scenario were to happen, non-whitelisted USDe holders would suffer the most, as their only exit would be to sell USDe. Opening the redemption feature to everyone could improve the situation. In any case, if Ethena were to become big enough, this could lead to significant unstaking events, thereby impacting Ethereum's economic security.

If an attacker sees that most of Ethena's collateral is in ETH LSTs, they can borrow USDe, sell it heavily on liquidity pools to break the peg, let the first whitelisted actors arbitrage and begin lengthening the unstaking queue, and then keep selling USDe massively to start a bank run.

That's why it's important for Ethena not to grow too large and to ensure that the collateral in ETH LSTs is also capped.

C. The Execution risk

Holding USDe also involves trusting the Ethena team to execute the cash and carry trade effectively. Unfortunately, there isn't much information available about how this trade is executed. After reviewing the official documentation, there is no information provided about the trading team or how frequently this trade occurs. For example, there is currently $109.5 million of unallocated collateral in USDT, which will be used for the cash and carry trade, but no information on when those trades will be executed.

This is a review of the hidden risks associated with Ethena that users should be aware of. Of course, there are many more traditional risks related to the protocol, such as smart contract risks, custodial risks, or exchange risks. The Ethena team has done a great job of mentioning these traditional risks here.

In conclusion, the goal of this article was to explain what Ethena is and show the various mechanisms behind the protocol and its innovations, while also outlining the associated risks. Users of a protocol should be aware of their exposures and act accordingly: there is no free lunch in the market, and Ethena presents multiple risks that should be taken into account before engaging with the protocol.


Core Research
MEV-Boost Withdrawal Bug
We describe a bug we've encountered in mev-boost, the standard software validators use to solicit blocks from sophisticated, specialized entities called builders on Ethereum.
March 11, 2024
5 min read

The following article is a summary of a recent ETHResearch contribution by Chorus One Research, which describes a bug we've encountered in mev-boost, the standard software validators use to solicit blocks from sophisticated, specialized entities called builders on Ethereum. This bug is not specific to Chorus One; it can affect all Ethereum validators running mev-boost.

To read the full paper, please visit: https://chorus.one/reports-research/mev-boost-withdrawal-bug

--

Chorus One runs a proprietary version of mev-boost, dubbed Adagio, which optimizes MEV capture by minimizing latency. Our commitment to Adagio obligates us to have an in-depth understanding of mev-boost and Ethereum's PBS setup in general. As such, we decided to dive deeper and make our findings available to the Ethereum community.

In practice, mev-boost facilitates an auction, where the winning builder commits to paying a certain amount of ETH for the right to provide the block that the validator proposing the next slot ("proposer") will include. This amount then accrues to an address provided by the validator, referred to as the "fee recipient".

Proposers and builders do not communicate directly, but exchange standardized messages via a third party called a "relay". The relay can determine the amount paid for a block by comparing the balance of the fee recipient at certain fixed times in the auction.

We have observed that when the block in question coincidentally includes reward withdrawals owed to the fee recipient, the relay is unable to separate those withdrawals from the amount paid by the builder. This inflates the measured auction payment. The inaccuracy can harm the Ethereum network under its current economic model (EIP-1559): it may reduce the number of transactions processed and the amount of ETH burned, a small but measurable net negative for the network overall.

For a deep dive, please visit: https://chorus.one/reports-research/mev-boost-withdrawal-bug

Core Research
Opinion
Reflections #4: Research Recap
A refresher on Chorus One's significant research efforts in 2023
December 19, 2023
5 min read

Throughout 2023, Chorus One remained one of the select few node operators to consistently deliver in-depth research reports, in which our dedicated in-house research team delves into the latest developments in the crypto and staking world.

Edition #4 of our 2023 Reflections series recaps Chorus One’s significant research efforts in 2023. Dive in!

Featured
  1. MEV on the dYdX v4 chain: A validator’s perspective on impact and mitigation

This year, Chorus One introduced a major research effort, fueled by a grant from dYdX, that examines the implications of Maximum Extractable Value (MEV) within the context of dYdX v4 from a validator's perspective.

This comprehensive analysis presents the first-ever exploration of mitigating negative MEV externalities in a fully decentralized, validator-driven order book.

Additionally, it delves into the uncharted territory of cross-domain arbitrage involving a fully decentralized in-validator order book and other venues.

Dive in: https://chorus.one/reports-research/mev-on-the-dydx-v4-chain#

  2. The cost of artificial latency

We present a comprehensive analysis of the implications of artificial latency in the Proposer-Builder-Separation framework on the Ethereum network. Focusing on the MEV-Boost auction system, we analyze how strategic latency manipulation affects Maximum Extractable Value yields and network integrity. Our findings reveal both increased profitability for node operators and significant systemic challenges, including heightened network inefficiencies and centralization risks. We empirically validate these insights with a pilot that Chorus One has been operating on Ethereum mainnet.

Dive in: https://chorus.one/reports-research/the-cost-of-artificial-latency-in-the-pbs-context

TL;DR: https://chorus.one/articles/timing-games-and-implications-on-mev-extraction

  3. Breaking Bots: An alternative way to capture MEV on Solana

We published a whitepaper comparing key characteristics of Ethereum and Solana, which explores the block-building marketplace model, akin to the "flashbots-like model," and examines the challenges of adapting it to Solana.

Additionally, recognizing Solana's unique features, we also proposed an alternative to the block-building marketplace: the solana-mev client. This model enables decentralized extraction by validators through a modified Solana validator client, capable of handling MEV opportunities directly in the banking stage of the validator. Complementing the whitepaper, we also shared an open-source prototype implementation of this approach.

Dive in: https://chorus.one/reports-research/breaking-bots-an-alternative-way-to-capture-mev-on-solana

Quarterly Insights

Every quarter, we publish an exclusive report on the events and trends that dominated the Proof-of-Stake world. Check out our Quarterly reports below, with a glimpse into the topics covered in each edition.

Q1

Titles covered:

  • Cross-chain MEV: A New Frontier in DeFi
  • The Evolution of Shared Security
  • The Start of ZK Season
  • App-chain thesis and Avalanche subnets

Read it here: https://chorus.one/reports-research/quarterly-network-insights-q1-2023  

Q2

Titles covered:

  • ETH <> Arbitrum Cross-Chain MEV: a first estimate
  • ICS on Cosmos Hub, and Centralization
  • Expanding the Ethereum Staking Ecosystem: Restaking
  • Ecosystem Review - Injective

Read it here: https://chorus.one/reports-research/quarterly-network-insights-q2-2023

Q3

Titles covered:

  • A sneak peek at validator-side MEV optimization
  • Hedging LP positions by staking
  • Considerations on the Future of Ethereum Liquid Staking
  • New developments in State Sync and Partial Nodes

Read it here: https://chorus.one/reports-research/quarterly-network-insights-q3-2023-2024

Reach out!

If you have any questions, would like to learn more, or get in touch with our research team, please reach out to us at research@chorus.one
