Blog

Core Research
A primer on proposer preconfirms
We explore what preconfirmations are, why they matter, and how they’re set to transform the blockchain landscape.
September 9, 2024
5 min read

In the blockchain industry, where the balance between decentralization and efficiency often teeters on a knife's edge, innovations that address these challenges are paramount. Among these innovations, preconfirmations stand out as a powerful tool designed to enhance transaction speed, security, and reliability. Here, we’ll delve into what preconfirmations (henceforth referred to as “preconfirms”) are, why they matter, and how they’re set to transform the blockchain landscape.

Preconfirms are not a new concept.

The idea of providing a credible heads-up or confirmation that a transaction has occurred is deeply ingrained in our daily lives. Whether it's receiving an order confirmation from Amazon, verifying a credit card payment, or processing transactions in blockchain networks, this concept is familiar and widely used. In the blockchain world, centralized sequencers like those in Arbitrum function similarly, offering guarantees that your transaction will be included in the block.

However, these guarantees are not without limitations. True finality is only achieved when the transaction is settled on Ethereum. The reliance on centralized sequencers in Layer 2 (L2) networks, which are responsible for verifying, ordering, and batching transactions before they are committed to the main blockchain (Layer 1), presents significant challenges. They can become single points of failure, leading to increased risks of transaction censorship and bottlenecks in the process.

This is where preconfirms come into play. Preconfirms were introduced to address these challenges, providing a more secure and efficient way to ensure transaction integrity in decentralized networks.

Builders, Sequencers, Proposers: Who’s Who

Before jumping into the preconfirms trenches, let’s start by clarifying some key terms that will appear throughout this article (and are essential to the broader topic).

Builders: In the context of Ethereum and PBS, builders are responsible for selecting and ordering transactions in a block. This is a specialized role with the goal of creating the highest-value block for the proposer, and block building is highly concentrated among a few entities. Blocks are submitted to relays, which act as mediators between builders and proposers.

Proposers: The role of the proposer is to validate the contents of the most valuable block submitted by the block builders, and to propose this block to the network to be included as the new head of the blockchain. In this landscape, proposers are the validators in the Proof-of-Stake consensus protocol, and are rewarded for proposing blocks (the proposer, in turn, pays a fee to the builder).

Sequencers: Sequencers are akin to air traffic controllers, particularly within Layer 2 Rollup networks. They are responsible for coordinating and ordering transactions between the Rollup and the Layer 1 chain (such as Ethereum) for final settlement. Because they have exclusive rights to the ordering of transactions, they also benefit from transaction fees and MEV.  Usually, they have ZK or optimistic security guarantees.

The solution: Preconfirmations

Now that we’ve set the stage, let’s dive into the concept of preconfirms.

At their core, preconfirms can provide two guarantees:

  • Inclusion Guarantees: Assurance that a transaction will be included in the next block.
  • Execution Guarantees: Assurance that a transaction will successfully execute, especially in competitive environments where multiple users are vying for the same resources, such as in trading scenarios.

These two guarantees matter. Particularly for:

Speed: Traditional block confirmations can take several seconds, whereas preconfirms can provide a credible assurance much faster. This speed is particularly beneficial for "based rollups" that batch user transactions and commit them to Ethereum, resulting in faster transaction confirmations.  @taikoxyz and @Spire_Labs are teams building based rollups.

Censorship Resistance: A proposer can request the inclusion of a transaction that some builders might not want to include.

Trading Use Cases: Traders may preconfirm transactions if it allows them to execute ahead of competitors.

Preconfirmations on Ethereum: A Closer Look

Now, zooming in on Ethereum.

The following chart describes the overall Proposer-builder separation and transaction pipeline on Ethereum.

Within the Ethereum network, preconfirms can be implemented in three distinct scenarios, depending on the specific needs of the network:

  1. Builder-issued Preconfirms

Builder preconfirms suit the trading use case best. These offer low-latency guarantees and are effective in networks where a small number of builders dominate block-building. Builders can opt into proposer support, which enhances the strength of the guarantee.

However, since there are only a few dominant builders, successfully onboarding these players is key.

  2. Proposer-issued Preconfirms

Proposers provide stronger inclusion guarantees than builders because they have the final say on which transactions are included in the block. This method is particularly useful for "based rollups," where Layer 1 validators act as sequencers.

Yet, maintaining strong guarantees is the key challenge for proposer preconfirms.

The question of which solution will ultimately win remains uncertain, as multiple factors will play a crucial role in determining the outcome. We can speculate on the success of builder opt-ins for builder preconfirms, the growing traction of based rollups, and the effectiveness of proposer declaration implementations. The balance between user demand for inclusion versus execution guarantees will also be pivotal. Furthermore, the introduction of multiple concurrent proposers on the Ethereum roadmap could significantly impact the direction of transaction confirmation solutions. Ultimately, the interplay of these elements will shape the future landscape of blockchain transaction processing.

Commit-Boost

Commit-boost is an MEV-boost-like sidecar for preconfirms.

Commit-boost facilitates communication between builders and proposers, enhancing the preconfirmation process. It’s designed to replace the existing MEV-boost infrastructure, addressing performance issues and extending its capabilities to include preconfirms.

Currently in testnet, commit-boost is being developed as neutral, non-venture-backed software for Ethereum, with the ambition of fully integrating preconfirms into its framework. Chorus One is currently running commit-boost on testnet.

Recap - The preconfirmation design space
  1. Who chooses which transactions to preconfirm.
    1. This could be the builder, the proposer, or a sophisticated third party (“a gateway”) chosen by the proposer.
  2. Where in the block the preconfirmed transactions are included.
    1. Granular control over placement can be interesting for traders even without execution preconfs.
  3. Whether only inclusion or additionally execution is guaranteed.
    1. Without an execution guarantee, an included transaction could still fail, e.g. if it tries to trade on an opportunity that has disappeared.
  4. How and what amount of collateral the builder or proposer puts up
    1. Preconfers must be disincentivized from reneging on their promised preconfs for these to be credible.
    2. E.g. This could be a Symbiotic or Eigenlayer service, and proposed collateral requirements range from 1 ETH to 1000 ETH.
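Taken together, these four dimensions can be made concrete with a minimal sketch of a signed preconfirmation commitment. This is only an illustration: the field names below are hypothetical and do not mirror any specific protocol's wire format.

```python
# A minimal sketch of a preconfirmation commitment covering the four design
# dimensions above. All names are hypothetical, for illustration only.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Preconfer(Enum):
    BUILDER = "builder"
    PROPOSER = "proposer"
    GATEWAY = "gateway"        # a sophisticated third party delegated to by the proposer

class Guarantee(Enum):
    INCLUSION = "inclusion"    # the transaction will be included in the block
    EXECUTION = "execution"    # the transaction will be included and will execute successfully

@dataclass
class PreconfCommitment:
    issuer: Preconfer                  # 1. who chooses which transactions to preconfirm
    tx_hash: str
    target_slot: int
    position_in_block: Optional[int]   # 2. where in the block (None = anywhere)
    guarantee: Guarantee               # 3. inclusion only, or execution as well
    collateral_eth: float              # 4. stake that can be slashed if the promise is broken
    signature: bytes                   # the issuer's signature over the fields above

# Example: a proposer promises top-of-block inclusion in a given slot, backed by 100 ETH.
promise = PreconfCommitment(
    issuer=Preconfer.PROPOSER,
    tx_hash="0xabc123...",
    target_slot=9_000_001,
    position_in_block=0,
    guarantee=Guarantee.INCLUSION,
    collateral_eth=100.0,
    signature=b"",
)
print(promise.issuer.value, promise.guarantee.value, promise.collateral_eth)
```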

Final Word

Chorus One has been deeply involved with preconfirms from the very beginning, pioneering some of the first-ever preconfirms using Bolt during the ZuBerlin and Helder testnets. We’re fully immersed in optimizing the Proposer-Builder Separation (PBS) pipeline and are excited about the major developments currently unfolding in this space. Stay tuned for an upcoming special episode of the Chorus One Podcast, where we’ll dive more into this topic.

If you’re interested in learning more, feel free to reach out to us at [email protected].

About Chorus One

Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.

Core Research
An introduction to oracle extractable value (OEV)
This is a joint research article written by Chorus One and Superscrypt, explaining OEV, and how it can be best captured.
August 30, 2024
5 min read

This is a joint research article written by Chorus One and Superscrypt

Blockchain transactions are public and viewable even before they get written to the block. This has led to maximal extractable value (‘MEV’), i.e. where actors frontrun and backrun visible transactions to extract profit for themselves.

The MEV space is constantly evolving as competition intensifies and new avenues to extract value are always emerging. In this article we explore one such avenue - Oracle Extractable Value, where MEV can be extracted even before transactions hit the mempool.

This is particularly relevant for borrowing & lending protocols which rely on data feeds from oracles to make decisions on whether to liquidate positions or not. Read on to find out more.

Introduction

Value is in a constant state of being created, destroyed, won or lost in any financialized system, and blockchains are no exception. User transactions are not isolated from their surroundings, but instead embedded within complex interactions that determine their final payoff.

Not all transaction costs are as explicit as gas fees. Fundamentally, the total value that can be captured from a transaction includes the payoff of downstream trades preceding or succeeding it. These can be benign in nature, for example, an arbitrage transaction to bring prices back in line with the market, or impose hidden taxes in the case of front running. Overall, maximal extractable value (or “MEV”) is the value that can be captured from strategically including and ordering transactions such that the aggregate block value is maximized.

If not extracted or monetized, value is simply lost. Presently, the actualization of MEV on Ethereum reflects a complex supply chain (“PBS”) where several actors such as wallets, searchers, block builders and validators fill specialized roles. There are returns on sophistication for all participants in this value chain, most explicitly for builders which are tasked with creating optimal blocks. Validators can play sophisticated timing games which result in additional MEV capture; for example, Chorus One has run an advanced timing games setup since early 2023, and published extensively on it. In the PBS context, the best proxy for the total MEV extracted is the final bid a builder gets to submit during the block auction.

Such returns on sophistication extend to the concept of Oracle Extractable Value (OEV), which is a type of MEV that has historically gone uncaptured by protocols. This article will explain OEV, and how it can be best captured.

Oracles

Oracles are one of crypto's critical infrastructure components: they are the choreographers that orchestrate and synchronize the off-chain world with the blockchain’s immutable ledger. Their influence is immense: they inform all the prices you see and interact with on-chain. Markets are constantly changing, and protocols and applications rely on secure oracle feed updates to provide DeFi services to millions of crypto users worldwide.

The current status-quo is that third-party oracle networks serve as intermediaries that feed external data to smart contracts. They operate separately from the blockchains they serve, which maintains the core goal of chain consensus but introduces some limitations, including concepts such as fair sequencing, required payments from protocols and apps, and multiple sources of data in a decentralized world.

In practical terms, the data from oracles represents a great resource for value extraction. The market shift an oracle price update causes can be anticipated and traded profitably, by back-running any resulting arbitrage opportunities or (more prominently) by capturing resulting liquidations. This is Oracle Extractable Value. But how is it captured, and more importantly, who profits from it?

A potential approach to understand the value in OEV (using AAVE data).
Oracle Extractable Value (OEV)

In MEV, searchers (which are essentially trading bots that run on-chain) profit from oracle updates by backrunning them in a free-for-all priority gas auction. Value is distributed between the searchers, who find opportunities particularly in the lending markets for liquidations, and the block proposers that include their prices in the ledger. Oracles themselves have not historically been a part of this equation.

OEV changes this flow by atomically coupling the backrun trade with the oracle update. This allows the oracle to capture value, by either acting as the searcher itself or auctioning off the extraction rights.

How OEV created in DeFi can be captured by MEV searchers before the dApp gets access to it.

OEV primarily impacts lending markets, where liquidations directly result from oracle updates. By bundling an oracle update with a liquidation transaction, the value capture becomes exclusive, preventing front-running since both actions are combined into a single atomic event. However, arbitrage can still occur before the oracle update through statistical methods, as traders act on the true price seen in other markets.
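As a toy illustration of this atomic coupling, the following self-contained sketch (hypothetical names, not any specific protocol's API) shows why bundling the oracle update with the liquidation it unlocks makes the value capture exclusive: no competing transaction can slip between the two steps.

```python
# Toy lending market: the position only becomes liquidatable after the price update,
# so executing the update and the liquidation as one atomic bundle means whoever
# controls the update captures the liquidation bonus it creates.

class Oracle:
    def __init__(self, price):
        self.price = price

    def update(self, new_price):
        self.price = new_price

class LendingPool:
    LIQ_THRESHOLD = 0.8   # loan-to-value above which a position becomes liquidatable
    LIQ_BONUS = 0.05      # liquidator keeps 5% of the seized collateral value

    def __init__(self, oracle):
        self.oracle = oracle
        self.positions = {}   # borrower -> (collateral_eth, debt_usd)

    def open(self, borrower, collateral_eth, debt_usd):
        self.positions[borrower] = (collateral_eth, debt_usd)

    def liquidate(self, borrower):
        collateral_eth, debt_usd = self.positions[borrower]
        ltv = debt_usd / (collateral_eth * self.oracle.price)
        if ltv <= self.LIQ_THRESHOLD:
            raise ValueError("position is healthy, cannot liquidate")
        del self.positions[borrower]
        return collateral_eth * self.oracle.price * self.LIQ_BONUS   # liquidator's reward

oracle = Oracle(price=3000.0)
pool = LendingPool(oracle)
pool.open("alice", collateral_eth=10.0, debt_usd=20_000.0)   # healthy at $3000/ETH

# The OEV bundle: price update and liquidation executed back-to-back as one unit.
def oev_bundle():
    oracle.update(2400.0)            # the position becomes liquidatable only after this update
    return pool.liquidate("alice")

print(f"liquidation bonus captured atomically: ${oev_bundle():,.2f}")   # $1,200.00
```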

Current landscape

UMA and Oval:

  • UMA has developed a middleware product called Oval (in collaboration with Flashbots), which aims to redistribute value more fairly within the DeFi space.
  • Oval works by wrapping data and conducting an order flow auction where participants bid for the right to use the data, with proceeds shared among protocols like Aave, UMA, and Chainlink.
  • This means that Oval inserts an auction mechanism and lets the market decide what a particular price update is worth.
  • This system helps DeFi protocols like Aave capture value that would otherwise go to liquidators or validators, potentially increasing their revenue.
  • Recently, Oval announced they had successfully completed the “world’s first OEV capture”, through a series of liquidations on the platform Morpho Labs. They even claim a 20% APY boost on some pairs on Morpho.

API3 and OEV Network:

  • API3 launched the OEV Network as a L2 solution, which uses ZK-rollups to capture and redistribute OEV within the DeFi ecosystem.
  • The network functions as an order flow auction platform where the rights to execute specific data feed updates are sold to the highest bidder.
  • This is a different extraction mechanism, as it turns the fixed liquidation bonus into a dynamic market-driven variable through competition.
  • This approach aims to enhance the revenue streams of DeFi protocols and promote a more balanced ecosystem for data providers and users.
  • API3’s solution also incentivizes API providers by distributing a portion of the captured OEV, thus encouraging direct participation and somewhat disrupting the dominance of third-party oracles.

Warlock

  • Warlock is an upcoming OEV solution that will combine an oracle update sourced from multiple nodes with centralized backrun transactions.
  • The oracle update will feature increasing ZK trust guarantees over time, starting with computation consistency across oracle nodes.
  • Centralizing the backrun allows for lower latency updates, precludes searcher congestion, and protects against information leakage as the searcher entity retains exclusivity, i.e. does not need to obscure alpha. Warlock will service liquidations with internal inventory.
  • The upshot is that lending markets can offer more margin due to less volatility exposure via lower latency. The relative upside will scale with the sophistication of the searcher entity and the impact of congestion on auction-type OEV.
  • Overall, the warlock team estimates that a 10-20% upside will accrue to lending markets initially, with a future upside as value capture improves.

Where could this go?

The upshot of this MEV capture is that oracles have a new dimension to compete on. OEV revenue can be shared with dApps by providing oracle updates free of charge, or by outright subsidizing integrations. Ultimately, protocols with OEV integration will thus be able to bid more competitively for users.

OEV solutions share the same basic idea - shifting the value extraction from oracle updates to the oracle layer, by coupling the price feed update with backrun searcher transactions.

There are several ways of approaching this - an OEV solution may integrate with an existing oracle via an official integration, or through third party infrastructure. These solutions may also be purpose built and provide their own price update.

Heuristically, the key components of an OEV solution are the oracle update and the MEV transaction - these can be either centralized or decentralized.

We would expect purpose-built or “official” extensions to existing oracles to perform better, since they avoid the extra latency of running third-party logic on top of the upstream oracle. They are also more attractive from a risk perspective: with third-party infrastructure, upstream updates could spontaneously break integrations in undesired ways.

In practice, a centralized auction can make the most sense in latency-sensitive use cases. For example, it may allow a protocol to offer more leverage, as the risk of being stranded with bad debt due to stale price updates is minimized. By contrast, a decentralized auction likely yields the highest aggregate value in use cases where latency is less critical, i.e. where margin requirements are higher.

Mechanisms and Implications of OEV
  1. Atomic Liquidations
    • In the network supply chain, several blockchain actors can benefit from the informational advantage they possess.
    • Entities with privileged access to oracle data can leverage this information for liquidations or arbitrage.
    • This can create unfair advantages and centralize power among those with early data access.
  2. A new dimension to compete on
    • OEV can lead to substantial profit opportunities, with estimated profits in the millions of dollars. This is especially true in highly volatile markets.
    • OEV enables oracles to distribute atomic backrun rights to searchers, capturing significant value
    • Ecosystems that distribute value in proportion to the contributions (of users, developers, and validators) are likely to thrive.
  3. Potential Risks and Concerns
    • If not managed properly, OEV can undermine the fairness and integrity of decentralized systems. Although the oracle's core role remains the same, OEV opens the door to competition on how much value oracles can extract and pass on to dApps.
    • Some oracles like Chainlink have moved to reduce OEV and mitigate its impact, by refusing to endorse any third-party OEV solution. However, canonical OEV integrations are important as third party integrations bring idiosyncratic risk.
    • In traditional finance, market makers currently make all of the money from order flow. In crypto, there is a chance that value can be shared with users.
  4. Mitigation Strategies
    • Decentralization of Oracles: Using multiple independent oracles to aggregate data can reduce the risk of any single point of control.
    • Cryptographic Techniques: Techniques like zero-knowledge proofs can help ensure data integrity and fair dissemination without revealing the actual data prematurely.
    • Incentive Structures: Designing incentive structures that discourage exploitative behavior and promote fair access to data. Ultimately, the goal is a competitive market between oracles, where they compete with how much value can pass downstream.

Key Insights
  • Revenue Enhancement: By capturing OEV, projects can significantly enhance the revenue streams for DeFi protocols. For example, UMA’s Oval estimates that Aave missed out on about $62 million in revenue over three years due to not capturing OEV. By enabling these protocols to capture such value, they can reduce unnecessary payouts to liquidators and validators, redirecting this value to improve their own financial health.
  • Decentralization and Security: API3’s use of ZK-rollups and the integration with Polygon CDK provides a robust, secure, and scalable solution for capturing OEV. This approach not only ensures transparency and accountability but also aligns with the principles of decentralization by preventing a single point of failure and enabling more participants to benefit from the system. An aspect of this is also addressed by oracle-agnostic solutions and order flow auctions.
  • Incentives for API Providers: Both API3 and UMA’s solutions include mechanisms to incentivize API providers. API3, in particular, allows API providers to claim ownership of their data in Web3, providing a viable business model that promotes direct participation and reduces reliance on third-party oracles.
  • Impact on Users and Developers: For users and developers of DeFi applications, these innovations should be largely invisible yet beneficial. They help ensure that DeFi protocols operate more efficiently and profitably, potentially leading to lower costs and better services for end-users.
  • Adoption by Oracles and Protocols: Ultimately, oracles have a part to play in the expansion and acceleration of OEV extraction, either themselves or, more realistically, by partnering with third-party solutions. In recent weeks, UMA has launched OEV capture for Redstone oracle feeds, whilst Pyth Network announced a pilot for a new OEV capture solution. Protocols might also want to strike a balance between a new revenue stream (for the protocol, liquidity pools, liquidity providers…) and the negative externalities for their user base.

OEV is still in its early stages, with much development ahead. We're excited to see how this space evolves and will continue to monitor its progress closely as new opportunities and innovations emerge.

About Chorus One

Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.

Core Research
The evolution of shared security
We examine the various approaches to shared security, including Restaking, Bitcoin Staking, Rollups (L2's), and Inter-chain security (Cosmos)
June 28, 2024
5 min read

This article is extracted from the Q1 2024 Quarterly Insights. To read the full report, please visit https://chorus.one/reports-research/quarterly-network-insights-q1-2024

Authors: Michael Moser, Umberto Natale, Gabriella Sofia, Thalita Franklin, Luis Nuñez Clavijo

On PoS networks, the financial aspect of staking is equivalent to the computational power committed on PoW networks. If we were to make an analogy with PoW, shared security could be compared to “merge mining”, a mechanism that allows a miner to mine a block in one blockchain, by solving the cryptographic challenge on another chain.

As a generalization, shared security technologies imply at least one security provider chain and at least one security consumer chain. To guarantee security, the shared security solution must allow misbehavior on either the provider or consumer chain to be penalized, for example by slashing the capital used to secure the provider chain. Different approaches are being used to optimize for the specific needs of each ecosystem. We will review the approaches that are most advanced in terms of development, and highlight the incentives and risks associated with adopting those technologies.

Although one may argue that Ethereum pioneered the concept of shared security with L2s like Arbitrum and Optimism, other blockchains have been exploring “the appchain thesis” and experimenting with more customized solutions:

  • On Avalanche, validators of the Primary Chain stake AVAX and can participate in “Subnets” - a dynamic set of validators working together to achieve consensus on the state of a set of blockchains. Each blockchain is validated by exactly one Subnet. A Subnet can validate arbitrarily many blockchains. A node may be a member of arbitrarily many Subnets.
  • On Polkadot, validators are staked on the Relay Chain in DOT and validate for the Relay Chain. Parachain auctions are held on the Polkadot Relay Chain to determine which blockchain will connect to the parachain slot. Parachains connected to the Polkadot Relay Chain all share in the security of the Relay Chain.
  • On Cosmos, the Interchain Security stack allows for new L1 chains to rent security from the Cosmos Hub as a way to lower the barrier to economic security. This is accomplished by the validator set of the Cosmos Hub running the consumer chain's nodes as well, and being subject to penalties (“slashing”) of the stake deposited on the Hub.

The motivation behind shared security is twofold:

  • It reduces the complexity for launching new chains, repurposing battle-tested security from well-established chains and decreasing or even removing the need for building a validator set from scratch, and;
  • It improves capital efficiency, allowing individuals to participate and be rewarded in multiple PoS chains, without the need to deploy additional capital.

Rollups

Rollup solutions are the main contenders for Layer 2 (“L2”) scalability on Ethereum’s (the “L1”) path to modularity. This strategy allows execution, in terms of computation and memory, to be processed “off the main chain”. The settlement properties of the state are kept on the L1 chain, which pools the security of the ecosystem through its validator base, while state is “rolled” up from the L2 in batches (thus the name “rollup”).

This aggregation of transactions helps to minimize execution costs for each individual transaction. To maintain an ordered control of the state and upcoming transactions, rollups can make use of different architectures: historically we’ve seen a growing trend of optimistic (e.g. Arbitrum, OP, Base) or zero-knowledge (“ZK”, e.g. Starknet, Scroll) rollups, both of which have achieved limited levels of maturity in their proving mechanisms.

New architectures or upgraded versions of past ideas have also taken flight in the past months. Validiums have been brought back to the spotlight with new developments such as X Layer, and a particular flavor deemed “Optimium” (that uses the OP stack) now powers contenders such as Mantle, Mode Network, Metis, etc. The innovation, however, continues to thrive. The idea of “Based rollups” was first introduced in March by lead EF researcher Justin Drake: a simple design that allows L2 sequencing to be defined by L1 validators in their proposed blocks, thus deepening the shared security model between the layers.

It is safe to say that the rollup ecosystem continues to be the leading product in the shared security environment, with a TVL of $45.49  billion (counting canonically bridged, externally bridged, and natively minted tokens). In the last 180 days, transactions per second on the rollups have dwarfed activity on Ethereum mainnet, and the number of active users (considering distinct wallets) has risen meteorically in comparison to the L1.

EigenLayer

The idea behind shared security has captured extraordinary attention with EigenLayer, the restaking protocol built on Ethereum that has become a leading narrative within the network’s large staking community. In fact, restaking might well become a larger sector than even the entire industry of single-asset staking. Driven by growing demand from stakers (seeking increased returns on their investments) and developers (sourcing security), the industry is witnessing an unprecedented shake-up with capital flowing to secure multiple chains in aggregate. Concretely, EigenLayer’s TVL has managed to reach the 5 million ETH milestone at the time of writing.

Since we first identified restaking as a fundamental trend in our Q1 2023 edition, we’ve discussed EigenLayer at length and become deeply invested in the future success of the protocol: our research has focused on finding optimal risk-reward baskets for AVSs - total risk is not simply a combination of linear risks, but needs to take correlations into account.

As a result of our experience on the Holesky testnet and as mainnet operators for several AVSs, we publicized our approach to AVS selection. The thesis is straightforward: to identify and onboard the AVSs that have chances of being break-out winners, while filtering out the long tail of AVSs that merely introduce complexity and risk.

Much of what’s left to flesh out has to do with reward mechanisms and slashing conditions in these restaking protocols. As EigenLayer and other shared security models evolve and reach maturity, more information surfaces. Most recently, the Eigen Labs team presented their solution for the slashing dilemma (at least partially): $EIGEN. Current staking tokens have limitations in a model such as the AVS standard, due to the attributable nature of the slashing conditions on Ethereum. In other words, ETH can only secure work that is provable on-chain. And since AVSs are by definition exogenous to the protocol, they are not attributable to capital on Ethereum.

Enter $EIGEN, the nominal “universal intersubjective work token” that intends to address agreed faults that are not internally provable. The slashing agreements under this classification should not be handled through the ETH restaked pool (as they necessitate a governance mechanism to determine their validity) but through this second token, thus fulfilling the dual staking promise that the team had previously outlined. Currently, EigenDA is in its first phase of implementing this dual-quorum solution, and users can restake and delegate both ETH and EIGEN to the EigenDA operators.

ICS: replicated and mesh security

Replicated security went live on the Cosmos Hub in March 2023 as the initial version of the Interchain Security protocol (“ICS”). Through this system, other Cosmos chains can apply to get the entire security of the Cosmos Hub validator set. This is accomplished by the validator set of the Cosmos Hub running the consumer chain's nodes as well, and being subject to slashing for downtime or double signing. Inter-Blockchain Communication (“IBC”) is utilized to relay updates of validator stake from the provider to the consumer chain so that the consumer chain knows which validators can produce blocks.
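A toy sketch of this flow, using hypothetical types rather than the actual Interchain Security implementation: the provider chain relays validator-set (stake) updates over IBC, and the consumer chain applies them to know who may produce its blocks.

```python
# Simplified model of validator-set relaying: the provider sends power updates,
# the consumer applies them (zero power removes a validator). Names are illustrative.

from dataclasses import dataclass

@dataclass
class ValidatorSetUpdate:
    power_updates: dict   # validator consensus pubkey (simplified to a string) -> voting power

class ConsumerChain:
    def __init__(self):
        self.validator_powers = {}

    def on_ibc_packet(self, update: ValidatorSetUpdate):
        # Apply the provider chain's stake changes.
        for pubkey, power in update.power_updates.items():
            if power == 0:
                self.validator_powers.pop(pubkey, None)
            else:
                self.validator_powers[pubkey] = power

# The provider relays updates reflecting (un)delegations on the Cosmos Hub.
consumer = ConsumerChain()
consumer.on_ibc_packet(ValidatorSetUpdate({"val-a": 1_000_000, "val-b": 750_000}))
consumer.on_ibc_packet(ValidatorSetUpdate({"val-b": 0}))   # val-b fully unbonds on the provider
print(consumer.validator_powers)   # {'val-a': 1000000}
```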

Currently, all Cosmos Hub validators secure the consumer chains. Under discussion is the “opt-in security” or ICS v2, an evolution of the above, that allows validators to choose to secure specific consumer chains or not. Another long-awaited feature is the ability for a consumer chain to get security from multiple provider chains. Both, however, introduce security and scaling issues. For example, the validator set of a consumer chain secured by multiple providers can have poor performance, since it will grow too large.

Solving most of the concerns around Replicated Security, Mesh Security was presented by Sunny Aggarwal, the co-founder of Osmosis, in September 2022. The main insight is that instead of using the validator set of a provider chain to secure a consumer chain, delegators on one blockchain can be allowed to restake their staked assets to secure another Cosmos chain, and vice versa.

With Mesh Security, operators can choose whether to run a Cosmos chain and enable features to accept staked assets from another Cosmos chain, thereby increasing the economic security of the first one. This approach allows one chain to provide and consume security simultaneously.

BabylonChain

BabylonChain uses Bitcoin’s economic value to secure PoS chains. Specifically, Bitcoin has several properties that make it particularly well-suited for economic security purposes, most prominently its large market cap and, beyond this, the fact that it is unencumbered, less volatile, and generally idle and fairly distributed.

Staking is not a native feature of the Bitcoin blockchain. Babylon implements a remote staking mechanism on top of Bitcoin’s UTXO model, which allows the recipient of a transaction to spend a specific amount of coins specified by the sender. In this way, a staking contract can be generated that allows for four operations: staking, slashing, unbonding, and claiming coins after they have been unbonded. 


On a PoS chain using BabylonChain for security, blocks are first processed natively, and then, in a second round, validators provide finality by signing again using so-called extractable one-time signatures (EOTS). The central feature of this signature type is that when a signer signs two different messages using the same key, the private key is leaked.

Therefore, if a validator signs two conflicting blocks at the same time, the corresponding private key is leaked, allowing anybody to slash the staked BTC through a burn transaction.
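A toy, deliberately insecure illustration of the extraction principle: with Schnorr-style signature equations of the form s = k + e·x (mod q), reusing the one-time nonce k across two challenges lets anyone solve for the private key x. This only demonstrates the algebra; Babylon's actual EOTS construction differs in its details.

```python
# Demonstration of key extraction from nonce reuse (toy parameters, not a real scheme).

q = 2**255 - 19          # stand-in prime group order
x = 123456789            # the signer's private key
k = 987654321            # the one-time nonce -- must never be reused

def sign(challenge_e):
    """Response part of a Schnorr-style signature using the fixed nonce k."""
    return (k + challenge_e * x) % q

# A validator equivocates: two conflicting blocks produce two different challenges,
# both signed with the same one-time nonce.
e1, e2 = 1111, 2222
s1, s2 = sign(e1), sign(e2)

# Anyone can now solve the two linear equations for x:
#   s1 - s2 = (e1 - e2) * x  (mod q)
recovered_x = ((s1 - s2) * pow((e1 - e2) % q, -1, q)) % q
assert recovered_x == x
print("private key extracted from the two signatures:", recovered_x)
```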

Separately, BabylonChain protects against so-called long-range attacks by timestamping, where the PoS chain’s block hashes are committed to the Bitcoin chain. Such an attack would occur when a staker has unbonded but is still able to vote on blocks, i.e. could attack the chain costlessly. Through timestamping, the set of stakers on Bitcoin is synchronized with the blocks of the PoS chain, precluding a long-range attack.

No one-size-fits all approach

When exploring the evolution of different solutions to shared security, it becomes clear that it improves one of the dimensions of security in PoS chains - the financial commitment behind a network, resulting in a higher cost of corruption, or the minimum cost incurred by any adversary for successfully executing a safety or liveness attack on the protocols. As a natural challenge to modularity, some networks are born with optimized solutions to how different projects would be able to leverage a validator set. That is the case for Avalanche and Polkadot, for example. On the other side, there are solutions being built as an additional layer on top of existing networks, like EigenLayer and Babylon. And there is the Cosmos ICS, which leverages IBC, and is modular enough to not form part of either of the previous two groups.

In the set of analyzed projects, two categories emerged: restaking and checkpointing. The former aims to unlock liquidity in the ecosystems, while the latter works as an additional layer of security to a protocol, without directly changing the dynamics for stakers or node operators. In the end, those projects also have secondary effects on the networks. For example, restaking reduces the need for scaling the validator set in Cosmos, while checkpointing has the potential to minimize the unbonding period for stakers.

Shared security can also change the economic incentives to operate a network. Particularly related to restaking, the final rewards for validating multiple networks are expected to be higher than validating only one. However, as always, return scales with risk. Shared security can compromise on the decentralization dimension of security, opening the doors to higher levels of contagiousness during stress scenarios, and it also adds new implementation and smart contract risk.

In the context of decentralized networks, shared security is the idea of increasing the economic security of a blockchain through the use of resources from one or more other networks.


About Chorus One

Chorus One is one of the biggest institutional staking providers globally, operating infrastructure for 50+ Proof-of-Stake networks, including Ethereum, Cosmos, Solana, Avalanche, and Near, amongst others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and also invest in some of the most cutting-edge protocols through Chorus Ventures. We are a team of over 50 passionate individuals spread throughout the globe who believe in the transformative power of blockchain technology.

Core Research
Ethena: Delving into the Mechanics and Risks of USDe
An in-depth analysis of the risks and opportunities of Ethena Labs
June 17, 2024
5 min read

This article is extracted from the Q1 2024 Quarterly Insights. To read the full report, please visit https://chorus.one/reports-research/quarterly-network-insights-q1-2024

Ethena is a project that has recently captured significant attention, driven not only by their fundraising announcement in February but also by the early April launch of their governance token, $ENA. However, it is their product, USDe, that lies at the heart of ongoing debates and discussions. Described by the Ethena team as a 'synthetic dollar', a concept originally proposed by Bitmex, USDe has emerged as a focal point of discussion within the crypto community. While USDe may indeed be perceived as an innovative product, it's essential to acknowledge that all innovation carries inherent risks that must be carefully evaluated. This piece aims to explain how Ethena operates, including the mechanisms behind USDe and sUSDe, while also examining market dynamics and potential vulnerabilities in the case of black swan scenarios. The goal is to provide readers with comprehensive insights to better understand Ethena’s mechanisms.

Getting Started with the Fundamentals

When reviewing the official documentation, one will find the following passages:

Ethena is a synthetic dollar protocol built on Ethereum that provides a crypto-native solution for money not reliant on traditional banking system infrastructure, alongside a globally accessible dollar denominated instrument - the 'Internet Bond'.

and

Ethena's synthetic dollar, USDe, provides the crypto-native, scalable solution for money achieved by delta-hedging Ethereum and Bitcoin collateral. USDe is fully-backed (subject to the discussion in the Risks section regarding events potentially resulting in loss of backing) and free to compose throughout DeFi.

Understanding USDe isn't necessarily straightforward for everyone, as it necessitates some basic understanding of trading strategies and derivative products. What Ethena is doing with USDe is a cash and carry trade, which is a concept very well known in TradFi.

In this specific scenario, Ethena's objective in executing a cash and carry trade is to use spot assets as collateral to open a short position with a perpetual futures contract linked to the same underlying assets. That way, the position is delta-hedged and Ethena capitalizes on positive funding rates, ultimately distributing profits between USDe stakers (those who hold sUSDe tokens) and an insurance fund.

For those not familiar with the concept of perpetual futures contracts and delta hedging/delta neutral strategies, let’s define the concepts.

Perpetual futures contracts were popularized by BitMEX and are crypto derivatives that allow users to trade long or short positions with leverage if they want to. The concept is similar to traditional Futures Contracts but without an expiration date or settlement. Traders can maintain their positions indefinitely, with a funding mechanism ensuring that the contract's price stays closely tied to the spot price of the underlying asset.

  • If the index price exceeds the spot price due to more long positions than short, long traders have to pay a funding rate to short, incentivizing adjustments to bring the price closer to the spot level.
  • Conversely, an excess of short positions forces short traders to pay a funding rate to longs, ensuring convergence of the perpetual price to the spot price.

A Delta Neutral strategy is a strategy that aims to minimize directional risk by keeping a position's delta at zero. To achieve delta neutrality, traders typically offset the delta of one position with the delta of another position in such a way that any gains or losses from price movements are balanced out.

This strategy is popular among professional traders and market makers to hedge against market direction. Ethena uses this strategy to keep USDe stable around $1 without being affected by market movements.

Let’s take a look at a concrete example:

Let’s take the example of stETH. We assume stETH is trading at par (1 stETH = 1 ETH) with the price of ETH at $3000. If the price of ETH increases by 10% from $3000 to $3300, here's what will happen:

  • For the first leg, which is the collateral (long stETH position), the P&L would be $300 + staking yield.  
  • For the second leg, which is the short perpetual ETH position, the P&L would be -$300 + funding rate.

Note: If the stETH/ETH pair experiences a depeg, it could potentially result in a liquidation event, which may cause USDe to no longer be backed by $1 worth of collateral.

Therefore, the total P&L of the position would be:

Total P&L = $300 + staking yield - 300 + funding rate

The generalized formula would be:

Total P&L = (Δ·a + Σp) + (Γ·b + f)

Δ = rate of change of position a
a = collateral
Σp = additional parameters related to asset a (example: staking yield)
Γ = rate of change of position b
b = the short perpetual position
f = funding rate
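A small numeric check of the worked example, assuming stETH stays at par with ETH; the staking yield and funding amounts are placeholder inputs, not protocol data.

```python
# Delta-hedged P&L for a 1 ETH position over the +10% move from the example.

eth_price_before = 3000.0
eth_price_after = 3300.0        # the +10% move
position_size_eth = 1.0

staking_yield_usd = 10.0        # accrued on the stETH collateral (placeholder value)
funding_usd = 5.0               # received on the short perp while funding is positive (placeholder)

# Leg 1: the long stETH collateral gains with the price...
collateral_pnl = position_size_eth * (eth_price_after - eth_price_before) + staking_yield_usd
# Leg 2: ...while the short perpetual loses the same amount, plus the funding received.
perp_pnl = -position_size_eth * (eth_price_after - eth_price_before) + funding_usd

total_pnl = collateral_pnl + perp_pnl
print(total_pnl)   # the price exposure cancels out; only staking yield + funding remain (15.0)
```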

To conclude this part, we can argue that USDe is not a stablecoin. Ethena’s USDe represents a tokenized, delta-hedged strategy. It’s a pioneering concept that offers decentralized access to a hedge fund’s strategy.

Core Protocol Components

A. The USDe total supply

There are exclusively two ways to acquire USDe, depending on whether one is a whitelisted participant (a market maker for example) or not. The methods vary as follows:

1) Minting: A whitelisted entity decides to mint USDe by selecting a backing asset (like stETH) and entering the amount to use for minting. Then, the backing asset is swapped against the agreed amount of USDe that is newly minted.

Note: This method is exclusively available for whitelisted entities.

2) Buying through a liquidity pool: A user decides to buy USDe via the Ethena dApp and can exchange different sorts of stablecoins for USDe, which are available in liquidity pools from protocols such as Curve. This transaction, done via the Ethena UI, is routed with MEV protection through CowSwap.

At the time of writing, the total supply of USDe is 2,317,686,500 USDe in circulation. The evolution of the cumulative supply can be seen on the dashboard below:

Source: Ethena Labs on May 16th

As we can see, USDe experienced steady growth from February until early April, and then stagnated for most of April and May.

The largest daily inflow occurred on April 2nd, with 232,176,843 USDe minted. This corresponds to the launch of the $ENA governance token and its associated airdrop.

Source: https://dune.com/kambenbrik/ethena-usde

Conversely, the largest outflow occurred on April 13th, with 19,514,466 USDe removed from circulation. This happened during a sell-off triggered by the Bitcoin halving, when funding briefly turned negative.

To redeem USDe, only addresses whitelisted by the Ethena Protocol are eligible. These whitelisted addresses typically belong to entities such as market makers or arbitrageurs. For non-whitelisted addresses, the only way to exit is by selling USDe in liquidity pools, which can lead to a depegging event, similar to what occurred mid-April 2024 and May 2024.

In these specific scenarios, whitelisted addresses capitalize on this arbitrage opportunity by buying USDe on-chain and redeeming the collateral to realize profits.

B. Ethena’s collateral

Whitelisted addresses have the ability to generate USDe by providing a range of collateral options, including BTC, ETH, ETH LSTs, or USDT. Below is the current allocation of collateral held by Ethena:

This allocation is split between CEXs for executing a cash and carry trade, with some portion remaining unallocated.

Source: Ethena Labs on May 16th

The purpose of USDT is to purchase collateral and establish a delta-hedged position. However, there is currently a lack of publicly available information regarding the frequency of swaps, the trading process, and allocation specifics. Similar to a traditional hedge fund, this aspect appears to be at the discretion of the team, which makes this process opaque.

C. USDe, sUSDe and Insurance Fund

USDe can be seen as a claim over Ethena’s collateral. Users provide collateral (BTC, ETH, etc.) and receive USDe in exchange, while Ethena delta-hedges that collateral so that 1 USDe should always be worth $1 of Ethena collateral (factoring in execution costs). USDe can therefore be seen as a debt or a 'repayment commitment' from Ethena Labs, wherein USDe holders can redeem Ethena’s collateral.

However, even if considered a debt, holding USDe does not offer any yield. To earn yield on USDe, users can either:

  • Provide USDe liquidity in DeFi
  • Stake their USDe into sUSDe

In the second case, USDe has to be staked in order to receive the yield which comes from two sources:

  • Staking yield (when applied, such as stETH)
  • Funding rate

Yield is not paid directly to sUSDe holders; rather, it accumulates within the staking contract, resulting in the "value" of sUSDe rising over time. The relationship between sUSDe and USDe is as follows:

USDe per sUSDe = (Total USDe staked + total protocol yield deposited) / Total sUSDe supply

At the time of writing, 1 sUSDe = 1.058 USDe.

What is surprising is that, when we look at the data, only a small portion of USDe holders appear to be staking their USDe to earn yield.

The 370,127,486 sUSDe outstanding represent 391,594,880 USDe at a ratio of 1.058.

Out of the 2,317,686,500 USDe in circulation, only 391,594,880 are staked and generating yield, i.e. roughly 16.8% of the supply. Why wouldn't the remaining 83.2% stake to earn the yield? This is because of the Sats Campaign.
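For reference, the quoted figures can be reproduced directly from the supply numbers above (values as of the article's snapshot):

```python
# Reproducing the sUSDe/USDe exchange rate and the staked share of supply.

total_usde_supply = 2_317_686_500
susde_supply = 370_127_486
usde_staked = 391_594_880              # USDe backing the staked sUSDe

exchange_rate = usde_staked / susde_supply        # USDe per sUSDe
staked_share = usde_staked / total_usde_supply

print(f"1 sUSDe = {exchange_rate:.3f} USDe")            # ~1.058
print(f"staked share of supply = {staked_share:.1%}")   # ~16.9%, in line with the ~16.8% quoted above
```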

Ethena is currently running a Sats campaign that incentivizes USDe holders not to stake by rewarding them with SATS, which translate into additional ENA incentives for locking USDe, holding it, or providing USDe liquidity across various protocols.

Therefore, Ethena is using the ENA tokens as incentives to prevent USDe holders from staking it. Why is that? Because of the Insurance Fund.

The Insurance Fund is a safety measure created by the Ethena team to have a reserve for use in case of events such as negative funding rates (which we will discuss later in this article). The Insurance Fund can be tracked at the following address.

It currently holds a total of more than $39 million. Part of Ethena’s strategy is to use ENA to incentivize USDe holders not to stake, in order to fill the insurance fund and prepare for a bad scenario. This sets the stage for the next part, in which we will discuss some of the intrinsic risks related to the protocol.

Note: Since the publication of this article, the number of sUSDe in circulation has significantly increased. This is due to the fact that the insurance fund now has a fairly large treasury, as well as the increase in the caps for sUSDe on Pendle.

Intrinsic risks of the protocol

A. Negative funding rates

One of the most well-known risks of Ethena’s architecture is probably the risk of funding rates turning negative. As explained in the first part, Ethena is taking a short perpetual position to delta-hedge the spot collateral. If the funding rates turn negative (indicating more people are on the short side than the long side), there is a risk that the protocol starts losing money.

There are two mechanisms in place to mitigate losses coming from negative funding rates:

  • The staking yield generated by the assets. As of now, the collateral yield accounts for 0.66% of the Collateral Notional. With a total value of $2.3 billion, this represents around $15.18 million annually.
  • The Insurance Fund: As previously mentioned, it currently holds approximately +$39 million and receives daily yields from those who are not staking USDe.

The Insurance Fund steps in when the negative funding rate > the collateral yield.
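A back-of-the-envelope check of these figures, under the simplifying assumption that the hedge notional equals the collateral value:

```python
# Verifying the annual collateral yield quoted above and the implied break-even funding rate.

collateral_usd = 2.3e9                  # total collateral value from the figures above
collateral_yield_rate = 0.0066          # 0.66% of the collateral notional, annualized

annual_collateral_yield = collateral_usd * collateral_yield_rate
print(f"annual collateral yield: ${annual_collateral_yield / 1e6:.2f}M")    # ~$15.18M, as stated

# The insurance fund only starts paying once average funding turns more negative
# than the collateral yield, i.e. below roughly -0.66% annualized on this notional.
breakeven_funding_rate = -collateral_yield_rate
print(f"break-even annualized funding rate: {breakeven_funding_rate:.2%}")  # -0.66%
```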

Based on Ethena’s analysis, there has only been one quarter in the last 3 years where the average sum yield was negative, and this data was polluted by the ETH PoW arbitrage period, which was a one-off event that dragged funding deeply negative.

However, it’s important to mention that past data is not necessarily a representation of the future. As of May 13, 2024, Ethena represents 14% of the total Open Interest on ETH, and approximately 5% of the total open interest on BTC.


If Ethena continues to grow, there is a chance that it will come to represent too large a share of the total open interest, with the market knowing that this share sits on the short side. This would naturally push funding rates down and make negative funding more frequent, as the protocol becomes too large for the market.

If this scenario happens, Ethena will be forced at some point to cap USDe supply in order to adapt to the total open interest. Otherwise, Ethena would shoot itself in the foot.

B. The Liquidity Crunch

This is somewhat related to the negative funding rates mentioned earlier. When negative funding rates occur, there is a sell-off, as shown here:

Source: https://www.coinglass.com/funding/BTC

We can see that negative funding rates became more frequent on some exchanges between mid-April and mid-May. This translated into periods of USDe depegs, with inflows of USDe likely explained by whitelisted entities taking advantage of the depeg, and a USDe total supply that was not really growing.

The only way for non-whitelisted people to exit from USDe is to sell on the market, which will create a depeg. This will be captured by the whitelisted entities. If a depeg happens, whitelisted entities will buy USDe at a discount to redeem collateral by giving back USDe, therefore reducing the USDe circulating supply and capturing the profits.

This is an easy way for whitelisted entities to capture profits.

Example:

With negative funding rates, some people decide to exit USDe and sell on a DEX. USDe is now trading at $0.8. Whitelisted actors will buy USDe at $0.8 and redeem USDe against BTC or ETH for $1 worth of assets, then sell the collateral to capture $0.2 of profits (factoring the execution cost).
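Per unit of USDe redeemed, the arithmetic of this example looks as follows (the execution cost is a hypothetical placeholder):

```python
# Redemption arbitrage profit per 1 USDe bought at a discount during the depeg.

buy_price = 0.80          # discounted USDe bought on a DEX
redemption_value = 1.00   # $1 worth of BTC/ETH collateral received on redemption
execution_cost = 0.02     # hypothetical all-in cost of the round trip, per USDe

profit_per_usde = redemption_value - buy_price - execution_cost
print(f"{profit_per_usde:.2f}")   # 0.18, versus the $0.2 gross spread cited above
```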

Things become more complex when they have to deal with ETH LSTs; this is where the liquidity crunch can happen. Ethena currently has 14% of its total collateral in ETH LSTs, which, at the time of writing, represents around $324 million. It is not detailed which assets are held within the LSTs category, so we will assume it is mostly stETH.

Let’s now imagine a scenario where all native assets such as ETH and BTC have been redeemed by whitelisted actors, and Ethena now only has ETH LSTs as collateral.

Funding rates turn negative again, there is a sell-off of USDe, and whitelisted actors start redeeming USDe against ETH LSTs. Different scenarios can unfold; we present the main ones below:

Scenario 1: Whitelisted entities are directly selling the ETH LSTs on the market, capturing some profits but also reducing the arbitrage opportunity if more and more actors do so, as the ETH/ETH LSTs pair will start depegging.

This scenario can happen initially, and some traders will take advantage of the ETH/stETH depeg to buy stETH at a discount and unstake to get ETH. This will start impacting the exit/unstaking queue, leading to negative consequences in other scenarios.

Scenario 2: Whitelisted entities decide to unstake the ETH LSTs to get ETH and simultaneously open a short perp position on ETH to delta hedge and mitigate the risk associated with the token price.

They then wait for the exit queue to end, get the native ETH, close the short perp position, and profit.

If the funding rates are negative, the whitelisted actor might not engage in this arbitrage and redeem the collateral because it depends on how negative the funding rates are and how long the exit queue is.

If the exit queue is too long and funding rates are too negative to make that trade profitable, then actors who don’t want exposure to the asset price won’t take that trade. This would leave USDe depegged and trigger a bank run, with more and more people selling their USDe on the market.

  • They face duration risk: if the exit queue to unstake is too long, they won’t take that trade because they don’t want to wait that long to receive native ETH.
  • If USDe behaves like a falling knife, they might also refrain from taking that trade because they don’t want to buy USDe and redeem it, knowing that USDe sell-offs keep happening and the discount will be larger.

If USDe starts depegging and remains that way, Ethena’s insurance fund will also take a significant hit, mostly due to the negative funding rates and the fact that a portion of the insurance fund is in USDe. 


Of course, all these scenarios would only occur in a situation of a very extreme event. However, if such a scenario were to happen, non-whitelisted USDe holders would suffer the most, as their only way of exit would be to sell USDe. At least, changing this model by offering the redemption feature to everyone could improve the situation. In any case, if Ethena were to become big enough, this could lead to significant unstaking events, thereby impacting Ethereum's economic security.

If an attacker sees that most of Ethena's collateral is in ETH LSTs, they can choose to borrow USDe, sell it heavily on liquidity pools to break the peg, allow the first whitelisted actors to arbitrage and begin increasing the unstaking queue, and then keep selling USDe massively to start a bank run.

That's why it's important for Ethena not to grow too large and to ensure that the collateral in ETH LSTs is also capped.

C. The Execution risk

Holding USDe also involves trusting the Ethena team to execute the cash and carry trade effectively. Unfortunately, there isn't much information available about how this trade is executed. After reviewing the official documentation, there is no information provided about the trading team or how frequently this trade occurs. For example, there is currently $109.5 million of unallocated collateral in USDT, which will be used for the cash and carry trade, but no information on when those trades will be executed.

This is a review of the hidden risks associated with Ethena that users should be aware of. Of course, there are many more traditional risks related to the protocol, such as smart contract risks, custodial risks, or exchange risks. The Ethena team has done a great job of mentioning these traditional risks here.

In conclusion, the goal of this article was to explain what Ethena is, describe the mechanisms and innovations behind the protocol, and outline the associated risks. Users of a protocol should be aware of their exposures and act accordingly: there is no free lunch in the market, and Ethena presents multiple risks that should be taken into account before engaging with the protocol.

About Chorus One

Chorus One is one of the biggest institutional staking providers globally, operating infrastructure for 50+ Proof-of-Stake networks, including Ethereum, Cosmos, Solana, Avalanche, and Near, amongst others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and also invest in some of the most cutting-edge protocols through Chorus Ventures. We are a team of over 50 passionate individuals spread throughout the globe who believe in the transformative power of blockchain technology.

Core Research
MEV-Boost Withdrawal Bug
We describe a bug we've encountered in mev-boost, the standard software validators use to solicit blocks from sophisticated, specialized entities called builders on Ethereum.
March 11, 2024
5 min read

The following article is a summary of a recent ETHResearch contribution by Chorus One Research, which describes a bug we've encountered in mev-boost, the standard software validators use to solicit blocks from sophisticated, specialized entities called builders on Ethereum. This bug is not specific to Chorus One; it can affect all Ethereum validators running mev-boost.

To read the full paper, please visit: https://chorus.one/reports-research/mev-boost-withdrawal-bug

--

Chorus One runs a proprietary version of mev-boost, dubbed Adagio, which optimizes MEV capture by tuning latency. Our commitment to Adagio obligates us to maintain an in-depth understanding of mev-boost and Ethereum's PBS setup in general. As such, we decided to dive deeper and to make our findings available to the Ethereum community.

In practice, mev-boost facilitates an auction, where the winning builder commits to paying a certain amount of ETH for the right to provide the block that the validator proposing the next slot ("proposer") will include. This amount then accrues to an address provided by the validator, referred to as the "fee recipient".

Proposers and builders do not communicate directly, but exchange standardized messages via a third party called a "relay". The relay can determine the amount paid for a block by comparing the balance of the fee recipient at certain fixed times in the auction.

We have observed that in instances where the block in question coincidentally includes reward withdrawals due to the fee recipient, the relay has been unable to separate these withdrawals from the amount paid by the builder. This leads to an inflated value for the auction payment. This inaccuracy can negatively affect the Ethereum network under its current economic model (EIP-1559). Specifically, it may decrease the number of transactions processed and the amount of ETH burned, manifesting a small but measurable negative net outcome for the network overall.
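To illustrate the failure mode, here is a simplified sketch of the balance-difference check and a corrected variant that nets out withdrawals. The function and variable names are hypothetical; this is not the relay's actual code.

```python
# Simplified illustration of how a pure balance-difference check over-credits the builder
# when the block also contains consensus-layer withdrawals to the fee recipient.
# Names and structure are hypothetical; this is not relay source code.

def naive_builder_payment(balance_before: int, balance_after: int) -> int:
    """What a pure balance diff reports as the builder's payment (in wei)."""
    return balance_after - balance_before

def corrected_builder_payment(balance_before: int, balance_after: int,
                              withdrawals_to_fee_recipient: int) -> int:
    """Net out withdrawals credited to the fee recipient within the same block."""
    return balance_after - balance_before - withdrawals_to_fee_recipient

builder_payment = 10**17   # the builder actually pays 0.1 ETH
withdrawal = 3 * 10**18    # 3 ETH of staking rewards withdrawn to the same address
before = 0
after = before + builder_payment + withdrawal

print(naive_builder_payment(before, after))                  # 3.1 ETH -> inflated auction value
print(corrected_builder_payment(before, after, withdrawal))  # 0.1 ETH -> true builder payment
```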

For a deep dive, please visit: https://chorus.one/reports-research/mev-boost-withdrawal-bug

About Chorus One

Chorus One is one of the biggest institutional staking providers globally operating infrastructure for 50+ Proof-of-Stake networks, including Ethereum, Cosmos, Solana, Avalanche, and Near, amongst others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and also invest in some of the most cutting-edge protocols through Chorus Ventures.

Core Research
Opinion
Reflections #4: Research Recap
A refresher on Chorus One's significant research efforts in 2023
December 19, 2023
5 min read

Throughout 2023, Chorus One maintained its standing as one of the select few node operators to consistently deliver in-depth research reports, wherein our dedicated in-house research team delves into the latest developments in the crypto and staking world.

Edition #4 of our 2023 Reflections series recaps Chorus One’s significant research efforts in 2023. Dive in!

Featured
  1. MEV on the dYdX v4 chain: A validator’s perspective on impact and mitigation

This year, Chorus One introduced a major research effort, fueled by a grant from dYdX, that examines the implications of Maximum Extractable Value (MEV) within the context of dYdX v4 from a validator's perspective.

This comprehensive analysis presents the first-ever exploration of mitigating negative MEV externalities in a fully decentralized, validator-driven order book.

Additionally, it delves into the uncharted territory of cross-domain arbitrage involving a fully decentralized in-validator order book and other venues.

Dive in: https://chorus.one/reports-research/mev-on-the-dydx-v4-chain#

  2. The cost of artificial latency

We present a comprehensive analysis of the implications of artificial latency in the Proposer-Builder-Separation framework on the Ethereum network. Focusing on the MEV-Boost auction system, we analyze how strategic latency manipulation affects Maximum Extractable Value yields and network integrity. Our findings reveal both increased profitability for node operators and significant systemic challenges, including heightened network inefficiencies and centralization risks. We empirically validate these insights with a pilot that Chorus One has been operating on Ethereum mainnet.

Dive in: https://chorus.one/reports-research/the-cost-of-artificial-latency-in-the-pbs-context

TL;DR: https://chorus.one/articles/timing-games-and-implications-on-mev-extraction

  3. Breaking Bots: An alternative way to capture MEV on Solana

We published a whitepaper comparing key characteristics of Ethereum and Solana, which explores the block-building marketplace model, akin to the "flashbots-like model," and examines the challenges of adapting it to Solana.

Additionally, recognizing Solana's unique features, we also proposed an alternative to the block-building marketplace: the solana-mev client. This model enables decentralized extraction by validators through a modified Solana validator client, capable of handling MEV opportunities directly in the banking stage of the validator. Complementing the whitepaper, we also shared an open-source prototype implementation of this approach.

Dive in: https://chorus.one/reports-research/breaking-bots-an-alternative-way-to-capture-mev-on-solana

Quarterly Insights

Every quarter, we publish an exclusive report on the events and trends that dominated the Proof-of-Stake world. Check out our Quarterly reports below, with a glimpse into the topics covered in each edition.

Q1

Titles covered:

  • Cross-chain MEV: A New Frontier in DeFi
  • The Evolution of Shared Security
  • The Start of ZK Season
  • App-chain thesis and Avalanche subnets

Read it here: https://chorus.one/reports-research/quarterly-network-insights-q1-2023  

Q2

Titles covered:

  • ETH <> Arbitrum Cross-Chain MEV: a first estimate
  • ICS on Cosmos Hub, and Centralization
  • Expanding the Ethereum Staking Ecosystem: Restaking
  • Ecosystem Review - Injective

Read it here: https://chorus.one/reports-research/quarterly-network-insights-q2-2023

Q3

Titles covered:

  • A sneak peek at validator-side MEV optimization
  • Hedging LP positions by staking
  • Considerations on the Future of Ethereum Liquid Staking
  • New developments in State Sync and Partial Nodes

Read it here:  https://chorus.one/reports-research/quarterly-network-insights-q3-2023-2024

Reach out!

If you have any questions, would like to learn more, or get in touch with our research team, please reach out to us at [email protected]

About Chorus One

Chorus One is one of the biggest institutional staking providers globally operating infrastructure for 45+ Proof-of-Stake networks including Ethereum, Cosmos, Solana, Avalanche, and Near amongst others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and also invest in some of the most cutting-edge protocols through Chorus Ventures.

MEV
Core Research
Timing Games and Implications on MEV extraction
An empirical study on the effects of latency optimization on MEV capture
December 18, 2023
5 min read
Introducing Chorus One’s latest post on ethresear.ch

Today, our research team published a study on ethresear.ch, delving into the impact of latency (time) on MEV extraction. More specifically, we demonstrate the costs associated with introducing artificial latency within a PBS (Proposer-Builder Separation) framework. Additionally, we present findings from Adagio, an empirical study that explores the implications of latency optimization aimed at maximizing MEV capture.

In late August 2023, we launched Adagio, a latency-optimized setup on the Ethereum mainnet. The primary objective was to collect actionable data ethically, with minimal disruptions to the network.  Until this point, Adagio has not been a client-facing product, but an internal research initiative running on approximately 100 self-funded validators. We initially shared ongoing results of the Adagio pilot in our Q3 Quarterly Insights report  in October.

In alignment with our commitment to operational honesty and rational competition, this study discloses the full results of Adagio, alongside an extensive discussion of node operator incentives and potential adverse knock-on effects on the Ethereum network. As pioneers in MEV research, our primary objective is to address and mitigate existing competitive dynamics by offering a detailed analysis backed by proprietary data from our study, which will be explored further in the subsequent sections of this article.

This article offers a top-level summary of our study, contextualizing it within the ongoing Ethereum community dialogue on ethically optimizing MEV performance. We dive into the key findings of the study, highlighting significant observations and results. Central to our discussion is the exploration of the outcomes tied to the implementation of the Adagio setup, which demonstrates an overarching boost in MEV capture.

Ultimately, we recognise that node operators are compelled and incentivised to employ latency optimization as a matter of strategic necessity. As more operators take advantage of this inefficiency, they set a higher standard for returns, making it easier for investors to choose setups that use latency optimization.

This creates a cycle where the use of latency optimization becomes a standard practice, putting pressure on operators who are hesitant to join in. In the end, the competitive advantage of a node operator is determined by their willingness to exploit this systematic inefficiency in the system.

Additionally, we demonstrate that the parameters set by our Adagio setup correspond to an Annual Percentage Rate (APR) that is 1.58% higher than the vanilla (standard) case, with a range from 1.30% to 3.09%. Insights into these parameters are provided below, with additional clarity available in the original post.

A Note on the Wider Conversation on Timing Games

Let’s preface this section with a phrase: Right Place at the Right Time.

Delightfully analogous to the phrase above, we are adding further insights to the overarching discourse on latency optimization (i.e., a strategy where block proposers intentionally delay the publication of their block for as long as possible to maximize MEV capture) at a moment when it has become a burning topic within the Ethereum community, drawing increased attention from stakeholders concerned about its network implications.

Yet, despite its growing significance, there has been a noticeable lack of empirical research on this subject. As pioneers in MEV research, we've been investigating this concept for over a year, incorporating latency optimization as one of our MEV strategies from the outset. Now, we're proud to contribute to the ongoing discussions and scrutinize the most significant claims with robust, evidence-based research.

Why did we undertake this effort?

In a previous article about Chorus One’s approach to MEV, we emphasized the importance of exploring the dynamics between builders, relays, and validators with the dimension of time.

Our focus on how latency optimization can profoundly influence MEV performance remains unchanged. However, we've identified a crucial gap in empirical data supporting this concept. Compounding this issue, various actors have advocated for methods to increase MEV extraction without rigorous analysis, resulting in inflated values based on biased assumptions. Recognizing the serious consequences this scenario poses in terms of centralization pressure, we now find it imperative to conduct a deep dive into this complex scenario.

Our strategy involves implementing a setup tailored to collect actionable data through self-funded validators in an ethical manner, ensuring minimal disruptions to the network. This initiative is geared toward addressing the existing gap in empirical research and offering a more nuanced understanding of the implications of latency optimization in the MEV domain.

Key objectives

The key objectives of this research are three-fold:

  1. To describe the auction dynamics that give rise to latency strategies, and the associated externalities imposed on the Ethereum network
  2. To demonstrate practical results for maximizing MEV extraction through our Adagio setup
  3. To initiate a constructive discussion, contributing to an informed decision by the community.

In the following section, we will present a comprehensive overview of the three most pivotal and relevant observations from the study, and as promised earlier, we will also delve into the results of Adagio.

Observations
1. PBS dynamics, and the MEV-Boost auction

Context: First, we delve into PBS inefficiencies and MEV returns.

Here, we explore the inefficiencies in the Proposer-Builder Separation (PBS) framework, showing how timing in auctions can be strategically exploited to generate consistent, excess MEV returns.

Additionally, we demonstrate how all client-facing node operators are incentivized to compete for latency-optimized MEV capture, irrespective of their voting power.

Key Finding: Latency optimization is beneficial for all client-facing node operators, irrespective of their size or voting power.

We use an empirical framework to estimate the potential yearly excess returns for validators who optimize for latency, considering factors like the frequency of MEV opportunities, network conditions, and different latency strategies. Our results indicate that node operators with different voting powers see varying levels of predictability in their MEV increases.

Fig. 1: Cumulative probability of weekly MEV reward increases for a node operator with 13% voting power (left panel) and 1% voting power (right panel).

The above figure demonstrates that higher voting power tends to result in more predictable returns, while lower voting power introduces more variance. The median weekly MEV reward increase is around 5.47% for a node operator with 13% voting power and 5.11% for a node operator with 1% voting power.

The implication here is that big and small node operators cater to different utilities of their clients (delegators) because they operate at different levels of risk and reward. As a result, optimizing for latency is beneficial for both small and large node operators. In simpler terms, regardless of their size, node operators could consider optimizing latency to better serve their clients and enhance their overall performance.

As we look at a longer timeframe, the variability in rewards for any voting power profile is expected to decrease due to statistical principles. This means that rewards are likely to cluster around the 5% mark, regardless of the size of the node operator.

In practical terms, if execution layer rewards make up 30% of the total rewards, adopting a latency-aware strategy can boost the Annual Percentage Rate (APR) from 4.2% to 4.27%. This represents a noteworthy 1.67% increase in overall APR. Therefore, this presents a significant opportunity, encouraging node operators to adopt strategies that consider and optimize for latency.
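As a rough check of these figures: the 30% execution-layer share is the assumption stated above, and we plug in the 5.47% median uplift quoted earlier for a 13% voting-power operator; small differences from the quoted 1.67% come from rounding the APRs.

```python
# Back-of-the-envelope check of the APR uplift quoted above (assumed inputs).
base_apr = 0.042     # baseline validator APR
el_share = 0.30      # assumed share of total rewards coming from the execution layer
mev_uplift = 0.0547  # median weekly MEV reward increase quoted above (13% voting power)

boosted_apr = base_apr * (1 - el_share) + base_apr * el_share * (1 + mev_uplift)
print(round(boosted_apr * 100, 2))                   # 4.27 -> the boosted APR quoted above
print(round((boosted_apr / base_apr - 1) * 100, 2))  # ~1.64, quoted as ~1.67% after rounding
```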

2. The cost of artificial latency

Context: Second, we discuss the costs of introducing artificial delays, explaining how it increases MEV rewards but at the expense of subsequent proposers.

Key Finding: MEV tends to benefit node operators with higher voting power, giving them more stable returns. When these operators engage in strategic latency tactics, it can increase centralization risks, potentially raise gas costs, and burn ETH faster for the next proposer.

While sophisticated validators benefit from optimized MEV capture with artificial latency, the broader impact results in increased gas costs and a faster burning of ETH for the next proposers. The Ethereum network aims to maximize decentralization by encouraging hobbyists to run validators, but the outlined risks disproportionately affect solo validators. Below, we demonstrate that these downside risks are significant in scale, and disproportionately impact solo validators.

Fig. 2: (Left panel) PDF of the burnt ETH increase obtained after applying the 950 ms standard delay. (Right panel) Cumulative probability of the burnt ETH increase obtained after applying a delay.

Figure 2 illustrates that introducing artificial latency increases the percentage of ETH burned, potentially reducing final rewards. Even a small increase in burnt ETH can significantly decrease rewards, especially for smaller node operators who are chosen less frequently to propose blocks. The negative impact is most significant for solo validators, making them less competitive on overall APR and subject to greater income variability. Large node operators playing timing games benefit from comparatively higher APR at lower variance to the detriment of other operators.

MEV tends to benefit node operators with higher voting power, giving them more stable returns. When these operators engage in strategic latency tactics, it can increase centralization risks and potentially raise gas fees for the entire Ethereum network. Moreover, larger node operators, due to their size, have access to more data, giving them an edge in testing strategies and optimizing latency.

In this scenario, node operators find it necessary to optimize for latency to stay competitive. As more operators adopt these strategies, it becomes a standard practice, creating a cycle where those hesitant to participate face increasing pressure. This results in an environment where a node operator's success is tied to its willingness to exploit systematic inefficiencies in the process.

3. Empirical results from the Adagio pilot

Context: In late August 2023, Chorus One  launched a latency-optimized setup — internally dubbed Adagio — on Ethereum mainnet.

Its goal was to gather actionable data in a sane manner, minimizing any potential disruptions to the network. Until this point, Adagio has not been a client-facing product, but an internal research initiative running on approximately 100 self-funded validators. We are committed to both operational honesty and rational competition, and therefore disclose our findings via this study.

In simple terms, this section analyzes the outcomes of our Adagio pilot, focusing on how different relay configurations affect the timing of bid selection and eligibility in the MEV-Boost auction.

Our pilot comprises four distinct setups, each representing a variable (i.e. a relay) in our experiment: the Benchmark Setup, the Aggressive Setup, the Normal Setup, and the Moderate Setup.

Key Findings: The results of this pilot indicate that the timing strategies node operators adopt when querying relays have a significant impact on how competitive those relays are.

The aggressive setup, in particular, allows non-optimistic relays to perform similarly to optimistic ones. This means that certain relays can only effectively compete if they introduce an artificial delay.

In extreme cases, a relay might not be competitive on its own, but because it captures exclusive order flow, node operators might intentionally introduce an artificial delay when querying it or might choose not to use it at all. Essentially, these timing strategies play a crucial role in determining how relays can effectively participate and compete in the overall system.

These results offer valuable insights into how strategically introducing latency within the relay infrastructure can impact the overall effectiveness and competition in the MEV-Boost auction. The goal is to level the playing field among different relays by customizing their latency parameters.

Fig. 3: Box plot of the eligibility time of winning bids. The red lines represent the medians of the distributions, while the boxes represent the distributions between the 25% and 75% quantiles.

The above graph displays the eligibility time of winning bids in the Adagio pilot compared to the broader network distribution. As expected, Adagio selects bids that become eligible later relative to the network distribution. Notably, our setup always selects bids that become eligible before 1s, reducing the risk of missed slots and of an increased number of forks for the network.

Finally, it’s worth mentioning that our results indicate that certain setups are more favorable to winning bids. This opens up the possibility for relays adopting latency optimization to impact their submission rate.

Implications on overall MEV increase by adopting the Adagio setup

Bringing together the data on latency optimization payoff and the results of our Adagio pilot allows us to quantify the expected annual increase of validator-side MEV returns.

Fig. 4: PDF of the annual MEV increase expected by adopting the Adagio setup. The high spread is due to the low voting power we have with the current pilot.

The simulation results presented in Fig. 4 show that, on average, there is a 4.75% increase in MEV extracted per block, with a range from 3.92% to 9.27%. This corresponds to an Annual Percentage Rate (APR) that is 1.58% higher than the vanilla (standard) case, with a range from 1.30% to 3.09%.

The increased variability in the range is mainly due to the limited voting power in the pilot, but some of it is also caused by fluctuations in bid eligibility times. The observed median value is 5% lower than the theoretically projected value. To address this difference, the approach will be updated to minimize variance in bid selections and keep eligibility times below the 950ms threshold.

Key Takeaways

Let’s take a moment to consolidate the key takeaways derived from our study and the Adagio setup.

  1. Latency optimization is beneficial for all client-facing node operators, irrespective of their size or voting power, because they serve different utilities for their delegators.
  2. MEV tends to benefit node operators with higher voting power, giving them more stable returns. When these operators engage in strategic latency tactics, it can increase centralization risks and potentially raise gas fees for the entire Ethereum network. In this scenario, node operators find it necessary to optimize for latency to remain competitive. As more operators adopt these strategies, it becomes a standard practice, creating a cycle where those hesitant to participate face increasing pressure. This results in an environment where a node operator's success is tied to its willingness to exploit systematic inefficiencies in the process.
  3. Timing strategies used within relay operations have a significant impact on how competitive relays are. A relay might not be competitive on its own, so node operators may introduce an artificial delay when querying it or choose not to use it at all; these choices play a crucial role in determining how relays can effectively participate and compete in the overall system. Strategically implemented timing strategies, like those used in our Adagio pilot, can reliably lead to an increase in the MEV captured.

Chorus One’s MEV Work and Achievements

Since inception, Chorus One has recognised the importance of MEV and spearheaded the exploration of the concept within the industry. From establishing robust MEV policies and strategies and receiving a grant from dYdX to investigate MEV in the context of the dYdX Chain, to conducting empirical studies on the practical implications of the factors influencing MEV returns, we've consistently taken a pioneering role. Our dedication revolves around enhancing the general understanding of MEV through rational, honest, and practical methods.

For comprehensive details about our MEV policies, work, and achievements, please visit our MEV page.

Reach out!

If you’d like to learn more, have questions, or would like to get in touch with our research team, please reach out to us at [email protected].

If you want to learn more about our staking services, or would like to get started, please reach out at [email protected]

About Chorus One

Chorus One is one of the biggest institutional staking providers globally operating infrastructure for 45+ Proof-of-Stake networks including Ethereum, Cosmos, Solana, Avalanche, and Near amongst others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and also invest in some of the most cutting-edge protocols through Chorus Ventures.

Core Research
Considerations on the Future of Ethereum Liquid Staking
Chapter 3 of our Q3 Quarterly Insights explores the intricacies of liquid staking and Ethereum's unique design choices.
December 8, 2023
5 min read

People like to say that those who cannot remember the past are condemned to repeat it. However, sometimes forgetting the past is a deliberate choice: an invitation to build on completely new grounds, a bet that enables a different future.

All bets have consequences. In crypto especially, many of these consequences are so material that they become hard to comprehend: hundred-million-dollar exploit after exploit, billions vanishing into thin air... In its relatively short history, Ethereum has made many bets when deciding what the optimal protocol looks like. One such gamble was the decision not to enshrine native delegation into its Proof-of-Stake protocol layer.

Before the Merge, the standard PoS implementation was some form of DPoS (Delegated Proof-of-Stake). The likes of Solana and Cosmos had already cemented some of the groundwork, with features like voting and delegation mechanisms becoming the norm. Ethereum departed from this by opting for a pure PoS design philosophy.

The thought process here had to do with simplicity, but above even this, the goal was to force individual staking for a more resilient network: resilient to capture and resilient to third-party influence, whether in the form of companies or nation states.

How successful have these ideas been? We could write ad infinitum about the value of decentralization, strong social layers and other such platitudes, but we believe there’s more weight in concrete arguments. In this analysis we want to expand on the concepts and current state of the liquid staking market and what it actually means for the future of Ethereum. We also discuss the role of Lido and other LST protocols, such as Stakewise, in this market.

About derivatives and Liquid Staking

If there’s something that history has shown us, it is that derivatives can strengthen markets. This is true of traditional commodities where the underlying asset is difficult or impossible to trade, like oil, and even of mature financial instruments, like a single stock becoming part of a complicated index. In fact, the growth in the use of derivatives has led to exponential growth in the total volume of contracts in our economy.

In most markets, it is also common for derivatives volume to greatly surpass spot volume, providing significant opportunities across a large design space. It might sound familiar (and we will get to crypto in a moment), but this open design space has posed major challenges for risk-management practices even in mature traditional finance, in areas such as regulation, supervision of the mechanisms, and monetary policy.

Liquid tokens are one of the first derivative primitives developed solely for the crypto markets, and they have inherited much from their predecessors. When designing these products in the context of our industry, one has to account not only for the protocol-specific interactions, but also for regulation (both internal governance mechanisms and regulation in the legal sense), fluctuating market dynamics, and increasingly sophisticated trading stakeholders.

Let’s review some of Ethereum’s design choices, and how they fit into this idea. Ethereum has enforced some pretty intense protocol restrictions on staked assets, famously the 32 ETH requirement per validator and the lack of native delegation. Game theory has a notoriously difficult reputation in distributed systems design: mechanisms for incentivizing or disincentivizing any behavior will almost always have negative externalities.

Also, on-chain restrictions tend to be quite futile. In our last edition, we discussed some effects that can be observed in assets that resemble “money”, like the token markets of LSTs, including network effects and power-law distributions. But now we want to go deeper and consider: why is Liquid Staking so big on Ethereum and not on other chains?

We observe a clear relationship between the existence of a native delegation mechanism and the slower adoption of Liquid Staking protocols. In that sense, other chains have enshrined DPoS, which makes a similar high-adoption dynamic significantly less likely, whilst Ethereum has found itself increasingly growing in that direction.

We observe the results of the restrictions imposed at the protocol level. The network *allows* stake to be managed by individual actors, but there is no way to prevent aggregation or pooling. No matter how many incentives you create to make on-chain behavior as observable and auditable as possible, the reality is that, as it stands, the aggregate effect is not auditable.

stETH and alternatives

At the time of writing this analysis, Lido has managed to concentrate 31.76% of the market share for staking on Ethereum under its signature token stETH. This is an outstanding figure, not only in absolute terms but also relative to its position in the Liquid Staking market, where it controls an extraordinary ~80%, with close to 167,000 unique depositors on its public smart contracts. It is, by some margin, the largest protocol in crypto by Total Value Locked.

https://dune.com/lido/lido-dashboards-catalogue

A big issue with TVL is that it is heavily dependent on crypto prices. In the case of Lido, we actually observe that the inflow charts show a constantly growing trend from protocol launch to the present day. This is independent of the decreased crypto prices, minimal transaction output on-chain and the consequent inferior returns on the asset, with an APR that moves between 3.2% and 3.6% on the average day. This is, of course, below the network average for vanilla nodes, considering the protocol takes a 10% cut of staking rewards, divided between the DAO and its 38 permissioned Node Operators.

https://dune.com/hildobby/eth2-staking
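As a simple illustration of the fee drag described above (the gross APR used here is an assumed figure, not Lido data):

```python
# Hypothetical effect of Lido's 10% reward fee on the staker-facing APR.
gross_apr = 0.039    # assumed gross staking APR earned by the underlying validators
protocol_fee = 0.10  # 10% of rewards, split between the DAO and node operators

staker_apr = gross_apr * (1 - protocol_fee)
print(round(staker_apr * 100, 2))  # ~3.51%, consistent with the 3.2-3.6% range quoted above
```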

Recently, there’s been heated debate about the position and footprint of Lido inside Ethereum, as it relates to decentralization concerns and a specific number that constantly pops up. What is this 33.3% we keep hearing about?

There are two important thresholds related to PoS, the first being this 33.3% number, which in practical terms means that if an attacker could take control of that share of the network, they would be able to prevent it from finalizing, at least for a period of time. This is a progressive issue with more questions than answers: what if a protocol controls 51% of all stake? What about 100%?

Before diving into some arguments, it is interesting to contextualize liquid ETH derivatives as they compare to native ETH. In the derivatives market, the instrument allows the unbundling of the various risks affecting the value of an underlying asset. LSTs such as stETH combine pooling and some pseudo-delegation, and although this delegation is probably the main catalyst of high adoption, it is the pooling effect that matters most for decentralization. As slashing risk is socialized, it turns operator selection into a highly opinionated activity.

Another common use of derivatives is leveraged position-taking, in a sense the opposite of the previous use, which is focused on hedging risk. This makes an interesting case for the growth of stETH, as its liquidity and yield are in a way augmenting native ETH’s utility. There is no reason you cannot, for example, take leveraged positions in a liquid token and enjoy both sources of revenue. At least, this is true of the likes of stETH, which have found almost complete DeFi integration. As long as they are two distinct assets, one could see more value accrual going to derivatives, which is consistent with traditional markets.

This growth spurt is an interesting subject of study by itself, but we think it should also be possible to identify growth catalysts and apply them across the industry, to discover where other undervalued protocols might exist, if any. To do this, you would want to identify when the protocol had growth spurts, find out which events led to them, and search for those catalysts in other protocols.

One such example comes when protocols become liquid enough to be accessible to bigger players.

What would happen if we addressed the so-called centralization vectors and revisited in-protocol delegation? Or, more realistically, if we had the chance to reduce the pooling effect and allowed the market to decide the distribution of stake, for example by having one LST per node operator?

Alternatives like Stakewise have been building in that design space to create a completely new staking experience, one that takes into account the past.

In particular, Stakewise V3 has a modular design that mimics network modularity, in contrast to more monolithic LST protocols. For instance, it allows stakers the freedom to select their own validator, rather than enforcing socialized pooling. The protocol also helps mitigate some slashing risk, as losses can be easily confined to a single “vault”. Each staker receives a proportional amount of Vault Liquid Tokens (VLT) in return for depositing in a specific vault, which they can then mint into osETH, the traded liquid staking derivative.
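A minimal sketch of the deposit-and-mint flow described above, under a deliberately simplified accounting model; this is illustrative only and not Stakewise V3's actual contract interface.

```python
# Simplified model of a Stakewise-V3-style vault: depositors receive vault shares (VLT)
# proportional to their deposit and can mint osETH against those shares.
# Illustrative only; not the real contract logic, and the 90% LTV cap is an assumption.

class Vault:
    def __init__(self):
        self.total_assets = 0.0   # ETH held by the vault
        self.total_shares = 0.0   # VLT supply
        self.shares = {}          # staker -> VLT balance

    def deposit(self, staker: str, amount_eth: float) -> float:
        # First depositor gets 1:1; later deposits are priced at the current share rate.
        new_shares = amount_eth if self.total_shares == 0 else \
            amount_eth * self.total_shares / self.total_assets
        self.total_assets += amount_eth
        self.total_shares += new_shares
        self.shares[staker] = self.shares.get(staker, 0.0) + new_shares
        return new_shares

    def mintable_oseth(self, staker: str, ltv: float = 0.9) -> float:
        # osETH mintable against the staker's share of vault assets, capped by an assumed LTV.
        staker_assets = self.shares[staker] / self.total_shares * self.total_assets
        return staker_assets * ltv

vault = Vault()
vault.deposit("alice", 32.0)
vault.deposit("bob", 16.0)
print(vault.mintable_oseth("alice"))  # up to 28.8 osETH against Alice's 32 ETH position
```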

Although not without its complexities, it offers an alternative to the opinionated nature of permissioned protocols like Lido, in an industry where only a better product can go face to face with the incumbent.

A view into the future

If you design a system where the people with the most stake enforce the rules and there is an incentive for that stake to consolidate, there’s something to be said about those rules. However, can we really make the claim that there’s some inherent flaw in the design?

One of the points that gets brought up is the selection of protocol participants. However, a more decentralized mechanism for choosing node operators can actually have the unintended result of greater centralization of stake. We need only look at simple DPoS, which counts among its severe shortcomings generally poor delegate selection, very top-heavy stake delegation, and capital inefficiency.

Another issue has to do with enforcing limits on Liquid Staking protocols, or asking them to self-limit in the name of certain professed values. This paternalistic attitude punishes successful products in the crypto ecosystem, while simultaneously asserting that the largest group of stake in a PoS system is not representative of the system. Users have shown with their actions that even with the downsides of LSTs or DPoS (all kinds of risk, superlinear penalty scaling), this is still preferred to the alternative of taking on technical complexity.

An underlying problem lies in the beliefs that drive a lot of Ethereum’s design decisions, namely that all value should accrue to ETH alone and that no other token should generate value on the base layer. This kind of taxation is something we should be wary of, as it is pervasive in the technocracies and other systems we set ourselves apart from. Applications on Ethereum must also be allowed to generate revenue.

Ultimately, the debate about Lido controlling high levels of stake does seem to be an optics issue, and not an immediate threat to Ethereum. Moreover, it is the symptom of a thriving economy, which we have observed when compared to the traditional derivatives market.

Ethereum’s co-founder, Vitalik Buterin, recently wrote an article outlining some changes that could be applied to the protocol and to staking pools to improve decentralization. There he outlines ways in which the delegator role can be made more meaningful, especially with regard to pool selection. This would allow immediate improvements to the voting tools within pools, more competition between pools, and some level of enshrined delegation, whilst maintaining the philosophy of minimum viable enshrinement in the network and the value of the decentralized blockspace that is Ethereum’s prime product. At least, this looks like a way forward. Let’s see if it succeeds in creating an alternative, or if we will continue to replicate the same faulty systems of our recent financial history.

About Chorus One

Chorus One is one of the biggest institutional staking providers globally operating infrastructure for 45+ Proof-of-Stake networks including Ethereum, Cosmos, Solana, Avalanche, and Near amongst others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and also invest in some of the most cutting-edge protocols through Chorus Ventures.
