Today, our research team published a study on ethresear.ch, delving into the impact of latency (time) on MEV extraction. More specifically, we demonstrate the costs associated with introducing artificial latency within a PBS (Proposer-Builder Separation) framework. Additionally, we present findings from Adagio, an empirical study that explores the implications of latency optimization aimed at maximizing MEV capture.
In late August 2023, we launched Adagio, a latency-optimized setup on the Ethereum mainnet. The primary objective was to collect actionable data ethically, with minimal disruptions to the network. Until this point, Adagio has not been a client-facing product, but an internal research initiative running on approximately 100 self-funded validators. We initially shared ongoing results of the Adagio pilot in our Q3 Quarterly Insights report in October.
In alignment with our commitment to operational honesty and rational competition, this study discloses the full results of Adagio, alongside an extensive discussion of node operator incentives and potential adverse knock-on effects on the Ethereum network. As pioneers in MEV research, our primary objective is to address and mitigate existing competitive dynamics by offering a detailed analysis backed by proprietary data from our study, which will be explored further in the subsequent sections of this article.
This article offers a top-level summary of our study, contextualizing it within the ongoing Ethereum community dialogue on ethically optimizing MEV performance. We dive into the key findings of the study, highlighting significant observations and results. Central to our discussion is the exploration of the outcomes tied to the implementation of the Adagio setup, which demonstrates an overarching boost in MEV capture.
Ultimately, we recognise that node operators are compelled and incentivised to employ latency optimization as a matter of strategic necessity. As more operators take advantage of this inefficiency, they set a higher standard for returns, making it easier for investors to choose setups that use latency optimization.
This creates a cycle where the use of latency optimization becomes a standard practice, putting pressure on operators who are hesitant to join in. In the end, the competitive advantage of a node operator is determined by their willingness to exploit this systematic inefficiency in the system.
Additionally, we demonstrate that the parameters set by our Adagio setup correspond to an Annual Percentage Rate (APR) that is 1.58% higher than the vanilla (standard) case, with a range from 1.30% to 3.09%. Insights into these parameters are provided below, with additional clarity available in the original post.
Let’s preface this section with a phrase: Right Place at the Right Time.
Fittingly, we’re adding further insights to the overarching discourse on latency optimization (i.e., a strategy where block proposers intentionally delay the publication of their block for as long as possible to maximize MEV capture) at a moment when it has become a burning topic within the Ethereum community, drawing increased attention from stakeholders concerned about its network implications.
Yet, despite its growing significance, there has been a noticeable lack of empirical research on this subject. As pioneers in MEV research, we've been investigating this concept for over a year, incorporating latency optimization as one of our MEV strategies from the outset. Now, we're proud to contribute to the ongoing discussions and scrutinize the most significant claims with robust, evidence-based research.
In a previous article about Chorus One’s approach to MEV, we emphasized the importance of exploring the dynamics between builders, relays, and validators with the dimension of time.
Our focus on how latency optimization can profoundly influence MEV performance remains unchanged. However, we've identified a crucial gap in empirical data supporting this concept. Compounding this issue, various actors have advocated for methods to increase MEV extraction without rigorous analysis, resulting in inflated values based on biased assumptions. Recognizing the serious consequences this scenario poses in terms of centralization pressure, we now find it imperative to conduct a deep dive into this complex scenario.
Our strategy involves implementing a setup tailored to collect actionable data through self-funded validators in an ethical manner, ensuring minimal disruptions to the network. This initiative is geared toward addressing the existing gap in empirical research and offering a more nuanced understanding of the implications of latency optimization in the MEV domain.
The key objectives of this research are three-fold:
In the following section, we will present a comprehensive overview of the three most pivotal and relevant observations from the study, and as promised earlier, we will also delve into the results of Adagio.
Context: First, we delve into PBS inefficiencies and MEV returns.
Here, we explore the inefficiencies in the Proposer-Builder Separation (PBS) framework, showing how timing in auctions can be strategically exploited to generate consistent, excess MEV returns.
Additionally, we demonstrate how all client-facing node operators are incentivized to compete for latency-optimized MEV capture, irrespective of their voting power.
Key Finding: Latency optimization is beneficial for all client-facing node operators, irrespective of their size or voting power.
We use an empirical framework to estimate the potential yearly excess returns for validators who optimize for latency, accounting for factors such as the frequency of MEV opportunities, network conditions, and different latency strategies. Our results indicate that node operators with different voting powers see varying levels of predictability in their MEV increases.
The above figure demonstrates that higher voting power tends to result in more predictable returns, while lower voting power introduces more variance. The median weekly MEV reward increase is around 5.47% for a node operator with 13% voting power and 5.11% for a node operator with 1% voting power.
The implication here is that big and small node operators cater to different utilities of their clients (delegators) because they operate at different levels of risk and reward. As a result, optimizing for latency is beneficial for both small and large node operators. In simpler terms, regardless of their size, node operators could consider optimizing latency to better serve their clients and enhance their overall performance.
As we look at a longer timeframe, the variability in rewards for any voting power profile is expected to decrease due to statistical principles. This means that rewards are likely to cluster around the 5% mark, regardless of the size of the node operator.
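This clustering effect can be illustrated with a toy model (our own construction for this article, not the study's actual framework): blocks proposed per week follow a Binomial distribution, approximated here by a Gaussian, and each block's latency boost is independent noise around a ~5% mean. Averaging over more blocks shrinks the variance, so larger operators see tighter weekly outcomes.

```python
import math
import random
import statistics

SLOTS_PER_WEEK = 7 * 24 * 3600 // 12  # ~50,400 Ethereum slots per week

def weekly_boost_stats(voting_power, boost_mean=0.05, boost_sd=0.30,
                       weeks=1000, seed=42):
    """Median and spread of the average weekly MEV boost for one operator.

    Illustrative assumptions (not from the study): blocks proposed per
    week ~ Binomial(slots, voting_power), approximated by a Gaussian;
    each block's boost is i.i.d. noise around boost_mean.
    """
    rng = random.Random(seed)
    n_mean = SLOTS_PER_WEEK * voting_power
    n_sd = math.sqrt(SLOTS_PER_WEEK * voting_power * (1 - voting_power))
    samples = []
    for _ in range(weeks):
        n_blocks = max(1, round(rng.gauss(n_mean, n_sd)))
        # averaging n i.i.d. per-block boosts shrinks the noise by sqrt(n)
        samples.append(rng.gauss(boost_mean, boost_sd / math.sqrt(n_blocks)))
    return statistics.median(samples), statistics.pstdev(samples)

big_median, big_sd = weekly_boost_stats(0.13)    # ~13% voting power
small_median, small_sd = weekly_boost_stats(0.01)  # ~1% voting power
```

Under these assumptions both medians land near 5%, but the small operator's week-to-week spread is several times wider, matching the intuition in the figure above.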
In practical terms, if execution layer rewards make up 30% of the total rewards, adopting a latency-aware strategy can boost the Annual Percentage Rate (APR) from 4.2% to 4.27%. This represents a noteworthy 1.67% increase in overall APR. Therefore, this presents a significant opportunity, encouraging node operators to adopt strategies that consider and optimize for latency.
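The APR arithmetic in that example can be reproduced in a few lines. The 30% execution-layer share and the ~5% boost are the article's figures; the helper name is our own.

```python
def latency_apr(base_apr=0.042, el_share=0.30, mev_boost=0.0547):
    """APR after a latency-aware strategy boosts execution-layer rewards.

    Only the execution-layer slice of rewards (el_share) is scaled by
    the median MEV boost; consensus-layer rewards are untouched.
    """
    cl = base_apr * (1 - el_share)              # consensus-layer rewards
    el = base_apr * el_share * (1 + mev_boost)  # boosted execution rewards
    return cl + el

new_apr = latency_apr()                # ~0.0427, i.e. ~4.27% APR
relative_gain = new_apr / 0.042 - 1    # ~1.6% relative APR increase
```

With the 5.47% median boost this comes out just under the article's rounded 1.67% figure; the exact relative gain is simply el_share times the boost.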
Context: Second, we discuss the costs of introducing artificial delays, explaining how it increases MEV rewards but at the expense of subsequent proposers.
Key Finding: MEV tends to benefit node operators with higher voting power, giving them more stable returns. When these operators engage in strategic latency tactics, it can increase centralization risks, raise gas costs, and accelerate ETH burn for the next proposer.
While sophisticated validators benefit from optimized MEV capture via artificial latency, the broader impact is increased gas costs and faster ETH burn for subsequent proposers. The Ethereum network aims to maximize decentralization by encouraging hobbyists to run validators, yet, as we demonstrate below, these downside risks are significant in scale and disproportionately affect solo validators.
Figure 2 illustrates that introducing artificial latency increases the percentage of ETH burned, potentially reducing final rewards. Even a small increase in burnt ETH can significantly decrease rewards, especially for smaller node operators who are chosen less frequently to propose blocks. The negative impact is most significant for solo validators, making them less competitive on overall APR and subject to greater income variability. Large node operators playing timing games benefit from comparatively higher APR at lower variance to the detriment of other operators.
As noted above, strategic latency tactics by high-voting-power operators increase centralization risks and can raise gas fees across the entire Ethereum network. Moreover, larger node operators, due to their size, have access to more data, giving them an edge in testing strategies and optimizing latency.
In this scenario, node operators find it necessary to optimize for latency to stay competitive. As more operators adopt these strategies, it becomes a standard practice, creating a cycle where those hesitant to participate face increasing pressure. This results in an environment where a node operator's success is tied to its willingness to exploit systematic inefficiencies in the process.
Context: In late August 2023, Chorus One launched a latency-optimized setup — internally dubbed Adagio — on Ethereum mainnet.
Its goal was to gather actionable data in a sane manner, minimizing any potential disruptions to the network. Until this point, Adagio has not been a client-facing product, but an internal research initiative running on approximately 100 self-funded validators. We are committed to both operational honesty and rational competition, and therefore disclose our findings via this study.
In simple terms, this section analyzes the outcomes of our Adagio pilot, focusing on how different relay configurations affect the timing of bid selection and eligibility in the MEV-Boost auction.
Our pilot comprises four distinct setups, each representing a variable (i.e., a relay) in our experiment: the Benchmark Setup, the Aggressive Setup, the Normal Setup, and the Moderate Setup.
Key Findings: The results of this pilot indicate that the timing strategies node operators adopt when querying relays have a significant impact on how competitive those relays are.
The aggressive setup, in particular, allows non-optimistic relays to perform similarly to optimistic ones. This means that certain relays can only effectively compete if they introduce an artificial delay.
In extreme cases, a relay might not be competitive on its own, but because it captures exclusive order flow, node operators might intentionally introduce an artificial delay when querying it or might choose not to use it at all. Essentially, these timing strategies play a crucial role in determining how relays can effectively participate and compete in the overall system.
These results offer valuable insights into how strategically introducing latency within the relay infrastructure can impact the overall effectiveness and competition in the MEV-Boost auction. The goal is to level the playing field among different relays by customizing their latency parameters.
The above graph displays the eligibility time of winning bids in the Adagio pilot compared to the broader network distribution. As expected, Adagio selects bids that become eligible later with respect to the network distribution. Notably, our setup always selects bids eligible before 1s, reducing the risks of missed slots and an increased number of forks for the network.
Finally, it’s worth mentioning that our results indicate that certain setups are more favorable to winning bids. This opens up the possibility for relays adopting latency optimization to impact their submission rate.
Bringing together the data on latency optimization payoff and the results of our Adagio pilot allows us to quantify the expected annual increase of validator-side MEV returns.
The simulation results presented in Fig. 4 show that, on average, there is a 4.75% increase in MEV extracted per block, with a range from 3.92% to 9.27%. This corresponds to an Annual Percentage Rate (APR) that is 1.58% higher than the vanilla (standard) case, with a range from 1.30% to 3.09%.
The increased variability in the range is mainly due to the limited voting power in the pilot, but some of it is also caused by fluctuations in bid eligibility times. The observed median value is 5% lower than the theoretically projected value. To address this difference, the approach will be updated to minimize variance in bid selections and keep eligibility times below the 950ms threshold.
Let’s take a moment to consolidate the key takeaways derived from our study and the Adagio setup.
Since inception, Chorus One has recognised the importance of MEV and spearheaded the exploration of the concept within the industry. From establishing robust MEV policies and strategies, to receiving a grant from dYdX for investigating MEV in the context of the dYdX Chain, to conducting empirical studies that investigate the practical implications of factors influencing MEV returns, we've consistently taken a pioneering role. Our dedication revolves around enhancing the general understanding of MEV through rational, honest, and practical methods.
For comprehensive details about our MEV policies, work, and achievements, please visit our MEV page.
If you’d like to learn more, have questions, or would like to get in touch with our research team, please reach out to us at research@chorus.one.
If you want to learn more about our staking services, or would like to get started, please reach out at staking@chorus.one.
About Chorus One
Chorus One is one of the biggest institutional staking providers globally operating infrastructure for 45+ Proof-of-Stake networks including Ethereum, Cosmos, Solana, Avalanche, and Near amongst others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and also invest in some of the most cutting-edge protocols through Chorus Ventures.
People like to say that those who cannot remember the past are condemned to repeat it. However, sometimes forgetting the past is a deliberate choice: an invitation to build on completely new grounds, a bet that enables a different future.
All bets have consequences. Specifically in crypto, many of these consequences are so material that they become hard to comprehend: hundred-million-dollar exploit after exploit, billions vanishing into thin air... In its relatively short history, Ethereum has made many bets when deciding what the optimal protocol looks like. One such gamble was the decision not to enshrine native delegation into its Proof-of-Stake protocol layer.
Before the Merge, the standard PoS implementation was some variant of DPoS (Delegated Proof-of-Stake). The likes of Solana and Cosmos had already cemented some of the groundwork, with features like voting and delegation mechanisms becoming the norm. Ethereum departed from this by opting for a pure PoS design philosophy.
The thought process here had to do with simplicity, but above all, the goal was to force individual staking for a more resilient network: resilient to capture and resilient to third-party influence, whether in the form of companies or nation states.
How successful have these ideas been? We could write ad infinitum about the value of decentralization, creating strong social layers and any other such platitudes, but we believe there’s more weight in real arguments. In this analysis we want to expand on the concepts and current state of the liquid staking market and what it actually means for the future of Ethereum. Also, we talk about the role of Lido and other LST protocols such as Stakewise in this market.
If there’s something that history has shown us, it is that derivatives can strengthen markets. This is true of traditional commodities where the underlying asset is difficult or impossible to trade, like oil, or even mature financial instruments, like a single stock becoming a complicated index. In fact, the growth in the use of derivatives has led to exponential growth in the total volume of contracts in our economy.
It is common as well that in most markets, the volume of derivatives greatly surpasses the spot, providing significant opportunities across a large design space. It might sound familiar (and we will get to crypto in a moment), but this open-design space has posed major challenges for risk-management practices in the already mature traditional finance, in areas such as regulation and supervision of the mechanisms, and monetary policy.
Liquid tokens are one of the first derivative primitives developed solely for the crypto markets, and have greatly inherited from their predecessors. When designing these products in the context of our industry, one has to account not only for the protocol-specific interactions, but also the terms of regulation (from the internal governance mechanisms and also in the legal sense), fluctuating market dynamics and increasingly sophisticated trading stakeholders.
Let’s review some of Ethereum’s design choices, and how they fit into this idea. Ethereum has enforced some pretty intense protocol restrictions on staked assets, famously the 32 ETH requirement per validator and the lack of native delegation. Game theory has a notoriously difficult reputation in distributed systems design. Mechanisms for incentivizing or disincentivizing any behavior will almost always have negative externalities.
Also, on-chain restrictions tend to be quite futile. In our last edition, we discussed some effects that can be observed in assets that resemble “money”, like the token markets of LSTs, including network effects and power law distributions. But now we want to go deeper and consider: why is Liquid Staking so big on Ethereum and not on other chains?
We observe a clear relationship between the existence of a native delegation mechanism and the slower adoption of Liquid Staking protocols. In that sense, other chains have enshrined DPoS, which makes a similar high-adoption dynamic significantly less likely, whilst Ethereum has found itself increasingly growing in that direction.
We observe the results of the restrictions imposed at the protocol level. The network *allows* stake to be managed by individual actors, but there is no way to prevent aggregation or pooling. No matter how many incentives you create for on-chain behavior to be as observable and maximally auditable as possible, the reality is that, as it stands, the aggregate effect is never auditable.
At the time of writing this analysis, Lido has managed to concentrate 31.76% of the market share for staking in Ethereum under its signature token stETH. This is an outstanding figure, not only in absolute terms but also relative to its position in the Liquid Staking market, where it controls an extraordinary ~80%, with close to 167,000 unique depositors on its public smart contracts. It is, by a margin, the largest protocol in crypto by Total Value Locked.
A big issue with TVL is that it is heavily dependent on crypto prices. In the case of Lido, we actually observe that the inflow charts show a constant growing trend from protocol launch to the present day. This is independent of decreased crypto prices, minimal transaction output on-chain, and the consequent inferior returns on the asset, with an APR that moves between 3.2% and 3.6% on the average day. This is, of course, below the network average for vanilla nodes, considering the protocol takes a 10% cut from staking rewards, divided between the DAO and its 38 permissioned Node Operators.
Recently, there’s been heated debate related to the position and surface of Lido inside Ethereum, as it relates to decentralization concerns and a specific number that constantly pops up. What is this 33.3% we keep hearing about?
There are two important thresholds related to PoS, the first being this 33.3% number, which in practical terms means that if an attacker could take control of that share of the network, they would be able to prevent it from finalizing... at least for a period of time. This is a progressive issue with more questions than answers: what if a protocol controls 51% of all stake? How about 100%?
Before diving into some arguments, it is interesting to contextualize liquid ETH derivatives as they compare to native ETH. In the derivatives market, the instrument allows the unbundling of various risks affecting the value of an underlying asset. LSTs such as stETH combine pooling and some pseudo-delegation, and although this delegation is probably the main catalyst of high adoption, it is the pooling that most impacts decentralization. As slashing risk is socialized, it turns operator selection into a highly opinionated activity.
Another common use of derivatives is leveraged position-taking, in a way the opposite of the previous one that is more focused on hedging risk. This makes an interesting case for the growth of stETH, as in a way its liquidity and yielding capabilities are augmenting native ETH’s utility. There is no reason you cannot, for example, take leveraged positions in a liquid token and enjoy both sources of revenue. At least, this is true of the likes of stETH which have found almost complete DeFi integration. As long as they are two distinct assets, one could see more value accrual going to derivatives, which is consistent with traditional markets.
This growth spurt is an interesting subject of study by itself, but we think it would be also possible to identify growth catalysts, and also apply them across the industry, to discover where some other undervalued protocols might exist if any. For this, you would want to identify when the protocol had growth spurts, find out which events led to that and search for these catalysts in other protocols.
One such example comes when protocols become liquid enough to be accessible to bigger players.
What would happen if we addressed the so-called centralization vectors and revisited in-protocol delegation? Or, more realistically, if we had the chance to reduce the pooling effect and let the market decide the distribution of stake, for example by having one LST per node operator?
Alternatives like Stakewise have been building in that design space to create a completely new staking experience, one that takes into account the past.
In particular, Stakewise V3 has a modular design that mimics network modularity, in contrast to more monolithic LST protocols. For instance, it allows stakers the freedom to select their own validator, rather than enforcing socialized pooling. The protocol also helps mitigate some slashing risk, as losses can be easily confined to a single “vault”. Each staker receives a proportional amount of Vault Liquid Tokens (VLT) in return for depositing in a specific vault, which they can then mint into osETH, the traded liquid staking derivative.
Although not without its complexities, it offers an alternative to the opinionated nature of permissioned protocols like Lido, in an industry where only a better product can go face to face with the incumbent.
If you design a system where the people with the most stake enforce the rules and there is an incentive for that stake to consolidate, there’s something to be said about those rules. However, can we really make the claim that there’s some inherent flaw in the design?
One of the points that gets brought up is the selection of protocol participants. However, a more decentralized mechanism for choosing node operators can actually have the unintended result of greater centralization of stake. We need only look at simple DPoS, whose severe shortcomings include generally poor delegate selection, very top-heavy stake delegation, and capital inefficiency.
Another issue has to do with enforcing limits on Liquid Staking protocols, or asking them to self-limit in the name of some purported values. This paternalistic attitude punishes successful products in the crypto ecosystem, while simultaneously asserting that the largest group of stake in a PoS system is not representative of the system. Users have shown with their actions that even with the downsides of LSTs or DPoS (all kinds of risk, superlinear penalty scaling), this is still preferred to the alternative of taking on technical complexity.
An underlying problem exists in the beliefs that drive many of Ethereum’s design decisions, namely that all value should accrue to ETH alone and that no other token should generate value on the base layer. This taxation is something we should be wary of, as it is very pervasive in the technocracies and other systems we stand apart from. Applications on Ethereum must be allowed to generate revenue too.
Ultimately, the debate about Lido controlling high levels of stake does seem to be an optics issue, and not an immediate threat to Ethereum. Moreover, it is the symptom of a thriving economy, which we have observed when compared to the traditional derivatives market.
Ethereum’s co-founder, Vitalik Buterin, recently wrote an article outlining some changes that could be applied to the protocol and staking pools to improve decentralization. There he outlines the ways in which the delegator role can be made more meaningful, especially with regard to pool selection. This would have immediate effects on voting tools within pools, foster more competition between pools, and bring some level of enshrined delegation, whilst maintaining the philosophy of minimum viable enshrinement in the network and the value of the decentralized blockspace that is Ethereum’s prime product. At least, this looks like a way forward. Let’s see if it succeeds in creating an alternative, or if we will continue to replicate the same faulty systems of our recent financial history.
Web3 founders face a crucial decision when deciding to launch their product. If they want to avoid the layer 2 option due to concerns surrounding centralized sequencers and multisig bridges, they must choose between two main paths: developing their product as a smart contract and deploying it on an existing Layer 1 blockchain, or taking the ambitious route of creating their own blockchain from scratch. The former option comes with different advantages, notably removing the complexities of infrastructure management, ensuring a decentralized foundation, and leveraging the network effect inherent in the underlying blockchain.
Yet, opting for a smart contract deployment is not without tradeoffs. It leads to a competition for block space, resulting in a worse user experience characterized by inflated gas costs and transaction fees, coupled with an impact on transaction executions. The immutability of smart contracts can also be restrictive, offering little flexibility for the protocol in the case of critical bugs or hacks. The smart contract approach also lacks sovereignty, as the protocol will be subject to the rules of the hosting blockchain.
One solution that has gained popularity in the last two years to address the challenges of the smart contract approach is the appchain thesis, which was pioneered by Cosmos and followed by Polkadot. The idea behind this model is to build a dedicated blockchain for one application. Compared to the smart-contract solution, this model offers sovereignty and full customizability from the blockchain to the application. It also enhances performance and scalability since the application has its own blockspace. This leads to increased opportunities for the token to capture value, such as MEV, as Osmosis does, in addition to capturing other network fees.
Certainly, this solution involves several important factors to consider. It requires the management of the chain's infrastructure, ensuring its own security, attracting validators, and designing a tokenomics model that aligns the interests of validators, stakers, and app users.
What if we could easily launch an application, similar to deploying a smart contract, and gain the benefits of an appchain, all without any initial investment or extensive effort? This is exactly what Saga's value proposition is about.
The Saga protocol functions like application-specific blockchains as a service. In other words, Saga is a blockchain used to easily launch other blockchains, called “Chainlets” in the Saga ecosystem. Chainlets are secured by the Saga blockchain and its validators through a mechanism called Interchain Security, a well-known shared-security system in Cosmos.
Interchain security means that one blockchain, in this case Saga, acts as a provider of security for other blockchains, in this case the Chainlets. As a result, the Chainlets inherit the benefits of running a Cosmos SDK appchain but outsource their block validation and validator set to Saga.
Therefore, a Chainlet is a sovereign blockchain that has the same level of security and decentralization as Saga.
Saga introduces an easy, decentralized, and secure approach to deploying application-specific blockchains. This solution also grants developers the autonomy to choose their preferred Virtual Machine (VM), with initial support for the Ethereum Virtual Machine (EVM).
In the long run, Saga aims for Chainlets to be VM-agnostic, meaning developers would have the flexibility to choose from a variety of virtual machines, including the EVM, CosmWasm, or the JavaScript VM, for example.
The way Chainlets are created differs slightly from what we can observe on the Cosmos Hub when launching consumer chains with Replicated Security. In contrast to the Cosmos Hub, the launch of a Chainlet with Saga is entirely permissionless.
Developers only need to have SAGA tokens to pay for setting up and maintaining their Chainlet. This is similar to services offered by Amazon Web Services and other SaaS platforms, except that here the subscription fee is paid in SAGA tokens to create and maintain a Chainlet.
This means that once the fee is paid, the role of Saga validators is to set up and run the infrastructure for a Chainlet, similar to how Cosmos Hub validators also operate the infrastructure of the consumer chains.
To launch a Chainlet, a developer is required to allocate funds to an escrow account using SAGA tokens. This escrow account can be pre-funded to any desired amount and works like a prepaid service to cover the costs associated with the Chainlet. If the deposited fee is depleted, the Chainlet goes offline until the developer deposits more SAGA in the account. The fee is determined per epoch, where one epoch lasts approximately one day.
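The prepaid-escrow mechanic above can be sketched in a few lines. This is a hypothetical helper of our own, not Saga's actual accounting; the fee and deposit figures are made up for illustration.

```python
def chainlet_runway(deposit_saga, fee_per_epoch, top_ups=None):
    """Number of epochs a Chainlet stays live on a given escrow deposit.

    One epoch lasts roughly one day; the Chainlet goes offline once the
    escrow can no longer cover the next epoch's fee. `top_ups` maps an
    epoch number to an extra SAGA deposit made at that epoch.
    """
    if fee_per_epoch <= 0:
        raise ValueError("fee_per_epoch must be positive")
    top_ups = top_ups or {}
    balance = deposit_saga
    epoch = 0
    while True:
        balance += top_ups.get(epoch, 0)
        if balance < fee_per_epoch:
            return epoch  # offline from this epoch onward
        balance -= fee_per_epoch
        epoch += 1

# e.g. 100 SAGA escrowed at 7 SAGA per epoch keeps a Chainlet live
# for 14 epochs (~2 weeks); a mid-stream top-up extends the runway.
```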
Diverse methods could be used for funding the escrow account with SAGA tokens:
This subscription fee is determined by the Saga validator set. Before the start of a new epoch, each Saga validator submits the fee they would like to receive for running a Chainlet. These bids are then locked before the start of the next epoch, and a Musical Chair Auction begins.
The Musical Chair Auction is a process that aims to establish a universal price for running a Chainlet. In this context, each validator presents their bid, and only the w validators with the lowest prices are included in the 'Winning Set'. The remaining validators with higher bids constitute the 'Losing Set'.
The final cost of running a Chainlet is determined by the highest bid within the Winning Set. This means the validator with the highest bid in the Winning Set receives exactly its desired price, while every other validator in the Winning Set receives more than it asked for, earning a margin over its own bid.
The price that developers will have to pay for Saga validators to run a Chainlet is:
Price(run Chainlet) = max(Bid in Winning Set) × Number of Saga validators
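The auction's clearing rule can be sketched as follows. This is an illustrative reading of the mechanism, not Saga's code: the bid values, validator names, and winning-set size w are invented, and the total cost follows the formula above (clearing price multiplied by the number of Saga validators).

```python
# Toy sketch of the Musical Chair Auction: the w lowest bids form the
# Winning Set, and every winner is paid the highest bid inside that set.
# All names and numbers are illustrative assumptions.

def musical_chair_auction(bids: dict, w: int):
    """Return (winning_set, losing_set, price_per_validator)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winning_set = dict(ranked[:w])   # the w lowest bids win
    losing_set = dict(ranked[w:])    # higher bidders lose the auction
    # Uniform clearing price: the highest bid within the Winning Set,
    # so all but the marginal winner earn a margin over their own bid.
    price_per_validator = max(winning_set.values())
    return winning_set, losing_set, price_per_validator

bids = {"val-a": 1.0, "val-b": 1.2, "val-c": 1.5, "val-d": 2.0, "val-e": 3.0}
winners, losers, clearing = musical_chair_auction(bids, w=4)
total_cost = clearing * len(bids)  # per the pricing formula above
print(clearing, total_cost)        # 2.0 10.0
```

Note how the uniform price rewards aggressive (low) bidding: "val-a" bid 1.0 but is paid 2.0, while "val-e" is excluded entirely.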
To prevent collusion or Sybil attacks on the Winning and Losing Sets, the Winning Set must be large enough that controlling it is challenging. According to the Saga team, it should comprise between 75% and 85% of the participants in the Musical Chair Auction.
However, the Musical Chair Auction is not riskless for a validator. In fact, the mechanism is designed to incentivize validators to submit bids as low as possible, rewarding validators within the Winning Set, while penalizing those in the Losing Set.
A possible way for the team to handle punishment is to treat it like validator downtime: validators who are down for a certain period get a minor slash and are jailed (removed from the active set). Validators who lose the auction too often in a given period could also be minorly slashed and jailed.
Hence, the SAGA token has multiple use cases: it serves as the subscription fee that keeps a Chainlet alive and as the reward for validators running the infrastructure. The auction system creates a 1:1 relationship between developer costs and validator revenues. We can also imagine pools of validators that share the cost, with each validator running only some Chainlets rather than all of them, to improve scalability.
Saga and its Chainlets introduce an interesting token structure, as gas fees are not explicitly collected from end users. Within a Chainlet, gas fees can be paid using Saga, the developer’s own Chainlet token, no tokens at all (gasless transactions), or even other tokens such as ETH or USDC.
It's worth noting that gas fees generated within a specific Chainlet are directed to a wallet managed by the developer. This confers a high degree of flexibility to the Chainlet and its team in determining their preferred monetization approach.
Consequently, with Chainlets, developers benefit from predictable and low costs, an easy process for deploying their blockchains, and the capacity to horizontally scale applications. While Chainlets inherit security from Saga, there exists a method for a Chainlet to also leverage and inherit Ethereum's security using the Saga stack. Let’s delve into this aspect in the following section.
Saga Ethlet is a new Ethereum scaling solution that combines the best attributes from appchains, rollups, and validiums into a single product. Launching an Ethlet will be as easy as launching a Chainlet: with one click, an Ethlet can be created and inherit Ethereum's security.
How does this mechanism work? Ethlets work with three essential components: Data Availability, State Hash Commitment, and Fraud Proof.
At the end of each epoch (~1 day), the blocks produced during that time frame are batched, forming the 'batched epoch'. A new epoch, referred to as the 'challenge period', then begins. During this challenge period, Saga's validators can use a fraud-proof mechanism (optimistic, ZK, or interactive) to identify any fraudulent transactions or state transitions within the blocks of the batched epoch. If no fraud proof has been presented by the end of the challenge period, the state hash of the batched epoch is committed to Ethereum, and the committed state thereby inherits the security of Ethereum.
This implies that there is a one-epoch delay for a state hash to be committed to Ethereum and inherit its security. However, it's important to note that blocks inherit Saga’s security even before being committed to Ethereum.
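The one-epoch delay can be made concrete with a toy timeline. This is a sketch of the flow as described above, with illustrative function and parameter names, not Saga's actual implementation.

```python
# Toy model of the Ethlet commitment delay: a state hash batched in epoch N
# is committed to Ethereum at the end of epoch N+1 (the challenge period),
# assuming no fraud proof is presented. Names are illustrative.

from typing import Optional

def ethereum_commit_epoch(batched_epoch: int,
                          fraud_proof_presented: bool) -> Optional[int]:
    challenge_epoch = batched_epoch + 1  # the next epoch is the challenge window
    if fraud_proof_presented:
        return None  # a proven-fraudulent state hash is never committed
    # Committed at the end of the challenge period: a one-epoch delay.
    return challenge_epoch

print(ethereum_commit_epoch(10, fraud_proof_presented=False))  # 11
print(ethereum_commit_epoch(10, fraud_proof_presented=True))   # None
```

In the interim (the challenge epoch itself), the blocks already carry Saga's Tendermint finality, so users are never waiting on Ethereum for ordinary confirmation.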
Finally, Saga will be used as a Data Availability layer, similar to a validium, to avoid the high Data Availability costs of Ethereum. An Ethlet thus achieves fast finality through Tendermint, facilitates rapid bridging, and leverages the advantages of IBC. This approach ensures cost-effectiveness while also inheriting Ethereum's security.
Saga offers any developer the ability to easily launch their application as a Chainlet and inherit Saga's mainnet level of security and decentralization from the start. By choosing this option, the application benefits from its own dedicated blockspace, and the team gains more control over the blockchain and application layers than they would by launching as a smart contract. If the developer chooses, they can upgrade a Chainlet into an Ethlet and gain the benefits of Ethereum's security.
Saga is initially focused on gaming and entertainment chains, as its partnerships show. Gaming is one of the fastest-growing sectors in web3, and a gaming project, such as a video game, needs its own dedicated, scalable blockchain capable of supporting high transaction volumes – exactly what Saga offers and what Chainlets based on the Cosmos SDK can provide. As web3 gaming and entertainment continue to grow and demand for scalable architecture increases, Saga presents itself as the solution to provide that architecture and is confident in onboarding the next 1000 chains in the Multiverse.
About Chorus One
Chorus One is one of the largest institutional staking providers globally, operating infrastructure for 40+ Proof-of-Stake networks including Ethereum, Cosmos, Solana, Avalanche, and Near, among others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and investments in some of the most cutting-edge protocols through Chorus Ventures.
This document is a summary of a longer article — “The financialized staking economy” — published in Chorus One’s ‘Annual Staking Review’ for 2022. Click here to read the entire report.
Cryptocurrencies can be used in three kinds of yield-bearing activity, with cumulative trust assumptions.
We believe staking yield is the most attractive risk-adjusted source of yield in crypto for two reasons:
Proof-of-stake ecosystems do not have an anchor in the real world. This means that the staking yield rate denoted in native terms is completely decoupled from any kind of factor in the wider economy. For staking, endogenous capital (e.g. ETH) is the only factor of production.
This differs from proof-of-work (PoW) systems, where electricity and hardware costs serve as an unbridgeable anchor to the real economy, directly affecting a miner's yield rate. It also differs from most CeFi and DeFi yield sources, which depend more heavily on user activity.
The above implies that staking can be an uncorrelated yield source for two kinds of investors — those that are bullish long-term and denominate their holdings in native units, and those that are hedged against the price risk of the staked asset.
The token price risk may be hedged out through on- or off-chain solutions. The former has the advantage of transparency, reflected in an improved counterparty risk assessment and iron-clad terms. With some of the largest lending desks in the space embroiled in a liquidity crisis, this is a significant factor. Validators are ideally positioned to execute on-chain hedging, as they directly interface with the staking yield source, so no custody transfer (i.e., additional risk) is required to use a hedging solution.
One increasingly popular on-chain hedging solution is a “staking yield interest rate swap”. This allows validators to swap token-denominated staking yield for a stablecoin, typically USDC, locking in a stable and predictable income for a staking client. The associated risk is very minor as neither the validator nor the swap counterparty takes custody of the principal — the worst case, a counterparty default, would reduce to the price risk on the yield earned on the staked notional. Chorus One can leverage Alkimiya, the leading protocol for on-chain capital markets, to execute this type of hedge.
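The economics of such a swap can be sketched numerically. This is a simplified toy model, not Alkimiya's actual mechanics: the rates, prices, and function name are illustrative assumptions.

```python
# Toy model of a staking-yield interest rate swap: the staker receives a
# fixed USDC leg and pays away the floating, token-denominated staking
# yield. Principal never changes hands. All figures are illustrative.

def swap_pnl_usdc(staked_tokens: float,
                  realized_yield_rate: float,   # floating leg, in tokens
                  fixed_usdc_per_token: float,  # agreed fixed USDC rate
                  spot_price_usdc: float) -> float:
    floating_leg = staked_tokens * realized_yield_rate * spot_price_usdc
    fixed_leg = staked_tokens * fixed_usdc_per_token
    # Positive when the locked-in fixed income beats the realized floating
    # yield at current prices. A counterparty default only risks the yield
    # on the staked notional, never the principal itself.
    return fixed_leg - floating_leg

# 32 ETH staked, 4% realized yield, fixed leg locked at 80 USDC per ETH,
# ETH trading at 1,800 USDC:
print(swap_pnl_usdc(32, 0.04, 80.0, 1800.0))  # 256.0
```

Because only the yield leg is at risk, the worst-case loss on a default is bounded by one period's yield, which is what makes the structure attractive relative to custodial hedging.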
A second way to hedge is by using the staking yield to finance classic options-based strategies. For example, a zero-cost collar options package may incorporate the staking yield in a way that enables an asymmetric pay-off.
Chorus One is invested in & advises a range of solutions optimizing staking yield for return (i.e. MEV) and risk (i.e. hedging). Reach out to us at sales@chorus.one to learn more about how these can be tailored to fit your use case.
For more information, please visit chorus.one