Core Research
MEV
Networks
Analyzing MEV Instances on Solana — Part 2
This is the second article of the Solana MEV outlook series.
May 31, 2022
5 min read

Introduction

This is the second article of the Solana MEV outlook series. In this series, we use a subset of transactions to extrapolate which type of Maximum Extractable Value (MEV) is being extracted on the Solana network and by whom.

MEV is an extensive field of research, ranging from opportunities created by network design or application-specific behaviour to trading strategies similar to those applied in traditional financial markets. As a starting point, we investigate whether sandwich attacks are happening. In the first article, we examined Orca’s swap transactions in search of this pattern. Head to Solana MEV Outlook — part 1 for a detailed introduction, goals, challenges and methodology. A similar study is performed in the present article: we look at on-chain data covering approximately 8 h of transactions on the Raydium DEX. This is a small slice of activity, given that Decentralized Exchange (DEX) applications alone account for on the order of 4 × 10⁷ transactions per day on Solana. This simplification lets us get familiar with the data and extract as much information as we can, with a view to extending the analysis to a wider range of transactions in the future.

Raydium DEX

Raydium is a relevant Automated Market Maker (AMM) application in the Solana ecosystem: the second program by number of daily active users and the third in terms of program activity.

Fig. 1: Solana programs activity breakdown, source from solana.fm.

Raydium program offers two different swap instructions:

  1. SwapBaseIn: takes as input the amount of tokens the user wants to swap, and the minimum amount of tokens in output needed to avoid excessive slippage.
  2. SwapBaseOut: takes the amount of tokens the user wants to receive, and the maximum amount of tokens in input needed to avoid excessive slippage.

The user interface (“UI”) interacting with the smart contract uses the first instruction type, leaving SwapBaseIn responsible for 99.9% of successfully executed swap instructions:

Fig. 2: Swap instructions from here.

We built a dataset by extracting the inputs from the data byte array passed to the program, and the actual swap token amounts by looking at the instructions contained in the transaction. Comparing the minimum amount of tokens specified in the transaction with the actual amount the user received, we estimate the maximum slippage tolerance for every transaction. Computing the corresponding slippage for each swap yields the histogram:

Fig 3: Number of transactions per slippage.
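A minimal sketch of this extraction and slippage estimate, assuming the Raydium AMM v4 data layout — a 1-byte instruction tag (here 9 for SwapBaseIn) followed by little-endian u64 `amountIn` and `minimumAmountOut` fields; the tag value and field order are assumptions to verify against the program source:

```python
import struct

def decode_swap_base_in(data: bytes) -> dict:
    """Decode a SwapBaseIn instruction's data byte array.

    Assumed layout (Raydium AMM v4): u8 instruction tag, then two
    little-endian u64 fields: amount_in and minimum_amount_out.
    """
    tag, amount_in, min_amount_out = struct.unpack("<BQQ", data[:17])
    return {"tag": tag, "amount_in": amount_in, "min_amount_out": min_amount_out}

def estimated_slippage(min_amount_out: int, actual_out: int) -> float:
    """Maximum slippage tolerance implied by a swap: how far below the
    realized output (taken from the inner token-transfer instructions)
    the user was willing to go."""
    return 1 - min_amount_out / actual_out

# Synthetic example: swap 1,000,000 base units, accept at least 990,000,
# and actually receive 1,000,000 of the output token.
raw = struct.pack("<BQQ", 9, 1_000_000, 990_000)
ix = decode_swap_base_in(raw)
print(estimated_slippage(ix["min_amount_out"], 1_000_000))  # ≈ 0.01, i.e. a 1% tolerance
```

In practice the realized output is not in the instruction data; it has to be read from the inner SPL token transfers executed by the swap.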

The default slippage value in the Raydium app is 1%, so we can assume that at least 28% of transactions use the default. Since it is not possible to know the state of the pool at the moment a transaction was created, this number could be somewhat higher.

It can be assumed that slippage values of nearly 0% are achieved only by sophisticated investors using automated trading strategies. The Orca swaps histogram, presented in Fig. 2.2 of the previous article, shows a peak in transactions with slippage around 0.1%. On Raydium, a relevant proportion of transactions lies below 0.05%. This suggests that trading strategies with lower risk tolerance, i.e. price-sensitive strategies, correspond to 25% of swap transactions (accumulating the first two bars of the histogram).

Further evidence that automated trading is common on this DEX is that, on average, 40% of transactions fail, mostly because of the tight slippage allowed by user settings.

Fig 4.1: Number of transactions successfully executed (blue) and reverted (gray) by Raydium program. Source: dune.com.
Fig 4.2: Error messages in reverted transactions breakdown. Source: dune.com.

Dataset

We are considering more than 30,000 instructions interacting with the Raydium AMM program, from 02:43:41 to 10:25:21 UTC on 2022–04–06. For statistical purposes, failed transactions are ignored.

Of the 114 different liquidity pools accessed during this period, SOL/USDC is the most traded, with 4,000 transactions.

Fig. 5: 40 most relevant pools — representing 75% of all Raydium swap transactions.

The sample contains 1,366 different validators appearing as leaders in the more than 35,000 slots we are considering, representing 93% of the total stake and 78% of the total validator population at the time of writing, according to Solana Beach.

Fig. 6: The proportion of slots for each of the 20 most relevant leaders.

Of the 5,101 different addresses executing transactions, 10 accounts concentrate 23% of the total. One of the most active accounts on Raydium, Cwy…3tf, also appears in the top 5 accounts on the Orca DEX.

Fig. 7: Top 10 accounts by number of Raydium swaps

The graph below shows the total number of transactions for accounts with at least two transactions in the same slot. Using this as a proxy for automated trading, on average 9 different accounts can be classified into two groups:

  • high-frequency behaviour: accounts with 3 successfully executed transactions per second;
  • moderate frequency: accounts with approximately 1 transaction per second.

Fig. 8: Number of transactions for the 60 most active accounts with multiple transactions in at least one slot.
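A sketch of how this proxy classification could work on (account, slot) pairs — the thresholds mirror the two categories above, the sample data is invented for illustration, and we approximate one slot as roughly one second:

```python
from collections import Counter

# Invented sample: one (account, slot) entry per successful swap.
txs = [
    ("bot_A", 100), ("bot_A", 100), ("bot_A", 100),
    ("bot_A", 101), ("bot_A", 101), ("bot_A", 101),
    ("bot_B", 100), ("bot_B", 101),
    ("human", 100),
]

per_slot = Counter(txs)  # transactions per (account, slot) pair

def classify(account: str) -> str:
    counts = [n for (a, _), n in per_slot.items() if a == account]
    if max(counts) >= 2:  # multiple txs in at least one slot -> automated proxy
        rate = sum(counts) / len(counts)  # avg txs per active slot (~per second)
        return "high-frequency" if rate >= 3 else "moderate-frequency"
    return "likely manual"

print(classify("bot_A"))  # high-frequency
print(classify("bot_B"))  # likely manual
print(classify("human"))  # likely manual
```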

We can also look at the pools where these accounts trade most often; they tend to specialize in different pools. The table below shows the two pools with the most transactions for each of the 5 most active addresses:

Deep-diving into account activity by pool, we can see that two accounts concentrate their transactions in the WSOL/USDT pool; one account is responsible for half of all transactions in the mSOL/USDC pool; and most of the transactions in the GENE/RAY pool are done by a single account (Cwy…3tf).

Fig. 9: Transactions owner breakdown for the 5 pools with the highest number of transactions. Each different account is represented by a new color.

Results

Searching for sandwich behaviour means identifying at least 3 transactions executed in the same pool within a short period of time. For the purpose of this study, only consecutive transactions are considered. The strategy requires the first transaction to be in the same direction as the sandwiched transaction, followed by a transaction in the opposite direction of the initial trade, closing out the position of the MEV player.

Fig. 10: 3 steps of a sandwich attack
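The filter described above can be sketched as follows; the field names and the ±1 direction encoding are illustrative, not the actual dataset schema:

```python
from collections import defaultdict

# Each swap: (slot, pool, account, direction), direction +1 (buy X) or -1 (sell X).
swaps = [
    (1, "SOL/USDC", "bob",   +1),  # front-run: same direction as the victim
    (1, "SOL/USDC", "alice", +1),  # sandwiched swap
    (1, "SOL/USDC", "bob",   -1),  # back-run: closes the MEV player's position
    (2, "SOL/USDC", "carol", +1),
]

by_group = defaultdict(list)
for slot, pool, account, direction in swaps:
    by_group[(slot, pool)].append((account, direction))

def sandwich_candidates(groups):
    """Flag (slot, pool) groups containing 3 consecutive swaps where the first
    and last come from the same account in opposite directions, and the first
    two share a direction (profit-taking proxy)."""
    hits = []
    for key, seq in groups.items():
        for i in range(len(seq) - 2):
            a, b, c = seq[i], seq[i + 1], seq[i + 2]
            if a[0] == c[0] and a[1] == b[1] and a[1] == -c[1]:
                hits.append(key)
                break
    return hits

print(sandwich_candidates(by_group))  # [(1, 'SOL/USDC')]
```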

The need for price impact implies a dependence on the amount of capital available for every trade. Some MEV strategies can be performed atomically, as a sequence of operations executed in the same transaction; these usually benefit from flash loans, allowing anyone to apply them regardless of the capital they have access to. This is not the case for sandwich attacks, since the profit is only realized after the successful execution of all three transactions (Fig. 10).

As shown in the first article, the amount of capital needed in order to create value depends on the Total Value Locked in the pool — the deeper the liquidity, the more difficult it is to impact the price. Head to Fig. 2.4 of the first article for the results of a simulation on Orca’s SOL/USDC pool. The figure shows the initial capital needed to extract a given percentage of the swap.

In the current sample, we found 129 blocks with more than three swaps in the same pool; most of these swaps are in the same direction, showing no evidence of profit-taking. As shown in Fig. 11 below, SAMO/RAY is the pool with the most occurrences of multiple swaps in the same slot.

Fig. 11: pools presenting more than 3 swaps in a single slot

When additionally searching for blocks and pools with swaps in opposite directions as a proxy for profit-taking, 9 occurrences are left with a potential sandwich attack pattern, as shown in the table below (Fig. 12). After further investigation of the transactions and the context in which the instructions were executed, it is fair to assume the operations are related to arbitrage between different trading venues or pools.

Fig. 12: slots and pools with more than 3 swaps and evidence of profit-taking

Conclusion

In this report, we assessed the activity of the Raydium DEX. The conclusions are based on a limited amount of data, assuming our sample is comprehensive enough to reflect the general practices involving the dApp.

There is relevant activity from automated trading and price-sensitive strategies such as arbitrage, corresponding to 25% of swap transactions. On average, 40% of transactions fail, and 72% of all reverted transactions fail because of small slippage tolerance. Approximately 28% of transactions can be classified as manual trading, since they use the default slippage value.

Of the 5,101 different accounts interacting with the Raydium program, 10 concentrate 23% of the total transactions. One of the most active accounts on Raydium, Cwy…3tf, also appears in the top 5 accounts in Orca DEX transactions. This same account is responsible for 77% of swaps in the GENE/RAY pool.

There were 9 occurrences of a potential sandwich attack pattern, all discarded after further investigation.

It is important to mention that this behaviour does not depend only on theoretical possibility, but is strongly influenced by market conditions. The results in $13m MEV during Wormhole Incident and $43m Total MEV from Luna/UST Collapse on Solana demonstrate the increase in profit extracted from MEV opportunities during stressful scenarios. Although those studies focus on different strategies and do not mention sandwich attacks, the probability of this strategy happening can also increase, given the smaller liquidity in pools (TVL) and the occurrence of trades with bigger size and slippage tolerance.

This is my first published article. I hope you enjoyed it. If you have questions, leave your comment below and I will be happy to help.

Core Research
MEV
Networks
Analyzing MEV Instances on Solana — Part 1
Solana is a young blockchain, and having a complete picture of what is happening on-chain is a difficult task — especially due to the high number of transactions daily processed.
May 5, 2022
5 min read

Introduction

Solana is a young blockchain, and having a complete picture of what is happening on-chain is a difficult task — especially due to the high number of transactions daily processed. The current number of TPS is around 2,000, meaning that we need to deal with ~ 10⁸ transactions per day, see Fig. 1.1.

Fig. 1.1: This figure shows the daily number of transactions vs time, source from solana.fm.
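The ~10⁸ figure is simply the quoted TPS number scaled to a day:

```python
# Order-of-magnitude check for the daily transaction count quoted in the text.
tps = 2_000                    # observed transactions per second
per_day = tps * 24 * 60 * 60   # seconds in a day
print(f"{per_day:.1e}")        # ≈ 1.7e+08, i.e. ~10^8 transactions per day
```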

When processing transactions, we have to deal with the impossibility of knowing a transaction’s status a priori, before querying information from an RPC node. This means we are forced to process both successful and failed transactions. The failed transactions, most of which come from spamming bots trying to make a profit (e.g. NFT mints, arbitrage, etc.), constitute ~20% of the successful ones. The situation slightly improves if we consider only program activity. By only considering what happens on Decentralized Exchanges (DEXs), we are talking about 4x10⁷ transactions per day, see Fig. 1.2. This makes it clear that a big effort is required to assess which type of Maximum Extractable Value (MEV) attack is taking place and who is taking advantage of it, especially because tools like Flashbots do not exist on Solana.

Fig. 1.2: This figure shows the program activity vs time, source from solana.fm.

In what follows, we estimate what happened on-chain considering only ~5 h of transactions on Orca DEX, from 11:31:41 to 16:34:19 on 2022–03–14. This simplification lets us get familiar with the data, extracting as much information as we can with a view to extending the analysis to a wider range of transactions. It is worth mentioning that Orca DEX is not the program with the highest number of processed instructions, which indicates that a more careful analysis should also look into other DEXs — this is left for future study.

The aim of this preliminary analysis is to gain familiarity with the information contained in usual swap transactions. One of our first attempts is to determine whether sandwich attacks are happening, and if so, with what frequency. In Section 2, we look at the anatomy of a swap transaction, focussing on the types of sandwich swap in Section 2.1. Section 2.2 is devoted to the description of the “actors” that can mount a sandwich attack. In Section 3, we describe the dataset employed, leaving the description of the results to Section 4. Conclusions are drawn in Section 5.

Section 2: Anatomy of swap transactions

On Solana, transactions are made of one or more instructions. Each instruction specifies the program that executes it, the accounts involved in the transaction, and a data byte array that is passed to the program. It is the program’s task to interpret the data array and operate on the accounts specified by the instructions. Once a program starts to operate, it can return only two possible outcomes: success or failure. It is worth noticing that an error return causes the entire transaction to fail immediately. For more details about the general anatomy of a transaction, see the Solana documentation.

To decode each instruction we need to know how the specific program is written. We know that Orca is a Token Swap Program, thus we have all the ingredients needed to process the data. Precisely, taking a look at the token swap instruction, we can immediately see that a generic swap takes as input the amount of tokens the user wants to swap, and the minimum amount of tokens in output needed to avoid excessive slippage, see Fig. 2.1.

Fig. 2.1: Swap instructions from here.

The minimum amount of tokens in output is related to the actual number of tokens in output by the slippage S, i.e.

min_out = out · (1 − S),   (2.1)

from which

S = 1 − min_out / out.   (2.2)
Thus, we can extract the token in input and the minimum token in output from the data byte array passed to the program, and the actual token in output by looking at the instructions contained in the transaction.

Fig. 2.2: Number of transactions per slippage.

By computing the corresponding slippage defined in Eq. (2.2), we obtain the histogram in Fig. 2.2. From this picture we can extract several pieces of information. The first is, without doubt, the clustering of transactions around the default slippage values on Orca, i.e. 0.1%, 0.5% and 1%. This makes complete sense, since the common user is prone to use default values without spending time on customization. The second is the preference of users for the lowest default value. The last concerns the shape of the tails around the default values. A more detailed analysis is needed here, since it is not an easy task to access what is actually contained inside them. The shape surely depends on the bid/ask scatter, which is a pure consequence of market dynamics. The tails may also contain users that select a slippage different from the default values. However, one thing is assured: this histogram contains swaps for which the slippage can be extracted. As we will see, from this we can extrapolate an estimate of the annualized revenue due to sandwich attacks.

Section 2.1: Type of sandwich swaps

The goal of this report is to search for hints of sandwich swaps happening on Orca DEX. All findings will be used for future research, thus we think it is useful to define what we refer to as sandwich swaps and how can someone take advantage of them.

Let’s start with its basic definition. Assume a user (let’s say Alice) wants to buy a token X on a DEX that uses an automated market maker (AMM) model. Now assume that an adversary (let’s say Bob) sees Alice’s transaction and can create two transactions of his own, inserting them before and after Alice’s transaction (sandwiching it). In this configuration, Bob first buys the same token X, which pushes up the price for Alice’s transaction; the third transaction is Bob’s sale of token X (now at a higher price) at a profit, see Fig. 2.3. This mechanism works as long as the price at which Alice buys X remains below the value X・(1+S), where S represents the slippage set by Alice when she sends the swap transaction to the DEX.

Fig. 2.3: Graphical representation of sandwich transaction.

Since Bob needs to increase the value of token X inside the pool where Alice is performing the swap, it is evident that the swaps inserted by Bob must target the same pool employed by Alice.

From the example above, it may happen that Bob does not have the capital needed to significantly change the price of X inside the pool. Suppose that the pool under scrutiny regards the pair X/Y and that the AMM implements a constant product curve. In math formulas, we have:

X · Y = k,   (2.3)

where k is the curve invariant. If we set the number of tokens Y in the pool equal to 1,000,000 and the number of tokens X equal to 5,000,000, and assume that Alice wants to swap 1,000 token Y, the amount of token X in output is:

ΔX = X − k / (Y + ΔY) = 5,000,000 − 5×10¹² / 1,001,000 ≈ 4,995.00.   (2.4)
It is worth noting that here we are not considering the fee that is usually paid by the user. If Alice sets a slippage of 5%, the transaction will be executed as long as the output remains above 4,745.25. This means that if Bob wants to take this full 5%, he needs an initial capital of roughly 26,000 token Y.
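The worked numbers above can be reproduced with a short constant-product sketch (no fees; the 5% tolerance is applied as a minimum output of 95% of the expected amount, which matches the 4,745.25 figure):

```python
import math

Y0, X0 = 1_000_000, 5_000_000   # pool reserves of token Y and token X
K = X0 * Y0                      # constant-product invariant (Eq. 2.3)

def out_x(dy: float, x: float, y: float) -> float:
    """Token X received for swapping dy of token Y into the pool (no fees)."""
    return x - (x * y) / (y + dy)

alice_out = out_x(1_000, X0, Y0)      # ≈ 4,995.00 token X (Eq. 2.4)
min_out = alice_out * (1 - 0.05)      # 5% tolerance ≈ 4,745.25 token X

# Bob front-runs with b token Y; Alice's output then becomes
# K * 1_000 / ((Y0 + b) * (Y0 + b + 1_000)), so setting it equal to min_out
# gives a quadratic in (Y0 + b):
y_after_bob = (-1_000 + math.sqrt(1_000**2 + 4 * 1_000 * K / min_out)) / 2
bob_capital = y_after_bob - Y0        # ≈ 26,000 token Y

print(round(alice_out, 2), round(min_out, 2), round(bob_capital))
```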

Sometimes this capital may be inaccessible, allowing Bob to take only a portion of the 5% slippage. For example, let’s consider the Orca pool SOL/USDC, with a total value locked (TVL) of $108,982,050.84 at the time of writing. This pool implements a constant product curve, which allows us to use Eqs. (2.3) and (2.4) to simulate a sandwich attack. Fig. 2.4 shows the result of this calculation.

Fig. 2.4: Simulation of a sandwich attack on the SOL/USDC pool. This figure shows the initial capital needed (x-axis) to extract a given percentage of the swap (y-axis).

It is clear that the initial capital to invest may not be accessible to everyone. Further, it is important to clarify that the result is swap-amount independent. Indeed, for each amount swapped by Alice, the swap made by Bob is the one that “moves” the prices of the initial tokens inside the pool. The scenario is instead TVL dependent. If we repeat the same simulation for the Orca pool ETH/USDC, with a TVL of $2,765,189.76, the initial capital needed to extract a higher percentage of the slippage of Alice drastically decreases, see Fig. 2.5.

Fig. 2.5: Simulation of a sandwich attack on the ETH/USDC pool. This figure shows the initial capital needed (x-axis) to extract a given percentage of the swap (y-axis).

From the example above, let’s consider the case in which Bob has an initial capital of only 2,000 token Y. If he buys token X before Alice’s transaction, Alice will obtain an output of 4,975.09 token X, which is only 0.4% lower than the original amount defined in Eq. (2.4).
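The 4,975.09 and 0.4% figures can be checked with the same constant-product arithmetic:

```python
# Same pool as the worked example: 1,000,000 token Y vs 5,000,000 token X, no fees.
Y0, X0 = 1_000_000, 5_000_000

def out_x(dy: float, x: float, y: float) -> float:
    """Token X received for swapping dy of token Y into the pool."""
    return x - (x * y) / (y + dy)

baseline = out_x(1_000, X0, Y0)            # Alice alone: ≈ 4,995.00 token X

# Bob front-runs with only 2,000 token Y, then Alice swaps her 1,000 Y.
x1 = X0 - out_x(2_000, X0, Y0)             # reserves after Bob's buy of X
y1 = Y0 + 2_000
alice_out = out_x(1_000, x1, y1)           # ≈ 4,975.09 token X

print(round(alice_out, 2), round(1 - alice_out / baseline, 4))  # reduction ≈ 0.4%
```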

At this point, Bob has another possibility. He can try to order other transactions buying the same token X after his own transaction, but immediately before Alice’s swap. In this way, he uses the capital of other users to take advantage of Alice’s slippage, even if his own capital is not enough to do so, see Fig. 2.6. This is of course a more elaborate attack, but likely to happen if Bob has access to the order book.

Fig. 2.6: Graphical representation of sandwich transaction when Bob uses other X-buyers before Alice’s transaction to increase the value of X.

Section 2.2 Who are the actors of a sandwich attack?

It is not an easy task to spot the actors behind a sandwich attack on Solana. In principle, the only profitable attackers are the leaders. This is because there is no mempool, and the only ones that know the exact details of the transactions are the validators in charge of writing a block. In this case, it may be easier to spot hints of a sandwich attack: if a leader orders swap transactions to perform a sandwich, it should include all of them in the same block to prevent the sandwich from failing.

The immediately following suspect is the RPC service that the dApp is using. The RPC service is the first to receive the transaction over HTTP, since it is its role to look up the current leader’s info using the leader schedule and forward the transaction to the leader’s Transaction Processing Unit (TPU). In this case, it would be much more difficult to spot hints of sandwiching, since in principle the swap transactions involved can be far from each other. The only hook we can use to catch the culprit is to spot surrounding transactions made by the same user, which would be related to the RPC. A final possibility is a consequence of the low transaction fees on Solana, which raise the likelihood that a sandwich attack can happen by chance, spamming transactions into a specific pool. This last one is clearly the riskiest, since there is no certainty that the sequence of transactions is included in the exact order the attacker originally planned.

Section 3: Dataset description

Before entering the details of the analysis, it is worth mentioning that, according to Solana Beach, there is a total of 1,696 active validators. Our sample contains 922 of them, i.e. 54.37% of the total validator population. The table below shows the validators that appear as leaders in the time window we are considering. Given that the likelihood of being selected as leader is proportional to stake, we consider it fair to assume that our sample is a good representation of what’s happening on Orca. Indeed, if a validator is running modified software to perform sandwich swaps, its rate of success will be related to the amount of staked tokens, not only to actual MEV opportunities. Further, modifying the validator is not an easy task, so smaller validators will not have the resources to do it. Since we have all 21 validators holding a supermajority plus a good portion of the others (i.e. we are considering half of the current number of active validators), if such a validator exists, its behaviour should be easily spotted in our sample. However, a complete overview of the network requires the scrutiny of all validators, without assumptions of that kind. Such an achievement is beyond the scope of this report, which aims primarily to explore which types of sandwich can be done and how to spot them.

Having clarified this aspect, we first classify the types of swaps performed on the Orca DEX. The table below shows the accounts performing more than two transactions. It is immediately visible that most of the transactions are done by only 2 of the 78 accounts involved.

As explained in Section 1, we are considering ~5 h of transactions on Orca DEX, from 11:31:41 to 16:34:19 on 2022–03–14. This sample contains a total of 12,106 swaps, with the pool distribution shown in Fig. 3.1.

Fig. 3.1: Pool distribution of the swaps employed. Here [aq] stands for Aquafarm, i.e. Orca’s yield farming program. The pools denoted Others are those with less than 100 swaps.

Deep-diving into the swaps, we can see that most of the transactions in the 1SOL/SOL [aq] and 1SOL/USDC [aq] pools are done by only two accounts, see Fig. 3.2. We can also see the presence of some aggregated swaps in the SOL/USDC [aq] and ORCA/USDC [aq] pools.

Fig. 3.2: Same as Fig. 3.1, but considering only the 5 pools with the highest number of transactions. The color legend refers to the number of transactions performed by a defined user.

Section 4: Results

We started by searching for leaders performing sandwich swaps. As described in Section 2.1, a sandwich can happen in two ways. For both, if the surrounding is done by a leader, we should see the transactions under scrutiny included in the same block. This is because, if a leader wants to make a profit, the best strategy is to avoid market fluctuations. Further, if the attacker orders the transactions without completing the surrounding in the same block, the possibility that another leader reorders transactions, cancelling the effect of what was done by the attacker, is not negligible.

Looking at the slots containing more than 3 swaps in the same pool, we ended up with 6 such slots out of 7,479. Deep-diving into these transactions, we found no trace of a sandwich attack done within the same block (and so, by a specific leader). Indeed, each of the transactions is done by a different user, showing no evidence of surrounding swaps done to perform a sandwich attack. The only suspicious series of transactions is included in block #124899704. We checked that the involved accounts are interacting with the program MEV1HDn99aybER3U3oa9MySSXqoEZNDEQ4miAimTjaW, which seems to be an aggregator for arbitrage opportunities.

As mentioned in Section 2.2, validators are not the only possible actors. To complete the analysis, we also searched for general surrounding transactions, without constraining them to be included in the same block. We find that only 1% of the total swaps are surrounded, and again without strong evidence of actual sandwich attacks (see Fig. 4.1 for the percentage distribution). Indeed, looking at those transactions, the amount of tokens exchanged is too low to be a sandwich attack (see Sec. 2).

Fig. 4.1: Percentage of surrounding transactions per account.

Before ending this section, it is worth mentioning that if we extrapolate the annual revenue a leader would obtain by taking 50% of the available slippage for swaps with a slippage greater than 1%, we are talking about ~240,000 USD (assuming the attacker is within the list of 21 validators holding a supermajority), see Fig. 4.2. Of course, this is not a precise estimate, since it is an extrapolation from only 5 h of transactions; the actual revenue can differ. Further, this is not an easily accessible amount, due to the capital requirements showcased in Sec. 2. However, the potential revenue clearly paves the way for a new type of protection that validators should offer to users, especially considering that Orca is not the DEX with the highest number of processed swaps. Since at the moment there is no evidence that swaps are being sandwiched, we will take no action in this direction. Instead, we will continue monitoring different DEXs by taking snapshots over different timeframes, informing our users if a sandwich attack is spotted on Solana.

Fig. 4.2: Annualized revenue from sandwich attacks (per leader) as a function of the slippage. Precisely, the blue dots represent the annualized revenue that a leader in the supermajority list of 21 validators obtains if it takes 50% of the swaps with an available slippage greater than the value on the x-axis.

Section 5: Conclusion

In this report, we defined two types of sandwich attacks that may happen on a given DEX. We further described the possible actors that can perform such attacks on Solana and how to spot them. We analyzed data from ~5 h of transactions on Orca DEX, from 11:31:41 to 16:34:19 on 2022–03–14 (12,106 swaps). Despite the limited number of transactions employed, we argued why we believe this sample is a fair representation of the entire population.

Our findings show no evidence that sandwich attacks are happening on Solana, considering two possibilities. The first is that a validator is running a modified version “trained” to perform sandwich attacks on Orca. The second is that an RPC is trying to submit surrounding transactions. We discovered that only 1% of transactions are actually surrounded by the same user, and none of them is included in the same block — excluding the possibility that a leader is taking advantage of the slippage. Deep-diving into these, we found that the amounts exchanged in these transactions are too low for the capital invested to exploit the slippage and submit a profitable sandwich attack.

We also show how the capital needed to make sandwich attacks profitable may not be accessible to everyone, narrowing the circle of possible actors.

Core Research
Networks
The Stakes of Staking (Altair Update)
Big thanks to my colleagues at Chorus One for their contributions to this post, especially Umberto Natale for providing a lot of the data used, full report here.
April 7, 2022
5 min read

The Stakes of Staking (Altair Update)

Big thanks to my colleagues at Chorus One for their contributions to this post, especially Umberto Natale for providing a lot of the data used, full report here.

TL;DR

  • The Altair upgrade introduced a number of changes to the reward/penalty system for Ethereum: sync committees, incentive reforms to the inactivity leak and block proposals, changes to the rewarded weight of validator duties, and others.
  • An increase in the proposer reward and the new sync committees will contribute to a greater variability of rewards than previously, but also a general increase in opportunities for profit.
  • The rewards and penalties outlined in this analysis make staking a good business endeavour for both validators and delegators, and set the terms for an unstoppable and stable network.

Introduction

Many different industries are using Ethereum to build new decentralized applications.

2021 was the year when this vision stopped being reserved for a small subset of the population with pre-existing capital (investors) or technical expertise (developers), as the popularity of Ethereum reached new heights.

Artists are disrupting traditional notions of value, with OpenSea (the largest NFT marketplace) growing its transaction volume by “over 600x” in a year.

People are organizing self-sustainable communities, as DAO members take control over their own financial freedom and digital identity.

Builders are creating never-before-seen decentralized financial assets, where Ethereum-based Uniswap continues to dominate. But real fun also came to crypto gaming: many are playing Dark Forest, a game that experiments with cutting edge scaling technology.

Most recently, the whole crypto community has come together to aid Ukraine, at the same rate as well-established international organizations.

Bringing Ethereum into the sun and serving all of humanity, inevitably requires a scalable, secure, and resilient network.

This post aims to set the stage for Ethereum as it nears its greatest milestone, and to take a peek at the future for what this could mean in impact for both staking providers and delegators, after the Altair upgrade. To do this, we have delved into risks, rewards and the complex network that sits in between.

Designing Proof of Stake (PoS)

The big goals for the future of Ethereum are scattered across its official roadmap.

Understanding this design is key to comprehending the associated risks and rewards of PoS Ethereum. The process of upgrading started in December 2020, when the first piece of the puzzle fell into place: the Beacon Chain went live. This PoS system sets the basic consensus mechanism, by assigning the right to create a block through a deterministic lottery process. Staking nodes with a higher balance have more probability to be selected. The rewards for staking include block rewards and transaction fees, and we explore these further in the following section.

To stake in Ethereum and run a validator, 32 ETH needs to be sent to the Ethereum Deposit Contract, along with two key parameters: 1) the validator public key and 2) the withdrawal credentials for the deposit. Critically, the public key and withdrawal credentials do not need to be controlled by the same entity. This allows for two ways to participate in the protocol: as a validator or as a delegator (individuals who pass the responsibility of validation while still earning a portion of the rewards). Staking providers such as Chorus One offer ETH holders the opportunity to stake their tokens and participate in consensus through its platform.

Because chosen stakers are given exclusive rights to create a block, the protocol must include measures to counteract malicious attack vectors. The implementation of this consensus mechanism relies on three core elements: a fork-choice rule, a concept of finality, and slashing conditions. It is important to note that in PoS networks, slashing is not a necessary incentive for correct behavior by validators but rather an artifact of the particular block rewards and mechanism implemented. Because rewards are based on blocks processed or accepted, there’s an incentive for a validator to validate all forks in the chain, even conflicting ones (the “nothing at stake” problem). Therefore, a slashing rule has to be implemented as a matter of design.

The Altair hard fork of October 2021 introduced additional elements to consensus, namely sync committees. Validators that are part of this committee have the duty of continually signing the block header, allowing a new class of light clients to sync up at very low computational and data cost. The concepts of the head of the chain and the target and source of attestation are critical to finalizing blocks and earning rewards. Checkpoints are set on-chain to achieve these goals: when a checkpoint is finalized, all previous slots are finalized. There is no limit to the number of blocks that can go through this system. A checkpoint can only be finalized after the process of consensus chooses another validator, and the infinite machine starts all over again.

A look into cryptoeconomics

It is likely that you’ve come across the prime assumption for PoS: “validators will be lazy, take bribes, and try to attack the system unless they are otherwise incentivized not to.” Hope for the best, but expect the worst.

You may have also seen floating around different figures for the “estimated APR” for running a validator, and wondered — where does this number even come from? All estimations for returns rest on a set of assumptions, and many published calculations were presented using outdated specs for the Beacon Chain.

So, let’s take a current look. Incentives in Altair come in the form of rewards, penalties, and slashings. Of these three, slashing is the most relevant to validator health. While crypto rewards have been around for years, their complexity and adoption have risen significantly in recent years. Offerings differ platform by platform, and all carry different kinds of risks.

One of the main conceptual reworks of Altair was in redesigning how validators are rewarded (and penalized). The idea was to make these incentives more systematic and simplify state management. But it also ups the ante on validator responsibilities.

At present, validators are rewarded or penalized according to whether they fulfill certain duties:

  • Submit an attestation that correctly identifies the head of the chain
  • Submit an attestation that correctly identifies the target
  • Submit an attestation that correctly identifies the source
  • Submit the sync committee signature (for validators in the sync committee)
  • Propose a block (if selected as proposer)

A validator can submit one attestation and propose one block at a time, and the reward varies depending on their properties. Selection for block proposals and the sync committee is a matter of luck and quite infrequent; attestations, however, should be made once per epoch.

How we view rewards while considering risk in staking is a subject of research at Chorus One. This piece aims to help other validators and interested parties understand the main principles they need to follow in order to minimize losses and, in turn, maximize profits in the process of validation. In our study, we found that the current expected annualized reward for an ideal validator (perfect performance) is 5.44%. This amount decreases to 5.4% when we take into account a less idealized case.

After giving validators a feel for how much they stand to earn, the following part will present a more practical example and explain how these figures may actually vary.

Risks overview and scenarios

Slashing risk is a type of platform-dependent risk, as platforms that offer a similar service carry common risks. This section covers the different types of penalties, and methods to calculate them in certain scenarios.

All formulas presented have been transformed in order to give a more general idea. The risk modeling was done using the actual definition from the Beacon Chain specs (Phase 0 and Altair) and the state of the chain at the time of writing. More on our methods can be found in our full study, linked previously.

Slashing includes all penalties that result in the partial or total loss of the staked assets of a validator, which range from 0% to 100% of the assets. Failing to perform the current validator duties properly (see last section) leads to being penalized and, in the case of slashable actions, being forcefully ejected from the Beacon Chain for suspected malicious activity. This is done to protect both the validator from further losses, and to help the chain finalize.

The main causes of slashing that validators must be aware of are: proposing two different (conflicting) blocks, submitting two different (conflicting) attestations, and submitting an attestation that completely surrounds or is surrounded by another attestation. If these events are not the result of a malicious action, then it follows that they must come from a bug or error. To account for this, the amount of stake destroyed is proportional to the number of validators slashed around the same time. If this number is small, then it is unlikely to be the result of a coordinated attack, because that would require a high number of validators. These “honest mistakes” are punished lightly, at a minimum of 1 ETH. If, on the other hand, a high number of validators are slashed at the same time, then it is assumed to be an attack, resulting in a higher amount of stake being destroyed, up to the full balance of the node.

There’s of course a certain pressure on validators to avoid going down at the same time as other validators. This expectation to decentralize touches on aspects of client diversity, but also on sources of truth and hosting for clients. This is a very critical point for everyone participating in the Ethereum ecosystem, and one that Chorus One has considered in our design. But back to the topic: these penalties hold whether or not blocks are being finalized (meaning 2/3 of validators, weighted by stake, are online and their votes are being counted). This is the state of normal operations for Ethereum. Anything under that, and the chain can no longer reach agreement, at which point the inactivity leak kicks in to restore balance.

With a clear understanding of the rewards system, estimating the source of possible penalties is much simpler: we calculate the attestation reward/penalty delta.

Indeed, if a duty is not fulfilled, the corresponding reward amount is instead removed from the attester’s balance (the minimum penalty). At that point, the validator’s unlock date for the stake is delayed by about 36 days. This is to allow another, potentially much greater, slashing penalty to be applied later, once the chain knows how many validators have been slashed around the same time (further penalty). If an inactivity leak is active, the potential reward drops to 0, so by fulfilling the duties you can only avoid penalties.

Since getting the source vote wrong implies getting the target vote wrong, and getting the target vote wrong implies getting the head vote wrong, the possible slashing scenarios reduce to these:

  • Incorrect source
  • Correct source, incorrect target
  • Correct source and target, incorrect head.
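Using the Altair attestation weights (14 for source, 26 for target, 14 for head, out of a denominator of 64), the three scenarios above can be sketched in a few lines of Python. The function name and structure are ours for illustration, not a client implementation:

```python
WEIGHTS = {"source": 14, "target": 26, "head": 14}  # Altair weights out of 64
DENOMINATOR = 64

def earned_weight_fraction(source_ok, target_ok, head_ok):
    """Fraction of attestation weight earned, enforcing the chain of
    implications: no correct target without source, no correct head
    without target."""
    target_ok = target_ok and source_ok
    head_ok = head_ok and target_ok
    votes = {"source": source_ok, "target": target_ok, "head": head_ok}
    return sum(w for duty, w in WEIGHTS.items() if votes[duty]) / DENOMINATOR

# The three possible scenarios:
print(earned_weight_fraction(False, True, True))   # 0.0      incorrect source
print(earned_weight_fraction(True, False, True))   # 0.21875  correct source only
print(earned_weight_fraction(True, True, False))   # 0.625    source and target
```

Note how the dependency chain collapses the eight combinations of three booleans down to the three scenarios listed above (plus the all-correct case).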

To quantify the outcomes of performing validator duties, we would like to compare what could be considered a generic validator across a selection of edge scenarios. This example takes the following values into consideration:

Perfect Validator

To start off, let’s look at what this validator would earn if they, and all other validators, had an ideal participation record under the defined specs.

Attestations can be rewarded with a portion of the “base reward” for each of the correlated duties, weighted by the specific service provided. In the latest specs, the target vote receives the highest rewards, as it is the most important to reach consensus. The base reward is a constant across the network at all times.

BASE REWARD (in Gwei) = Effective balance * [Base reward factor / sqrt(staked ETH balance in Gwei)]
BASE REWARD = 32,000,000,000 * [64 / sqrt(10,000,000,000,000,000)] = 20,480 Gwei = 0.00002048 ETH
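A quick way to sanity-check this arithmetic is to reproduce it in Python. This is a sketch using the article's figures; `math.isqrt` stands in for the spec's integer square root:

```python
import math

GWEI_PER_ETH = 1_000_000_000
BASE_REWARD_FACTOR = 64  # constant from the formula above

def base_reward_gwei(effective_balance_gwei, total_staked_gwei):
    # BASE REWARD = effective balance * (base reward factor / sqrt(total staked))
    return effective_balance_gwei * BASE_REWARD_FACTOR // math.isqrt(total_staked_gwei)

# The article's figures: 32 ETH effective balance, 10^16 Gwei (~10M ETH) staked.
br = base_reward_gwei(32 * GWEI_PER_ETH, 10_000_000_000_000_000)
print(br)                 # 20480 Gwei
print(br / GWEI_PER_ETH)  # 2.048e-05 ETH
```

Because the base reward shrinks with the square root of total stake, doubling the amount of staked ETH cuts each validator's base reward by a factor of about 1.41, not 2.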

Following the upgrade, the block reward is now ⅛ of total rewards, as intended by the Ethereum researchers, rather than ⅛ of ¼ of rewards, as was the case pre-Altair. You may notice the inclusion-delay reward is missing: all attestations are now given specific inclusion deadlines to claim their rewards in a gradual pattern, so prompt voting is accounted for directly.

Since all validators are supposed to attest exactly once per epoch (in a perfectly working network), the number of validators attesting in each slot is equal to the total active validators divided by the number of slots per epoch.

ATTESTING VALIDATORS = ACTIVE VALIDATORS / SLOTS PER EPOCH
ATTESTING VALIDATORS = 300,000 / 32 = 9,375 validators
TOTAL REWARD = BASE REWARD * ATTESTING VALIDATORS
TOTAL REWARD = 20,480 * 9,375 = 192,000,000 Gwei
BLOCK REWARD = TOTAL REWARD / 8 = 24,000,000 Gwei = 0.024 ETH
TARGET REWARD = 26 * TOTAL REWARD / 64 = 78,000,000 Gwei = 0.078 ETH
SOURCE REWARD = 14 * TOTAL REWARD / 64 = 42,000,000 Gwei = 0.042 ETH
HEAD REWARD = 14 * TOTAL REWARD / 64 = 42,000,000 Gwei = 0.042 ETH
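The same breakdown can be reproduced in Python. The weight constants mirror the Altair incentive weights implied by the article's fractions; the variable names are ours:

```python
# Altair incentive weights, out of WEIGHT_DENOMINATOR = 64.
WEIGHT_DENOMINATOR = 64
PROPOSER_WEIGHT, TARGET_WEIGHT, SOURCE_WEIGHT, HEAD_WEIGHT = 8, 26, 14, 14

BASE_REWARD = 20_480         # Gwei, from the base reward formula above
ACTIVE_VALIDATORS = 300_000
SLOTS_PER_EPOCH = 32

attesting = ACTIVE_VALIDATORS // SLOTS_PER_EPOCH   # 9,375 validators per slot
total_reward = BASE_REWARD * attesting             # 192,000,000 Gwei

block_reward = total_reward * PROPOSER_WEIGHT // WEIGHT_DENOMINATOR    # 1/8 of total
target_reward = total_reward * TARGET_WEIGHT // WEIGHT_DENOMINATOR
source_reward = total_reward * SOURCE_WEIGHT // WEIGHT_DENOMINATOR
head_reward = total_reward * HEAD_WEIGHT // WEIGHT_DENOMINATOR
print(block_reward, target_reward, source_reward, head_reward)
# 24000000 78000000 42000000 42000000
```

The target vote carries the largest weight (26/64), reflecting its importance for finality, while source and head each carry 14/64.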

Sync committees rotate rather slowly (every 256 epochs, roughly once a day), and selected validators can earn the sync committee reward for each slot in which they participate. Many validators will never actually be selected for this reward within a year.

SYNC COMMITTEE REWARD = 2 * TOTAL REWARD / 64 = 6,000,000 Gwei = 0.006 ETH

Finally, we see the maximum possible reward in an epoch (this number also coincides with the minimum penalty for being offline or failing to fulfill the previous duties):

MAXIMUM REWARD = BLOCK + TARGET + SOURCE + HEAD + SYNC COMMITTEE REWARDS
MAXIMUM REWARD = 0.024 + 0.078 + 0.042 + 0.042 + 0.006 = 0.192 ETH

It is important to note that there’s still a potential variation of a few percent in this reward over the course of a year due to sheer luck (e.g. the probability of being chosen to propose, being in the sync committee, or being offline exactly at the moment you are selected). This applies even in this ideal case, where the validator performs all their duties perfectly, and the effect increases as the validator set grows. Although not worrying in terms of investment risk (marginal differences should even out over the course of a year), it should be kept in mind as we delve into the actual performance of validators in the network.

If we were to expand this timeline to a year, the expected reward for this single validator sits at around 1.7428 ETH per year, which corresponds to the 5.44% APY we mentioned in the previous section. A validator can optimally earn one base reward per epoch over a long time horizon.
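The conversion from annual reward to APY is simple enough to verify directly, using the article's figures:

```python
STAKE_ETH = 32.0                   # the fixed validator deposit
IDEAL_ANNUAL_REWARD_ETH = 1.7428   # the article's ideal-validator estimate

# APY relative to the 32 ETH principal.
apy_percent = IDEAL_ANNUAL_REWARD_ETH / STAKE_ETH * 100
print(apy_percent)  # ~5.45, i.e. the ~5.44% quoted in the article
```

Since the deposit is fixed at 32 ETH, any change in the annual ETH reward maps linearly onto the APY.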

Realistic Validator

However, to get a bit closer to model real-world rewards, we must consider the impact of a less-than-perfect validator performance.

As we learned previously, validator rewards grow the better the network as a whole behaves. This helps disincentivize malicious behavior, but it also means that rewards can be reduced by external factors. A model that considers all the reasons why a validator might fail to produce attestations, produce blocks, or propagate them is an option, but here we wanted to observe: what would happen if we assume that 99.25% of active validators (a fair figure in reality) are actually attesting blocks? We also made a more conservative choice and assumed that our validator was online 99.9% of the time.

As we can see, in this new realistic scenario, the total distribution shifts slightly. The expected annualized reward suffers a reduction of about 0.8% and the resulting expected APY reduces to 5.4%, as we had mentioned. The probability of certain events happening plays a huge part in this scenario, so this is just a starting point to analyze.

Slashed Validator

Next we wanted to estimate what would happen if our validator was caught committing a slashable offense, one of those previously outlined as resulting in substantial loss of stake. To do this, we will assume that 1,000 validators simultaneously sign two different blocks. In this case, each validator involved suffers three penalty components:

  • A minimum penalty of 0.5 ETH
  • A penalty that depends on the number of double-signing validators, of 0.2197 ETH
  • The penalty associated with missed attestations (wrong source and target) over the 36 days of delay, corresponding to 0.1086 ETH

This corresponds to a total loss of 0.8282 ETH. It is worth noting that this slashed amount increases with the number of validators slashed at the same time, as discussed in the slashing overview.
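As a rough cross-check, the three components can be sketched in Python. The proportional term below follows the general shape of the Altair spec; the article's exact 0.2197 and 0.1086 ETH figures depend on live chain state, so this approximation lands close to, but not exactly on, the quoted total:

```python
EFFECTIVE_BALANCE = 32.0                # ETH
MIN_SLASHING_PENALTY_QUOTIENT = 64      # Altair value: minimum penalty = 0.5 ETH
PROPORTIONAL_SLASHING_MULTIPLIER = 2    # Altair value

def slashing_loss_eth(slashed, total_validators, missed_attestation_penalty):
    """Approximate per-validator loss for a mass double-sign event,
    summing the three components described above. Figures will not
    match the article exactly, which used live chain state."""
    minimum = EFFECTIVE_BALANCE / MIN_SLASHING_PENALTY_QUOTIENT
    total_balance = total_validators * EFFECTIVE_BALANCE
    slashed_balance = slashed * EFFECTIVE_BALANCE
    proportional = EFFECTIVE_BALANCE * min(
        PROPORTIONAL_SLASHING_MULTIPLIER * slashed_balance, total_balance
    ) / total_balance
    return minimum + proportional + missed_attestation_penalty

loss = slashing_loss_eth(1_000, 300_000, missed_attestation_penalty=0.1086)
print(round(loss, 4))  # ~0.82 ETH for a 1,000-validator double-sign
```

The `min(...)` cap in the proportional term is what drives the loss to the full balance in a genuinely coordinated attack: as the slashed fraction of total stake grows, the penalty approaches the entire effective balance.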

Conclusion

PoS Ethereum is a highly complex and elaborate system. It is thoughtfully designed, but can be difficult to fully grasp from a validator’s perspective, which can make staking seem like an uncertain and unpredictable endeavor. There is still a considerable amount of ETH yet to be staked; therefore, we must all prioritize network participation and security going forward.

To make sense of the opportunities presented by staking, we wanted to explore the risks native to Ethereum and how those risks stack up against the rewards offered after the Altair upgrade. Hopefully this article has helped to clarify why and how these rewards can vary, from small changes in state to bigger events, and also shown that staking is a profitable business to take part in over the long term. As we found in our analysis, the profit of a single validator is around 1.7428 ETH per year, which in percentage terms corresponds to a 5.44% APY.

Based on the analysis performed, we find that the most realistic impact is low enough for self-cover to be a viable option, but not low enough to be completely trivial. We have identified the most relevant scenarios to come up with this conclusion. Additionally, we have found that risk can be significantly reduced by non-financial actions, such as promoting validator diversity and operator distribution, as well as putting in place mechanisms to maintain high validation quality standards. You can read our full report here.

Taking Ethereum from the individual to the masses will require a set of tools that accelerate the process of setting up a validator whilst maintaining the same level of security and protection. At Chorus One we are working to make this a reality through our infrastructure services, and we are preparing to launch new services in the near future that take this to the next level. To learn more, please reach out to research@chorus.one.

Core Research
Networks
Stargaze — Pioneering Interchain NFTs for Web3
Stargaze is an interchain NFT marketplace that solves many problems that exist in NFT marketplaces today.
December 8, 2021
5 min read

Stargaze is an interchain NFT marketplace that solves many problems that exist in NFT marketplaces today. Since January 1, 2021, average daily NFT sales have gone from ~$300,000 USD to $73,000,000 USD as of November 26, 2021 (a 24,333% increase). Currently, most NFT sales occur on Ethereum, which has popular marketplaces such as OpenSea, Rarible and Sorare. Like many things in crypto, adoption of a particular primitive tends to start on Ethereum and then expand to other blockchains once users start experiencing bottlenecks. Ronin, Wax, Solana and Flow are the four blockchains that currently trail Ethereum in 24hr NFT sales (as of November 26, 2021). Blockchains that trail Ethereum in NFT sales address scalability issues that arise from Ethereum’s network congestion. However, many NFT marketplaces that exist on competing blockchains enforce restrictions on how NFT projects can utilise them. With the advent of Stargaze, the Cosmos ecosystem has a dedicated zone for NFTs that does not suffer from scalability issues whilst differentiating itself from existing NFT marketplaces by being more secure, decentralised, transparent and flexible.

Background

Stargaze is a fully decentralised NFT marketplace in the Cosmos ecosystem, which launched Mainnet Phase 0 on October 30th 2021. Recently, Stargaze announced that 25% of their token supply will be ‘fairdropped’ to ATOM and OSMO stakers and to Stargaze validator delegators on Cosmos, Osmosis & Regen. For those who did not qualify for the airdrop, Stargaze is offering early adopters the chance to purchase STARS in a Liquidity Bootstrapping Pool (LBP) held in Osmosis as part of Mainnet Phase 1. The construction of the STARS / OSMO LBP is first-of-its-kind, as Stargaze proposed to borrow OSMO to kickstart the initial STARS / OSMO pool weights. The borrowed OSMO will be returned at the end of the LBP when STARS / OSMO weights are 50/50 and STARS has achieved price discovery. After the LBP has concluded, Stargaze will activate inflation in Mainnet Phase 2 and delegators will have the opportunity to earn staking rewards for securing the network. Finally, Stargaze will go fully live with their decentralised NFT marketplace as part of Mainnet Phase 3 in Q1 2022, unleashing unmatched economic freedom for creators, stellar incentives for curators and superior security for NFT traders.

Problems That Exist in NFT Marketplaces Today

There are a number of issues that exist in NFT marketplaces today, such as centralised curation, poor security, difficult upload workflows, limited flexibility, high gas fees, scams, opaque marketplace contracts and royalty restrictions.

In September 2021, the Head of Product at OpenSea used internal information to buy NFTs before they were featured on the homepage and ‘flipped’ them once featured for a profit, which in traditional finance would be considered insider trading. This is an outcome of OpenSea being non-transparent and centralised, and could have been mitigated if NFT curation in OpenSea were decentralised. In the same month (September 2021), a critical security vulnerability was disclosed to OpenSea: attackers airdropped SVG files to OpenSea users which, if signed by a user upon opening (even if opened innocently on the OpenSea domain), would give an attacker full access to the user’s funds in the wallet the malicious NFT was being viewed from. On top of these evident issues, OpenSea also restricts NFT projects to a maximum of 10% for royalties from sales. Not to mention that at current ETH gas prices (124 gwei), it costs a minimum of $200 to buy or sell an NFT on OpenSea, which prices out a majority of retail. However, high gas prices on Ethereum for buyers and sellers can minimise scam collections, which are more commonplace on cheaper blockchains like Solana. Metaplex, a major NFT platform on Solana, also has its own issues when it comes to difficult NFT upload workflows. Finally, many existing NFT marketplaces are not open-source, which increases risks when interacting with the native smart contracts (as users have to rely on one auditing party).

So, what if I told you that a new NFT marketplace is emerging in the Cosmos ecosystem that offers high-quality security, decentralised curation, simple upload workflows, maximum flexibility, low transaction fees, open-source contracts, vetted projects and unlimited customisation of economic parameters?

Stargaze Shines Brighter Than Any Other NFT Marketplace

Security

Stargaze is opting to build out a zone using Cosmos SDK, which enables the network to have an unparalleled level of security and customisation vs existing NFT marketplaces. Cosmos SDK is built with capabilities in mind, which capitalises on least authority to minimise possible exploits at the execution layer. As Stargaze is its own sovereign chain, it also has 100 reputable validators securing it, all of which are specialised solely on verifying transactions that occur in the zone and can react quickly to upgrading the network to enhance the performance and/or security of it. This is completely different to NFT marketplaces on Ethereum and Solana, which are built as applications and have a reliance on validators to secure the underlying network as opposed to the application itself being in full control of its own security. Separately, the Stargaze NFT marketplace is built using CosmWasm, which is orders of magnitude more secure than the Ethereum Virtual Machine (EVM) because EVM attack vectors such as re-entrancy are not possible. All in all, Stargaze leveraging Cosmos SDK and CosmWasm ensures the network is secure and reliable.

Decentralised Curation

Stargaze introduces a new type of ecosystem actor into their NFT marketplace, namely the CurationDAO. The CurationDAO in Stargaze is responsible for curating what artwork can be traded in the marketplace. The DAO is membership-based and governance-driven, ensuring an open and transparent system is in place for the selection of artwork in the marketplace. Stargaze governance may incentivise the CurationDAO by directing an amount of STARS from emitted inflation to reward their work. Having a DAO that curates what is available on the Stargaze marketplace results in better due diligence of projects and reduces the surface area for scams. It could be expected that Stargaze users (both buyers and sellers) benefit from having a CurationDAO too, as only legitimate projects will be able to be traded, which should lead to more liquid markets.

Maximum Flexibility

The Stargaze marketplace has a built-in feature that gives NFT projects the flexibility to choose what type of launch they would like to have (e.g. first-come-first-served mint, auction over t periods, etc). The flexibility of launch options offered by Stargaze allows NFT projects to satisfy demands of their community by working closely with them to determine what type of launch is fairest. Stargaze being a sovereign chain also lets governance exercise a high-level of customisation on protocol parameters, which is beneficial for keeping the network competitive in the long-run. For example, governance could vote on specific network upgrades proposed to improve the performance of the network (which would not be possible in an NFT marketplace that existed as an application). In turn, Stargaze can be much more adaptive than existing NFT marketplaces because governance can vote on introducing changes at the network level to give it a competitive edge.

Simple Upload Workflows

The Stargaze marketplace provides a simple interface for NFT projects to upload files and add metadata to, which uploads to the Interplanetary File System (IPFS) in a matter of seconds. Files that are uploaded in Stargaze are immediately and permanently stored in a distributed and resilient system. The user experience is seamless, as the entire storage process is abstracted away from the end-user via nft.storage. All Stargaze users can rest assured that the NFTs they own are permanently available, unlike some NFT collections that rely on a third party to host the file the NFT points to.

Low Transaction Fees

Fees on Stargaze are negligible compared to what can be seen on Ethereum, so the network is accessible to all types of users (not just those with a high amount of initial capital). It can be expected that fees will be just high enough to prevent spam but low enough to encourage frequent use. Low fees in an NFT marketplace enable more growth and innovation as buyers have greater purchasing power and projects can release more NFTs without transaction fee concerns.

Customisation of Economic Parameters

Another unique layer of customisation available on Stargaze vs other NFT marketplaces is that of staking on the native network. One could imagine utility being introduced to STARS that would not be possible on existing NFT marketplaces like OpenSea. For example, in the future users might be able to deposit their STARS into a liquid staking protocol to receive the equivalent staked STARS (stSTARS) that could be used to bid on NFTs (i.e. users could earn yield whilst bidding on NFTs). It might also be a requirement to stake STARS in order to join the CurationDAO (the DAO responsible for selecting what collections are released on Stargaze). Or perhaps, users could stake a minimum amount of STARS in a given time period to be eligible to vote on what collections are reviewed by the CurationDAO. Another option could be to stake some amount of STARS in order to have a higher chance of getting into lottery-based NFT launches. There are limitless possibilities that could be thought of to add utility to STARS. On the flipside of staking STARS, the inflation emitted by Stargaze could also be used to reward creators of NFT projects. Once an NFT project has been vetted by the CurationDAO, it might be eligible to earn x% of staking rewards reserved for creators. In other words, NFT project creators might be entitled to a double source of income in Stargaze — royalties coming from trading of their NFTs on the marketplace + their proportion of a steady stream of STARS emitted every block directed towards creators.

Open-Source Contracts

Stargaze code is fully open-source, and the core team recently released an LBP simulator that other projects in the Cosmos ecosystem can use to experiment with tweaking parameters before launching an LBP on Osmosis. The Stargaze code is available in a repository on GitHub for anyone to see, which means anyone can audit the code to ensure there are no vulnerabilities, and engineers can easily build on top of existing code to enhance the platform in a collaborative way.

To conclude, Stargaze is a marketplace that exemplifies security, decentralisation, transparency and flexibility, differentiating it from existing NFT marketplaces. Due to the nascency of the NFT space, there are many existing inefficiencies in NFT marketplaces across a multitude of blockchains. Stargaze has an opportunity to capture a large segment of a growing NFT market by offering distinct products and services for stakeholders such as NFT projects, curators and users. Novel web3 products will be built out that incorporate Stargaze NFTs in ways we cannot possibly imagine. A new era of interchain NFTs is upon us — enter Stargaze.

Written by Xavier Meegan, Research Analyst at Chorus One

About Chorus One

Chorus One is offering staking services and building tools that advance the Proof-of-Stake ecosystem.

Website: https://chorus.one
Twitter: https://twitter.com/chorusone
Telegram: https://t.me/chorusone
Newsletter: https://substack.chorusone.com

About Stargaze

Website: https://stargaze.zone/
Twitter: https://twitter.com/StargazeZone
Stargaze LBP Details: https://gov.osmosis.zone/proposal/discussion/2882-details-and-parameters-of-stargaze-lbp-on-osmosis/

Core Research
Networks
Bootstrapping Liquidity for Lido for Solana
Lido for Solana launched about a month ago and so far north of $200m worth of SOL has already been staked with Lido.
October 8, 2021
5 min read

800,000 LDO and many more rewards are live on Lido for Solana and its DeFi integrations

Lido for Solana launched about a month ago and so far north of $200m worth of SOL has already been staked with Lido. Today, we are glad to announce that further liquidity pools and the first liquidity rewards in LDO tokens bridged from Ethereum will start to be distributed.

Holders of stSOL can now supply liquidity to pools like stSOL-SOL, stSOL-USDC, and even stSOL-wstETH

Users providing liquidity to pools will be rewarded in LDO and, for some pools, tokens from our partners (ORCA for the Orca pool and MER for the Mercurial Finance pool). In addition, LPs will also collect a portion of pool swap fees and accrue value in their stSOL tokens in accordance with Lido for Solana’s staking APR.

As promised, we have partnered with various AMMs to utilize stSOL — the liquid representation of your SOL stake in Lido. To bootstrap and incentivize liquidity providers, Lido has initiated the formation of the various pools. Holders of stSOL can now supply liquidity to pools like stSOL-SOL, stSOL-USDC, and even stSOL-wstETH — a first-of-its-kind liquidity pool with two value-accruing Lido liquid staking assets, with wstETH being bridged via Wormhole’s decentralized validator set.

800,000 LDO will be distributed as LP rewards over 2 months on Solana AMMs

The following list contains the current stSOL liquidity integrations:

Orca

Orca | The DEX for people, not programs

Orca is the easiest, fastest, and most user-friendly cryptocurrency exchange on the Solana blockchain.

www.orca.so

Orca has launched a stSOL-wstETH pool (wstETH being the wrapped version of Lido’s stETH). This is especially good news for stETH holders. Now, in addition to earning rewards by staking ETH and SOL, you get additional yield by adding liquidity to the wstETH-stSOL pool on Orca. Liquidity providers on Orca will earn 250,000 LDO supplemented by about 35,000 ORCA over the initial 8 weeks of this pool being live.

This first-of-its-kind liquidity pool is a very cool DeFi product! Not only is it composed of two staked assets earning staking rewards, but it also has one of these bridged over to Solana from Ethereum in a decentralized way, highlighting the power of cross-chain DeFi!

To participate in the Orca pool visit the guide linked below.

Wormhole Transfer and Orca Pool Guide | Lido for Solana

This is a step-by-step guide on providing liquidity to the following Orca Pool — stSOL-wstETH to earn more rewards…

docs.solana.lido.fi

Guide — https://docs.solana.lido.fi/staking/Orca-pool-Wormhole-guide/
Make sure to double dip after you add liquidity to the Orca Pool

Mercurial

The amazing Mercurial Finance team went live with a stSOL/SOL pool that will use our internal price oracle to create a maximally efficient liquidity pool. Providers of liquidity to Mercurial will earn 150,000 LDO and matched MER rewards on top of the swap rewards, while resting assured that their passive LP position is not exposed to impermanent loss. Read more about this integration.

Introducing Our First Non-Pegged Stable Pool: Lido x Mercurial

In our previous blog post, we introduced several innovative AMM systems we are bringing to the market. Today, we are…

blog.mercurial.finance

Raydium

We’ve launched a stSOL-USDC pool in collaboration with Raydium. Providers of liquidity to this pool will collect 250,000 LDO over 2 months in addition to the LP rewards from swaps on the OG of decentralized exchanges that integrates with Solana’s order book DEX Serum.

Saber

Finally, Saber, the leading cross-chain stablecoin and wrapped assets exchange on Solana, has launched the stSOL-SOL pool, which currently holds a TVL of $160M. Liquidity providers stand to gain 150,000 LDO in addition to the LP rewards and SBR yields for this pool. These rewards will be activated once Saber supports cross-incentivization. The stSOL-SOL Saber yield farm can be found here.

LDO Incentive Overview

Lido DAO in partnership with Lido for Solana multisig has transferred LDO incentives from Ethereum to Solana by using the decentralized Wormhole v2 token bridge.

As listed above, 800,000 LDO will be distributed as LP rewards over 2 months on Solana AMMs to bootstrap liquidity for stSOL.

  • 250,000 LDO for stSOL/wstETH on Orca co-incentivized by ORCA
  • 250,000 LDO for stSOL/USDC on Raydium
  • 150,000 LDO for stSOL/SOL on Saber co-incentivized by SBR
  • 150,000 LDO for stSOL/SOL on Mercurial Finance co-incentivized by MER

Keep a lookout for this and further upcoming integrations at the liquid staking page on Chorus One's website.

Chorus One

Get stSOL and passively earn staking rewards. Put your stSOL to work in DeFi and compound your yield.

chorus.one

About Chorus One

Chorus One is offering staking services and building protocols and tools to advance the Proof-of-Stake ecosystem.

Website: https://chorus.one
Twitter: https://twitter.com/chorusone
Telegram: https://t.me/chorusone
Newsletter: https://substack.chorusone.com

Core Research
Towards Multisig Administration in Lido for Solana
Lido for Solana is governed by the Lido Decentralized Autonomous Organization (Lido DAO).
August 20, 2021
5 min read

The ways in which multisig reduces trust surfaces and speeds up project execution

Lido for Solana is governed by the Lido Decentralized Autonomous Organization (Lido DAO). Members of the DAO — holders of the LDO governance token — can vote on high-level proposals, such as whether to expand to a new chain. For day-to-day tasks, we have a much more narrowly scoped need for somebody to execute privileged operations: an administrator.

The administrator rights reside with a 4-out-of-7 multisig that consists of established validators and ecosystem partners. Last week, we successfully set up the multisig on Lido for Solana Testnet. In the coming days, the same will repeat for the mainnet launch, beyond which all new proposals by Lido DAO will be processed via this multisig structure.

This post explores why multisig is important in making Lido for Solana secure and efficient and the way forward for governance in Lido for Solana.

The concept of multisig

Multi-signature is a digital signature scheme that allows a group of users to sign a single transaction. The transaction could be a governance proposal, a snapshot vote, or even a simple fund transfer instruction. A common way to describe a multisig setup is m-of-n: given n parties, each with their own private key, at least m of them must sign for a transaction to be executed. For example, a multisig that has 7 members and requires 4 signatures for a transaction to be fully signed is termed a 4-of-7 multisig.

The need for a multisig administration

Before we answer the question — why do we need multisig administration? — let us first understand how it supplements DAO governance.

DAO Governance

In a DAO governance model, decisions get executed automatically through smart contracts as a result of LDO governance token holders voting on these decisions. This results in a decentralized governance model and eliminates dependence on a centralized authority to execute decisions thereby removing the risk of a single point of failure.

On-chain DAO Governance

However, in the case of Lido for Solana, even though decisions are taken by the Lido DAO, they are executed by the multisig administration.

DAO takes decisions | Multisig executes them

To understand why offloading the decision-execution to a multisig administration is a good approach, let's look at the different administration methods that are possible in such a scenario:

  1. A single person could act as the administrator. This has a very low overhead, and the administrator can move quickly when there is a need to deploy a critical bug fix. However, it also places a high degree of trust in a single person.
  2. On the opposite side of the spectrum, a DAO program could act as the administrator. Administrative tasks could only be executed after a majority of LDO token holders approve. This is decentralized, but it makes it very difficult to act quickly when needed.

A good middle ground between these two extremes is multisig, a program that executes administrative tasks after m out of n members have approved. For m greater than one, no single party can unilaterally execute administrative tasks. At the same time, we only need to coordinate with m parties (instead of a majority of LDO holders) to get something done.

The benefits of multisig don’t end here. Using a multisig eliminates a lot of concerns that a typical user might have while investing. Let’s take a look at some of the other problem areas that the use of a multisig addresses.

1. Reducing points of trust

Can I trust the creators of the program to not change critical parameters of their own accord?

There is always the risk that an administrator (the authority that executes the DAO’s decisions) can start executing decisions arbitrarily. By including multiple parties in the multisig, we reduce the points of trust and make the decision execution more decentralized.

2. Execution Pace

Can Lido for Solana perform program upgrades quickly, in case of a critical bug?

A pitfall of on-chain governance is that in the case a critical bug-fix is required, achieving consensus on-chain could prove to be too slow and very costly as a result.

A completely decentralized model of governance slows down project execution, especially if a project is in its initial stages. There is always a tradeoff between the ease of execution and the degree of decentralization. However, that does not mean that one should do away with decentralization completely.

A governance model carried out by a multisig administration is the perfect compromise for a project like Lido for Solana. This lends it speed to execute decisions quickly in the earlier stages and also mitigates the risk of delayed fixing of critical bugs.

3. Decentralized program upgrades

Who decides which upgrades will happen in the future and can I trust them to remain benevolent?

Decision on Program Upgrades
The multisig decides on program upgrades. To understand why this is a reasonable solution, we need to take a look at the two possible extreme cases.

1) Single upgrade authority — In Solana the upgrade authority — the address that can sign upgrades — has a lot of power. A single upgrade authority could upgrade programs maliciously at will. For example, a malicious upgrade authority could upload a new version of the Lido program that withdraws all Lido funds into some address and runs away with the funds!

2) No upgrades allowed — On the other hand, if we don’t allow the program to be upgraded at all, and then if it turns out to contain a critical bug, we can’t fix it.

So, a multisig is a good middle ground, where no single entity can take control over the programs and their funds, but we can still enable upgrades.

Trusting Multisig to remain benevolent
The DAO can be trusted because the Lido DAO is large and decentralized, and consists of stakeholders who are aligned long-term. The proposals they vote positively on are by definition aligned with the interests of the stakeholders.

The multisig executes the decisions taken by the DAO. The multisig can be trusted because its participants are all reputable industry partners; their reputation is at stake if they suddenly go rogue! Additionally, no single multisig member has anything to gain by going rogue.

4. Cross-Chain Governance Complications

Why can’t Lido DAO’s proposals be executed directly on-chain?

This is because the Lido DAO uses Ethereum for governance, so implementing its decisions on the Solana blockchain requires cross-chain execution. Cross-chain governance, at this point, is not mature or fast enough to be a feasible solution.

The role of the multisig, therefore, is to execute the decisions made by the Lido DAO. The governance authority, the Lido DAO, sets the long-term goals and decides on major proposals. The administrator, the multisig in this case, then upgrades the program accordingly and changes its parameters.

Governance — Lido DAO
Administration — Multisig.

5. Transparency

Is the source code public and has it been verified that the Lido program is built from that source code?

It is imperative for users who invest their SOL in Lido to be sure that the Lido program does not contain any backdoors or hidden features that might hurt their investments. One way to be sure of this is to know that the multisig owners have verified that the Lido and multisig programs were built from the source code that is publicly available.

Furthermore, users can even verify this fact themselves if they wish to do so.

6. Credibility

How can I trust the parties involved in this multisig?

Another aspect of transparency inherent to Lido for Solana is the fact that we have made public the names of all 7 organizations that are part of the multisig ceremony. By doing so, users know which parties control the program and can decide whether they trust these parties. We strengthen the trust of our users by including only reputable participants and by making sure that this is public information.

Multisig Ceremony

The multisig ceremony is the process that the multisig uses to execute decisions. At a high level, this process works as a series of steps:

  1. Build a Solana transaction to propose
  2. Wrap the transaction in a multisig transaction (Instead of signing it with a wallet and executing, like we normally would)
  3. Sign and broadcast the wrapped transaction to the blockchain
  4. Notify the other N-1 signers to review the transaction
  5. The signers sign and submit their approval transactions to the blockchain
  6. When the multisig transaction has enough approvals, anybody (usually the last party to approve) can step in and execute the transaction

As explained earlier, multisig programs require multiple signatures to approve a transaction. This allows the signers to review an action on the blockchain before it is executed — making for decentralized governance. Chorus One is using the Serum Multisig program to introduce decentralization in Lido for Solana. The Multisig that we have set up has 7 participants and requires at least 4 of them to sign for a transaction to be approved.
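The m-of-n approval logic behind this ceremony can be sketched in a few lines of Python. This is purely illustrative: it models the threshold rule described above, not the actual interface of the Serum Multisig program, and all names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MultisigTransaction:
    """Illustrative m-of-n wrapper around a proposed transaction payload."""
    payload: str                      # the wrapped transaction (opaque here)
    threshold: int                    # m: approvals required
    signers: tuple                    # n: the authorized signer set
    approvals: set = field(default_factory=set)

    def approve(self, signer: str) -> None:
        if signer not in self.signers:
            raise PermissionError(f"{signer} is not an authorized signer")
        self.approvals.add(signer)

    def can_execute(self) -> bool:
        return len(self.approvals) >= self.threshold

# A 4-of-7 setup matching the Lido for Solana multisig described in this post
members = ("Staking Facilities", "Figment", "Chorus One", "ChainLayer",
           "P2P", "Saber", "Mercurial")
tx = MultisigTransaction(payload="proposed-upgrade", threshold=4, signers=members)

for signer in members[:3]:
    tx.approve(signer)
assert not tx.can_execute()   # 3 approvals: still below the 4-of-7 threshold

tx.approve(members[3])
assert tx.can_execute()       # 4th approval reached: anyone may now execute
```

Note how execution is gated only by the approval count, which is why the last approver (or anyone else) can submit the execution step.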

The 7 parties that comprise the multisig are:

  1. Staking Facilities
  2. Figment
  3. Chorus One
  4. ChainLayer
  5. P2P
  6. Saber
  7. Mercurial

The Way forward — On-Chain Governance

For now, the power to upgrade the Lido program (upon recommendation of DAO) rests with the multisig, but in the long-term Lido for Solana’s governance would be a completely on-chain decision-making process where the LDO token holders vote with their share on a proposal and collectively accept or reject it.

Decentralized policy-making in the crypto world is a complex problem. Top-down governance, as in the case of centralized organizations, is easy to implement but may not represent the best interests and needs of the stakeholders. On the other hand, a horizontal mode of decentralized governance promises a fairer representation of the voice of stakeholders but is much harder to implement.

There are multiple governance frameworks out there that exhibit varying degrees of decentralization and ease of execution. There is always a tradeoff between how easily one can implement a governance model versus how decentralized it is. Early on in a project's life cycle, a less decentralized but easily executable governance model makes more sense.

The long-term goal for Lido for Solana is to have a decentralized governance system with on-chain execution of decisions. In the meantime, executing decisions through a multisig helps us move quickly in the early stages, without having to trust a single party.

In terms of the project roadmap, going ahead we are looking for another audit of our code. That coupled with the results of a bug bounty will put us on the path to the mainnet launch.

Lido for Solana is poised to become the largest liquid staking solution in the market, and through DAO governance and multisig administration we make it secure and efficient. We are committed to reducing the trust surfaces required in Lido for Solana and to keep developing this project securely and at a swift pace.

To read about Lido for Solana’s project roadmap please visit

Project Roadmap — Lido for Solana

Lido for Solana Mainnet will launch soon. Here’s what we have been up to!

medium.com

Disclaimer

Our content is intended to be used and must be used for educational purposes only. It is not intended as legal, financial or investment advice and should not be construed or relied on as such. The information is general in nature and has not taken into account your personal financial position or objectives. Before making any commitment of financial nature you should seek advice from a qualified and registered financial or investment adviser. Chorus One does not recommend that any cryptocurrency should be bought, sold, or held by you. Any reference to past or potential performance is not, and should not be construed as, a recommendation or as a guarantee of any specific outcome or profit. Always remember to do your own research.


Core Research
Networks
Helium Staking Economics and the Utility of HNT
Helium is a blockchain network with a native cryptocurrency (HNT) used to incentivise individuals around the world to provide coverage on a global peer-to-peer wireless network.
June 23, 2021
5 min read

Helium Overview

Helium is a blockchain network with a native cryptocurrency (HNT) used to incentivise individuals around the world to provide coverage on a global peer-to-peer wireless network. This is done using a Helium-compatible hotspot, which to date provides coverage for low-power IoT devices. Traditional networks such as WiFi do not suit IoT devices well because of their limited range compared to network types such as LoRaWAN. To solve this problem, Helium pioneered LongFi, a mixture of LoRaWAN and blockchain technology. In the past, there were not enough incentives for participants to operate LoRaWAN hotspots, resulting in higher costs for companies using IoT devices. With the invention of LongFi and the use of HNT to reward participants for growing the decentralised network, IoT companies now have a cheaper alternative. Helium has already secured multiple partnerships with IoT companies, such as Salesforce, Lime, Airly, Nobel Systems, and more.

Network users pay 'Data Credits' (fees) to the Helium network to transmit data for any IoT device such as a tracker, temperature sensor, water meter, etc. Hotspots earn rewards (paid for by companies with IoT devices) when an IoT device uses them directly for transmitting data (e.g. to update the company's servers about the geolocation of the IoT device).

Previously, hotspots played a role in the consensus of the network. However, Helium governance has now voted in favour of introducing validators to replace hotspots in transaction consensus. The new proposal, HIP-25, alleviates the network pressure that hotspots currently endure from validating transactions and transfers consensus work to validators.

The 4 Core Primitives of Helium

Helium introduced 4 core primitives to allow their decentralised wireless network to grow.

The first major design decision was to introduce HoneyBadgerBFT as the consensus layer for the network. HoneyBadgerBFT does not require a leader node, tolerates corrupted nodes, and makes progress in adverse network conditions.

The second major design decision was to introduce hotspots. Helium hotspots are the epicenter of the Helium wireless network. Hotspots are purchased from external providers (currently ~$500) and then plugged into a power source and connected to WiFi. Hotspots act similarly to a WiFi router but with a coverage range orders of magnitude greater (5–15 kilometers); their primary role is to send and receive messages from Helium compatible IoT device sensors and update data from IoT devices to the cloud.

The third major design decision was Helium’s invention of LongFi. LongFi is a type of network design that utilises LoRaWaN and blockchain, which is especially designed for low-bandwidth data (5–20kbps) and variable packet sizes — perfect for IoT devices.

The fourth major design decision of the Helium network was to introduce Proof-of-Coverage. Put simply, Helium hotspots verify the location of hotspots in the LongFi P2P network. Hotspots issue challenges (via challengers) every 240 blocks to targets (other hotspots), whereby the target hotspot must prove their geolocation via transmitting radio frequency packets back to the hotspot challenger. Other hotspots in close proximity to the transmitter (up to 5) must witness and attest to the challenger that the target has responded to the challenge. Previously in the Helium network, 6% of HNT inflation rewards were dispersed to hotspots sending, submitting, and witnessing location proofs. All that is about to change with the introduction of HIP-25.

Helium’s Motivation to Use Proof-of-Stake to Complement Proof-of-Coverage in HIP-25

Due to the sheer growth Helium has experienced, it is no longer feasible for hotspots to act as block producers and to issue, submit, and witness location proofs. In general, more hotspots should be encouraged to join the Helium network. Right now, however, there is a trade-off: the more hotspots that join the network, the slower the network becomes. A slow network is detrimental for Helium because the network relies on large volumes of transactions being sent at rapid speeds, given the wide plethora of use-cases the Helium network enables (e.g. pet-tracking, air-quality monitoring, art temperature checking, car-park availability alerting, COVID-19 case tracing, and much more). When block times are slower, inflation of HNT is also slower, which impacts the viability of setting up a hotspot to participate in the network. Hotspot addresses change and move, which is perfect for connecting IoT devices but not so much for validating a blockchain undergoing exponential growth.

To take the pressure off hotspots, governance voted in HIP-25 to introduce validators that will run infrastructure to secure the Helium blockchain, allowing hotspots to focus on their core purpose. The role of the validator is a specialised one in blockchain, and it is understandable that Helium now wants to utilise reliable node operators to ensure network performance is optimal. We were excited by the news that we would now be able to contribute to such a unique network and are strong believers in the long-term potential of Helium.

Helium Network Proof-of-Stake Economics

The introduction of node operators onto Helium brings one key change to the staking economics of the network, because those participating in consensus have changed. Previously, when hotspots participated in Proof-of-Coverage consensus, they split the 6% annual HNT inflation rewards among themselves, each receiving a share proportional to all other hotspots. The economics of the Helium network are clear: approximately 5M HNT are minted every month. Validators now participate in the consensus group and stand to earn the 6% of the 5M HNT inflation that hotspots used to earn as rewards.

This means that the consensus group stands to earn 300,000 HNT per month, or 3.6m HNT annually. To run a node on Helium, there is a 10,000 HNT self-bond requirement. The requirement per node is strict: you cannot stake below or above 10,000 HNT per node. Below that threshold you will not earn staking rewards, and above it you will not earn any extra rewards for over-staking. The capital and technical requirements of Helium's Proof-of-Stake network are high. For this reason, Chorus One is offering a Validator-as-a-Service solution for HNT holders with enough HNT to stake, meaning we will provide infrastructure for HNT stakers who do not have prior experience running nodes. From our calculations, we estimate a staking APR between ~6–36% for HNT stakers.
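The post's figures (roughly 5M HNT minted monthly, 6% to the consensus group, a fixed 10,000 HNT stake per node) pin down the APR as a function of validator count alone. A sketch of that arithmetic follows; the validator counts of 1,000 and 6,000 are our own assumptions, chosen to bracket the quoted ~6–36% range:

```python
# Helium validator APR implied by the article's figures.
MONTHLY_EMISSION = 5_000_000     # HNT minted per month (approximate)
CONSENSUS_SHARE = 0.06           # consensus-group share of emissions (HIP-25)
STAKE_PER_VALIDATOR = 10_000     # strict per-node stake requirement

annual_pool = MONTHLY_EMISSION * CONSENSUS_SHARE * 12   # HNT per year

def staking_apr(n_validators: int) -> float:
    """APR for one validator when the annual pool is split among n validators."""
    return annual_pool / n_validators / STAKE_PER_VALIDATOR

print(f"{staking_apr(1_000):.0%}")   # 36% with 1,000 validators
print(f"{staking_apr(6_000):.0%}")   # 6% with 6,000 validators
```

The APR range in the post thus simply reflects uncertainty about how many validators will share the fixed reward pool.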

There are still meaningful incentives for users to set up hotspots, given the rewards allocations to data transmission and Proof of Coverage. Hotspot owners will continue to earn proportionate HNT rewards (up to 32.5% of the inflation rewards per epoch) if IoT devices utilise their hotspot during the duration of an epoch (30 minutes). Hotspot owners will also continue to earn rewards for verifying the geographic locations of their hotspot or others in their vicinity (known as challenges, mentioned above).

The only economics that change in HIP-25 are the consensus group rewards (6%), which now go to validators.

In Proof-of-Stake networks, inflation is not the only element that contributes to staking rewards. Another key component comes from transaction fees within the network. One interesting aspect of the economics in Helium is the use of Data Credits to pay for transaction fees. You can think of Helium as somewhat similar to how an algorithmic stablecoin network (such as Terra) would operate. HNT is burned to pay for Data Credits (DC), denominated in USD. One Data Credit is equal to USD $0.00001; in this sense, 100,000 DC equal USD $1. Anyone is able to view the list of fees that are used in Helium. The most common transaction in Helium is a hotspot sending or receiving IoT data, which costs 1 DC per transfer of packet data. If a hotspot owner needed to send and/or receive 100,000 packets of data and held 1 HNT in their wallet (worth USD $10 at the time), they would burn 0.1 HNT to receive 100,000 Data Credits, enough to pay for 100,000 transfers of data to IoT devices.
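The burn arithmetic in that example works out as follows (a minimal sketch using only the figures from the paragraph above):

```python
# HNT-to-Data-Credit burn: 1 DC is pegged to USD $0.00001, and HNT is burned
# at its market price to mint DC.
DC_PRICE_USD = 0.00001

def hnt_burned_for_packets(n_packets: int, hnt_price_usd: float) -> float:
    """HNT burned to mint enough DC for n packet transfers (1 DC each)."""
    usd_needed = n_packets * DC_PRICE_USD
    return usd_needed / hnt_price_usd

# The post's example: 100,000 packets with HNT at $10 burns 0.1 HNT
print(hnt_burned_for_packets(100_000, hnt_price_usd=10.0))  # 0.1
```

Because DC are non-transferable and USD-pegged, the HNT burned per packet falls as the HNT price rises, while the cost to the IoT company stays constant in USD terms.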

How Helium Network Activity Impacts HNT Burn and Mint Token Equilibrium

Now that we understand how the economics of the network work, we can dig a little deeper into what is actually going on in the network right now. As of June 2, 2021, there are 48,319 hotspots on Helium. Using the Helium block explorer, we calculated ~200k transactions in 24h. Extrapolated, this would mean 5.9m transactions in 30 days and 71m transactions in 365 days. Using the Helium block explorer, we can also see that 22.5m DC were spent in 30 days. In USD terms, that means $225 was spent by users sending and receiving data across IoT devices in 30 days. If 5.9m transactions per month result in $225 USD of HNT burnt (for use as DC), then every ~26k transactions burn $1 USD worth of HNT. We can plug in the above network activity and model it to find out how much USD will be used to buy back and burn HNT.

As you can see, a 100x growth in transactions on the Helium network would lead to roughly $270k worth of HNT being burned from circulation annually. Not only that, but stakers stand to earn between 6–36% APR as well. This means there is demand for HNT to buy back and burn as network activity increases, and stakers stand to benefit from this the most as their HNT stack grows over time from the staking rewards they earn. In the past 30 days, the number of hotspots connected to the Helium network increased from 34,550 to 48,130 (39% MoM). If demand for hotspots continues at this compound monthly growth rate (CMGR), there will be 5,340% more hotspots than there are today by the end of the year (1.8m). One constraint of the Helium network is that demand sometimes outstrips the supply of hotspots from manufacturers. However, more demand for hotspots over time will lead to economies of scale for hotspot manufacturers and is likely to entice competitors to enter the market to fill the demand. This, in turn, translates to cheaper prices for hotspot buyers, meaning they can recover their initial cost (the hotspot purchase) faster. Our friends at Multicoin Capital called this the Flywheel Effect.
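The hotspot projection is simple compound growth. A sketch, assuming the 39% CMGR holds; note that the ~1.8m figure quoted above corresponds to roughly 11 months of compounding from 48,130 hotspots:

```python
# Compound monthly growth applied to the hotspot count, using the post's figures.
hotspots_now = 48_130
cmgr = 0.39                        # compound monthly growth rate (assumed constant)

def projected_hotspots(months: int) -> int:
    return round(hotspots_now * (1 + cmgr) ** months)

print(projected_hotspots(11))      # ~1.8m hotspots after 11 months at 39% CMGR
```

A constant 39% CMGR is, of course, a strong assumption; supply constraints from hotspot manufacturers alone would flatten this curve.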

The Supply-Side Helium Flywheel Effect — Staking Rewards Crunching HNT Supply

The Helium Flywheel Effect — Conceptualised by Multicoin Capital

One small remark about the original Flywheel Effect envisioned by Multicoin is that it does not take into consideration the possibility of earning staking rewards (due to the removal of hotspots from the consensus group). HIP-25 shifts 6% of inflation (consensus rewards) from hotspots to HNT stakers. This, in turn, will lead to faster economies of scale for hotspot manufacturers and result in lower hotspot costs for network participants, as the hotspots built in future can become 'dumber' (i.e. they do not need to understand the intricacies of consensus). The mining ROI mentioned in the original flywheel effect still applies to hotspots; the only change is that 6% of the hotspot mining ROI will now be earned by users staking HNT to secure the network. If anything, the flywheel effect accelerates with HIP-25, as hotspots become more minimal and therefore cheaper to buy, thanks to specialised workers (validators) securing the network with specialised infrastructure. One might think of HIP-25 as an efficient re-allocation of resources.

A Hypothesised Delegated Proof of Stake Model to Accelerate the Supply-Side Flywheel

We can hypothesise a consensus rewards model where delegation is possible (note: delegation is NOT possible in phase 1) to display what the offset might look like (ignoring other rewards hotspots earn). If the price of HNT and staking APR remains constant (est. 18%), we ignore the time value of money and assume that hotspot prices will come down to $100 (due to economies of scale) we can model the Proof-of-Stake economic equivalent to Proof-of-Coverage (i.e. what it takes for new participants to recover their initial investment).

$100 Hotspot costs, $400 HNT cost (PoS)
$500 Hotspot Costs, No HNT Cost (PoC)

Using the hypothesised model, if governance decided to activate delegation for HNT stakers, PoS economics would become more attractive than PoC economics (disregarding other hotspot rewards, e.g. data transmission) once the network reaches 179,000 hotspots. As of the time of writing, there are 52,232 hotspots (growing 39% MoM, as mentioned above). Helium's transition to PoS is a win-win for the network as a whole, introducing better network economics and performance. It is important to note that this model assumes delegation is possible, whereas right now it is not (nodes cannot stake less or more than 10,000 HNT from one address). In the future, governance might vote to turn on delegation once the Proof-of-Stake network matures. The model also has many assumptions and disregards hotspot rewards earned through data transmission to IoT devices; those rewards can be very inconsistent and are therefore ignored in this simple model.

The Demand-Side Helium Flywheel Effect — Companies Purchasing DC to Use Helium Network

The demand side of HNT comes from IoT companies wanting lower-cost networking services globally for their devices that do not require a lot of bandwidth. If a customer were to use a cellular modem for their IoT devices, they would pay 1000x more than if they connected to LongFi, with 200x less range. The benefits of IoT devices being able to connect anywhere in the globe where hotspots are available, at a fraction of the cost, are profound. The business model for Helium is B2B. Customers are companies that have low-bandwidth devices and want to connect to Helium for the cost savings compared to a cellular modem. For example, Lime uses Helium to track the location of its scooters. Companies such as Lime are growing at a similar rate to the Helium network, expanding to countries worldwide. As more scooters and hotspots are set up across the globe, more Data Credits will need to be burned from HNT in order to use the LongFi network. Helium already has 14 multinational companies using its LongFi network.

Companies currently using Helium Network

Whilst most companies that are using the Helium network now are Western-based, there has been a surge of new hotspots being set-up in China in the past two months (May-June 2021). As new geographies grow and more hotspots are set up across the globe thanks to crypto-economic incentives, it becomes more viable for multinational companies to utilise Helium’s services across borders (e.g. to track an IoT across many countries in Asia).

Change in active hotspots in China from May-June 2021 — Source: Helium

As more hotspots come online on the Helium network, the range of the network increases, making the LongFi network more appealing for companies. This could be considered a Flywheel effect on the demand-side.

Staking rewards incentivising participants to set-up hotspots and allowing hotspots to be dumber increases demand for the network (from companies) and improves economies of scale

To conclude, we are very excited that Helium is transitioning to a Proof-of-Stake network and that we have the opportunity to be one of the first validators supporting it. We are long-term believers in Helium and can’t wait to help the network scale to reach its full potential. Hotspots for IoT devices are just the beginning for Helium. Helium governance recently passed HIP-27 to create the first consumer-owned 5G network in the world on Helium network. In the not so distant future, anyone with a phone may be able to connect to the Helium network’s 5G hotspots and save costs.

Helium’s ambition to launch new technologies by using crypto-economic incentives to enable consumer-owned economies is one Chorus One fully supports. Stay tuned for a future announcement on how HNT holders can stake their assets with Chorus One.


Networks
Core Research
Chainlink 2.0 — Super-linear Staking Economics Explained
Chainlink 2.0 aims to create a decentralised metalayer through hybrid smart contracts by having a large number of oracle networks serve users on an individual basis.
May 12, 2021
5 min read

Chainlink 2.0 — The Decentralised Finance (DeFi) Metalayer

The Chainlink 2.0 whitepaper was published on April 15th. Chainlink 2.0 aims to create a decentralised metalayer through hybrid smart contracts by having a large number of oracle networks serve users on an individual basis. The end goal of this would be to have smart contracts interact with multiple oracles, just as users interact with multiple APIs in web2 today. In essence, Chainlink wants to take as much load off of smart contracts as possible. The DeFi metalayer will look something like an off-chain outcomes data factory. Large amounts of data will flow into Chainlink oracle networks, and large numbers of nodes will offer more specialised services to report on complex values that DeFi smart contracts call for. Developers will have the flexibility to pick and choose what oracles they need, which in turn allows them to simplify their smart contract code.

The role of nodes in Chainlink and how they can be exploited

Chainlink generates one standardised value to send to a smart contract by aggregating all the values it receives from individual nodes for a given variable. A service level agreement (SLA) normally defines how much an individual node may deviate from the aggregated result considered correct by the network (usually ~1%). The oracle aggregates the values and sends the median to the smart contract. This means that if over 50% of nodes report a false value, that false value is what reaches the smart contract (which could have detrimental effects on the functioning of the contract it is sent to). Nodes could have an incentive to deliberately report false values if it is in their financial interest to do so. For example, a node that reports a false price for a crypto-asset can exploit the resulting information asymmetry by arbitraging across exchanges that quote different prices (prices it has itself helped distort). There are many reasons a node might report a false value to an oracle; nodes can also be bribed to report a false value for the benefit of another agent.
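Median aggregation and its >50% failure mode can be sketched in a few lines of Python. The prices, node count, and SLA band below are illustrative assumptions, not Chainlink's actual implementation:

```python
from statistics import median

def aggregate(reports):
    """Aggregate individual node reports into one on-chain value via the median."""
    return median(reports)

# 7 honest nodes report the true price (~100), each within a ~1% SLA band.
honest = [99.8, 100.0, 100.1, 99.9, 100.2, 100.0, 99.7]
print(aggregate(honest))  # 100.0

# If a majority (>50%) of nodes collude on a false value, the median is false:
colluding = [50.0] * 4 + [100.0, 99.9, 100.1]
print(aggregate(colluding))  # 50.0, the false value reaches the contract
```

With 4 of 7 reports colluding, the median sits inside the colluding cluster, so the contract receives the false value even though three honest reports exist.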

Implicit and explicit incentives mitigate malicious behaviour of oracle nodes in Chainlink 2.0

Chainlink uses implicit and explicit economic incentives to ensure oracle nodes do not behave maliciously. Explicitly, Chainlink requires two deposits: one that can be slashed for reporting a value that deviates from the one agreed upon by the aggregate network, and another that can be slashed for falsely reporting to an adjudicator known as the 'second tier' (more on this later) that a network of nodes has collectively reported a false value. Implicitly, Chainlink assumes rational economic actors (nodes) will send correct values to oracles because it is in their best interest to do so (i.e. there is an opportunity cost of rewards a node misses out on by behaving maliciously). This implicit incentive is known as the 'future fee opportunity' (FFO). Chainlink aims to measure it with an 'Implicit-Incentive Framework', an ambitious attempt at quantifying a node's opportunity cost that includes its performance history, data access, oracle participation and cross-platform activity (e.g. how an operator such as Chorus One performs on other networks with regard to downtime, slashing, etc.). In fact, Chainlink has gone so far as to define an equation for the implicit incentive of nodes, which can be found below:

Source: Chainlink 2.0 Whitepaper

This formula captures why a node in Chainlink would implicitly continue to report correct values to oracles: if it does not, it stands to lose its future fee opportunity.
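As a toy illustration of that intuition (our own simplification, not the whitepaper's equation), a rational node can be modelled as comparing a one-off gain from cheating against the discounted stream of future fees it would forfeit. All numbers below are hypothetical:

```python
def future_fee_opportunity(fees_per_epoch, epochs, discount_rate):
    """Toy model of FFO: the discounted sum of fees a node expects to earn
    by continuing to behave correctly. A simplification for illustration,
    NOT the whitepaper's actual equation."""
    return sum(fees_per_epoch / (1 + discount_rate) ** t
               for t in range(1, epochs + 1))

# Hypothetical numbers: 100 units of fees per epoch over 24 epochs, 5% discount.
ffo = future_fee_opportunity(fees_per_epoch=100.0, epochs=24, discount_rate=0.05)

# A rational node cheats only if the one-off gain exceeds the FFO it forfeits.
one_off_gain = 500.0  # hypothetical profit from a single false report
print(ffo > one_off_gain)  # True: honesty dominates here
```

Under these assumptions the FFO (~1,380) dwarfs the one-off gain, so reporting honestly is the profit-maximising strategy.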

An interesting point about implicit incentives in Chainlink's whitepaper is that of 'speculative FFO'. New nodes going live on Chainlink are betting that their future fee opportunity will outweigh their expenses. In essence, those running a node on Chainlink in the early stages are taking a speculative bet that they will earn considerable fees in the future. This 'speculative' side of FFO (i.e. betting on the future success of Chainlink) multiplies the implicit incentive for nodes to behave correctly, because they have a stake in the network performing well. At Chorus One, we believe the value of this implicit incentive is only now becoming better understood by networks. It can be further strengthened by giving node operators more skin-in-the-game. For an existing network like Chainlink, this could mean an airdrop of tokens to node operators to ensure they care about the success of the network. An even greater implicit incentive might be for Chainlink to offer supercharged rewards (e.g. 2x rewards, as found in Mina) to node operators with the greatest reputational equity, which would be a positive externality for the entire crypto ecosystem as nodes seek to build reputation across all networks. For new networks, the implicit incentive could be strengthened by offering tokens to node operators in private sales, so that they have skin-in-the-game from the network's inception. Incentivised testnets can also work well for new networks to encourage validators to get actively involved. The earlier a validator has skin-in-the-game, and the larger that stake is early on, the more attention the validator is likely to pay to the future success and security of the network.
We will discuss the importance of implicit and explicit incentives for node operators on other networks in greater depth in a future article.

Enhancing explicit incentives for nodes to behave correctly via super-linear staking

Chainlink 2.0 introduces the concept of super-linear (or quadratic) staking to ensure nodes are incentivised to always report correct values (as agreed upon by other nodes). Chainlink has essentially created a second layer (known as a tier in the whitepaper) that acts as a backstop if a watchdog believes that an aggregated value reported by a network of nodes is false. A watchdog is any node in the first tier that alerts the second tier when it believes a reported value is wrong. You can think of this as a 'dibber-dobber' system: a watchdog is like a student in a class (tier 1) whom the teacher (tier 2) trusts to report back if the rest of the class misbehaves. To continue the analogy, suppose the teacher leaves for 10 minutes and offers a candy reward to all students if they do not misbehave while away (this is like the explicit incentive deposit) and a second reward for reporting if >50% of the class misbehaves (funded by stripping the deposits of misbehaving students). When the teacher leaves, over half the class starts misbehaving, so you cannot work because you are distracted. Your misbehaving classmates, however, want the best of both worlds: to misbehave and still keep their deposit. Now imagine that anyone can tell the teacher when over half the class is misbehaving, and that the teacher distributes the slashed deposits of misbehaving students under a randomised-priority, winner-take-all system (i.e. only one student receives all the rewards slashed from misbehaving students for dobbing on their peers). Finally, imagine the misbehaving students try to bribe the behaving students not to report the misbehaviour.
If even one student reports the misbehaviour, that student earns all of the misbehaving students' deposits. The misbehaving students therefore need to pay every behaving student more than the maximum reward a single reporter could receive. Remember the priority system: rewards are not split evenly, so all rewards for a correct report of misbehaviour go to one student. This is the super-linear, quadratic effect of Chainlink 2.0 staking. It becomes far more expensive to bribe behaving students (nodes), because the minimum bribe for any individual student is the full reward that student could receive from the slashing of misbehaving students. The minimum an adversary must pay to ensure incorrectness is therefore that maximum reward paid to every behaving student, since if even one student tells the teacher, that student stands to receive all the misbehaving students' deposits (that's a lot of candy). If the slashed deposits were distributed equally instead, it would be much cheaper to bribe behaving students not to report to the teacher. In this sense, the tier system (a second tier with the final say) and watchdog priority (a dibber-dobber with randomised priority who stands to earn all the slashed deposits for a correct report) ensure the integrity of values reported in Chainlink.
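The bribery arithmetic in the analogy can be sketched directly. The node counts and deposit sizes below are made-up illustrative numbers; the comparison is between winner-take-all and equal-split distribution of the slashed pot:

```python
def min_bribe_winner_take_all(n_honest, slashed_pot):
    """Winner-take-all: any single honest reporter can win the entire slashed
    pot, so an adversary must offer each of them at least that full amount."""
    return n_honest * slashed_pot

def min_bribe_equal_split(n_honest, slashed_pot):
    """Equal split: each reporter's maximum reward is only pot / n,
    so the total bribe never exceeds the pot itself."""
    return n_honest * (slashed_pot / n_honest)

# Hypothetical numbers: 50 honest nodes; misbehaving nodes' deposits total
# 50 * 1_000. Since the pot itself grows with the node count, the
# winner-take-all bribe cost grows quadratically in n.
pot = 50 * 1_000
print(min_bribe_winner_take_all(50, pot))  # 2500000
print(min_bribe_equal_split(50, pot))      # 50000.0
```

With winner-take-all the adversary must pay 50x the entire pot; with equal split, only the pot itself. That gap is the super-linear security margin.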

Fig.: Quadratic staking on Chainlink 2.0, visualised. The dibber-dobber stands to earn their classmates' candy for being a good student in Chainlink 2.0.

Economies of scale in Chainlink 2.0

Super-linear staking combined with capped per-node future fee opportunities contributes to the economies of scale Chainlink can achieve. Each new user who joins a decentralised oracle network lowers the cost for the other users on that network and lowers the average cost per unit of economic security. Chainlink supposes the average cost per dollar of network security is the future fee opportunity divided by the number of nodes. If Chainlink later decides to cap the future fee opportunity at x per node, any fees above that cap are reserved for new nodes that stake in the network. This achieves economies of scale because it is cheaper for a user to join an existing network than to create their own (i.e. fees signal where nodes should be; nodes stake and join that network, and security rises). Due to super-linear staking, the more nodes a network has, the more economically secure it becomes (quadratically!). Economic security is provided by stake; nodes provide this stake, which can be used to find a node's average cost per dollar of economic security (how much one node contributes to a network's security; with FFO capped, the cost falls as more nodes join, and economies of scale emerge). Chainlink therefore has an implicit incentive of its own to grow its staking business: the more total value locked grows, the more smart contracts need oracles, the more funds can be exploited, the more Chainlink is at risk when oracles are exploited to drain smart contracts, the more reputational risk Chainlink carries, the more funds it will require nodes to stake, the more secure the network gets, the less one dollar of stake costs to secure the network, and the greater the economies of scale Chainlink achieves.
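The "FFO divided by number of nodes" cost measure, with an optional per-node cap, can be illustrated as follows. This is a simplified toy model; the function name and parameters are ours, not Chainlink's:

```python
def cost_per_dollar_of_security(total_ffo, n_nodes, ffo_cap_per_node=None):
    """Toy model: average cost per dollar of economic security, taken as
    FFO / number of nodes. An optional per-node cap reserves fees above
    the cap for new entrants. Illustrative naming, not Chainlink's API."""
    per_node = total_ffo / n_nodes
    if ffo_cap_per_node is not None:
        per_node = min(per_node, ffo_cap_per_node)
    return per_node

# The same fee pool spread over more nodes: security gets cheaper per node.
print(cost_per_dollar_of_security(10_000, 10))    # 1000.0
print(cost_per_dollar_of_security(10_000, 100))   # 100.0
# With a cap of 500 per node, fees above the cap would go to new stakers.
print(cost_per_dollar_of_security(10_000, 10, ffo_cap_per_node=500))  # 500
```

As the node count grows tenfold, the average cost per unit of security falls tenfold, which is the economies-of-scale claim in miniature.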

Delegations on Chainlink 2.0

In any PoS network, it is critical that the economic actors participating have enough at stake to ensure they do not misbehave. To get more assets at stake in any network, barriers need to be lowered. Delegations are not mentioned in Chainlink 2.0, meaning holders of the $LINK token cannot natively delegate their assets to a node operator such as Chorus One. Currently, the only way for users to earn staking rewards in Chainlink 2.0 is by running their own node to report values for the jobs assigned to them. However, delegation protocols are being worked on: Linkpool, for example, is working on democratising staking rewards for $LINK holders via staking pools. Demand for $LINK delegation has been high since Linkpool launched this service, and we expect it to continue when Chainlink 2.0 goes live, especially because Chainlink 2.0 will likely require most nodes to post collateral (stake) in order to report values for jobs. Delegation in Chainlink 2.0 gives users the opportunity to earn staking rewards on otherwise idle $LINK and allows nodes to report values on more jobs to increase their future fee opportunity (FFO), both of which are a net positive for Chainlink. Delegation demand could well translate into a new era of $LINK liquid staking innovation.

To conclude, Chainlink 2.0 is secured by implicit (e.g. FFO) and explicit (e.g. super-linear staking) incentives. The importance of oracle security has never been higher, as the value oracles secure in DeFi grows every day. Any oracle exploitation is disastrous for DeFi, so it is important that oracle networks such as Chainlink improve their security at the same rate that DeFi grows. Chainlink's proactive approach of changing its economics to capitalise on network effects (incentives to run more nodes) and economies of scale (security becomes cheaper as more nodes join) is timely and likely to sustain Chainlink's position as an oracle market leader well into the future.
