Core Research
Networks
What are Avalanche subnets and why are they a big deal?
We analyze their technical setup and discuss some of our favorite subnets.
July 7, 2022
5 min read

Avalanche has a thriving, friendly, and engaging community. On top of that, it has the quickest and most valuable bridge solution to and from Ethereum, with BTC bridging coming shortly, and it is fortunate to have a team that consistently delivers at the top level. It's great for validators like us too: there is no slashing, and rewards depend only on uptime. At 9.1% annual staking rewards currently, locking AVAX to stake is appealing.

The thriving ecosystem is already on display. Liquid staking is accessible via BenQi (sAVAX, $179M in TVL), two additional solutions are on the way (LAVA and Eden Network + YieldYak), and Lido is also building its liquid-staking implementation for AVAX. A competitive DeFi landscape is in operation as well, including TraderJoe (DEX, $179M in TVL), Platypus (stable swap, $155M in TVL), Aave (lending, $4.64Bn in TVL), and many more. Subnets now allow innovative technologies in both consensus and horizontal-scalability architecture to join the network. To make the experience complete, the team even provides VMs as free open-source code, ready to be picked up by companies wishing to join the subnet movement.

What are subnets?

The Avalanche mainnet is made up of two blockchains (the C-Chain and the P-Chain) and one DAG (the X-Chain, built for ultra-high TPS); these are two different types of distributed ledger technology (DLT). The P-Chain is responsible not only for tracking subnet and validator information but also for creating new subnets and blockchains.

Avalanche and its multiple chains
https://docs.avax.network/subnets

Although the term “subnet” is used interchangeably and synonymously with blockchains, subnets are a bit more complex than that. The technical definition of a subnet is as follows:

A Subnet is a dynamic set of validators working together to achieve consensus on the state of a set of blockchains, according to Avalanche’s FAQ page.

The unlocking of subnets is an event of great importance in the wider Web3 ecosystem. It brings value through its extensive use cases and benefits:
  • Horizontal scaling capabilities for the primary network or any project that wants to scale beyond one blockchain or include multi-blockchain functionality.
  • Virtual Machine (VM) flexibility: a subnet can run VMs based on the EVM, WASM, B-Script, and other cross-ecosystem technologies. Also, the developer can select any native blockchain token to pay gas fees.
  • Highly customizable and flexible in design so they can be compliant with regulatory and jurisdiction laws.
  • A marketplace can emerge where validators offer their services to validate subnets.
  • Virtualize entire ecosystems such as Ethereum, Solana etc. on Avalanche.
  • Because chains do not compete for block space, TPS is greater: transactions on one chain are not hindered by dApps on other chains.

So if devs can decide their own token and VM, how does this help AVAX?

Validating a subnet requires validating the mainnet (remember: the C-Chain, X-Chain and P-Chain) too, and that requires staking AVAX. Hence, when new subnets form, more AVAX gets staked. There is a limit to how many subnets a validator may operate (due to hardware constraints), while the number of validators inside a subnet is unbounded, with a minimum of 5. Each validator can realistically operate the C-Chain plus a few more subnets at most, so validators should attempt to choose the most promising subnet games out there, supporting competition and the production of competitive products. The mechanism is as follows:
  • A new subnet is constructed, and the operating nodes cannot handle further subnet validation, thus new nodes are built; as a result, more AVAX is staked and the mainnet is secured further.
  • Staking is limited to a minimum of 2000 AVAX and a maximum of 3,000,000 AVAX (likely to decline) per validator. This implies that validators cannot operate a single, highly concentrated node; rather, they must operate new nodes.
  • Delegations are capped at four times a validator's own stake. This implies that you cannot operate a validator with 2K AVAX and have 1M AVAX delegated to you; absorbing that stake means staking additional nodes, enhancing the mainnet.
  • As such, decentralisation is highly incentivised.
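The staking limits above can be sketched with some back-of-the-envelope Python. The numbers come from the article; the function and variable names are ours, for illustration only:

```python
# Sketch of the AVAX staking limits described above (values from the article).
MIN_STAKE = 2_000        # minimum self-stake per validator, in AVAX
MAX_STAKE = 3_000_000    # maximum total stake per validator, in AVAX
DELEGATION_FACTOR = 4    # delegations capped at 4x the validator's own stake

def max_delegation(self_stake: int) -> int:
    """Maximum AVAX that can be delegated to a validator with `self_stake`."""
    if self_stake < MIN_STAKE:
        raise ValueError("below minimum self-stake")
    # Delegations are limited both by the 4x factor and by the overall cap.
    return min(DELEGATION_FACTOR * self_stake, MAX_STAKE - self_stake)

# A 2,000 AVAX validator can take at most 8,000 AVAX in delegations,
# so 1M AVAX of delegations forces the operator to spin up more nodes.
print(max_delegation(2_000))    # 8000
print(max_delegation(700_000))  # capped by MAX_STAKE: 2300000
```

This is why large delegations cannot pile onto a single minimal node: stake is pushed outward across many validators instead.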

Subnets allow anybody to quickly establish permissioned or permissionless networks with unique implementations that are powerful, dependable, and secure. Developers can use AvalancheGo or AvalancheJS, and Ethereum developers can seamlessly use Solidity to launch dApps, since the EVM is fully supported. Avalanche includes features not seen on other chains, such as the ability to choose which validators secure Subnet activity, which token is utilized for gas costs, bespoke economic models, and more. Crucially, subnets stay naturally linked with the larger Avalanche ecosystem, do not compete for network resources with other projects, and can be created without limit. With standard rules underlying all apps on a smart contract network, Web3 applications may differentiate on user experience like never before. A similar approach can be found in Cosmos with Saga and their “chainlets”, and in Ethereum with Skale.

Why are Avalanche subnets a big deal? — Enter GameFi

GameFi, a common phrase in the crypto-verse, is a portmanteau of “Gaming” and “Finance.” It covers the gamification of in-game economies in order to generate profit via play-to-earn crypto games. In GameFi games, items are represented by NFTs. Users may boost their earning potential by levelling up and upgrading their characters, as well as by participating in tournaments. As an example, players in Axie Infinity (arguably the biggest GameFi game in 2021) earned more than $1000 worth of $SLP a month before it suffered a hack. Many of these blockchain games are communities where players may earn tokens to swap for money. It's remarkable to watch blockchain games go from a few hundred players in 2013 to top-grossing games like Axie Infinity with hundreds of thousands of dollars in daily trade volume. And this is just the first generation of games on blockchains.

Adoption has skyrocketed over the past years. With a large number of retail investors as well as big companies like Microsoft, Nike, Meta and many more already involved, the metaverse market is expected to grow significantly. Major investors such as Gala Games and C2 Ventures formed a $100 million venture fund for GameFi, and Solana Ventures and others launched a $150 million fund by the end of 2021. More recently, Framework Ventures has allocated half of its $400M fund to Web3 gaming. As evidence of the industry's expansion, blockchain games and infrastructure attracted over $4 billion in venture capital financing in 2021 alone. Blockchain gaming has grown by 2,000 percent in a year, according to a joint report by DappRadar and the Blockchain Game Alliance (BGA). Although that was prior to the latest crypto meltdown, and the scenario might be quite different right now, the crypto gaming business has already received $2.5 billion in investment this year; if this trend continues, it could reach $10 billion by the end of 2022. The report also states that blockchain games drew 1.22 million unique active wallets (UAW) in March, representing 52% of industry activity. With all of these technologies collaborating to build a self-sustaining ecosystem, the blockchain gaming sector is poised to become a significant income source and probably the first real utility for blockchains outside payments.

What is required for crypto games to become mainstream?

GameFi might expose a big market to crypto, but its games aren't there yet. Ideally, players shouldn't need to realize the game uses NFTs and tokens, at least initially. Gamers shouldn't have to learn about wallets or pay significant amounts until they're hooked. For the optimal user and developer experience, games require application-specific blockchains (ASBs), the best way to scale block space for the next billion users. Cosmos, Avalanche Subnets, Polygon Supernets, and StarkNet Layer 3s all sell block space. Application-specific blockchains provide cheaper costs, fine-tuned performance, transaction isolation, and developer control. Other requirements are:
  • Transactions per second (TPS) — A single popular game will need 1000s of transactions per second.
  • Time to finality — It is critical. No one wants to wait 5 minutes to kick a football.
  • Free gas fees for users — Users will not kick the football if the cost is more than the worth of the action. Ideally, consumers have no idea what gas or transactions are.
  • Strong financial incentives for validators — Gas costs should be used to motivate validators; otherwise, no one will operate a validator node. This is a tricky balance since it contradicts the goal of keeping gas prices low for customers.
  • Ease of development — Game designers should not be required to create their own chain. Distributed consensus is a really difficult problem. The majority of Web2 developers no longer build their own software infrastructure and instead rely on cloud companies.

Why Avalanche for games?

The key advantage of using Avalanche for GameFi is the three-chain structure, with validators and subnets coordinated through the P-Chain. Subnets let projects create their own application-specific blockchains (ASBs) that do not disrupt the rest of the chain; as a result, no single game consumes the whole network bandwidth. GameFi on Avalanche offers blockchain games the best chance to thrive in their intended setting. Avalanche is also great for creating NFTs, which makes digital assets easily available for P2E games or the metaverse. Teams can establish their own localized chains that run independently of other chains, sandboxing their own technology for the benefit of their own projects. Most developers use their own token for gas on their subnet; a subsidised gas fee is also an option. Avalanche allows network developers to utilize whatever virtual machine they want or to create their own: you may use the EVM or any other VM you like. Aside from the EVM and AvalancheVM, Avalanche now provides SpacesVM (key/value storage), BlobVM (binary storage), TimestampVM (a minimum viable VM), and others are in the works. Modularity rules the roost. Observing web2 games moving into web3 through subnets is a great place to start.

Some of our favourite emerging Subnets

It is worth noting that Avalanche gaming developers are taking a Play-and-Earn approach rather than a Play-to-Earn approach. This emphasizes the necessity that the game be enjoyable and long-lasting.

  • Shrapnel, the world’s first blockchain-enabled AAA first-person shooter game, has announced that it will use the Avalanche network as the foundation for its upcoming release. The team plans to establish a subnet devoted to the game using Avalanche’s Subnet capabilities. Shrapnel is creating a novel AAA experience for gamers that puts competitive multiplayer, creative tools, and genuine digital ownership front and centre.
  • TimeShuffle is a play-and-earn turn-based strategy game in which warriors from across history battle in randomly generated battlefield settings. Each player may begin their conquest with a free-to-play hero and advance their heroes as they play, unleashing the full potential of cryptocurrency gaming and the play-and-earn paradigm.
  • Ascenders is a sci-fantasy, open-world action RPG powered by Avalanche with a fully decentralized, player-driven economy. Players may participate in daily tasks for AGC tokens while also producing NFT products and land plots. The game’s development team has concentrated on developing a truly player-centric experience. The first alpha release of the gameplay is scheduled in the coming months.
  • Ragnarok is one of the most hyped initiatives in both the NFT and blockchain gaming sectors. Earlier this year, the official NFT collection debuted on Ethereum. The team is now working on creating one of the most thrilling gaming experiences to hit the Avalanche blockchain, positioning subnets as an alternative to Ethereum. The game will unveil the first 77 in-game playable characters this month. Find many more subnet projects here.

Subnet Disclaimers

  • Games on the blockchain still need to prove themselves as the next big thing. We have yet to see real GameFi adoption, which will put the technology to the test.
  • The biggest drawback of subnets is that there is no Inter-Blockchain Communication (IBC) protocol yet. This means that subnets need to bridge to one another, which is less secure than IBC. Ava Labs announced work on this at the first Avalanche Summit, but it is still in the early stages. For the time being, only projects within the same subnet benefit from shared security.

Conclusion

Overall, blockchain games continue to be one of the most appealing parts of the dApp market. Although demand for blockchain games looks to have peaked, gaming dApps continue to drive most of the industry’s on-chain activity. Notably, subnet games like Crabada and DeFi Kingdoms are still drawing players even in a difficult 2022.

VCs and investors are pouring money into Web3 gaming ventures at an all-time high pace. Furthermore, financial firms like Morgan Stanley have assessed the metaverse’s economic potential to be at least an $8 trillion business. The Sandbox’s second Alpha season, Decentraland’s Fashion Week, and the overwhelming demand for NFT Worlds indicate a positive future for GameFi. However, security risks such as the Ronin bridge vulnerability and the difficulties of attaining full interoperability remind everyone interested that widespread adoption is not yet here. Avalanche Foundation believes that subnets like Shrapnel and TimeShuffle are the solution for the next generation of gaming, thus it launched Avalanche Multiverse last March, a $290 million incentive program to accelerate the growth of the new Internet of Subnets.

Core Research
Networks
Upcoming Upgrades on Solana That Will Improve Network Performance
Solana has announced three main changes in its mitigation plan to address the stability and resilience of the network: QUIC, Stake-Weighted QoS, and Fee Markets.
June 15, 2022
5 min read

Solana has announced three main changes in its mitigation plan to address the stability and resilience of the network:

  1. QUIC
  2. Stake Weighted QoS
  3. Fee Markets

The measures target the intense traffic responsible for two out of the three recent incidents. Although the changes proposed by Solana developers may seem abstract or deeply technical to much of the community, the concepts are not completely new; they are imported from other, already mature systems. In this article, we will try to break down the technicalities and explain them in simple terms.

The current Solana client version for validator nodes (v1.10) already paves the way for some of these improvements to be iterated on until optimal market fit. Fee prioritization is targeted for the v1.11 release, according to the official announcement.

Some Context on Network Communication Protocols

Solana used to adopt the User Datagram Protocol (UDP) for transmitting transactions between nodes in the network. Nodes send transactions through UDP directly to the leader — the staked node responsible for proposing the block in that particular slot — without a previous connection being established. UDP does not handle traffic congestion or delivery confirmation for data. In situations of network congestion, the leader is unable to handle the volume of incoming traffic, which means some packets get dropped. Even at quiet times, some level of packet loss is normal. By sending the same transaction multiple times, users have a greater chance that at least one of their attempts will arrive.

Fig. 1: Illustration of UDP protocol, featuring multiple data transfers that can burden the receiver and lose packets in the way.
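The fire-and-forget behaviour described above is easy to see with Python's standard socket API. This is a minimal sketch: the destination port is arbitrary, and nothing needs to be listening for the send to "succeed":

```python
import socket

# UDP is fire-and-forget: no connection handshake, no delivery confirmation.
# sendto() returns as soon as the datagram is handed to the OS, whether or
# not anything is listening on the destination port (9999 is arbitrary here).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"serialized transaction bytes"
sent = sock.sendto(payload, ("127.0.0.1", 9999))
assert sent == len(payload)  # the OS accepted it; delivery is not guaranteed
sock.close()
```

Because the sender gets no feedback, resending the same transaction several times is the only way to raise the odds of delivery, which is exactly the spam pattern described above.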

In contrast to UDP is the Transmission Control Protocol (TCP). TCP includes more sophisticated features, but for these to work it requires a session (i.e. a connection previously established between the client and the server). The receiver acknowledges (“acks”) packets, and the sender knows when to stop sending packets in case of intense traffic. TCP also allows re-transmitting lost packets: once the sender stops receiving acks, the interpretation is that something was lost, so the sender should slow down.

TCP is not ideal for some use cases though. In particular, it sequences all traffic. If one portion of the data is lost, everything after it needs to wait. That is not great for Solana transactions, which are independent.

Fig. 2: Illustration of TCP protocol, featuring serial data transfer. Lost packets affect subsequent ones since the sender waits for server acknowledgement.

1. QUIC

QUIC is a general-purpose protocol which is used by more than half of all connections from the Chrome web browser to Google’s servers. QUIC is the name of the protocol, not an acronym.

QUIC is an alternative to TCP with similar features: it establishes a session, which enables backpressure to slow the sender down. But it also has a concept of separate streams, so if one transaction gets dropped, it doesn’t block the remaining ones.

Fig. 3: QUIC protocol illustration, representing a mix of features from TCP and UDP.
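The head-of-line-blocking difference can be illustrated with a toy model (not real networking code): one ordered TCP-like stream versus independent QUIC-like streams, with packet 2 lost in transit.

```python
# Toy model (not real protocols): packets 0..4 are sent, packet 2 is lost.
packets = [0, 1, 2, 3, 4]
lost = {2}

# TCP-like: one ordered byte stream. Everything after a lost packet waits
# for retransmission, even if later packets already arrived.
delivered_tcp = []
for p in packets:
    if p in lost:
        break  # head-of-line blocking: 3 and 4 are stuck behind 2
    delivered_tcp.append(p)

# QUIC-like: each transaction rides its own stream, so a loss on one
# stream does not block the others.
streams = {p: (p not in lost) for p in packets}
delivered_quic = [p for p, ok in streams.items() if ok]

print(delivered_tcp)   # [0, 1]
print(delivered_quic)  # [0, 1, 3, 4]
```

Since Solana transactions are independent, the QUIC-like behaviour is the right fit: one dropped transaction should not delay unrelated ones.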

Solana is a permissionless network. Anyone running a Solana client is a “node” in the network and is able to send messages to the leader. Nodes can operate as validators, signing and sending votes, and (or) they can expose their RPC (Remote Procedure Call) interface to receive messages from applications such as wallets and DEXs and forward those to the leader.

The leader listens on a UDP port and RPCs listen on a TCP port. Given the leader schedule is public, sophisticated players with algorithmic strategies (“bots”) are able to send transactions to the leader directly, bypassing any additional RPC nodes that would only increase latency. With the leader being spammed, the network gets congested and that deteriorates performance. The UDP port used by the leader will be replaced by a QUIC port.

2. Stake Weighted QoS

Quality of Service (“QoS”) is the practice of prioritizing certain types of traffic when there is more traffic than the network can handle.

Last January, after Solana faced performance issues as automated trading strategies (aka “liquidator bots”) spammed the network with more than 2 million packets per second, mostly duplicate messages, Anatoly Yakovenko mentioned in a tweet that they would bring the QoS concept to Solana.

The Leader currently tries to process transactions as soon as they arrive. Because IPs are verifiable through QUIC, validators will be able to prioritize and limit the traffic for specific connections. Instead of validators and RPCs blasting transactions at the leader as fast as they can, effectively DoS’ing the leader, they would have a persistent QUIC connection. If the network (IP) gets congested, it will be possible to identify and apply policies to large traffic connections, limiting the number of messages the node can send (“throttle”). These policies are known as QoS.

Internally, stake-weighted QoS means queuing transactions in different channels depending on the sender, weighted by the amount of SOL staked. Non-staked nodes will then be incentivized to send transactions to staked nodes first, instead of sending directly to the leader, for a better chance of finding execution, since excess messages from non-staked nodes will most likely be dropped by the leader.
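A rough sketch of the idea, with made-up stake figures, is to split a leader's per-slot packet budget across connections in proportion to stake (the exact policy in the validator client may differ):

```python
def stake_weighted_budget(total_packets: int, stakes: dict) -> dict:
    """Split a leader's packet budget across connections, weighted by stake.
    Hypothetical illustration of the stake-weighted QoS idea."""
    total_stake = sum(stakes.values())
    return {node: total_packets * stake // total_stake
            for node, stake in stakes.items()}

# Stakes in SOL (made-up numbers); a non-staked node gets no reserved
# capacity, so it is better off routing through a staked node.
stakes = {"validator_a": 3_000_000, "validator_b": 1_000_000, "rpc_no_stake": 0}
print(stake_weighted_budget(100_000, stakes))
# {'validator_a': 75000, 'validator_b': 25000, 'rpc_no_stake': 0}
```

The key property is that a spammer with no stake cannot claim a meaningful share of the leader's inbound capacity.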

According to Anatoly, validators will be responsible for shaping their own traffic and applying policies that prevent abuse. For example, if a particular node sends huge amounts of transactions, even if it is staked, validators can take action and ignore the connections established with this node in order to protect network performance.

3. Fee Markets

Solana fees are currently fixed and charged for each signature required in a transaction (5000 lamports = 0.000005 SOL). If there is high competition in a specific market, users face the risk of not getting transactions executed. With a fixed transaction fee, there is no way to communicate priority or compete by paying more to get their transaction prioritized. Without alternatives, users (usually bots) spam transactions to the leader (and soon-to-be leaders) in hope that at least one of them is successful. In many situations, this behavior generates more traffic than what the network can process.

A priority fee is soon to be included in Solana, allowing users to specify an arbitrary “additional fee” to be collected upon execution of the transaction and its inclusion in a block. This mechanism would not only help the network to prioritize time-sensitive transactions but also tends to reduce the amount of invalid or duplicated messages sent by algorithms since speculative operations can become unprofitable with an increase in the total cost.

The ratio of this fee to the requested compute units (the computational cost to the program to perform all operations) will serve as a transaction’s execution priority weight. This ratio will be used by nodes to prioritize the transactions they send to the leader. Additional fees will be treated identically to the base fee today: 50% of the fees paid will be collected by the leader and 50% will be burned.
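Using the figures above (a 5,000-lamport base fee per signature and the 50/50 leader/burn split), a hypothetical sketch of how the priority weight and fee distribution could be computed:

```python
LAMPORTS_BASE_FEE = 5_000  # per signature, from the article

def priority_weight(additional_fee: int, compute_units: int) -> float:
    """Fee-per-requested-compute-unit ratio used to rank transactions."""
    return additional_fee / compute_units

def fee_distribution(total_fee: int) -> tuple:
    """50% to the leader, 50% burned — same as the base fee today."""
    to_leader = total_fee // 2
    return to_leader, total_fee - to_leader

# Two hypothetical transactions: a small priority fee on a cheap transaction
# can outrank a larger fee on a compute-heavy one.
tx_a = priority_weight(additional_fee=10_000, compute_units=200_000)    # 0.05
tx_b = priority_weight(additional_fee=30_000, compute_units=1_200_000)  # 0.025
assert tx_a > tx_b

total = LAMPORTS_BASE_FEE + 10_000
print(fee_distribution(total))  # (7500, 7500)
```

Ranking by fee per compute unit, rather than by absolute fee, keeps the auction fair between light and heavy transactions.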

At this point, you could imagine several blocks being filled only with transactions targeting an NFT mint. However, there is a time limit for each account to be locked for writing within a single slot (600 to 800 milliseconds). The remaining block space can be filled with transactions writing to different accounts, even if they offer a smaller priority fee. High-priority transactions trying to write to an account that has already reached its limit will be included in the next block.

Each Solana transaction specifies the writable accounts — the portion of the state that will be modified. This allows transactions to be executed in parallel, as long as transactions are independent, i.e. do not access the same accounts. If two transactions write or read to the same account, these two transactions can not be processed in parallel, because they affect the same state.
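A simplified sketch of this conflict rule (account names are hypothetical, and for brevity we treat every listed account as writable, whereas real transactions distinguish reads from writes):

```python
def conflicts(tx_a: set, tx_b: set) -> bool:
    """Two transactions conflict if they touch any common account.
    Simplified: every listed account is treated as writable."""
    return bool(tx_a & tx_b)

# Hypothetical transactions listing the accounts they access.
txs = {
    "mint_nft_1": {"nft_mint", "wallet_1"},
    "mint_nft_2": {"nft_mint", "wallet_2"},   # same hot account
    "usdc_swap":  {"usdc_pool", "wallet_3"},  # independent state
}

# The two mints must run serially; the swap can run in parallel with either.
assert conflicts(txs["mint_nft_1"], txs["mint_nft_2"])
assert not conflicts(txs["mint_nft_1"], txs["usdc_swap"])
```

This is what makes per-account ("hot market") auctions possible: contention is visible up front from the declared account lists.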

The Solana team argues that the priority fee will then behave as parallel auctions, affecting only the “hot market” instead of the global price, allowing the fee to grow for a specific queue of transactions trying to write in that account only.

How does the user know which fee to adopt to get a mint? RPC nodes will need to estimate an adequate fee, most likely using a simple statistical method, for example averaging the actual cost of similar transactions in the previous N blocks, or taking a quantile. The optimal method will depend on the market, and on whether fees for similar transactions are more volatile (high demand) or stable (less demand).
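One such estimator might look like the following sketch, using Python's statistics module over hypothetical fees observed in recent blocks (the method names and thresholds are ours, not anything specified by Solana):

```python
import statistics

def estimate_priority_fee(recent_fees, method="mean", q=0.75):
    """Estimate a priority fee from fees paid by similar transactions in
    the previous N blocks — a sketch of what an RPC node might do."""
    if method == "mean":
        return statistics.mean(recent_fees)
    # Quantile: be willing to outbid a fraction q of recent similar fees.
    return statistics.quantiles(recent_fees, n=100)[int(q * 100) - 1]

fees = [1_000, 1_200, 900, 5_000, 1_100]  # lamports, hypothetical
print(estimate_priority_fee(fees))                     # 1840
print(estimate_priority_fee(fees, method="quantile"))  # ~75th percentile
```

Note how a single outlier (the 5,000-lamport fee) drags the mean upward, which is why a quantile may be the more robust choice in volatile markets.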

In practice, the priority fee can have a global effect if the parallel auctions are not implemented on the validator client. With RPCs and users being responsible for arbitrarily setting it, during periods of intense traffic applications will likely try to get priority even though they do not interact with the “hot market”, causing an increase in the fee price for other, lower-demand dApps.


In Short

This article covered the three pieces Solana is actively working on to deal with congestion issues: changing the communication protocol from UDP to QUIC, adding stake-weighted QoS for transaction prioritization, and introducing a fee market that increases fees under high demand. All three improvements aim to improve the performance of Solana, which has been experiencing degraded performance quite often.

We hope this clarified these concepts and the motivations behind the choices being made. Exploring the Solana source code would be an essential next step to investigate the exact metrics implemented in QoS to select or drop transactions, the mechanism behind the increase (and decrease) of fees, and other questions that remain unanswered.

I would like to thank the Chorus One team for the enlightening discussions and knowledge sharing, especially Ruud van Asseldonk for the technical review, and Xavier Meegan for the support.

Core Research
MEV
Networks
Analyzing MEV Instances on Solana — Part 2
This is the second article of the Solana MEV outlook series.
May 31, 2022
5 min read

Introduction

This is the second article of the Solana MEV outlook series. In this series, we use a subset of transactions to extrapolate which type of Maximal Extractable Value (MEV) is being extracted on the Solana network and by whom.

MEV is an extensive field of research, ranging from opportunities created by network design or application-specific behaviour to trading strategies similar to those applied in traditional financial markets. As a starting point, our attempt is to investigate whether sandwich attacks are happening. In the first article, we examined Orca’s swap transactions searching for evidence of this pattern. Head to Solana MEV Outlook — part 1 for a detailed introduction, goals, challenges and methodology. A similar study is performed in the present article: we are going to look at on-chain data, considering approximately 8 hours of transactions on the Raydium DEX. This is a small sample given the magnitude of 4 × 10⁷ transactions per day, even considering only decentralized exchange (DEX) applications on the Solana ecosystem. The simplification is done to get familiar with the data, extrapolating as much information as we can before extending to a future analysis employing a wider range of transactions.

Raydium DEX

Raydium is a relevant Automated Market Maker (AMM) application on the Solana ecosystem, the second program by number of daily active users and the third in terms of program activity.

Fig. 1: Solana programs activity breakdown, source from solana.fm.

Raydium program offers two different swap instructions:

  1. SwapBaseIn: takes as input the amount of tokens the user wants to swap, and the minimum amount of output tokens needed to avoid excessive slippage.
  2. SwapBaseOut: takes the amount of tokens the user wants to receive, and the maximum amount of input tokens needed to avoid excessive slippage.

The user interface (“UI”) that interacts with the smart contract uses the first instruction type, leaving SwapBaseIn responsible for 99.9% of successfully executed swap instructions:

Fig. 2: Swap instructions from here.

We built a dataset by extracting the inputs from the data byte array passed to the program, and the actual swap token amounts from the instructions contained in the transaction. By comparing the minimum amount of tokens specified in the transaction with the actual amount the user received, we can estimate the maximum slippage tolerance for every transaction. Computing the corresponding slippage, we obtain the histogram:
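The slippage estimate can be sketched as follows. Amounts are hypothetical; for a SwapBaseIn swap the UI derives the minimum output from the expected output and the user's slippage setting, so comparing it against the realized output recovers (a lower bound on) that setting:

```python
def implied_slippage_tolerance(min_amount_out: float, actual_out: float) -> float:
    """Estimate the user's maximum slippage setting from a SwapBaseIn swap.
    The UI computes min_amount_out = expected_out * (1 - slippage); we only
    observe the actual output, so this is a lower-bound estimate."""
    return 1 - min_amount_out / actual_out

# Hypothetical swap: user would accept at least 99 USDC, received 100 USDC.
print(round(implied_slippage_tolerance(99.0, 100.0), 4))  # 0.01 -> the 1% default
```

Since the pool state at transaction-creation time is unknown, the true tolerance can only be higher than this estimate, which matches the caveat below about the 28% figure.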

Fig 3: Number of transactions per slippage.

The default value for slippage on the Raydium App is set to 1%. We can assume that at least 28% of transactions use the default value. Since it is not possible to know the state of the pool when creating the transaction, this number could be a bit higher.

Slippage values of nearly 0% are likely only achieved by sophisticated investors using automated trading strategies. Orca swaps’ histogram, presented in Fig 2.2 of the previous article, shows a peak in transactions with slippage of around 0.1%. On Raydium, a relevant proportion of transactions lies below 0.05%. This suggests that trading strategies with lower risk tolerance, i.e. price-sensitive strategies, correspond to 25% of swap transactions (accumulating the first two bars of the histogram).

Other evidence that automated trading is common on this DEX is that, on average, 40% of transactions fail, mostly because of the tight slippage allowed by user settings.

Fig 4.1: Number of transactions successfully executed (blue) and reverted (gray) by Raydium program. Source: dune.com.
Fig 4.2: Error messages in reverted transactions breakdown. Source: dune.com.

Dataset

We are considering more than 30,000 instructions interacting with the Raydium AMM program, from 02:43:41 to 10:25:21 UTC on 2022–04–06. For statistical purposes, failed transactions are ignored.

Although 114 different liquidity pools are accessed during this period, the SOL/USDC pool is the most traded pool, with 4,000 transactions.

Fig. 5: 40 most relevant pools — representing 75% of all Raydium swap transactions.

The sample contains 1,366 different validators acting as leaders across the more than 35,000 slots we are considering, representing 93% of the total stake and 78% of the total validator population at the time of writing, according to Solana Beach.

Fig. 6: The proportion of slots for each of the 20 most relevant leaders.

Of 5,101 different addresses executing transactions, 10 accounts concentrate 23% of the total transactions. One of the most active accounts on Raydium, Cwy…3tf also appears in the top 5 accounts in Orca DEX.

Fig. 7: Top 10 accounts by number of Raydium swaps

The graph below shows the total number of transactions for accounts with at least two transactions in the same slot. Using this as a proxy to identify automated trading, on average 9 different accounts can be classified as showing:

  • high-frequency behaviour: accounts with 3 successfully executed transactions per second;
  • moderate frequency: accounts with approximately 1 transaction per second.
Fig. 8: number of transactions for the 60 most active accounts with multiple transactions in at least one slot
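The frequency-based classification above could be sketched like this (accounts and counts are made up; the rate thresholds mirror the bullet points):

```python
from collections import Counter

def classify_accounts(txs, window_seconds):
    """Classify accounts by successful-transaction rate — a rough proxy
    for automated trading, mirroring the thresholds described above."""
    rates = {acct: n / window_seconds for acct, n in Counter(txs).items()}
    return {
        acct: ("high-frequency" if r >= 3 else
               "moderate" if r >= 1 else "manual")
        for acct, r in rates.items()
    }

# Hypothetical 10-second window of account observations.
txs = ["bot_a"] * 35 + ["bot_b"] * 12 + ["user_c"] * 2
print(classify_accounts(txs, window_seconds=10))
# {'bot_a': 'high-frequency', 'bot_b': 'moderate', 'user_c': 'manual'}
```

In the real dataset the window is the sampled 8-hour period and the counts come from per-slot groupings, but the thresholding logic is the same.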

We can also look at the pools where these accounts execute most often. It is possible to notice that they tend to specialize in different pools. The table below shows the two most-traded pools for each of the 5 most active addresses:

By deep-diving into account activity by pool, we can see that two accounts concentrate transactions on WSOL/USDT pool; one account is responsible for half of all transactions in the mSOL/USDC pool; most of the transactions in the GENE/RAY pool are done by only one account (Cwy…3tf).

Fig. 9: Transactions owner breakdown for the 5 pools with the highest number of transactions. Each different account is represented by a new color.

Results

Searching for sandwich behaviour means identifying at least 3 transactions executed in the same pool within a short period of time. For the purpose of this study, only consecutive transactions are considered. The strategy requires the first transaction to be in the same direction as the sandwiched transaction, followed by a transaction in the opposite direction of the initial trade, closing out the MEV player’s position.

Fig. 10: 3 steps of a sandwich attack
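A simplified detector for this three-step pattern might look like the sketch below. The data layout and names are ours, not the actual analysis pipeline; swaps are assumed to be the consecutive swaps of one pool within one slot:

```python
def find_sandwich_candidates(swaps):
    """Scan consecutive swaps in one pool for the front-run / victim /
    back-run pattern described above.
    Each swap is (account, direction), direction in {'buy', 'sell'}."""
    candidates = []
    for i in range(len(swaps) - 2):
        (a1, d1), (a2, d2), (a3, d3) = swaps[i:i + 3]
        # Same attacker on both ends, same direction as the victim in front,
        # opposite direction behind to close the position (profit-taking).
        if a1 == a3 and a1 != a2 and d1 == d2 and d3 != d1:
            candidates.append((i, a1))
    return candidates

# Hypothetical slot: attacker buys, victim buys, attacker sells.
slot_swaps = [("atk", "buy"), ("victim", "buy"), ("atk", "sell")]
print(find_sandwich_candidates(slot_swaps))  # [(0, 'atk')]
```

A match is only a candidate: as the results below show, patterns like this can also be produced by arbitrage across venues, so manual inspection is still required.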

The need for price impact implies a dependence on the amount of capital available for every trade. Some MEV strategies can be performed atomically, with a sequence of operations executed in the same transaction; these strategies usually benefit from flash loans, allowing anyone to apply them regardless of the capital they have access to. This is not the case for sandwich attacks, since the profit is only realized after the successful execution of all the transactions (Fig. 10).

As shown in the first article, the amount of capital needed in order to create value depends on the Total Value Locked in the pool — the deeper the liquidity, the more difficult it is to impact the price. Head to Fig. 2.4 of the first article for the results of simulation into the Orca’s SOL/USDC pool. The figure shows the initial capital needed in order to extract a given percentage of the swap.

In the current sample, we found 129 blocks with more than three swaps in the same pool. In most of them the swaps are all in the same direction, i.e. there is no evidence of profit-taking. As shown in Fig. 11 below, SAMO/RAY is the pool with the most occurrences of multiple swaps in the same slot.
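The filtering step behind these counts can be sketched with the standard library: group swaps by (slot, pool), keep groups with more than three swaps, and flag those that mix directions as a crude proxy for profit-taking. Record fields are illustrative.

```python
from collections import defaultdict

def busy_groups(swaps, min_swaps=4):
    """Map (slot, pool) -> swap count and direction mix, for busy groups only."""
    groups = defaultdict(list)
    for slot, pool, direction in swaps:
        groups[(slot, pool)].append(direction)
    return {
        key: {"n": len(dirs), "mixed": len(set(dirs)) > 1}
        for key, dirs in groups.items()
        if len(dirs) >= min_swaps
    }

swaps = [(1, "SAMO/RAY", "buy")] * 4 + [(2, "SOL/USDC", "buy"), (2, "SOL/USDC", "sell")]
print(busy_groups(swaps))  # -> {(1, 'SAMO/RAY'): {'n': 4, 'mixed': False}}
```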

Fig. 11: pools presenting more than 3 swaps in a single slot

When searching for blocks and pools with swaps in opposite directions as a proxy for profit-taking, we are left with 9 occurrences matching a potential sandwich attack pattern, as shown in the table below (Fig. 12). After further investigation of the transactions and the context in which the instructions were executed, it is fair to assume these operations relate to arbitrage between different trading venues or pools.

Fig. 12: slots and pools with more than 3 swaps and evidence of profit-taking

Conclusion

In this report, we assessed the activity of the Raydium DEX. The conclusions are based on a limited amount of data, under the assumption that our sample is comprehensive enough to reflect the general practices involving the dApp.

There is notable activity from automated trading and price-sensitive strategies such as arbitrage, which accounts for 25% of swap transactions. On average, only 40% of transactions execute successfully, and 72% of all reverted transactions fail because of a small slippage tolerance. Approximately 28% of transactions can be classified as manual trading, since they use the default slippage value.

Of the 5,101 different accounts interacting with the Raydium program, 10 accounts concentrate 23% of total transactions. One of the most active accounts on Raydium, Cwy…3tf, also appears among the top 5 accounts by Orca DEX transactions. This same account is responsible for 77% of swaps in the GENE/RAY pool.

There were 9 occurrences of a potential sandwich attack pattern, all discarded after further investigation.

It is important to note that this behaviour depends not only on theoretical possibility but is also heavily biased by market conditions. The results in $13m MEV during Wormhole Incident and $43m Total MEV from Luna/UST Collapse on Solana demonstrate the increase in profit extracted from MEV opportunities during stressful scenarios. Although that study focuses on other strategies and does not mention sandwich attacks, the probability of this strategy occurring can also increase, given the smaller liquidity in pools (TVL) and the occurrence of trades with larger size and slippage tolerance.

This is my first published article. I hope you enjoyed it. If you have questions, leave your comment below and I will be happy to help.

Core Research
MEV
Networks
Analyzing MEV Instances on Solana — Part 1
Solana is a young blockchain, and having a complete picture of what is happening on-chain is a difficult task — especially due to the high number of transactions daily processed.
May 5, 2022
5 min read

Introduction

Solana is a young blockchain, and having a complete picture of what is happening on-chain is a difficult task, especially due to the high number of transactions processed daily. The current rate is around 2,000 TPS, meaning we need to deal with ~10⁸ transactions per day, see Fig. 1.1.

Fig. 1.1: Daily number of transactions over time. Source: solana.fm.

When processing transactions, we cannot know a transaction's status a priori, before querying an RPC node. This means we are forced to process both successful and failed transactions. The failed transactions, most of which come from spamming bots trying to make a profit (e.g. NFT mints, arbitrage, etc.), amount to ~20% of the successful ones. The situation improves slightly if we consider only program activity. Looking only at what happens on Decentralized Exchanges (DEXs), we are still talking about 4x10⁷ transactions per day, see Fig. 1.2. This makes it clear that a big effort is required to assess which type of Maximum Extractable Value (MEV) attack is taking place and who is taking advantage of it, especially because tools like Flashbots do not exist on Solana.

Fig. 1.2: Program activity over time. Source: solana.fm.

In what follows, we estimate what happened on-chain considering only ~5 h of transactions on Orca DEX, from 11:31:41 to 16:34:19 on 2022–03–14. This simplification lets us get familiar with the data, extracting as much information as we can before extending the analysis to a wider range of transactions. It is worth mentioning that Orca is not the program with the highest number of processed instructions, so a more careful analysis should also look into other DEXs; this is left for future study.

The aim of this preliminary analysis is to gain familiarity with the information contained in typical swap transactions. One of our first goals is to determine whether sandwich attacks are happening and, if so, with what frequency. In Section 2 we look at the anatomy of a swap transaction, focusing on the types of sandwich swap in Section 2.1. Section 2.2 is devoted to the "actors" that can perform a sandwich attack. In Section 3 we describe the dataset employed, leaving the results to Section 4. Conclusions are drawn in Section 5.

Section 2: Anatomy of swap transactions

On Solana, transactions are composed of one or more instructions. Each instruction specifies the program that executes it, the accounts involved, and a data byte array that is passed to the program. It is the program's task to interpret the data array and operate on the accounts specified by the instruction. Once a program runs, it can return only two possible outcomes: success or failure. Note that an error return causes the entire transaction to fail immediately. For more details on the general anatomy of a transaction, see the Solana documentation.

To decode each instruction we need to know how the specific program is written. We know that Orca is a Token Swap Program, so we have all the ingredients needed to process the data. Looking at the token swap instruction, we can immediately see that a generic swap takes as input the amount of tokens the user wants to swap and the minimum amount of tokens in output needed to avoid excessive slippage, see Fig. 2.1.

Fig. 2.1: Swap instructions from here.
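The decoding step can be sketched with the standard struct module. A hedged assumption here: the public SPL Token Swap program packs its Swap instruction as a 1-byte tag (1 = Swap) followed by two little-endian u64 fields, amount_in and minimum_amount_out; verify the layout against the program version you are actually decoding.

```python
import struct

def decode_swap(data: bytes):
    """Decode (amount_in, minimum_amount_out) from a Swap instruction's data
    byte array, assuming the tag + two little-endian u64 layout."""
    tag, amount_in, min_amount_out = struct.unpack("<BQQ", data[:17])
    if tag != 1:
        raise ValueError("not a Swap instruction")
    return amount_in, min_amount_out

# Round-trip a synthetic instruction: swap 1 SOL (in lamports) with a floor.
raw = struct.pack("<BQQ", 1, 1_000_000_000, 985_000_000)
print(decode_swap(raw))  # -> (1000000000, 985000000)
```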

The minimum amount of tokens in output is related to the actual amount of tokens in output by the slippage S, i.e.

min_out = out · (1 − S),     (2.1)

from which

S = 1 − min_out / out.     (2.2)

Thus, we can extract the tokens in input and the minimum tokens in output from the data byte array passed to the program, and the actual tokens in output by looking at the instructions contained in the transaction.
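In code, recovering the slippage from the two output amounts, as in Eq. (2.2), is a one-liner:

```python
def slippage(actual_out: float, minimum_out: float) -> float:
    """Slippage per Eq. (2.2): S = 1 - minimum_out / actual_out.
    minimum_out comes from the instruction data; actual_out from the
    executed token transfers."""
    return 1.0 - minimum_out / actual_out

# A swap producing 5,000 tokens with a 4,975-token floor was sent with ~0.5%:
print(round(slippage(5_000, 4_975), 4))  # -> 0.005
```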

Fig. 2.2: Number of transactions per slippage.

By computing the slippage defined in Eq. (2.2) we obtain the histogram in Fig. 2.2. From this picture we can extract several observations. The first is, without doubt, the concentration of transactions around the default slippage values on Orca, i.e. 0.1%, 0.5% and 1%. This makes complete sense, since the common user tends to use default values without spending time on customization. The second is the preference of users for the lowest slippage value. The last concerns the shape of the tails around the default values. A more detailed analysis is needed here, since it is not easy to determine what exactly is contained inside them. The shape surely depends on the bid/ask scatter, which is a pure consequence of market dynamics. The tails may also contain users that selected a non-default slippage. However, one thing is certain: this histogram contains swaps whose slippage can still be extracted. As we will see, from this we can extrapolate an estimate of the annualized revenue from sandwich attacks.

Section 2.1: Type of sandwich swaps

The goal of this report is to search for hints of sandwich swaps happening on Orca DEX. All findings will be used for future research, thus we think it is useful to define what we refer to as sandwich swaps and how can someone take advantage of them.

Let's start with a basic definition. Assume a user, Alice, wants to buy a token X on a DEX that uses an automated market maker (AMM) model. Now assume an adversary, Bob, sees Alice's transaction and creates two transactions of his own, which he inserts before and after Alice's transaction (sandwiching it). In this configuration, Bob first buys the same token X, which pushes up the price for Alice's transaction; the third transaction is Bob's sale of token X (now at a higher price) at a profit, see Fig. 2.3. This mechanism works as long as the price at which Alice buys X remains below the value X·(1+S), where S is the slippage Alice set when she sent the swap transaction to the DEX.

Fig. 2.3: Graphical representation of sandwich transaction.

Since Bob needs to increase the value of token X inside the pool where Alice is performing her swap, the swaps inserted by Bob must target the same pool used by Alice.

In the example above, Bob may not have the capital needed to significantly change the price of X inside the pool. Suppose the pool under scrutiny holds the pair X/Y and the AMM implements a constant product curve. In formulas, we have:

X · Y = k,     (2.3)

where k is the curve invariant. If we set the number of tokens Y in the pool to 1,000,000 and the number of tokens X to 5,000,000, and assume that Alice wants to swap 1,000 tokens Y, the amount of token X in output is:

ΔX = 5,000,000 − k / (1,000,000 + 1,000) ≈ 4,995.00.     (2.4)

It is worth noting that we are not considering the fee usually paid by the user. If Alice sets a slippage of 5%, the transaction will execute as long as the output remains above 4,745.25. This means that if Bob wants to capture this full 5%, he needs an initial capital of roughly 26,000 tokens Y.

Sometimes this capital may be inaccessible, allowing Bob to capture only a portion of the 5% slippage. For example, consider the Orca SOL/USDC pool, with a total value locked (TVL) of $108,982,050.84 at the time of writing. This pool implements a constant product curve, which allows us to use Eqs. (2.3) and (2.4) to simulate a sandwich attack. Fig. 2.4 shows the result of this calculation.

Fig. 2.4: Simulation of a sandwich attack on the SOL/USDC pool. The figure shows the initial capital needed (x-axis) to extract a given percentage of the swap (y-axis).

It is clear that the initial capital required may not be accessible to everyone. Further, it is important to clarify that the result is independent of the swap amount: whatever amount Alice swaps, it is Bob's swap that moves the prices of the tokens inside the pool. The scenario is, however, TVL dependent. If we repeat the same simulation for the Orca ETH/USDC pool, with a TVL of $2,765,189.76, the initial capital needed to extract a higher percentage of Alice's slippage drops drastically, see Fig. 2.5.

Fig. 2.5: Simulation of a sandwich attack on the ETH/USDC pool. The figure shows the initial capital needed (x-axis) to extract a given percentage of the swap (y-axis).

Continuing the example above, consider the case in which Bob has an initial capital of 2,000 tokens Y. If he manages to buy token X with it before Alice's transaction, Alice will obtain an output of 4,975.09 tokens X, which is only 0.4% lower than the original amount defined in Eq. (2.4).
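The numbers in this section can be reproduced with a short constant-product simulation (a sketch, ignoring trading fees), using the same reserves as the example:

```python
# Pool holds 1,000,000 Y and 5,000,000 X; Alice swaps 1,000 Y.
def swap_y_for_x(pool_y: float, pool_x: float, amount_y: float):
    """Swap amount_y of token Y into the pool under x*y = k.
    Returns the updated reserves and the amount of X paid out."""
    k = pool_y * pool_x
    new_y = pool_y + amount_y
    new_x = k / new_y
    return new_y, new_x, pool_x - new_x

# Baseline: Alice alone receives ~4,995.00 X, as in Eq. (2.4).
_, _, alice_alone = swap_y_for_x(1_000_000, 5_000_000, 1_000)

# Bob front-runs with his 2,000 Y, then Alice's swap executes on the moved pool.
y, x, _ = swap_y_for_x(1_000_000, 5_000_000, 2_000)
_, _, alice_after_bob = swap_y_for_x(y, x, 1_000)

print(round(alice_alone, 2), round(alice_after_bob, 2))  # -> 4995.0 4975.09
```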

At this point, Bob has another possibility. He can try to order other users' transactions that buy the same token X after his own transaction but immediately before Alice's swap. In this way, he uses the capital of other users to take advantage of Alice's slippage, even if his own capital is insufficient, see Fig. 2.6. This is of course a more elaborate attack, but a likely one if Bob has control over transaction ordering.

Fig. 2.6: Graphical representation of sandwich transaction when Bob uses other X-buyers before Alice’s transaction to increase the value of X.

Section 2.2 Who are the actors of a sandwich attack?

It is not an easy task to spot the actors behind a sandwich attack on Solana. In principle, the only reliably profitable attackers are the leaders. This is because there is no mempool, and the only ones who know the exact details of the transactions are the validators in charge of writing a block. In this case, it may be easier to spot hints of a sandwich attack: if a leader orders the swap transactions to perform a sandwich, it should include all of them in the same block to avoid an unsuccessful sandwich.

The next suspect is the RPC service the dApp is using. The RPC service is the first to receive the transaction over HTTP, since its role is to look up the current leader's info using the leader schedule and forward the transaction to the leader's Transaction Processing Unit (TPU). In this case, it would be much more difficult to spot hints of sandwiching, since in principle the swap transactions involved can be far apart. The only hook we can use to catch the culprit is to spot surrounding transactions made by the same user, which would be related to the RPC. Finally, the low transaction fees on Solana raise the likelihood that a sandwich attack can happen by chance, simply by spamming transactions in a specific pool. This last approach is clearly the riskiest, since there is no certainty that the sequence of transactions is included in the exact order the attacker originally planned.

Section 3: Dataset description

Before entering the details of the analysis, it is worth mentioning that, according to what is reported on Solana Beach, there are a total of 1,696 active validators. Our sample contains 922 of them, i.e. 54.37% of the total validator population. The table below shows the validators that appear as leaders in the time window we are considering. Given that the likelihood of being selected as leader is proportional to stake, we consider it fair to assume that our sample is a good representation of what is happening on Orca. Indeed, if a validator is running modified software to perform sandwich swaps, its rate of success will be related to the amount of staked tokens, not only to actual MEV opportunities. Further, modifying the validator is not an easy task, so smaller validators will not have the resources to do it. Since we have all 21 validators holding a supermajority plus a good portion of the others (i.e. we are considering half of the current number of active validators), if such a validator exists, its behaviour should be easily spotted in our sample. However, a complete overview of the network requires scrutiny of all validators, without assumptions of that kind. Such an undertaking is beyond the scope of this report, which aims primarily to explore which types of sandwich can be done and how to spot them.

Having clarified this aspect, we first classify the types of swaps performed on the Orca DEX. The table below shows the accounts performing more than two transactions. It is immediately visible that most of the transactions are done by only 2 accounts out of the 78 involved.

As explained in Section 1, we are considering 5 h of transactions on Orca DEX, from 11:31:41 to 16:34:19 on 2022–03–14. This sample contains a total of 12,106 swaps, with the pool distribution shown in Fig. 3.1.

Fig. 3.1: Pool distribution of the swaps employed. Here [aq] stands for Aquafarm, i.e. Orca's yield farming program. The pools denoted as Others are those with fewer than 100 swaps each.

By deep-diving into the swaps, we can see that most of the transactions in the 1SOL/SOL [aq] and 1SOL/USDC [aq] pools are done by only two accounts, see Fig. 3.2. We can also see the presence of some aggregated swaps in the SOL/USDC [aq] and ORCA/USDC [aq] pools.

Fig. 3.2: Same as Fig. 3.1, but considering only the 5 pools with the highest number of transactions. The color legend refers to the number of transactions performed by a defined user.

Section 4: Results

We started by searching for leaders performing sandwich swaps. As described in Section 2.1, a sandwich can happen in two ways. For both, if the surrounding is done by a leader, we should see the transactions under scrutiny included in the same block. This is because, if a leader wants to make a profit, the best strategy is to avoid market fluctuations. Further, if the attacker orders the transactions without completing the surrounding in the same block, there is a non-negligible possibility that another leader reorders transactions, cancelling the attacker's effect.

Looking at slots containing more than 3 swaps in the same pool, we ended up with 6 such slots out of 7,479. Deep-diving into these transactions, we found no trace of a sandwich attack executed within a single block (and therefore by a specific leader). Indeed, each of the transactions involved was made by a different user, giving no evidence of surrounding swaps performed to execute a sandwich attack. The only suspicious series of transactions is included in block #124899704. We checked that the involved accounts interact with the program MEV1HDn99aybER3U3oa9MySSXqoEZNDEQ4miAimTjaW, which appears to be an aggregator for arbitrage opportunities.

As mentioned in Section 2.2, validators are not the only possible actors. To complete the analysis, we therefore also searched for general surrounding transactions, without constraining them to be included in the same block. We find that only 1% of the total swaps are surrounded, again without strong evidence of actual sandwich attacks (see Fig. 4.1 for the percentage distribution). Indeed, looking at those transactions, the amount of tokens exchanged is too low for a sandwich attack (see Sec. 2).

Fig. 4.1: Percentage of surrounding transactions per account.

Before ending this section, it is worth estimating the annual revenue a leader could obtain by taking 50% of the available slippage on swaps with a slippage greater than 1%: roughly 240,000 USD (assuming the attacker is within the list of 21 validators holding a supermajority), see Fig. 4.2. Of course, this is not a precise estimate, since it is an extrapolation from only 5 h of transactions; the actual revenue can differ. Further, this is not an easily accessible amount, for the reasons shown in Sec. 2. Still, this level of revenue clearly motivates a new type of protection that validators should offer to users, especially considering that Orca is not the DEX with the highest number of processed swaps. Since at the moment there is no evidence that swaps are being sandwiched, we will take no action in this direction. Instead, we will continue monitoring different DEXs by taking snapshots over different timeframes, informing our users if a sandwich attack is spotted on Solana.

Fig. 4.2: Annualized revenue from sandwich attacks (per leader) as a function of slippage. The blue dots represent the annualized revenue a leader in the 21-validator supermajority list would obtain by taking 50% of the swaps with an available slippage greater than the value on the x-axis.

Section 5: Conclusion

In this report, we defined two types of sandwich attack that may happen on a given DEX. We further described the possible actors that can perform this type of attack on Solana and how to spot them. We analyzed data from ~5 h of transactions on Orca DEX, from 11:31:41 to 16:34:19 on 2022–03–14 (12,106 swaps). Despite the limited number of transactions employed, we argued why we believe this sample is a fair representation of the entire population.

Our findings show no evidence that sandwich attacks are happening on Solana, considering two possibilities. The first is that a validator is running a modified client "trained" to perform sandwich attacks on Orca. The second is that an RPC is submitting surrounding transactions. We discovered that only 1% of transactions are actually surrounded by the same user, and none of them are included in the same block, excluding the possibility that a leader is taking advantage of the slippage. Deep-diving into these, we found that the amounts exchanged are too low to represent the capital needed to exploit the slippage and submit a profitable sandwich attack.

We also show how the capital needed to make sandwich attacks profitable may not be accessible to everyone, narrowing the circle of possible actors.

Core Research
Networks
The Stakes of Staking (Altair Update)
Big thanks to my colleagues at Chorus One for their contributions to this post, especially Umberto Natale for providing a lot of the data used, full report here.
April 7, 2022
5 min read


TL;DR

  • The Altair upgrade introduced a number of changes to the reward/penalty system for Ethereum: sync committees, incentive reforms to the inactivity leak and block proposals, changes to the rewarded weight of validator duties, and others.
  • An increase in the proposer reward and the new sync committees will contribute to a greater variability of rewards than previously, but also a general increase in opportunities for profit.
  • The rewards and penalties outlined in this analysis make staking a good business endeavour for both validators and delegators, and set the terms for an unstoppable and stable network.

Introduction

Many different industries are using Ethereum to build new decentralized applications.

2021 was the year when this vision stopped being reserved for a small subset of the population with pre-existing capital (investors) or technical expertise (developers), as the popularity of Ethereum reached new heights.

Artists are disrupting traditional notions of value, with OpenSea (the largest NFT marketplace) growing its transaction volume by "over 600x" in a year.

People are organizing self-sustainable communities, as DAO members take control over their own financial freedom and digital identity.

Builders are creating never-before-seen decentralized financial assets, where Ethereum-based Uniswap continues to dominate. But real fun also came to crypto gaming: many are playing Dark Forest, a game that experiments with cutting edge scaling technology.

Most recently, the whole crypto community has come together to aid Ukraine, at the same rate as well-established international organizations.

Bringing Ethereum into the sun and serving all of humanity inevitably requires a scalable, secure, and resilient network.

This post aims to set the stage for Ethereum as it nears its greatest milestone, and to take a peek at what this could mean for both staking providers and delegators after the Altair upgrade. To do this, we have delved into risks, rewards, and the complex network that sits in between.

Designing Proof of Stake (PoS)

The big goals for the future of Ethereum are scattered across its official roadmap.

Understanding this design is key to comprehending the associated risks and rewards of PoS Ethereum. The process of upgrading started in December 2020, when the first piece of the puzzle fell into place: the Beacon Chain went live. This PoS system sets the basic consensus mechanism, assigning the right to create a block through a deterministic lottery process. Staking nodes with a higher balance have a higher probability of being selected. The rewards for staking include block rewards and transaction fees; we explore these further in the following section.

To stake in Ethereum and run a validator, 32 ETH needs to be sent to the Ethereum Deposit Contract, along with two key parameters: 1) the validator public key and 2) the withdrawal credentials for the deposit. Critically, the public key and withdrawal credentials do not need to be controlled by the same entity. This allows for two ways to participate in the protocol: as a validator or as a delegator (individuals who pass the responsibility of validation while still earning a portion of the rewards). Staking providers such as Chorus One offer ETH holders the opportunity to stake their tokens and participate in consensus through its platform.

Because chosen stakers are given exclusive rights to create a block, the protocol must include measures to counteract malicious attack vectors. The implementation of this consensus mechanism relies on three core elements: a fork-choice rule, a concept of finality, and slashing conditions. It is important to note that in PoS networks, slashing is not a necessary incentive for correct behavior by validators but rather an artifact of the particular block rewards and mechanism implemented. Because rewards are based on blocks processed or accepted, there is an incentive for a validator to validate all forks in the chain, even conflicting ones (nothing at stake). Therefore, a slashing rule has to be implemented as a matter of design.

The Altair hard fork of October 2021 introduced additional elements to consensus, namely sync committees. Validators in this committee have the duty of continually signing block headers, allowing a new class of light clients to sync up at very low computational and data cost. The concepts of head of the chain, target of attestation, and source of attestation are critical to finalizing blocks and earning rewards. Checkpoints are set on-chain to achieve these goals: when a checkpoint is finalized, all previous slots are finalized. There is no limit to the number of blocks that can go through this system. A checkpoint can only be finalized once consensus selects the next validators, and the infinite machine starts all over again.

A look into cryptoeconomics

It is likely that you've come across the prime assumption of PoS: "validators will be lazy, take bribes, and try to attack the system unless they are otherwise incentivized not to." Hope for the best, but expect the worst.

You may have also seen floating around different figures for the “estimated APR” for running a validator, and wondered — where does this number even come from? All estimations for returns rest on a set of assumptions, and many published calculations were presented using outdated specs for the Beacon Chain.

So, let’s take a current look. Incentives in Altair come in the form of rewards, penalties, and slashings. Of these three, slashing is the most relevant to validator health. While crypto rewards have been around for years, their complexity and adoption have seen a significant rise in the recent past. Offerings do differ platform by platform, and all carry different kinds of risks.

One of the main conceptual reworks of Altair was in redesigning how validators are rewarded (and penalized). The idea was to make these incentives more systematic and simplify state management. But it also ups the ante on validator responsibilities.

At present, validators are rewarded or penalized depending on whether they fulfill certain duties:

  • Submit an attestation that correctly identifies the head of the chain
  • Submit an attestation that correctly identifies the target
  • Submit an attestation that correctly identifies the source
  • Submit the sync committee signature (for validators in the sync committee)
  • Propose a block (if selected as proposer)

A validator can submit one attestation and propose one block at a time. Depending on their properties, the reward varies. Participation in proposals and in the sync committee is a matter of luck and quite infrequent; attestations, however, are due once per epoch.

How we view rewards while considering risk in staking is a subject of research at Chorus One. This piece aims to help other validators and interested parties understand the main principles they need to follow in order to minimize losses and, in turn, maximize profits in the process of validation. In our study, we found that the current expected annualized reward for an ideal validator (perfect performance) is 5.44%. This decreases to 5.4% when we take into account a less idealized case.

After giving validators a feeling for how much they stand to earn, the following section presents a more practical example and explains how these figures may actually vary.

Risks overview and scenarios

Slashing risk is a type of platform-dependent risk, as platforms that offer a similar service carry common risks. This section covers the different types of penalties and methods to calculate them in certain scenarios.

All formulas presented have been transformed in order to give a more general idea. The risk modeling was done using the actual definition from the Beacon Chain specs (Phase 0 and Altair) and the state of the chain at the time of writing. More on our methods can be found in our full study, linked previously.

Slashing includes all penalties that result in the partial or total loss of a validator's staked assets, ranging from 0% to 100%. Failing to perform validator duties properly (see the last section) leads to being penalized and, in the case of slashable actions, being forcefully ejected from the Beacon Chain for suspected malicious activity. This is done both to protect the validator from further losses and to help the chain finalize.

The main reasons for slashing that validators must be aware of are: proposing two different (conflicting) blocks, submitting two different (conflicting) attestations, and submitting an attestation that completely surrounds or is surrounded by another attestation. If these events are not the result of malicious action, then they must come from a bug or error. To account for this, the amount of stake destroyed is proportional to the number of validators slashed around the same time. If this number is small, it is unlikely to be the result of a coordinated attack, because that would require a high number of validators. These "honest mistakes" are punished lightly, at a minimum of 1 ETH. If, on the other hand, a high number of validators are slashed at the same time, it is assumed to be an attack, resulting in a higher amount of stake being destroyed, up to the full balance of the node.
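A rough sketch of this correlation rule, shaped after the Phase 0 spec. The constants are illustrative assumptions: later forks changed both the minimum-penalty quotient and the proportional multiplier, so treat them as placeholders rather than the live values.

```python
MIN_SLASHING_PENALTY_QUOTIENT = 32    # 32 ETH / 32 = 1 ETH minimum penalty
PROPORTIONAL_SLASHING_MULTIPLIER = 1  # Altair raised this multiplier

def slashing_penalty(effective_balance, total_slashed, total_staked):
    """All amounts in ETH. Initial penalty plus the correlation penalty,
    which scales with the fraction of stake slashed around the same time."""
    initial = effective_balance / MIN_SLASHING_PENALTY_QUOTIENT
    share = min(total_slashed * PROPORTIONAL_SLASHING_MULTIPLIER, total_staked) / total_staked
    return initial + effective_balance * share

# An isolated slashing (32 of 10,000,000 ETH staked) costs barely above the floor:
print(round(slashing_penalty(32, 32, 10_000_000), 4))  # -> 1.0001
# Half the stake slashed together burns half the balance on top of it:
print(slashing_penalty(32, 5_000_000, 10_000_000))     # -> 17.0
```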

There is of course a certain pressure on validators to avoid going down at the same time as other validators. This expectation to decentralize touches on client diversity, but also on sources of truth and hosting for clients. This is a critical point for everyone participating in the Ethereum ecosystem, and one that Chorus One has considered in its design. Back to the topic: these penalties hold whether or not blocks are being finalized (meaning 2/3 of validators, weighted by stake, are online and their votes are being counted). That is the state of normal operations for Ethereum. Anything under that, and the chain can no longer reach agreement, so the inactivity leak mentioned previously comes in to restore balance.

With a clear understanding of the rewards system, estimating the source of possible penalties becomes much simpler: we calculate the attestation reward/penalty delta.

Indeed, if a duty is not fulfilled, the corresponding reward amount is instead deducted from the attester’s balance (the minimum penalty). When a validator is slashed, the unlock date for its stake is delayed by about 36 days. This allows another, potentially much greater, slashing penalty to be applied later, once the chain knows how many validators were slashed around the same time (the further penalty). If an inactivity leak is active, the potential reward drops to 0, so fulfilling the duties only avoids penalties.

Since getting the source vote wrong implies getting the target vote wrong, and getting the target vote wrong implies getting the head vote wrong, the possible penalty scenarios reduce to these:

  • Incorrect source
  • Correct source, incorrect target
  • Correct source and target, incorrect head.
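This reduction can be sketched as a toy function (illustrative only; the real penalty logic lives in the consensus clients):

```python
# Toy encoding of the implication chain: a wrong source vote implies a wrong
# target vote, and a wrong target vote implies a wrong head vote, so only
# three failure cases (plus full success) are reachable.
def vote_outcome(source_ok, target_ok, head_ok):
    if not source_ok:                 # source wrong => target wrong
        target_ok = False
    if not target_ok:                 # target wrong => head wrong
        head_ok = False
    if not source_ok:
        return "incorrect source"
    if not target_ok:
        return "correct source, incorrect target"
    if not head_ok:
        return "correct source and target, incorrect head"
    return "all votes correct"
```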

To quantify the outcomes for performing validator duties, we would like to compare what could be considered a generic validator across a selection of edge scenarios. This example takes into consideration the following values:

Perfect Validator

To start off, let’s look at what this validator would earn if they, and all other validators, had an ideal participation record under the defined specs.

Attestations are rewarded with a portion of the “base reward” for each of the correlated duties, weighted by the specific service provided. In the latest specs, the target vote receives the highest reward, as it is the most important for reaching consensus. The base reward is a constant across the network at any given time.

BASE REWARD (in Gwei) = Effective balance × (Base reward factor / sqrt(Total staked balance in Gwei))
BASE REWARD = 32,000,000,000 × (64 / sqrt(10,000,000,000,000,000)) = 20,480 Gwei = 0.00002048 ETH
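The base reward calculation can be reproduced directly with the article's figures (32 ETH effective balance, base reward factor 64, ~10M ETH staked; these are the text's example values, not live network values):

```python
import math

# Recomputing the base reward with the values used in the text,
# all amounts expressed in Gwei.
GWEI_PER_ETH = 10**9
effective_balance = 32 * GWEI_PER_ETH            # 32 ETH in Gwei
base_reward_factor = 64
total_staked = 10_000_000_000_000_000            # total staked balance, in Gwei

base_reward = effective_balance * base_reward_factor // math.isqrt(total_staked)
print(base_reward)  # 20480 Gwei = 0.00002048 ETH
```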

Following the upgrade, the block proposer is now allocated ⅛ of total rewards, as intended by the Ethereum researchers, rather than ⅛ of ¼ of rewards, as was the case pre-Altair. You may notice the inclusion-delay reward is missing: all attestations are now given specific inclusion deadlines to claim their rewards, so prompt voting is accounted for implicitly.

Since all validators are supposed to attest exactly once per epoch (on a perfectly working network), the number of validators attesting in each slot equals the total number of active validators divided by the number of slots per epoch.

ATTESTING VALIDATORS = ACTIVE VALIDATORS / SLOTS PER EPOCH = 300,000 / 32 = 9,375 validators
TOTAL REWARD = BASE REWARD × ATTESTING VALIDATORS = 20,480 × 9,375 = 192,000,000 Gwei
BLOCK REWARD = TOTAL REWARD / 8 = 24,000,000 Gwei = 0.024 ETH
TARGET REWARD = 26 × TOTAL REWARD / 64 = 78,000,000 Gwei = 0.078 ETH
SOURCE REWARD = 14 × TOTAL REWARD / 64 = 42,000,000 Gwei = 0.042 ETH
HEAD REWARD = 14 × TOTAL REWARD / 64 = 42,000,000 Gwei = 0.042 ETH

Sync committees rotate rather slowly (every 256 epochs, roughly once a day), and selected validators can earn the sync committee reward for each slot in which they participate. Many validators will not actually be selected for this duty within a year.

SYNC COMMITTEE REWARD = 2 × TOTAL REWARD / 64 = 6,000,000 Gwei = 0.006 ETH

Finally, we see the maximum possible reward in an epoch (this number also coincides with the minimum penalty for being offline or failing to fulfill the previous duties):

MAXIMUM REWARD = BLOCK + TARGET + SOURCE + HEAD + SYNC COMMITTEE REWARDS = 192,000,000 Gwei = 0.192 ETH
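The per-duty split can be recomputed from the Altair reward weights (proposer 8, target 26, source 14, head 14, sync committee 2, out of a denominator of 64):

```python
# Recomputing the per-duty split from the Altair reward weights.
# TOTAL REWARD = 20,480 Gwei base reward * 9,375 attesting validators.
total_reward = 20_480 * 9_375   # 192,000,000 Gwei

weights = {"block": 8, "target": 26, "source": 14, "head": 14, "sync": 2}
rewards_gwei = {duty: w * total_reward // 64 for duty, w in weights.items()}

for duty, amount in rewards_gwei.items():
    print(f"{duty}: {amount:,} Gwei = {amount / 10**9} ETH")
print(f"sum: {sum(rewards_gwei.values()):,} Gwei")  # 192,000,000 Gwei = 0.192 ETH
```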

It is important to note that there is still a potential variation of a few percent in this reward over the course of a year due to sheer luck (e.g. the probability of being chosen to propose, of being in the sync committee, or of being offline exactly when selected). This applies even in this ideal case, where the validator performs all duties perfectly. The effect increases as the validator set grows, purely through probability. Although not worrying as an investment risk (marginal differences should even out over the course of a year), it should be kept in mind as we delve into the actual performance of validators on the network.

If we were to expand this timeline to a year, the expected reward for this single validator sits at around 1.7428 ETH per year, which corresponds to the 5.44% APY we mentioned in the previous section. Over a long time horizon, a validator can optimally earn one base reward per epoch.

Realistic Validator

However, to get closer to modeling real-world rewards, we must consider the impact of less-than-perfect validator performance.

As we learned previously, validator rewards increase the better the network as a whole behaves. This disincentivizes malicious behavior, but also means rewards can be reduced by external factors. One option would be a model covering every reason a validator might fail to attest, produce blocks, or propagate; here, instead, we ask: what happens if we assume that 99.25% of active validators (a fair figure in practice) are actually attesting? We also make the more conservative assumption that our validator is online 99.9% of the time.
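A back-of-the-envelope way to see the resulting reduction, under the simplifying assumption that annual rewards scale roughly linearly with overall network participation and with our own uptime (the full model is more involved):

```python
# Simplified estimate of the "realistic" reduction: scale the ideal annual
# reward by network participation and our own uptime.
ideal_annual = 1.7428            # ETH/year for the perfect validator above
network_participation = 0.9925   # fraction of active validators attesting
own_uptime = 0.999               # fraction of time our validator is online

expected_annual = ideal_annual * network_participation * own_uptime
reduction_pct = (1 - expected_annual / ideal_annual) * 100
print(round(expected_annual, 3))  # ~1.728 ETH/year
print(round(reduction_pct, 2))    # ~0.85% reduction, in line with the ~0.8% figure
```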

As we can see, in this new realistic scenario, the total distribution shifts slightly. The expected annualized reward suffers a reduction of about 0.8% and the resulting expected APY reduces to 5.4%, as we had mentioned. The probability of certain events happening plays a huge part in this scenario, so this is just a starting point to analyze.

Slashed Validator

Next, we wanted to estimate what would happen if our validator were caught committing a slashable offense, one of those previously outlined as resulting in substantial loss of stake. To do this, we assume that 1,000 validators, including ours, simultaneously sign two different blocks. In this case, each validator involved suffers three penalties:

  • A minimum penalty of 0.5 ETH
  • A penalty that depends on the number of double-signing validators, here 0.2197 ETH
  • The penalty for missed attestations (wrong source and target) over the 36 days of delay, corresponding to 0.1086 ETH

This corresponds to a total loss of 0.8282 ETH. It is worth noting that this slashed amount increases with the number of validators slashed at the same time, as discussed in the slashing overview.
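As a rough cross-check, the two main slashing components can be sketched using the Altair spec constants (MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR = 64, PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR = 2); the exact figures depend on the assumed validator set, so they differ slightly from the article's:

```python
# Sketch of the two main slashing penalties under the Altair spec constants.
GWEI = 10**9
effective_balance = 32 * GWEI
total_balance = 300_000 * 32 * GWEI      # whole validator set, in Gwei
slashed_balance = 1_000 * 32 * GWEI      # 1,000 validators slashed together

# Immediate penalty applied when the slashing is included on-chain.
initial_penalty = effective_balance // 64                  # 0.5 ETH

# Correlation penalty applied later, scaled by how much stake
# was slashed around the same time.
adjusted = min(slashed_balance * 2, total_balance)
correlation_penalty = effective_balance * adjusted // total_balance

print(initial_penalty / GWEI)       # 0.5 ETH
print(correlation_penalty / GWEI)   # ~0.213 ETH (the article uses 0.2197)
```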

Conclusion

PoS Ethereum is a highly complex and elaborate system. It is thoughtfully designed, but can be difficult to fully grasp from a validator’s perspective, which can make staking seem uncertain and unpredictable. A substantial amount of ETH is yet to be staked, so network participation and security must remain a priority going forward.

To make sense of the opportunities presented by staking, we wanted to explore the risks native to Ethereum and how those risks stack up against the rewards offered after the Altair upgrade. Hopefully this article has clarified why and how these rewards can vary, from small changes in state to bigger events, and shown that staking is a profitable business to take part in over the long term. As our analysis found, the profit of a single validator is around 1.7428 ETH per year or, in percentage terms, a 5.44% APY.

Based on the analysis performed, we find that the most realistic impact is low enough for self-cover to be a viable option, but not low enough to be completely trivial. We have identified the most relevant scenarios to come up with this conclusion. Additionally, we have found that risk can be significantly reduced by non-financial actions, such as promoting validator diversity and operator distribution, as well as putting in place mechanisms to maintain high validation quality standards. You can read our full report here.

Taking Ethereum from the individual to the masses will require a set of tools that accelerate the process of setting-up a validator whilst maintaining the same level of security and protection. At Chorus One we are working to make this a reality through our infrastructure services, and we are preparing to launch new services in the near future that take this to the next level. To learn more, please reach out to research@chorus.one.

Core Research
Networks
Stargaze — Pioneering Interchain NFTs for Web3
Stargaze is an interchain NFT marketplace that solves many problems that exist in NFT marketplaces today.
December 8, 2021
5 min read

Stargaze is an interchain NFT marketplace that solves many problems that exist in NFT marketplaces today. Since January 1 2021, average daily NFT sales have gone from ~$300,000 USD to $73,000,000 USD as of November 26 2021 (a 24,333% increase). Currently, most NFT sales occur on Ethereum, which has popular marketplaces such as OpenSea, Rarible and Sorare. Like many things in crypto, adoption of a particular primitive tends to start on Ethereum and then expands to other blockchains when users start experiencing bottlenecks. Ronin, Wax, Solana and Flow are the four blockchains that trail Ethereum in 24hr NFT sales currently (as of November 26 2021). Blockchains that trail Ethereum in NFT sales address scalability issues that arise from Ethereum’s network congestion. However, many NFT marketplaces that exist on competing blockchains enforce restrictions on how NFT projects can utilise them. With the advent of Stargaze, the Cosmos ecosystem has a dedicated zone for NFTs that does not suffer from scalability issues whilst differentiating from existing NFT marketplaces by being more secure, decentralised, transparent and flexible.

Background

Stargaze is a fully decentralised NFT marketplace in the Cosmos ecosystem, which launched Mainnet Phase 0 on October 30th 2021. Recently, Stargaze announced 25% of their token supply will be ‘fairdropped’ to ATOM and OSMO stakers + to Stargaze validator delegators on Cosmos, Osmosis & Regen. For those who did not qualify for the airdrop, Stargaze is offering early adopters the chance to purchase STARS in a Liquidity Bootstrapping Pool (LBP) held in Osmosis as part of Mainnet Phase 1. The construction of the STARS / OSMO LBP is first-of-its-kind, as Stargaze proposed to borrow OSMO to kickstart the initial STARS / OSMO pool weights. The borrowed OSMO will be returned at the end of the LBP when STARS / OSMO weights are 50/50 and STARS has achieved price discovery. After the LBP has concluded, Stargaze will activate inflation in Mainnet Phase 2 and delegators will have the opportunity to earn staking rewards for securing the network. Finally, Stargaze will go fully-live with their decentralised NFT marketplace as part of Mainnet Phase 3 in Q1 2022, unleashing unmatched economic freedom for creators, stellar incentives for curators and superior security for NFT traders.

Problems That Exist in NFT Marketplaces Today

There are a number of issues that exist in NFT marketplaces today, such as centralised curation, poor security, difficult upload workflows, limited flexibility, high gas fees, scams, opaque marketplace contracts and royalty restrictions.

In September 2021, the Head of Product at OpenSea used internal information to buy NFTs before they were featured on the homepage and ‘flipped’ them for a profit once featured, which in traditional finance would be considered insider trading. This is an outcome of OpenSea being non-transparent and centralised, and could have been mitigated if NFT curation on OpenSea were decentralised. In the same month, a critical security vulnerability was disclosed to OpenSea: attackers airdropped SVG files to OpenSea users which, if signed by a user upon opening (even when opened innocently on the OpenSea domain), would give an attacker full access to the funds in the wallet from which the malicious NFT was being viewed. On top of these evident issues, OpenSea also restricts NFT projects to a maximum of 10% royalties on sales. Not to mention that at current ETH gas prices (124 gwei), it costs a minimum of $200 to buy or sell an NFT on OpenSea, which prices out a majority of retail users. (High gas prices on Ethereum can, however, minimise scam collections, which are more commonplace on cheaper blockchains like Solana.) Metaplex, a major NFT platform on Solana, has its own issues when it comes to difficult NFT upload workflows. Finally, many existing NFT marketplaces are not open-source, which increases the risks of interacting with their smart contracts (as users have to rely on a single auditing party).

So, what if I told you that a new NFT marketplace is emerging in the Cosmos ecosystem that offers high-quality security, decentralised curation, simple upload workflows, maximum flexibility, low transaction fees, open-source contracts, vetted projects and unlimited customisation of economic parameters?

Stargaze Shines Brighter Than Any Other NFT Marketplace

Security

Stargaze is opting to build out a zone using the Cosmos SDK, which enables the network to have an unparalleled level of security and customisation vs existing NFT marketplaces. The Cosmos SDK is built with capabilities in mind, which capitalises on the principle of least authority to minimise possible exploits at the execution layer. As Stargaze is its own sovereign chain, it also has 100 reputable validators securing it, all of which specialise solely in verifying transactions that occur in the zone and can react quickly to upgrade the network to enhance its performance and/or security. This is completely different from NFT marketplaces on Ethereum and Solana, which are built as applications and rely on validators to secure the underlying network, as opposed to the application itself being in full control of its own security. Separately, the Stargaze NFT marketplace is built using CosmWasm, which is orders of magnitude more secure than the Ethereum Virtual Machine (EVM) because EVM attack vectors such as re-entrancy are not possible. All in all, Stargaze leveraging the Cosmos SDK and CosmWasm ensures the network is secure and reliable.

Decentralised Curation

Stargaze introduces a new type of ecosystem actor into their NFT marketplace, namely the CurationDAO. The CurationDAO in Stargaze is responsible for curating what artwork can be traded in the marketplace. The DAO is membership-based and governance-driven, ensuring an open and transparent system is in place for the selection of artwork in the marketplace. Stargaze governance may incentivise the CurationDAO by directing an amount of STARS from emitted inflation to reward their work. Having a DAO that curates what is available on the Stargaze marketplace results in better due diligence of projects and reduces the surface area for scams. Stargaze users (both buyers and sellers) can be expected to benefit from the CurationDAO too, as only legitimate projects will be tradeable, which should lead to more liquid markets.

Maximum Flexibility

The Stargaze marketplace has a built-in feature that gives NFT projects the flexibility to choose what type of launch they would like to have (e.g. first-come-first-served mint, auction over t periods, etc). The flexibility of launch options offered by Stargaze allows NFT projects to satisfy demands of their community by working closely with them to determine what type of launch is fairest. Stargaze being a sovereign chain also lets governance exercise a high-level of customisation on protocol parameters, which is beneficial for keeping the network competitive in the long-run. For example, governance could vote on specific network upgrades proposed to improve the performance of the network (which would not be possible in an NFT marketplace that existed as an application). In turn, Stargaze can be much more adaptive than existing NFT marketplaces because governance can vote on introducing changes at the network level to give it a competitive edge.

Simple Upload Workflows

The Stargaze marketplace provides a simple interface for NFT projects to upload files and add metadata to, which uploads to the InterPlanetary File System (IPFS) in a matter of seconds. Files uploaded in Stargaze are immediately and permanently stored in a distributed and resilient system. The user experience is seamless, as the entire storage process is abstracted away from the end-user via nft.storage. All Stargaze users can rest assured that the NFTs they own are permanently available, unlike some NFT collections that rely on a third party to host the file the NFT points to.

Low Transaction Fees

Fees on Stargaze are negligible compared to what can be seen on Ethereum, so the network is accessible to all types of users (not just those with a high amount of initial capital). It can be expected that fees will be just high enough to prevent spam but low enough to encourage frequent use. Low fees in an NFT marketplace enable more growth and innovation as buyers have greater purchasing power and projects can release more NFTs without transaction fee concerns.

Customisation of Economic Parameters

Another unique layer of customisation available on Stargaze vs other NFT marketplaces is that of staking on the native network. One could imagine utility being introduced to STARS that would not be possible on existing NFT marketplaces like OpenSea. For example, in the future users might be able to deposit their STARS into a liquid staking protocol to receive the equivalent staked STARS (stSTARS) that could be used to bid on NFTs (i.e. users could earn yield whilst bidding on NFTs). It might also be a requirement to stake STARS in order to join the CurationDAO (the DAO responsible for selecting what collections are released on Stargaze). Or perhaps, users could stake a minimum amount of STARS in a given time period to be eligible to vote on what collections are reviewed by the CurationDAO. Another option could be to stake some amount of STARS in order to have a higher chance of getting into lottery-based NFT launches. There are limitless possibilities that could be thought of to add utility to STARS. On the flipside of staking STARS, the inflation emitted by Stargaze could also be used to reward creators of NFT projects. Once an NFT project has been vetted by the CurationDAO, it might be eligible to earn x% of staking rewards reserved for creators. In other words, NFT project creators might be entitled to a double source of income in Stargaze — royalties coming from trading of their NFTs on the marketplace + their proportion of a steady stream of STARS emitted every block directed towards creators.

Open-Source Contracts

Stargaze code is fully open-source and the core team recently released a LBP simulator that other projects in the Cosmos ecosystem can use to experiment with tweaking parameters before launching an LBP on Osmosis. The Stargaze code is available in a repository on Github for anyone to see, which means anyone can audit the code to ensure there are no vulnerabilities and engineers can easily build on top of existing code to enhance the platform in a collaborative way.

To conclude, Stargaze is a marketplace that exemplifies security, decentralisation, transparency and flexibility, which differentiates it from any existing competition from NFT marketplaces. Due to the nascency of the NFT space, there are many existing inefficiencies in NFT marketplaces across a multitude of blockchains. Stargaze has an opportunity to capture a large segment of a growing NFT market by offering distinct products and services for stakeholders such as NFT projects, curators and users. Novel web3 products will be built out that incorporate Stargaze NFTs in ways we cannot possibly imagine. A new era of interchain NFTs is upon us, enter Stargaze.

Written by Xavier Meegan, Research Analyst at Chorus One

About Chorus One

Chorus One is offering staking services and building tools that advance the Proof-of-Stake ecosystem.

Website: https://chorus.one
Twitter: https://twitter.com/chorusone
Telegram: https://t.me/chorusone
Newsletter: https://substack.chorusone.com

About Stargaze

Website: https://stargaze.zone/
Twitter: https://twitter.com/StargazeZone
Stargaze LBP Details: https://gov.osmosis.zone/proposal/discussion/2882-details-and-parameters-of-stargaze-lbp-on-osmosis/

Core Research
Networks
Bootstrapping Liquidity for Lido for Solana
Lido for Solana launched about a month ago and so far north of $200m worth of SOL has already been staked with Lido.
October 8, 2021
5 min read

800,000 LDO and many more rewards are live on Lido for Solana and its DeFi integrations

Lido for Solana launched about a month ago and so far north of $200m worth of SOL has already been staked with Lido. Today, we are glad to announce that further liquidity pools and the first liquidity rewards in LDO tokens bridged from Ethereum will start to be distributed.

Holders of stSOL can now supply liquidity to pools like stSOL-SOL, stSOL-USDC, and even stSOL-wstETH

Users providing liquidity to pools will be rewarded in LDO and, for some pools, tokens from our partners (ORCA for the Orca pool and MER for the Mercurial Finance pool). In addition, LPs will also collect a portion of pool swap fees and accrue value in their stSOL tokens in accordance with Lido for Solana’s staking APR.

As promised, we have partnered with various AMMs to utilize stSOL — the liquid representation of your SOL stake in Lido. To bootstrap and incentivize liquidity providers, Lido has initiated the formation of the various pools. Holders of stSOL can now supply liquidity to pools like stSOL-SOL, stSOL-USDC, and even stSOL-wstETH — a first-of-its-kind liquidity pool with two value-accruing Lido liquid staking assets, with wstETH being bridged via Wormhole’s decentralized validator set.

800,000 LDO will be distributed as LP rewards over 2 months on Solana AMMs

The following list contains the current stSOL liquidity integrations:

Orca

Orca | The DEX for people, not programs

Orca is the easiest, fastest, and most user-friendly cryptocurrency exchange on the Solana blockchain.

www.orca.so

Orca has launched a stSOL-wstETH pool (wstETH being the wrapped version of Lido’s stETH). This is especially good news for stETH holders: now, in addition to earning rewards by staking ETH and SOL, you get additional yield by adding liquidity to the wstETH-stSOL pool on Orca. Liquidity providers on Orca will earn 250,000 LDO, supplemented by about 35,000 ORCA, over the initial 8 weeks of this pool being live.

This first-of-its-kind liquidity pool is a very cool DeFi product! Not only is it composed of two staked assets earning staking rewards, but it also has one of these bridged over to Solana from Ethereum in a decentralized way, highlighting the power of cross-chain DeFi!

To participate in the Orca pool visit the guide linked below.

Wormhole Transfer and Orca Pool Guide | Lido for Solana

This is a step-by-step guide on providing liquidity to the following Orca Pool — stSOL-wstETH to earn more rewards…

docs.solana.lido.fi

Guide — https://docs.solana.lido.fi/staking/Orca-pool-Wormhole-guide/
Make sure to double dip after you add liquidity to the Orca Pool

Mercurial

The amazing Mercurial Finance team went live with a stSOL/SOL pool that will use our internal price oracle to create a maximally efficient liquidity pool. Providers of liquidity to Mercurial will earn 150,000 LDO and matched MER rewards on top of the swap rewards while resting assured that their passive LP position is not exposed to impermanent loss. Read more about this integration.

Introducing Our First Non-Pegged Stable Pool: Lido x Mercurial

In our previous blog post, we introduced several innovative AMM systems we are bringing to the market. Today, we are…

blog.mercurial.finance

Raydium

We’ve launched a stSOL-USDC pool in collaboration with Raydium. Providers of liquidity to this pool will collect 250,000 LDO over 2 months in addition to the LP rewards from swaps on the OG of decentralized exchanges that integrates with Solana’s order book DEX Serum.

Saber

Finally, Saber, the leading cross-chain stablecoin and wrapped-assets exchange on Solana, has launched the stSOL-SOL pool, which currently holds a TVL of $160M. Liquidity providers stand to gain 150,000 LDO in addition to the LP rewards and SBR yields for this pool; these rewards will be activated once Saber supports cross-incentivization. The stSOL-SOL Saber yield farm can be found here.

LDO Incentive Overview

Lido DAO in partnership with Lido for Solana multisig has transferred LDO incentives from Ethereum to Solana by using the decentralized Wormhole v2 token bridge.

As listed above, 800,000 LDO will be distributed as LP rewards over 2 months on Solana AMMs to bootstrap liquidity for SOL.

  • 250,000 LDO for stSOL/wstETH on Orca co-incentivized by ORCA
  • 250,000 LDO for stSOL/USDC on Raydium
  • 150,000 LDO for stSOL/SOL on Saber co-incentivized by SBR
  • 150,000 LDO for stSOL/SOL on Mercurial Finance co-incentivized by MER

Keep a lookout for this and further upcoming integrations on the liquid staking page of the Chorus One website.

Chorus One

Get stSOL and passively earn staking rewards. Put your stSOL to work in DeFi and compound your yield.

chorus.one

About Chorus One

Chorus One is offering staking services and building protocols and tools to advance the Proof-of-Stake ecosystem.

Website: https://chorus.one
Twitter: https://twitter.com/chorusone
Telegram: https://t.me/chorusone
Newsletter: https://substack.chorusone.com

Core Research
Towards Multisig Administration in Lido for Solana
Lido for Solana is governed by the Lido Decentralized Autonomous Organization (Lido DAO).
August 20, 2021
5 min read

The ways in which multisig reduces trust surfaces and speeds up project execution

Lido for Solana is governed by the Lido Decentralized Autonomous Organization (Lido DAO). Members of the DAO — holders of the LDO governance token — can vote on high-level proposals, such as whether to expand to a new chain. For day-to-day tasks, we have a much more narrowly scoped need for somebody to execute privileged operations: an administrator.

The administrator rights reside with a 4-out-of-7 multisig that consists of established validators and ecosystem partners. Last week, we successfully set up the multisig on Lido for Solana Testnet. In the coming days, the same will repeat for the mainnet launch, beyond which all new proposals by Lido DAO will be processed via this multisig structure.

This post explores why multisig is important in making Lido for Solana secure and efficient and the way forward for governance in Lido for Solana.

The concept of multisig

Multi-signature (multisig) is a digital signature scheme that allows a group of users to jointly authorize a single transaction. The transaction could be a governance proposal, a snapshot vote, or even a simple fund transfer instruction. A common way to describe a multisig setup is as m-of-n: given n parties, each with their own private key, at least m of them must sign for a transaction to execute. For example, a multisig with 7 members in the group that requires 4 signatures for a transaction to be fully signed is termed a 4-of-7 multisig.
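The m-of-n rule can be sketched as a toy approval check (illustrative only: a real multisig program verifies cryptographic signatures on-chain, and the member names here are hypothetical):

```python
# Toy m-of-n approval check: a transaction passes once at least
# `threshold` distinct members have approved it.
def is_approved(approvals, members, threshold):
    """Return True if at least `threshold` distinct members approved."""
    valid = approvals & members      # ignore approvals from non-members
    return len(valid) >= threshold

members = {f"member{i}" for i in range(1, 8)}          # n = 7 members
# 4-of-7: four member approvals suffice...
print(is_approved({"member1", "member2", "member3", "member4"}, members, 4))  # True
# ...but three do not, even with an outsider "signature" added.
print(is_approved({"member1", "member2", "member3", "outsider"}, members, 4))  # False
```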

The need for a multisig administration

Before we answer the question — why do we need multisig administration? — let us first understand how it supplements DAO governance.

DAO Governance

In a DAO governance model, decisions get executed automatically through smart contracts as a result of LDO governance token holders voting on these decisions. This results in a decentralized governance model and eliminates dependence on a centralized authority to execute decisions thereby removing the risk of a single point of failure.

On-chain DAO Governance

However, in the case of Lido for Solana, even though decisions are taken by the Lido DAO, they are executed by the multisig administration.

DAO takes decisions | Multisig executes them

To understand why offloading decision execution to a multisig administration is a good approach, let’s look at the different administration methods possible in such a scenario:

  1. A single person could act as the administrator. This has a very low overhead, and the administrator can move quickly when there is a need to deploy a critical bug fix. However, it also places a high degree of trust in a single person.
  2. On the opposite side of the spectrum, a DAO program could act as the administrator. Administrative tasks could only be executed after a majority of LDO token holders approve. This is decentralized, but it makes it very difficult to act quickly when needed.

A good middle ground between these two extremes is multisig, a program that executes administrative tasks after m out of n members have approved. For m greater than one, no single party can unilaterally execute administrative tasks. At the same time, we only need to coordinate with m parties (instead of a majority of LDO holders) to get something done.

The benefits of multisig don’t end here. Using a multisig eliminates a lot of concerns that a typical user might have while investing. Let’s take a look at some of the other problem areas that the use of a multisig addresses.

1. Reducing points of trust

Can I trust the creators of the program to not change critical parameters of their own accord?

There is always the risk that an administrator (the authority that executes the DAO’s decisions) can start executing decisions arbitrarily. By including multiple parties in the multisig, we reduce the points of trust and make the decision execution more decentralized.

2. Execution Pace

Can Lido for Solana perform program upgrades quickly, in case of a critical bug?

A pitfall of on-chain governance is that in the case a critical bug-fix is required, achieving consensus on-chain could prove to be too slow and very costly as a result.

A completely decentralized model of governance slows down project execution, especially if a project is in its initial stages. There is always a tradeoff between the ease of execution and the degree of decentralization. However, that does not mean that one should do away with decentralization completely.

A governance model carried out by a multisig administration is the perfect compromise for a project like Lido for Solana. This lends it speed to execute decisions quickly in the earlier stages and also mitigates the risk of delayed fixing of critical bugs.

3. Decentralized program upgrades

Who decides which upgrades will happen in the future and can I trust them to remain benevolent?

Decision on Program Upgrades
The multisig decides on program upgrades. To understand why this is a reasonable solution, we need to take a look at the two possible extreme cases.

1) Single upgrade authority — In Solana the upgrade authority — the address that can sign upgrades — has a lot of power. A single upgrade authority could upgrade programs maliciously at will. For example, a malicious upgrade authority could upload a new version of the Lido program that withdraws all Lido funds into some address and runs away with the funds!

2) No upgrades allowed — On the other hand, if we don’t allow the program to be upgraded at all, and then if it turns out to contain a critical bug, we can’t fix it.

So, a multisig is a good middle ground, where no single entity can take control over the programs and their funds, but we can still enable upgrades.

Trusting Multisig to remain benevolent
The DAO can be trusted because the Lido DAO is large and decentralized, and consists of stakeholders who are aligned long-term. The proposals they vote positively on are by definition aligned with the interests of the stakeholders.

The multisig executes the decisions taken by the DAO. The multisig can be trusted because its participants are all reputable industry partners; their reputation is at stake if they suddenly go rogue. Additionally, no single multisig member has anything to gain by going rogue.

4. Cross-Chain Governance Complications

Why can’t Lido DAO’s proposals be executed directly on-chain?

Lido DAO uses Ethereum for governance, so implementing its decisions on the Solana blockchain requires cross-chain execution. Cross-chain governance, at this point, is not mature or fast enough to be a feasible solution.

The role of the multisig, then, is to execute the decisions made by the Lido DAO. The governance authority, Lido DAO, sets the long-term goals and decides on major proposals. The administrator, in this case the multisig, then upgrades the program accordingly and changes its parameters.

Governance — Lido DAO
Administration — Multisig

5. Transparency

Is the source code public and has it been verified that the Lido program is built from that source code?

It is imperative for users, who invest their SOL in Lido, to be sure that the Lido program does not contain any backdoors or hidden features that might hurt their investments. One way to be sure of this is to know that the multisig owners have verified that the Lido and multisig programs were built from the source code that is publicly available.

Furthermore, users can verify this themselves if they wish to do so.

6. Credibility

How can I trust the parties involved in this multisig?

Another aspect of transparency in Lido for Solana is that we have made public the names of all 7 organizations that take part in the multisig ceremony. By doing so, users know which parties control the program and can decide whether they trust these parties. We bolster our users' trust by including only reputable participants and by making sure that this is public information.

Multisig Ceremony

The multisig ceremony is the process by which the multisig executes decisions. At a high level, it works as a series of steps.

  1. Build a Solana transaction to propose
  2. Wrap the transaction in a multisig transaction (instead of signing it with a wallet and executing it, as we normally would)
  3. Sign and broadcast the wrapped transaction to the blockchain
  4. Notify the other N-1 signers to review the transaction
  5. The signers sign and submit their approval transactions to the blockchain
  6. When the multisig transaction has enough approvals, anybody (usually the last party to approve) can step in and execute the transaction

As explained earlier, multisig programs require multiple signatures to approve a transaction. This allows the signers to review an action on the blockchain before it is executed — making for decentralized governance. Chorus One is using the Serum Multisig program to introduce decentralization in Lido for Solana. The Multisig that we have set up has 7 participants and requires at least 4 of them to sign for a transaction to be approved.
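The ceremony steps above can be sketched as a toy flow. Again, this is an illustrative assumption of how the propose/approve/execute stages fit together, not the Serum Multisig program's actual interface; the function names and the `MultisigTransaction` type are ours.

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for the 7 signers and the 4-of-7 threshold.
OWNERS = {"Staking Facilities", "Figment", "Chorus One", "ChainLayer",
          "P2P", "Saber", "Mercurial"}
THRESHOLD = 4

@dataclass
class MultisigTransaction:
    instruction: str                     # the wrapped Solana instruction (stand-in)
    approvals: set = field(default_factory=set)
    executed: bool = False

def propose(instruction: str) -> MultisigTransaction:
    # Steps 1-3: build the transaction, wrap it in a multisig
    # transaction, and broadcast the wrapper to the chain.
    return MultisigTransaction(instruction)

def approve(tx: MultisigTransaction, owner: str) -> None:
    # Steps 4-5: each notified signer reviews the transaction and
    # submits an approval.
    if owner not in OWNERS:
        raise ValueError("not a multisig owner")
    tx.approvals.add(owner)

def execute(tx: MultisigTransaction) -> None:
    # Step 6: once enough approvals exist, anybody can execute.
    if len(tx.approvals) < THRESHOLD:
        raise RuntimeError("not enough approvals")
    tx.executed = True

tx = propose("upgrade Lido program")
for owner in ["Chorus One", "Figment", "P2P", "Saber"]:
    approve(tx, owner)
execute(tx)
print(tx.executed)  # True: 4 of 7 signers approved
```

The key property is that review happens on-chain and in public: approvals are themselves transactions, so anyone can audit who signed off before execution.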

The 7 parties that comprise the multisig are:

  1. Staking Facilities
  2. Figment
  3. Chorus One
  4. ChainLayer
  5. P2P
  6. Saber
  7. Mercurial

The Way forward — On-Chain Governance

For now, the power to upgrade the Lido program (upon the DAO's recommendation) rests with the multisig. In the long term, Lido for Solana's governance will be a fully on-chain decision-making process in which LDO token holders vote with their share on a proposal and collectively accept or reject it.

Decentralized policy-making in the crypto world is a complex problem. Top-down governance, as in the case of centralized organizations, is easy to implement but may not represent the best interests and needs of the stakeholders. On the other hand, a horizontal mode of decentralized governance promises a fairer representation of the voice of stakeholders but is much harder to implement.

There are multiple governance frameworks out there that exhibit varying degrees of decentralization and ease of execution. There is always a tradeoff between how easily one can implement a governance model and how decentralized it is. Early in a project's life cycle, a less decentralized but easily executable governance model makes more sense.

The long-term goal for Lido for Solana is to have a decentralized governance system with on-chain execution of decisions. In the meantime, executing decisions through a multisig helps us move quickly in the early stages, without having to trust a single party.

In terms of the project roadmap, we are planning another audit of our code. That, coupled with the results of a bug bounty, will put us on the path to the mainnet launch.

Lido for Solana is poised to become the largest liquid staking solution in the market, and through DAO governance and multisig administration we make it secure and efficient. We are committed to reducing the trust surfaces required in Lido for Solana and to keep securely developing this project at a swift pace.

To read about Lido for Solana's project roadmap, please visit Project Roadmap — Lido for Solana on medium.com.

Disclaimer

Our content is intended to be used and must be used for educational purposes only. It is not intended as legal, financial or investment advice and should not be construed or relied on as such. The information is general in nature and has not taken into account your personal financial position or objectives. Before making any commitment of financial nature you should seek advice from a qualified and registered financial or investment adviser. Chorus One does not recommend that any cryptocurrency should be bought, sold, or held by you. Any reference to past or potential performance is not, and should not be construed as, a recommendation or as a guarantee of any specific outcome or profit. Always remember to do your own research.

About Chorus One

Chorus One is offering staking services and building protocols and tools to advance the Proof-of-Stake ecosystem.

Website: https://chorus.one
Twitter: https://twitter.com/chorusone
Telegram: https://t.me/chorusone
Newsletter: https://substack.chorusone.com
