Axelar is a universal interoperability network, secured by delegated Proof-of-Stake using AXL, the native token of Axelar: in short, Axelar is a blockchain that connects blockchains. With Axelar, users will be able to use any network with just one wallet (e.g., use MetaMask to make trades on Osmosis). Axelar facilitates many-to-many connectivity and programmability at the network layer by connecting to any blockchain via a ‘Gateway’ installed on the connected chain. Users send messages to a Gateway on a source chain, and validators in Axelar’s network collectively sign them, authorising execution on a destination chain. Axelar leverages threshold cryptography in tandem with its Proof-of-Stake consensus to deliver secure cross-chain communication. Axelar solves the single point-of-failure risks and user-experience issues that are apparent in pairwise bridges and in other interoperability networks alike. Axelar’s interoperability network unlocks more than just cross-chain transfers; General Message Passing allows developers to perform cross-chain calls of any kind that sync state securely between dApps on various ecosystems. Essentially, the enhanced functionality of cross-chain dApps enabled by Axelar’s network results in a better user experience on every connected chain. Axelar is valuable for developers because of how inherently programmable, composable and flexible the network is, and for users given the new use-cases it will unlock across chains. Ultimately, Axelar provides permissionless transactions and validation, decentralised security, many-to-many connectivity, and programmability that other interoperability networks cannot duplicate.
Axelar is the first fully permissionless and decentralised interoperability network. Axelar is an interoperability hub that facilitates many-to-many connectivity and acts as an adaptor any dApp can leverage in order to communicate securely with any dApp on any other blockchain that has a ‘Gateway’ available for Axelar to plug into. The permissionless aspect of Axelar enables any validator to join the decentralised network; unlike other interoperability networks, it is not gated. Axelar reduces the number of connections found in existing interoperability solutions by acting as a ‘Hub’, whereby each blockchain only needs to connect to Axelar in order to communicate with any other blockchain connected to it, as opposed to opening many connections to many blockchains. Because Axelar is itself a blockchain, programmability is possible at the network layer, which enhances its interoperability capabilities. For example, actions such as address routing become much more efficient: new chains are immediately accessible to all connected chains, creating compounding network effects. User experience is also improved: Axelar is able to create one-time deposit addresses on connected blockchains, replicating the onramps used by centralised exchanges.
A user sends a payload to an Axelar Gateway, which is deployed by Axelar in the native language of the source blockchain (e.g. Solidity on Ethereum). The payload is picked up by a relayer in Axelar’s network, which notifies Axelar validators that a payload is ready to be collectively signed (e.g. a cross-chain transfer from a user). At this point, validators come to consensus on what should be done with the payload that has reached the Gateway on the source chain (e.g. Ethereum). Validators unwrap the payload and collectively sign off on what should be done with it and where to route it (e.g. which network to send the payload to). Axelar uses a weighted threshold signature scheme, whereby each validator holds a share of the overall signing power proportional to the amount of AXL (the token of the Axelar network) staked with it. For example, a gateway might require a threshold percentage of stake-weighted signatures in order to approve a payload. If validators constituting that threshold percentage of the overall stake in Axelar’s network sign the payload, consensus is reached and the payload can be executed on a destination chain. If it is a cross-chain transfer, the payload executed on the destination chain mints tokens representing the tokens locked up on the source chain. However, Axelar’s network can facilitate interoperability interactions that are far more intricate than this. More on this later.
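To make the stake-weighted threshold idea concrete, here is a minimal sketch of how a gateway-style check might decide whether enough weighted signatures have been collected. This is not Axelar’s actual implementation; the validator weights, the 60% threshold and the function names are illustrative assumptions.

```python
# Illustrative sketch of a stake-weighted threshold check (not Axelar's code).

# Hypothetical validator set: share of total AXL stake delegated to each validator.
VALIDATOR_WEIGHTS = {
    "validator_a": 0.30,
    "validator_b": 0.25,
    "validator_c": 0.20,
    "validator_d": 0.15,
    "validator_e": 0.10,
}

SIGNING_THRESHOLD = 0.60  # assumed: 60% of total stake must sign


def payload_approved(signers: set) -> bool:
    """Return True if the validators that signed hold enough stake in total."""
    signed_weight = sum(VALIDATOR_WEIGHTS.get(v, 0.0) for v in signers)
    return signed_weight >= SIGNING_THRESHOLD


# Example: three validators holding 75% of stake sign a cross-chain transfer.
print(payload_approved({"validator_a", "validator_b", "validator_c"}))  # True
print(payload_approved({"validator_d", "validator_e"}))                 # False
```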
Axelar has a simple but elegant design. The most important element of a bridge comes down to who owns the smart contracts that receive cross-chain intent payloads. These owners are given custodial or execution responsibility. If a bridge is centralised, a user sends a payload to a designated signer or group of signers, which custodies and approves the message on the user’s behalf. This approach is known as “proof of authority,” in contradistinction to “proof of stake.” The problem with Proof-of-Authority systems is that a user has to trust these designated signers to behave appropriately and not maliciously. If a centralised group of signers steals from or cheats the user, or mismanages their private keys and is hacked, the user can do nothing about it. Therefore, Axelar has created a decentralised and dynamic set of validators to custody and sign payloads from users in a way that is trust-minimised (i.e. a permissionless protocol and incentives provided by the AXL token ensure that the parties responsible for signing or custodying payloads are held accountable via mechanisms such as cryptography, consensus and economics). Axelar uses threshold cryptography, a decentralised network and slashing economics to ensure that all validators behave honestly and user intent is executed across chains securely, safely and correctly.
In general, Proof-of-Authority setups have resulted in hundreds of millions of dollars lost to security breaches. The Axie Infinity (Ronin Bridge) hack is a recent, costly example. More decentralised approaches can solve the risks of entrusting a designated group with our intent to move across chains. However, thoughtful approaches are still needed. Wormhole was hacked due to an operational error: a code vulnerability was exposed on their GitHub before it was patched. LayerZero, a well-known decentralised bridge network, leaves critical security decisions up to the application developer and user. Nomad, another well-known project, puts safety behind liveness (if the network halts, transactions are not safe). Nomad recently suffered a multimillion-dollar hack due to a vulnerability left unaddressed in its codebase. Axelar code is rigorously and regularly reviewed by auditors; audits are published here. Axelar code is open-source; a multi-million-dollar bug-bounty program encourages white-hat developers to search for vulnerabilities. Loss-prevention measures are also in place, including mandatory key rotations and the ability to quickly disconnect compromised chains, set rate limits and cap transaction amounts.
Axelar solves the security problems apparent in other interoperability networks by leveraging threshold cryptography and a Proof-of-Stake network for security and consensus, whilst simultaneously solving the usability problems presented by pairwise bridges. The user barely has to lift a finger when an application they are interacting with leverages Axelar.
There are other high-quality solutions that match Axelar in terms of security, safety and usability, such as Inter-Blockchain Communication (IBC). However, IBC is restricted in that it requires extensive integration work to connect to blockchains outside of the ecosystem it was built for (Cosmos). Ultimately, Axelar is the premier solution that solves the interoperability problems faced by other solutions and is unmatched when it comes to security, usability and interconnectivity, as Axelar can seamlessly connect to any type of blockchain, regardless of the underlying technology.
As mentioned earlier, Axelar can facilitate interoperability interactions that are far more intricate than just cross-chain transfers. Axelar opens up a multitude of possibilities for users to engage with different chains without having to leave their source chain. This is powerful: users can take actions cross-chain using tools familiar to them, such as native wallets and currencies. Let’s dig in.
One example of what is made possible with Axelar’s network is a Cosmos user instantly receiving USDC to use on Osmosis from a centralised exchange such as Coinbase, without needing to use Ethereum. As it stands right now, if a user has USDC on a centralised exchange and wants to withdraw it to a decentralised exchange, it is highly likely that the user will only be able to withdraw USDC to a network such as Ethereum. This is a terrible user experience for Cosmos users, who will need to receive USDC on Ethereum first, before bridging it to Osmosis. Not only is this an unnecessary number of steps, but the user will also need to purchase ETH in order to pay gas costs to move across chains. With the advent of Axelar (as well as Interchain Accounts), if a user provides a centralised exchange with an address on Ethereum that is being observed by Axelar validators, the USDC will arrive on Osmosis without the user needing to take any extra actions or pay any extra fees. This is possible because validators in Axelar’s network observe payloads incoming to a Gateway (in this case on Ethereum), and the Axelar network understands how to translate and route them cross-chain. Once a payload arrives on Ethereum, Axelar can create an address for the user on Osmosis to receive the USDC. As a blockchain connecting blockchains, Axelar can execute logic that assembles multiple steps into one, letting users take actions cross-chain. In this example, Osmosis users will be able to withdraw from centralised exchanges in one step, even if the centralised exchange does not provide that option. This will unleash a new wave of liquidity into DeFi apps and other decentralised applications, like Osmosis.
The power of Axelar’s network can also be leveraged by users outside the Cosmos ecosystem. For example, an Ethereum user that does not want to leave the comfort of the network can utilise Axelar to take actions on applications that exist outside of Ethereum. To elaborate, let’s say that a user wants to swap ETH for AVAX and then borrow USDC on Avalanche with AVAX as collateral, in a decentralised manner. Right now, a user would probably send ETH to a centralised exchange using MetaMask and pay fees in ETH, sell ETH for USDT/USDC on an exchange, buy AVAX with USDT/USDC in another transaction on an exchange, send the AVAX to an Avalanche wallet and pay fees in AVAX, navigate to a lending protocol front-end, deposit AVAX and pay AVAX fees with an Avalanche wallet, and then borrow USDC on a lending protocol with an Avalanche wallet (paying another AVAX fee).
Axelar completely abstracts away these extra steps and payments by creating a sequence of instructions for the network to execute cross-chain on behalf of a user.
In this scenario, with Ethereum as the source chain, the user would use MetaMask to send intent to a Gateway connected to Axelar, alongside a payment of ETH requested by network services in order to execute the intent cross-chain. The Axelar network then abstracts the payment flow: ETH is converted into AXL to pay validators and then into AVAX to pay fees on Avalanche. The user does not have to leave MetaMask or Ethereum, or purchase any other currencies, in order to transact on other chains. (Notably, this process may create deflationary effects in Axelar, as “change” from these conversions is either refunded to the user or applied toward potential buyback-and-burn programs. More on this from Axelar Foundation, here.) At this point, Axelar has done all of the work on behalf of the user, and the user has successfully borrowed USDC on a lending protocol on Avalanche. Axelar opens up new possibilities for users to take cross-chain actions without needing to learn new tools or purchase new currencies to pay fees.
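As a rough illustration of the fee abstraction described above, the sketch below converts a single ETH payment into the AXL and AVAX amounts needed along the route and returns the change. The prices, fee amounts and function name are hypothetical; actual pricing and routing are handled by Axelar’s relayer services.

```python
# Hypothetical illustration of cross-chain fee abstraction (not Axelar's actual logic).

# Assumed spot prices in USD.
PRICES = {"ETH": 1800.0, "AXL": 1.0, "AVAX": 25.0}

# Assumed fee requirements for one cross-chain call.
AXL_FEE = 5.0    # paid to Axelar validators, in AXL
AVAX_FEE = 0.05  # destination-chain gas, in AVAX


def settle_fees(eth_paid: float) -> float:
    """Convert the user's single ETH payment into AXL and AVAX fees; return the change in ETH."""
    budget_usd = eth_paid * PRICES["ETH"]
    spent_usd = AXL_FEE * PRICES["AXL"] + AVAX_FEE * PRICES["AVAX"]
    if spent_usd > budget_usd:
        raise ValueError("ETH payment does not cover the cross-chain fees")
    change_usd = budget_usd - spent_usd
    # "Change" is refunded to the user (or could feed a buyback-and-burn program).
    return change_usd / PRICES["ETH"]


print(f"Change returned: {settle_fees(0.01):.6f} ETH")
```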
Axelar is a Proof-of-Stake network built with the Cosmos SDK and Tendermint consensus. The AXL token is used to secure the decentralised network. For a refresher, stake is the value of tokens delegated to validators to secure a Byzantine fault-tolerant system. The more stake (value) that has been delegated, and the more diverse the pool of token-holders and validators, the harder it is to attack the system. It is extremely unlikely for a validator to be malicious in any case, given it would be explicitly risking a large sum of its own stake and implicitly risking its reputation in the cryptocurrency ecosystem. Even in a scenario where the value at stake in AXL is less than the amount being transferred, validator collusion toward a malicious outcome is unlikely, given the explicit reward for doing so would likely be very low and the reputational risk extremely high.
Holders of AXL have a strong incentive to delegate their AXL to one or more validators to secure the network. Validators earn block rewards for successfully proposing new blocks that are verified by other validators in the network. A validator has more opportunity to propose blocks (and hence earn more rewards) if it has more stake delegated to it. Delegators stand to benefit the most from block rewards because they earn the majority of them (often >90%), whilst validators take a commission for securing the network on their behalf (i.e. for running the node that participates in Axelar’s network consensus). If an AXL holder does not delegate, they risk being diluted, as they will miss out on the block rewards received by other AXL stakers and validators.
Token-holders also have an incentive, in the form of their long exposure to AXL, to delegate AXL to validators that they believe will secure the network in the best possible fashion. Delegators can review data on the full list of validators via the Axelar block explorer, Axelarscan, at axelarscan.io/validators. The more AXL that is staked with a validator, the more voting power the validator has (i.e. the greater the chance of it being chosen to produce the next block). This does not lead to a concentration of voting power, however, because Axelar has implemented quadratic voting. In short, quadratic voting means a validator’s voting power is equivalent to the square root of its delegated stake: to get one vote, a validator needs 1 token; to get 2 votes it needs 4 tokens; to get 3 votes, 9 tokens, and so on. The validator set of Axelar is limited, so AXL token-holders can play a direct role in ensuring the active validator set is performant and available by delegating to high-quality validators. Ideally, Axelar’s network is very decentralised, whereby breaking the liveness guarantees of the network would require not just a lot of stake but also a lot of validators (i.e. validator diversification).
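A minimal sketch of the quadratic voting rule described above: voting power is the square root of delegated stake, so a validator with four times the tokens gets only twice the votes. The numbers are illustrative, not real delegation amounts.

```python
import math


def voting_power(delegated_stake: float) -> float:
    """Quadratic voting: power grows with the square root of delegated stake."""
    return math.sqrt(delegated_stake)


for stake in (1, 4, 9, 1_000_000, 4_000_000):
    print(f"stake={stake:>9} -> voting power={voting_power(stake):,.0f}")
# A validator with 4,000,000 tokens has only 2x the power of one with 1,000,000.
```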
Aside from securing Axelar’s network, AXL is also used by token-holders to participate in governance. Because Axelar is built with the Cosmos SDK, all governance proposals are created and voted upon on-chain. The more AXL a token-holder holds in the network, the more votes they have on governance proposals. For example, governance proposals might cover connecting new chains or an upgrade that improves the features of Axelar’s network. However, AXL token-holders are not required to participate in governance. In networks built using the Cosmos SDK, token-holders inherit the vote of the validators they delegate to if they do not vote themselves. If a user does not agree with the vote of a validator, the user always has the option to override the vote inherited from their validator. All in all, on-chain governance in Cosmos SDK chains runs smoother than most and is a great way for token-holders to actively participate and contribute to decentralised networks.
Finally, AXL is used to pay transaction fees to validators in Axelar’s network. For example, a user on source-chain Ethereum who signals intent to take actions on destination-chain Avalanche would pay fees in ETH to Axelar’s Gateway on Ethereum. Axelar’s SDK provides services that observe the Gateways and then convert the ETH fee into AXL to pay Axelar validators and AVAX to pay Avalanche validators (taking a cut for doing so). In essence, AXL is the fuel for validators to come to consensus on cross-chain intent. Demand for AXL comes from services such as the Axelar SDK, which convert other currencies into AXL in order to pay validators for their work. Anyone can provide these services; they can even be handled manually by the user, if desired. The more usage Axelar’s network gets, the more currency is converted into AXL to pay validators, and the greater the demand for AXL.
There are many reasons why Axelar is a valuable network. The network is valuable for developers, users and token-holders.
For developers, Axelar is useful due to the Turing-complete programmability the network facilitates, as well as the ability to compose functions cross-chain. Starting with composability, developers that build on top of Axelar can build one-click user experiences consisting of multiple components that interact with each other cross-chain. (Read more for an introduction to architecture approaches when composing cross-chain.) For example, a developer might choose to build a yield optimiser, whereby a financial strategy reads the yield of a certain asset across multiple chains and deploys more or less capital (rebalancing) on a connected chain in order to optimise yield for the next block. Axelar is also entirely programmable, which means that validators in Axelar’s network can take any action on behalf of a user cross-chain, no matter what it is. For example, a developer could choose to build a governance aggregator application whereby a validator set votes on behalf of a user in a DAO, cross-chain, in the same direction as the majority vote (e.g. vote YES if the majority vote is already YES). Related to programmability, the Axelar network is Turing-complete, meaning any program created by developers can be run by the network, given enough memory and time. These features are possible because Axelar is a blockchain that connects blockchains, and they cannot be duplicated by other interoperability networks. All in all, Axelar is the most customisable, flexible, programmable and composable interoperability network.
Users of Axelar can look forward to greater liquidity in their respective ecosystems, a better user experience, lower transaction costs and new use-cases. Liquidity will be able to flow freely across blockchains that are connected to Axelar and, as a result, users will have new assets to trade that were not previously available on their blockchains. There will be a better experience for users moving cross-chain, as users will not need to hold multiple tokens across chains to take actions, nor make a separate transaction on each chain. Any cross-chain transaction can be paid for with one token, and instructions can be bundled by validators to execute atomically. Users will also be able to access new types of applications on chains other than the one they natively interact with. For example, a user on Ethereum might be able to utilise a cross-chain AMM built on Axelar to swap Ethereum assets for assets on Avalanche. Axelar and its partners are already working with the largest DEXs on multiple chains (Osmosis, a Cosmos project, is a notable example), which are building these cross-chain liquidity networks. Moreover, many of these projects are using Axelar’s unique functionality to build user onramps (such as one-time deposit addresses) that can rival centralised exchanges for ease of use, and welcome users seamlessly, regardless of what tokens they hold.
AXL is the fuel of the Axelar economy. The value of AXL comes from how it is used to secure the network, govern the network and pay node operators in the network to execute cross-chain intent. Holding AXL gives users a way to directly contribute to the sustainability and security of the network.
To conclude, Axelar is a decentralised and permissionless interoperability network built with the Cosmos SDK, whose combination of many-to-many connectivity, programmability, composability and Proof-of-Stake security makes it the most robust interoperability network available for users. Axelar is secured by AXL, which is used to secure the Proof-of-Stake network, as well as for governance and to pay validators to execute cross-chain intent. Axelar will unlock a variety of use-cases that have not yet been seen, such as interacting cross-chain with blockchains that might not speak the same language as the user’s source blockchain. For the first time, cross-chain user experience will be seamless, as a wave of applications is currently being built on top of Axelar to leverage the powerful properties of the interoperability network. Users who enter Web3 via one blockchain will easily access applications and assets on other blockchains, perhaps without even knowing they are doing so. Axelar solves the problems of centralised bridges and interoperability networks to produce what is arguably the safest, most secure and best cross-chain user experience available to users.
Acknowledgements: Thanks to Galen Moore from Axelar for his review of this article.
Xavier Meegan is Research and Ventures Lead at Chorus One.
Medium: https://medium.com/@xave.meegan
Twitter: https://twitter.com/0xave
Chorus One is one of the largest staking providers globally. We provide node infrastructure and closely work with over 30 Proof-of-Stake networks.
Website: https://chorus.one
Twitter: https://twitter.com/chorusone
Telegram: https://t.me/chorusone
Newsletter: https://substack.chorusone.com
YouTube: https://www.youtube.com/c/ChorusOne
Axelar delivers secure cross-chain communication for Web3, enabling dApp users to interact with any asset or application, on any chain, with 1 click.
Website: https://axelar.network/
Twitter: https://twitter.com/axelarcore
Discord: https://discord.com/invite/aRZ3Ra6f7D
Blog: https://axelar.network/blog
YouTube: https://www.youtube.com/c/Axelarcore
Blockchains do not only need to be technically sound. Beyond the protocol and implementation levels, a key element in the success of a decentralized system is having different and independent groups using it, operating it, and governing it. The economic incentives in decentralized systems to achieve such participation by all these different groups have gained attention from researchers, who now study “tokenomics” as a new field.
In this article, we are going to explore Solana economics, focusing on the incentives for network node operators, or validators. We conducted an analysis of the inflation model, the costs and rewards to validators and stakers, as well as the current network activity levels. We also estimate the minimum stake required for a validator to break even, and estimate the impact of different market scenarios, considering the most important variables and how they affect validator profitability.
For this purpose, we built the Solana Validator Dashboard and the Solana Validation Cost Estimator.
Solana validators currently earn from two sources: inflation rewards (newly issued SOL distributed every epoch) and block rewards from transaction fees.
The Solana inflation design has defined SOL emissions as starting at 8%, and decreasing by 15% every year. The model was activated on February 10th, 2021 with the payment of 213,841 SOL.
As of July 2022, Solana’s nominal inflation rate is around 6.8%. The staking yield is equivalent to 9.1%, as 75% of the total supply is currently staked (i.e. total inflation rewards are distributed to staked tokens only, resulting in a dilution of non-staked tokens). However, this rate does not reflect the actual yearly emission; it can be considered a target instead, and the mechanism behind it is broken down below.
Solana’s inflation model assumes 400 ms block times, even though the docs mention that the current implementation targets 800 ms block times. The recent average is around 650 ms, but with high variance.
Although Solana remains extremely performant to the everyday user, the difference in slot times directly impacts the economics and business viability of running a validator on Solana. Longer block times will result in smaller rewards, given a smaller number of epochs in a calendar year, decreasing the amount of SOL distributed to network participants.
In every epoch, Solana calculates the number of tokens minted into the inflation pool. The result is the amount of SOL to be distributed to validators and stakers as inflation rewards, according to the voting and staking status from the previous epoch. Approximately 0.45 SOL is currently allocated per slot and distributed among eligible validators, or about 195 thousand SOL per epoch.
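As a sanity check on those figures, assuming Solana’s standard epoch length of 432,000 slots, 0.45 SOL per slot works out to roughly the 195 thousand SOL per epoch quoted above:

```python
SLOTS_PER_EPOCH = 432_000   # Solana's standard epoch length
REWARD_PER_SLOT = 0.45      # approximate SOL allocated per slot (figure from the text)

epoch_rewards = REWARD_PER_SLOT * SLOTS_PER_EPOCH
print(f"{epoch_rewards:,.0f} SOL per epoch")  # ~194,400 SOL, i.e. roughly 195 thousand
```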
Block times impact inflation rewards because the inflation function tapers the initial rate (8%) based on how many slots have passed since inflation activation on mainnet, expressed as a proportion of how many slots fit in one year.
Considering an average block time of 650 ms, the inflation distributed every epoch is equivalent to a 4.1% yearly rate, and the staking yield falls to 5.5%, instead of the 6.8% and 9.1% previously assumed.
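These numbers can be roughly reproduced with a first-order approximation: the effective annual emission scales with how many slots actually fit in a year relative to the 400 ms assumption, and the staking yield is the effective inflation divided by the staked share. This is a simplification of the real taper schedule, not the protocol’s exact formula.

```python
# Back-of-the-envelope approximation, not Solana's exact inflation schedule.

NOMINAL_RATE = 0.068      # current nominal inflation rate (July 2022)
ASSUMED_BLOCK_MS = 400    # block time assumed by the inflation model
STAKED_SHARE = 0.75       # fraction of total supply staked


def effective_inflation(nominal: float, actual_block_ms: float) -> float:
    """Scale nominal inflation by the ratio of realised slots to assumed slots per year."""
    return nominal * ASSUMED_BLOCK_MS / actual_block_ms


for block_ms in (400, 650, 800):
    eff = effective_inflation(NOMINAL_RATE, block_ms)
    staking_yield = eff / STAKED_SHARE
    print(f"{block_ms} ms blocks -> ~{eff:.1%} effective inflation, ~{staking_yield:.1%} staking yield")
# 650 ms gives ~4.2% inflation and ~5.6% yield, close to the 4.1% and 5.5% quoted above.
```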
Also relevant to validator economics is the commission. In fact, stake owners, a.k.a. delegators, earn the inflation rewards; validators earn a portion of them, represented by the commission. In the plot below, we can see that a common fee for public nodes is around 10%. Only 81 nodes charge a fee of 5% or less. A 100% commission is assumed to indicate a private node (100 validators).
Block rewards from transaction fees vary according to network activity. The recent average is around 0.01 SOL per slot. The total per epoch increases with voting power, as the number of slots attributed to a validator is proportional to its stake.
Theoretically, as inflation decreases over time, validators’ rewards would be supplemented by an increase in transaction fees. This assumption may eventually hold as the network matures, but the plots below show that it currently does not:
1- The market’s cyclic nature: the number of non-vote transactions will not necessarily grow over time. Total transactions (vote + non-vote) peaked in October 2021 at around 180 thousand in one day, and fell to fewer than 100 thousand transactions in April 2022.
2- The Solana network has invested in growing its validator set. The plot below shows the number of unique reward recipients (addresses).
3- As a consequence of voting power dilution and lower network activity, rewards obtained from transaction fees have decreased for individual validators.
We split the cost into i) hardware, colocation, and bandwidth, to host the validator and ii) personnel, which can vary significantly. The official recommendations can be found on the Solana Documentation.
Small Validator
Medium Validator
Professional Validator
The vote is an affirmation that a block it has received has been verified, as well as a promise not to vote for a conflicting block. — Solana Docs
Validators are expected to vote on the validity of the state proposed by the slot leader. A validator node, at startup, creates a new vote account and registers it in the network. On every new block, the validator submits a new vote transaction and pays the transaction fee (0.000005 SOL).
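A rough estimate of what voting costs a validator per year, assuming one vote transaction per slot at the fixed 0.000005 SOL fee and the 650 ms average block time from above. The result is of the same order as the roughly 200 SOL per year cited later, which presumably uses slightly different assumptions.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
AVG_BLOCK_TIME_S = 0.65        # recent average block time (from the text)
VOTE_TX_FEE_SOL = 0.000005     # fixed vote transaction fee

slots_per_year = SECONDS_PER_YEAR / AVG_BLOCK_TIME_S
vote_cost = slots_per_year * VOTE_TX_FEE_SOL
print(f"~{vote_cost:.0f} SOL per year in vote fees")  # ~243 SOL at 650 ms blocks
```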
Validators usually own (a portion or the total of) the staked tokens, a.k.a. self-staking. In this case, the cost of tokens depends on the average price of acquisition. For the purpose of the current analysis, we will assume validators self-stake only 100 SOL, acquired at a price of US$50.
The Solana Foundation promotes the growth of the validator set through the Solana Delegation Program. To receive a 25,000 SOL delegation, applicants must meet the “baseline” criteria, which include also running a node on Testnet. Those who meet the baseline criteria as well as the “bonus” criteria can receive an extra (dynamic) amount of delegation. A recent post on stake delegation strategies and why delegation programs are needed, their goals, and criteria can be found in How can networks nurture decentralization?
In summary, a Solana validator’s profitability depends on the current inflation rate, block times (reflected in the number of epochs in one year), voting power, total supply, the number of transactions, the cost structure, and the SOL market price.
For the three operational levels stated above, we will look at three different economic scenarios: optimistic, average, and pessimistic, with the average scenario being the closest to the current values.
The average market price over one year is fixed at $50 for the purpose of break-even analysis. Different price scenarios are evaluated in a further section.
We found that roughly 40,000 SOL in stake would be required for a small validator to break even in the average scenario, close to current levels. The number grows to 253,000 SOL for the medium setup. A professional validator would need more than 1.3 million SOL staked.
For a validator with a 0.01% stake share, we estimate a 25 SOL reward from transaction fees in one year. The voting process costs around 200 SOL per year for each node operator. Therefore, small validators are dependent on inflation rewards to break even and, ideally, become profitable. Around 350 thousand SOL staked would be needed to fully cover the voting cost when considering rewards from transaction fees only.
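The 25 SOL figure can be roughly reproduced under two assumptions of ours: leader slots are allocated proportionally to stake, and only half of each transaction fee reaches the leader (the other half is burned). Whether the 0.01 SOL per-slot average already nets out the burn is our assumption, not stated above.

```python
# Rough reproduction of the ~25 SOL/year fee-reward estimate (assumptions noted above).

SECONDS_PER_YEAR = 365.25 * 24 * 3600
AVG_BLOCK_TIME_S = 0.65       # recent average block time
FEES_PER_SLOT_SOL = 0.01      # average transaction fees collected in a slot (gross)
STAKE_SHARE = 0.0001          # validator with 0.01% of total stake
LEADER_FEE_SHARE = 0.5        # assumed: 50% of fees are burned, 50% go to the slot leader

slots_per_year = SECONDS_PER_YEAR / AVG_BLOCK_TIME_S
leader_slots = slots_per_year * STAKE_SHARE
fee_rewards = leader_slots * FEES_PER_SLOT_SOL * LEADER_FEE_SHARE
print(f"~{fee_rewards:.0f} SOL per year from transaction fees")  # ~24 SOL
```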
Considering active stake on August 2nd:
Although the number of validators may be considered high compared to other Proof of Stake networks, 71 accounts are responsible for 57% of the total 365 million SOL staked.
The majority of validators currently stake between 80 and 90 thousand SOL, as seen in the plot below. There are at least 138 (7%) instances of the validator client with stake amounts smaller than 40 thousand SOL, the estimated break-even level for a small validator.
Simulation shows that medium and professional validators are more sensitive to fluctuations in the SOL market price than small validators. If the average SOL price over a year is $75, the break-even level decreases by more than 30% for the medium and professional levels and only 7% for small validators. A similar effect is found if the average price drops to $25.
In PoS networks, adopting an accurate inflation model in conjunction with direct incentives in the form of delegation is important to:
Solana validators and stakers have seen rewards decrease, as higher block times reduce emissions relative to the projections of the initial inflation model. In addition, the network has experienced a contraction in non-vote transactions in recent months, alongside an expansion of the validator set.
According to the break-even levels discussed above, an 8.85% inflation target would be needed to achieve an effective 5.5% emission in one year, considering 650 ms block times (6.3% if block times were 550 ms). Assuming 75% of total supply is delegated to validators, the staking yield would become 7.1% in one year, and the minimum stake to break even would drop by 24%, to 32 thousand SOL.
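Using the same first-order approximation as before, the 8.85% target can be checked by inverting the block-time scaling. Again, this is a simplification of the real taper, not the protocol’s exact formula.

```python
def required_nominal(target_effective: float, actual_block_ms: float, assumed_block_ms: float = 400) -> float:
    """Nominal inflation needed so that slower blocks still yield the target effective emission."""
    return target_effective * actual_block_ms / assumed_block_ms


print(f"{required_nominal(0.055, 650):.2%}")   # ~8.94%, close to the 8.85% target above
print(f"{0.055 / 0.75:.1%} staking yield")     # ~7.3% at 75% staked, vs. the 7.1% quoted
```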
The inflation rate matters even more for small validators’ profitability than transaction fee rewards do. Adjusting the inflation model to the actual network configuration would reinforce the position of validators staking less than 40 thousand SOL. Under the 8.85% rate simulated above, approximately 21 more validators (1.12%) would reach the break-even level: that is the number of validators currently staking between 30 and 40 thousand SOL.
In this study, we explored the variables behind Solana validator economics, estimating profitability levels for different market scenarios.
We found that:
Fee markets are now live on Solana, but adoption of the priority fee by dApps and general users is currently low, with only around 4% of transactions paying a fee higher than the fixed rate. Adoption has, however, been trending upward since launch in late July.
Go to the Solana Validator Cost estimator in getguesstimate to explore the relevant variables, their interactions, and correlations. Thanks, Ruud, Chorus One engineer, for building it.
“Look below the surface and you will find that all seemingly solo acts are really team efforts.” —John C. Maxwell
This article was brought to you by Chorus One, with meticulous review by Felix Lutsch and Ruud.
Maximum Extractable Value (MEV) is a fundamental concept in cryptoeconomics, heavily affecting permissionless blockchains. MEV is a consequence of protocol design and brings with it both bad and good externalities. Indeed, not all MEV can be considered benign, as some forms represent an invisible tax on the user (e.g. check out one of our previous articles, Solana MEV Outlook). In general, MEV can also be an incentive for consensus instability, see e.g. the time bandit attack. However, considering all types of MEV as bad externalities is wrong. There exist benign forms of MEV that ensure protocol efficiency, and one prominent example is arbitrage. Let’s imagine that a user swaps a huge amount of token A on a specific AMM (huge with respect to the total amount in the pool) and that this transaction creates a $5,000 arbitrage opportunity. All users that swap tokens in the same pool and same direction will see their output lowered with respect to the actual value. Whoever exploits this MEV opportunity will also bring the market back to parity with the true price. This makes the AMM more efficient without harming its users in the process.
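To make the arbitrage example concrete, here is a minimal constant-product sketch of checking whether a price gap between two pools is profitable to close. The pool reserves, fee and function names are hypothetical; real searchers also account for transaction fees, slippage, optimal trade size and atomic execution.

```python
# Toy two-pool arbitrage check on constant-product AMMs (illustrative only).

def amount_out(amount_in: float, reserve_in: float, reserve_out: float, fee: float = 0.003) -> float:
    """Constant-product swap output after the pool's trading fee."""
    net_in = amount_in * (1 - fee)
    return reserve_out * net_in / (reserve_in + net_in)


# Hypothetical reserves after a large swap pushed pool A's TOKEN price below pool B's.
pool_a = {"USDC": 900_000.0, "TOKEN": 1_100.0}    # TOKEN is cheap here
pool_b = {"USDC": 1_000_000.0, "TOKEN": 1_000.0}  # TOKEN is pricier here

usdc_in = 10_000.0
token_bought = amount_out(usdc_in, pool_a["USDC"], pool_a["TOKEN"])
usdc_back = amount_out(token_bought, pool_b["TOKEN"], pool_b["USDC"])
profit = usdc_back - usdc_in
print(f"Arbitrage profit: {profit:,.2f} USDC")  # a positive profit narrows the price gap
```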
On Solana, MEV still represents a dark forest, since no one has pointed a flashlight at it. This is because Solana is a much younger blockchain than Ethereum, which can be seen in the lack of products like Flashbots. One project moving in this direction is Jito Labs, which recently delivered the first MEV dashboard for Solana, an explorer aimed at illuminating MEV (see here for an introduction). However, it is not the only one trying to fulfill this duty. Shining a light on some Solana Decentralized Exchanges (DEXs) in order to illuminate the dark forest is one of the key objectives at Chorus One. MEV will be a crucial factor in the future of PoS networks, and we are continually looking for the best way to navigate it. You can explore our Solana MEV dashboard here.
It is important to understand that a simple copy of Flashbots may not be good for Solana, since Solana is a drastically different network from Ethereum, and Jito seems to be something intrinsically different. In this article, we are going to assess the MEV challenges Solana faces. We’ll also review the status of our internal research regarding MEV.
In Section 2, we’ll analyze the current and future status of MEV on Solana, with a detailed analysis of what we found on-chain in Section 2.1.
In Section 3, we’ll discuss some implications of the current MEV strategies and how these can affect the functionality of a PoS network.
MEV has a specific supply chain, which “describes the chain of activity which helps users transform intentions into finalized state transitions in the presence of MEV”. However, despite this “universal” definition, MEV on Proof-of-Stake (PoS) networks is drastically different from what it represents on Proof-of-Work (PoW) networks, for several reasons. The most important difference lies in the ability to know in advance that a given validator will propose a block at some point. Further, validators have delegators and can offer them a portion of the MEV revenue (e.g. by lowering the commission), attracting users to delegate with them. This makes MEV on PoS networks a growing business model, which constitutes one of the building blocks of cryptoeconomic incentives. On one side, validators can use MEV revenue to reduce their commission rate (even to negative values) by returning all income to delegators. On the other side, Layer-1 (L1) blockchains have an incentive to improve network performance, because if the “scaling problem” is solved by the introduction of L2s, MEV and transaction (tx) fees also move away from the main chain, weakening the L1 business model.
This is exactly what blockchains like Ethereum are facing right now, representing one of the great risks over the next few years. See this Twitter thread for a better understanding of the topic.
But what is the current status of MEV on Solana? Let’s start from the beginning. Solana does not have a public mempool, meaning that some bad externalities of MEV are very difficult to achieve. However, Solana is not free of them, since MEV extraction may degrade network performance, e.g. through spam transactions, dropped transactions, etc. Indeed, some MEV opportunities only exist if searchers run their own validator, inspect the transactions that come to them, and run MEV-extraction code on top of it. Acquiring a high stake and getting access to more MEV opportunities is not an easy task. This dramatically reduces the likelihood of being highly profitable, as the distribution of MEV revenues averages around zero, with a tail towards higher values (see Fig. 2.2).
Note that this is obtained in a specific time window, so it is only representative of the shape of the actual distribution.
Since transaction fees on Solana are low and MEV opportunities can bring validators more profit, validators are incentivized to auction off their block space to searchers, or at least some rumors point towards this possibility.
Further, on Solana, fees are currently fixed and cheap, meaning that if there is high competition in a specific market, users face the risk of not getting transactions executed. Since a gas-fee auction is still missing, currently MEV searchers spam transactions to the leader (and following validators in the leader schedule) in the hopes of “winning the battle”.
Lastly, on Solana, MEV competition may incentivize validators to perform denial-of-service (DoS) attacks on other validators in order to leave spotted MEV opportunities sitting on the table until the attacker can extract them.
The current status of MEV indicates how bad the problem of blockspace waste is, which has resulted in degraded performance for normal users. At the time of writing, according to Jito’s MEV dashboard, there were 12,072,328 successful arbitrages against 350,179,786 unsuccessful ones over 6 months (i.e. a 3.3% success rate). If we also include liquidations, the success rate goes down to roughly 3%. The total extracted “good” MEV is around $33M. Of course, this is only a lower bound, since MEV can be created any time a user interacts with a blockchain, and smart contracts enable a functionally infinite number of potential interactions. Thus, it is computationally infeasible to calculate a blockchain’s total potential MEV by brute force. Further, previous analyses show that huge amounts were extracted during periods of stressful market conditions, e.g. $13M MEV during Wormhole Incident and $43M Total MEV from Luna/UST Collapse on Solana.
Future Solana improvements aim to introduce several features that will force current MEV strategies to change. These new features are a double-edged sword for MEV searchers. Some spamming bots will be forced to shut down, since local fee markets will make it unprofitable to massively spam transactions. However, improving the network means more and more users will be attracted to use it. This has the immediate consequence of also increasing the total amount of MEV, which in turn incentivizes competition around MEV and “invites” new searchers to step in.
One of the main problems that can worsen an AMM’s functionality is pool congestion: if too many transactions hit a specific pool, users may experience worse trades due to pool unbalancing. This is why arbitrage is a sort of service that normalizes DEX functionality. But despite the fact that we know MEV is happening on Solana, where are the greatest opportunities? In other words, which DEXs have the highest pool congestion, and who is “solving” it? To answer these questions, we built an MEV dashboard on Dune Analytics: by looking at exchanged volume (using Solscan) you can get an idea of where the congestion is, but it is far less clear whether searchers are solving for it.
Our preliminary research shows that over 10 days (from July 16th to July 26th), the paths with the highest extracted MEV on Solana were on Orca and Raydium, with a lower bound of 20,775 USD extracted, see Fig. 2.5. There were 68 MEV extractors active on these cross-DEX paths during the analyzed period, not a large number in terms of competition. Fig. 2.6 shows how the extracted revenues are concentrated among a few searchers. Precisely, 5 different accounts extracted 80.1% of the total MEV.
It is worth mentioning that none of the studied DEX combinations show a uniform distribution in terms of MEV opportunities, according to what we show in Fig. 2.2.
If we extend the analysis by looking at the USDC token accounts belonging to the most profitable MEV searchers, we find that 7 accounts were able to extract 95.6% of the total extracted MEV, see Fig. 2.7. Two of them, GjT…m2P and G9D…y2m, interact with the same smart contract, which may indicate that these two accounts belong to the same user. Since these accounts are among the top 7, it is likely that only 6 users were able to extract 95.6% of the total extracted MEV.
By diving deeper, we also found two accounts interacting with a smart contract with a clear reference to Jito, Jito…HoMA, with a total extracted MEV over 10 days of 3,342.30 USDC (at the time of writing), out of a total of 158,132 USDC extracted (i.e. 2.1% of the total amount).
We already stated that, on PoS networks, MEV can be seen as a business model, since validators can share a portion of the extracted amount with their delegators. However, as shown in Sec. 2.1, this can sometimes be a deal that does not truly mean high returns. MEV revenues are strongly correlated with market conditions and DEX usage, meaning that we are unable to estimate a fixed income to share with delegators. Further, if competition does not grow fast, the promise of sharing revenue with delegators may create a centralization problem.
To assess this statement, let’s try a “gedanken-experiment”. Imagine that the volume exchanged by DEXs on Solana grows by a factor of 30, and assume that there is only one validator extracting MEV and redistributing the revenues to delegators. The implication of the increased volume is that MEV also increases. Indeed, a factor of 30 means that over 30 days the DEX volume on Solana is greater than $30B; assuming that 0.04% of it is MEV, as happens on Ethereum, this means more than $144M yearly. The implication of having only one validator playing this game is that its extracted amount also increases, making delegation to it an appealing deal. A validator with ~2% of the total stake could extract ~$2.9M of MEV yearly. Once delegation starts to concentrate around this single validator, the sole player, MEV revenues get a further boost, since the leader schedule on Solana is stake-dependent. Revenue per block is not uniformly distributed, so a higher stake means an increased likelihood of capturing a rare, juicy opportunity, pushing up the median of the extracted MEV. If there is no competition, this gedanken-experiment has a single outcome: concentration of stake, i.e. centralization.
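The arithmetic behind the thought experiment, made explicit. All inputs are the assumptions stated above, not measured values.

```python
# Arithmetic behind the gedanken-experiment (assumed inputs from the text).

monthly_dex_volume_usd = 30e9   # assumed: $30B of DEX volume per 30 days
mev_share = 0.0004              # assumed: 0.04% of volume is MEV, as observed on Ethereum
validator_stake_share = 0.02    # a validator with ~2% of total stake

yearly_mev = monthly_dex_volume_usd * 12 * mev_share
validator_mev = yearly_mev * validator_stake_share
print(f"Network-wide MEV: ~${yearly_mev / 1e6:.0f}M per year")       # ~$144M
print(f"Single validator (2% stake): ~${validator_mev / 1e6:.1f}M")  # ~$2.9M
```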
Risks become higher if one considers that Solana is currently one of the fastest blockchains and that future development aims to improve this even further. The high number of processed transactions per second could pave the way for prop firms to enter the market, meaning that even more SOL could be delegated to a single validator, the winner of the MEV war.
This, without any doubt, points toward the necessity of building validators that are competitive in MEV extraction. Once Jito delivers its third-party client for Solana, optimized for efficient MEV extraction (plus its bundles), the risk of centralization can be mitigated. However, even with decentralized block building, as Flashbots aims to achieve with MEV-boost, we are still far from a definitive solution. Indeed, such an environment makes it easier for builders to buy the blockspace of all validators and thereby isolate the centralization to the builder layer, see e.g. here. At the moment, a fully decentralized MEV supply chain is a chimera. The first step in this direction would be open-sourcing the MEV-extracting validator and starting collaboration between many validators, in the true spirit of open source. Indeed, it is worth noting that adopting validator products developed by, and belonging to, a single entity reduces the problem of stake concentration, but can decrease the network’s censorship resistance. If block production is centralized to a single entity, that represents an enormous censorship risk, regardless of how many validators participate.
For example, let’s assume that this entity gets adopted by 50% of the stake. Suppose now that this entity is regulated by a specific government, which demands that all sanctioned transactions are blocked. Then, at best, users would need to get their transactions into the other blocks; in the worst case, this entity could refuse to include vote transactions that vote on blocks containing sanctioned transactions. This simple example shows how some MEV strategy outcomes could pave the way for censorship risks.
Before concluding, it is worth mentioning that other possibilities do exist. One of them is to frame MEV extraction as a service, where the protocol itself captures the MEV and shares the corresponding revenue with stakers of the protocol token; see e.g. recent rumors on Osmosis development. Although this approach seems less prone to centralization risk, it remains unclear whether the time needed to extract MEV is compatible with keeping the AMM functional; remember that poor competition means some opportunities may sit there for a "long" time. The result is that it is difficult to assess in detail how this will affect the future of the chain.
This article collected some thoughts on how the framing of MEV may affect the future of PoS ecosystems, focussing on some of its "bad" consequences. Despite the fast development around this huge and complex topic, we at Chorus One are continuously researching it with an eye to the future: the health of the networks we operate on is always our first priority.
If you’re interested in framing the topic and require research/advisory services on MEV, you can contact our Research Team at research@chorus.one
Avalanche has a thriving, friendly, and engaging community. On top of that, it also has the quickest and most valuable bridge solution to and from Ethereum, with BTC onboarding coming shortly. Avalanche is fortunate to have a team that consistently ships and executes at the top level. It's great for validators like us, too: there is no slashing, and rewards depend only on uptime. Currently, annual staking rewards are around 9.1%, which makes locking AVAX to stake appealing. The thriving ecosystem is already on display, with liquid staking now accessible via BenQi (sAVAX, $179M in TVL) and two additional solutions on the way: LAVA and Eden Network + YieldYak. Lido is also building its liquid-staking implementation for AVAX. A competitive DeFi landscape is also in operation, including TraderJoe (DEX, $179M in TVL), Platypus (stable swap, $155M in TVL), Aave (lending, $4.64Bn in TVL), and many more. Subnets now allow innovative technologies, in both consensus and horizontal-scalability architecture, to join the network. To make the experience complete, they even provide VMs as free open-source code, ready to be picked up by companies wishing to join the subnet movement.
Avalanche mainnet is made up of two blockchains (the C-Chain and the P-Chain) and one DAG (the X-Chain, built for ultra-high TPS); blockchains and DAGs are two different types of distributed ledger technology (DLT). The P-Chain is responsible not only for keeping track of subnets and all validator information but also for creating new subnets and blockchains.
Although the term "subnet" is often used as a synonym for "blockchain", subnets are a bit more complex than that. The technical definition of a subnet is as follows:
A Subnet is a dynamic set of validators working together to achieve consensus on the state of a set of blockchains, according to Avalanche’s FAQ page.
Subnets allow anybody to quickly establish permissioned or permissionless networks with unique implementations that are powerful, dependable, and secure. Developers can use AvalancheGo or AvalancheJS, and Ethereum developers can seamlessly use Solidity to launch dApps, since subnets can be fully EVM-compatible. Avalanche includes features not seen on other chains, such as the ability to choose which validators secure a subnet's activity, which token is used for gas costs, bespoke economic models, and more. Crucially, subnets stay natively linked to the larger Avalanche ecosystem, do not compete for network resources with other projects, and can be created without limit. With the standard rules shared by all apps on a smart contract network, Web3 applications can differentiate on user experience like never before. A similar approach can be found in Cosmos, with Saga and its "chainlets", and in Ethereum with Skale.
GameFi, a common phrase in the crypto-verse, is a combination of the words "Gaming" and "Finance." It refers to the gamification of financial mechanics so that players can generate profit through play-to-earn crypto games. In GameFi games, items are represented by NFTs. Users can boost their earning potential by levelling up and upgrading their characters, as well as by participating in tournaments. As an example, players in Axie Infinity (arguably the biggest GameFi game of 2021) earned more than $1,000 worth of $SLP a month before it suffered a hack. Many of these blockchain games are communities where players may earn tokens to swap for money. It's remarkable to watch blockchain games that had a few hundred players in 2013 turn into top-grossing games like Axie Infinity, with hundreds of thousands of dollars in daily trade volume. And this is just the first generation of games on blockchains.
Adoption has skyrocketed over the past years. With a large number of retail investors as well as big companies like Microsoft, Nike, Meta and many more already involved, the metaverse market is expected to grow significantly. Major investors such as Gala Games and C2 Ventures formed a $100 million venture fund for GameFi, and Solana Ventures and others launched a $150 million fund by the end of 2021. More recently, Framework Ventures allocated half of its $400M fund to Web3 gaming. As evidence of the blockchain gaming industry's expansion, blockchain games and infrastructure attracted over $4 billion in venture capital financing in 2021 alone. Blockchain gaming grew by 2,000 percent in a year, according to a joint report by DappRadar and the Blockchain Game Alliance (BGA), although this was prior to the latest crypto meltdown and the picture might look very different right now. Still, the crypto gaming business has already received $2.5 billion in investment this year; if the trend continues, it might reach $10 billion by the end of 2022. The same report states that blockchain games drew 1.22 million unique active wallets (UAW) in March, representing 52% of industry activity. With all of the various technologies collaborating to build a self-sustaining ecosystem, the blockchain gaming sector is poised to become a significant income source and probably the first real utility for blockchains outside payments.
The key advantage of using Avalanche for GameFi is its three-pronged structure, with validators and subnets coordinated through the P-Chain. Subnets let projects create their own application-specific blockchains (ASBs) that do not disrupt the rest of the network; as a result, no single game consumes the whole network's bandwidth. GameFi on Avalanche offers blockchain games the best chance to thrive in their intended setting. Avalanche is also great for creating NFTs, which makes digital assets easily available for P2E games or the metaverse. Users can utilize Avalanche to establish their own localized chains that run independently of other chains, allowing them to sandbox their own knowledge and technology for the benefit of their own efforts. Most developers use their own token for gas on their subnet; however, a subsidised gas fee is also an option. Avalanche allows network developers to use whatever virtual machine they want, or to create their own: you may use the EVM or any other VM you like. Aside from the EVM and AvalancheVM, Avalanche now provides SpacesVM (key/value storage), BlobVM (binary storage), TimestampVM (a minimum viable VM), and more are in the works. Modularity rules the roost. Watching web2 games move into web3 through subnets is a great place to start.
It is worth noting that Avalanche gaming developers are taking a Play-and-Earn approach rather than a Play-to-Earn one. This puts the emphasis on games being enjoyable and long-lasting first.
Overall, blockchain games continue to be one of the most appealing parts of the dApp market. Although demand for blockchain games looks to have peaked, gaming dApps continue to drive most of the industry's on-chain activity. Notably, subnet games like Crabada and DeFi Kingdoms are still drawing players even in a difficult 2022.
VCs and investors are pouring money into Web3 gaming ventures at an all-time-high pace. Furthermore, financial firms like Morgan Stanley have assessed the metaverse's economic potential at no less than $8 trillion. The Sandbox's second Alpha season, Decentraland's Fashion Week, and the overwhelming demand for NFT Worlds indicate a positive future for GameFi. However, security incidents such as the Ronin bridge exploit and the difficulties of attaining full interoperability remind everyone interested that widespread adoption is not here yet. The Avalanche Foundation believes that subnets like Shrapnel and TimeShuffle are the answer for the next generation of gaming, which is why it launched Avalanche Multiverse last March: a $290 million incentive program to accelerate the growth of the new Internet of Subnets.
Solana has announced three main changes in its mitigation plan to address the stability and resilience of the network: adopting QUIC for transaction ingestion, introducing stake-weighted Quality of Service (QoS), and adding fee prioritization.
The measures target the intense traffic responsible for two of the three recent incidents. Although the changes proposed by Solana developers may seem abstract or deeply technical to much of the community, the concepts are not new: they are imported from other, already mature systems. In this article, we will try to break down the technicalities and explain them in simple terms.
The current Solana client version for validator nodes (v1.10) already paves the way for some of these improvements to be iterated on until optimal market fit. Fee prioritization is targeted for the v1.11 release, according to the official announcement.
Solana has so far used the User Datagram Protocol (UDP) for transmitting transactions between nodes in the network. Nodes send transactions through UDP directly to the leader (the staked node responsible for proposing the block in that particular slot) without establishing a connection first. UDP handles neither traffic congestion nor delivery confirmation. When the network is congested, the leader is unable to handle the volume of incoming traffic and some packets get dropped; even at quiet times, some level of packet loss is normal. By sending the same transaction multiple times, users increase the chance that at least one of their attempts arrives.
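As a toy illustration (hypothetical address, dummy payload), the snippet below shows what a fire-and-forget UDP send looks like: no handshake, no acknowledgement, and no way for the sender to know whether the packet arrived, which is why repeating the send is the only remedy.

```python
import socket

# Hypothetical leader address and serialized transaction; purely illustrative.
LEADER_ADDR = ("203.0.113.10", 8003)   # TEST-NET address, not a real leader
payload = b"\x01" * 1232               # dummy packet of roughly MTU size

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No connection setup, no ack, no retransmission: the sender only knows the
# datagram left its machine. Re-sending is the only way to raise the odds of
# delivery, which is exactly what spamming bots do.
for _ in range(5):
    sock.sendto(payload, LEADER_ADDR)
sock.close()
```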
In contrast to UDP stands the Transmission Control Protocol (TCP). TCP includes more sophisticated features, but for these to work it requires a session, i.e. a connection previously established between client and server. The receiver acknowledges ("acks") packets, so the sender knows when to back off in case of intense traffic. TCP also allows lost packets to be re-transmitted: once the sender stops receiving acks, the interpretation is that something has been lost and the sender should slow down.
TCP is not ideal for some use cases though. In particular, it sequences all traffic. If one portion of the data is lost, everything after it needs to wait. That is not great for Solana transactions, which are independent.
QUIC is a general-purpose protocol which is used by more than half of all connections from the Chrome web browser to Google’s servers. QUIC is the name of the protocol, not an acronym.
QUIC is an alternative to TCP with similar features: it has sessions, which enable backpressure to slow the sender down, but it also has a concept of independent streams, so if one transaction gets dropped it does not block the remaining ones.
Solana is a permissionless network. Anyone running a Solana client is a "node" in the network and is able to send messages to the leader. Nodes can operate as validators (signing and sending votes) and/or expose an RPC (Remote Procedure Call) interface to receive messages from applications such as wallets and DEXs and forward them to the leader.
The leader listens on a UDP port and RPCs listen on a TCP port. Given the leader schedule is public, sophisticated players with algorithmic strategies (“bots”) are able to send transactions to the leader directly, bypassing any additional RPC nodes that would only increase latency. With the leader being spammed, the network gets congested and that deteriorates performance. The UDP port used by the leader will be replaced by a QUIC port.
Quality of Service (“QoS”) is the practice of prioritizing certain types of traffic when there is more traffic than the network can handle.
Last January, after Solana faced performance issues as automated trading strategies (aka “liquidator bots”) spammed the network with more than 2 million packets per second, mostly duplicate messages, Anatoly Yakovenko mentioned in a tweet that they would bring the QoS concept to Solana.
The leader currently tries to process transactions as soon as they arrive. Because source IPs are verifiable through QUIC, validators will be able to prioritize and limit the traffic for specific connections. Instead of validators and RPCs blasting transactions at the leader as fast as they can, effectively DoS'ing it, they would hold a persistent QUIC connection. If a given connection generates too much traffic, it will be possible to identify it and apply policies that limit the number of messages that node can send ("throttling"). These policies are known as QoS.
Internally, stake-weighted QoS means queuing transactions in different channels depending on the sender, weighted by the amount of SOL staked. Non-staked nodes will then be incentivized to send transactions to staked nodes first, instead of sending directly to the leader, for a better chance of finding execution, since excess messages from non-staked nodes will most likely be dropped by the leader.
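A minimal sketch of what stake-weighted traffic shaping might look like on the receiving side is given below; the per-slot packet budget, the stake figures, and the allowance for unstaked peers are all invented for illustration and are not taken from the validator implementation.

```python
# Sketch: allocate a leader's per-slot packet budget across QUIC connections
# in proportion to the sender's stake. All numbers are illustrative only.

PACKETS_PER_SLOT = 100_000   # hypothetical budget the leader can ingest per slot

stake_by_peer = {             # stake (in SOL) behind each connected node
    "validator_A": 3_000_000,
    "validator_B": 1_000_000,
    "rpc_unstaked": 0,
}

total_stake = sum(stake_by_peer.values())

def packet_quota(peer: str) -> int:
    """Packets this peer may send in the current slot before being throttled."""
    stake = stake_by_peer.get(peer, 0)
    if stake == 0:
        # Unstaked peers get only a small fixed allowance, so their excess
        # traffic is the first to be dropped under congestion.
        return 100
    return int(PACKETS_PER_SLOT * stake / total_stake)

for peer in stake_by_peer:
    print(peer, packet_quota(peer))
```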
According to Anatoly, validators will be responsible for shaping their own traffic and applying policies that protect them from abuse. For example, if a particular node sends huge amounts of transactions, even a staked one, validators can take action and ignore the connections established with that node in order to protect network performance.
Solana fees are currently fixed and charged per signature required in a transaction (5,000 lamports = 0.000005 SOL). If there is high competition in a specific market, users face the risk of not getting their transactions executed. With a fixed transaction fee, there is no way to communicate priority or to compete by paying more to get a transaction prioritized. Without alternatives, users (usually bots) spam transactions to the leader (and soon-to-be leaders) in the hope that at least one of them succeeds. In many situations, this behavior generates more traffic than the network can process.
A priority fee is soon to be included in Solana, allowing users to specify an arbitrary "additional fee" to be collected upon execution of the transaction and its inclusion in a block. This mechanism would not only help the network prioritize time-sensitive transactions but should also reduce the number of invalid or duplicated messages sent by algorithms, since speculative operations can become unprofitable as the total cost increases.
The ratio of this fee to the requested compute units (the computational budget the program needs to perform all of its operations) will serve as a transaction's execution priority weight. This ratio will be used by nodes to prioritize the transactions they send to the leader. Additional fees will be treated identically to the base fee today: 50% of the fees paid will be collected by the leader and 50% will be burned.
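In rough pseudocode, the ordering weight and the fee split described above could look like the sketch below (the type and field names are ours, not the client's).

```python
from dataclasses import dataclass

@dataclass
class Tx:
    base_fee_lamports: int        # 5_000 lamports per required signature
    priority_fee_lamports: int    # the additional, user-specified fee
    requested_compute_units: int  # declared computational budget

def priority_weight(tx: Tx) -> float:
    """Fee offered per requested compute unit; higher means scheduled earlier."""
    return tx.priority_fee_lamports / tx.requested_compute_units

def settle_fees(tx: Tx) -> tuple[int, int]:
    """Split total fees as today: 50% to the leader, 50% burned."""
    total = tx.base_fee_lamports + tx.priority_fee_lamports
    to_leader = total // 2
    burned = total - to_leader
    return to_leader, burned

txs = [Tx(5_000, 10_000, 200_000), Tx(5_000, 1_000, 50_000)]
ordered = sorted(txs, key=priority_weight, reverse=True)  # first tx wins priority
print([priority_weight(t) for t in ordered], settle_fees(ordered[0]))
```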
At this point, you could imagine several blocks being filled only with transactions targeting an NFT mint. However, there is a limit on how long each account can be write-locked within a single slot (600 to 800 milliseconds). The remaining block space can be filled with transactions writing to different accounts, even if they offer a smaller priority fee. High-priority transactions trying to write to an account that has already reached its limit will be included in the next block.
Each Solana transaction specifies the accounts it reads and the accounts it writes to (the portion of the state that will be modified). This allows transactions to be executed in parallel as long as they are independent, i.e. they do not touch the same state: two transactions that write to the same account, or where one writes to an account the other reads, cannot be processed in parallel, because they affect the same state.
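A toy version of this conflict rule, with made-up account names, could look like this:

```python
def conflicts(tx_a: dict, tx_b: dict) -> bool:
    """True if the two transactions touch overlapping state and therefore
    cannot be executed in parallel."""
    a_w, b_w = set(tx_a["writes"]), set(tx_b["writes"])
    a_r, b_r = set(tx_a["reads"]), set(tx_b["reads"])
    # A write by either side to anything the other reads or writes is a conflict.
    return bool(a_w & (b_w | b_r)) or bool(b_w & a_r)

mint_tx       = {"writes": {"nft_mint_account"}, "reads": {"token_program"}}
swap_tx       = {"writes": {"sol_usdc_pool"},    "reads": {"token_program"}}
other_mint_tx = {"writes": {"nft_mint_account"}, "reads": set()}

print(conflicts(mint_tx, swap_tx))        # False: independent, can run in parallel
print(conflicts(mint_tx, other_mint_tx))  # True: both write the same account
```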
The Solana team argues that the priority fee will then behave as a set of parallel auctions, affecting only the "hot market" rather than a global price: the fee can grow for the specific queue of transactions trying to write to that account without affecting anything else.
How does the user know which fee to set in order to land a mint? RPC nodes will need to estimate an adequate fee, most likely using a simple statistical method, for example averaging the actual cost of similar transactions in the previous N blocks, or taking a quantile. The optimal method will depend on the market, and on whether fees for similar transactions are volatile (high demand) or stable (less demand).
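One way an RPC node might implement such an estimate, purely as a sketch since the actual heuristic is left to implementers, is to take a quantile (or the mean) of the fees paid by comparable transactions in the last N blocks:

```python
import statistics

def estimate_priority_fee(recent_fees: list[int], quantile: float = 0.75) -> int:
    """Suggest a priority fee (lamports per compute unit) based on the fees
    paid by similar transactions in the previous N blocks.

    A simple mean works for stable markets; a higher quantile is safer when
    fees for the target account are volatile (high demand)."""
    if not recent_fees:
        return 0
    fees = sorted(recent_fees)
    idx = min(int(quantile * len(fees)), len(fees) - 1)
    return fees[idx]

# Hypothetical fees observed for writes to the same "hot" account:
observed = [0, 0, 100, 150, 200, 1_000, 5_000]
print(estimate_priority_fee(observed))   # 75th percentile
print(int(statistics.mean(observed)))    # naive average alternative
```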
In practice, the priority fee can still have a global effect if the parallel auctions are not implemented in the validator client. With RPCs and users responsible for setting it arbitrarily, during periods of intense traffic applications will likely try to get priority even when they do not interact with the "hot market", pushing up fees for other, lower-demand dApps.
The present article covered the three pieces Solana is actively working on to deal with congestion issues: changing the communication protocol from UDP to QUIC, adding stake-weighted QoS for transaction prioritization, and introducing a fee market that raises fees under high demand. All three improvements aim to improve the performance of Solana, which has been experiencing degraded performance quite often.
We hope this helped clarify these concepts and the motivations behind the choices being made. Exploring the Solana source code would be an essential next step to investigate the exact metrics implemented in QoS to select or drop transactions, the mechanism behind the increase (and decrease) of fees, and other questions that remain unanswered.
I would like to thank the Chorus One team for the enlightening discussions and knowledge sharing, especially Ruud van Asseldonk for the technical review, and Xavier Meegan for the support.
This is the second article of the Solana MEV outlook series. In this series, we use a subset of transactions to extrapolate which type of Maximum Extractable Value (MEV) is being extracted on the Solana network and by whom.
MEV is an extensive field of research, ranging from opportunities created by network design or application-specific behaviour to trading strategies similar to those applied in traditional financial markets. As a starting point, our attempt is to investigate whether sandwich attacks are happening. In the first article, we examined Orca's swap transactions searching for evidence of this pattern; head to Solana MEV Outlook — part 1 for a detailed introduction, goals, challenges and methodology. A similar study is performed in the present article. We are going to look at on-chain data, considering approximately 8 h of transactions on the Raydium DEX. This simplification is necessary given the magnitude of the data, roughly 4 × 10⁷ transactions per day when considering only Decentralized Exchange (DEX) applications on Solana, and it lets us get familiar with the data, extracting as much information as we can before extending the analysis to a wider range of transactions in future work.
Raydium is a relevant Automated Market Maker (AMM) application in the Solana ecosystem: the second program by number of daily active users and the third by program activity.
The Raydium program offers two different swap instructions, SwapBaseIn (the exact input amount is specified together with a minimum output) and SwapBaseOut (the exact output amount is specified together with a maximum input).
The user interface ("UI") interacting with the smart contract uses the first type, which leaves SwapBaseIn responsible for 99.9% of successfully executed swap instructions.
We built a dataset by extracting the user's inputs from the data byte array passed to the program and the actual swap amounts from the instructions contained in each transaction. Comparing the minimum amount of tokens specified in the transaction with the actual amount the user received, we estimate the maximum slippage tolerance for every transaction and build a histogram of the results.
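For concreteness, the decoding and slippage-estimation step can be sketched as follows, assuming the SwapBaseIn data layout published in Raydium's open-source AMM program (a one-byte instruction tag followed by two little-endian u64 fields); the example values are made up.

```python
import struct

def decode_swap_base_in(data: bytes) -> tuple[int, int]:
    """Decode a SwapBaseIn instruction's data byte array.

    Assumed layout (from Raydium's open-source AMM program): a one-byte
    instruction tag (9 for SwapBaseIn), then two little-endian u64 fields,
    amount_in and minimum_amount_out."""
    _tag, amount_in, minimum_amount_out = struct.unpack_from("<BQQ", data)
    return amount_in, minimum_amount_out

def estimated_slippage(minimum_amount_out: int, actual_amount_out: int) -> float:
    """Maximum slippage tolerance implied by the user's floor versus what the
    swap actually returned (a lower bound, since the quote at creation time
    is unknown)."""
    return 1 - minimum_amount_out / actual_amount_out

# Made-up example: the user sends 1_000_000 raw units of token A and will
# accept no less than 985_000 raw units of token B; the swap returns 1_000_000.
data = struct.pack("<BQQ", 9, 1_000_000, 985_000)
amount_in, min_out = decode_swap_base_in(data)
print(f"estimated slippage: {estimated_slippage(min_out, 1_000_000):.2%}")  # 1.50%
```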
The default value for slippage on the Raydium App is set to 1%. We can assume that at least 28% of transactions use the default value. Since it is not possible to know the state of the pool when creating the transaction, this number could be a bit higher.
It is fair to assume that slippage values close to 0% are only achieved by sophisticated investors using automated trading strategies. The Orca swap histogram presented in Fig 2.2 of the previous article shows a peak of transactions with slippage around 0.1%; on Raydium, a relevant proportion of transactions lies below 0.05%. This suggests that strategies with low risk tolerance, i.e. price-sensitive strategies, correspond to around 25% of swap transactions (accumulating the first two bars of the histogram).
Further evidence that automated trading is common on this DEX is that, on average, 40% of transactions fail, mostly because of the tight slippage tolerance set by users.
We are considering more than 30,000 instructions interacting with the Raydium AMM program, from time 02:43:41 to time 10:25:21 of 2022–04–06 UTC. For statistical purposes, failed transactions are ignored.
Although 114 different liquidity pools are accessed during this period, the SOL/USDC pool is the most traded pool, with 4,000 transactions.
The sample contains 1,366 different validators acting as leaders across the more than 35,000 slots we are considering, representing 93% of the total stake and 78% of the total validator population at the time of writing, according to Solana Beach.
Of 5,101 different addresses executing transactions, 10 accounts concentrate 23% of the total transactions. One of the most active accounts on Raydium, Cwy…3tf, also appears in the top 5 accounts on the Orca DEX.
The graph below shows the total number of transactions for accounts with at least two transactions in the same slot. Using this as a proxy to identify automated trading, on average 9 different accounts can be classified as bots.
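As a rough sketch of this classification, with a made-up schema and toy data:

```python
import pandas as pd

# Hypothetical dataframe with one row per successful swap; the column names
# ("signer", "slot") are ours, not a standard schema.
swaps = pd.DataFrame({
    "signer": ["A", "A", "B", "C", "C", "C"],
    "slot":   [100, 100, 100, 101, 101, 102],
})

# Accounts sending two or more swaps within the same slot: a rough proxy
# for automated trading, as used above.
per_slot = swaps.groupby(["signer", "slot"]).size()
bots = per_slot[per_slot >= 2].index.get_level_values("signer").unique()
print(list(bots))   # ['A', 'C']
```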
We can also look at the pools where these accounts trade most often; they tend to specialize in different pools. The table below shows the two most-traded pools for each of the 5 most active addresses.
By deep-diving into account activity by pool, we can see that two accounts concentrate transactions on WSOL/USDT pool; one account is responsible for half of all transactions in the mSOL/USDC pool; most of the transactions in the GENE/RAY pool are done by only one account (Cwy…3tf).
Searching for sandwich behaviour means identifying at least 3 transactions executed in the same pool within a short period of time. For the purpose of this study, only consecutive transactions are considered. The pattern requires the first transaction to trade in the same direction as the sandwiched (victim) transaction, followed by a final transaction in the opposite direction that closes out the MEV player's position.
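A simplified version of the filter we are describing, not the exact query used for the analysis, could look like this:

```python
from collections import namedtuple

Swap = namedtuple("Swap", "slot pool signer direction")  # direction: "buy" or "sell"

def candidate_sandwiches(swaps: list) -> list:
    """Flag windows of three consecutive swaps (assumed in execution order)
    in the same pool where the first and last come from the same signer, the
    first trades in the same direction as the (victim) middle swap, and the
    last trades in the opposite direction, closing the position."""
    hits = []
    for front, victim, back in zip(swaps, swaps[1:], swaps[2:]):
        same_pool = front.pool == victim.pool == back.pool
        same_attacker = front.signer == back.signer and front.signer != victim.signer
        opens_then_closes = (front.direction == victim.direction
                             and back.direction != front.direction)
        if same_pool and same_attacker and opens_then_closes:
            hits.append((front, victim, back))
    return hits

example = [
    Swap(1, "SOL/USDC", "attacker", "buy"),
    Swap(1, "SOL/USDC", "victim", "buy"),
    Swap(1, "SOL/USDC", "attacker", "sell"),
]
print(len(candidate_sandwiches(example)))   # 1 candidate for manual inspection
```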
The need for price impact implies a dependence on the amount of capital available for every trade. Some MEV strategies can be performed atomically, with a sequence of operations executed within a single transaction; these usually benefit from flash loans, which allow anyone to apply them regardless of the capital they hold. This is not the case for sandwich attacks, since the profit is only realized after the successful execution of all the transactions involved (Fig. 10).
As shown in the first article, the amount of capital needed to extract value depends on the Total Value Locked in the pool: the deeper the liquidity, the more difficult it is to move the price. Head to Fig. 2.4 of the first article for the results of a simulation on Orca's SOL/USDC pool; the figure shows the initial capital needed to extract a given percentage of the allowed slippage.
In the current sample, we found 129 blocks with more than three swaps in the same pool; most of these swaps are in the same direction, showing no evidence of profit-taking. As shown in Fig. 11 below, the SAMO/RAY pool has the most occurrences of multiple swaps in the same slot.
When filtering for blocks and pools with swaps in opposite directions as a proxy for profit-taking, 9 occurrences remain with a potential sandwich-attack pattern, as shown in the table below (Fig 12). After further investigation of these transactions and the context in which the instructions were executed, it is fair to assume the operations are related to arbitrage between different trading venues or pools.
In this report, we assessed the activity of the Raydium DEX. The conclusions are based on a limited amount of data, assuming our sample is comprehensive enough to reflect the general practices involving the dApp.
There is relevant activity from automated trading and price-sensitive strategies such as arbitrage, corresponding to 25% of swap transactions. On average, only 40% of transactions are successfully executed, and 72% of all reverted transactions fail because of a small slippage tolerance. Approximately 28% of transactions can be classified as manual trading, since they use the default slippage value.
Of 5101 different accounts interacting with the Raydium program, 10 accounts concentrate 23% of the total transactions. One of the most active accounts on Raydium, Cwy…3tf also appears in the top 5 accounts in Orca DEX transactions. This same account is responsible for 77% of swaps in the GENE/RAY pool.
There were 9 occurrences of a potential sandwich-attack pattern, all discarded after further investigation.
It is important to mention that this behaviour depends not only on theoretical possibility but is largely driven by market conditions. The results in "$13m MEV during Wormhole Incident" and "$43m Total MEV from Luna/UST Collapse on Solana" demonstrate the increase in profit extracted from MEV opportunities during stressful scenarios. Although those studies focus on different strategies and do not mention sandwich attacks, the probability of this strategy occurring can also increase, given the smaller liquidity (TVL) in pools and the occurrence of trades with bigger sizes and slippage tolerances.
This is my first published article. I hope you enjoyed it. If you have questions, leave your comment below and I will be happy to help.
Solana is a young blockchain, and having a complete picture of what is happening on-chain is a difficult task, especially given the high number of transactions processed daily. The current TPS is around 2,000, meaning we need to deal with ~10⁸ transactions per day, see Fig. 1.1.
When processing transactions, we have to deal with the impossibility of knowing a transaction's status a priori, before querying information from an RPC node. This means we are forced to process both successful and failed transactions. Failed transactions, most of which come from spamming bots trying to make a profit (e.g. NFT mints, arbitrage, etc.), constitute ~20% of the successful ones. The situation improves slightly if we consider only program activity: by looking only at what happens on Decentralized Exchanges (DEXs), we are talking about 4×10⁷ transactions per day, see Fig. 1.2. This makes it clear that a big effort is required to assess which types of Maximum Extractable Value (MEV) extraction are taking place and who is taking advantage of them, especially because tools like Flashbots do not exist on Solana.
In what follows, we are going to estimate what happened on-chain considering only ~5 h of transactions on the Orca DEX, from 11:31:41 to 16:34:19 on 2022–03–14. This simplification lets us get familiar with the data, extracting as much information as we can before extending the analysis to a wider range of transactions. It is worth mentioning that Orca is not the DEX with the highest number of processed instructions, so a more careful analysis should also look into other DEXs; this is left for future study.
The aim of this preliminary analysis is to gain familiarity with the information contained in typical swap transactions. One of our first goals is to determine whether sandwich attacks are happening and, if so, with what frequency. In Section 2, we look at the anatomy of a swap transaction, focussing on the types of sandwich swap in Section 2.1. Section 2.2 is devoted to describing the "actors" that can perform a sandwich attack. In Section 3, we describe the dataset employed, leaving the results for Section 4. Conclusions are drawn in Section 5.
On Solana, transactions are made of one or more instructions. Each instruction specifies the program that executes it, the accounts involved, and a data byte array that is passed to the program. It is the program's task to interpret the data array and operate on the accounts specified by the instruction. Once a program runs, it can return only two possible outcomes, success or failure, and an error return causes the entire transaction to fail immediately. For more details about the general anatomy of a transaction, see the Solana documentation.
To decode each instruction we need to know how the specific program is written. We know that Orca is a Token Swap Program, so we have all the ingredients needed to process the data. Looking at the token swap instruction, we can immediately see that a generic swap takes as inputs the amount of tokens the user wants to swap and the minimum amount of tokens in output needed to avoid excessive slippage, see Fig. 2.1.
The minimum amount of tokens in output is related to the actual number of tokens in output by the slippage S, i.e.

minimum_amount_out = amount_out · (1 − S),   (2.1)

from which

S = 1 − minimum_amount_out / amount_out.   (2.2)
Thus, we can extract the token in input and the minimum token in output from the data byte array passed to the program, and the actual token in output by looking at the instructions contained in the transaction.
By computing the corresponding slippage defined in Eq. (2.2) we obtain the histogram in Fig. 2.2, from which we can extract several pieces of information. The first is the clustering of transactions around the default slippage values on Orca, i.e. 0.1%, 0.5% and 1%. This makes complete sense, since the common user tends to use default values without spending time on customization. The second is users' preference for the lowest slippage value. The last concerns the shape of the tails around the default values: a more detailed analysis is needed here, since it is not easy to know what exactly they contain. The shape surely depends on the bid/ask scatter, which is a pure consequence of market dynamics, and the tails may also contain users who select a slippage different from the default values. However, one thing is assured: this histogram contains swaps for which the available slippage can be extracted. As we will see, from this we can extrapolate an estimate of the annualized revenue obtainable from sandwich attacks.
The goal of this report is to search for hints of sandwich swaps happening on the Orca DEX. All findings will be used for future research, so we think it is useful to define what we refer to as sandwich swaps and how someone can take advantage of them.
Let's start with the basic definition. Assume a user, Alice, wants to buy a token X on a DEX that uses an automated market maker (AMM) model. Now assume that an adversary, Bob, sees Alice's transaction and can create two transactions of his own, inserted before and after Alice's transaction (sandwiching it). In this configuration, Bob first buys the same token X, which pushes up the price for Alice's transaction, and the third transaction is Bob's sale of token X (now at a higher price) at a profit, see Fig. 2.3. This mechanism works as long as the price at which Alice buys X remains below X·(1+S), where S is the slippage tolerance Alice set when she sent the swap transaction to the DEX.
Since Bob needs to increase the price of token X inside the pool where Alice is performing the swap, it is evident that the swaps inserted by Bob must target the same pool used by Alice.
From the example above, it may happen that Bob does not have the capital needed to significantly change the price of X inside the pool. Suppose that the pool under scrutiny regards the pair X/Y and that the AMM implements a constant product curve. In math formulas we have:

x · y = k,   (2.3)

where k is the curve invariant and x, y are the amounts of tokens X and Y in the pool. If we set the number of tokens Y in the pool equal to 1,000,000 and the number of tokens X equal to 5,000,000, and assume that Alice wants to swap 1,000 token Y, the amount of token X in output is:

Δx = x − k / (y + Δy) = 5,000,000 − 5×10¹² / 1,001,000 ≈ 4,995.00.   (2.4)
It is worth noting that here we are not considering the fee usually paid by the user. If Alice sets a slippage of 5%, the transaction will only execute if the output stays above 4,745.25 token X. This means that if Bob wants to take this full 5%, he needs an initial capital of roughly 26,000 token Y.
Sometimes this capital may be inaccessible, allowing Bob to only take a portion of the 5% slippage. For example, let’s consider the Orca pool SOL/USDC, with a total value locked (TVL) of $108,982,050.84 at the time of writing. This pool implements a constant product curve, which allows us to use Eqs. (2.3) and (2.4) to simulate a sandwich attack. Fig. 2.4 shows the result of this calculation.
It is clear that the initial capital to invest may not be accessible to everyone. Further, it is important to clarify that the result is independent of the swap amount: whatever amount Alice swaps, it is Bob's swap that "moves" the price of the tokens inside the pool. The scenario is, instead, TVL-dependent: if we repeat the same simulation for the Orca ETH/USDC pool, with a TVL of $2,765,189.76, the initial capital needed to extract a given percentage of Alice's slippage decreases drastically, see Fig. 2.5.
Returning to the example above, consider the case in which Bob has an initial capital of 2,000 token Y. If he manages to buy token X with it right before Alice's transaction, Alice will obtain an output of 4,975.09 token X, which is only 0.4% lower than the original amount computed in Eq. (2.4).
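These numbers can be reproduced with a few lines of constant-product arithmetic; the sketch below uses the same pool reserves as the example above and ignores fees, as in the text.

```python
def swap_out(x_pool: float, y_pool: float, dy_in: float) -> float:
    """Constant-product swap (Eq. 2.3/2.4): pay dy_in of token Y, receive token X."""
    k = x_pool * y_pool
    return x_pool - k / (y_pool + dy_in)

X0, Y0 = 5_000_000.0, 1_000_000.0        # pool reserves of tokens X and Y
ALICE_IN = 1_000.0                        # Alice swaps 1,000 token Y

print(swap_out(X0, Y0, ALICE_IN))                   # ~4,995.00, Eq. (2.4)
min_out = swap_out(X0, Y0, ALICE_IN) * (1 - 0.05)
print(min_out)                                      # ~4,745.25 with 5% slippage

def alice_out_after_frontrun(bob_in: float) -> float:
    """Alice's output if Bob swaps bob_in token Y into the pool first."""
    bob_out = swap_out(X0, Y0, bob_in)
    return swap_out(X0 - bob_out, Y0 + bob_in, ALICE_IN)

print(alice_out_after_frontrun(2_000))    # ~4,975.09: Bob captures only ~0.4%
print(alice_out_after_frontrun(26_000))   # ~4,745: consumes essentially the full 5%
```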
At this point, Bob has another option. He can try to order other users' transactions that buy the same token X after his own transaction, but immediately before Alice's swap. In this way, he uses other users' capital to take advantage of Alice's slippage even though his own capital is not enough to do so, see Fig. 2.6. This is of course a more elaborate attack, but a plausible one if Bob has visibility into the incoming order flow.
It is not easy to spot the actors behind a sandwich attack on Solana. In principle, the only reliably profitable attackers are the leaders: there is no public mempool, and the only ones that know the exact details of pending transactions are the validators in charge of writing a block. In this case, it may actually be easier to spot hints of a sandwich attack: if a leader orders swap transactions to perform a sandwich, it should include all of them in the same block to avoid an unsuccessful sandwich.
The next suspect is the RPC service the dApp is using. The RPC service is the first to receive the transaction over HTTP, since its role is to look up the current leader's info using the leader schedule and forward the transaction to the leader's Transaction Processing Unit (TPU). In this case, it would be much more difficult to spot hints of sandwiching, since in principle the swap transactions involved can be far apart from each other. The only hook we can use to catch the culprit is to spot surrounding transactions made by the same user, which can then be related to the RPC. Low fees on Solana also raise the likelihood that a sandwich is attempted simply by spamming transactions into a specific pool and hoping for favourable ordering. This last approach is clearly the riskiest, since there is no certainty that the sequence of transactions will be included in the exact order the attacker planned.
Before entering the details of the analysis, it is worth mentioning that, according to Solana Beach, there are a total of 1,696 active validators. Our sample contains 922 of them, i.e. 54.37% of the total validator population. The table below shows the validators that appear as leaders in the time window we are considering. Given that the likelihood of being selected as leader is proportional to stake, we consider it fair to assume that our sample is a good representation of what is happening on Orca. Indeed, if a validator were running a modified client to perform sandwich swaps, its success rate would be related to the amount of tokens staked with it, not only to actual MEV opportunities. Further, modifying the validator is not an easy task, so smaller validators will not have the resources to do it. Since our sample includes all 21 validators holding a supermajority plus a good portion of the others (i.e. roughly half of the currently active validators), if such a validator exists, its behaviour should be easy to spot in our sample. However, a complete overview of the network would require scrutinizing all validators, without assumptions of this kind; that is beyond the scope of this report, which primarily aims to explore which types of sandwich can be performed and how to spot them.
Having clarified this aspect, we first classify the types of swaps performed on the Orca DEX. The table below shows the accounts performing more than two transactions. It is immediately visible that most of the transactions are done by only 2 of the 78 accounts involved.
As explained in Section 1, we are considering ~5 h of transactions on the Orca DEX, from 11:31:41 to 16:34:19 on 2022–03–14. This sample contains a total of 12,106 swaps, whose pool distribution is shown in Fig. 3.1.
By deep-diving into the swaps, we can see that most of the transactions in the 1SOL/SOL [aq] and 1SOL/USDC [aq] pools are done by only two accounts, see Fig. 3.2. Here [aq] stands for Aquafarm, Orca's yield farming program. We can also see the presence of some aggregator swaps in the SOL/USDC [aq] and ORCA/USDC [aq] pools.
We started by searching for leaders performing sandwich swaps. As described in Section 2.1, a sandwich can happen in two ways; for both of them, if the surrounding is done by a leader, we should see the transactions under scrutiny included in the same block. This is because, if a leader wants to make a profit, the best strategy is to avoid market fluctuations; further, if the attacker orders the transactions without completing the surrounding in the same block, there is a non-negligible possibility that another leader reorders transactions and cancels out the attacker's setup.
Looking at slots containing more than 3 swaps in the same pool, we ended up with 6 such slots out of 7,479. Deep-diving into these transactions, we found no trace of a sandwich attack performed within the same block (and so, by a specific leader): each of the transactions involved is made by a different user, showing no evidence of surrounding swaps intended to perform a sandwich attack. The only suspicious series of transactions is included in block #124899704. We checked that the involved accounts interact with the program MEV1HDn99aybER3U3oa9MySSXqoEZNDEQ4miAimTjaW, which seems to be an aggregator for arbitrage opportunities.
As mentioned in Section 2.2, validators are not the only possible actors. Thus, to complete the analysis we also searched for general surrounding transactions, without constraining them to be included in the same block. We find that only 1% of total swaps are surrounded, again without strong evidence of actual sandwich attacks (see Fig. 4.1 for the percentage distribution). Indeed, looking at those transactions, it turns out that the amount of tokens exchanged is too low to constitute a sandwich attack (see Sec. 2).
Before ending this section, it is worth mentioning that if we extrapolate the annual revenue a leader could obtain by capturing 50% of the available slippage on swaps with a slippage greater than 1%, we arrive at roughly 240,000 USD (assuming the attacker is among the 21 validators holding a supermajority), see Fig. 4.2. Of course, this is not a precise estimate, since it is an extrapolation from only ~5 h of transactions, so the actual revenue could be quite different. Further, this amount is not easily accessible, for the reasons shown in Sec. 2. However, it clearly paves the way for a new type of protection that validators should offer to users, especially considering that Orca is not the DEX with the highest number of processed swaps. Since at the moment there is no evidence that swaps are being sandwiched, we will take no action in this direction; instead, we will continue monitoring different DEXs by taking snapshots over different timeframes, informing our users if a sandwich attack is spotted on Solana.
In this report, we defined two types of sandwich attack that may happen on a given DEX. We further described the possible actors that can perform such attacks on Solana and how to spot them. We analyzed data from ~5 h of transactions on the Orca DEX, from 11:31:41 to 16:34:19 on 2022–03–14 (that is, 12,106 swaps). Despite the limited number of transactions employed, we argued why we believe this sample is a fairly good representation of the entire population.
Our findings show no evidence that sandwich attacks are happening on Solana, considering two possibilities: that a validator is running a modified client "trained" to perform sandwich attacks on Orca, or that an RPC is submitting surrounding transactions. We discovered that only 1% of transactions are actually surrounded by the same user, and none of them are included in the same block, excluding the possibility that a leader is taking advantage of the slippage. Deep-diving into these, we found that the amounts exchanged are too low to exploit the available slippage and make a sandwich attack profitable.
We also showed how the capital needed to make sandwich attacks profitable may not be accessible to everyone, narrowing the circle of possible actors.