Maximal Extractable Value (MEV) has become a defining force on blockchains, particularly on networks like Ethereum and Solana. With sub-second block times and high throughput, Solana faces unique challenges and opportunities in the MEV space. Unlike Ethereum's block-building marketplace model, Solana's mempool-less architecture has led to a different MEV extraction dynamic, characterized by high-speed competition and potential network congestion.
Solana's unique features, including Gulf Stream for mempool-less transaction forwarding, have enabled remarkable speed and efficiency. However, these same features have also created an MEV landscape that requires innovative approaches.
The current methods of MEV extraction on Solana have several drawbacks. Searchers competing on latency often flood the network with duplicate transactions to ensure MEV capture, leading to periods of intense congestion and degraded transaction processing for all users.
The winner-takes-all nature of Solana MEV opportunities results in a high rate of failed transactions. These failed transactions still consume compute resources and network bandwidth. Studies have shown that up to 75% of transactions interacting with DEX aggregators can fail during periods of high activity.
Moreover, the concentration of MEV capture among a few players threatens network decentralization as these entities accumulate more resources and influence. In Ethereum, the use of external searchers and block-builders has led to private order flow deals and extreme centralization: a single builder has created over 50% of Ethereum blocks, with only two builders responsible for 95% and four entities building 99% of all blocks.
Paladin introduces a solution to address these issues. It consists of two main components:
The Paladin bot is a high-speed, open-source arbitrage bot that runs locally on validators. It works only when the validator is the leader and is integrated with the Jito client. By running directly on the validator, it captures all riskless and straightforward MEV opportunities (e.g., atomic arbitrage, CeFi/DeFi arbitrage) faster than searchers, without needing to outsource these opportunities and pay around 20% of the MEV to external entities. Any unsupported or more advanced MEV strategies that the Paladin bot doesn't recognize can still be captured by the Jito auction, making it a net positive for the ecosystem.
The bot listens to state updates from the geyser interface, allowing real-time opportunity detection. Validators can choose which tokens and protocols to interact with, allowing more conservative validators to alleviate legal concerns about interacting directly with tokens they deem securities.
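To make the mechanics concrete, here is a minimal Python sketch of the detection loop described above, assuming a hypothetical stream of decoded pool states. The actual Paladin bot runs inside the validator against the geyser interface, so its interfaces and math differ; this only illustrates the idea of re-checking for atomic arbitrage on each state update.

```python
# Minimal sketch of leader-slot opportunity detection (illustrative only; the
# real Paladin bot is validator-integrated and consumes geyser state updates).
from dataclasses import dataclass

@dataclass
class PoolState:
    base_reserve: float   # e.g. SOL held by the pool
    quote_reserve: float  # e.g. USDC held by the pool

    @property
    def price(self) -> float:
        return self.quote_reserve / self.base_reserve

def atomic_arb_profit(a: PoolState, b: PoolState, size_quote: float) -> float:
    """Profit (in quote units) from buying on the cheaper pool and selling on
    the richer one, using constant-product math and ignoring fees."""
    buy, sell = (a, b) if a.price < b.price else (b, a)
    # Base tokens received for `size_quote` on the cheap pool (x * y = k).
    base_out = buy.base_reserve - (buy.base_reserve * buy.quote_reserve) / (buy.quote_reserve + size_quote)
    # Quote tokens received for selling that base on the expensive pool.
    quote_out = sell.quote_reserve - (sell.base_reserve * sell.quote_reserve) / (sell.base_reserve + base_out)
    return quote_out - size_quote

# Each state update during the validator's own leader slot re-triggers detection:
orca = PoolState(1_000, 150_000)     # implied price 150
raydium = PoolState(1_000, 155_000)  # implied price 155
profit = atomic_arb_profit(orca, raydium, size_quote=1_000.0)
if profit > 0:
    print(f"bundle atomic arb, expected profit ~ {profit:.2f} USDC")
```

Because detection happens only during the validator's own leader slot, there is no latency race against external searchers for these riskless opportunities.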
The PAL token is designed to align the incentives of validators and users and create a robust MEV extraction mechanism. With the entire supply of one billion airdropped at launch, PAL is distributed among validators, their stakers, Solana builders, the team, and a development fund.
PAL can be staked by validators and their delegators, with rewards proportional to their SOL stake. The token has a unique MEV distribution mechanism: 10% of captured MEV is funneled to PAL token holders, of which 97.5% flows back to validators and their stakers. A majority of staked PAL can vote to slash the staked PAL of validators who engage in dishonest actions, such as running closed-source modifications of Paladin instead of adhering to the "just run Paladin" principle.
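Under one reading of the split described above (our assumption; Paladin's exact accounting may differ), the flow of a captured opportunity looks like this:

```python
# Hypothetical accounting of the MEV split described above. The percentages come
# from the article; the exact on-chain flow of funds in Paladin may differ.
def split_mev(captured_mev_sol: float) -> dict[str, float]:
    to_pal = captured_mev_sol * 0.10   # 10% of captured MEV to PAL holders
    to_validators = to_pal * 0.975     # 97.5% of that back to validators/stakers
    return {
        "validator_and_stakers_direct": captured_mev_sol - to_pal,
        "pal_validators_and_stakers": to_validators,
        "pal_other_holders": to_pal - to_validators,
    }

print(split_mev(100.0))
# {'validator_and_stakers_direct': 90.0,
#  'pal_validators_and_stakers': 9.75,
#  'pal_other_holders': 0.25}
```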
Paladin's design creates dynamics that contribute to its sustainability. The "Pack of Wolves" dynamic incentivizes validators to "run with the pack" by honestly running Paladin. Going against the pack risks slashing and loss of rewards. This creates a self-reinforcing system of honest behavior.
As more validators run Paladin, a flywheel effect is created. More MEV is funneled to PAL holders, increasing the value of PAL and further incentivizing participation. This alignment of long-term interests incentivizes validators to behave honestly rather than pursue short-term gains through harmful practices like frontrunning.
Moreover, by allowing all validators to participate in MEV extraction, Paladin prevents centralization while still allowing searchers to implement more specialized strategies. The bot's open-source nature and transparent reward distribution create a fairer MEV landscape, benefiting the entire Solana ecosystem.
At Chorus One, we recognize Paladin's transformative potential. We've taken the proactive step of integrating Paladin into one of our Solana validators, Chorus One Palidator.
If you have been following Chorus One, you will know we have a deep interest in MEV. Almost two years ago, we open-sourced our proof-of-concept called 'Breaking Bots' to capture MEV on Solana efficiently and ethically. Paladin's proposition is similar in spirit but takes a different approach with the PAL token, which was not part of our proof-of-concept.
The integration of Paladin with our validator is a significant step in addressing the challenges of MEV on Solana. We invite Solana stakers to join us in this effort by delegating to our Palidator. Let’s move towards a model that benefits all participants rather than a select few.
As the MEV landscape evolves, Chorus One is committed to exploring and implementing solutions that benefit our delegators and the wider Solana community.
Blog articles
https://chorus.one/articles/metrics-that-matter
https://chorus.one/articles/solana-mev-client-an-alternative-way-to-capture-mev-on-solana
https://chorus.one/articles/solana-validator-economics
https://chorus.one/articles/analyzing-mev-instances-on-solana-part-3
https://chorus.one/articles/analyzing-mev-instances-on-solana-part-2
https://chorus.one/articles/analyzing-mev-instances-on-solana-part-1
Podcasts
Solana's Next Big Moves: From Memecoins to Staking—What's Coming Next?
Exploring Marinade V2 and the state of Solana Staking
The rapid expansion of AI-driven applications and platforms in 2024 has revolutionized everything from email composition to the rise of virtual influencers. AI has permeated countless aspects of our daily lives, offering unprecedented convenience and capabilities. However, with this explosive growth comes an increasingly urgent question: How can we enjoy the benefits of AI without compromising our privacy? This concern extends beyond AI to other domains where sensitive data exchange is critical, such as healthcare, identity verification, and trading. While privacy is often viewed as an impediment to these use cases, Nillion posits that it can actually be an enabler. In this article, we'll delve into the current challenges surrounding private data exchange, how Nillion addresses these issues, and explore the potential it unlocks.
Privacy in blockchain technology is not a novel concept. Over the years, several protocols have emerged, offering solutions like private transactions and obfuscation of user identities. However, privacy extends far beyond financial transactions. It could be argued that privacy has the potential to unlock a multitude of non-financial use cases—if only we could compute on private data without compromising its confidentiality. Feeding private data into generative AI platforms or allowing them to train on user-generated content raises significant privacy concerns.
Every day, we unknowingly share fragments of our data through various channels. This data can be categorized into three broad types:
The publicly shared data has fueled the growth of social media and the internet, generating billions of dollars in economic value and creating jobs. Companies have capitalized on this data to improve algorithms and enhance targeted advertising, leading to a concentration of data within a few powerful entities, as evidenced by scandals like Cambridge Analytica. Users, often unaware of the implications, continue to feed these data monopolies, further entrenching their dominance. With the rise of AI wearables, the potential for privacy invasion only increases.
As awareness of the importance of privacy grows, it becomes clear that while people are generally comfortable with their data being used, they want its contents to remain confidential. This desire for privacy presents a significant challenge: how can we allow services to use data without revealing the underlying information? Traditional encryption methods require decryption before computation, which introduces security vulnerabilities and increases the risk of data misuse.
Another critical issue is the concentration of sensitive data. Ideally, high-value data should be decentralized to avoid central points of failure, but sharing data across multiple parties or nodes raises concerns about efficiency and consistent security standards.
This is where Nillion comes in. While blockchains have decentralized transactions, Nillion seeks to decentralize high-value data itself.
Nillion is a secure computation network designed to decentralize trust for high-value data. It addresses privacy challenges by leveraging Privacy-Enhancing Technologies (PETs), particularly Multi-Party Computation (MPC). These PETs enable users to securely store high-value data on Nillion's peer-to-peer network of nodes and allow computations to be executed on the masked data itself. This approach eliminates the need to decrypt data prior to computation, thereby enhancing the security of sensitive information.
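To give a flavor of the MPC techniques involved, here is a textbook additive secret sharing example in Python. It is a toy illustration of computing on masked data, not Nillion's actual protocol: each node holds a meaningless share, yet the sum of two secrets can still be computed correctly.

```python
# Toy additive secret sharing: illustrates the MPC idea of computing on masked
# data without any node ever seeing the inputs. Not Nillion's actual protocol.
import secrets

PRIME = 2**61 - 1  # field modulus

def share(value: int, n_nodes: int) -> list[int]:
    """Split `value` into n shares; any n-1 shares reveal nothing about it."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_nodes - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def add_shared(a: list[int], b: list[int]) -> list[int]:
    """Each node adds its two shares locally; no communication is needed."""
    return [(x + y) % PRIME for x, y in zip(a, b)]

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

salary_a, salary_b = 90_000, 110_000
shares_sum = add_shared(share(salary_a, 3), share(salary_b, 3))
assert reconstruct(shares_sum) == salary_a + salary_b  # 200_000, computed blind
```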
The Nillion network enables computations on hidden data, unlocking new possibilities across various sectors. Early adopters in the Nillion community are already building tools for private predictive AI, secure storage and compute solutions for healthcare, password management, and trading data. Developers can create applications and services that utilize PETs like MPC to perform blind computations on private user data without revealing it to the network or other users.
The Nillion Network operates through two interdependent layers:
When decentralized applications (dApps) or other blockchain networks require privacy-enhanced data (e.g., blind computations), they must pay in $NIL, the network's native token. The Coordination Layer's nodes manage the payments between the dApp and the Petnet, while infrastructure providers on the Petnet are rewarded in $NIL for securely storing data and performing computations.
The Coordination Layer functions as a Cosmos chain, with infrastructure providers staking $NIL to secure the network, just like in other Cosmos-based chains. This dual-layer architecture ensures that Nillion can scale effectively while maintaining robust security and privacy standards.
At the heart of Nillion's architecture is the concept of clustering. Each cluster consists of a variable number of nodes tailored to meet specific security, cost, and performance requirements. Unlike traditional blockchains, Nillion's compute network does not rely on a global shared state, allowing it to scale both vertically and horizontally. As demand for storage or compute power increases, clusters can scale up their infrastructure or new clusters of nodes can be added.
Clusters can be specialized to handle different types of requests, such as provisioning large amounts of storage for secrets or utilizing specific hardware to accelerate particular computations. This flexibility enables the Nillion network to adapt to various use cases and workloads.
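As an illustration of how such specialization might be expressed, the following sketch models cluster selection by requirement. All fields and the selection rule are hypothetical and not part of any Nillion API; the point is that, without a global shared state, any eligible cluster can serve a given request.

```python
# Illustrative-only model of cluster selection; field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Cluster:
    nodes: int
    storage_gb: int
    has_gpu: bool
    cost_per_op: float

def pick_cluster(clusters: list[Cluster], need_gpu: bool, min_storage_gb: int) -> Cluster:
    eligible = [
        c for c in clusters
        if c.storage_gb >= min_storage_gb and (c.has_gpu or not need_gpu)
    ]
    # No global shared state: any eligible cluster can serve the request,
    # so pick the cheapest one.
    return min(eligible, key=lambda c: c.cost_per_op)

clusters = [Cluster(5, 100, False, 0.01), Cluster(7, 500, True, 0.05)]
print(pick_cluster(clusters, need_gpu=True, min_storage_gb=200))
```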
$NIL is the governance and staking token of the Nillion network, playing a crucial role in securing and managing the network. Its primary functions are securing the Coordination Layer through staking, paying for storage and blind computations on the Petnet, and enabling participation in network governance.
Nillion's advanced data privacy capabilities open up a wide range of potential use cases, both within and beyond the crypto space, from private predictive AI and secure healthcare storage and compute to password management and trading data.
Nillion is currently in the testnet phase, having recently completed its incentivized Genesis Sprint. The network is now running the Catalyst Convergence phase, which aims to seamlessly integrate the Petnet with the Coordination Layer. Nillion also recently announced a partnership with the leading Layer 2 Arbitrum. The tie-up will enable apps on Nillion to tap into Ethereum's security for settlement and bring Nillion's privacy-preserving data processing and storage to Arbitrum dApps.
Staking $NIL with Chorus One
Chorus One is actively collaborating with Nillion and will support $NIL staking when the network launches its mainnet. For those interested in learning more or participating in the network, reach out to us for further information.
In the blockchain industry, where the balance between decentralization and efficiency often teeters on a knife's edge, innovations that address these challenges are paramount. Among these innovations, preconfirmations stand out as a powerful tool designed to enhance transaction speed, security, and reliability. Here, we'll delve into what preconfirmations (henceforth "preconfirms") are, why they matter, and how they're set to transform the blockchain landscape.
The idea of providing a credible heads-up or confirmation that a transaction has occurred is deeply ingrained in our daily lives. Whether it's receiving an order confirmation from Amazon, verifying a credit card payment, or processing transactions in blockchain networks, this concept is familiar and widely used. In the blockchain world, centralized sequencers like those in Arbitrum function similarly, offering guarantees that your transaction will be included in the block.
However, these guarantees are not without limitations. True finality is only achieved when the transaction is settled on Ethereum. The reliance on centralized sequencers in Layer 2 (L2) networks, which are responsible for verifying, ordering, and batching transactions before they are committed to the main blockchain (Layer 1), presents significant challenges. They can become single points of failure, leading to increased risks of transaction censorship and bottlenecks in the process.
This is where preconfirms come into play. Preconfirms were introduced to address these challenges, providing a more secure and efficient way to ensure transaction integrity in decentralized networks.
Before jumping into the preconfirms trenches, let’s start by clarifying some key terms that will appear throughout this article (and are essential to the broader topic).
Builders: In the context of Ethereum and Proposer-Builder Separation (PBS), builders are responsible for selecting and ordering transactions in a block. This is a specialized role with the goal of creating blocks with the highest value for the proposer, and block building today is highly concentrated among a few entities. Blocks are submitted to relays, which act as mediators between builders and proposers.
Proposers: The role of the proposer is to validate the contents of the most valuable block submitted by the block builders, and to propose this block to the network to be included as the new head of the blockchain. In this landscape, proposers are the validators in the Proof-of-Stake consensus protocol, and they get rewarded for proposing blocks (the winning builder pays the proposer the value of its bid); a toy sketch of this auction follows these definitions.
Sequencers: Sequencers are akin to air traffic controllers, particularly within Layer 2 rollup networks. They are responsible for coordinating and ordering transactions between the rollup and the Layer 1 chain (such as Ethereum) for final settlement. Because they have exclusive rights over transaction ordering, they also benefit from transaction fees and MEV. The rollups they serve typically rely on ZK or optimistic security guarantees.
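As promised, here is a toy sketch of the block auction implied by these roles: builders submit bids via relays, and the proposer signs the most valuable one. The data shapes are illustrative, not an actual relay API.

```python
# Toy PBS block auction (illustrative data shapes, not a real relay API).
from dataclasses import dataclass

@dataclass
class BuilderBid:
    builder: str
    value_eth: float  # payment to the proposer if this block is chosen
    block_hash: str

def select_block(bids: list[BuilderBid]) -> BuilderBid:
    # A rational proposer simply takes the highest-paying bid.
    return max(bids, key=lambda bid: bid.value_eth)

bids = [
    BuilderBid("builder-a", 0.042, "0xaaa"),
    BuilderBid("builder-b", 0.051, "0xbbb"),
]
winner = select_block(bids)
print(f"propose {winner.block_hash} from {winner.builder}, earning {winner.value_eth} ETH")
```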
Now that we’ve set the stage, let’s dive into the concept of preconfirms.
At their core, preconfirms can provide two guarantees: inclusion (a credible promise that a transaction will be included by a given slot) and execution (a promise about the state the transaction will execute against, and hence its outcome).
These guarantees matter, particularly for the following (a toy sketch of a preconfirm commitment follows this list):
Speed: Traditional block confirmations can take several seconds, whereas preconfirms can provide a credible assurance much faster. This speed is particularly beneficial for "based rollups" that batch user transactions and commit them to Ethereum, resulting in faster transaction confirmations. @taikoxyz and @Spire_Labs are teams building based rollups.
Censorship Resistance: A proposer can request the inclusion of a transaction that some builders might not want to include.
Trading Use Cases: Traders may pay for preconfirms if doing so allows them to execute ahead of competitors.
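What might such a commitment look like concretely? Below is a hedged sketch: a proposer signs a promise to include a transaction by a target slot, and the signature is what makes the promise credible, since it can serve as slashable evidence if the promise is broken. The schema and the hash-based stand-in signature are our assumptions; real designs (e.g., Bolt, mentioned later) differ in detail.

```python
# Illustrative preconfirm commitment (assumed schema, not any protocol's actual
# wire format): the proposer promises to include `tx_hash` by `target_slot`.
import hashlib
import json
import secrets

def toy_sign(message: bytes, secret_key: bytes) -> str:
    # Stand-in for a real BLS/ECDSA signature, for illustration only.
    return hashlib.sha256(secret_key + message).hexdigest()

def make_preconfirm(tx_hash: str, target_slot: int, proposer_key: bytes) -> dict:
    commitment = json.dumps(
        {"tx": tx_hash, "slot": target_slot}, sort_keys=True
    ).encode()
    return {
        "tx": tx_hash,
        "slot": target_slot,  # inclusion promised no later than this slot
        # Usable as evidence to penalize the proposer if the promise is broken.
        "signature": toy_sign(commitment, proposer_key),
    }

proposer_key = secrets.token_bytes(32)
print(make_preconfirm("0xabc123", target_slot=9_001_234, proposer_key=proposer_key))
```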
Now, zooming in on Ethereum.
The following chart describes the overall proposer-builder separation and transaction pipeline on Ethereum.
Within the Ethereum network, preconfirms can be implemented in three distinct scenarios, depending on the specific needs of the network:
Builder preconfirms suit the trading use case best. These offer low-latency guarantees and are effective in networks where a small number of builders dominate block-building. Builders can opt into proposer support, which enhances the strength of the guarantee.
However, since only a few builders dominate block-building, successfully onboarding these players is key.
Proposers provide stronger inclusion guarantees than builders because they have the final say on which transactions are included in the block. This method is particularly useful for "based rollups," where Layer 1 validators act as sequencers.
Yet, maintaining strong guarantees is a key challenge for proposer preconfirms.
The question of which solution will ultimately win remains uncertain, as multiple factors will play a crucial role in determining the outcome. We can speculate on the success of builder opt-ins for builder preconfirms, the growing traction of based rollups, and the effectiveness of proposer declaration implementations. The balance between user demand for inclusion versus execution guarantees will also be pivotal. Furthermore, the introduction of multiple concurrent proposers on the Ethereum roadmap could significantly impact the direction of transaction confirmation solutions. Ultimately, the interplay of these elements will shape the future landscape of blockchain transaction processing.
Commit-boost is an MEV-boost-like sidecar for preconfirms.
Commit-boost facilitates communication between builders and proposers, enhancing the preconfirmation process. It’s designed to replace the existing MEV-boost infrastructure, addressing performance issues and extending its capabilities to include preconfirms.
Currently in testnet, Commit-boost is being developed as neutral, non-venture-backed software for Ethereum, with the ambition of fully integrating preconfirms into its framework. Chorus One is currently running Commit-boost on testnet.
Chorus One has been deeply involved with preconfirms from the very beginning, pioneering some of the first-ever preconfirms using Bolt during the ZuBerlin and Helder testnets. We’re fully immersed in optimizing the Proposer-Builder Separation (PBS) pipeline and are excited about the major developments currently unfolding in this space. Stay tuned for an upcoming special episode of the Chorus One Podcast, where we’ll dive more into this topic.
If you’re interested in learning more, feel free to reach out to us at research@chorus.one.
This is a joint research article written by Chorus One and Superscrypt
Blockchain transactions are public and viewable even before they get written to a block. This has led to maximal extractable value ('MEV'), i.e., actors frontrunning and backrunning visible transactions to extract profit for themselves.
The MEV space is constantly evolving as competition intensifies and new avenues to extract value are always emerging. In this article we explore one such avenue - Oracle Extractable Value, where MEV can be extracted even before transactions hit the mempool.
This is particularly relevant for borrowing & lending protocols which rely on data feeds from oracles to make decisions on whether to liquidate positions or not. Read on to find out more.
Value is in a constant state of being created, destroyed, won or lost in any financialized system, and blockchains are no exception. User transactions are not isolated to their surroundings, but instead embedded within complex interactions that determine their final payoff.
Not all transaction costs are as explicit as gas fees. Fundamentally, the total value that can be captured from a transaction includes the payoff of trades preceding or succeeding it. These can be benign in nature, for example an arbitrage transaction that brings prices back in line with the market, or they can impose hidden taxes, as in the case of frontrunning. Overall, maximal extractable value (or "MEV") is the value that can be captured by strategically including and ordering transactions such that the aggregate block value is maximized.
If not extracted or monetized, value is simply lost. Presently, the actualization of MEV on Ethereum reflects a complex supply chain ("PBS") where several actors such as wallets, searchers, block builders and validators fill specialized roles. There are returns on sophistication for all participants in this value chain, most explicitly for builders, which are tasked with creating optimal blocks. Validators can play sophisticated timing games which result in additional MEV capture; for example, Chorus One has run an advanced timing games setup since early 2023 and published extensively on it. In the PBS context, the best proxy for the total MEV extracted is the final bid a builder gets to submit during the block auction.
Such returns on sophistication extend to the concept of Oracle Extractable Value (OEV), which is a type of MEV that has historically gone uncaptured by protocols. This article will explain OEV, and how it can be best captured.
Oracles are one of crypto's critical infrastructure components: they are the choreographers that orchestrate and synchronize the off-chain world with the blockchain’s immutable ledger. Their influence is immense: they inform all the prices you see and interact with on-chain. Markets are constantly changing, and protocols and applications rely on secure oracle feed updates to provide DeFi services to millions of crypto users worldwide.
The current status quo is that third-party oracle networks serve as intermediaries that feed external data to smart contracts. They operate separately from the blockchains they serve, which preserves the core goal of chain consensus but introduces limitations around fair sequencing, the payments required from protocols and apps, and the handling of multiple data sources in a decentralized world.
In practical terms, the data from oracles represents a great resource for value extraction. The market shift an oracle price update causes can be anticipated and traded profitably by backrunning any resulting arbitrage opportunities or (more prominently) by capturing resulting liquidations. This is Oracle Extractable Value. But how is it captured, and more importantly, who profits from it?
In traditional MEV, searchers (essentially trading bots that run on-chain) profit from oracle updates by backrunning them in a free-for-all priority gas auction. Value is distributed between the searchers, who find opportunities particularly in lending markets for liquidations, and the block proposers that include their transactions in the ledger. Oracles themselves have not historically been a part of this equation.
OEV changes this flow by atomically coupling the backrun trade with the oracle update. This allows the oracle to capture value, by either acting as the searcher itself or auctioning off the extraction rights.
Figure: How OEV created in DeFi can be captured by MEV searchers before the dApp gets access to it.
OEV primarily impacts lending markets, where liquidations directly result from oracle updates. By bundling an oracle update with a liquidation transaction, the value capture becomes exclusive, preventing frontrunning since both actions are combined into a single atomic event. However, arbitrage can still occur before the oracle update through statistical methods, as traders act on the true price seen in other markets.
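Below is a minimal sketch of that atomic coupling, with all names illustrative: the oracle update and the liquidation travel as one bundle that a builder must include in order, in full, or not at all, so no third party can insert a transaction between them.

```python
# Minimal sketch of the atomic coupling described above (illustrative names).
from dataclasses import dataclass

@dataclass
class Tx:
    description: str

def build_oev_bundle(price_update: Tx, backrun: Tx) -> list[Tx]:
    # Fixed internal ordering: the update lands first, then the captured
    # backrun. A block builder includes the whole list atomically or not at all.
    return [price_update, backrun]

bundle = build_oev_bundle(
    Tx("oracle: update ETH/USD to 2950"),
    Tx("lending pool: liquidate undercollateralized position #42"),
)
for i, tx in enumerate(bundle):
    print(i, tx.description)
```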
UMA and Oval:
API3 and OEV Network:
Warlock:
The upshot of this MEV capture is that oracles have a new dimension to compete on. OEV revenue can be shared with dApps by providing oracle updates free of charge, or by outright subsidizing integrations. Ultimately, protocols with OEV integration will thus be able to bid more competitively for users.
OEV solutions share the same basic idea - shifting the value extraction from oracle updates to the oracle layer, by coupling the price feed update with backrun searcher transactions.
There are several ways of approaching this - an OEV solution may integrate with an existing oracle via an official integration, or through third party infrastructure. These solutions may also be purpose built and provide their own price update.
Heuristically, the key components of an OEV solution are the oracle update and the MEV transaction - these can be either centralized or decentralized.
We would expect purpose-built or "official" extensions to existing oracles to perform better, since they avoid the extra latency of running third-party logic on top of the upstream oracle. They are also more attractive from a risk perspective: with third-party infrastructure, upstream changes could unexpectedly break integrations.
In practice, a centralized auction can make the most sense in latency-sensitive use cases. For example, it may allow a protocol to offer more leverage, as the risk of being stranded with bad debt due to stale price updates is minimized. By contrast, a decentralized auction likely yields the highest aggregate value in use cases where latency is less sensitive, i.e., where margin requirements are higher.
OEV is still in its early stages, with much development ahead. We're excited to see how this space evolves and will continue to monitor its progress closely as new opportunities and innovations emerge.
About Chorus One
Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.