How to stake MATIC (Polygon)
A step-by-step guide to staking MATIC with Chorus One
June 5, 2023
5 min read

Polygon is a Layer 2 scaling solution built on Ethereum that provides tools to improve transaction speed and to reduce the cost and complexity of transacting on blockchain networks.


  1. To start staking $MATIC, first open the Polygon staking dashboard in the browser of your choice.

Ensure that the browser has integrated any of the wallets supported by Polygon.

  2. Then, click on Login and connect to the wallet of your choice. Click ‘View all’ to see all the wallets supported by Polygon. We have chosen MetaMask.
  3. Once you have connected your wallet, click on ‘Become a Delegator’, and search for ‘Chorus One’ in the list of available validators.

Click on ‘Chorus One’ to verify all the details. Ensure that the Validator address (shown as ‘Owner’) is 0xbbd83024be631bb6f3dd3c0363b3d43b5d91c35f.

Note: The commission rate to stake $MATIC with Chorus One is 5%.

  4. Once you have verified all the details, click ‘Become a Delegator’.
  5. Next, enter the amount of $MATIC you would like to stake. Then, click ‘Continue’.
  6. You will be redirected to your wallet to approve the transaction, which will take a few minutes.

You have now completed the process and staked your $MATIC with Chorus One!  
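As a quick illustration of how the 5% commission mentioned above affects what a delegator keeps (the gross APR below is a made-up example, not a quoted Polygon rate):

```python
# Illustrative only: net staking rewards after a validator commission.
# The 5% commission matches the rate quoted above; the 4% gross APR is invented.

def net_rewards(stake: float, gross_apr: float, commission: float) -> float:
    """Yearly rewards kept by the delegator after the validator's cut."""
    gross = stake * gross_apr
    return gross * (1.0 - commission)

if __name__ == "__main__":
    # 1,000 MATIC at a hypothetical 4% gross APR with 5% commission
    print(round(net_rewards(1_000, 0.04, 0.05), 2))  # ≈ 38.0 MATIC per year
```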

About Chorus One

Chorus One is one of the biggest institutional staking providers globally, operating infrastructure for 40+ Proof-of-Stake networks including Ethereum, Cosmos, Solana, Avalanche, and Near, amongst others. Since 2018, we have been at the forefront of the PoS industry; we now offer easy, enterprise-grade staking solutions and industry-leading research, and we invest in some of the most cutting-edge protocols through Chorus Ventures. We are a team of over 50 passionate individuals spread throughout the globe who believe in the transformative power of blockchain technology.

Chorus One announces staking support for Sui Network
Sui is a Layer 1 blockchain and smart contract platform designed to make digital asset ownership fast, private, secure, and accessible to everyone.
May 4, 2023
5 min read

After three rounds of rigorous testnets, the Sui Network Mainnet is live, and Chorus One is proud to support the network as a genesis staking provider and validator.

What is SUI?

Sui Network is a permissionless Layer-1 blockchain and smart contract platform designed from the ground up to make digital asset ownership fast, secure, and accessible to the next generation of Web3 users. Its pioneering architecture creates a world-class developer experience while vastly improving the performance and user experience of L1 blockchains.

Sui Move

Sui is built in Rust and supports smart contracts written in Sui Move: a customized version of the Move programming language that enables the definition and management of assets with owners. These assets can be created, transferred, and mutated through custom rules defined in the smart contract, offering a flexible way to manage digital assets on the blockchain. This enables a vast range of use cases such as tokens, virtual real estate, and more.
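The "assets with owners" model can be mimicked in ordinary Python to show the rule being described (an illustrative toy, not Sui Move code; in Move the ownership checks are enforced by the language and runtime rather than by an if-statement):

```python
# Toy model of owned assets: every asset has exactly one owner, and only
# the owner may transfer or mutate it. Names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: int
    owner: str
    data: dict = field(default_factory=dict)

def transfer(asset: Asset, sender: str, recipient: str) -> None:
    if asset.owner != sender:
        raise PermissionError("only the owner can transfer an asset")
    asset.owner = recipient

def mutate(asset: Asset, sender: str, key: str, value) -> None:
    if asset.owner != sender:
        raise PermissionError("only the owner can mutate an asset")
    asset.data[key] = value
```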

SUI’s unique design features

  1. Parallel agreement

Sui has a unique system design that allows it to scale horizontally and handle a high volume of transactions at low operating costs. Unlike other blockchains that require global consensus on all transactions, Sui enables parallel agreement on independent transactions through a novel data model and Byzantine Consistent Broadcast. This approach eliminates the need for global consensus and enhances scalability without compromising safety and liveness guarantees.

The object-centric view and Move's strong ownership types enable parallel execution of transactions that affect different objects while transactions that affect shared state are ordered through Byzantine Fault Tolerant consensus and executed in parallel.
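The split described above can be sketched as a tiny scheduler (a conceptual toy, not Sui's implementation): transactions touching any shared object are routed to consensus ordering, while the rest can execute in parallel.

```python
# Route each transaction by the objects it touches: owned-objects-only
# transactions are independent; shared-object transactions need BFT ordering.

def schedule(transactions, shared_objects):
    parallel, ordered = [], []
    for tx_id, touched in transactions:
        if any(obj in shared_objects for obj in touched):
            ordered.append(tx_id)   # must go through consensus ordering
        else:
            parallel.append(tx_id)  # independent: no global consensus needed
    return parallel, ordered

if __name__ == "__main__":
    txs = [("pay_alice", {"coin_1"}),
           ("swap", {"pool", "coin_2"}),
           ("pay_bob", {"coin_3"})]
    print(schedule(txs, shared_objects={"pool"}))
```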

  2. Scalability and Immediate Settlement

Sui’s scalability characteristic is highly innovative and distinct from existing blockchains that have bottlenecks. Currently, most blockchains have limited capacity to handle a high volume of transactions, resulting in slow processing times and expensive fees. This can lead to a poor user experience, particularly in gaming and financial applications. Sui addresses these issues by scaling horizontally to meet the demands of applications. It does this by adding more processing power through additional validators, resulting in lower fees and faster processing times even during periods of high network traffic.

  3. Novel Storage Ability

Sui allows developers to store complex assets directly on the blockchain, which makes it easier to create and execute smart contracts. This results in low-cost and horizontally scalable storage that enables developers to define rich assets and implement application logic. With this capability, new applications and economies can be created based on utility without relying solely on artificial scarcity.

SUI Tokens

Sui’s native token, SUI, has a fixed supply and is used to pay for gas fees. Additionally, users can earn rewards by staking their SUI tokens with validators like Chorus One. To learn more about how you can stake SUI with Chorus One, visit:

Sui Use Cases

Sui enables developers to define and build:

  • On-chain DeFi and Traditional Finance (TradFi) primitives: enabling real-time, low latency on-chain trading
  • Reward and loyalty programs: deploying mass airdrops that reach millions of people through low-cost transactions
  • Complex games and business logic: implementing on-chain logic transparently, extending the functionality of assets, and delivering value beyond pure scarcity
  • Asset tokenization services: making ownership of everything from property deeds to collectibles to medical and educational records perform seamlessly at scale
  • Decentralized social media networks: empowering creator-owned media, posts, likes, and networks with privacy and interoperability in mind

Staking $SUI with Chorus One

SUI can be delegated to the Chorus One delegation pool.

Current Staking APR: 8.3%
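As a back-of-the-envelope illustration of that APR (simple interest only; real rewards compound, the rate moves over time, and validator commission applies):

```python
# Illustrative only: simple-interest yearly rewards at the quoted 8.3% APR.

def yearly_rewards(stake: float, apr: float = 0.083) -> float:
    return stake * apr

if __name__ == "__main__":
    print(round(yearly_rewards(1_000), 2))  # rewards on 1,000 SUI at 8.3% APR
```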

For any other questions, reach out to



Unstoppable games in Avalanche
Erwin Dassen explains how Avalanche's multichain architecture is ideal for developing an unstoppable blockchain game controlled by its users.
April 7, 2023
5 min read

What is this about

At Chorus One, we have a strong conviction in the potential of a multichain future. We believe that specialized blockchains play a crucial role in discovering and nurturing new use cases, and ultimately in driving mainstream adoption. Since joining Chorus One about two years ago, I've been pushing for us to do the same in the Avalanche ecosystem, as Chorus One and Avalanche share similar visions of what the multichain future can, should, and will look like.

Last year, we entered the Avalanche ecosystem. Our work will only intensify in the coming years, with Chorus Ventures, our ventures arm, investing in native Avalanche projects. We also use our expertise in tokenomics and infrastructure to help projects launch their permissionless subnets. We will be presenting on this topic both at the online Subnet Summit in mid-April and at the Avalanche Summit II at the beginning of May.

In my view, gaming is key to onboarding the next wave of users and a fundamental step on the road to mass adoption. This article aims to present the exciting future of blockchain gaming and demonstrate how the Avalanche architecture, particularly the multichain subnet architecture, is the ideal substrate for this vision. Through a two-part series, I will illustrate how one can develop an unstoppable game. By unstoppable, I mean a game that not even its creators can censor or stop if one day they move on to other projects. A game in control of its users.

So let's get to it.

Path of Exile

To make this exercise as clear as possible, I will look at a game I have plenty of experience with, having played around 2,000 hours. The game in question is Path of Exile, in my personal opinion the Diablo killer. This game needs no introduction, but:

  • It is consistently the number-two action-RPG (ARPG) on Steam in terms of concurrent players.
  • It has the most interesting in-game economy of any game I am aware of and competes with EVE Online in this regard. So much so that the community developed a variety of tools to track and facilitate the movement of goods.
  • It is completely free-to-play with no pay-to-win mechanics. Game profits come from cosmetic-only purchases.
  • It is fun! I've sunk 2,000+ hours into this game, and most hardcore gamers sink this amount of time into it per season.
  • The game is constantly refreshed with new mechanics, bosses and lore via the seasonal leagues which also boosts the revenue for the developers.
  • It is complex with multiple mechanics and endless build options. Gamers evolve but are entertained from noob level to youtuber level.
  • Look at the passive skill tree!

A short video:

Take a look here for some more gameplay videos.

I cannot emphasize enough how deep the economics of this game go. Its economy is fundamentally tied to the crafting system for equipment, and to the simple fact that if you want to reach the endgame you need to craft gear yourself or buy it from someone. Purely random drops cannot take you there.

Every season a "league patch" is released with new content, and the economy is reset. Characters and loot from previous seasons remain playable in the "standard" league, whose economics are interesting in their own right. But as a driver of innovation, and to give new players a more level playing field to compete on, these resets are very important.

The goal is thus to envision a version of PoE that is unstoppable and in the hands of gamers. You might ask: why would developers make such a game? To which I answer: the first one to do this becomes a first mover in a technology that soon will be expected from all games. And why will this be expected? Why do gamers want this? Well, this is a game you can continue playing and you can really own it. Like how it was in the dawn of console gaming. Be it real or game money, you can trade assets and no one can censor you. If you recall the anecdote, this was the reason Vitalik started his work in crypto.

The anatomy of an ARPG

In the centralized world, an ARPG like Path of Exile consists of a client/server platform where the server infrastructure is run by the game developer, and where the client is freely available or purchased in a marketplace. Next, we will look at the features and responsibilities of the server side, as this is where our decentralization efforts will mostly focus.


Anti-cheat

Client-side tampering with the binary can enable all kinds of attacks and cheating. This is an arms race; currently, it is tackled via lock-step state validation. More on this later.


Randomness

Most games need randomness. For anti-cheat reasons, this is handled at the server level. In the case of PoE (and ARPGs more broadly), this is even more important, as loot, damage, map layout, and even AI are parameterized by random inputs.

Loot generation

Of fundamental importance for a healthy in-game economy is that the more powerful items are, the rarer they should be. That is, their drop rate should be lower. This is accomplished by drop-rate lookup tables that are set and maintained by the server. Again due to anti-cheat measures, it is the server that, when appropriate, generates a random drop.
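A server-side drop-rate lookup table of the kind described is essentially weighted sampling (the items and weights below are invented for illustration):

```python
# A minimal server-side drop-rate table: more powerful items get lower weights,
# and the server (never the client) draws from it.

import random

DROP_TABLE = {          # item -> relative drop weight
    "common_sword": 800,
    "rare_amulet":  190,
    "unique_armor":  10,
}

def roll_drop(rng: random.Random) -> str:
    items = list(DROP_TABLE)
    weights = [DROP_TABLE[i] for i in items]
    return rng.choices(items, weights=weights, k=1)[0]

if __name__ == "__main__":
    rng = random.Random(42)
    drops = [roll_drop(rng) for _ in range(10_000)]
    print(drops.count("unique_armor"))  # expected to be roughly 100 of 10,000
```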


Player interaction

Even in PoE, which tends to be dominated by PvE (player versus environment), there are situations where players interact: regions where PvP (player versus player) is allowed, and sanctuary environments also called player hubs. These interactions need to be facilitated by the game server.


Trading

Special trading windows and functionalities are implemented so that players can exchange goods in a safe way.

The backends

Looking at the above set of functionalities that the server must provide, we can identify three different types of backends that the server infrastructure needs to maintain. These are the components we will need to "permissionlessly" decentralize. The following figure gives an idea of how the server-side interacts with these backends and the client (overlap indicates communication).

Queryable databases

There is a need for queryable databases, with loot tables clearly being one such need. But many more are present: leaderboards, player info, skill table, effect mechanics, and many more.

Content delivery

A key-value store that can deliver monolithic "chunks of bytes" is also a necessary backend. The game needs to ship itself and its updates, with a big proportion being graphical assets. For this, dedicated content-delivery networks are employed.

Anti-cheat logic

As mentioned before, the server infrastructure needs to keep clients in sync across PvE and PvP, both for anti-cheat purposes and to facilitate user interactions.

A short digression into Subnets

So why is Avalanche especially suited for this exercise? How will the architecture of such a game change and what technologies do we need to leverage to accomplish our goal of a decentralized, unstoppable ARPG game?

Avalanche has two genius breakthroughs in its design: its consensus being the first and the subnet architecture being the second. The latter is highly dependent on the former. Let's see why.

Avalanche consensus is without a doubt the most advanced consensus out there and is correctly categorized as a third type of Byzantine fault tolerant consensus, following the earlier discoveries of signature-accrual and Nakamoto consensus. It is the first meta-stable type of consensus algorithm. This consensus enjoys enviable properties: it scales easily with the number of validators, it is leaderless, and it is single-slot final. I won't go into much detail, but suffice it to say it accomplishes all of this by being a consensus algorithm based on a statement about an emergent property of the system.

Let me explain what I mean. You can think of the network as having the property of being consistent (all validators agree on the current state). In Avalanche this property is emergent. Like the temperature of a gas, it exists as a property derived from the local interactions of its constituent “particles”. In the case of the gas, particles bouncing off each other and exchanging kinetic energy in their small neighborhoods give rise to the macroscopic property of temperature. In Avalanche, validators are the particles, and contrary to other consensus mechanisms they interact only “locally”, that is, with a small number of validators that are randomly selected in each round. Somehow - and there is a strong mathematical theorem behind it - this is enough for the network to have a well-defined sense of state history, even in the presence of attackers.
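To make the "local sampling, emergent agreement" idea concrete, here is a toy simulation loosely in the spirit of Avalanche's Snowball mechanism (the parameters n, k, and alpha are illustrative, not Avalanche's production values, and the real protocol adds confidence counters and finality rules on top):

```python
# Toy metastable sampling loop: every node repeatedly polls k random peers and
# adopts a value when at least alpha of the sampled peers prefer it.

import random

def simulate(n=100, k=10, alpha=7, rounds=200, seed=7):
    rng = random.Random(seed)
    prefs = [rng.choice([0, 1]) for _ in range(n)]  # start roughly split
    for _ in range(rounds):
        new_prefs = prefs[:]
        for i in range(n):
            sample = rng.sample(range(n), k)      # "local" interaction only
            ones = sum(prefs[j] for j in sample)
            if ones >= alpha:
                new_prefs[i] = 1
            elif k - ones >= alpha:
                new_prefs[i] = 0                  # otherwise: keep preference
        prefs = new_prefs
    return prefs

if __name__ == "__main__":
    final = simulate()
    print(sum(final))  # how many nodes prefer value 1 after the rounds
```

In runs of this sketch the whole network typically tips to a single unanimous value within a few dozen rounds, which is the emergent, temperature-like consistency described above.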

It is this property of essentially limitless scaling in the number of validators that allows for the second genius move. You see, Cosmos is the originator of the concept of an app-chain. In this design, it is absolutely necessary that chains can "talk" to each other to really cover all the use cases one is interested in. For this reason, they developed the IBC framework. This is an elegant framework for trustless communication, but it imposes a significant requirement on a prospective chain: as a destination chain, you need to keep consensus information for any source chain you want to communicate with, in the form of a light client. Wouldn't it be ideal if this information were globally available to all chains, from all chains? This is impossible with a limited set of validators.

So, to have an unlimited set of app chains that can trustlessly communicate without having to keep light clients of every other chain they communicate with, you need an unbounded set of validators in a global chain that keeps all this information. I hope you see where this is going: this is exactly the subnet design.

In Avalanche, the main network that every validator must secure contains three chains: the P-chain (Platform chain), the X-chain (eXchange chain), and the C-chain (Contract chain). The X-chain - which is currently a DAG but will become a linear chain in the near future - is a chain made for high-throughput exchanges of assets, much like a blazing-fast Bitcoin network. The C-chain is what most users are familiar with: an EVM-based chain. It works just like Ethereum, but faster and with instant finality. Great. But the real genius comes from the P-chain. This chain tracks all validation-related transactions of the mainnet and all subnets. This is what will enable the unbounded, composable network of app chains. Since all validators have the P-chain at hand, any two subnets can communicate directly, provided they want to. In IBC, on the other hand, with its hub-and-spoke design, you have the unaddressed issue of path dependence.[^1]

So, we will leverage an Avalanche subnet for our game. The main reasons are the excellent scaling properties of its consensus and the application-specific, isolated nature of the subnet approach. On top of that, it supports cross-subnet transactions, allowing valuable assets to move around freely in the ecosystem. Last but not least, there is also VM2VM message passing that allows the validators in a subnet to easily check the state in other connected VMs, whether within the same subnet, in the mainnet, or in another subnet running on the same validator (the latter has not even been explored yet).

An Avalanche subnet is essentially the following:

  • The specification of a subset of validating nodes from the overall set of Avalanche mainnet validators.
  • The specification of a set of blockchains these validators should validate and for which their performance is monitored.

The set of validators is dynamic and can be either permissionless or permissioned. The specification of a blockchain comprises the subnet this blockchain pertains to and a VM (i.e., virtual machine) that characterizes the valid state transitions in that blockchain.
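The two bullets above can be sketched as plain data (illustrative Python dataclasses, not AvalancheGo's actual types; the node IDs and VM names are made up):

```python
# A subnet is (1) a subset of mainnet validators and (2) the blockchains,
# each with its VM, that those validators must validate.

from dataclasses import dataclass, field

@dataclass
class Blockchain:
    name: str
    vm: str                 # e.g. "evm", "avm", or a custom VM id

@dataclass
class Subnet:
    validators: set         # subset of mainnet validator node IDs
    chains: list = field(default_factory=list)
    permissioned: bool = True

game_subnet = Subnet(
    validators={"NodeID-abc", "NodeID-def"},
    chains=[Blockchain("game-assets", vm="avm"),
            Blockchain("game-db", vm="sqlvm")],
)
```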

Ava Labs recently announced HyperSDK, a toolkit not unlike the Cosmos SDK, to help developers easily build VMs to power their subnets. With it, they can focus on the logic of the application and worry much less about synchronization, consensus, state storage and availability, and other blockchain-heavy topics. On the other hand, if you want to, you can customize these aspects, as the SDK was built with modularity in mind.

See Avalanche platform and Subnets sections in the Avalanche documentation for more information on subnets and visit the HyperSDK repository which is open for contributions.

The game architecture

As mentioned before, our intent is to decentralize the game. For this, we will need to decentralize the server infrastructure, mainly the three points named above: databases, content delivery, and anti-cheat logic. This will be done by defining specialized VMs and the corresponding blockchains for each of those game infrastructures. All of this is packaged in the game server binaries which will be run by the validators in the subnet.

A game client will essentially be submitting transactions to the server network. Clearly, the game client is responsible for client-side rendering which is something we do not need to bother with on the server side. In terms of execution hardware, the game server is much lighter than the client and we will exploit this.

Keep in mind that being a player does not mean you can’t be a validator as well or a delegator to a Chorus One validator ;). This is obvious but worth mentioning as this means that for the first time ever a game can actually be in the hands of the players. With governance, even the game features and roadmap can be decided, paid for, and rolled out completely in a decentralized fashion.

So the big question: what are the blockchains, VMs and technologies used for this purpose? We dive in.

Content delivery via BlobVM

The BlobVM already exists as an advanced prototype. It was developed by Ava Labs and is available as open source. What it provides is a dedicated, seamlessly integrated (at the subnet level) content-addressable store with customizable parameters for read/write permissions and persistence.

We use BlobVM for storing all art, textures, and models, i.e., all game assets. Even the game client binary can be updated via this method. On a fresh install, an externally downloaded game client connects and sends a transaction to download all necessary assets. Note that this transaction could be a way to monetize the game, but this is optional of course. In other words, this transaction would give you a game license NFT.
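A content-addressable store of this kind can be sketched in a few lines (a toy stand-in, not the BlobVM API; it shows why content addressing lets any node verify that delivered bytes match the requested key):

```python
# Toy content-addressable blob store: blobs are keyed by the SHA-256 of their
# contents, so fetched data can always be checked against its own key.

import hashlib

class BlobStore:
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self._blobs[key]
        # Integrity check: the content must hash back to the requested key.
        assert hashlib.sha256(data).hexdigest() == key
        return data

if __name__ == "__main__":
    store = BlobStore()
    key = store.put(b"dragon_scale_texture_bytes")
    assert store.get(key) == b"dragon_scale_texture_bytes"
```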

NFT and player asset tracking with AVM

Now, as mentioned before, we want to give power and value to the gamers. Path of Exile is famous for its rich economy and is a formidable laboratory for NFT tokenomics. By giving gamers the option to mint any found loot item, we give this economy real value. There is enough opportunity (and enough pitfalls) here to fill another article, but it is important to mention that PoE works by having multiple “leagues”, which provide an opportunity to periodically “reset” the economy and give new players a chance to “make it”. We think this is an important aspect to keep in the decentralized version of this game. As an example of how we could explore this, we can configure things so that minted NFTs only work in the current and previous leagues.
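Under the league rule suggested above, usability of a minted NFT could be checked as simply as this (field names and league numbering are hypothetical):

```python
# Toy league rule: an NFT minted in league L is usable while the current
# league is L or L + 1, after which it is retired from active play.

def nft_is_usable(minted_league: int, current_league: int) -> bool:
    return current_league in (minted_league, minted_league + 1)
```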

For tracking a gamer’s collection we use the AVM, the Avalanche VM, which is a DAG (directed acyclic graph) based on the UTXO model and capable of massive throughput. In fact, this is the underlying VM of the mainnet’s X-chain. Note that since the announcement of Cortina (the next dot release of the Avalanche validator client), the X-chain will move from being a DAG to a linear chain. Here we have the option of launching our own AVM chain for asset transfers or using the X-chain directly, which would make all of the game’s NFTs directly available to the wider Avalanche community (NFT reuse in games is an under-explored area). The AVM supports ANTs, or Avalanche Native Tokens, that can easily be imported/exported across the majority of supported VMs, as it defines a unified API for cross-chain atomic swaps.

PoE is a free-to-play game that monetizes itself via cosmetic-only user-purchasable content. This can easily be supported via the AVM chain as well. Simply put: an NFT in the user's wallet ”unlocks” these assets to be delivered via the content delivery mechanism. This is essentially VM2VM communication, and it is both desirable and quite probable that the X-chain will support account lookups via this mechanism.

Databases with SQLVM

As with any modern application, the game needs to store global relational data: for example, loot tables, league-specific information, game metrics, user metrics, NFT market data, and so on. For this specific use case, many web3 projects currently use The Graph: a sophisticated but complex decentralized solution. A few issues arise with this approach:

  • Your economy has to compete with external, global, economies to make the service persistently available.
  • The Graph only indexes preexisting block data. It is not actually a form of storage.

Because of these issues, we propose a new type of VM we dub SQLVM; it will be the topic of our next article. In a nutshell, you should think of it as a hybrid between an app-specific indexer and a persistent relational data store.

It allows for specific types of transactions that query or write to a globally replicated, ACID, relational database. Here we automatically benefit from the fact that blockchain transactions are atomic at the consensus level, which makes designing the underlying database much simpler. For example, a suitable design can be done for a VM where the runtime state is an instance of any query engine: row-oriented like Postgres, column-oriented like BigTable, or document-based like MongoDB. Keep in mind that even this is overkill, as we don't need their replication features; what we need is their query engine and storage solution. Most of these databases have sophisticated query planners that can take the place of fee estimators. The beauty here is that Avalanche will take care of keeping this database eventually consistent, which suffices for our use case. More sophisticated designs are certainly possible. The job of the VM here is essentially to declare the types of transactions (writes/reads) and the fees, and to verify blocks by applying the transactions to the database and updating certain database hashes (these will be needed for anti-cheat below). For our game - or any other app chain using this backend - other VMs in the subnet should be able to read the database at will, which can easily be done with VM2VM.
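A minimal sketch of this design, assuming the VM applies each finalized block's writes to an ordinary sqlite engine and folds the resulting table state into a running hash (SQLVM is the proposal above, so everything here is hypothetical):

```python
# Toy SQLVM: apply write transactions from finalized blocks to an ordinary
# relational engine (sqlite in-memory here) and keep a running state hash.
# Any two nodes applying the same blocks end up with the same hash.

import hashlib
import sqlite3

class SQLVM:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE loot_table (item TEXT, weight INTEGER)")
        self.state_hash = hashlib.sha256(b"genesis").hexdigest()

    def apply_block(self, writes):
        """Apply one block's ordered write transactions, then update the hash."""
        for sql, params in writes:
            self.db.execute(sql, params)
        self.db.commit()
        digest = hashlib.sha256(self.state_hash.encode())
        for row in self.db.execute("SELECT * FROM loot_table ORDER BY item"):
            digest.update(repr(row).encode())
        self.state_hash = digest.hexdigest()

if __name__ == "__main__":
    vm = SQLVM()
    vm.apply_block([("INSERT INTO loot_table VALUES (?, ?)", ("unique_armor", 10))])
    print(vm.db.execute("SELECT weight FROM loot_table WHERE item=?",
                        ("unique_armor",)).fetchone()[0])  # 10
```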

Similar to how a non-validating Avalanche node has access to the mainnet state, a game client could be a node of this chain running in non-validating mode, keeping the database state locally at all times for easy synchronization.

ZK anti-cheat with ZKVM

Now to the technologically most innovative piece of the puzzle: to run anti-cheat as a ZK verifier. This is such a breakthrough technology that it would be an improvement over existing anti-cheat technology on centralized games.

Anti-cheat works, as mentioned before, as lock-step game simulation. What this means is that the game client is essentially an input system and a rendering engine for a game that is actually run remotely on servers. This introduces latency, which is the reason game server farms have to be deployed across internet “regions”. ZK changes the game, as it allows one to codify all game state transitions in a prover, which we can run on the game client (remember that gamers tend to play on machines that are quite powerful), while the server is just a verifier! This has the added benefit that it even liberates the server from having to run in lock-step to begin with! Essentially, we can use eventual consistency to catch the cheaters. Put differently, we don’t care to verify every little state transition as it happens, but rather batches (or recursions) encoding all transitions that happened in a configurable time window: 1 second, 10 seconds, a minute, an hour…
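The batched client-proves/server-verifies flow can be sketched with a hash chain standing in for the SNARK (a real prover would let the server verify succinctly without replaying or re-hashing every transition; this toy only shows the message shape, not zero-knowledge cryptography):

```python
# Toy batched verification: the client folds a window of state transitions
# into one commitment; the server performs one check per window instead of
# simulating the game in lock-step. A SNARK would replace the hash chain.

import hashlib

def commit_batch(start_state: bytes, transitions: list) -> str:
    """Client side: fold a window of state transitions into one commitment."""
    h = hashlib.sha256(start_state)
    for t in transitions:
        h.update(t.encode())
    return h.hexdigest()

def verify_batch(start_state: bytes, transitions: list, proof: str) -> bool:
    """Server side: one check per window instead of lock-step simulation."""
    return commit_batch(start_state, transitions) == proof

if __name__ == "__main__":
    window = ["move:n", "attack:skeleton", "pickup:rare_amulet"]
    proof = commit_batch(b"state0", window)
    assert verify_batch(b"state0", window, proof)
    # A tampered window (a cheater injecting loot) fails verification:
    assert not verify_batch(b"state0", window + ["spawn:mirror"], proof)
```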

It is obvious what a powerful idea this is: no need to simulate full-blown games on the server. For example, we can now use more sophisticated AIs in the client. The fact that you have to run the game on the server is one of the reasons no modern AI is in use in games. Why not use GPT-4 for creating procedural quests?

We will have more to say about a ZKVM in a future article, but I would like to state a few things. Firstly, note that we are not even using the zero-knowledge aspect of this VM, and this gives more freedom in the exact construction of the protocol. In precise terms, we are interested in SNARKs, not necessarily ZK-SNARKs. Nonetheless, we expect that applications that use the zero-knowledge aspect will also exist.

Secondly, we might not yet be at the stage where fast enough provers exist to prove the state transitions for a game like PoE. I'm not an expert, but I expect that schemes leveraging the GPUs in gamers' machines are just a matter of time.

And finally, we are talking about a very specific VM - that of the game - and not a generic programmable one like the EVM. We need a prover for those exact transitions that happen in game. This is potentially another route for optimization.


We hope to have convinced you that the future of decentralized gaming and player-owned gaming is bright. When Vitalik joined the crypto movement I don't think he thought his dream would come true on another chain, but I think he will be satisfied nonetheless.

But more importantly, we hope the reader is also convinced that this is only possible in a clean, elegant, and reusable way via the subnet architecture. Sophisticated applications like this will only flourish when good reusable VMs are available, much like reusable contracts are right now. Multiple VMs demand multiple chains in a subnet architecture. Although it is technically possible to cram all of these backends into a single block to be serialized/deserialized and verified using a single chain, this would not only hurt code reuse but is also impractical, since it is clear that these backends might need different block times.

Of course, there are a lot of unknowns to this as I am not a game developer. I just want this to jump-start the imagination of developers in general (not only game developers) to the reality that the future is app-specific multi-chain subnets. And so that someone develops an unstoppable ARPG like Path of Exile!!

Tune in for some follow-up articles in which we attempt to detail the SQLVM and ZKVM further, and come talk to us at the summit. See you there!

About Chorus One

Chorus One is one of the biggest institutional staking providers globally, running infrastructure and validating over 40 blockchain networks. Since 2018, we have been at the forefront of the PoS industry; we now offer enterprise-grade staking solutions and industry-leading research, and we invest in some of the most cutting-edge projects through Chorus Ventures. We also invest in subnets on Avalanche, so if you’re building something interesting, reach out to us at

Chorus One announces staking support for Onomy
Chorus One is proud to announce staking support for Onomy Protocol, an on-chain fintech hub for DeFi.
April 6, 2023
5 min read

We’re very excited to announce that Chorus One is live on the Onomy Network! 

Onomy Protocol is pioneering a harmonious connection between traditional financial markets and the DeFi landscape - two worlds that have remained largely disjointed - by creating a vertically-integrated ecosystem that emulates the familiarity of centralised exchanges while retaining the decentralised ethos of Web3. Onomy will be presented to end-users in a digestible, retail-friendly ‘fintech shell’ whose back-room engine smooths the transition from CeFi to DeFi for retail and institutions alike.

Leveraging a Cosmos-based layer-1, a hybrid DEX, bridge hub, stablecoin issuance protocol, and additional contributions built on the ecosystem, Onomy is creating the perfect conditions for Forex markets to thrive on-chain. 

Introducing Onomy: An On-Chain Fintech Hub for DeFi

Onomy Network (ONET): A Fast and Secure Proof-of-Stake Blockchain

The Onomy Network is a Proof-of-Stake blockchain built with the Cosmos SDK framework, which enables it to achieve scalability by leveraging infrastructure supported by a network of institutional validators like Chorus One.

With a block time of just five seconds, plus high throughput, low latency, and low fees, the Onomy Network is made to be ideal for financial transactions.

Onomy Exchange (ONEX)

Supporting various order types, including limit, market, conditional, and stop-loss orders, the Onomy Exchange (ONEX) stands out as a unique hybrid, multi-chain decentralised exchange (DEX) on which traders can buy and sell cross-chain through an order book with no trading fees incurred, whilst liquidity providers can get involved and earn rewards from the AMM running in the back-end. 

This empowers users to trade both crypto and Forex pairs effortlessly while also offering cross-chain trading, advanced charting, and more. 

“The Hybrid DEX combines the importance and familiarity of order books while retaining the flexibility and security of AMMs.” - Lalo Bazzi, Co-founder, Onomy Protocol

Essentially, ONEX aims to provide a high-volume trading experience similar to that of traditional centralised exchanges (CEX), but in a decentralised and non-custodial manner on the blockchain. 
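To make the order-book-plus-AMM interplay concrete, here is a deliberately simplified sketch in Python (not the actual ONEX matching engine; all prices, sizes, and pool reserves are made up):

```python
# Toy "hybrid DEX" fill: a market buy consumes the order book first, and
# any remainder is priced by a constant-product AMM pool in the back-end.

def pool_buy_cost(reserve_quote: float, reserve_base: float, qty: float) -> float:
    # constant-product invariant (x * y = k, no fees): quote tokens
    # needed to take `qty` base tokens out of the pool
    return reserve_quote * qty / (reserve_base - qty)

def fill_market_buy(asks, qty, pool):
    """asks: list of (price, size); pool: (reserve_quote, reserve_base)."""
    cost, remaining = 0.0, qty
    for price, size in sorted(asks):           # best (lowest) ask first
        take = min(size, remaining)
        cost += take * price
        remaining -= take
        if remaining == 0:
            break
    if remaining:                              # book exhausted -> AMM back-end
        cost += pool_buy_cost(*pool, remaining)
    return cost

asks = [(10.5, 5.0), (10.0, 5.0)]              # 5 units at 10.0, 5 at 10.5
pool = (100_000.0, 10_000.0)                   # AMM mid-price of 10
total = fill_market_buy(asks, 12.0, pool)      # 10 units from book, 2 from pool
```

The point of the hybrid design is visible in the fallback branch: the order book gives CEX-style price discovery, while the AMM guarantees there is always liquidity to finish a fill.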

Arc Bridge Hub

The Network powers the Onomy Arc Bridge Hub, a cross-chain transfer solution that integrates inter-blockchain communication (IBC) and allows users to easily traverse between prominent blockchains both within and beyond the Cosmos ecosystem, such as Near, Avalanche, Polygon, Ethereum, Neon, etc. Additionally, the Arc Bridge solves the issue of approving multiple cross-bridge transactions by reducing it to a single approval, making the user experience significantly simpler. 

Onomy Reserve (ORES)

Onomy Reserve (ORES) is the linchpin of the ecosystem and the fundamental driver behind Onomy’s core long-term mission. A decentralised reserve bank, the ORES will provide on-chain minting of stablecoins, or denominations (Denoms) of fiat currencies.

The goal is to create a trusted, decentralised system through which national currencies can be exchanged on-chain at speed, with broad integration into the wider DeFi ecosystem - bringing the advantages of composable finance and the efficiencies it entails to this titanic, $7-trillion-per-day market. The ORES will function as a gateway for liquidity across all integrated blockchains and will support multiple national currencies, with the native $NOM coin playing a key role.

$NOM Utility

$NOM is Onomy’s native network and governance token. It is used by validators (like Chorus One) and their delegators to secure the Proof-of-Stake blockchain, to cover transaction fees, and to vote on governance proposals in the Onomy DAO, which manages the on-chain treasury with no centralised control. $NOM will also play a key role in the Onomy Reserve, as highlighted in the Onomy Improvement Proposals, with additional utility to be voted on by the DAO.

Onomy, Forex, and the New Economy

For crypto’s next great wave of adoption to occur, access to crypto needs to be easier, faster, and more intuitive - while also continuing to lay the scaffolding for a decentralised financial system that works entirely on-chain. Onomy is that convergence point. 

Powered by a strong team of crypto natives and backed by prominent crypto investors including Chorus One, Bitfinex, UDHC, GSR, DWF Labs, and CMS Holdings LLC, Onomy opens new possibilities for on-chain FX markets and broadens access to DeFi for individual and institutional investors alike.

$NOM is already live for trading on KuCoin, Bitfinex, and MEXC.

Onomy will unlock DeFi for the masses, and Chorus One is thrilled to be part of the journey. 

Staking $NOM with Chorus One

Current Inflation Rate: approximately 90% 

Current Staking APR: approximately 114%
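For intuition on how a ~90% inflation rate can produce a ~114% APR: on Cosmos SDK chains, newly minted tokens accrue only to bonded stake, so the gross rate is roughly inflation divided by the fraction of supply bonded. A rough sketch (the bonded ratio below is our illustrative assumption, not an official Onomy parameter):

```python
# Approximate delegator APR on a Cosmos SDK chain. Newly minted tokens
# are distributed across bonded stake only, so APR scales inversely with
# the bonded ratio. Figures here are illustrative.

def staking_apr(inflation: float, bonded_ratio: float,
                commission: float = 0.0) -> float:
    """Approximate annual rate for a delegator, before compounding."""
    gross = inflation / bonded_ratio
    return gross * (1 - commission)

# With ~90% inflation and ~79% of supply bonded: 0.90 / 0.79 ≈ 1.14,
# i.e. in the neighbourhood of the quoted ~114% APR.
apr = staking_apr(0.90, 0.79)
```

Validator commission is deducted from the gross rate, and real-world parameters (community tax, a changing bonded ratio) will shift the number over time.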

Staking $NOM with Chorus One is straightforward. Simply hold native $NOM in Cosmostation, Keplr, or Leap, connect your wallet to the Onomy SuperApp, and stake $NOM with Chorus One.

For any other questions, reach out to

About Chorus One

Chorus One is one of the biggest institutional staking providers globally operating infrastructure for 40+ Proof-of-Stake networks including Ethereum, Cosmos, Solana, Avalanche, and Near amongst others. Since 2018, we have been at the forefront of the PoS industry and now offer easy enterprise-grade staking solutions, industry-leading research, and also invest in some of the most cutting-edge protocols through Chorus Ventures. We are a team of over 50 passionate individuals spread throughout the globe who believe in the transformative power of blockchain technology.

Chorus One announces staking support for KYVE.
Delegators can stake their KYVE tokens to earn rewards and contribute to the network’s growth.
March 19, 2023
5 min read

We’re very excited to announce that Chorus One is now live on the KYVE Network mainnet.

KYVE aims to revolutionize customized access to on- and off-chain data by providing fast and easy tooling for decentralized data validation, immutability, and retrieval. With these tools, developers, data engineers, and others can easily and reliably access the trustless data they need to continue building the future of Web3.

KYVE is a PoS blockchain built with the Cosmos SDK. It has two layers, the Chain Layer and the Protocol Layer, each with its own node infrastructure.

  • The Chain Layer is the backbone of KYVE: an entirely sovereign Proof-of-Stake (PoS) blockchain built with Ignite. It is run by independent nodes, which enable users to support and secure the KYVE blockchain.
  • Sitting on top of the Chain Layer is the Protocol Layer, which enables the actual use case of KYVE’s data lake. This includes data pools, funding, staking, and delegation.

The protocol layer nodes are responsible for collecting data from a data source, bundling and uploading it to any decentralized storage solution, and then validating it, keeping track of which data is truly valid for its users to tap into. This enables KYVE to store any data permanently and in a decentralized manner, creating a Web3 data lake.

Source: Kyve

Via KYVE, developers first input the desired endpoint from which they would like to fetch data and then fund a pool with $KYVE. Node runners wanting to participate in the protocol will be the ones fetching, bundling, storing, and validating the data to earn $KYVE rewards.
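A toy model of that fetch-bundle-validate loop, just to illustrate the idea (this is not the real KYVE protocol logic; all names and structures are invented):

```python
# Toy KYVE-style pool round: one node uploads a bundle of fetched data
# items; other nodes validate it by re-fetching the same data and
# comparing digests. A simple hash stands in for real storage proofs.
import hashlib
import json

def make_bundle(items: list) -> dict:
    # deterministic serialization so every honest node gets the same digest
    raw = json.dumps(items, sort_keys=True).encode()
    return {"data": items, "digest": hashlib.sha256(raw).hexdigest()}

def validate_bundle(bundle: dict, refetched: list) -> bool:
    # a validator re-fetches from the source and checks the commitment
    raw = json.dumps(refetched, sort_keys=True).encode()
    return hashlib.sha256(raw).hexdigest() == bundle["digest"]

blocks = [{"height": h, "tx_count": h % 3} for h in range(100, 105)]
bundle = make_bundle(blocks)
assert validate_bundle(bundle, blocks)           # honest upload passes
assert not validate_bundle(bundle, blocks[:-1])  # tampered data is caught
```

In the real protocol, passing validators vote the bundle valid and the uploader earns $KYVE; a failed vote slashes the uploader instead.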

Data Pipeline is another way of using KYVE. Through a no-code solution, KYVE data can be imported into any destination supported by Airbyte within just a few clicks. Since KYVE fetches raw data, it allows you to transform it to best fit your use case.

John Letey, KYVE’s co-founder & CTO, joined our podcast and shared everything you need to know about KYVE, including a fun fact: John wrote his first program in C++ when he was only 8 years old.

At genesis, inflation was disabled. A governance proposal is currently being voted on to activate inflation with default parameters that were calculated considering the staking ratio at genesis. The goal is to reach an APY of 20%, a reference value influenced by other Cosmos networks.

Source: Kyve

The project is backed by multiple prominent foundations, including those of Near, Solana, and Avalanche, to name a few.

To know more about staking $KYVE with Chorus One, click here


Core Research
Cosmos ticks all the boxes in building the ultimate modular blockchain
We evaluate why Cosmos is the best solution for building a modular blockchain.
March 19, 2023
5 min read


Cosmos is steadily becoming the place to create the ultimate modular blockchain. The Cosmos SDK allows developers to effortlessly roll out tailored blockchains, resulting in a flood of new projects that provide specialized settings for novel products. The goal of modular blockchains is to separate Execution, Settlement, Consensus, and Data Availability. Refer to page 19 of this report to learn more about modular vs. monolithic (e.g., Ethereum) blockchain designs. As a result, we see various teams tackling the issues of each layer and creating optimal solutions and developer environments. Ultimately, developers could use these optimizations to create a highly performant application on such an ultimate modular blockchain - not to mention the greater decentralization that comes with spreading your product across numerous ecosystems.

Let’s go over the problems that current ecosystems face in each layer of the modular stack, and how various quality teams are solving them. Please bear in mind that other teams are solving these issues too; we are just exploring some of them.

Issues with Data Availability

It is important to explain that when a block is appended to the blockchain, each block contains a header and all the transaction data. Full nodes download and verify both, whilst light clients only download the header to optimize for speed and scalability.

Full nodes (validators) cannot be deceived because they download and validate the header as well as all transaction data, whereas light clients download only the block header and optimistically presume the transactions are valid. If a block includes malicious transactions, light clients depend on full nodes to give them a fraud proof. This is because light clients verify blocks against consensus rules, but not against transaction validity proofs, which means a 51% attack that alters consensus can easily trick them. As node operations scale, secure methods to run light clients are preferable because of their reduced operational costs - and if nodes are cheaper to run, decentralization also becomes easier to achieve.
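The asymmetry between the two node types can be sketched in a few lines of Python (a deliberately toy model, with a plain hash standing in for a Merkle commitment):

```python
# Minimal illustration of why light clients need fraud proofs: they check
# only the header and its consensus signatures, never the transactions.
import hashlib

def tx_root(txs: list) -> str:
    # stand-in for a Merkle root: hash of the concatenated transactions
    return hashlib.sha256(b"".join(txs)).hexdigest()

def full_node_verify(header: dict, txs: list, is_valid_tx) -> bool:
    # full nodes check both the data commitment and every transaction
    return header["tx_root"] == tx_root(txs) and all(is_valid_tx(t) for t in txs)

def light_client_verify(header: dict) -> bool:
    # light clients only check that the header follows consensus rules;
    # an invalid tx behind a well-formed header goes unnoticed
    return header.get("signed_by_majority", False)

def is_valid_tx(tx: bytes) -> bool:
    return b"minted" not in tx  # toy validity rule

txs = [b"alice->bob:5", b"mallory->mallory:minted_from_nothing"]
header = {"tx_root": tx_root(txs), "signed_by_majority": True}

assert light_client_verify(header)               # fooled by a 51% attack
assert not full_node_verify(header, txs, is_valid_tx)  # full node rejects
```

A fraud proof closes this gap: one honest full node can send the light client the offending transaction plus its Merkle path, letting the light client reject the block without downloading everything.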

The DA problem refers to how nodes can be certain that, when a new block is generated, all of the data in that block has truly been published to the network. The problem is that if a block producer does not disclose all of the data in a block, no one can determine whether a malicious transaction is concealed within it. What is required is a reliable source of truth as a data layer: one that orders transactions as they arrive and checks their history. This is what Celestia does, optimizing solely for the Consensus and Data Availability layers. Celestia is only responsible for ordering transactions and guaranteeing their data availability, which is similar to reducing consensus to atomic broadcast - the reason Celestia was originally called ‘Lazy Ledger’. Efficiently performing this role for a future with thousands of applications, however, is no easy task. See the different types of nodes in Celestia here.

Two key features of Celestia’s DA layer are data availability sampling (DAS) and Namespaced Merkle trees (NMTs). Both are innovative blockchain scalability solutions: DAS allows light nodes to verify data availability without downloading a complete block, while NMTs allow Celestia’s execution and settlement layers to download only the transactions that are relevant to them. In a nutshell, Celestia allows each light node to verify just a small sample of data which, combined with the work of other light nodes, provides a high-security guarantee that the block data is available. Hence, Celestia assumes that there is a minimum number of light nodes sampling the data availability layer.

“This assumption is necessary so that a full node can reconstruct an entire block from the portions of data light nodes sampled and stored.”
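The statistics behind sampling are easy to verify: if a producer withholds a fraction of the (erasure-coded) block, the chance that k random samples all miss the withheld portion shrinks exponentially. A quick sketch (the 50% withholding figure is the standard illustrative assumption for 2D erasure-coded blocks, where a producer must hide a large share to do any damage):

```python
# Probability that at least one of `samples` uniformly random chunk
# queries hits withheld data, when a producer hides `hidden_fraction`
# of the (erasure-coded) block. This is the core DAS argument.

def detection_probability(hidden_fraction: float, samples: int) -> float:
    return 1 - (1 - hidden_fraction) ** samples

# Erasure coding forces a malicious producer to withhold a large share
# of the extended block, so very few samples suffice per light node:
p = detection_probability(0.5, samples=15)
assert p > 0.9999  # one light node alone is >99.99% likely to notice
```

Many light nodes sampling independently also collectively hold enough chunks for a full node to reconstruct the block, which is exactly the assumption the quote above describes.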

It is worth noting for later that these layers (DA & Consensus) are naturally decentralized and easier to have fully on-chain, as most of the work is taken on by the validators. Scaling here will ultimately depend on the consensus algorithm. ‘Rollapp’ developers will not need to assemble a validator set for their applications either.

Issues with Execution & Settlement layers

  • Execution refers to the computation needed for executing transactions that change the state machine accurately.
  • Settlement involves creating an environment in which execution levels can check evidence, settle fraud claims, and communicate with other execution layers.

The present web3 environment suffers from centralization in the execution and settlement layers. This is due to the fact that the on-chain tech stack severely limits an application’s functional capability. As a result, developers are forced to perform heavy computation off-chain, in a centralized manner. On-chain apps are not inherently interoperable with external systems, and they are also constrained by a particular blockchain’s storage and processing capability.

More than just a distributed blockchain database is required to create the ultimate decentralized apps. High-performance processing, data IO from/to IPFS, links to various blockchains, managed databases, and interaction with various Web2 and Web3 services are all common requirements for your application. Additionally, different types of applications require different types of execution environments that can optimize for their needs.

Blockless — Facilitating custom execution

Blockless can take advantage of Celestia’s data availability and focus on improving application development around the execution layer. Blockless provides a p2p execution framework for creating decentralized serverless apps. By offloading operations from the L1 to the performant, configurable execution layer offered by Blockless, dApps are no longer limited by on-chain capacity and throughput. With Blockless, you can transfer intensive processing from a centralized cloud service platform or a blockchain to the Blockless decentralized node network using built-in functions. With the Blockless SDK, you can access Web2 and Web3 applications alike; it currently supports IPFS, AWS S3, Ethereum, BNB Chain, and Cosmos.

Developers using Blockless only need to provide the serverless functions they want to implement (in any language!), as well as a manifest file that specifies the minimum number of nodes required, hardware specifications, geolocation, and node layout. In no time, their services will be operating with ultra-high uptime and hands-free horizontal scaling. To learn more about the architecture of the Blockless network, go here; once again, its orchestration chain is a Cosmos-based blockchain responsible for function/app registration. The cherry on top is that you can incorporate or sell community functions and extensions in your own application design in a plug-and-play manner using the Blockless Marketplace. In Cosmos, you can already do this through projects like Archway or Abstract.

SAGA — Rollups as a service and Settlement optimization

Popular L2s and rollups today, like Arbitrum, Optimism, and StarkNet, use Ethereum for data availability and rely on single sequencers to execute their transactions. Such single sequencers perform fast when submitting to Ethereum but evidently stand as a centralized point of failure. Saga has partnered with Celestia to provide rollups as a service with a decentralized sequencer set.

“Saga’s original design is meant to provide critical infrastructure to the appchain vision, where the Saga protocol abstracts away the creation of a blockchain by leveraging IBC.”

Saga provides easy-to-deploy “chainlets” for any developer to roll out an application without having to care about L1 developments. Although their main focus is to support full appchain formation on top of the Saga Mainnet, the technology can also support the modular thesis. This means that rollup developers can use Saga’s validators to act as sequencers and decentralize their set. In other words, Saga validators can also work in shifts submitting new blocks for Celestia rollups.

Saga offers a service that organizes validators into sequencers and punishes misconduct through shared security. Saga’s technology can detect invalid block production with fraud proofs, while censorship or inactivity is managed by raising challenges that force a set of transactions to be processed. This means that Saga can enhance the settlement layer whilst using Celestia for the data needed to generate fraud proofs and censorship challenges. The same could even be done for Ethereum, with the additional benefit of shared security between chainlets and IBC out of the box. To further understand the difference between running a rollup and a chainlet, please refer to this fantastic article.
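The “validators working in shifts” idea can be illustrated with a toy round-robin schedule (purely illustrative; Saga’s actual sequencer election and slashing logic are more involved):

```python
# Toy round-robin "shift" schedule for a decentralized sequencer set:
# each validator sequences a fixed run of consecutive rollup blocks,
# then hands over to the next. Names and shift length are made up.

def sequencer_for_height(validators, height, shift_length=10):
    # validator i covers heights [i*shift, (i+1)*shift), cycling forever;
    # in a real system, missing a shift would trigger a challenge/slash
    return validators[(height // shift_length) % len(validators)]

vals = ["chorus-one", "val-b", "val-c"]
assert sequencer_for_height(vals, 0) == "chorus-one"   # blocks 0-9
assert sequencer_for_height(vals, 10) == "val-b"       # blocks 10-19
assert sequencer_for_height(vals, 35) == "chorus-one"  # schedule wraps
```

The security benefit over a single sequencer is that no one operator can censor indefinitely: at worst, a misbehaving validator delays its own shift before the rotation moves on.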


In such a modular world, developers finally have full customization power. One could choose to build sovereign rollups or settlement rollups, or even a hybrid; in our example, it would even be possible to use Saga’s consensus instead of Celestia’s. Putting it all together, we could have an application that decentralizes its execution computing through Blockless (while programming in any language), decentralizes its sequencer set with Saga (and can deploy unlimited chainlets if more block space is required), and relies on a reliable, decentralized data availability layer with Celestia. Best of all, these layers are all built and optimized as Cosmos SDK chains, meaning they get out-of-the-box compatibility with IBC and the shared security of chainlets.

Ethereum Withdrawals are near and here’s a quick guide to the event.
We talk about the withdrawal process and future implications.
February 17, 2023
5 min read

Withdrawals are imminent. This March, Ethereum will be undergoing its first hard fork of the year, bringing much anticipated withdrawals to the mainnet. As developers move into the final pre-launch sequence, by upgrading the public testnets (first Sepolia, then Goerli), we wanted to get you up to speed on this coming Shapella (Shanghai + Capella) upgrade.

1. Withdrawals mark the end of the Proof-of-Stake transition cycle

If you look at Ethereum’s Beacon Chain today, participating as a validator means sending at least 32 ETH to the Deposit Contract, i.e. “staking” your ETH. The Beacon Chain follows the contract, querying for changes so that it can process any new deposits. The entire validator lifecycle consists of different states that determine what you can or can’t do as part of the network.

Ethereum only allows a small number of validators to start or stop validating at a time to maintain the stability of the validator set. Once you are part of the active set, you start accruing rewards by voting (“attesting”) once per epoch - roughly every 6.4 minutes - with the occasional block proposal. The majority of these rewards are added to the balance of the validator.

At any point, you might want to stop validating and take out your ETH, in which case you would want to join the voluntary exit queue. On the other hand, you might have been a validator for some time and want to utilize the excess ETH, considering the average validator balance is ~34 ETH.

Withdrawals close the validator cycle and mark the end of the PoS transition that started with the Merge in September 2022. Before then, the two chains were unaware of each other. Specifically, the Execution Layer didn’t communicate at all with the Beacon Chain until they merged. Withdrawals stand opposite to the deposit process, crediting your ETH from the Beacon Chain on the Execution Layer to finally close the cycle.

2. About the Ethereum withdrawal process

There are 2 requirements for withdrawals to be processed:

  • You must have a 0x01 credential, which encodes the Ethereum address where the ETH will be credited. If you don’t have this type of credential, you must sign a message to change it; the change takes effect at the time of the fork.
  • You must have a balance above your 32 ETH (partial withdrawals), or have fully exited the validator according to the validator lifecycle (full withdrawals).
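Checking which credential type a validator has is straightforward from its 32-byte withdrawal_credentials field, as exposed by beacon-chain APIs. A small sketch (the example credential below is fabricated):

```python
# Classify a validator's 32-byte withdrawal_credentials field:
# 0x00 prefix = original BLS credentials (must be rotated to withdraw);
# 0x01 prefix = already points at an execution-layer address.

def credential_type(withdrawal_credentials: str) -> str:
    raw = bytes.fromhex(withdrawal_credentials.removeprefix("0x"))
    if len(raw) != 32:
        raise ValueError("withdrawal_credentials must be 32 bytes")
    if raw[0] == 0x00:
        return "BLS (0x00) - rotate before withdrawing"
    if raw[0] == 0x01:
        # the last 20 bytes are the execution address ETH is swept to
        return "execution (0x01) -> 0x" + raw[12:].hex()
    return f"unknown prefix 0x{raw[0]:02x}"

# fabricated example: 0x01 credential pointing at address 0xabab...ab
creds = "0x01" + "00" * 11 + "ab" * 20
assert credential_type(creds).startswith("execution")
```

Running this against your own validator's credentials (from a beacon-chain explorer or API) tells you whether a rotation message is still needed before the fork.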

For every block, the network scans the validator set for the next 16 validators that satisfy those two requirements. Those withdrawals are then processed as part of the block in gasless operations.

According to the most recent estimate, ~300,000 validators are still on the old credentials, meaning the majority of validators will need to change them (which involves digging up those mnemonics created over two years ago). This change can only be done once.

Chorus One developed a tool called “eth-staking-smith” that enables the user to generate those signed messages and easily update their withdrawal address.

The process after that is fully automatic: you don’t have to do anything else to start spending those rewards, as they will be credited to the withdrawal address without your intervention. If all validators properly change their credentials, a complete run through the active validator set will take about four and a half days, so you can expect rewards to arrive at your withdrawal address at that cadence.
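The ~4.5-day figure is easy to sanity-check: the chain processes at most 16 withdrawals per block, one block every 12 seconds. A back-of-the-envelope calculation (the validator count is our rough assumption for early 2023):

```python
# Sanity check of the withdrawal-sweep cadence: at most 16 withdrawals
# are processed per block, with a 12-second block time.

WITHDRAWALS_PER_BLOCK = 16
SECONDS_PER_BLOCK = 12
SECONDS_PER_DAY = 86_400

def full_sweep_days(active_validators: int) -> float:
    """Days for the sweep to visit every eligible validator once."""
    blocks_needed = active_validators / WITHDRAWALS_PER_BLOCK
    return blocks_needed * SECONDS_PER_BLOCK / SECONDS_PER_DAY

# assuming roughly 520k active validators (ballpark for early 2023):
days = full_sweep_days(520_000)
assert 4.0 < days < 5.0   # consistent with the ~4.5-day estimate
```

As the validator set grows, the sweep interval grows linearly, so the cadence at which a given validator receives its skimmed rewards slowly lengthens.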

Please check the official ETH Withdrawals FAQ to learn more about withdrawal mechanics and enabling withdrawals for your validator.

3. Changes in the staking panorama for Ethereum

We have previously elaborated on why staking is the most attractive risk-adjusted source of yield in crypto. We believe in its force to provide value at the base level to stakers, deliver competitive results and guarantee that networks such as Ethereum continue to operate as the backbone of a decentralized financial system.

However, the inability to withdraw staked assets on Ethereum has been a risk stakers had to weigh before committing over the past years. Not anymore. This massive unlocking of liquidity is sure to make big waves in the coming months and impact the staking panorama of Ethereum. Staking has also made headlines with recent regulatory developments in the United States. As a non-custodial staking provider, we continue to believe in this thesis.

With an increasing number of ETH being staked post-Merge, along with growing adoption of the Ethereum network and a rising ETH price, we believe that 2023 will be an even stronger year for Ethereum staking post-Shanghai. However, we must get ready for some changes.

  • The Shanghai Upgrade de-risks ETH staking as it improves liquidity and reduces lock-up requirements by initiating the withdrawal process, making it increasingly attractive to institutions wanting long-term bets on the blockchain ecosystem.
  • In terms of Liquid Staking Derivatives (“LSDs”), you will be able to redeem them and unstake your ETH directly on the protocol. This means unlocked liquidity to compound, which might push the APY up slightly. Some stakers might choose to migrate to other providers altogether.
  • Staked ETH held by the Deposit Contract and active validator counts continue to grow with new momentum after the Merge, even pre-Shanghai, when some narratives called for sell pressure on ETH.

4. How Chorus One prepares for Withdrawals

We made our bet on the Ethereum staking ecosystem last year, when we finally unveiled OPUS: our API and Portal solution to significantly speed up institutional staking operations.

Since then, we have been working on many exciting features, including enabling MEV rewards, with more in the pipeline to be rolled out in the coming months. We plan to support withdrawals in our infrastructure as soon as it's safe after the upgrade, and we are working to create the simplest staking and unstaking process in the market for all kinds of institutional clients.

We have been testing this process on the available testnets, and will continue to do so, for increased security. We also provide a suite of options, including the aforementioned updating of validator withdrawal addresses and a full Portal for consulting all accumulated rewards.

Reach out to us to learn more about how OPUS can help you start staking or offer staking to your customers with minimal setup.


Chorus One announces staking for Gnosis Chain
Staking GNO contributes to the chain security and earns rewards.
February 9, 2023
5 min read

We are excited to announce that we have onboarded Gnosis Chain as validators. Gnosis is one of the first Ethereum sidechains in existence and has kept close to its values from inception. Gnosis Chain is EVM-based and secured by over 100k validators around the world. It hosts a very diverse validator set and it is propped up by the community governance of GnosisDAO to ensure it remains credibly neutral at a much lower price point than Ethereum mainnet. It powers an ecosystem of DApps including POAP (Proof of Attendance Protocol, the original NFT protocol), Dark Forest (a fully decentralized strategy game, built with zkSNARK technology), Giveth (public goods, peer-to-peer direct funding platform), and much more.

Gnosis has a long history of working alongside Ethereum, although Gnosis Chain is technically a new blockchain. It first specialized in prediction markets, decentralized exchanges, and wallet solutions, and joined expertise with xDAI Chain in 2021 to provide fast and inexpensive transactions. This newer chain has some great features, including a block time of 5 seconds (making it ideal for everyday payments), a native stablecoin, a low-fee system (gas fees cost .01 xDAI per 500 transactions), Ethereum compatibility/interoperability, and much more. Gnosis Chain already successfully went through its Merge upgrade, becoming a full Proof-of-Stake network on December 8, 2022.

Gnosis Chain runs on a dual-token framework: xDAI, which is a wrapped version of MakerDAO’s algorithmic stablecoin DAI, is the payment coin of the network. By using a stablecoin for payments and calculating gas in xDAI, Gnosis Chain can keep fees extremely low. On the other side, GNO is the staking and governance token for GnosisDAO, allowing validators and delegators to secure the chain. Currently, there are 342k GNO staked for on-chain voting, making Gnosis Chain the third most decentralized blockchain after Bitcoin and Ethereum. Chorus One is thrilled to support Gnosis Chain in our quest to expand the PoS economy.

About staking on Gnosis Chain

Block Explorer:

Validating Rights: The minimum requirement to run a validator is 32 mGNO (1 GNO). Gnosis follows Ethereum’s Proof-of-Stake rewards system. You can learn more here.

Staking yield: 15.78%

Slashing: Staked tokens are subject to slashing.

To stake GNO or to set up a whitelabel validator, reach out to
