Uqbar is building a decentralised network on an operating system known as Urbit, an identity-driven, peer-to-peer, deterministic system that uses a functional programming language called Hoon. Urbit was originally conceived in 2002 to solve what was seen as a failure of the modern internet to live up to its expectations as a peer-to-peer network of personal servers.
The failure of the modern internet can be described as the vulnerabilities and inefficiencies that arise from building on old operating systems such as Unix, which are prone to bad state, undefined behaviour, memory leaks, and crashing. When we access the internet today, our identities cannot be transferred or traced across services, we cannot understand the state of our own systems without trusting a centralised third party to report it, we use computing resources inefficiently, we store information across a multitude of servers, and we interact with peers in a non-private manner that exposes our personal information.
Urbit built an identity-driven, deterministic, functional, peer-to-peer and private operating system to solve the abovementioned problems. Urbit’s operating system gives users an alternative way to access the internet, one in which they retain ownership of their own data and become digitally sovereign citizens, released from the shackles that centralised technology companies hold over most of the internet’s population. It comes as no surprise that the Uqbar team concluded that Urbit would be the ultimate system to build a blockchain on top of. Uqbar is venturing into the unknown by creating a zero-knowledge execution layer that settles on Ethereum but runs on top of Urbit.
Urbit and blockchain share many similar properties (in fact, Urbit uses Ethereum NFTs to record IDs) but primarily differ in that blockchains are purpose-built to solve the double-spend problem. Urbit does not need to solve the double-spend problem because there is no such concept as a fungible currency in Urbit. Instead, Urbit needs to solve a ‘double-sell’ problem: preventing a ‘parent’ from selling one ID to two buyers at the same time. As Urbit IDs act more like real estate than like a currency, this problem is much easier to solve, because the chance of one ID being sold to two parties at the same time is negligible (especially as reputation and governance exist in the system, meaning IDs implicitly carry an economic stake in this imperfectly decentralised system).
Indeed, choosing to build a blockchain on top of Urbit actually solves many problems that blockchain itself faces today, such as fragmented middleware, expensive versioning and central points of failure. When Uqbar launches on Urbit as an execution layer for Ethereum, it solves the above-mentioned blockchain problems to create a private, unified and composable environment for on-chain and off-chain data. To elaborate, Uqbar is interoperable with all applications built on Urbit itself. To date on blockchain, if we hold financial assets such as ERC-20s on Ethereum, they cannot be interpreted by any web2 website (e.g. Instagram does not understand if you transfer an NFT to it).
When Uqbar launches, any financial asset on Uqbar can be interpreted by all applications outside of Uqbar in Urbit (e.g. send an NFT from Uqbar to a non-blockchain Urbit application such as a blog to be used/viewed there). Urbit and Uqbar are fundamentally recreating what the internet is and what we can do with it. Urbit is a system built as if cryptocurrency was invented at the same time that the internet was created, perfectly suited to facilitate cryptocurrency transactions. To move to the next evolution of the internet, we need to rebuild the past. Forget web3, web0 is here.
To understand how powerful Uqbar is, we first need to understand Urbit: the operating system that overlays traditional operating systems such as Unix, architected to be identity-driven, deterministic, functional, peer-to-peer and merkelisable.
One of the most powerful properties of Urbit is that it is identity-driven. Urbit has a built-in identity system in the form of a PKI on Ethereum. Having the private key is having the Urbit instance. This coincides with the ethos of cryptocurrency, as attributable ownership of computation is a desired property in the crypto world. The internet of today does not have a canonical identity system, which allows for sybil attacks such as spam and phishing. Urbit made its operating system identity-driven in order to establish cryptographic accountability, which is non-existent in existing operating systems.
Within Urbit, all messages are securely signed and encrypted by the sender and verified by the receiver. Having an identity in an operating system makes life much more frictionless because it is immutable and persistent. This means that an Urbit ID will always exist and it cannot be changed in a way that is not authorised by the owner of the ID. For once, a user has an identity that can be used across an entire system, rather than needing to create a new identity and password on every website they sign up for.
A deterministic system is a system in which a given initial state or condition will always produce the same results. Urbit is a deterministic operating system: the state of the OS is a pure function of the event log. The event log in this context is a complete, ordered description of everything the OS has been asked to compute. All of the data is stored in a single binary tree of integers, and every computation it performs is a series of manipulations of some sub-tree following 12 allowed operations. This means the current state of an Urbit instance is fully determined by its genesis state and the sequence of inputs given to it. This is entirely different to operating systems today, which are non-deterministic and vulnerable to going into a bad state, producing undefined behaviour (often resulting in bugs), memory leaks, or crashing.
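The idea that state is a pure function of the event log can be sketched in Python (an illustrative model, not actual Nock or Hoon; the event names here are hypothetical):

```python
from functools import reduce

def apply_event(state, event):
    """Pure transition function: returns a new state, never mutates the old one."""
    op, value = event
    if op == "add":
        return state + (value,)
    if op == "drop":
        return state[:-1]
    return state

def replay(event_log, genesis=()):
    """The current state is fully determined by genesis state + ordered event log."""
    return reduce(apply_event, event_log, genesis)

log = [("add", 1), ("add", 2), ("drop", None), ("add", 3)]
assert replay(log) == (1, 3)  # replaying the same log always yields the same state
```

Because the state is nothing but a fold over the log, two machines given the same genesis state and the same ordered inputs can never disagree about the result.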
Urbit also greatly differs from existing operating systems as it is built using a functional programming language, called Hoon. One major benefit of using a functional programming language is that side-effects seen in imperative programs are minimised (such as mutable state having unintended consequences from user interactions), which makes a system more reliable and therefore easier to maintain. Functional programming languages also make it easier to reason about software, which results in it being easier to debug than other types of programming languages.
A good way of thinking about a functional programming language versus other types of programming languages is to think of mathematical functions versus computer functions. Functional programming languages closely resemble mathematical functions whereby the same input will always equal the same output, which ultimately results in it being easier to debug than other types of programming. As well as the above, functional programming also avoids mutable/shared state, which is important for concurrency and increases performance of an entire system.
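The contrast can be made concrete with a toy Python example (illustrative only):

```python
# Imperative style: output depends on hidden mutable state,
# so the same call can return different results over time.
counter = 0
def impure_next():
    global counter
    counter += 1
    return counter

# Functional style: the same input always produces the same output,
# which makes the function trivial to test and debug.
def pure_next(n):
    return n + 1

assert pure_next(41) == pure_next(41) == 42
```

The pure version can be reasoned about in isolation; the impure one cannot, because its behaviour depends on everything that has happened before.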
Another property of Urbit that is unique to its operating system versus others is that its network topology is defined a priori and entirely within a peer-to-peer (p2p) network. The location of a computer in the Urbit network is defined by its name (assigned at genesis). All communication is fully encrypted, leveraging the built-in identity system (PKI). This makes peer discovery, software distribution and identity verification trivial.
Finally, state in Urbit is represented as a single binary tree. All of the data/code in Urbit is stored in a single binary tree of integers, and every computation performed in Urbit is a manipulation of some sub-tree. For those already familiar with cryptocurrency, a binary tree data structure is also used in blockchains (merkle trees). This makes Urbit an ideal system to build a blockchain on top of.
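As a sketch of why a binary tree of integers is straightforward to merkelise (illustrative Python, not Urbit’s actual hashing scheme):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(tree) -> bytes:
    """Hash a nested (left, right) tuple of integers bottom-up into one root."""
    if isinstance(tree, int):
        return h(tree.to_bytes(8, "big"))
    left, right = tree
    return h(merkle_root(left) + merkle_root(right))

state = ((1, 2), (3, (4, 5)))
root = merkle_root(state)
# Changing any sub-tree changes the root, so state commitments are cheap:
assert merkle_root(((1, 2), (3, (4, 6)))) != root
```

A single root hash then commits to the entire state, exactly the property blockchains rely on.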
By understanding the architecture of Urbit and what makes it unique from the properties mentioned above, such as the deterministic, functional, identity-driven, peer-to-peer and merkelised nature of the system, we can begin to understand what makes it a lucrative operating system to build a decentralised network on top of.
Firstly, Urbit is deterministic. This is the exact same computational model that is used in blockchains. The same inputs will always equal the same outputs. The operating system of Urbit is as reliable as the Uqbar decentralised network itself, an ideal combination.
Secondly, Urbit is unified. The structure of Urbit means all applications are encoded in the same way that they are in the state. This means that smart contracts written for Uqbar are eventually available to the wider Urbit computer. For example, one could write a smart contract on Uqbar that is a financial application, which receives oracle updates of values directly from an application built on Urbit, like a weather app. Or, a smart contract on Uqbar could be written that executes upon actions taken outside of the blockchain (e.g. an NFT is minted to an address when an Urbit ID purchases an item on a shopping application on Urbit).
The point to understand here is that data that exists on Uqbar and Urbit is interpreted in the same way. Any financial assets that a user holds on Uqbar will be able to be understood by any Urbit application. Applications on Uqbar and Urbit execute in the same way and are understood by each other’s environments, unifying the on-chain and off-chain world for the first time.
Thirdly, Urbit is intrinsically private. Urbit uses a public key infrastructure (PKI) called Azimuth, which exists on the Ethereum blockchain, recording ownership and public keys of Urbit IDs. Ames (Urbit’s encrypted p2p protocol) encrypts every message sent on Urbit using symmetric keys derived from the public keys of the peers. One concludes that all communication on Urbit has at least the same privacy guarantees as messaging someone on Signal. All communication and data transfer on Urbit (DMs, blog posts, file transfer, etc.) is private.
However, it should be noted that whilst the content of the communication (the ciphertext) is private to the sender and receiver, the metadata (the to and from address of each packet) can be ‘watched’ by middlemen. Still, users in Urbit have an assurance that the ciphertext of the communication itself is private (e.g. a user saying hello to a friend cannot be read by anyone but that friend in Urbit). This in itself offers stronger privacy guarantees than what exists now in web2 (or web3).
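A toy Python packet model makes the distinction concrete (the XOR cipher and ship names here are illustrative stand-ins, not Ames’s real cryptography):

```python
import hashlib
from itertools import cycle

def toy_cipher(shared_secret: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed by a shared secret. XORing twice decrypts."""
    keystream = hashlib.sha256(shared_secret).digest()
    return bytes(a ^ b for a, b in zip(data, cycle(keystream)))

packet = {
    "from": "~sampel-palnet",                 # metadata: visible to middlemen
    "to": "~zod",                             # metadata: visible to middlemen
    "body": toy_cipher(b"shared", b"hello"),  # ciphertext: opaque to middlemen
}
assert toy_cipher(b"shared", packet["body"]) == b"hello"  # only key-holders can read it
```

A middleman sees who is talking to whom, but the body is unreadable without the shared secret.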
In the future, it is likely that a privacy network will be built within Urbit, which could ‘mix’ addresses (Urbit IDs) to make it extremely difficult for any third party to work out an identity based on types of packets it receives and further enhance privacy guarantees when using Urbit.
Fourthly, Urbit is identity-driven, much like a decentralised network itself. Urbit’s PKI, Azimuth, is used as a decentralised ledger for what are known as Urbit identities. The fact that Urbit already leverages a blockchain for identities shows the clear link between the two technologies. Urbit decided that a blockchain was the best technology to store IDs, which are necessary to use the Urbit network. Azimuth is a parallel system that can be used as a generalised identity system for any Urbit application. All users have an identity in Urbit. Once an identity is linked to an Urbit instance, it cannot be unlinked, as an Urbit ship acquires its identity at boot and retains it for its lifespan. In order to use a different identity in Urbit you would need to boot a new ship. This is somewhat similar to how decentralised networks work today, whereby a public address is needed to interact with any blockchain.
However, Urbit made one critical breakthrough in decentralised identification. In Urbit, an ID (for the most part in the form of an NFT, apart from moons or comets) is needed to access the network. The number of IDs that can exist in Urbit is limited, hence identity is scarce. Scarce identities are inherently valuable because only a limited number exist. This differs from a centralised network, which can disperse identities at will at no extra cost. Not only that, but IDs can be transferred and traced, meaning a user can take their identity with them wherever they go within the system. This powerful concept is a breakthrough in decentralised identification, and another example of how well suited Urbit is for building a blockchain on.
Fifthly, Urbit is perfect for software versioning and distribution. It becomes very efficient to distribute software using Urbit because all users have an identity, therefore users are able to send and receive data with just one signature. Application developers are able to easily ship different versions of their product and send the different versions to different IDs. For example, contract developers could A/B test applications on the blockchain itself to experiment with their products by sending different versions to different Urbit IDs. Testing different versions of smart contracts for different users is not possible on blockchains that are built on existing operating systems today.
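A minimal sketch of per-identity distribution (the helper and version names are hypothetical; Python for illustration):

```python
import hashlib

RELEASES = {0: "contract-v1.0", 1: "contract-v1.1-experimental"}  # hypothetical versions

def variant_for(urbit_id: str, n_variants: int = 2) -> int:
    """Deterministically bucket an Urbit ID into an A/B test group."""
    return hashlib.sha256(urbit_id.encode()).digest()[0] % n_variants

def version_to_ship(urbit_id: str) -> str:
    """Because IDs are stable, the same ship always receives the same variant."""
    return RELEASES[variant_for(urbit_id)]

assert version_to_ship("~zod") == version_to_ship("~zod")
```

Stable, scarce identities are what make this kind of targeted distribution possible at all: there is a persistent address to ship each variant to.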
Another advantage of using Urbit for a blockchain is that it gives developers the option to charge different users different prices. To elaborate, it is easy to verify blockchain state within the bounds of a single application. It is possible to verify that a payment was made by a specific Urbit ID, and then automatically send some data or software to that Urbit ID in response. This is somewhat possible with the existing (non-Urbit) stack, except that wallet addresses and contact addresses are not linked by default, so some bookkeeping is necessary.
Applications could offer different pricing packages for users dependent upon their use of the application and actions taken. Separately to the above, applications that exist in Urbit outside of the Uqbar blockchain could request payment from a user via Uqbar blockchain itself. An application developer could receive payment on-chain and then ship the software off-chain. Software versioning and distribution on Urbit is a perfect match for a blockchain to leverage as it provides more flexibility for application developers and a better user experience for users.
Now we know what Urbit is and why it makes sense for a blockchain to be built on top of, we can put the pieces of the puzzle together to understand the similarities and differences of Urbit and blockchains as they exist today.
Firstly, the similarities. Both Urbit and blockchains have multiple design similarities with each other, which include:
A p2p networking layer
Public key infrastructure as identity
A functional system
A permissionless system
However, blockchains have a value transfer layer that Urbit currently does not have, which is the major difference between the two technologies. For example, a blockchain is built specifically to solve the double-spend problem. Satoshi Nakamoto, the creator of Bitcoin, solved the double-spend problem, a ubiquitous problem within the cypherpunk space at the time, in order to remove the necessity of a “trusted third party” for digital currency.
In particular, Satoshi came up with a system, known as blockchain, which had within it a sybil resistance mechanism (Proof-of-Work), a consensus algorithm for nodes to adhere to (Nakamoto consensus), game theory (cost of attack is more than cost of work) and a currency to reward those that verified transactions on the blockchain (Bitcoin). Soon after the invention of Bitcoin, Ethereum was created. Ethereum introduced the notion of building a virtual machine on top of a blockchain, which paved the way for ‘smart contracts’ in the blockchain space.
Since then, there has been a vast amount of blockchains that have experimented with sybil resistance, consensus, game theory, virtual machines and currencies in different ways.
The double-spend problem that Bitcoin originally solved, which accounts for most of blockchain’s success, is a similar but somewhat different problem to what Urbit calls the ‘double-sell’ problem. Bitcoin and its relatives are designed to secure a complete, interdependent, collective transaction history. Bitcoin uses the UTXO model to verify whether a transaction is valid by looking at the history of previous transaction inputs. Urbit has no such thing as UTXOs because Urbit ships are non-fungible and have no collective history or dependencies, which removes the need for a ‘chain’.
Another major difference between Bitcoin and Urbit is their hierarchies: Bitcoin has a completely flat, trustless and decentralised system, whereas Urbit is more hierarchical. ‘Parents’ in Urbit can theoretically censor ‘ships’ that are spawned from them (e.g. via denial of service) and, as a result, Urbit could be argued to be more centralised than a permissionless blockchain such as Bitcoin.
However, it should be noted that ships that spawn from parents do have the optionality to take their ship elsewhere (e.g. if a provider is denying a ship service, the ship has the option to transfer its identity to another provider, which enhances the decentralisation of the whole system). In this way, Urbit is imperfectly decentralised with a less flat hierarchy than what Bitcoin has. Uqbar noticed an opportunity to build a value-transfer layer on top of Urbit that is completely flat and permissionless.
This value transfer layer records every transaction on a universal ledger, able to be read by anyone. However, blockchains as they exist today are not perfect. If Uqbar were not built on top of Urbit, it would likely run into the same types of problems that most blockchains experience today on existing operating systems.
Fragmented middleware / non-unified environments
A major downside of blockchains is their inability to provide any service outside of consensus on internal state changes (accounts and balances). For this reason, if an application wants to do more than track state changes of accounts that live on a blockchain, it must leverage a middleware protocol. However, middleware protocols built to enhance the capabilities of decentralised applications are often networks themselves and require a completely different skill set to run and maintain.
Consider this: a decentralised application would like to use a middleware protocol such as an oracle network to update a value in a smart contract when an event takes place outside the blockchain (e.g. the weather temperature), as well as an RPC network to read internal blockchain state (e.g. account balances). Right now, an application developer would most likely have to learn the tools of each middleware protocol in order to utilise it as securely as possible. As you can imagine, every additional middleware protocol an application uses makes it more and more difficult to manage. There is no one unified middleware protocol capable of providing every middleware service (e.g. oracle, RPC, cloud, analytics, etc.) for an application.
Expensive versioning
To understand what we mean by expensive versioning, we must first define versioning: the process of assigning unique version names or numbers to unique states of computer software. Because a blockchain is intrinsically focused on solving the double-spend problem, it cannot easily iterate on versions without a fork taking place, due to the consensus required to change which version all nodes follow. There is no way in a blockchain to distribute software to specific identities; all nodes must upgrade in order to avoid a fork.
Censorable
Realistically, today, although most decentralised applications are deployed on decentralised networks, most users interface with them through a server (or node) hosted by a centralised party (e.g. AWS or Infura). For example, most DeFi applications are accessed through a website hosted on AWS, or an interaction with a blockchain can only be made through a centralised provider that relays transactions from a user to the blockchain.
Uqbar is a decentralised network that leverages the best properties of Urbit’s operating system and blockchains to create a self-sovereign, private, functional, unified and composable environment for on-chain and off-chain data.
Uqbar is going-to-market as a zero-knowledge execution layer (‘Layer 3’), using Starknet as a settlement layer by posting proofs to Starknet’s ‘Layer 2’ for verification. Uqbar is a highly scalable, customisable and composable decentralised network that is leveraging Urbit’s performance and capabilities (networking, identity, databases, authentication, and software distribution) to build an execution layer on top of Urbit’s operating system that settles transactions on Starknet. Uqbar’s permissionless network is architected through shards, flexible time-to-finality, inter-shard interoperability, inter-shard composability, customisable data availability, a unified developer experience, privacy and on-chain governance.
Shards (Towns)
Shards are called towns in Uqbar and each town has different rules and regulations. For example, sequencing times and data availability methodology across towns will likely look vastly different. Some towns might require fast sequencing times for fast finality, whereas others might not require such fast sequencing times if finality does not need to be sub-second.
How data is made available across shards will likely be different too. For example, some towns might use validiums for data availability, whilst others might use volitions, and so on. The important thing here is that each town does utilise a data availability solution to avoid becoming ‘locked’ (e.g. nodes being unable to compute the balance of every account at a given state because data is unavailable, leaving a new state unable to be propagated).
Flexible Time-to-Finality
Finality across towns in Uqbar is customisable and each town will have a different time-to-finality depending on its own requirements. Shards that want faster finality (e.g. a CLOB) will likely require node operators that have specialised resources for executing proofs. There are two types of finality in Uqbar, business and settlement finality.
When it comes to business finality, Uqbar classifies this as the time it takes for a transaction to be sent to a sequencer and for the sequencer to verify it. The second type of finality in Uqbar is ‘settlement finality’. Settlement finality occurs after business finality, once a sequencer has batched all transactions from users and submitted them to a prover, the prover has generated a proof and submitted it to Starknet, the state changes the transactions caused have been posted to Starknet for data availability, and the settlement contract on Starknet has been called to record and update the new stored state of the particular shard.
In a nutshell, business finality is once a sequencer has received and signed confirmation to a user on Uqbar (execution layer), whereas settlement finality is when the settlement layer (Starknet) has verified (via a contract on Starknet) that a proof being generated is valid. Having a proof verified on a settlement layer gives a user a higher guarantee that their transaction will not be rolled-back or tampered with.
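The two stages can be sketched as a toy Python model (the class, method names and stand-in ‘proof’ are hypothetical; the real flow involves a prover and a verification contract on Starknet):

```python
from dataclasses import dataclass, field

@dataclass
class TownSequencer:
    pending: list = field(default_factory=list)
    settled_root: str = ""

    def receive(self, tx: str) -> str:
        # Business finality: the sequencer receives, verifies and signs the tx.
        self.pending.append(tx)
        return f"signed:{tx}"

    def settle(self) -> str:
        # Settlement finality: the batch is proved, the proof is verified on the
        # settlement layer, and the new state root is recorded there.
        batch, self.pending = tuple(self.pending), []
        self.settled_root = f"root-of-{len(batch)}-txs"  # stand-in for a verified proof
        return self.settled_root

town = TownSequencer()
assert town.receive("tx1") == "signed:tx1"   # business finality for tx1
town.receive("tx2")
assert town.settle() == "root-of-2-txs"      # settlement finality for the batch
```

Each transaction thus becomes final twice: quickly at the sequencer, and later, with stronger guarantees, on the settlement layer.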
Figure 1 — Uqbar’s Transaction Lifecycle and Data Flow using Starknet as a Settlement, Consensus and Data Availability Layer
Inter-shard Interoperability
Trust-minimised bridging is possible between towns in Uqbar. Users send assets to a ‘burn’ smart contract in their ‘origin’ town, including in the payload which destination town they would like to bridge their assets to. After a user has indicated their cross-town intent, the ‘burn’ smart contract increments the origin town’s nonce to keep track of the sequence of the transaction.
Afterwards, the destination town claims the asset from the source town by providing a merkle proof of the burn state, which is verified via a settlement contract on the Layer 2 (Starknet). When a user’s bridging transaction is verified by the settlement layer (Starknet) the user is able to bridge the assets and amount specified in the original burn transaction payload to be used on the destination town.
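The burn-and-claim flow can be sketched in Python (a toy model; the contract names and membership check are illustrative stand-ins for a real merkle proof verified on Starknet):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class BurnContract:
    """Toy 'burn' contract in the origin town."""
    def __init__(self):
        self.nonce = 0
        self.burn_leaves = []

    def burn(self, asset: str, amount: int, dest_town: str) -> bytes:
        self.nonce += 1  # tracks the sequence of cross-town transactions
        record = f"{self.nonce}:{asset}:{amount}:{dest_town}".encode()
        self.burn_leaves.append(h(record))
        return record

def settlement_accepts(record: bytes, burn_leaves: list) -> bool:
    # Stand-in for the settlement contract verifying a merkle proof of the burn.
    return h(record) in burn_leaves

origin = BurnContract()
record = origin.burn("ZIG", 10, "town-b")
assert settlement_accepts(record, origin.burn_leaves)  # destination town may now mint
```

The destination town only mints once the settlement layer has accepted the proof, so neither town has to trust the other directly.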
Inter-shard Composability
There are no slots or epochs in Uqbar. Transactions are sequenced depending on the rules and regulations governing a town. This unlocks experimental composability across towns. Composability leverages atomicity, meaning either a transaction executes within a given timeframe, or it does not. In the context of Uqbar, towns can have different ‘clocks’, meaning town clocks do not have to be synchronised.
As a result, it is possible to bridge from a source town to another town, send a proof to the destination town and have it sequenced before the destination town has itself sequenced transactions to Starknet L2 for settlement. Because there is no global clock in Uqbar, synchronous composability across towns is possible, a property that is not available in other, asynchronous execution layers.
Customisable Data Availability
Data availability is a crucial element of Uqbar’s architecture because full nodes need to be able to verify the correctness (integrity) of state updates on the settlement layer (Starknet). Zero-knowledge proofs guarantee computational integrity (i.e. that the proof was generated correctly), but they do not guarantee the integrity of the data used in the proof generation. If a block producer proposes a block without all the data being available, it could reach finality whilst containing invalid transactions. To avoid this, a data availability solution is needed for zero-knowledge execution layers such as Uqbar.
Data availability can range from on-chain (where all transaction data is posted on-chain and verified by all nodes) to off-chain (where data is stored and made available on another layer, and a cryptographic commitment proving the data’s availability in an off-chain location is published). There are many solutions to the off-chain data availability problem (data being withheld while blocks are proposed), ranging from data availability sampling to data availability proofs to data availability committees.
In Uqbar, each ‘town’ (shard) will have the opportunity to customise and choose its own data availability solution depending on its needs. For example, one town might choose to use validiums for data availability, whilst another town might use volitions. The economics of data availability will differ per town, depending on the option a particular town has chosen. Any data availability solution comes with a cost, and towns can weigh the trade-offs before deploying on Uqbar. Uqbar towns will always have Ethereum as a failover (on-chain) data availability option if other (off-chain) solutions are not optimal for their needs.
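A validium-style arrangement can be sketched in a few lines of Python (illustrative only; real systems use polynomial commitments or committees rather than a plain hash map):

```python
import hashlib

def commitment(data: bytes) -> str:
    """Cryptographic commitment posted on-chain while the data stays off-chain."""
    return hashlib.sha256(data).hexdigest()

offchain_store = {}                       # stand-in for an off-chain DA layer
batch = b"town transactions for block 42"
c = commitment(batch)                     # only this commitment goes on-chain
offchain_store[c] = batch                 # the data itself is served off-chain

# Any node can later check served data against the on-chain commitment:
assert commitment(offchain_store[c]) == c
```

The commitment is cheap to post on-chain, while the availability of the underlying data is the responsibility of the off-chain layer, which is exactly the trade-off each town must weigh.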
Figure 2— Data availability customisation options available to Uqbar’s execution layer
Unified developer experience
For Uqbar developers, infrastructure comes built in for free (p2p messaging, identity and version control are taken care of; developers just need to focus on the business logic). This makes it a suitable base layer to build complex applications on (e.g. games). The outcome of only needing to focus on business logic is increased speed of innovation. This is a result of application developers spending less time on platform management (as they do now in web2 and web3) and more time on actual contract logic (as platform, network, identity, etc. are automatically handled for them by Urbit).
Uqbar application developers are programming front-end and back-end in the same language (Hoon). This again amplifies the speed of innovation on Uqbar as any deployed Hoon code could be front-end or back-end, which can be re-used by any other developers depending on their needs. Essentially all developers and all applications speak the same language, front-end and back-end, which is Hoon. This unified developer experience greatly increases collaboration and innovation in Uqbar’s ecosystem.
Privacy
All of the data/code in Uqbar (and Urbit) is stored in a single binary tree of integers. Every computation performed in Uqbar (and Urbit) is a manipulation of a sub-tree of integers. Because Uqbar’s state is a binary tree of integers, it is easily merkelisable and therefore well suited to merkle trees. Merkle trees create the ability to prove that certain data is included without sharing all the data, which makes them very suitable for zero-knowledge proofs. Zero-knowledge proofs involve one party (a prover) proving the truth of a statement to a second party (a verifier), without the prover revealing all of the statement’s contents or revealing how they discovered the truth.
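A merkle inclusion proof, the building block referred to above, can be sketched in Python (illustrative; this simple version assumes a power-of-two number of leaves):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def prove(leaves, index):
    """Return the sibling hashes from a leaf up to the root, plus the root."""
    nodes, proof = [h(l) for l in leaves], []
    while len(nodes) > 1:
        sibling = index ^ 1
        proof.append((nodes[sibling], sibling < index))
        nodes = [h(a + b) for a, b in zip(nodes[::2], nodes[1::2])]
        index //= 2
    return proof, nodes[0]

def verify(leaf, proof, root):
    """Check inclusion without seeing any other leaf's contents."""
    acc = h(leaf)
    for sibling, sibling_is_left in proof:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc == root

leaves = [b"tx0", b"tx1", b"tx2", b"tx3"]
proof, root = prove(leaves, 2)
assert verify(b"tx2", proof, root)  # tx2's inclusion shown without revealing the others
```

The verifier only ever sees the leaf in question and a logarithmic number of sibling hashes, never the rest of the tree.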
Ultimately, zero-knowledge proofs facilitate privacy: users can prove a statement (e.g. that a user wants to transfer x amount of assets from one wallet to another) without giving away the exact metadata to the verifier (e.g. the amount of assets or which asset is being transferred). The verifier is able to verify that the user in this example can indeed transfer their assets to another wallet without knowing the amount or the asset. After verification, the transaction is executed on-chain, all the while a user is assured that their activity on-chain is undiscoverable by outside parties. In this sense, all users are assured that their transaction data is private on Uqbar.
On-chain Governance
As it stands right now, most on-chain governance in Uqbar is being done in a relatively centralised way by Uqbar’s development DAO, which mainly makes decisions on upgrades of Uqbar’s settlement layer smart contract on Starknet. However, it is likely that Uqbar’s governance will decentralise over time. For example, governance might be used to vote on towns proposed to launch on Uqbar in the future.
Decentralised / Permissionless
Any validator can become a prover or a sequencer. In practice, there will likely be a spectrum of some at-home sequencers and some professional node operators, depending on the frequency of proving required by a town. In the future, any town will be able to propose to launch on Uqbar and any validator will be capable of becoming a sequencer or a prover on the network.
After understanding what Urbit is, the problems of traditional operating systems it solves, the similarities with blockchains it has and why it is the ultimate operating system to build a blockchain on, we can begin to envision the types of unique use-cases that Uqbar will spawn. In particular, it is likely Uqbar will propagate a new era in DAO tooling, NFTs beyond the blockchain, web3 (social media) and gaming.
When it comes to DAO tooling as a use-case, Uqbar will enable communities to form in a much more decentralised manner than ever before. For example, it will be possible for DAO members to activate an Urbit desktop that has programs pre-installed that are specific to the DAO (e.g. an application to vote in governance).
Uqbar will unleash a new era for NFT use-cases too, as we begin to see NFTs being integrated wholly and natively into applications that exist outside of Uqbar’s blockchain (e.g. use an NFT to obtain discounts at a supermarket site). NFTs created on Uqbar can be used across Urbit wherever an owner can authenticate their Urbit identity. It is not hard to imagine a variety of use-cases that come about as a consequence of NFTs being able to be used beyond the blockchain. Another example might be that an NFT is required in order to co-program smart contracts that a DAO is developing through an integrated development environment that exists as an application on Urbit. The potential basket of use-cases here is really unlimited.
Any communities that develop on Uqbar will be able to communicate natively with each other through decentralised social media applications built on Urbit. For example, a decentralised venture capital community can coordinate decisions on a communication platform such as Escape and then propose a confirmation of transaction intent on the Uqbar blockchain via Escape. Uqbar will be able to natively interpret intent from community members on applications that exist off of the blockchain.
In turn, Uqbar nodes have the ability to execute upon transaction intent signalled on communication platforms outside of Uqbar. Identity, transactions and applications will all be natively integrated through one operating system. And because all applications on Uqbar are self-hosted and private, its communication channels cannot be surveilled by intermediaries. In the not-so-near future, centralised communication channels such as Discord and Twitter could become obsolete.
Uqbar will unlock creative use-cases that were not possible previously too, due to the sheer execution power that Uqbar has as a result of being built on top of Urbit. In particular, because of the unified and zero-knowledge environment that Uqbar capitalises on, creative use-cases such as gaming and visual art experiences will improve monumentally as Uqbar’s native networking and unified functional programming experience orchestrates unprecedented composability. Today, NFTs often only represent data that is hosted elsewhere (e.g. an NFT has a hyperlink, which directs to a JPEG file hosted on IPFS). Tomorrow, on Uqbar, NFTs will represent data that is hosted natively, on Urbit or Uqbar, fully scalable, online and available.
Uqbar’s Testnet and Ziggurat Developer Suite launched on September 25, 2022, announced by Hocwyn-Tipwex at the Assembly conference in Miami. Developers are currently building their first unified, private and composable applications on Uqbar. If all goes well, Uqbar plans to launch its Mainnet in Q2 2023.
To conclude, Uqbar is a decentralised network that leverages the best properties of Urbit and blockchains to create a private, unified and composable environment for on-chain and off-chain data. Uqbar is recreating the internet by building a blockchain on top of Urbit’s operating system: an identity-driven, peer-to-peer, deterministic system that uses a functional programming language called Hoon. Uqbar itself acts as an execution layer in a modular blockchain stack, which is likely to settle transactions on Ethereum to begin with.
There are many similarities between the Urbit and blockchain stack that have not been assembled together to create one unified experience before. Uqbar is venturing into the unknown and embarking on a mission to solve existing problems of blockchains such as fragmented middleware, expensive versioning and points of centralisation by building a blockchain on top of Urbit, which fundamentally solves most major problems of blockchains whilst enabling new types of use-cases that are not possible in blockchains today.
We cannot begin to imagine the new types of use-cases that will be possible on Uqbar that come about as a result of combining two groundbreaking p2p technologies in Urbit and blockchain. However, it is not hard to anticipate a variety of use-cases that simply do not exist on the internet today. Financial assets that natively integrate with applications outside of the blockchain. Reputations being built with intrinsic value through accessing applications with Urbit ID that can be used within Uqbar. Art that is actually stored on the blockchain. Synchronous, multiplayer actions being taken by members of DAOs.
The amount of use-cases that will be unlocked with Uqbar is unlimited. Uqbar is fundamentally recreating what the internet is and where it came from.
Acknowledgements:
Thanks to Erwin Dassen, Gary Lieberman, Brian Crain, Jennifer Parak for their contributions, thoughtful insights and review of this article.
Xavier Meegan is Research and Ventures Lead at Chorus One.
Medium: https://medium.com/@xave.meegan
Twitter: https://twitter.com/0xave
Chorus One is one of the largest staking providers globally. We provide node infrastructure and work closely with over 30 Proof-of-Stake networks.
Website: https://chorus.one
Twitter: https://twitter.com/chorusone
Telegram: https://t.me/chorusone
Newsletter: https://substack.chorusone.com
YouTube: https://www.youtube.com/c/ChorusOne
Uqbar is a one-stop coding environment that makes writing and deploying smart contracts simple, efficient, and secure.
Website: https://uqbar.network/
Twitter: https://twitter.com/uqbarnetwork
Discord: https://discord.com/invite/G5VVqtjbVG
GitHub: https://github.com/uqbar-dao
Blog: https://mirror.xyz/0xE030ad9751Ca3d90D4E69e221E818b41146c2129
By ~dosnul-sogteg
As a mathematician/cryptographer working at Chorus One in the role of Team Lead, I tend to dive deep into the fundamentals of cryptography, tokenomics and consensus protocols while helping out with due diligence for investments and network onboarding.
Crypto has been a passion for a long time and it graduated to a work/life obsession in the last 3–4 years. More recently, I have also been diving into Urbit although, on that, I still need to familiarize myself with a lot of the terminology and the overall philosophy.
This article, the second in a series of three pieces on Urbit and Uqbar published by Chorus One, is an attempt to report on my findings and what I consider great about these two projects from a technical perspective. Last but not least, it touches on how this all relates to crypto and the fundamental problem of “Crypto = Web3”, explored in the previous article, “Why Web3 needs Urbit”.
There was a certain moment I had when learning about Crypto that one could categorize as an “A-ha!” moment. I really see how this technology can — although not guaranteed — revolutionize how we get together to build things and solve problems.
Similarly, I think a little while back I got an “A-ha!” moment with Urbit. I understood what it was and why it was different. Why perhaps its idiosyncrasies are a feature and not a bug, and how this technology can change how we own private data and compute over it.
But it was just recently that I had a “Eureka!” moment! I know, I’m slow to catch on. It was when I identified the potential synergies between these two realms of blockchain and Urbit. My ideas were embryonic to say the least, and certainly not original. As usual with such things, brighter minds had similar ideas before, and I was very happy that in this case, not only have they arrived at similar conclusions, but the technical execution is much more sophisticated than I had first envisioned.
This article tries to describe each of the ideas involved: What is a blockchain, what is Urbit, and how Uqbar — the first project trying to unify both — is a gem among gems.
Let’s dive in.
Please take a look at this picture. If you have some experience with Crypto, you might recognize it.
What you are seeing is an often idealized picture of a blockchain. Each diamond shape represents the state of the chain at that given moment in time. The squares are what is called a block — where the blockchain name comes from — and it contains a set of transactions.
These blocks update the blockchain state in a predictable and sequential manner, represented by the solid vertical lines. Here a transaction can be any data that you want to persist, for example, one encoding the transfer of value. But blockchain technology promises you that they have a mechanism to permissionlessly replicate this structure and to allow any participant to introduce said transactions.
By “permissionlessly”, I mean that so long as you satisfy open and well-defined requirements, no one, not even some set of participants in the network itself, can censor you, i.e. forbid you from replicating or participating in the network.
One can debate to what extent existing blockchains satisfy this property, but it cannot be argued that in principle this is true. Similarly, no one can censor you from sending transactions to the network even if you do not participate fully in the network, that is, you are not replicating this data structure yourself.
One can (understandably) ask oneself, “What is the use case here? Why would someone do something so inefficient?” To this, in my opinion, the answer lies in trustless decentralization.
Applications like payments are an obvious candidate, as the trustless decentralization afforded by cash systems is running into market-fit difficulties in the age of information: for example, you don’t truly own your bank balance, as anyone who has witnessed a bank run can attest. But due to the ability of many blockchains to support arbitrary computation, there is no limit on the applications that can be run in a trustless and decentralized manner.
Decentralized finance or DeFi is a prominent one but gaming, supply chain tracking and verification, identity distribution, crowd funding and many others are in sight.
A blockchain is able to accomplish all this by merging three paradigms: cryptography for verification, deterministic processing for replication and incentivization for bootstrapping and security.
What you see in the picture above is the idea of determinism: that a new block is deterministically computed from information contained in previous blocks plus new inputs. Since every network participant, when honest, computes the exact same thing, this makes it easy to essentially ”vote” on the current aggregated correct state of the chain, that is, its replicated state.
For the purposes of the present article, this is enough of an understanding of blockchain technology, but I would like to add one particular aspect of the cryptography of blockchains that will be important later on. That is the idea of a Merkle tree. A Merkle tree looks like this:
This is a binary tree! A binary tree is a data structure formed by nodes and directed edges. Each node can contain arbitrary data and incoming and outgoing links to other nodes. In a binary tree each node can have at most two children (incoming edges) and one parent (outgoing edges). A leaf node is a node without children and the root node is the unique node without a parent.
The special thing about it is how it is constructed and its purpose. A Merkle tree serves as a short cryptographic proof of the integrity of an ordered set of data. Imagine that you have an ordered set of data.
For example, a list of transactions. Here is how to build the Merkle tree for this set. You select a hash function and apply it to each transaction. You define these hashes to be the leaf nodes of the tree (the base layer), keeping their order. Then you iteratively build each parent node by concatenating the hashes of two neighbouring nodes and hashing the result. At the end, the root node is called the Merkle root of the tree.
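The construction above can be sketched in a few lines of Python. This is an illustrative sketch, not Uqbar or Urbit code: the choice of SHA-256 as the hash function and the convention of duplicating the last node on odd-sized layers are my own assumptions, as is the function name `merkle_root`.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Hash function used for every node (SHA-256, chosen for illustration)."""
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list[bytes]) -> bytes:
    """Compute the Merkle root of an ordered list of transactions."""
    level = [h(tx) for tx in transactions]    # leaf layer: hash each transaction
    while len(level) > 1:
        if len(level) % 2 == 1:               # one convention: duplicate the last
            level.append(level[-1])           # node when a layer has odd size
        # parent = hash of the two children's hashes, concatenated in order
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Reordering or altering any transaction yields a different root, which is exactly the integrity property described above.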
Its purpose: proof of inclusion. You want to be sure that your transaction has been included in the block that was processed. We can use Merkle trees for this because the only way to arrive at a given Merkle root is via this specific ordered list of transactions! Any change in order or content is detectable since it changes the Merkle root.
If you sent one of these transactions and know its position in the block you hash it and use the Merkle tree to see if it matches all the way to the root. Notice that you only need the hashes of the tree along your path to the root. This is why it is so efficient.
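A proof of inclusion can be sketched the same way. Again this is illustrative Python under assumed conventions (SHA-256, last node duplicated on odd layers; the names `merkle_proof` and `verify` are my own): `merkle_proof` collects the sibling hashes along the path from a leaf to the root, and `verify` recomputes the root from just the transaction and those siblings.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_proof(transactions: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Collect the sibling hash at each level on the path from leaf `index`
    to the root. Each entry is (sibling_hash, sibling_is_on_the_right)."""
    level = [h(tx) for tx in transactions]
    path = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        path.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(tx: bytes, path: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Recompute the root from one transaction and its sibling path."""
    node = h(tx)
    for sibling, on_right in path:
        node = h(node + sibling) if on_right else h(sibling + node)
    return node == root
```

Note that the proof contains only one hash per tree level, i.e. logarithmically many in the number of transactions, which is why verification is so cheap.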
As an extra bonus: error detection. We send transactions over the network together with the Merkle root. The chance that the (short) Merkle root is corrupted in transit is much smaller than the chance that the set of transactions is corrupted. At the destination, we compute the Merkle root from the transactions and compare. If they match, we can be confident that the transactions arrived correctly.
For the interested: the main reason for this is that the hash function is close to what we call a one-way function.
Now take a look at the following picture:
What you see is Urbit, or an idealized representation of it. Urbit is the first completely functional stack starting at the virtualized operating system up to a computer identity and networking layer.
Here, each diamond represents the state of the computer. The whole computer! Again, the whole computer: memory contents, code, networking stack, buffers and, crucially, identity! Also here, each block represents inputs. Any input. Of course, invalid inputs are ignored, but correct inputs cause the system to update its state in a deterministic manner. This is what is meant by functional.
Just as triangles exist only in Plato’s ideal world but interact with the real world via something we call “brains”, so a functional machine needs a way to interact with the real world. That “brain” takes the form of a thin runtime layer that handles this interface and allows us to scry the Urbit state.
In Urbit the state can be interpreted seamlessly as a number. A gigantic integer: 2Gb (giga-bits) long! That is a number with approximately 400 million digits! Under this interpretation, one can think of the squares in the image as other numbers (input numbers) and the horizontal arrows as a very complex equation, built from strange operations, that takes the state number and the input number and computes a new state number.
Well, THAT is Urbit! But that’s not all: the beauty of Urbit is that it figured out a way to design everything in such a way as to keep this representation intact but actually have a very simple state transition equation. Well, it is still too complicated to be of practical use for a human but it is simple enough for a human to understand what is going on.
This is similar to how one can understand general relativity and develop an intuition for it despite most calculations — except for the most simple ones — having to be left to a computer. This is what is meant by “Urbit is simple and a single developer can understand it all”.
I will go briefly over how this is done below, but before that, a reader might rightly wonder: “But isn’t this how every computer works? Aren’t these all 0s and 1s encoded in some way?” I want to address this insightful question. The short answer is no. The long answer goes to the heart of what is meant by deterministic and by being a true Turing machine.
Urbit being a true Turing machine is, as mentioned above, defined by its starting state and state transition function. It is a mathematical function acting on natural numbers. In this comparison, an ordinary computer is messier. An imperfect but useful analogy is that an ordinary computer is instead like a function on a subset of the real number line.
It is not defined for all numbers, only a subset, and even for the numbers for which it is defined, any small imprecision might cause you to fall outside of this set and fail. Anyone who has tried to build performant mathematical libraries that must work with arbitrary approximations of irrational numbers under the IEEE floating-point specification knows the pain (there are more of us than there rightfully should be), and that’s why we tend to stay away from 80s FORTRAN code. It is not that we can’t rewrite it in C; it is that we cannot guarantee the rewrite won’t break anything down the line!
As an interesting side note, recall that floating-point numbers are shunned in the EVM (the Ethereum Virtual Machine). If you have ever added a token to your wallet in Ethereum, you must have noticed the “decimals” field. This exists because we would like to represent and work with fractional amounts of a token, but the EVM does not support fractions. Every ERC20 token contract stores address balances as unsigned integers, and a “decimals” constant specifies where to put the decimal point for correct visual representation (usually 18).
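A small Python sketch of this fixed-point convention (the convention itself is standard ERC20; the helper name `display_amount` is my own):

```python
from decimal import Decimal

def display_amount(raw_balance: int, decimals: int = 18) -> Decimal:
    """Convert an ERC20 contract's integer balance into its human-readable
    value by shifting the decimal point `decimals` places to the left."""
    return Decimal(raw_balance) / (Decimal(10) ** decimals)
```

So a wallet showing “1.5” of a standard 18-decimals token is rendering the on-chain integer 1500000000000000000.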
So, returning to the Urbit “axiomatic description”, the first step is to define your state number as a specific encoding of a binary tree. So now your state is represented in a data structure as follows (see the previous section on blockchains for a more detailed description of a binary tree):
Each node contains what Urbit calls an atom, which is just a natural number. Here an important simplification already occurs: normally, natural numbers and computer unsigned integers are beasts of different worlds, the Platonic world of mathematics and the real world of computer science. In Urbit we are in the Platonic world! It is difficult to overstate the importance of this quality to developers.
Next, the main step: we build a computer. Perhaps you have heard about Conway’s Game of Life. It is a simple example of how to do this: define some rules to transform some data structure (the state) and show that the result is a universal Turing machine. I don’t want to digress here, so I will just say that this is exactly what a computer is: a universal Turing machine. The things that can be computed are exactly the things that a universal Turing machine can do.
So in our situation, we do the same: define a set of operations transforming this binary tree so that in conjunction they form a universal Turing machine. We can now encode any algorithm as a sequence of these operations on this binary tree.
This set of operations, together with the initial state of the binary tree, defines an input format where a number can be interpreted as a pair `(program, arguments)`, where `program` is the set of operations to perform and `arguments` is any extra input for the program. The machine can then follow the set of instructions and compute a new end state of the binary tree (or never halt).
Urbit found a way to make this work with just a set of 12 operations on this binary tree. It can famously fit on a t-shirt. This encoding standard and set of operations are called Nock and it can be seen as the assembly of Urbit. Here it is — taken from the Nock definition page — in all its glory but still needing some explanation we won’t go into.
This pseudo-code defines the operators `?`, `+`, `=`, `/`, `#` and `*`. Please go check the source linked above, but here is a short description: `?` tests whether a noun is a cell or an atom, `+` increments an atom, `=` tests two nouns for equality, `/` addresses a subtree of a noun, `#` replaces a subtree at a given address, and `*` evaluates a formula against a subject.
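To make a few of these reduction rules concrete, here is a toy Python sketch of a fragment of a Nock evaluator. It covers only opcodes 0, 1, 3, 4 and 5 (subtree fetch, constant, cell test, increment, equality); the real definition also includes composition, distribution and editing, and the tuple encoding here is my own illustrative stand-in for Nock’s cells.

```python
def slot(axis, noun):
    """Nock's `/` operator: tree addressing. Axis 1 is the whole noun;
    the head of the subtree at axis n is 2n and its tail is 2n+1."""
    if axis == 1:
        return noun
    sub = slot(axis >> 1, noun)   # walk to the parent subtree first
    return sub[axis & 1]          # low bit selects head (0) or tail (1)

def nock(subject, formula):
    """Evaluate a small subset of Nock's reduction rules."""
    op, arg = formula
    if op == 0:                   # *[a 0 b] -> /[b a]   (fetch subtree)
        return slot(arg, subject)
    if op == 1:                   # *[a 1 b] -> b        (constant)
        return arg
    if op == 3:                   # *[a 3 b] -> ?*[a b]  (cell test)
        return 0 if isinstance(nock(subject, arg), tuple) else 1
    if op == 4:                   # *[a 4 b] -> +*[a b]  (increment)
        return nock(subject, arg) + 1
    if op == 5:                   # *[a 5 b c] -> =[*[a b] *[a c]]
        b, c = arg
        return 0 if nock(subject, b) == nock(subject, c) else 1
    raise ValueError(f"opcode {op} not implemented in this sketch")
```

Following Nock convention, 0 means “yes” and 1 means “no” in the cell and equality tests.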
Not only did Urbit come up with an economical definition of a functional computer (a universal Turing machine in this case) but it made it practical and modern. Urbit is implemented in C as a VM. The performance is already good enough to be useful and there is an engaging community of developers making real-world applications for it. One of them is Uqbar.
There are two very important aspects of Urbit that I have not yet touched upon; the goodies don’t end here. First, Urbit is a computer with an identity. More precisely, the initial state of an Urbit instance already encodes a boot procedure that requires a specific kind of cryptographic secret key. Those keys are finite in number and can only be obtained, or correctly derived, if you own a particular type of NFT on Ethereum. The choice of Ethereum here was somewhat arbitrary: the team chose to use a blockchain for this and picked Ethereum because of its maturity as a platform. Most importantly, this means that each Urbit computer is uniquely identified.
Second, Urbit uses this property of identity to pre-define a complete network-topology layer for all Urbit computers, in such a way that routing packets between sender and receiver is simple and private (via encryption). This consolidates IP, routing tables, DNS, and TLS/SSL into one single transparent stack, accessible by default to any Urbit computer. You just need the receiver’s name (or identity) to talk to it securely and privately. These names are called “@p” and take the shape of a tilde (~) followed by a few pronounceable syllables. For example, I’m `~dosnul-sogteg`.
Finally, this network is functional and typed in the computer science sense. Every packet in the network is typed and will yield the same result (return packet) every time it is applied to its destination (if it is online). This forces the input packet to contain all necessary input to be properly decoded and inserted into the destination's transition function.
The power of computers with identity cannot be overstated. It turns computers into avatars of their users, uniformly across all applications. Since the identities are scarce (finite), this helps build a social reputation system in the network and disincentivizes bad behaviors like spamming.
On the other hand, Urbit wants to be a new private server for every user, so that we don’t need to use and keep our data in centralized services. Going back to the pre-internet era is not an option either. We are social animals, and computers are most valuable to us when we use them to network and collaborate. But there is one thing missing: we can message but we can’t transact. We have an identity, but that’s not enough for trust.
Urbit is missing a shared trustless and permissionless piece of state. A blockchain. Enter Uqbar.
Uqbar is exactly what was alluded to above: a part of the Urbit state that is shared in a permissionless and trustless way. It is a blockchain whose client is an Urbit application. So if you think about it, much like the movie Inception, the state transition of the blockchain is implemented as part of the state transition of the Urbit computer. Running a blockchain means executing its state transition, and we do it by executing Urbit’s state transition.
This is not so shocking when stated as implementing a universal Turing machine — for example, the EVM — in another universal Turing machine. The first and third figures of this article, as you may have noticed, are the same figure after all! But this can be viewed better as enlarging the blockchain context to have at its disposal a full operating system stack. We will go through what this implies in the next section. In this section, I want to delve a little deeper into some aspects of Uqbar to clarify what I meant in the beginning of the article when I said that “the technical execution is much more sophisticated than I had first envisioned”.
If Uqbar were “just” a blockchain client implemented in Urbit, it would already be pretty exciting. But Uqbar has a few other things going for it. Most notably, it comes with a compiler that can take any arbitrary Nock code and generate a zero-knowledge prover and verifier for it. Thus not only on-chain code paths can be proven, but off-chain ones as well. This is absolutely huge, and it comes back to the combination of Urbit’s functional nature and the simplicity of Nock.
Recall that Nock is Urbit’s “assembly”, translating any algorithm into a lengthy sequence built from the 12 basic Nock operations. Seizing the opportunity, Uqbar built a zero-knowledge prover/verifier circuit for each of these operations. This means that block proposers — known as sequencers in Uqbar’s parlance — can post proofs of correct execution when emitting blocks, and verifiers can quickly verify correctness. I won’t go into more details of zero-knowledge magic; there are plenty of good introductory articles about it, like this one. Note that, to be precise, we are not even exploiting the zero-knowledge aspect here, only the computational-compression aspect.
This allows for very powerful light clients. Since Urbit computers have identities, Sybil resistance is trivial: Uqbar’s block-verification mechanism can be configured to demand that verifiers be stars[¹], and stars are by nature intended to be service providers.
They maintain the state and archive the blockchain while providing state bootstrapping to the light clients. The light clients can be run on any end-user ship. This is how blockchains are supposed to be accessed: on your local, always-on server. Because that server is always on and maintenance-free, light clients can be bootstrapped once and (almost) never again, without the need for a data-availability layer external to the network, much like the Mina network.
But it goes further still. As mentioned before, the proofs can be proofs about statements outside the chain, i.e., any Urbit computation. This has some caveats: for example, the off-chain state — at least in some obfuscated form — still has to be made available to verifiers somehow. Nonetheless, it is to be expected that this will have incredible applications in the future. More on this in the next section.
An Urbit application, being a subtree of the state tree, can easily and transparently access read-only information in the Uqbar shared state. For example:
This last point was alluded to in passing in the last section but is worth elaborating on. It is not easy to accomplish this currently, and dApps tend to go to decentralized “indexers” at best or “trust-bottlenecked” services like Infura or OpenSea at worst. A very good read describing this problem, and probably the fairest criticism of “Web3” around, is in my opinion My first impressions of web3 by @moxie.
💡 An Urbit application is free to run and process any “indexing” required to give the user the best UX while keeping this data private.
ZK technology is a game changer, and what is/was holding it back is the computational burden of generating proofs. Despite Ethereum recently moving to PoS, and in doing so abandoning wasteful PoW mining, there are multiple projects considering mining-style hardware — GPUs, ASICs and FPGAs — for proof generation. Talk about coming full circle. Performing this computation exclusively on block proposers is useful but limiting.
One can imagine that in a completely functional OS and network stack, where everyone has access to the shared state while being able to keep their own personal relevant state and emit proofs of statements about it, the lines between on and off-chain will be so blurry as to be meaningless. Well-designed applications will be truly decentralized (as in uncensorable), private (as in they will leak only the necessary information), and scalable (as most computation can/will be done locally).
As a concrete example, one can implement a country’s complete tax system in such a way that the government does not know what particular transactions were made, but they can be sure that they are correctly taxed and paid. This is, of course, very very far off, but it is in the realm of possibility.
It is our belief that the synergies of these two truly remarkable, not too technically dissimilar technologies can truly unlock the capabilities of Web3. In this paradigm, businesses and organizations (DAOs) can spring up from just a few collaborators and be truly uncensorable. Value can be derived from direct, measurable KPIs, and not assigned by external parties (most of which are not even democratic in nature, much less egalitarian).
As a final, more immediate set of examples: we can have an automatically monetized, decentralized, uncensorable github, twitter, office, zoom (in lowercase!). Games, forums, email: anything in Web2 has a better version here.
[¹]: Planets, stars and galaxies are what Urbit calls its computers, depending on their position in the a priori defined network topology. See the previous section.
Urbit has gained some renown among crypto enthusiasts in recent years as an ambitious and compelling use case of NFTs to power a novel computing system and network. The technical stack that Urbit has developed is impressive and far-reaching, but some criticize its perceived opacity and lack of a precise use-case. If your first impression of Urbit came from a deep-dive into the intricacies of the OS, network, and identity system, you might be left wondering what Urbit’s specific use case even is. Is there a problem Urbit is trying to solve, or is it all just a severe case of NIH syndrome?
The reality is that there is a problem that Urbit solves, and it’s a complex enough problem that it won’t be obvious to most people, but it’s a deep and pernicious enough problem that it affects everyone using the internet. A rudimentary understanding of Urbit’s problem space can be gained from this tweet from Philip Monk, CTO of Tlon, the primary company driving Urbit development. Urbit is a solution to deep technical limitations of the internet that prevent it from being used the way it should: as a permissionless peer-to-peer network that gives freedom and responsibility to its users.
If this explanation feels under-explored, read on for a deep dive into the core value proposition from Urbit to users and developers alike. But before we begin, we should clarify a basic philosophical understanding of Web3.
“Decentralization” is a commonly used buzzword in Web3 and elsewhere, with much said about new companies whose product is to decentralize some aspect of digital experience. Because of the enormous financial success of Bitcoin and other DeFi technologies, a case can be made that merely decentralizing a product is a sufficient advantage that consumers will flock to it. But this is a poor understanding of what consumers value in crypto, and thereby a flawed approach to Web3’s path to victory.
Bitcoin was, of course, not the first decentralized digital currency ever invented. E-Cash and Bit Gold were predecessors to Bitcoin in this domain, and they each used cryptography-powered precursors to blockchains to make digital payments permissionless. What made Bitcoin more successful than its predecessors is not solely that it was more decentralized (although in some cases it was), but that it was much more secure. The combination of decentralization and security gave Bitcoin holders ownership that they could rely on, and that went on to make it a successful product.
Decentralization is best understood as a special case of ownership, where trusted third parties in central control of a product reduce the user’s intuition that they own the product they use. Merely decentralizing a component of a product does not necessarily compel an end-user to use it, but to some degree, every end-user wants to own their tools if they can.
That’s all to say that Web3’s critics are correct that decentralization itself is not a product. However, decentralization can be a critical component of ownership, and ownership is a critical component of what makes Urbit a compelling product to end-users. Urbit is decentralized, but not for decentralization’s sake. Urbit is “yours forever” and that requires it to have many attributes, including permanence, security, and of course, decentralization.
The story and namesake of “Web3” is perhaps best summarized by this article on Ethereum’s website, which goes through the stages of the internet’s development and shows how a new, blockchain-powered paradigm can shift the balance of power and take ownership from giant tech corporations and give it back to users.
As is well understood by visionaries of a decentralized web, the internet of the early 1990’s was idealized as a permissionless space in which everyone had a voice and could make their own mark on the world by learning and using a set of open protocols that did not discriminate on who could operate them. The early internet was a pluralistic “Wild West” of custom-built websites and services, and while the distribution of activity was anything but equal, there was little resembling a monopoly on most use cases. Idealists saw this web as the beginning of a new flowering of culture and technology, where mass media would become obsolete in comparison to an open field where undiscovered talent could win hearts and minds by their own bootstraps.
As the internet’s ecosystem developed, the idealists only partially got their wish. The internet did become a phenomenal landscape for small contributors to make a big impact, but only under the patronage of monolithic platforms. Somewhere along the way, the expectation that users would actually own their means of communication was subverted. As it turned out, running infrastructure and operating servers is boring and hard. End-users needed powerful platforms to abstract away the complexity of the tech stack, and were willing to give up ownership in exchange for an approachable user interface.
Detractors and sympathizers alike refer to the early, pluralistic internet as “Web1” and the modern, centralized internet as “Web2”. In accordance with this scheme, the hypothesized successor paradigm of the internet is called “Web3”.
Proponents of Web3 see in blockchain technology an opportunity for a new phase of development that corrects this flaw by taking the responsibilities of Web2 infrastructure and offloading them to consensus networks that are owned by everyone and no-one. Rather than private infrastructure managed by giant corporations, web services can use public infrastructure managed by the community, and the power structure of the internet can thereby resemble the same fair and open field that the Web1 idealists envisioned, while offering an equal or better user experience to Web2.
Blockchains are a promising technology for secure digital ownership because they provide one immeasurably valuable feature to their users: trustless consensus on data. By nature, applications must rely on a single source of truth for a dataset in order to be sensible to the developer and the user. In order to obviate the need for a trusted third party to secure and manage this data, consensus must be reached across a network on what is true. This problem is best summarized by the famous Byzantine generals problem, to which blockchains offer a reasonable solution.
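The intuition behind replacing a trusted third party with network agreement can be sketched with a toy example. This is a sketch of simple majority voting under the assumption of a minority of faulty replicas; it is not a real Byzantine fault-tolerant protocol, and the function and data here are purely illustrative:

```python
# Toy illustration: a client accepts a value only when a strict majority
# of replicas agree on it, so no single replica has to be trusted.
# Real BFT protocols (e.g. PBFT) are far more involved; this sketch only
# shows why redundancy can stand in for a trusted third party.
from collections import Counter

def majority_value(replica_answers):
    """Return the value reported by a strict majority of replicas,
    or None if no value reaches a majority."""
    counts = Counter(replica_answers)
    value, count = counts.most_common(1)[0]
    if count > len(replica_answers) // 2:
        return value
    return None

# Two honest replicas outvote one faulty replica:
print(majority_value(["balance=10", "balance=10", "balance=99"]))  # prints balance=10
```

Real protocols must also handle replicas that report different answers to different clients, message loss, and ordering, which is exactly the complexity Byzantine fault tolerance addresses.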
Blockchains also offer another potential way to revolutionize software by giving developers the ability to create new, scarce assets ex nihilo. By allowing investors to speculate on these spawned assets, free and open source software finds a new financial model, in which code can be given away to the community without its developers being left with nothing to show for their contributions. Given the scope of work required to make systems that are sensible to everyday users, this advantage is truly invaluable.
However, the aforementioned Byzantine fault tolerance comes at a cost in blockchains. Consensus over a network offers users a better assurance of ownership, but duplicates work that, in the centralized case, only needs to be performed once. The inevitable tradeoff between ownership and efficiency in blockchain networks is best summarized by Vitalik Buterin’s scalability trilemma, which shows that the two attributes that make blockchains most valuable, security and decentralization, are fundamentally at odds with a third attribute that powerful systems seek to maximize: efficiency.
Solutions exist which extend blockchain capabilities in all three domains, so the trilemma is not completely binding. But to the degree that the trilemma is unsolved, scalability constraints manifest themselves in gas fees, which make it costly to write transactions to any chain that is uncompromising on secure decentralization. Costly writes are an anti-feature that makes it difficult to excite end-users, and so this limitation threatens the ability of blockchains to obviate monopolies powered by Web2 infrastructure.
Privacy and latency are also notable challenges in a blockchain environment. Infrastructure that by default gives read access to everyone and only adds new data at set intervals is limiting for many applications that are expected to be responsive and permissioned. Like the scalability problem, these problems have prospective solutions, but they still represent technical hurdles for developers to grapple with, hurdles that Web2 solutions can simply centralize away. There are several other hurdles of this type that would deserve exploration in a deeper dive.
These limitations to blockchain-based infrastructure have, to some degree, already been explored in other places, and may one day each find satisfactory solutions. But one under-explored limitation is the repeated reliance of Web3 applications on trust in order to access blockchain data. This isn’t even necessarily a hard limitation in blockchains as a tool, but can be observed as a pattern in the industry.
Uniswap, for example, is served from a specific domain name, and consumers implicitly trust that domain name with their tokens. MetaMask is a ubiquitous non-custodial Ethereum wallet, but uses hard-coded proprietary endpoints to access on-chain data. OpenSea, despite its name, does not even claim to be permissionless — it’s explicitly a custodial service with administrators to intervene if something goes wrong.
These hallmarks of the Web3 ecosystem all arise from a cultural environment that eschews centralization and prioritizes ownership, and yet they involve compromises similar to those made by Web2 companies that promise to democratize people’s ability to express themselves. Rather than creating a system that is thoroughly trustless, some trust is inserted into the equation in order to iron out the difficulties of operating permissionless systems, whether blockchain networks or other peer-to-peer protocols.
In Web3 as in Web2, complexity is hidden from the user by an interface that achieves human-comprehensibility by offloading user choice to the provider. There are many exceptions, just as in the Web2 era there were alternatives to centralized services that could be used but were not mainstream. But there is a reason why the choices that compromise user ownership tend to win in this environment, and it is clearly not a lack of access to blockchains as a tool.
Despite countless efforts to make user-owned applications and networks reliant solely on peers, the role of nodes in any solution cannot cheaply be discounted. Solutions that give primacy to peers still run nodes to pick up the slack caused by the intermittency of peers. In the Web3 world, offloading all node work to blockchains manifests itself in the cost of writes and the need to obfuscate the gas expense. The need for servers did not go away with blockchains; it only manifested itself in new ways. Knowing this, the question of a user-owned internet returns to its old form: how can we create a world where each user runs a node?
The underlying need for user-owned servers is not breaking news to those familiar with the history of the internet. In the idealistic days of the early web, user-owned servers were simply a given — as applications became easier to use, always-connected services would follow suit, and the internet of the future would be a patchwork of independent personal servers hosting whichever services were important to the user. In this way, the developments of both Web2 and Web3 technologies can be seen as an adjustment made in response to the failure of personal servers to thrive in the consumer market.
We have made a case for why blockchains alone cannot put a node in every user’s hands. But do blockchains, and other advancements in computing, have anything to say about the failure of personal servers to thrive?
One interesting case to consider in the landscape of user-owned servers is the omnipresence of personal routers. A router has much in common with a server from a consumer’s point of view: it is an unobtrusive black box that sits somewhere out of the way; it must always be powered on and connected to a network, and you will find out quickly if it has been unplugged, making it a top priority to get working again after a malfunction; and in order to do your business, you have to connect to it. What exactly it is doing is not always clear to the end-user, but that it is important is well-understood.
Find a personal server that meets all three of these conditions, and we can begin to imagine a new computing paradigm. In practice, Unix servers typically fail on all three, and where they succeed in one domain, they typically compromise on at least one of the others.
A general study of successful consumer products is also helpful in understanding how and why personal servers failed in the market. This article by Lane Rettig makes a concise case for the viability of tools in the marketplace:
“What the tools we rely on the most heavily have in common is that they’re all simple, durable, and ours.” ~ Lane Rettig
While personal routers do not always satisfy the property of ownership, one can see how their value proposition fits neatly into this model. Unix servers, on the other hand, have only ownership to offer. While they are arguably simple from a highly technical point of view, none of this simplicity is legible to the non-technical user. And while their durability is not in question among the professionals who rely on them, non-professionals are almost universally unable to replicate that experience.
But why is Unix in particular in question? The answer is that there is not much else to offer consumers in the way of personal servers. Other solutions exist or have existed, but mostly in the business domain, and mostly targeted at professionals. Servers targeted at tinkerers and privacy advocates have seen some success, but even in that market, Unix is almost always the backbone of the software stack. This may shed significant light on the failure of personal servers in the marketplace: no fully capable operating system has been constructed with the personal-server use case in mind, except for various implementations of Unix. Unix, of course, was never designed for everyday consumers.
Urbit is a novel software stack, with its own OS, network, and identity system, built ex nihilo from elementary primitives. The OS, as the centerpiece of the system, aims to fulfill the use case of a personal server that is simple, durable, and yours. Urbit uses many theoretical advancements in software engineering to achieve this outcome, most notably determinism, referential transparency, and cryptography.
Much remains to be said about the innovations made to create a general purpose server that feels more like a mechanical clock than a fighter jet cockpit, and a deep dive into Urbit’s architecture is recommended to engineers who want to understand the system at more than a superficial level. But for our purposes, it’s also worth taking a glance at our earlier example of the personal router to examine how Urbit compares.
Urbit is as valuable as the personal router. The end-user’s access to the internet is mediated by their router, and the internet is an invaluable ecosystem of force-multiplying services. The Urbit network can fulfill the same potential. By adding powerful primitives and a unified back-end to the protocol by which individual Urbit nodes communicate, Urbit’s network promises to lay the foundation for networked applications that can compete with, and even exceed, the services provided on the modern internet.
Urbit is as low maintenance as the personal router. It is designed to never reach an unrecoverable state, and even reboots should never be necessary. The commitment to minimalism and determinism at every turn has paid dividends for Urbit’s developers, and while it cannot be called “zero maintenance” yet, the path to that milestone today yields more known unknowns than unknown unknowns.
Urbit is as opaque as the personal router. The underlying architecture never shows itself to the end-user. To the degree that it has an interface, this interface is a friendly webpage that mirrors the homepage of a mobile OS. Developers can fork its code or play with the internals however they please, but should never need to look at the terminal to use it or its applications. Just as with the router, a connection needs to be established so that services can be made available, and this intuition is all the end-user needs to proficiently use their Urbit.
While serving primarily as a gateway into Urbit’s network, an Urbit server can do much more than merely route packets. As a general-purpose computer on a peer-to-peer network, Urbit can act as a much-needed backbone for user-owned applications that demand nothing more than code from developers. The guarantees of Urbit’s networking primitives, combined with the assumption that all peers run nodes, make it possible to deliver cutting-edge social applications consisting of only two elements: a protocol and an interface. This leads to limitless possibilities for developers, who previously needed to duplicate massive amounts of work and run their own servers in order to deliver software that satisfies users.
Urbit also benefits both users and developers by consolidating data to where it belongs: in a unified environment that the user owns. Developers need not assume the liability of user data residing on their own infrastructure, and users need not trust developers with their private information. And when creating integrations between services, there is no chasm of APIs and terms of service to bridge: all of the user’s data is in the same place, speaking the same language. The only chasm between two services is the user’s permission to share data between them.
Prior examples show that this level of added value is necessary to put ownership in the hands of users: a sensible, lightweight product that demands no compromises in UX while giving full control to the owner.
While it is difficult to overstate the centrality of personal servers to the problems Web3 aims to remediate, there remains a need for applications that interface between end-users and blockchains. Even more than this, a growing industry is responsible for developing middleware in Web3, both between different blockchains and between a given blockchain and the real world. Urbit offers solutions in both of these domains through the bleeding-edge field of blockchain development on Urbit.
Some aspects of Urbit’s natural affinity for blockchains are already well-understood. Azimuth, for example, which serves as Urbit’s identity system and PKI, is implemented as a Solidity contract on Ethereum. Furthermore, the basic problem of association between names and public keys can be considered solved on Urbit, as name-key associations are already an assumed part of the system, and already integrated into Urbit’s Bitcoin application. On Urbit today, you can natively send and receive BTC with other Urbit users, with no need to keep records of their addresses.
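The convenience of built-in name-key association can be sketched as a simple lookup. Everything here is hypothetical: the names use Urbit’s placeholder format, and the registry, keys, and addresses are invented for illustration; this is not Azimuth’s actual data model or API.

```python
# Hypothetical sketch of a name-to-key registry in the spirit of a PKI
# such as Azimuth: users address each other by name, and the system
# resolves the current public key or payment address behind that name.
# All identifiers below are invented placeholders.

registry = {
    "~sampel-palnet": {"pubkey": "pubkey-one", "btc_address": "bc1q-example-address"},
    "~dozzod-dozzod": {"pubkey": "pubkey-two", "btc_address": "bc1q-another-address"},
}

def resolve_btc_address(name: str) -> str:
    """Look up the BTC address associated with an ID, so a sender
    never has to keep records of raw addresses."""
    return registry[name]["btc_address"]

print(resolve_btc_address("~sampel-palnet"))  # prints bc1q-example-address
```

The point is that once the identity layer guarantees the mapping, an application like a wallet can treat names as stable endpoints and leave key rotation to the registry.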
Other faults in Web3 are addressed by the mere lack of any need to compromise on user-owned architecture. dApps on Urbit, for example, are truly dApps — they are sent to the user’s server upon installation, and run locally. API layers and trust bottlenecks between Web3 applications and blockchains are not needed if blockchains are built on Urbit, as the network provides a sensible common language for all applications, even if they are hosted on different servers. And above all, the most important factor in keeping blockchains decentralized is user-run validators, which can be considered no different from any other application on a robust and user-friendly personal server.
Beyond this, Urbit promises to add still more value to Web3 in the domain of global integration. The need for middleware to connect components on and off the chain is said by some in the Urbit community to be a symptom of a deeper problem: the lack of a sensible, unified execution environment shared between applications. In summary, crypto needs an OS, and Urbit can be that OS.
The accelerating power of crypto on an OS that speaks its native tongue is much discussed and speculated on in the Urbit community. Uqbar, the first blockchain native to Urbit’s network, aims to obviate any need for middleware by using Urbit as a general-purpose orchestrator to synchronize data between disparate components, whether blockchains or ordinary local state. Their solution uses zero-knowledge proofs, sharding, and other bleeding-edge technologies to create a crypto ecosystem on Urbit that can not only compete with the best L1s elsewhere, but also add features that prove indisputably that Urbit is the true home of Web3.
Uqbar is hard at work developing their tooling and plans to release a public testnet in the very near future. Will it revolutionize the industry the way its developers claim? In that domain, only theories and speculation can provide an answer. But their argument is worth a glance for anyone interested in emerging technologies in crypto.
Much is promised here and elsewhere about the potential for Urbit to take the world by storm and bring about a new era of user-owned computing. Nevertheless, if you try Urbit today, you will see a friendly, somewhat minimal interface for text chat and an ecosystem of experimental applications. You may find Urbit’s promises wanting in the domains of zero-maintenance servers, competitive UX, and perhaps even avoidance of sysadminship. Regrettably, it is not yet even possible to run a Bitcoin node on Urbit in the one-click way that it should be.
Urbit is exciting to early adopters not because of what you can do with it right now, but because of what it can enable after the necessary steps are taken. And in contrast to the Urbit of even two or three years ago, the necessary steps are well-understood and waiting in queue. The revolution in computing is no longer “how?” but “when?” for the Urbit community.
Today, Urbit is a simple and clean tool for chatting with friends, playing games, and experimenting with new ideas. More than anything else, the Urbit of today is a tool for doing what its users care about most: building Urbit. If you’d like to get involved, the community would love to have you. If you’d rather observe from the outside, keep a keen eye out. Big things are coming in the near future for Urbit and Web3, and you don’t want to miss out.
Urbit’s value proposition is long-winded enough that it won’t fit into a tweet or a TV commercial, but it’s promising enough to excite developers who share our vision of the future and want to play a part in building it. When it matures as a product, rethinking Unix and the internet won’t be included in the pitch. Urbit will be a service you can buy, either as a subscription or a physical product, that enables you to use apps that are just plain better than the ones you used to use.
Much remains unclear about what happens between now and then, but crypto and Web3 enthusiasts will have many reasons to get involved before ads to buy an Urbit planet start appearing on television. Urbit offers a comfortable home to idealists who believe in cryptographic ownership and share a concern about the future of humans and technology, and the next generation of early adopters is sure to include a wide cohort from that audience.
Now that you understand Urbit’s core value proposition, stay tuned for an exposition into the details of Urbit’s capabilities as a platform, its integrations with crypto, and a deeper dive into the promise of Uqbar to reshape the landscape of blockchain development.