Expert Insight

Speed will always be king in a world of dApps

Adapting to the physical realities of the internet is essential for dApp scalability

Satoshi Nakamoto jump-started an incredible era of innovation for decentralised internet applications with the introduction of Bitcoin as a digital, peer-to-peer payment system.

Blockchain, the technology underlying Bitcoin and other cryptocurrencies, facilitates the formation of decentralised, trustless networks capable of handling transactions and data securely.

Recent decades have seen large-scale irresponsibility by traditional financial corporations, from banks – which needed to be bailed out in 2008 – to credit reporting agencies like Equifax, which suffered one of the largest consumer data breaches in history. It’s no wonder that trust in centralised financial institutions has eroded so rapidly.

As users demand more sovereignty, security, and control over their financial lives, it’s only a matter of time before blockchain-based dApps begin to supplant more traditional applications.

This transition to a more decentralised internet is not inevitable, however. Before device owners can choose dApps over apps, these decentralised applications need to perform as well as their competition. This boils down to one key factor: speed. And on the blockchain, speed is limited by how fast we can achieve consensus on a trustless network. dApps need their underlying blockchain platforms to form consensus rapidly, securely, and at a fraction of today’s cost.

Consensus isn’t just for blockchains

Consensus is the process by which entities come to an agreement across time and space. All internet-based applications must reach consensus to function.

In fact, whenever you need to combine multiple lines of computation, you need a consensus mechanism, whether your application runs on the internet or on a single multi-core PC.

In some contexts, the term “synchronisation” is used instead of “consensus”, but the meanings are essentially the same.
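To make the parallel concrete, here is a minimal illustrative sketch (not from the article; all names are invented for this example) in which two Python threads must agree on the final value of a shared counter. The lock plays the same role at the software level that memory fences play at the hardware level: it forces the threads to synchronise before combining their work.

```python
import threading

# Shared state both threads must agree on.
counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    """Increment the shared counter, synchronising on the lock each time."""
    global counter
    for _ in range(increments):
        with lock:  # only one thread may update the counter at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- both threads agree on the final state
```

Without the lock, the two threads could interleave their read-modify-write steps and lose updates; the final count would be unpredictable. The lock is what lets them reach consensus.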

Modern multi-core processors, whether made by Intel or anyone else, use special instructions to ensure that processor cores form consensus on the contents of memory. These instructions are called memory barriers or memory fences.

The same need arises at much larger scale: Google’s MapReduce framework famously runs on millions of cores across tens of thousands of machines every single day.

MapReduce solves a wide variety of practical problems for Google. But, MapReduce relies on a synchronisation system called Chubby to obtain consensus on which parts of the computation exist and how they should be recombined.
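A toy word-count in the MapReduce style shows the two phases the article refers to. This is an illustrative single-machine sketch, not Google’s implementation: a real deployment distributes the map outputs across thousands of machines and relies on a coordination service (Chubby, in Google’s case) to agree on which partial results exist and how to recombine them.

```python
from collections import defaultdict

def map_phase(documents):
    """'Map': emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    """'Reduce': combine all pairs that share the same key."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

docs = ["to be or not to be", "be here now"]
counts = reduce_phase(map_phase(docs))
print(counts["be"])  # 3
```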

Though some of this language may be new to those less familiar with internet infrastructure, the applications are not. When you use Google Docs, the various computers viewing the same document are constantly coming to consensus. Making a purchase on a vendor platform, posting to social media, playing an online game—all these actions require achieving consensus between different devices and entities.

The difference between apps and dApps is that apps can reach an agreement by referring back to one centralised authority. In the Google Docs example, the central authority is Google.

If you purchase an item on Amazon, the authority is Amazon. If you play Overwatch, the authority is Blizzard’s game servers. You get the idea. When there’s a single source of truth, agreement can be achieved very, very quickly. But everyone must rely on that source of truth to be honest.

And in today’s day and age, extending that level of trust to central authorities is less attractive than ever.

dApps must be more creative in reaching consensus

There is no central authority in a decentralised network, so dApps have to find agreement in more creative ways. The big question all decentralised systems must answer is: “Who should be in charge of validating a given transaction?”

Proof-of-work and proof-of-stake are two common mechanisms blockchains use to determine who (miners in proof-of-work and validators in proof-of-stake) is responsible for creating a block of transactions and broadcasting that to the rest of the network.

Proof-of-work protocols ask miners to compete to solve a computationally difficult puzzle – in Bitcoin’s case, finding a block whose cryptographic hash falls below a network-set target. Solving such a puzzle requires an extraordinary amount of computing infrastructure and huge amounts of electricity. To incentivise people to perform this essential but expensive function, the winning miner receives cryptocurrency as a reward.
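The puzzle can be sketched in a few lines. This is a simplified model for illustration (real Bitcoin hashes a structured block header twice against a numeric target, and difficulty is vastly higher): the miner brute-forces a nonce until the hash of the block data begins with enough zero digits.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty`
    zero hex digits. Each extra digit multiplies the expected work by 16,
    which is why real mining consumes so much electricity."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

block = "block#1: Alice pays Bob 5"
nonce = mine(block, difficulty=4)
digest = hashlib.sha256(f"{block}:{nonce}".encode()).hexdigest()
```

Verifying the solution is cheap – one hash – while finding it is expensive. That asymmetry is what makes the winning block hard to forge but easy for the rest of the network to check.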

Proof-of-stake protocols are more varied in their behaviour than proof-of-work. Some select validators in a deterministic fashion, often weighted by the number of tokens each holds.
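Deterministic, stake-weighted selection can be sketched as follows. This is an illustrative toy (function and variable names are invented; real protocols add randomness beacons, epochs, and slashing conditions): every node that runs the same function with the same stake table and seed picks the same validator, and a validator’s chance of selection is proportional to its holdings.

```python
import hashlib

def select_validator(stakes: dict, seed: str) -> str:
    """Pick a validator deterministically, weighted by stake."""
    total = sum(stakes.values())
    # Map the shared seed (e.g. derived from the previous block) to a
    # pseudo-random point in [0, total)...
    point = int(hashlib.sha256(seed.encode()).hexdigest(), 16) % total
    # ...then walk the stake table (sorted, so every node walks it in
    # the same order) until the point falls inside someone's stake.
    for validator, stake in sorted(stakes.items()):
        if point < stake:
            return validator
        point -= stake
    raise AssertionError("unreachable: point < total by construction")

stakes = {"alice": 60, "bob": 30, "carol": 10}
chosen = select_validator(stakes, seed="block-height-1001")
```

Because no puzzle is solved, selection is nearly free in energy terms – which is precisely why misbehaviour must be deterred economically instead, as discussed below.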

In the delegated proof-of-stake model used by the EOS network, a small number of elected block producers take turns producing blocks. This is fast, but far more centralised than Bitcoin.

Other proof-of-stake protocols propose alternative ways to achieve speed without sacrificing decentralisation, but many have yet to be implemented and proven in the real world.

Proof-of-work blockchains such as Bitcoin and Ethereum are slow and inefficient: block intervals are kept deliberately long to secure the network, and block creation consumes huge amounts of electricity.

Already Bitcoin consumes as much energy as the countries of Greece, Israel, and Bangladesh. In terms of speed, Bitcoin processes about 7 transactions per second while Ethereum can process about 15. For comparison, Visa is capable of processing 45,000 transactions per second. As of now, the two most well-known blockchains are simply not fast enough and use too much energy.
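A back-of-the-envelope calculation using the throughput figures above makes the gap vivid (the constants simply restate the article’s numbers; the function name is invented for this sketch): how long would each chain need to process the transactions Visa can handle in a single second at capacity?

```python
# Throughput figures as quoted above (transactions per second).
VISA_TPS = 45_000
BITCOIN_TPS = 7
ETHEREUM_TPS = 15

def hours_to_clear_one_visa_second(tps: float) -> float:
    """Hours a chain needs to process one second of Visa-capacity traffic."""
    return VISA_TPS / tps / 3600

bitcoin_hours = hours_to_clear_one_visa_second(BITCOIN_TPS)    # ~1.79 hours
ethereum_hours = hours_to_clear_one_visa_second(ETHEREUM_TPS)  # ~0.83 hours
```

One second of Visa-scale demand would take Bitcoin the better part of two hours to clear – a gap of more than three orders of magnitude.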

How can dApps scale?

dApps need to be faster and more energy-efficient if they are to serve millions of internet users with the ease and convenience they expect. Though it’s clear proof-of-work is too slow and energy-inefficient, it remains too early to be sure that proof-of-stake is the answer.

Without proper safeguards, some attacks on proof-of-stake chains can be cheap to execute. Casper – proposed by Ethereum co-founder Vitalik Buterin and collaborators – is intended to address this by economically punishing misbehaving validators: their deposited tokens are taken away.

But Casper is known to have major flaws in its ability to reach consensus correctly. Moreover, given the need to achieve extremely high throughputs, it seems natural that a system’s bandwidth should be at least as important as the amount of tokens held when determining the importance of a particular node on the network.

Ultimately, if we want to build truly robust distributed applications, including blockchains capable of processing financial transactions, then we have to get the consensus layer correct. We can no longer rely on half-baked algorithms that lack good proofs.

We can no longer rely on implementations that don’t have any formal verification. Correct algorithms are fast algorithms, and fast consensus algorithms will affect the entire ecosystem of internet software.

By Nash Foster, CEO of Pyrofex

(In disclosure, the team I lead at Pyrofex is developing a solution called CDelta, which will use Casanova’s optimistic consensus protocol.)

About Nash Foster:

Nash Foster, CEO and co-founder of Pyrofex, has more than 20 years of experience in the computing industry and has served on the engineering staffs of Google, Oracle, Counterpane, iBiblio, and many others. Nash studied mathematics and the theory of computation at the University of North Carolina and George Mason University. He has helped Fortune 100 companies design, implement, and manage networks and network applications securely.

Disclaimer: The views and opinions expressed by the author should not be considered as financial advice. We do not give advice on financial products.
