Decentralization is hard
We're seeing a renewed interest in decentralization. The call to “decentralize all the things!” is energizing a large community of developers.
The challenge with this movement is that decentralized systems are hard to build. Decentralized systems rarely can be deployed, monitored, and operated with the same care as centrally-managed ones. Nor can configuration changes or software updates be rolled out across the network on demand. Instead, these decentralized systems' behavior is dictated solely by the code already running on them.
Even worse, it's hard to fully understand the implications of design or even configuration decisions until a system is deployed at scale, running on that wildly heterogeneous platform that is the Internet. For example, for all the revived interest in distributed hash tables (DHTs), many don't realize that the non-transitivity of Internet routing can play havoc with these algorithms.
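To see why non-transitivity matters, consider a minimal sketch of Kademlia's lookup logic (illustrative code, not Kademlia's actual implementation): routing picks the nodes whose IDs are XOR-closest to a key, implicitly assuming that any node can reach any other. When routes are non-transitive (A reaches B and B reaches C, but A cannot reach C), lookups can stall or fail even though the target node is alive.

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: the XOR of two node/key IDs."""
    return a ^ b

def closest_nodes(key: int, nodes: list[int], k: int = 3) -> list[int]:
    """Return the k nodes whose IDs are XOR-closest to the key.
    This is the core routing step; it assumes every returned node
    is actually reachable, which non-transitive routing violates."""
    return sorted(nodes, key=lambda n: xor_distance(n, key))[:k]

nodes = [0b1010, 0b0111, 0b1100, 0b0001]
print(closest_nodes(0b1000, nodes))  # -> [10, 12, 1]
```

In a real deployment, a node returned by `closest_nodes` may be unreachable from the querier even though it responds to other peers, so robust DHT implementations must retry against alternates rather than trust the metric alone.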
I have a bit of my own perspective (and scars) from this, having gone through the first wave of peer-to-peer systems. I was in the research group at MIT where the Chord protocol was developed, and my grad school officemate Petar designed Kademlia. I worked on the FreeHaven storage system and its decentralized anonymous communication layer; the same team went on to develop Tor. And I built CoralCDN, a content delivery network that used peer-to-peer protocols (including a Kademlia-based DHT) to serve web traffic. I ran that network from 2004 until 2015, and it had millions of users each day.
In fact, the academic community writ large was fascinated by decentralized, peer-to-peer systems throughout most of the 2000s, but then something happened.
The harsh reality of economics
Cloud computing and SaaS made it cost effective to rent resources by the hour and elastically grow with demand. Many of the arguments for the first wave of p2p systems -- that the capital costs of infrastructure were too high, so we should instead build systems where users would be incentivized to contribute resources -- dried up. And new ad-supported business models flourished. In the CDN market, for example, the unit-cost economics of content delivery meant that the better quality-of-experience commercial CDNs could offer more than paid for itself, given that better user retention translated to greater ad revenue. And anyway, it had proved challenging to get the incentives in these p2p systems right.
Now, things may be a bit different today. Bitcoin and cryptocurrencies are interesting in this context because they have the potential to provide a much easier way to incentivize participation: money. And the consistency of blockchain state means we have some new ways to ensure global agreement.
But in adopting a decentralization maximalist view, I fear we're learning the wrong lesson. After all, these systems are still hard to get right, and certain unit-cost economics still remain. Not all users are willing to make these tradeoffs.
Enabling choice and flexibility
Instead, how might we leverage these emerging technologies to enable choice in how we build Internet systems, so that (a) we can empower users to choose which parties and service providers they trust and want to interact with, and (b) we can future-proof our designs to adapt to changing needs?
Future-proofing is important. Perhaps what's most remarkable and striking about the Internet's basic design and layering model is that it's still going strong...50 years and countless technology shifts later.
But one aspect of the Internet that has certainly grown is the central role that distributed services and applications play, and yet this layering model is largely silent on the complexity of the application space above the network.
In fact, David Clark -- the Internet's Chief Protocol Architect in the 1980s -- discussed this in detail in his 2011 paper "The End-to-End Argument and Application Design: The Role of Trust", which generalizes his "end-to-end" principle:
The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at a point where it can be trusted to do its job in a reliable and trustworthy fashion.
Trust, in this context, should be determined by the ultimate end points---the principals that use the application to fulfill their purposes. Because the locus of trust is naturally at the ends...it more directly invites the important question of "trusted by whom?"
That question, in turn, relates to questions that implicate application design, notably "who gets to choose which service is used?"
Notably, this "trust-to-trust" principle doesn't shout "decentralize all the things!" but instead asks us to empower users to choose which services to use. And hopefully meaningful options emerge and federation returns as the norm, rather than the Hobson's choice of a single walled-garden service versus nothing at all.
Which brings me to the point of this post: Blockstack.
Blockstack and removing trust
What excited me about Blockstack is its attempt to solve these problems in an architecturally clean way: Use the blockchain and its global consensus in order to bootstrap trust (without a globally trusted party), yet otherwise enable users to leverage these new capabilities off-chain for better scalability and greater choice.
In particular, Blockstack has focused on replacing two of the linchpins of today's Internet -- DNS and certificate authorities -- with services that do not require globally trusted actors. This is not an academic exercise: too often that trust has been misplaced. Instead, Blockstack enables DNS-like naming and hierarchical namespaces, linked to secured yet easily discoverable public keys, avoiding the need to rely on existing DNS and CAs (or hard-to-use PGP/web-of-trust schemes) at all.
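The trust-bootstrapping pattern here can be sketched in a few lines of toy code. This is a hypothetical illustration of the general idea, not Blockstack's actual API or data model: the chain binds a name to an owner's public key and a hash of a routing record, while the record itself lives off-chain, so anyone can verify what they fetch against the on-chain hash without trusting the storage provider.

```python
import hashlib

# Toy "on-chain" state: name -> (owner's public key, sha256 of off-chain record)
chain_state = {}
# Toy off-chain storage, addressed by content hash
offchain_store = {}

def register(name: str, pubkey: str, record: bytes) -> None:
    """Bind a name to a key and a record hash 'on chain'; store the record off-chain."""
    digest = hashlib.sha256(record).hexdigest()
    chain_state[name] = (pubkey, digest)
    offchain_store[digest] = record

def resolve(name: str) -> tuple[str, bytes]:
    """Look up a name, fetch its record from untrusted storage, and verify it."""
    pubkey, digest = chain_state[name]
    record = offchain_store[digest]
    # The storage layer is untrusted: check the record against the on-chain hash.
    assert hashlib.sha256(record).hexdigest() == digest, "tampered record"
    return pubkey, record

register("alice.id", "PUBKEY_ALICE", b"storage: https://example.com/alice")
pk, rec = resolve("alice.id")
```

Because integrity comes from the hash committed on chain, the off-chain record can live anywhere -- a cloud bucket, a p2p network, a personal server -- which is exactly what preserves user choice in where data is stored.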
In a world full of blockchain hype and architecturally questionable decisions, Blockstack takes an approach with cleanly-defined layers that have a well-thought-out separation of function. This allows users to build varied services like websites, messaging and chat, social networking, or even financial tech, all leveraging the Blockstack architecture. Yet these services, or the users of these services, can choose where and how to store their data or execute their computations: on personal computers, on peer-to-peer architectures, on existing cloud services. And be “future proofed” for the continued evolution of the field. Choice remains.
For a deep dive into the Blockstack architecture and motivation, Muneeb Ali's thesis is a great resource. (Muneeb recently earned his PhD at Princeton University, where I served as a Reader on his dissertation committee.)
His thesis was one of the uncommon ones that describes not only a system design and architecture, but also a deployed system in real-world use today. Additionally, in the process of deploying Blockstack, Muneeb was able to perform longitudinal studies of several blockchains. This identified attacks that had previously only been theorized, such as a single miner holding sustained majority mining power (more than 51%), as well as potential selfish mining. It also put a major damper on various approaches employed by alt-coins, such as "merged" mining.
I hope you enjoy reading the thesis as well. And I look forward to helping figure out how we can build new Internet services and applications leveraging its capabilities.
Michael J. Freedman is a Professor of Computer Science at Princeton University, and the co-founder and CTO of Timescale, an open-source time-series database optimized for fast ingest and complex queries. His work on the CONIKS key management protocol was awarded the 2017 Caspar Bowden Award for Privacy Enhancing Technologies. He is also an advisor to Blockstack.