Decentralizing the Internet, Again
Recently I recorded a short video on Decentralized Finance (DeFi) as a result of my discussions with Aleksandar Kuzmanovic, who has a startup in the blockchain space. The use of blockchain technology to decentralize traditionally centralized functions (brokers, securities exchanges, etc.) got me thinking about the bigger issues of Internet centralization, the topic of this week’s post.
Anyone who studies Internet technology quickly learns about the importance of distributed algorithms to its design and operation. Routing protocols are an obvious example of such algorithms. I remember learning how link-state routing worked and appreciating the elegance of the approach: each router telling its neighbors about its local view of the network; flooding of these updates until each router has a complete picture of the network topology; and then every router running the same shortest-path algorithm to ensure (mostly) loop-free routing. I think it was this elegance, and the mental challenge of understanding how such algorithms work, that turned me into a “networking person” for the next thirty years.
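To make that elegance concrete, here is a minimal sketch (in Python, over a made-up four-node topology) of the computation each router performs once flooding has given it a complete link-state database: Dijkstra’s shortest-path algorithm, run independently but identically at every router.

```python
import heapq

# Toy link-state database: flooding has given every router this same,
# complete view of the (made-up) topology. Links are (neighbor, cost) pairs.
topology = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}

def compute_routes(source):
    """Dijkstra's shortest-path algorithm: returns the first hop from
    `source` toward every other router. Each router runs this same
    computation independently over the same link-state database."""
    dist = {source: 0}
    first_hop = {}
    pq = [(0, source, None)]          # (cost, router, first hop used)
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue                  # stale entry; a cheaper path was found
        if hop is not None:
            first_hop[node] = hop
        for neighbor, link_cost in topology[node]:
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                # Directly attached neighbors are their own first hop;
                # everything else inherits the first hop of its predecessor.
                heapq.heappush(pq, (new_cost, neighbor, hop or neighbor))
    return first_hop

print(compute_routes("A"))   # {'B': 'B', 'C': 'B', 'D': 'B'}
```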
The idea of decentralization is baked quite firmly into the Internet’s architecture. The definitive paper on the Internet’s original design is David Clark’s “The Design Philosophy of the DARPA Internet Protocols” published in 1988. Near the top of the list of design goals we find “Internet communication must continue despite loss of networks or gateways” and “The Internet must permit distributed management of its resources”. The first goal leads directly to the idea that there must not be single points of failure, while the second says more about how network operations must be decentralized.
When I worked on the team developing MPLS in the late 1990s, we absolutely believed that every algorithm had to be fully decentralized. Both MPLS traffic engineering (TE) and MPLS-BGP VPNs were designed to use fully distributed algorithms with no central point of control. In the case of TE, we realized early on that centralized algorithms could come closer to providing optimal solutions, but we couldn’t see any way to get those algorithms into the hands of users, given the fundamentally distributed nature of routing.
Ultimately the idea that centralized algorithms could do better took hold with software-defined networking. Google (with B4) and Microsoft (with SWAN) both found a way to improve on MPLS-TE by using centralized path selection algorithms, with an SDN controller pushing the centrally computed paths out to routers that implement a distributed data plane. And MPLS VPNs now face a serious challenge from SD-WAN solutions, which centralize the control of VPN tunnel creation and are operationally much simpler than MPLS.
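As a rough illustration of why a global view helps (and not a description of what B4 or SWAN actually do), here is a sketch in which a controller that knows every link’s residual capacity places a new flow on the candidate path with the most headroom, something a purely local shortest-path decision cannot do. The capacities, candidate paths, and demand are all made up.

```python
# Hypothetical residual capacities (Gbps) for each directed link, as seen
# from the controller's global view of the network.
residual = {
    ("A", "B"): 2, ("B", "D"): 2,     # the shortest path, but nearly full
    ("A", "C"): 8, ("C", "D"): 9,     # a longer path with plenty of headroom
}

candidate_paths = [["A", "B", "D"], ["A", "C", "D"]]

def headroom(path):
    """A path is only as good as its most congested link."""
    return min(residual[link] for link in zip(path, path[1:]))

def place_flow(demand_gbps):
    """Pick the candidate path with the most headroom, then account for the
    new flow on every link along it. A router making a purely local
    shortest-path decision would have sent the flow via B and congested it."""
    best = max(candidate_paths, key=headroom)
    if headroom(best) < demand_gbps:
        raise RuntimeError("no candidate path can carry this flow")
    for link in zip(best, best[1:]):
        residual[link] -= demand_gbps
    return best

print(place_flow(5))   # ['A', 'C', 'D']
```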
Many people who had internalized the lessons of distributed network architecture struggled to accept SDN because the concept of centralized control was so much at odds with everything we believed about best network design practices. What pushed me over to the SDN camp was the realization that you could build scalable and fault-tolerant networks with centralized control as long as you leveraged ideas from outside the networking community. Consensus algorithms such as Paxos and Raft, for example, sit at the heart of most SDN controllers, enabling them to scale and tolerate component failures. SDN enables the logical centralization of control without introducing the downsides of scaling bottlenecks or single points of failure. And it has produced substantial benefits, such as the ability to expose a network-wide API, considerably simplifying the problem of network configuration and opening the way to automated network provisioning.
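As a toy illustration of that fault tolerance (not a real consensus implementation), the sketch below captures just the majority-quorum rule at the heart of Paxos and Raft: a controller update counts as committed only once a majority of replicas acknowledge it, so the control plane keeps making progress as long as only a minority of replicas have failed.

```python
class Replica:
    """One member of a replicated SDN controller cluster (toy model)."""
    def __init__(self, name, alive=True):
        self.name, self.alive, self.log = name, alive, []

    def append(self, entry):
        if not self.alive:
            return False              # a failed replica cannot acknowledge
        self.log.append(entry)
        return True

def replicate(replicas, entry):
    """Commit rule only: the update succeeds iff a majority acknowledge it.
    Leader election, log ordering, and recovery are all omitted."""
    acks = sum(r.append(entry) for r in replicas)
    return acks > len(replicas) // 2

cluster = [Replica("c1"), Replica("c2"), Replica("c3", alive=False)]
print(replicate(cluster, "install flow-rule 42"))   # True: 2 of 3 acked
```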
SDN has also not actually made the Internet less decentralized. There are still hundreds or thousands of ISPs, the domain name system is still decentralized, and autonomous systems are still managed independently of each other. But there is an aspect of centralization to be concerned about, which is the platforms that determine how so many people use the Internet. While, from a technical point of view, platforms such as Google, Facebook, Twitter, etc. are impressively engineered distributed systems, they present a rather monolithic view of the Internet to billions of users. How the services that we actually consume on the Internet became increasingly centralized is well captured in a blog post from a16z’s Chris Dixon. A similar view is nicely illustrated by one of my favorite cartoonists, The Oatmeal, in “Reaching People on the Internet in 2021”.
Both Dixon and the Oatmeal point to the disadvantages of leaving too much control in the hands of large platforms. For example, a central platform can suddenly change its policies in a way that shifts users away from a creator’s content. There are also more technical examples in which widespread reliance on a single platform has led to broad unavailability of Internet services. The Fastly outage of 2021, for instance, had a global impact on sites that depended on its CDN (such as the New York Times and Amazon); days later, an outage at Akamai had a similar effect; and Cloudflare’s 2020 backbone failure provides yet another example of a problem at one platform having sweeping impact. There’s an interesting blog post from Cloudflare discussing yet another high-impact outage, which is traced back to Raft failing to elect a leader under certain settings and failure conditions. Essentially, a flaw in a distributed algorithm created a single point of failure for many customers.
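The sketch below is not a reconstruction of the Cloudflare incident, but it illustrates the general failure mode of a stalled leader election: if candidates time out simultaneously, each votes for itself, the vote splits, and no leader emerges; Raft’s usual remedy is randomized election timeouts. The node names and timeout values are made up.

```python
import random

def run_election(timeouts, new_timeout, max_rounds=10):
    """Toy model of repeated election rounds. Whichever nodes share the
    smallest timeout become candidates; a candidate wins if it gathers a
    majority of votes (its own plus those of the non-candidates)."""
    nodes = list(timeouts)
    for round_number in range(1, max_rounds + 1):
        soonest = min(timeouts.values())
        candidates = [n for n in nodes if timeouts[n] == soonest]
        for candidate in candidates:
            # Non-candidates grant their vote; other candidates have
            # already voted for themselves and refuse.
            votes = 1 + sum(1 for n in nodes if n not in candidates)
            if votes > len(nodes) // 2:
                return candidate, round_number
        # Split vote: every node picks a new timeout and tries again.
        timeouts = {n: new_timeout() for n in nodes}
    return None, max_rounds

# Pathological settings: identical, non-randomized timeouts never break the tie.
print(run_election({"n1": 150, "n2": 150, "n3": 150}, lambda: 150))
# -> (None, 10): no leader is ever elected, so the cluster cannot make progress

# Raft's remedy: randomized timeouts make ties rare, so a leader emerges quickly.
jitter = lambda: random.randint(150, 300)
print(run_election({n: jitter() for n in ("n1", "n2", "n3")}, jitter))
```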
It’s worth returning to Clark’s Internet Philosophy paper from 1988 and noting that, while the Internet still works when routers and gateways fail, satisfying goal number one, many services and websites now fail when a platform on which they depend (such as a CDN) fails. In effect, single points of failure have been unwittingly introduced. And while distributed management of the Internet lives on, large chunks of the services we depend on are managed by a small number of entities.
Some of these problems are easier to address than others. The Oatmeal cartoon points to a subscription email service as a way to bypass central gatekeepers of content (and we certainly support that–subscribe to this newsletter!). Perhaps it will become a best practice to start using multiple CDN providers. And it is claimed that blockchains could lead to a more decentralized Internet (see Dixon’s post above). Decentralized Finance is one example of how blockchains have created an opportunity to decentralize historically centralized functions. Non-fungible tokens (NFTs) provide a possible path for artists and creators to reach their audiences without central entities (record labels, streaming services, auction houses). At the same time, there is plenty of justified skepticism about the long-term potential of blockchains and cryptocurrencies to move beyond the current speculative phase.
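On the multi-CDN suggestion above, here is a minimal client-side sketch of the idea: try each provider in turn and fall back on failure. The URLs are hypothetical, and real deployments more commonly steer traffic with DNS or a load balancer, but the principle is the same: no single CDN remains a single point of failure for the content.

```python
import urllib.request

# Hypothetical asset served by two independent CDN providers.
CDN_URLS = [
    "https://cdn-primary.example.com/assets/app.js",
    "https://cdn-secondary.example.net/assets/app.js",
]

def fetch_with_failover(urls, timeout=2):
    """Try each CDN in turn; fall back to the next on any error or timeout."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except OSError as error:      # covers URLError, HTTPError, and timeouts
            last_error = error
    raise RuntimeError(f"all CDNs failed, most recently: {last_error}")

# bundle = fetch_with_failover(CDN_URLS)
```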
It seems that the pendulum swung hard towards centralization with the rise of a few giant Internet companies controlling the way billions of people experience the Internet, and that pendulum is showing signs of slowing, if not starting to swing the other way. Decentralization is a pillar of the Internet’s architecture that has been fundamental to its success, and we’re now seeing a wide range of efforts to return to its decentralized roots. Let’s hope that at least some will be successful.
Our third article has been published in The Register, tackling the subject of GitOps, and it generated some spirited discussion. We’re making good progress on an update to our SDN book, with a first draft chapter on network virtualization now available. Feel free to raise issues or create pull requests if you see any part of the book that could be improved. And for something different, my old Nicira colleagues Martin Casado and Steve Mullaney joined HashiCorp’s Armon Dadgar for a fascinating (and occasionally hilarious) podcast on category creation.