Our article on the fediverse, which appeared here in December, made it into The Register last week, opening it up to the commentariat. We’ve learned to have a thick skin when reading El Reg comments, but our attention was drawn to an inevitable thread about whether the fediverse was anything new. That set us thinking about other times that old ideas have come around again to good effect, which is the theme for this week.
There is no doubt that many of the ideas embedded in Mastodon and the fediverse have been around for a long time. The decentralized nature invites the analogy to systems like email and Usenet, while we observed the importance of publish/subscribe systems dating back at least as far as 1987. Federated social networks, as opposed to the centralized ones that have recently shown their glaring shortcomings, were proposed long before Mastodon. But there is a habit I’ve observed among my colleagues in the computing industry of ignoring or downplaying the importance of a new technical approach simply because it looks like something we’ve seen before. Does the fact that something isn’t entirely novel mean that it is destined for either irrelevance or failure? Anyone who has submitted a paper to an academic conference has probably been told (by Reviewer #2) that their work was insufficiently novel. Surely we might instead be encouraged by the fact that it builds on prior experience rather than starting entirely from scratch. Furthermore, even if the ideas on which a new approach is based are believed to have failed in the past, the changing context over the decades, as other technologies appear and mature, can make a seemingly failed idea entirely feasible in its later incarnation.
My favorite example of an idea that was discounted (by many) because of its historical roots is the concept of a logically centralized control plane separated from the data plane. As noted by Greenberg et al., this idea goes back at least to the telephone network, and it was proposed in various contexts for packet switching in the ’90s and 2000s (including IP Switching and ForCES). When it appeared as the basis for Software Defined Networking (SDN) around 2009, I encountered plenty of people who made the case that, since it hadn’t been successful in its earlier incarnations, there was no reason to expect any impact this time around. This “nothing to see here” view was particularly prevalent among my Cisco colleagues, since we all had a vested interest in the tightly coupled approach to router and switch architecture staying just as it was.
But in fact SDN has had a huge impact, ranging from Google’s backbone to Azure, appearing in thousands of enterprise data centers, and facilitating the disaggregation of routers and switches that Nick McKeown predicted at the very outset. There are a lot of reasons for this impact. I’d argue that the most common reason an old idea can take off after many decades is that other enabling technologies make it possible. In the case of SDN, the ability to build robust and scalable logically centralized controllers was enabled by, among other things, a few decades of distributed systems research. Even though we were building a networking product at Nicira when we set out to bring an SDN system to market, there were far more distributed systems people than networking people on the engineering team. Martin Casado, Nicira’s co-founder, has said: “Learned as an engineer: bad ideas generally stay bad. Learning as an investor: powerful, disruptive ideas often were once bad ideas.” Most people of my era believed that centralized control was a “bad idea” in networking, yet it was the adoption of that idea that enabled SDN to become the disruptive force that it is.
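For readers who like to see the shape of an idea in code, here is a toy sketch of the control/data plane split: a controller that holds the entire topology, runs Dijkstra centrally, and “pushes” next-hop tables to switches that do nothing but forward. The switch names and the print-based southbound push are stand-ins of my own invention, not any real controller’s API (in practice OpenFlow and its successors fill that role).

```python
import heapq
from collections import defaultdict

class Controller:
    """A toy logically centralized control plane: it holds the whole
    topology, computes routes globally, and pushes per-switch tables
    down to switches that do nothing but forward."""

    def __init__(self):
        self.links = defaultdict(dict)  # switch -> {neighbor: link cost}

    def add_link(self, a, b, cost=1):
        self.links[a][b] = cost
        self.links[b][a] = cost

    def next_hops(self, src):
        """Dijkstra from src; returns {destination: next hop out of src}."""
        dist, table = {src: 0}, {}
        pq = [(0, src, None)]  # (cost so far, node, first hop on the path)
        while pq:
            cost, node, first = heapq.heappop(pq)
            if cost > dist.get(node, float("inf")):
                continue  # stale queue entry
            for nbr, w in self.links[node].items():
                if cost + w < dist.get(nbr, float("inf")):
                    dist[nbr] = cost + w
                    table[nbr] = nbr if first is None else first
                    heapq.heappush(pq, (cost + w, nbr, table[nbr]))
        return table

    def push_tables(self):
        # The "southbound" push: a real controller would speak OpenFlow
        # or similar to each switch; printing stands in for that here.
        for sw in list(self.links):
            print(f"install on {sw}: {self.next_hops(sw)}")

ctl = Controller()
for a, b in [("s1", "s2"), ("s2", "s3"), ("s1", "s3"), ("s3", "s4")]:
    ctl.add_link(a, b)
ctl.push_tables()
```

The routing here is trivial; the hard part, keeping that logically centralized state robust and scalable across controller replicas, is exactly where those decades of distributed systems research paid off.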
Virtualization in its many forms is another idea that has been around for decades only to hit its stride after a few technology cycles. When I heard about server virtualization taking off in the 2000s (with plenty of unfortunate consequences for data center networking), my first thought was, “Why would anyone virtualize an entire server? What is wrong with process isolation?” And of course compute virtualization had been around since the 1970s, so what made it a good idea 30 years later? The answer is complex, but it had a lot to do with the process of provisioning an x86 server in an enterprise data center at that particular point in time, and with the fact that many applications expected the resources of a complete server. The technical landscape of the 2000s was completely different from that of the 1970s, and server virtualization found its sweet spot.
By the mid-2010s, cloud native applications were taking off, and they did not expect to have a dedicated server to run on. Nor were they designed to be provisioned on servers by IT operations people. As the architecture of applications changed, containers became a preferred deployment option. Here again was a technology that had been around for at least a decade (Solaris containers appeared in 2004; Google was working on cgroups by 2006) getting its day in the sun as the technology landscape evolved to create the conditions for its widespread adoption.
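For the curious, the cgroup mechanism underlying these containers is refreshingly mundane. What follows is a minimal sketch, assuming a Linux machine with the cgroup v2 hierarchy mounted at /sys/fs/cgroup, root privileges, and the memory controller enabled; the group name “demo” and the 64 MiB limit are arbitrary choices for illustration.

```python
import os

# Carve out a cgroup, cap its memory, and enroll the current process.
# Everything the process allocates from here on is accounted against
# (and limited by) the group: the same kernel mechanism containers use.

CGROUP = "/sys/fs/cgroup/demo"          # "demo" is an arbitrary name
os.makedirs(CGROUP, exist_ok=True)

with open(os.path.join(CGROUP, "memory.max"), "w") as f:
    f.write(str(64 * 1024 * 1024))      # 64 MiB cap, chosen arbitrarily

with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))           # move this process into the group
```

Container runtimes layer namespaces, images, and tooling on top, but resource isolation ultimately comes down to kernel bookkeeping like this.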
For many years I took part in the IRTF’s End-to-End Research Group, and our leader Bob Braden would often lament the extent to which networking folks didn’t know their history. While there is enormous value in understanding the history of our field, it’s also important to appreciate that the environment is always changing and ideas may become more feasible or relevant over time. The converse is true as well: some ideas that we may accept as settled might need to be reexamined as the environment changes. The idea that routing algorithms must be fully distributed was considered a settled fact when I was learning networking. This was challenged by the 4D paper and subsequently by SDN, and it is now common (although far from ubiquitous) to see centralized routing in the wild.
With so many ideas competing for our attention, I think it’s a natural human tendency to find a quick reason to discount an idea. “Seen that movie before, let’s move on” might save us some time to focus on more important things, but it might also cause us to miss something important whose time has now arrived.
We are now offering ebooks in the Systems Approach series directly from our website at a discounted price. The content remains freely accessible, but if you’d like to support our work and have the convenience of a pre-built ebook, we’ve made it easy (assuming we mastered the intricacies of accepting payments). Speaking of spending money, we enjoyed this article about why humans should not travel to Mars, not just for its technical arguments but for quotes like this: “If NASA is Amtrak in space, then SpaceX is the Fyre Festival with rockets.”
This week’s photo by Hermes Rivera on Unsplash.