How Innovation in Computing Drives Networking
We are keen students of history at Systems Approach, LLC, and this week we look back on some of the biggest transformations in networking, and how those transformations are shaped by the changes to the computing landscape. Along the way we found ourselves revisiting a “research vision” paper from 2005 that proved to be remarkably prescient. All of which gives us some thoughts about what might happen next, particularly at the network edge. This post was written in collaboration with Amar Padmanabhan.
As the Internet continues to scale and evolve to connect billions of people and tens of billions of devices, it’s impressive to recall that it was originally designed to connect a few dozen computers sitting in the labs and machine rooms of a few universities and research labs. Fortunately for those of us who use it today, the founding architects of the Internet were sufficiently farsighted that it has far exceeded its original design objectives. As one of those architects, Vint Cerf, famously noted, there were a couple of flaws in the design, including a lack of security and the failure to recognize that eventually the address space would need to identify billions of computing devices. However, the architecture has proven remarkably adaptable and, as the computing world has changed, so networking has followed suit to interconnect the world’s computing devices.
One of the most significant transformations of the networking industry began over a decade ago, although it may not have been obvious to the majority of Internet users. This transformation was driven by the rise of cloud computing, which underpins most of the Internet services that we rely on today. As companies such as Amazon and Google built their data centers out of massive numbers of commodity servers, they needed a new networking paradigm to interconnect those servers. This turned out to be the “sweet spot” for software-defined networking (SDN).
SDN grew out of a number of parallel efforts to make networking more open to innovation and more operationally robust. Notable among these were the “4D architecture” proposed by Greenberg et al. and the Ethane project, which laid the groundwork for much of what became known as SDN. We were part of one of the early commercial successes of SDN, its application to enterprise data centers in support of network virtualization. The key changes to the computing landscape that made this success possible were (a) the adoption of commodity servers to build large, scale-out data centers, and (b) the rise of server virtualization to simplify the operations of those data centers. Once it became easy to automatically provision large numbers of (virtual) servers to tackle computational tasks, it became painfully obvious that the old ways of manually provisioning networks one box at a time were no longer fit for purpose. This was the shift in the computational landscape that paved the way for SDN to take off in the data centers of both hyperscalers and enterprises.
There is another transformation taking place that, we believe, is driving the next generation of network architectures. Broadly speaking, this is the rise of “the Edge”. We have already reached a point where billions of devices are connected to the edge of the Internet. In our recent chat with the Open Infrastructure Foundation we agreed that 50 billion connected devices doesn’t seem like a stretch once we add connected vehicles, the Internet of Things (IoT), and wearable devices to the mix. But there is more to this than just an increasing number of devices. These devices will be more heterogeneous (most of them will not be mobile phones or traditional computers) and will in many cases require more computation to be performed at the edge. It is for this reason that we consider the edge of the network to be the focus of innovation for the foreseeable future. (Some of our colleagues pointed this out as far back as 2005!)
The need for edge computing has been recognized for a few years now, driven by several factors. For some applications, such as autonomous transportation, low-latency communication is likely to be critical. Any computation on data generated by autonomous vehicles will need to happen very close to the edge if a timely response is required. Virtual reality is another example where low latency is critical to the user experience. And some applications will generate vast amounts of data that can be processed more cost-effectively at the edge than backhauled expensively to central locations.
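To see why proximity matters, a back-of-the-envelope calculation is enough: propagation delay alone sets a floor on round-trip time. The sketch below uses illustrative distances (not measurements) and the rule of thumb that signals in fiber travel at roughly two-thirds the speed of light:

```python
# Why distance matters for latency-sensitive edge applications.
# Distances are illustrative assumptions; real paths add queuing,
# transmission, and processing delays on top of propagation.

SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 2 / 3  # approximate signal speed in fiber relative to c

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds (propagation only)."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

for label, km in [("edge site, ~50 km away", 50),
                  ("distant cloud region, ~2000 km away", 2000)]:
    print(f"{label}: {round_trip_ms(km):.1f} ms round trip")
```

Even before any queuing or processing, a distant cloud region costs tens of milliseconds per round trip, while a nearby edge site stays well under a millisecond, which is the kind of budget autonomous systems and VR need.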
The Internet was successful because it was agnostic to the applications running over it, enabling decades of innovation at the application layer. It is equally important that the next generation edge architecture be as general as possible. We must avoid unwittingly optimizing for one class of application over another. We see low-latency communication to a set of edge-based computational resources as a new general-purpose capability that needs to be present in the future edge. We now talk of “Edge Clouds”, pools of computational capacity that sit close to the next generation of devices at the edge.
This is where we see Magma playing a role in the future of networking. This is about more than bringing wireless connectivity cost-effectively out to the edge (although it is that too). Magma takes the lessons of the Internet and SDN and applies them to edge networking. Dealing with heterogeneity is essential to the edge just as it has been central to the success of the Internet. Magma supports heterogeneity by design, in terms of both the types of devices that will be connected and the spectrum that will be used to connect them. Because the future edge is heterogeneous, Magma is agnostic to the radio spectrum that it uses, in contrast to a standard implementation of 4G or 5G.
Magma also draws on the lessons of SDN and large-scale data center design to deliver an architecture that is robust and easy to manage. The manual, box-by-box management of routers and other network devices became an impediment to progress in the cloud computing era; SDN solved this problem by using logically centralized control to automate the delivery of network services in the cloud. These same principles (logical centralization, software control, automation) need to be applied to the delivery of network services at the edge if they are to scale to the many billions of connected devices to come.
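The core idea of logically centralized control can be sketched in a few lines: intent is declared once at a controller, which computes and pushes per-device configuration, rather than an operator logging into each box. The class and field names below are hypothetical illustrations, not Magma’s actual API:

```python
# A toy sketch of logically centralized control: one declarative policy
# change at the controller fans out to every managed device automatically.
# Names and config fields are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class DeviceConfig:
    device_id: str
    allowed_networks: tuple[str, ...]

class Controller:
    def __init__(self) -> None:
        self.devices: dict[str, DeviceConfig] = {}

    def register(self, device_id: str) -> None:
        # New devices start empty; the controller owns the desired state.
        self.devices[device_id] = DeviceConfig(device_id, ())

    def apply_policy(self, allowed_networks: tuple[str, ...]) -> None:
        # A single intent update is recomputed and pushed to all devices,
        # replacing manual, box-by-box configuration.
        for device_id in self.devices:
            self.devices[device_id] = DeviceConfig(device_id, allowed_networks)

ctrl = Controller()
for i in range(3):
    ctrl.register(f"edge-gw-{i}")
ctrl.apply_policy(("10.0.0.0/8",))
```

The point of the sketch is the shape of the system, not the mechanics: the controller holds the authoritative desired state, and the devices are kept consistent with it by automation rather than by hand.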
We recently sat down for a podcast with Russ White and Tom Ammon, in which we talked about the Systems Approach and our overall approach to open source content development. And on a completely different topic, Bruce’s effort to teach a graduate systems class about quantum computing is available on YouTube.