What OpenFlow Teaches Us About Innovation
This week marks the 13th anniversary of Nick McKeown’s influential paper proposing OpenFlow as an interface to open the network to more innovation. The term “Software-Defined Networking” had not yet been coined. In this issue we’re going to examine how that proposal played out in shaping the future of networking.
Earlier this month I wrote a blog post recounting the history of OpenFlow at ONF. It got me thinking about how to have impact, and potentially change the course of technology. The OpenFlow experience makes for an interesting case study.
For starters, the essential idea of OpenFlow was to codify what at the time was a hidden but critical interface in the network software stack: the interface the network control plane uses to install entries in the forwarding information base (FIB) that implements the switch data plane. The over-the-wire details of OpenFlow don’t matter in the least (although there were plenty of arguments about them at the time). The important contributions were (a) to recognize that the control/data-plane interface is pivotal, and (b) to propose a simple abstraction (match/action flow rules) for that interface.
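To make that abstraction concrete, here is a minimal sketch in Python of a match/action flow table. It is purely illustrative: the field names, action encoding, and priority handling are simplified stand-ins, and it says nothing about the OpenFlow wire format.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class FlowRule:
    match: Dict[str, str]    # header fields to match on, e.g. {"ipv4_dst": "10.0.0.1"}
    actions: List[str]       # what to do with matching packets, e.g. ["output:2"]
    priority: int = 0        # higher-priority rules are consulted first

class FlowTable:
    """A toy stand-in for the switch's forwarding table (the FIB)."""

    def __init__(self) -> None:
        self.rules: List[FlowRule] = []

    def install(self, rule: FlowRule) -> None:
        # The control plane (e.g., an SDN controller) calls this to program the data plane.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def lookup(self, packet: Dict[str, str]) -> Optional[List[str]]:
        # The data plane consults the table for every arriving packet.
        for rule in self.rules:
            if all(packet.get(field) == value for field, value in rule.match.items()):
                return rule.actions
        return None  # no match: drop the packet, or punt it to the controller

# Forward traffic destined for 10.0.0.1 out port 2.
table = FlowTable()
table.install(FlowRule(match={"ipv4_dst": "10.0.0.1"}, actions=["output:2"], priority=10))
print(table.lookup({"ipv4_src": "10.0.0.7", "ipv4_dst": "10.0.0.1"}))  # ['output:2']
```

The whole point of the abstraction is that the controller only needs this match/action vocabulary; everything about how the switch hardware realizes the table is hidden behind the interface.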
In retrospect, one of the secrets of OpenFlow’s success was its seemingly innocuous origins. The original paper, published in ACM SIGCOMM’s CCR (2008), was a call to action for the network research community, proposing OpenFlow as an experimental open interface between the network’s control and data planes. The goal was to enable innovation, which at the time included the radical idea that anyone—even researchers—should be able to introduce new features into the network control plane. Care was also taken to explain how such a feature could be deployed without impacting production traffic, mitigating the risks that such a brazen idea posed to operational networks.
It was a small opening, but a broad range of organizations jumped into it. A handful of vendors added an “OpenFlow option” to their routers; the National Science Foundation (NSF) funded experimental deployments on university campuses; and Internet2 added an optional OpenFlow substrate to its backbone. ONF was formed to provide a home for the OpenFlow community, and ON.Lab started releasing open source platforms based on OpenFlow. With these initiatives, the SDN transformation was set in motion.
Commercial adoption of SDN was certainly an accelerant, with VMware acquiring the startup Nicira and cloud providers like Google and Microsoft talking publicly about their SDN-based infrastructures (all in 2012), but this was a transformation that got its start in the academic research community. Over time, some of the commercial successes have adapted SDN principles to other purposes; VMware’s NSX, for example, supports network virtualization through programmatic configuration, without touching the control/data plane interface of physical networking equipment. But the value of disaggregating the network control and data planes and logically centralizing control decisions has proved long-lasting, with OpenFlow and its SDN successors running in datacenter switching fabrics and Telco access networks today.
The original proposal did not anticipate where defining a new API would take the industry, but the cascading of incremental impact is impressive (and perhaps the most important takeaway from this experience). Originally, OpenFlow was conceived as a way to innovate in the control plane. Over time, that flexibility put pressure on chip vendors to also make the data plane programmable, with the P4 programming language (and a toolchain to auto-generate the control/data plane interface) now becoming the centerpiece of the SDN software stack. It also put pressure on switch and router vendors to make the configuration interface programmable, with gNMI and gNOI now replacing (or at least supplementing) the traditional router CLI.
OpenFlow was also originally targeted at L2/L3 switches, but the idea is now being applied to the cellular network, which is putting pressure on the RAN vendors to open up and disaggregate their base stations. The 5G network will soon have a centralized SDN controller (renamed the RAN Intelligent Controller), hosting a set of control applications (renamed xApps), using an O-RAN-defined interface (in lieu of OpenFlow) to control a distributed network of Radio Units. SD-RAN is happening, and it has the potential to be a killer app for SDN.
One of the more interesting aspects of all of this is what happened to OpenFlow itself. The specification iterated through multiple versions, each enriching the expressiveness of the interface, but also introducing vendor-championed optimizations. This led to data plane dependencies, an inherent risk in defining what is essentially a hardware abstraction layer on top of a diverse hardware ecosystem. P4 is a partial answer to that risk. By coding the data plane’s behavior in a P4 program (whether that program is compiled into an executable image that can be loaded into the switching chip or is merely descriptive of a fixed-function chip’s behavior), it is possible to auto-generate the control/data plane interface (known as P4Runtime) in software, instead of depending on a specification that evolves at the pace of standardization. (This transition to P4 as a more effective embodiment of the control/data plane interface is covered in our SDN book.)
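As a rough illustration of what auto-generating the interface buys you, the following Python sketch (with hypothetical names; the real P4Runtime API is a gRPC service) shows a controller populating a table that the loaded P4 program declares, rather than a table fixed in advance by a protocol specification.

```python
# Hypothetical sketch only: the table name "ipv4_lpm", its match field, and the
# "ipv4_forward" action are illustrative names of the kind a P4 program would
# declare; the class below stands in for an auto-generated P4Runtime-style binding.

class GeneratedPipelineAPI:
    """Control-plane bindings derived from a P4 program, not from a fixed spec."""

    def __init__(self, tables):
        self.tables = tables  # tables declared by the currently loaded P4 program

    def insert_entry(self, table, match, action, params):
        if table not in self.tables:
            raise ValueError(f"table {table!r} is not declared in the P4 program")
        # A real client would marshal this into a P4Runtime WriteRequest over gRPC.
        print(f"insert into {table}: {match} -> {action}({params})")

# The controller programs whatever tables this particular P4 program defines.
api = GeneratedPipelineAPI(tables={"ipv4_lpm"})
api.insert_entry(
    table="ipv4_lpm",
    match={"hdr.ipv4.dstAddr": ("10.0.1.0", 24)},  # longest-prefix match on the destination
    action="ipv4_forward",
    params={"dstAddr": "08:00:00:00:01:11", "port": 1},
)
```

Change the P4 program and the set of tables (and hence the interface the controller sees) changes with it, with no standards body in the loop.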
It is now the case that the network—including the control/data plane interface—can be implemented, from top to bottom, entirely in software. OpenFlow served its purpose bootstrapping SDN, but even the Open Networking Foundation is shifting its focus from OpenFlow to P4-based SDN in its new flagship Aether project. Marc Andreessen's famous maxim that "software is eating the world" is finally coming true for the network itself!
A well-placed and smartly defined interface is a powerful catalyst for innovation. OpenFlow has had that effect inside the network, with the potential to replicate the success of the Socket API at the edge of the network. Sockets defined the demarcation point between applications running on top of the Internet and the details of how the Internet is implemented, kickstarting a multi-billion-dollar industry writing Internet (now cloud) applications. Time will tell how the market for in-network functionality evolves, but re-architecting the network as a programmable platform (rather than treating it as plumbing) is an important step towards improving feature velocity and fostering the next generation of network innovation.
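For a sense of what that demarcation point looks like in practice, here is a short Python example that fetches a web page over a socket. The application names an endpoint and exchanges bytes, while everything about how the network delivers them (routed, switched, or programmed via OpenFlow/P4) stays hidden below the API.

```python
import socket

# The application lives entirely above the Socket API: it connects to a name/port
# and exchanges bytes, with no visibility into routing, switching, or forwarding.
with socket.create_connection(("example.com", 80), timeout=5) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

print(reply.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```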
Last week we received the first print copies of Computer Networks: A Systems Approach (Sixth Edition), just 25 years after publication of the first edition, so you can now order it in print as well as read it online. We’ve also been digging into the Magma project, which is bringing open source innovation to mobile networking, and Internet access to remote communities, for example in Brazil. And Bruce talked with Eleni Steinman of blockchain startup bloXroute on all sorts of topics related to networking, open source, and blockchain.