Abstraction Merchants
As computer networking researchers, we’ve been active members of the SIGCOMM community for most of our careers. Last week, Bruce talked about the origin story of Systems Approach in a podcast, pointing back to the first edition of Computer Networks: A Systems Approach, published in 1996. But in fact it was working together on a 1994 SIGCOMM paper that first gave us the idea that we might be able to collaborate on a book. So given our long history with SIGCOMM, we’re very much interested in the current discussion about the future of the flagship conference, which requires a discussion about the future of networking research.
I recently reread Scott Shenker’s essay questioning whether the practices used by SIGCOMM-sponsored conferences are helping the community achieve its research goals. The essay has triggered much discussion about those practices, but what I find most interesting is how it first takes a position on what the research goals should be. If form is to follow function (as the essay’s title suggests), then you need to start with a good handle on the desired function. My sense is that the ongoing debate about practices (form) is more strongly rooted in how we think about the research goals (function) than we acknowledge.
Scott starts with a perfectly good statement of what he believes our shared goal to be:
SIGCOMM should strive to lay the intellectual foundation for future networks and networked systems.
And then he teases apart the nuanced meaning of the key phrases. This reminds me of a similar discussion 20 years ago, when the National Research Council published a report entitled Looking Over the Fence at Networks: A Neighbor’s View of Networking Research. Two things stand out about that report. One is that it was written at a time when the network research community was gnashing its collective teeth over the state of networking research, with companies like Cisco dominating what did or did not happen in the Internet. The other is that it purposely included input from people outside the networking community, providing a healthy dose of perspective. That history repeats itself is reason enough to take a fresh look at that two-decade-old report, but I think there might be some other lessons to learn from both the report and Scott’s essay, and they (in part) have to do with the 800-pound gorilla in the room: the Internet.
For starters, the report lays out three promising research thrusts that are just as appropriate today: Measuring (understanding the Internet artifact); Modeling (defining a theory of networking); and Making (building disruptive prototypes). This is where the Internet is a double-edged sword. That it exists as a real-world phenomenon worthy of study is a boon for network research. That it exists as a multi-billion dollar industry makes it difficult to have impact by proposing new abstractions or architectures unless they can be justified and evaluated in the context of today’s artifact. I have a lot of respect for researchers who devise techniques to collect data about the Internet and then analyze that data to gain a deeper understanding of its behavior, but I’ve spent my career more on the synthesis side than the analysis side of CS, and so I’ve been especially aware of the Internet as a barrier to research.
The report introduced the term ossification into our vocabulary, and motivated work like PlanetLab (for me) and SDN (for Scott and others). Building platforms to support innovative and disruptive systems research (which has a strong synthesis component in its own right) is surely a good thing, but it is a means to an end. The “end” is discovering the fundamental abstractions and design principles that are the “intellectual foundation” of networking. And I believe this point to be at the core of why we struggle to get the “conference practices” right. Here are my personal observations.
First, ten years ago this month, at the other end of my PlanetLab experience, I gave the keynote at SIGCOMM. It was entitled Zen and the Art of Network Architecture, inspired by Robert Pirsig’s “Zen and the Art of Motorcycle Maintenance.” As I noted in my talk, Pirsig held the world record for having been rejected by 121 publishers; the parallel to academic network architecture papers was too poignant to pass up. In my mind, an architecture is just a multi-faceted abstraction (or a suite of interconnected abstractions), where abstractions represent the “essence” of the fundamental ideas systems researchers are in the business of discovering. If I were to amend Scott’s goal statement, it would be to emphasize the research community’s role as Abstraction Merchants (reintroducing a term Dave Clark once used to describe what we do).
Second, it is interesting to compare the OS community (SIGOPS) with the networking community (SIGCOMM). At SOSP (SIGOPS’ flagship conference) I generally feel qualified to review every paper submitted to the conference, and interested in every paper presented at the conference. It doesn’t matter what the system is—it could be a storage-related system, a network-related system, a compute-related system, and so on—I’m likely to learn about (1) an emerging mismatch between application requirements and technology constraints; (2) a new abstraction that fills that void; (3) the lessons the authors learned as they reduced the abstraction to practice; and (4) an evaluation of how the resulting mechanism stacks up across some subset of the -ities (scalability, reliability, availability, and so on). There will always be room to improve the mechanism, and to write papers about those improvements, but the abstractions the community generates and hammers on at any given time are what keep the field vital. My sense, which is consistent with my reading of Scott’s position, is that SIGCOMM has become more focused on improving mechanisms within the boundaries of the existing (Internet) architecture than on introducing and exploring new abstractions. (There are exceptions, such as datacenter networking, but even in an area that could be viewed as a greenfield, the inertia of “that’s not how we do it in the Internet” weighs heavy.)
Third, I believe the biggest risk the SIGCOMM community faces right now is a narrowing opportunity to have impact. This is related to the brief discussion about “networks and networked systems” in Scott’s essay (the question being whether SIGCOMM’s scope should include the latter). The answer is obvious to both Scott and me, but the fact that he raises it as an issue is telling. It also happens to be a tension I was keenly aware of in my keynote, having just spent the previous few years shepherding the GENI project, and watching my colleagues on both sides of that question have heated arguments (and on one occasion, a shouting match) about the research opportunities inside the network versus on top of the network. There are important and hard inside-the-network research questions—we’ve been working on distributed resource allocation (aka TCP congestion control) for decades—but the more networking technology matures, the more the interesting research questions move up the stack. And as a corollary, today’s research problems may be best framed in terms of the “cloud architecture” rather than the “Internet architecture”; the vocabulary we use can itself be limiting.
My takeaway is that focusing on the “mismatch between… SIGCOMM's goals… and what our current practices achieve” will not be fruitful until we are certain we understand our goals. Crafting a policy about how reviewers should evaluate papers describing new abstractions, new architectures, and new systems will not make a difference unless the community truly values that work, even when (1) it’s difficult to identify an immediate path to deployment or imagine the associated business model, and (2) it falls outside the traditional boundaries of what is considered to be in-scope. If we can reach consensus on this, then I believe the form (practice) will follow.
For an example of the creation of new abstractions, it’s hard to think of one more important than the packet, the origins of which feature in Bruce’s talk on 60 years of Networking. On returning from Edinburgh, he made a recording of the talk available on Peertube (the Fediverse’s streaming video platform, also discussed in the talk). You can hear about the creation of Systems Approach and what we think are our main current challenges in this podcast. And sticking with our theme of the Internet’s evolution, we enjoyed this piece from Cory Doctorow, which talks about how, under different conditions and constraints, the old, good Internet could have given way to a new, good Internet. Perhaps we can still create those conditions.