Congestion Control And Internet Modeling
Congestion control has been on my mind a lot over the last year, given our dependence on a smooth-running Internet to work from home, connect with friends, and watch a lot of streaming entertainment. But it is particularly on our minds at Systems Approach, LLC, as we (Larry Peterson, Larry Brakmo, and I) have started the latest book in our series, focussed on Congestion Control. With that in mind, I wanted to revisit some content I worked on a few months ago.
Fractals are everywhere, including in the Internet's traffic patterns. Photo by Enrico Sottocorna on Unsplash
Back in another era, at the end of 2019, I was writing one of those end-of-year prediction posts, and I decided to say something about the future of the Internet. I don’t regard myself as especially bold when it comes to predictions – unlike the inspiration for my post, the inventor of Ethernet, Bob Metcalfe. Metcalfe famously predicted in 1995 that the Internet would soon collapse, and publicly ate his printed words when proven wrong. My prediction was “I confidently predict that the Internet is not going to collapse in my lifetime”, which on reflection was a bolder statement than I realised at the time.
Little did I know how much the world would change within a few months of that prediction. As COVID-19 spread across the globe, a large proportion of the world’s workforce moved to working from home, Zoom and Slack became primary means of communication for knowledge workers, and online video streaming – which already accounted for 60% of Internet traffic pre-COVID – suddenly became massively more important to hundreds of millions of people.
Adding capacity in advance
To the surprise of many, the Internet has handled this incredibly well. The Atlantic did a good piece on this, pointing out that the Internet was built to withstand a wide range of failures (although the claim that it was designed to withstand nuclear war has been well debunked). There are a lot of reasons the Internet has fared so well, including the pioneering work on congestion control that I touched on in my pre-COVID post. But another aspect jumped out at me as I read The Atlantic: the Internet has generally been built with a lot of free capacity. This might seem wasteful, and it’s not how we build most other systems. Highways near major cities, for example, are normally at or over capacity at rush hour, and whenever a new lane is added, it’s not long before new traffic arrives to absorb the added capacity.
Part of the reason the Internet tends to have a lot of free capacity is that we have come to expect traffic to grow so rapidly that capacity must be continually added in advance of demand. From 2014 to 2020, AT&T’s network traffic grew from 56 petabytes per day to 426 petabytes per day – roughly doubling every two years. If your traffic load is growing that fast, you always need to be planning ahead of that growth – installing new fibre, upgrading switches, and so on – in anticipation of the new traffic that will arrive soon.
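As a quick sanity check on that doubling claim, here is a minimal Python sketch (the petabyte figures come from the paragraph above; the script itself is just illustrative) that computes the implied doubling time:

```python
# Back-of-the-envelope check of the AT&T growth figures quoted above.
import math

traffic_2014 = 56.0    # petabytes per day
traffic_2020 = 426.0   # petabytes per day
years = 2020 - 2014

growth = traffic_2020 / traffic_2014        # ~7.6x over six years
doubling_time = years / math.log2(growth)   # ~2.05 years

print(f"{growth:.1f}x growth over {years} years")
print(f"implied doubling time: {doubling_time:.2f} years")
```

A 7.6x increase over six years works out to doubling roughly every two years, consistent with the claim above.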
But there is another interesting bit of Internet history that comes into play when you try to plan for traffic growth, and it concerns traffic models. Just like road traffic, Internet traffic is not constant from one hour to the next – or even from one second to the next. When I started my networking career in the 1980s, researchers were just starting to develop models of how Internet traffic behaves. Models for the telephone network were well established, robust, and relatively straightforward to analyse; as a result, some of the early models for the Internet (and other packet-switched networks) simply borrowed ideas from the telephony world. These models – which typically assume that arrivals form a Poisson process – also describe lots of real-world systems, such as people arriving to join a queue at a bank.
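To make the Poisson assumption concrete, here is a small illustrative sketch (my own construction, not drawn from any particular telephony text) in which the gaps between arrivals are independent and exponentially distributed:

```python
# Illustrative Poisson arrival process: independent, exponentially
# distributed gaps between arrivals -- the core assumption behind
# classic telephony traffic models.
import random

def poisson_arrivals(rate, duration):
    """Return arrival times (seconds) of a rate-per-second Poisson process."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(rate)   # exponential inter-arrival gap
        if t > duration:
            return arrivals
        arrivals.append(t)

# e.g. calls arriving at an exchange at an average of 5 per second
times = poisson_arrivals(rate=5.0, duration=60.0)
print(f"{len(times)} arrivals in 60s (expect ~300 on average)")
```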
The phenomenon of self-similarity
By the early 1990s, it was becoming apparent that packet networks were not well described by the models that had served the telephone network for decades. As one example, Vern Paxson and Sally Floyd published “Wide-Area Traffic: The Failure of Poisson Modeling”, contributing to a growing consensus that Internet traffic was much more “bursty” – with packets arriving in clumps – than early models had assumed. Furthermore, this burstiness displays self-similarity. Self-similarity is a property of fractals: when you zoom in, you keep seeing similar complexity at finer resolutions. For Internet traffic, this means that at any time scale, from microseconds to hours, you will see similar sorts of complexity.
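Here is an illustrative simulation (my own sketch, not from the Paxson and Floyd paper) of why this matters: Poisson traffic smooths out as you aggregate over longer intervals, while traffic with heavy-tailed burst sizes stays bursty at every time scale:

```python
# Compare Poisson traffic with heavy-tailed bursty traffic as we
# aggregate 1ms bins into progressively coarser time scales. The
# coefficient of variation (std/mean) of Poisson counts shrinks like
# 1/sqrt(scale); the heavy-tailed series stays bursty at all scales.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000                                  # one million 1ms bins

poisson = rng.poisson(10, n).astype(float)     # ~10 packets/ms on average

bursty = np.zeros(n)
starts = rng.random(n) < 0.01                  # a burst begins in ~1% of bins
bursty[starts] = rng.pareto(1.2, starts.sum()) * 100   # heavy-tailed sizes
bursty *= poisson.mean() / bursty.mean()       # normalise to the same load

for scale in (1, 10, 100, 1000):               # 1ms bins up to 1s bins
    for name, series in (("poisson", poisson), ("bursty ", bursty)):
        agg = series.reshape(-1, scale).sum(axis=1)
        print(f"{name} @ {scale:4d}ms: CoV = {agg.std() / agg.mean():.2f}")
```

With shape parameter 1.2, the Pareto burst sizes have a finite mean but infinite variance, which is what keeps the aggregated series bursty even at coarse time scales.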
The practical consequence of self-similarity in network traffic is that bursts of traffic arrive at all time scales. And this in turn means that the essential resources in a network – link bandwidth and packet buffer memory in switches or routers – can be overwhelmed by those bursts unless there is substantial free capacity. One of the key implications of this is that it’s not realistic to run a network at anywhere close to 100% utilisation on average – it has to run at a lower average utilisation to allow room to absorb the bursts when they come. In a sense, the results of this pioneering work on models told us that we had to be prepared for sudden changes – which is what happened when the world shifted to working from home.
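Even the simple memoryless M/M/1 queueing model – which understates the problem, since self-similar traffic behaves worse – shows why headroom matters: the mean number of packets in the system is ρ/(1−ρ), which blows up as utilisation ρ approaches 100%. A quick illustration:

```python
# Textbook M/M/1 result: mean number in system = rho / (1 - rho).
# Delay grows in proportion (by Little's law), so running a link near
# 100% utilisation means huge queues and huge delays.
for rho in (0.5, 0.7, 0.9, 0.95, 0.99):
    occupancy = rho / (1 - rho)
    print(f"utilisation {rho:.0%}: mean occupancy {occupancy:6.1f} packets")
```

Going from 50% to 99% utilisation takes the mean occupancy from 1 packet to 99 – and real, bursty traffic makes the blow-up even sharper.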
As I was preparing this note, a paper studying the detailed effects of lockdown on Internet traffic was published. While its authors saw some dramatic increases in overall Internet traffic at certain times, they also found a shift in when and where traffic peaks – i.e., bursts – occurred. The fact that the Internet did so well had a lot to do with the existence of free capacity that could absorb these shifts. However, many other factors also helped, such as end-to-end congestion avoidance and the Internet’s architectural resilience to failure. What is clear is that we are all benefiting from decades of research and engineering as we leverage the Internet to stay connected despite such unanticipated changes to our working and living environments.
Our new book traces the history of congestion control and avoidance from the Internet's early days to today. This work is ongoing, and absolutely foundational to the success of the Internet. Feel free to check out the early drafts on GitHub and contribute your own ideas if you can!