Does Network Slicing Solve a Real Problem?
As we’ve been hinting for a while, our new book on 5G is asymptoting towards completion, and this week we decided it was time to unleash the web version on the world. One topic you have to address in a book on 5G is network slicing, which gets some people very excited (e.g., those who think they can use it to drive new business) and leaves others completely cold (nothing to see here). This week we attempt to explain an admittedly complex topic at a high level and ask ourselves: is it useful?
There has been plenty of hype around 5G, and indeed one of our objectives in writing a book about it (in Larry’s case, a second book) is to separate the useful and innovative aspects of mobile networking from the marketing. Network slicing is one of those 5G topics that has been aggressively talked up by the industry, yet I also find it causes lots of networking people to shrug their shoulders. After all, it is in the nature of packet networks that we can “slice” them in various ways, e.g., to create multiple virtual networks running on a single physical network, or to multiplex traffic from multiple streams onto a single link while allocating resources to each of those streams. Furthermore, efforts to provide guaranteed slices of bandwidth in the Internet have produced thousands of research papers while generally failing to deliver in the real world, as bandwidth inexorably increased and applications adapted to network conditions. So we can reasonably ask: is there anything new about network slicing for 5G? And then we should also ask: is it useful? For an example of an argument that it’s not, I found this article helpful (sliced bread puns notwithstanding): it makes the point that while telcos and equipment vendors have an incentive to push slicing, that doesn’t make it useful for customers.
I heard about network slicing several years ago and I was one of those people who initially shrugged their shoulders, thinking it was just multiplexing or QoS by another name. But I think there is something interesting going on here, which I came to appreciate when reviewing the original 5G book by Larry and Oguz a couple of years back. I don’t spend a lot of time thinking about the low-level details of wireless coding and modulation schemes, but their book explains it well for a computer systems person like myself. The radio is where a lot of 5G innovation has taken place, and the 5G radio design is what enables slicing to make some meaningful promises of performance. There is more to 5G slicing than the radio, but in my view the radio is the part that stands out from other efforts to allocate resources in virtual networks, so let’s take a closer look.
5G uses OFDMA (Orthogonal Frequency-Division Multiple Access) to schedule the transmission of symbols on a set of closely spaced subcarriers. A symbol conveys some number of bits depending on the modulation scheme being used. You don’t need a deep understanding of radio transmission to see that a 5G scheduler has a two-dimensional grid of opportunities to transmit symbols, since it has a choice of subcarriers (12 in the example above) as well as a choice of when to transmit any given symbol. The small boxes above are resource elements (REs), each of which represents the time to send one symbol on one subcarrier. A PRB (physical resource block) contains 7 × 12 = 84 REs: 84 opportunities to transmit a symbol. We’re glossing over a fair bit of detail here, but the big picture isn’t complicated. A scheduler is presented with data to be transmitted; it has a set of opportunities to transmit that data, and it then chooses the appropriate times and subcarriers to schedule the data for transmission. In the figure above, you could think of all the colors as different streams of data: maybe they correspond to different subscribers, or different applications, and the scheduler gets to choose how many REs to allocate to each of those streams. Furthermore, it is able to change this allocation frequently, depending on the objectives it is trying to achieve, and responding to changes in the wireless environment such as variations in signal-to-noise ratio.
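To make the scheduler’s job concrete, here is a minimal sketch in Python of dividing the 84 REs of one PRB among competing streams. Everything here is illustrative (the function name, the proportional policy, the stream names); real 5G schedulers are far more sophisticated, taking channel quality and modulation into account.

```python
# Sketch of a scheduler allocating resource elements (REs) within one PRB.
# Names and the proportional-share policy are illustrative assumptions,
# not taken from any 3GPP specification.

SYMBOLS_PER_SLOT = 7      # time dimension of one physical resource block
SUBCARRIERS = 12          # frequency dimension
RES_PER_PRB = SYMBOLS_PER_SLOT * SUBCARRIERS  # 84 transmission opportunities

def schedule_prb(demands):
    """Divide the REs of one PRB among competing streams.

    demands: dict mapping stream name -> number of REs requested.
    Returns a dict mapping stream name -> number of REs granted,
    scaling grants down proportionally if the PRB is oversubscribed.
    """
    total = sum(demands.values())
    if total <= RES_PER_PRB:
        return dict(demands)
    # Oversubscribed: grant each stream a proportional share.
    grants = {s: (d * RES_PER_PRB) // total for s, d in demands.items()}
    # Hand out REs lost to integer rounding, largest demand first.
    leftover = RES_PER_PRB - sum(grants.values())
    for s in sorted(demands, key=demands.get, reverse=True):
        if leftover == 0:
            break
        grants[s] += 1
        leftover -= 1
    return grants

# 110 REs demanded, only 84 available: every stream is scaled back.
grants = schedule_prb({"video": 60, "voice": 20, "telemetry": 30})
```

The key point is simply that the scheduler has a fixed grid of opportunities per PRB and full freedom in how it hands them out, slot after slot.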
Once you understand this high-level view of scheduling, it’s not hard to understand how the resources of the radio network can be virtualized. There is an analogy to a hypervisor, which divides up the resources of a physical server and allocates them to a set of virtual machines. A “wireless hypervisor” divides up the resources of the radio network (the resource elements shown above) and allocates them to a set of virtual schedulers. Each virtual scheduler can be given some guaranteed minimum level of resources, so it gets to make its own guarantees. A given virtual scheduler does not necessarily use its allocated resources at all times, and unused resources can be made available to other schedulers that can use them. If this were a standard queuing system, we would say that it is work-conserving (or at least that we have that option).
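The work-conserving behavior described above can be sketched as follows. Each slice has a guaranteed minimum number of REs and a current demand; REs guaranteed to an idle slice are redistributed to backlogged slices rather than sitting unused. The slice names, guarantees, and redistribution order are all hypothetical choices for illustration.

```python
# Sketch of "wireless hypervisor" behavior: each slice has a guaranteed
# minimum share of REs, and unused REs are redistributed to slices with
# unmet demand (work-conserving). Numbers and names are illustrative.

RES_PER_PRB = 84

def allocate(slices):
    """slices: dict mapping name -> (guaranteed_res, demanded_res).

    Returns name -> granted REs. Each slice first receives
    min(guarantee, demand); leftover REs then go to slices that still
    have unmet demand, so no RE sits idle while work is waiting.
    """
    grants = {n: min(g, d) for n, (g, d) in slices.items()}
    leftover = RES_PER_PRB - sum(grants.values())
    # Work-conserving pass: give spare REs to backlogged slices.
    for n, (g, d) in sorted(slices.items()):
        extra = min(d - grants[n], leftover)
        grants[n] += extra
        leftover -= extra
    return grants

# The "iot" slice is idle this slot, so its guaranteed REs flow to
# the backlogged "video" slice instead of going to waste.
grants = allocate({"video": (40, 84), "iot": (24, 0), "voice": (20, 10)})
```

Note that the guarantees still hold: a slice that wakes up next slot gets at least its guaranteed minimum, which is what lets each virtual scheduler make promises of its own.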
Virtual schedulers as we have just described them provide a foundation for network slicing where we can make firm guarantees to a particular slice based on the resources allocated to it. We can say that a certain bandwidth is assured, and make latency guarantees since we know how long we’ll have to wait to transmit a symbol. (Guarantees are always a bit tricky in wireless networks because of the factors that are hard to control like noise, but that is part of the reason why spectrum allocation is so important: the spectrum used for cellular networks, at least when it is licensed, is relatively free of interference.)
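A back-of-envelope calculation shows how a firm RE allocation translates into bandwidth and latency numbers. The parameters below (a 0.5 ms block duration and 64-QAM at 6 bits per symbol) are illustrative assumptions; real deployments use varying numerologies, and coding overhead and retransmissions are ignored here.

```python
# Back-of-envelope numbers for what a guaranteed RE allocation buys you.
# Block duration and modulation are assumptions for illustration; channel
# coding overhead and retransmissions are ignored.

BLOCK_DURATION_S = 0.5e-3   # assumed time to send the 7 symbols of one PRB
BITS_PER_SYMBOL = 6         # 64-QAM carries 6 bits per symbol (pre-coding)

def guaranteed_rate(res_per_block):
    """Bits/s implied by a firm grant of REs in every scheduling block."""
    return res_per_block * BITS_PER_SYMBOL / BLOCK_DURATION_S

# A slice guaranteed 20 of the 84 REs in every block:
rate = guaranteed_rate(20)          # 240,000 bits/s
worst_case_wait = BLOCK_DURATION_S  # next grant is at most one block away
```

The latency side is the part that distinguishes this from coarse bandwidth reservations: because grants recur every block, a slice knows an upper bound on how long it waits for its next transmission opportunity.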
In summary, the technically interesting part of slicing follows directly from the sophisticated scheduling and resource allocation of 5G radios, and it allows virtualization of the radio network in a way that supports firm performance guarantees for both bandwidth and latency. This reminds me of how compute virtualization, an old idea, came to be so important in the 2000s: the ability to deliver all the features of a physical machine, complete with well-defined allocation of resources (CPU, memory, etc.), made it a compelling alternative to deploying a physical machine for every application.
Which brings us back to the question of whether slicing will prove useful. I think that’s an open question, but my guess is that enterprise applications of 5G will prove to be the sweet spot (just as the enterprise is where compute virtualization took off). It’s not hard to foresee industrial settings such as factories or mines benefiting from dedicated slices for the most mission-critical applications while other slices deliver services that are more “best effort”, similar to Wi-Fi. To keep things interesting, the emerging Wi-Fi 6 standards have some of the same resource allocation techniques as 5G, so it could even be that 5G and Wi-Fi will compete for these enterprise applications that have strong performance requirements.
Getting back to the question of whether slicing is a big deal or not, it is definitely a clever piece of technology that leverages the state of the art in mobile wireless communication. 5G slicing addresses the challenging problem of making firm resource commitments in a wireless environment. Whether that cleverness has an impact in the real world is something that we will have to wait to find out.
Now that the updated 5G book is online, we encourage readers to head over to the GitHub repo and submit issues or PRs to improve the book. We’re happy to take everything from typos to technical corrections. When we are happy with the state of the book, we’ll prepare it for print-on-demand and eBook download. Speaking of eBooks, we are offering a discount on our prior books (SDN, Edge Cloud Ops, and TCP Congestion Control) as eBooks purchased from our web site. We’ve also recently enabled paid subscriptions on this newsletter, so if you always wanted to support the work we do with our open source content, now you have multiple ways to do it.