4 Comments

So by IPU, are you thinking of Pensando on top of Aruba as a programmable switch? Or, with Cumulus now owned by NVIDIA, are you thinking we will see a full-blown networking engine on a DPU? The same way ESXi and NSX will run on a DPU, I think this could be an opportunity to introduce cloud services closer to the guest than at the first-hop network device. My only concern is managing all these nodes efficiently with patches, updates, and configurations; this needs to be an automation-first design. Very exciting times!!


Nicely written. In the project-emco community, a few discussions have taken place on this, mainly around the IPU/DPU role in edge computing. Since the IPU and CPU each take on workloads (infrastructure and guest workloads, respectively), it was felt that common entities need to be shared between them. That requires automating the IPU infrastructure when guest workloads come up. Due to isolation requirements, one prefers an agentless model (that is, no agent on the host). Hence the need for higher-level orchestration systems to program the IPU (with network policies, DDoS policies, service mesh, etc.) on behalf of guest workloads.
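As a concrete sketch of that agentless model: a higher-level orchestrator could render a standard Kubernetes NetworkPolicy for a guest workload and push it through the K8s API to be enforced on the IPU side, with no agent on the host. The manifest below is illustrative only; the namespace and label names are assumptions, not anything from project-emco.

```python
# Illustrative sketch: an orchestrator rendering a Kubernetes NetworkPolicy
# manifest for a guest workload. It would be applied via the K8s API (and
# realized on the IPU) rather than by an agent running on the host.
# The "guests" namespace and "app" labels are invented for this example.

def guest_network_policy(guest: str, allowed_from: str) -> dict:
    """Render a NetworkPolicy admitting ingress to pods labeled
    app=<guest> only from pods labeled app=<allowed_from>."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": f"{guest}-ingress", "namespace": "guests"},
        "spec": {
            "podSelector": {"matchLabels": {"app": guest}},
            "policyTypes": ["Ingress"],
            "ingress": [
                {"from": [{"podSelector": {"matchLabels": {"app": allowed_from}}}]}
            ],
        },
    }

policy = guest_network_policy("web", "frontend")
print(policy["metadata"]["name"])  # web-ingress
```

The point is that everything the IPU needs is expressed declaratively and delivered from outside the host, preserving the isolation boundary.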

Good to know what you think. I created an article based on the project-emco community discussions here: https://www.linkedin.com/pulse/cpu-ipu-why-multi-cluster-orchestration-becomes-super-addepalli/

Regarding P4: I would imagine that this interface is used within the IPU, between the normal-path component on the ARM cores and the networking hardware IP. As far as programming/configuring the IPU from external orchestration is concerned, I think it will continue to be done with K8s custom resources realized via a K8s operator. P4 is good for portability of infrastructure software across multiple IPUs/DPUs, and portability is important; I hope the industry keeps adding more extern features (to realize stateful packet processing, IPsec, traffic shaping, RAN DU, UPF).
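To make the custom-resource idea concrete, a sketch of what an operator might reconcile into IPU configuration is shown below. The `IpuPolicy` kind, the `ipu.example.com` API group, and every field name are hypothetical, invented purely for illustration; they are not a real CRD from any vendor or project.

```python
import json

# Hypothetical custom resource that a K8s operator could watch and translate
# into IPU/DPU configuration (e.g., P4 table entries and extern settings on
# the normal path). The API group, kind, and all spec fields are made up.
ipu_policy = {
    "apiVersion": "ipu.example.com/v1alpha1",
    "kind": "IpuPolicy",
    "metadata": {"name": "guest-web-offload"},
    "spec": {
        "target": {"workload": "web"},
        "offloads": {
            "ddosProtection": {"synRateLimitPps": 10000},
            "ipsec": {"mode": "tunnel"},
        },
    },
}

# An operator's reconcile loop would diff desired state (spec) against the
# IPU's actual state and program the hardware accordingly.
print(json.dumps(ipu_policy, indent=2))
```

The benefit of this split is that the declarative intent stays portable across IPU/DPU vendors, while the operator hides each device's P4 pipeline and extern details.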
