As SmartNICs become more popular, more decision makers are being asked to evaluate how SmartNICs work, specifically the different offload models related to the OVS datapath and the pros and cons of each. In this blog I will explain these various models. As we go through them, you will see that not all of them offload the OVS datapath to NIC hardware; in some cases the datapath is actually moved up into user space instead (some call this on-loading). I will cover all of them in a consistent tabular format for easy readability and comparison. Here we go…
Model 1: OVS datapath in the kernel (no offload). Summary of features, pros (+) and cons (-):

| Feature | This model | Pros (+) and cons (-) |
|---|---|---|
| Location of OVS datapath (kernel, user space, NIC) | Kernel | (+) Most mature and proven model. (+) Broadest vendor support. (+) Easier to integrate and enhance the datapath with other kernel-based networking features such as conntrack and BPF. |
| Policy enforcement using the OVS datapath (yes/no) | Yes | (+) Suitable for SDN and network virtualization. |
| OpenStack orchestration support (yes/no) | Yes | (+) Suitable for cloud-based deployments. (+) Supported by most OpenStack distributions. |
| VM hardware independence (yes/no) | Yes | (+) Uses the Virtio driver in the VM, making VMs hardware independent and enabling support for a broad array of guest operating systems as well as live VM migration. |
| Performance and CPU core use for datapath processing (Mpps, no. of cores) | 1-2 Mpps using 4 CPU cores | (-) Poor datapath performance. (-) Performance degrades rapidly with more flows and more (or more complex) policy rules. (-) Consumes many CPU cores for datapath processing, resulting in the lowest server utilization and a poor TCO. |
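To make the kernel model concrete: the kernel datapath is a flow cache that ovs-vswitchd populates from user space, and you can inspect it directly. Here is a minimal Python sketch, assuming Open vSwitch is installed and the script runs as root (output format varies by OVS version):

```python
import subprocess

# Dump the flow cache that the kernel OVS datapath is currently using.
# Requires Open vSwitch installed and root privileges.
result = subprocess.run(
    ["ovs-dpctl", "dump-flows"],
    capture_output=True, text=True, check=True,
)

# Each line is one cached flow: match fields, packet/byte counters, actions.
for line in result.stdout.splitlines():
    print(line)
```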
Model 2: kernel OVS bypassed using SR-IOV (no OVS datapath in the packet path). Summary of features, pros (+) and cons (-):

| Feature | This model | Pros (+) and cons (-) |
|---|---|---|
| Location of OVS datapath (kernel, user space, NIC) | Kernel, but not used (bypassed using SR-IOV) | (+) Mature and proven model. (+) Broadest vendor support. |
| Policy enforcement using the OVS datapath (yes/no) | No; relies on policy enforcement at the TOR switch | (-) Not suitable for SDN. (-) Not suitable for network virtualization at scale. |
| OpenStack orchestration support (yes/no) | Yes | (+) Suitable for cloud-based deployments where SDN-based policy enforcement is not required. (+) Supported by most OpenStack distributions. |
| VM hardware independence (yes/no) | No | (-) Uses a vendor-specific driver in the VM, making VMs hardware dependent. (-) Live VM migration is not supported. |
| Performance and CPU core use for datapath processing (Mpps, no. of cores) | Close to 30 Mpps; no CPU cores, as the OVS datapath is not used | (+) Excellent performance delivering packets from network ports to VMs. |
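For reference, carving out the SR-IOV virtual functions that the VMs attach to is done through a standard sysfs knob on the physical function. A minimal sketch, with "eth0" as a placeholder PF name (requires root and a NIC/driver with SR-IOV support):

```python
from pathlib import Path

# Create 4 virtual functions on a physical function. "eth0" is a
# placeholder; use your NIC's actual PF netdev name. The VFs are then
# passed through to VMs directly, bypassing the kernel OVS datapath.
PF = "eth0"  # placeholder interface name
numvfs = Path(f"/sys/class/net/{PF}/device/sriov_numvfs")

numvfs.write_text("4")          # requires root
print(numvfs.read_text())       # confirm the VF count took effect
```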
Model 3: OVS datapath in user space (OVS-DPDK). Summary of features, pros (+) and cons (-):

| Feature | This model | Pros (+) and cons (-) |
|---|---|---|
| Location of OVS datapath (kernel, user space, NIC) | User space | (+) Broad vendor support. (-) Difficult to enhance features by leveraging other kernel datapath implementations such as conntrack and BPF. |
| Policy enforcement using the OVS datapath (yes/no) | Yes | (+) Suitable for SDN and network virtualization. |
| OpenStack orchestration support (yes/no) | Yes | (+) Suitable for cloud-based deployments. (+) Supported by many OpenStack distributions. |
| VM hardware independence (yes/no) | Yes | (+) Uses the Virtio driver in the VM, making VMs hardware independent and enabling support for a broad array of guest operating systems as well as live VM migration. |
| Performance and CPU core use for datapath processing (Mpps, no. of cores) | 6-8 Mpps using 4 CPU cores | (+) Good datapath performance. (-) Performance degrades rapidly with more flows and more (or more complex) policy rules. (-) Consumes many CPU cores for datapath processing, resulting in low server utilization and a poor TCO. |
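For orientation, moving the datapath into user space is a configuration choice in OVS itself: enable DPDK support, then create bridges with the netdev datapath type. A hedged sketch using the standard ovs-vsctl knobs; the bridge name, port name, and PCI address are placeholders, and OVS must be built with DPDK support:

```python
import subprocess

def vsctl(*args):
    # Thin wrapper around ovs-vsctl; requires root.
    subprocess.run(["ovs-vsctl", *args], check=True)

# Enable the DPDK-enabled user space datapath in ovs-vswitchd
# (some OVS versions require restarting ovs-vswitchd afterwards).
vsctl("set", "Open_vSwitch", ".", "other_config:dpdk-init=true")

# Create a bridge that uses the user space (netdev) datapath.
vsctl("add-br", "br0", "--", "set", "bridge", "br0", "datapath_type=netdev")

# Attach a physical port by PCI address (placeholder address shown).
vsctl("add-port", "br0", "dpdk-p0", "--",
      "set", "Interface", "dpdk-p0", "type=dpdk",
      "options:dpdk-devargs=0000:01:00.0")
```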
Model 4: kernel OVS datapath offloaded to the SmartNIC via TC flower, with SR-IOV connectivity to the VMs. Summary of features, pros (+) and cons (-):

| Feature | This model | Pros (+) and cons (-) |
|---|---|---|
| Location of OVS datapath (kernel, user space, NIC) | Kernel and SmartNIC; falls back to kernel OVS for control traffic and the first packet of new flows | (+) Uses the kernel-compliant, upstreamed TC flower-based offload of the OVS datapath. (+) The offload mechanism is included in the RHEL 7.5 distribution. (+) Kernel-based offload makes it easier to enhance features by leveraging other kernel datapath implementations such as conntrack and BPF. (-) Available in the latest kernel releases only, until backported to older versions. |
| Policy enforcement using the OVS datapath (yes/no) | Yes | (+) Suitable for SDN and network virtualization. |
| OpenStack orchestration support (yes/no) | Yes | (+) Suitable for cloud-based deployments. (+) Supported with newer OpenStack distributions such as Queens and RHOSP 13. (-) Available in the latest OpenStack releases only, until backported to older versions. |
| VM hardware independence (yes/no) | No | (-) Uses a vendor-specific driver in the VM, making VMs hardware dependent. (-) Live VM migration is not supported. |
| Performance and CPU core use for datapath processing (Mpps, no. of cores) | 25-28 Mpps; no CPU cores used for datapath processing | (+) Excellent datapath performance. (+) Performance holds up well with more flows and more (or more complex) policy rules. (+) Frees all CPU cores from datapath processing, resulting in high server utilization and an improved TCO. |
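The TC flower offload in this model is switched on with two standard knobs, one on the NIC driver and one in OVS, after which offloaded flows can be listed from OVS. A minimal sketch, with "eth0" as a placeholder for the SmartNIC's PF netdev:

```python
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

PF = "eth0"  # placeholder PF netdev name of the SmartNIC

# Allow the NIC driver to accept TC flower rules in hardware.
run("ethtool", "-K", PF, "hw-tc-offload", "on")

# Tell OVS to push kernel datapath flows down via TC flower
# (ovs-vswitchd must be restarted for hw-offload to take effect).
run("ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true")

# Inspect which datapath flows ended up in hardware.
run("ovs-appctl", "dpctl/dump-flows", "type=offloaded")
```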
Model 5: TC flower-based SmartNIC offload, with Virtio connectivity to the VMs (relay agent or vDPA). Summary of features, pros (+) and cons (-):

| Feature | This model | Pros (+) and cons (-) |
|---|---|---|
| Location of OVS datapath (kernel, user space, NIC) | Kernel and SmartNIC; falls back to kernel OVS for control traffic and the first packet of new flows | (+) Uses the kernel-compliant, upstreamed TC flower-based offload of the OVS datapath. (+) The offload mechanism is included in the RHEL 7.5 distribution. (+) Kernel-based offload makes it easier to enhance features by leveraging other kernel datapath implementations such as conntrack and BPF. (-) Available in the latest kernel releases only, until backported to older versions. |
| Policy enforcement using the OVS datapath (yes/no) | Yes | (+) Suitable for SDN and network virtualization. |
| OpenStack orchestration support (yes/no) | Yes | (+) Suitable for cloud-based deployments. (+) Supported with newer OpenStack distributions such as Queens and RHOSP 13. (-) Available in the latest OpenStack releases only, until backported to older versions. |
| VM hardware independence (yes/no) | Yes | (+) Uses the Virtio driver in the VM, making VMs hardware independent and enabling support for a broad array of guest operating systems as well as live VM migration. Uses one of two modes: a relay agent with vhost/DPDK and Virtio 1.0, or vDPA (vhost datapath acceleration) with Virtio 1.1. (-) Consumes 1-3 CPU cores for the relay agent in user space. |
| Performance and CPU core use for datapath processing (Mpps, no. of cores) | 25-28 Mpps; no CPU cores used for datapath processing | (+) Excellent datapath performance. (+) Performance holds up well with more flows and more (or more complex) policy rules. (+) Frees all CPU cores from datapath processing, resulting in high server utilization and an improved TCO. |
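In the relay-agent mode, the VM side is plain vhost-user: the relay agent (vendor software, not shown here) serves a vhost-user socket, and QEMU attaches a standard virtio-net device to it. A sketch of the QEMU side only; the socket path and disk image are placeholders, and vhost-user requires hugepage-backed, shared guest memory:

```python
import subprocess

# Attach a VM to a vhost-user socket served by a Virtio relay agent.
# Socket path and image name are placeholders; hugepages must be mounted.
sock = "/tmp/vhost-user0"
qemu_cmd = [
    "qemu-system-x86_64", "-enable-kvm", "-m", "1G",
    "-object", "memory-backend-file,id=mem0,size=1G,"
               "mem-path=/dev/hugepages,share=on",
    "-numa", "node,memdev=mem0",
    "-chardev", f"socket,id=char0,path={sock}",
    "-netdev", "type=vhost-user,id=net0,chardev=char0",
    "-device", "virtio-net-pci,netdev=net0",
    "-drive", "file=guest.img,format=qcow2",
]
subprocess.run(qemu_cmd, check=True)
```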
Model 6: TC flower-based SmartNIC offload, with Direct Virtio connectivity to the VMs. Summary of features, pros (+) and cons (-):

| Feature | This model | Pros (+) and cons (-) |
|---|---|---|
| Location of OVS datapath (kernel, user space, NIC) | Kernel and SmartNIC; falls back to kernel OVS for control traffic and the first packet of new flows | (+) Uses the kernel-compliant, upstreamed TC flower-based offload of the OVS datapath. (+) The offload mechanism is included in the RHEL 7.5 distribution. (+) Kernel-based offload makes it easier to enhance features by leveraging other kernel datapath implementations such as conntrack and BPF. (-) Available in the latest kernel releases only, until backported to older versions. |
| Policy enforcement using the OVS datapath (yes/no) | Yes | (+) Suitable for SDN and network virtualization. |
| OpenStack orchestration support (yes/no) | Yes | (+) Suitable for cloud-based deployments. (+) Supported with newer OpenStack distributions such as Queens and RHOSP 13. (-) Available in the latest OpenStack releases only, until backported to older versions. |
| VM hardware independence (yes/no) | Yes | (+) Uses the Virtio driver in the VM, making VMs hardware independent and enabling support for a broad array of guest operating systems as well as live VM migration. Uses a hardware implementation of Direct Virtio on the NIC. |
| Performance and CPU core use for datapath processing (Mpps, no. of cores) | 25-28 Mpps; no CPU cores used for datapath processing | (+) Excellent datapath performance. (+) Performance holds up well with more flows and more (or more complex) policy rules. (+) Frees all CPU cores from datapath processing, resulting in high server utilization and an improved TCO. |
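One way to see the hardware-independence claim from the guest's perspective: whichever Virtio model the host uses, the VM should only see a standard virtio-net PCI device, with no vendor-specific driver required. A trivial check to run inside the guest:

```python
import subprocess

# Inside the guest, virtio-net shows up as a standard "Virtio network
# device" in the PCI listing, regardless of how the host implements it.
pci = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
for line in pci.stdout.splitlines():
    if "virtio" in line.lower():
        print(line)
```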
Model 7: OVS datapath in user space (OVS-DPDK) with partial offload via the DPDK Flow API. Summary of features, pros (+) and cons (-):

| Feature | This model | Pros (+) and cons (-) |
|---|---|---|
| Location of OVS datapath (kernel, user space, NIC) | User space, with partial offload of datapath functions to NIC or SmartNIC hardware using the DPDK Flow API | (-) Difficult to enhance features by leveraging kernel datapath implementations and innovations such as conntrack and BPF. |
| Policy enforcement using the OVS datapath (yes/no) | Yes | (+) Suitable for SDN and network virtualization. |
| OpenStack orchestration support (yes/no) | Yes | (+) Suitable for cloud-based deployments. |
| VM hardware independence (yes/no) | Yes | (+) Uses the Virtio driver in the VM, making VMs hardware independent and enabling support for a broad array of guest operating systems as well as live VM migration. |
| Performance and CPU core use for datapath processing (Mpps, no. of cores) | 10-15 Mpps; 4-8 CPU cores used for datapath processing | (+) Good datapath performance, at full line rate for 10GbE networks. (-) Performance degrades rapidly with more flows and more (or more complex) policy rules. (-) Consumes many CPU cores for datapath processing, resulting in low server utilization and a poor TCO. |
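The DPDK Flow API (rte_flow) that this partial offload rides on can be exercised without writing C, via testpmd's flow command syntax. A sketch that creates one rule steering matching packets to a receive queue; whether the rule actually lands in NIC hardware depends on the PMD and the NIC, and the PCI address and core list below are placeholders:

```python
import subprocess
import tempfile

# One rte_flow rule in testpmd's "flow" syntax: steer IPv4 packets from
# 192.168.0.1 arriving on port 0 into RX queue 1.
commands = ("flow create 0 ingress pattern eth / "
            "ipv4 src is 192.168.0.1 / end "
            "actions queue index 1 / end\n")

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(commands)
    cmdfile = f.name

subprocess.run([
    "dpdk-testpmd",
    "-a", "0000:01:00.0",       # placeholder PCI address of the NIC
    "-l", "0-1",                # placeholder core list
    "--",
    "--rxq=2", "--txq=2",       # need more than one queue for the action
    f"--cmdline-file={cmdfile}",
    "-i",                       # stay interactive after running the file
], check=True)
```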
Model 8: full OVS (control plane and datapath) running on the SmartNIC, for bare metal use cases. Summary of features, pros (+) and cons (-):

| Feature | This model | Pros (+) and cons (-) |
|---|---|---|
| Location of OVS datapath (kernel, user space, NIC) | SmartNIC; the control plane runs in user space on an ARM or MIPS CPU on the SmartNIC, the datapath slow path runs in the kernel or user space on that CPU, and the fast path runs on the accelerator chip available on some SmartNICs | (+) Useful for bare metal cloud applications where the service provider has no control over which host operating system is used on the server. (+) Best when SDN and cloud orchestration are implemented via the SmartNIC (not through the host). |
| Policy enforcement using the OVS datapath (yes/no) | Yes | (+) Suitable for SDN and network virtualization. (-) Non-bare-metal use case: when the host runs the OVS control plane and datapath, the version running on the SmartNIC can easily go out of sync with the version on the host, causing anomalies and feature inconsistencies related to SDN deployments. |
| OpenStack orchestration support (yes/no) | Yes | (+) Suitable for cloud-based deployments. (-) Non-bare-metal use case: when the host runs the OVS control plane and datapath, the version running on the SmartNIC can easily go out of sync with the version on the host, causing anomalies and feature inconsistencies related to cloud orchestration. |
| VM hardware independence (yes/no) | No | (-) Uses a vendor-specific driver in the VM, making VMs hardware dependent. (-) Live VM migration is not supported. |
| Performance and CPU core use for datapath processing (Mpps, no. of cores) | 15-20 Mpps; no host CPU cores used for datapath processing, and the host-side OVS control plane processing (normally less than one CPU core) is eliminated as well | (+) Good datapath performance, at full line rate for 10GbE networks with small packets and 25GbE networks with mid-sized packets. |