Meeting Modern Application Needs with Data Processing Units

Accommodating the needs of business workloads and attendant network and security functions with innovative DPU solutions.

This blog is guest authored by Brad Casemore, Research Vice President, Datacenter and Multicloud Networks with IDC.

IDC finds that most enterprises today understand that IT infrastructure must be modernized and digitized to meet the demands and imperatives of digital transformation. After all, structure – and certainly infrastructure – must follow and enable strategy.

That said, aspiring to modernize infrastructure is one thing, and actually doing it successfully is another proposition entirely. IDC sees this challenge clearly in the context of how enterprises address the complementary but sometimes competing resource requirements of business workloads and network and security functions.

While insertion and chaining of network and security functions are challenges in their own right, an added complication is that the processing of network and security functions and services claims valuable CPU cycles and resources that should be dedicated to business workloads. While important to the performance and protection of business workloads, network and security functions are secondary supporting elements that should serve the purposes of business workloads rather than detract from their effectiveness. What’s more, network and security services have their own resource requirements, which must scale elastically in distributed infrastructure environments to support the ebbs and flows of digital business.

The solution to this challenge obviously cannot involve eschewing or otherwise eliminating network and security functions, which are necessary but can individually and collectively consume a growing percentage of CPU resources. Virtualized and containerized network and security services are essential to digital infrastructure and must be accorded the resources they require to provide resilient and robust connectivity and strong security to the business. So, what is the right architectural answer to meet the challenge of prioritizing business workloads for better performance and optimization, while also ensuring that those workloads receive adequate support from integral network and security services?

An integrated, streamlined and elastically scalable offload element is required to relieve overburdened CPUs, freeing processor and memory resources to serve application workloads. In adding an offload complement to this modern IT architecture, we must ensure that it offers certain capabilities and characteristics, including:

    • Hardware-based mitigation capabilities to minimize the impact of any low-level security vulnerabilities in the central execution environment.
    • Provisioning ability for a consistent operating model across distributed processing, addressing service quality issues that can be difficult to troubleshoot and remediate quickly.
    • Built-in hardware-based isolation, for separation of network and security functions, as well as for management and control functions, protecting against attacks that might compromise the operating system or workload execution environment.

A technology that checks all these boxes is the data processing unit (DPU), also called a SmartNIC, which has emerged to meet the secure scale-out challenges posed by distributed IT architectures. In essence, a DPU is a programmable system-on-a-chip (SoC) device with hardware acceleration and a CPU complex capable of processing data.

Because DPUs are designed to operate independently of the CPU, the architectural result is that CPUs are aware of the presence of DPUs but do not control them. Consequently, the DPU controls access to physical resources, such as network interfaces, through which sensitive data can be accessed. Any payload executing on the CPU, including the kernel itself, that must gain access to those resources must go through function-offload interfaces, presented in virtualized form to the operating system environment running on the CPU. This architectural dualism allows the DPU to assume direct execution of network and security functions.
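
To make this division of labor concrete, here is a minimal sketch, assuming a Linux host on which the DPU presents its function-offload interfaces to the host as PCIe devices (for example, virtio or SR-IOV virtual functions). Under that assumption, the host kernel enumerates only those virtualized interfaces under /sys/class/net, while the physical port itself remains under the DPU’s control; the driver names shown are illustrative, not specific to any vendor’s DPU.

```python
#!/usr/bin/env python3
"""Illustrative sketch: what the host CPU "sees" when a DPU owns the physical NIC.

Assumption (not from the blog post): a Linux host where the DPU's
function-offload interfaces appear as PCIe-backed network devices
(e.g., virtio_net or SR-IOV virtual functions). The physical port is
managed by the DPU's own CPU complex and is not directly visible here.
"""
from pathlib import Path

SYSFS_NET = Path("/sys/class/net")

def host_visible_interfaces():
    """Return (name, driver) pairs for every netdev the host kernel sees."""
    result = []
    for iface in sorted(SYSFS_NET.iterdir()):
        driver_link = iface / "device" / "driver"
        # Pure software devices (loopback, bridges, veth) have no PCI driver link.
        driver = driver_link.resolve().name if driver_link.exists() else "none (software device)"
        result.append((iface.name, driver))
    return result

if __name__ == "__main__":
    for name, driver in host_visible_interfaces():
        print(f"{name:15s} driver={driver}")
```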

The challenge of accommodating the needs of business workloads and attendant network and security functions is thus solved. As a result, several compelling benefits accrue, including the following:

    • Improved and uniform resource utilization for business workloads. Offloading network and security services to the DPU ensures that CPU processing resources are fully available to enhance the performance of business workloads. By offloading compute-intensive network and security functions, along with management and control-plane functionality, from the host CPU to the DPU, business workloads receive optimal processing resources.
    • Protected architectural environment. Security is enhanced through the ability to execute network and security services on the DPU, which ensures that payload code is isolated from other components executing on the host processor.
    • Ability to support offload of additional functions and payloads. A long-term benefit of using DPUs is that they provide a foundational management fabric that spans physical, virtual and containerized compute architectures, allowing for a CPU-agnostic approach to platform offload. DPUs can be particularly beneficial in shared cloud infrastructure environments where multitenancy and workload isolation are required to host a heterogeneous mix of traditional and cloud-native workloads.
    • Automated extension of network policy. The DPU is a natural extension for SDN architectures and network-virtualization overlays, which can define policy for the creation and extension of specific network and security services that require dynamic offload for elastic scale. This is particularly useful during spikes in network demand and traffic and in providing mitigation or protection against network-security incidents.
    • Improved network performance and observability. Offloading network and security services to the DPU enables the use of hardware acceleration to enhance the performance of virtual networking and security functions such as overlay networking, load balancing and firewalling (a brief illustrative sketch follows this list). Furthermore, network visibility and analytics functions implemented on the DPU provide comprehensive visibility into all traffic flows directly on the SmartNIC.
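
As one illustrative view of what hardware offload looks like from the host side, the following sketch (not drawn from the white paper) assumes a Linux host with ethtool installed and an interface name such as eth0; it lists the offload features the kernel reports as active for that interface, such as checksum, segmentation or TC flow offload, which on a DPU- or SmartNIC-backed interface are performed in hardware rather than on the host CPU.

```python
#!/usr/bin/env python3
"""Illustrative sketch: list which NIC/DPU offload features are currently active.

Assumptions (illustrative only): a Linux host with ethtool installed; the
interface name "eth0" is a placeholder. Features reported as "on" are
handled by the adapter hardware rather than by the host CPU.
"""
import subprocess
import sys

def offload_features(interface: str) -> dict:
    """Parse `ethtool -k <interface>` output into a {feature: state} mapping."""
    output = subprocess.run(
        ["ethtool", "-k", interface],
        capture_output=True, text=True, check=True,
    ).stdout
    features = {}
    for line in output.splitlines():
        # Skip the "Features for <iface>:" header; keep "feature: state" lines.
        if ":" in line and not line.rstrip().endswith(":"):
            name, state = line.split(":", 1)
            features[name.strip()] = state.strip()
    return features

if __name__ == "__main__":
    iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"
    for name, state in offload_features(iface).items():
        if state.startswith("on"):
            print(f"{name}: {state}")
```

Running it as `python3 offloads.py eth0` prints only the features currently reported as on; the script and interface name are placeholders for illustration, not part of any specific DPU product.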

To learn more about how the DPU reconciles the resource needs of applications with the requirements of scalable network and security services, and about how Dell’s portfolio of SmartDPU offerings has been designed to address these needs, please read the IDC white paper The Rise of the DPU: Reconciling Resource Needs of Applications with Requirement for Scalable Network Functions.

Brad Casemore is IDC’s Research Vice President, Datacenter and Multicloud Networks. He covers datacenter network hardware, software, IaaS cloud-delivered network services and related technologies, including hybrid and multicloud networking software, services and transit networks. Mr. Casemore also works closely with IDC’s Enterprise Networking, Server, Storage, Cloud and Security research analysts to assess the impact of emerging IT and of converged and hyperconverged infrastructure. He researches technology areas such as Ethernet switching in the datacenter, Application Delivery Controllers (ADCs) and application-delivery infrastructure, SD-WAN, Software-Defined Networking (SDN), network virtualization, network automation, hybrid and multicloud networking, and cloud-native networking (such as Ingress Controllers, service mesh and eBPF/Cilium). In this capacity, Mr. Casemore provides ongoing research for IDC’s Continuous Information Service (CIS), market forecasts, custom consulting and Go-To-Market services.
