As hardware advances and diversifies, we’re entering what many see as a new golden age of computer architecture. Yet an ever-expanding hardware landscape can be daunting for software developers: while hardware diversity is wonderful for price and innovation, it can send software complexity spiraling on the back end. So many software tools are specific to the hardware they run on that a world where CPUs, GPUs, FPGAs, ASICs, DSPs and more are freely intermingled feels overwhelming, an invitation to interoperability headaches and mounting technical overhead. The question on developers’ minds is how to navigate this constantly changing ecosystem and take advantage of the new world of hardware acceleration without driving themselves crazy in the process.
The challenge for developers is that, historically, every time a world-changing piece of silicon shows up, it brings a new software stack of tools with it. With heterogeneous hardware innovation already moving at a nearly untrackable pace, keeping up on the software end is beginning to feel untenable. In response, a new philosophy is emerging: We should be able to have a common set of tools, libraries, frameworks and compilers that can address a wide variety of accelerators in a “silicon-agnostic” way.
Anyone doing DevOps work today should be asking, “Are the tools we are adopting ready for the heterogeneous hardware future?” Rather than using proprietary stacks exclusive to individual vendors’ concepts of how acceleration should work, we should be thinking about abstraction layers that allow us to move to open software stacks that work across multiple vendors and architectures.
This philosophy is central to a number of movements, oneAPI being one example, that call for libraries, compilers and analysis tools that are more versatile. It’s incumbent on any future-conscious developer to examine the tools they’re using and ask whether those tools are preparing them for the future or locking them into a single solution that excludes them from other innovations.
When it comes to navigating the changing developer landscape, it’s particularly important to look at the multiple-vendor piece of the puzzle. One of the signature strengths of computing and electronics is the ability to interoperate. Without compatibility between platforms, we end up locked into single solutions that don’t take advantage of the diverse landscape of innovation available to us. If systems are built from the ground up with mixing and matching in mind, we don’t have to change our software tools to adapt to new technologies; we can keep using the same languages, libraries and analysis tools we are used to.
It’s important we take on this work now because hardware diversity is not just at an all-time high; its momentum is only growing. In the past, when the hardware landscape was more limited, switching to different toolsets for different CPUs was a feasible, if inelegant, solution. Today’s computers, however, contain computational capabilities from multiple vendors. If my tools don’t stretch across all of their ecosystems, I end up in a complex mess very quickly.
A particularly relevant example can be found in supercomputing, one of the most exciting and fast-moving areas of innovation in the developer community. What is the most exciting thing about supercomputing? It’s heterogeneous, allowing multiple architectures to take on specialized hardware acceleration for the different tasks for which they are best suited. The Aurora supercomputer, for example, is planned to achieve up to 2 exaflops of performance, a staggering number (the first supercomputer I helped design had 1 teraflop, one two-millionth of Aurora’s performance). The amazing capabilities of these machines are only possible because of advancements in heterogeneous memory technologies, storage capabilities, and sophisticated CPUs and GPUs all working together seamlessly.
The advancements we’re seeing in supercomputers, as always, will ultimately filter down into lower-level systems. In the near future, regardless of the type of work you do, enormous computing power is coming online. We need to make sure that our libraries and tools can take full advantage of it when it arrives.
Two great places to learn more about efforts to make heterogeneous programming open, multivendor and multiarchitecture are https://sycl.tech/ and https://www.oneapi.io. There are numerous online tutorials, a link to the SYCL book (free PDF download) and much more.