What Defines a Digital Twin? Understanding the Core Architectural Tenets

Learn about our strategy, standards and solution architectures that enable customers and partners to deploy solutions that deliver digital twin capabilities.

One of the key emerging technology spaces in which we are engaging with customers is the area of “Digital Twin.” At Dell Technologies, we have developed strategy, standards and solution architectures to enable customers and partners to deploy solutions that deliver digital twin capabilities within organizations. During this effort, we continue to build on the understanding that Digital Twin is not a “singular” emerging technology but rather a convergence of multiple solutions that create business outcomes and value. Some of these technologies, such as high-performance computing (HPC), have been around for a very long time. Others, such as IoT or edge computing, are newer and continue to gain traction within organizations. The Digital Twin Consortium, the industry standards body of which Dell Technologies is a founding member, defines a Digital Twin as “a virtual representation of real-world entities and processes, synchronized at a specified frequency and fidelity.” The purpose of this blog is to introduce many of the key technology enablers for Digital Twin. Some of these you may already have within your organization; for others, it is time to start planning for their implementation.

The most recognized and arguably the most important element of a Digital Twin is a capacity for simulation or modelling. Having a capability around CAD or CAE can provide a solid foundation. In the manufacturing vertical, many companies are putting their mature simulation and/or HPC capabilities at the heart of efforts to build Digital Twins for both operational environments and product development. If an organization uses HPC platforms and simulation modelling, there will already be a strong understanding and baseline capacity on which to build. A fundamental point here is that a simulation or HPC capability alone will not deliver a Digital Twin; to do that, organizations must combine it with real-time data. With that, let’s segue to the Internet of Things (IoT) and edge computing.
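To make that combination concrete, here is a minimal, purely illustrative Python sketch: a toy simulation model whose coefficient is recalibrated against a live sensor reading, so the virtual model keeps tracking the physical asset. Every name, constant and reading below is hypothetical.

```python
# A minimal sketch (not a production solver): a toy thermal model of an
# asset, recalibrated with a live sensor reading so the simulation tracks
# the physical asset. All names, constants and readings are illustrative.

def simulate_temperature(ambient_c: float, load_pct: float, heat_coeff: float) -> float:
    """Toy steady-state model: temperature rises with load."""
    return ambient_c + heat_coeff * load_pct

def recalibrate(heat_coeff: float, predicted_c: float, measured_c: float,
                load_pct: float, learning_rate: float = 0.1) -> float:
    """Nudge the model coefficient toward the observed reality."""
    error = measured_c - predicted_c
    return heat_coeff + learning_rate * (error / max(load_pct, 1e-6))

heat_coeff = 0.5                      # initial model parameter
predicted = simulate_temperature(21.0, 80.0, heat_coeff)
measured = 68.4                       # hypothetical real-time sensor reading
heat_coeff = recalibrate(heat_coeff, predicted, measured, 80.0)
print(f"updated coefficient: {heat_coeff:.3f}")
```

The detail of the model does not matter here; the point is the loop, where real-world measurements continuously correct the simulation. That loop is exactly what the real-time data from IoT provides.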

The real value in IoT, from a technical perspective, is that it can provide a great deal of data, in (near) real time, to IT systems at the edge that can then deliver insights and outcomes. Combining IoT data with historical datasets or simulation data then delivers the ultimate viewpoint on how physical environments, assets and processes are performing. Leveraging real-time data allows true “what if” scenario planning and process refinement within a Digital Twin. For many organizations deploying IoT and edge computing solutions, as well as combining IT/OT people and processes, the integration has not been straightforward; however, there is a trend today toward distributed compute architectures that support data activities like AI and normalization nearer the point of data creation, or the edge. The time for edge computing is now: solution architectures for edge will be the key to unlocking IoT and, in turn, enabling Digital Twin.
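As a hedged illustration of what “normalization nearer the point of data creation” can look like, the Python sketch below maps raw payloads from two hypothetical device types into one common schema at the edge before they are forwarded onward. The vendors, field names and units are all assumptions made for the example.

```python
# A sketch of edge-side normalization: raw payloads from two hypothetical
# device types are mapped into one common schema before being forwarded to
# the Digital Twin. Vendor names, fields and units are assumptions.

from datetime import datetime, timezone

def normalize(payload: dict) -> dict:
    """Convert vendor-specific readings into a common, SI-unit record."""
    if payload.get("vendor") == "acme":          # this device reports in °F
        temp_c = (payload["temp_f"] - 32) * 5 / 9
    else:                                        # other devices report °C
        temp_c = payload["temperature"]
    return {
        "asset_id": payload["id"],
        "temperature_c": round(temp_c, 2),
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }

readings = [
    {"vendor": "acme", "id": "pump-7", "temp_f": 154.2},
    {"vendor": "other", "id": "pump-8", "temperature": 67.9},
]
print([normalize(r) for r in readings])
```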

Another key element of a Digital Twin is the ability to interface systems. If we have historical data from our HPC solutions and real-time data from the edge, they will typically exist in very different IT architectures, applications and solution stacks. Hence, the role of APIs (Application Programming Interfaces) is critical. Realistically, multiple IT and OT systems will need to be able to “interconnect” and communicate. Over time a Digital Twin can, and probably will, become a “system of systems.” A fundamental benefit of Digital Twin is that it can address the long-running issues with technology silos and “shadow IT” that have become the bugbear of many organizations. Knowing the role of integration platforms, APIs and open source solutions is very important. Leaning into industry consortia, such as the Digital Twin Consortium, is key to begin “connecting the dots” on these solutions.
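To illustrate the interconnection point, here is a minimal sketch of pushing a state update to a hypothetical Digital Twin REST endpoint using only the Python standard library. The URL, token and payload schema are invented for the example and do not represent any particular product’s API.

```python
# A minimal sketch of API-based interconnection: POSTing a normalized state
# update to a (hypothetical) Digital Twin REST endpoint. The endpoint URL,
# token and payload schema below are illustrative assumptions.

import json
import urllib.request

def push_state(endpoint: str, token: str, state: dict) -> int:
    req = urllib.request.Request(
        url=endpoint,
        data=json.dumps(state).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # real network call; may raise
        return resp.status

status = push_state(
    "https://twin.example.com/api/v1/assets/pump-7/state",  # hypothetical URL
    "REDACTED_TOKEN",
    {"temperature_c": 67.9, "observed_at": "2024-01-01T00:00:00Z"},
)
print("HTTP", status)
```

In practice, each OT and IT system would expose (or be wrapped in) an interface like this, which is what allows the “system of systems” to grow incrementally rather than requiring one monolithic integration.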

Data is absolutely the most important element across all these emerging trends. It is critical to plan for and understand what is needed to manage, move, process, integrate, store and protect the data that enables your Digital Twin. Understanding how data flows and streams within your organization, how data coming from different systems is synchronized, and “where” your data lives are all fundamental. There are many data concepts being discussed in the context of Digital Twins. Thinking long term about building a “unified” data platform that can address the requirements of your IT/OT datasets will put you on the right path.
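One small, concrete example of that synchronization challenge: the sketch below aligns records from an OT sensor stream and an IT business system, which timestamp data at different frequencies, by bucketing both onto a shared one-minute window. The datasets are made up for illustration.

```python
# A sketch of aligning data from two systems that timestamp at different
# frequencies: both are bucketed onto a shared one-minute window so the
# Digital Twin sees a single, merged view. All records are made up.

from collections import defaultdict
from datetime import datetime

def minute_bucket(ts: str) -> str:
    return datetime.fromisoformat(ts).strftime("%Y-%m-%dT%H:%M")

ot_stream = [("2024-01-01T08:00:12", 67.9), ("2024-01-01T08:00:43", 68.1)]
it_records = [("2024-01-01T08:00:00", "order-991")]

merged: dict[str, dict] = defaultdict(dict)
for ts, temp in ot_stream:
    merged[minute_bucket(ts)].setdefault("temps", []).append(temp)
for ts, order in it_records:
    merged[minute_bucket(ts)]["order"] = order

print(dict(merged))
```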

With respect to the data, key elements that can drive more capability into a Digital Twin are an organization’s competencies in Data Analytics and Data Science. By layering these solutions on top of your data platform, you can understand what your data is telling you. An analytics capability allows you to inspect all elements of the data within the Digital Twin supporting your business outcomes, while solutions for machine learning and artificial intelligence allow you to build models and automated learning systems for “problem solving” and improvements. If you already have a competency or practice for analytics or data science in place, you can drive more learning and business improvement by providing all the data available within the Digital Twin to these platforms.
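As a simple, illustrative taste of layering analytics on twin data, the sketch below flags readings that sit more than two standard deviations from the mean. The threshold and data are placeholders, not a recommended model.

```python
# A hedged sketch of layering analytics on twin data: flagging readings
# more than two standard deviations from the mean. The threshold and the
# sample data are illustrative, not a recommended anomaly model.

from statistics import mean, stdev

def anomalies(readings: list[float], z_threshold: float = 2.0) -> list[float]:
    mu, sigma = mean(readings), stdev(readings)
    return [r for r in readings if abs(r - mu) > z_threshold * sigma]

temps = [67.9, 68.1, 68.0, 67.8, 92.5, 68.2, 68.0, 67.9, 68.1, 68.0]
print(anomalies(temps))   # the 92.5 spike is flagged
```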

Another crucial element that may be a little less obvious is how people actually use Digital Twins. Having the best available client solutions (PCs, tablets, laptops and so on) in place will make your Digital Twin easy to use, consume, interact with and refine. Much of the marketing around Digital Twin adoption jumps straight to “cool” technology like AR/VR and how you can use it to interact with simulations or “overlays” for work instructions and training. However, for many organizations, strong data visualization around KPIs can be the most important use case for getting started with Digital Twin. Having the correct compute hardware in place, be it PC, laptop, tablet, workstation, smartphone, or more advanced HMI technologies like headsets or haptics, will be critical to deriving value from Digital Twins.
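For example, a KPI as simple as asset availability can be computed from twin event data and surfaced on any of these client devices. The sketch below is illustrative only; the event data and the KPI definition are assumptions.

```python
# A small sketch of the KPI point above: computing an availability figure
# from twin event data that a dashboard could then visualize. The event
# data and KPI definition here are illustrative assumptions.

events = [("running", 420), ("down", 35), ("running", 380), ("down", 25)]  # (state, minutes)

uptime = sum(m for state, m in events if state == "running")
total = sum(m for _, m in events)
availability = uptime / total
print(f"Availability: {availability:.1%}")   # prints "Availability: 93.0%"
```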

Bear in mind, it is difficult to “buy a box” of Digital Twin. As you explore how Digital Twin can bring your organization value, understand that the above core tenets – simulation, edge and IoT, interfaces and APIs, data platforms, data science, data analytics and client solutions – will be important areas to plan for. Lean into existing capabilities and plan for the areas where you do not have expertise or solutions today. In future blogs, we will discuss the maturity levels of Digital Twin, some of the key partner and ecosystem solutions, industry vertical business outcomes, and developments in reference architectures and solutions.

At Dell Technologies we are working on solutions across all areas of Digital Twin. To learn more, please reach out to your account teams.

About the Author: Mike J Hayes

Mike J Hayes is a Senior Technical Consultant within the Dell Technologies Infrastructure Solutions Group. The primary focus of Mike’s current role is engineering and customer engagement related to Edge, IoT and Digital Twin, including partnerships and technical enablement. Mike is also responsible for technical engagement with global key customer accounts and developing solutions specific to these customers. Mike’s background is in Enterprise IT, with 20+ years’ experience working in IT and global technical presales. Key technical competencies include IoT, Edge, and the wider Dell Technologies portfolio, with specific emphasis on virtualization technologies, high-performance workloads, and End User Computing/VDI. Mike is a graduate of the University of Limerick and is based in Limerick, Ireland.