Opvizor, a unit of Codenotary, has extended its monitoring capabilities to cloud computing environments starting with support for Amazon Web Services (AWS) and Microsoft Azure.
Codenotary CTO Dennis Zimmer said Opvizor aggregates data so analytics can be applied more efficiently to surface performance and risk issues.
Previously, Opvizor was optimized for instances of VMware vSphere running in on-premises IT environments. Now Codenotary, a provider of a service for generating software bills of materials (SBOMs), is extending those monitoring capabilities to the cloud.
That transition comes as many organizations look to strike a balance between monitoring pre-defined metrics and adopting observability platforms that make it possible to surface unknown issues that could adversely affect IT environments.
It’s still early days for observability, so Opvizor is focused on meeting most IT teams where they are today as they further embrace cloud computing, said Zimmer.
Observability, in general, promises to query not just logs and metrics but also distributed traces that are applied to specific processes. In that way, observability allows organizations to detect patterns that have not been predefined. In effect, an observability platform makes it simpler to debug application environments where the number of dependencies between services is becoming too complex to track manually, much less decipher. Without the aid of an observability platform, it can take weeks to discover the exact cause of an application’s performance degradation.
Of course, those applications will need to be instrumented using agent software capable of generating the data observability platforms require. Fortunately, open source projects such as OpenTelemetry are helping to drive down the total cost of instrumentation.
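Conceptually, trace instrumentation wraps each unit of work in a timed span that records its parent, so a backend can reconstruct the dependency graph across services. The sketch below illustrates that idea using only the Python standard library; the span names and the in-memory `SPANS` list are illustrative, and a real deployment would use an SDK such as OpenTelemetry to emit spans to a collector rather than hand-rolling this.

```python
# Illustrative sketch of trace instrumentation, standard library only.
# A production system would use the OpenTelemetry SDK instead.
import time
import uuid
from contextlib import contextmanager

SPANS = []  # collected spans; a real agent would export these to a backend

@contextmanager
def span(name, parent_id=None):
    """Record a timed span so a backend can reconstruct call paths."""
    record = {"id": uuid.uuid4().hex, "parent": parent_id, "name": name}
    start = time.perf_counter()
    try:
        yield record
    finally:
        record["duration_ms"] = (time.perf_counter() - start) * 1000
        SPANS.append(record)

# Usage: nest spans to model a request that fans out to a dependency.
with span("handle_request") as req:
    with span("query_database", parent_id=req["id"]):
        time.sleep(0.01)  # simulated downstream work

# Inner spans finish (and are appended) before their parents.
print([s["name"] for s in SPANS])
```

Because each span carries its parent's ID and its own duration, a trace backend can render the request as a tree and show exactly which dependency accounted for the latency, which is the pattern-detection capability described above.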
However, in the absence of observability, most IT teams will rely on existing monitoring tools. Opvizor is making a case for centralizing monitoring across multiple platforms to reduce costs, noted Zimmer.
Heading into 2023, each IT team will need to decide how and when to rely on traditional monitoring tools versus observability platforms. Many observability platform providers justify the acquisition of a new platform based on cost and the ability to eliminate the need for multiple monitoring tools. However, it remains to be seen whether IT organizations will instead prefer to employ monitoring tools that track predefined metrics alongside observability platforms.
Regardless of approach, the reality is that IT environments are becoming more complex to manage. In addition to running workloads in multiple cloud platforms and on-premises IT environments, organizations are deploying emerging cloud-native applications based on containers alongside existing monolithic applications running on legacy virtual machines. As workloads spread across multiple platforms, the total cost of IT starts to rise. One way to contain those costs is to standardize on a set of tools that can be applied to multiple platforms.
The pressure to rationalize toolsets only increases during any period of economic uncertainty. The challenge, of course, is convincing one team to give up their preferred tool in favor of another to reduce costs.