Digital transformation is driving fundamental change for organizations of all sizes and across all industries. As engagement with customers, suppliers and partners is increasingly conducted through digital channels, modern organizations are almost completely dependent on their applications and infrastructure. Collecting, monitoring and correlating the telemetry data generated by computing infrastructure and applications, together with business data, is essential to operating as a digitally focused organization.
The digitization of business processes relies on infrastructure and applications performing as expected; this is no longer merely important but increasingly mission-critical. Observability is the ability to see the current state of infrastructure and application performance based on telemetry data, including logs, traces and metrics. It gives organizations the visibility to confirm they are meeting uptime and service-level agreements, and it provides a foundation for effective and efficient digital transformation.
The primary benefit of observability comes from using telemetry data at high dimensionality and cardinality to reduce the time it takes to detect and resolve IT infrastructure and application issues. Observability has historically been of interest primarily to operations teams, but failing to combine telemetry data with business data prevents organizations from gaining insight into the business impact of performance issues and failures, and from addressing them appropriately.
Observability is increasingly important for business decision-makers as organizations combine machine-generated telemetry data with business data to understand how a system outage or application performance degradation affects their ability to conduct digital business. We assert that through 2026, more than half of organizations will increase their investment in observability technology to accelerate the value generated from telemetry data. However, investing in observability technology alone is not enough to realize its full business value.
Storing all telemetry data in a single data store at high dimensionality and cardinality, at scale, is hard enough. Combining business data with machine-generated telemetry data adds even greater complexity. Breaking down silos requires observability platforms that provide not only high levels of data dimensionality and cardinality, but also support for open standards to facilitate data collection and for an open schema that can be extended with additional business context and value.
Monitoring more data from more data sources does not necessarily generate more insight unless the data also provides dimensions that can be mapped to attributes that matter to the business (such as customer and product IDs). Additionally, the greater the cardinality of the data, the more business value it provides. For example, while low-cardinality date and time stamps might tell an organization how many customers were affected by an outage, high-cardinality customer IDs will tell it which specific customers were affected.
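To illustrate, the sketch below shows one way a development team might attach high-cardinality business attributes to telemetry using the OpenTelemetry Python SDK. The service name, function and attribute keys (app.customer.id, app.product.id) are illustrative assumptions, not names defined by any standard.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

    # Configure a tracer provider that simply prints finished spans to the console.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")

    def process_order(customer_id: str, product_id: str) -> None:
        # Record business identifiers as span attributes so telemetry can be
        # sliced by specific customer or product, not just counted in aggregate.
        with tracer.start_as_current_span("process_order") as span:
            span.set_attribute("app.customer.id", customer_id)
            span.set_attribute("app.product.id", product_id)
            # ... business logic would run here ...

    process_order("cust-48213", "prod-0091")

Because each span carries the specific identifiers, an outage investigation can surface exactly which customers and products were affected, rather than only an aggregate count.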
Providing a higher-level view of the business impact of telemetry data also relies on the ability to combine data from multiple data sources. Support for open standards is critical to interoperability between multiple computing infrastructure platforms and applications because it facilitates data ingestion and collection. Wider adoption of open telemetry standards reduces the risk of vendor lock-in and gives organizations more choice of technologies for storing and processing telemetry data.
Open standards are often facilitated through open-source development projects, which provide a focal point for multiple vendors, users and other interested parties to collaborate on the development and implementation of standards. A prime example is the CNCF's OpenTelemetry project, which focuses on standardizing metrics, logs and trace data. While open standards facilitate the ingestion of data from multiple sources, support for a common schema is also fundamentally important in ensuring that data can be integrated based on agreed-upon, normalized field names and data types for improved visibility and faster root-cause analysis.
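As a minimal sketch of what a common schema looks like in practice, the snippet below uses the OpenTelemetry Python semantic-conventions package to describe a service with standardized resource field names. The service name, version and environment values are hypothetical, and exact module paths can vary between SDK versions.

    from opentelemetry.sdk.resources import Resource
    from opentelemetry.semconv.resource import ResourceAttributes

    # Describe the emitting service using semantic-convention field names
    # (service.name, service.version, deployment.environment) so any backend
    # that understands the schema can correlate this data consistently.
    resource = Resource.create({
        ResourceAttributes.SERVICE_NAME: "checkout-service",
        ResourceAttributes.SERVICE_VERSION: "1.4.2",
        ResourceAttributes.DEPLOYMENT_ENVIRONMENT: "production",
    })

    print(resource.attributes)
    # Includes 'service.name', 'service.version' and 'deployment.environment'
    # alongside the SDK's default telemetry.* attributes.

Because the field names are normalized, data produced by different services and tools can be joined on the same keys without per-vendor mapping.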
The OpenTelemetry project has recently expanded with the convergence of Elastic Common Schema and OpenTelemetry Semantic Conventions, providing a common schema that facilitates visibility across an organization's infrastructure and applications for both observability and security. This enables organizations to break down silos, increase visibility, and connect operational and business domains, facilitating improved collaboration between operations and development teams.
Monitoring and analyzing telemetry data (logs, traces and metrics) generated by computing infrastructure, such as servers and networks, and by applications is essential to understanding an organization's ability to meet customer, partner and supplier expectations. Observability is integral to operations teams and developers. Business decision-makers and senior executives need to take a higher-level view of the commercial importance of telemetry data and the benefits it can provide: reducing the time spent detecting and resolving IT issues, improving productivity, accelerating innovation, and meeting customer service and support objectives alongside other business goals.
Organizations should evaluate support for open standards when assessing potential observability technology providers, including whether that support is native to the product itself and whether it is backed by collaborative open-source development communities. Support for a common schema is also integral to surfacing and integrating business context from telemetry data, and it should be a key consideration as organizations look to advance their use of observability to support digital transformation.