
Unlocking the Value of Telemetry Data with Observability Pipelines

Written by ISG Software Research | Dec 2, 2022 4:06:00 PM

Analyst Viewpoint

The potential value of telemetry data is well understood. Monitoring and analyzing the data generated by computing infrastructure, including servers, networking equipment, IoT devices and applications, can help organizations identify issues and ensure their systems are operational, secure and performing as expected. Observability platforms provide dedicated environments for monitoring telemetry data and measuring the state of computing infrastructure. By analyzing telemetry data in the form of logs, traces and metrics, organizations can detect issues and act faster to remedy failures and performance problems.

The benefits of analyzing telemetry data are not limited to ensuring infrastructure remains operational, secure and performant, however. Modern organizations are almost completely dependent on their applications and infrastructure, whether on-premises or in the cloud. For example, the applications and infrastructure a logistics company uses to provide information about stock levels, deliveries and customers are just as mission critical as its trucks, loading equipment and drivers.

Telemetry data can also be used to identify infrastructure problems that impact quality of service, helping organizations understand their ability to serve customers, partners and suppliers. Unlocking the full value of telemetry data and translating it into business decisions is easier said than done, however.

The complexity of modern IT infrastructure means telemetry data needs to be ingested and analyzed from an enormous range of computing equipment, sensors and applications, all of which are distributed across on-premises and cloud computing environments. Given the volume and range of telemetry data and the rate at which it is increasing, specialist skills are required to understand the dependencies between infrastructure components, separate the signal from the noise, and interpret and act upon it.

As with any analytics initiative, considerable time and effort must be allocated to cleaning and preparing the data prior to analysis. More than two-thirds (69%) of participants in our Analytics and Data Benchmark Research cited preparing data for analysis as the most time-consuming aspect of analytics initiatives, followed by reviewing data for quality issues (64%). Reducing the time spent on data unification and preparation can therefore accelerate the value delivered by observability. Most obviously, it can improve mean time to detection (MTTD) and mean time to resolution (MTTR): the time taken to identify issues based on telemetry data, and the time taken to make the necessary changes to the IT infrastructure to resolve them.

The resolution of equipment failures and performance issues is clearly important. Given the extent to which businesses rely on IT infrastructure, however, understanding the impact on business operations is even more critical, and requires specialist business expertise. This expertise is often lacking in those responsible for monitoring and analyzing telemetry data, who are typically technical experts rather than business decision-makers. Translating telemetry data into business decisions therefore requires close cooperation among technology engineers, business analysts and executives.

While observability platforms are designed to serve technology engineers, additional value could be unlocked by correlating telemetry data with business events. Doing so could help organizations better understand the business risks associated with IT incidents and act proactively to prevent issues from negatively impacting the business.
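As a simple illustration of what such correlation might look like, the sketch below joins an IT incident window with a stream of business events by timestamp. The incident, the order-failure events, the service name and the five-minute slack window are all hypothetical, invented purely for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical incident and business events; names and timestamps are invented.
incident = {
    "service": "checkout-api",
    "start": datetime(2022, 12, 2, 14, 0),
    "end": datetime(2022, 12, 2, 14, 25),
}
business_events = [
    {"ts": datetime(2022, 12, 2, 13, 57), "type": "order_failed"},
    {"ts": datetime(2022, 12, 2, 14, 10), "type": "order_failed"},
    {"ts": datetime(2022, 12, 2, 14, 12), "type": "order_failed"},
    {"ts": datetime(2022, 12, 2, 15, 30), "type": "order_failed"},
]

def correlate(incident: dict, events: list[dict],
              slack: timedelta = timedelta(minutes=5)) -> list[dict]:
    """Return the business events falling inside the incident window (plus
    slack), as a first approximation of the incident's business impact."""
    lo, hi = incident["start"] - slack, incident["end"] + slack
    return [e for e in events if lo <= e["ts"] <= hi]

impacted = correlate(incident, business_events)
print(f"{len(impacted)} order failures during the {incident['service']} incident")
```

Even this crude time-window join surfaces the kind of business-impact signal that neither the telemetry nor the business events reveal in isolation.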

In recent years, observability pipelines have emerged to complement observability platforms by generating additional and more immediate value from telemetry data. Ventana Research asserts that through 2025, three-quarters of organizations utilizing telemetry data will have invested in observability pipelines to improve time to detection and resolution based on machine logs, traces and metrics.

Observability pipelines improve time to detection and resolution by automating the centralization of telemetry data from multiple sources, with the additional benefit of transforming data before routing it to the observability platform, reducing unnecessary costs and delays. Observability pipelines also allow data to be routed to other destinations, such as data lakes, cloud data warehouses or BI tools, for further analysis and visualization.
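To make the transformation-and-routing pattern concrete, here is a minimal sketch in Python. The event schema, field names and destination labels are hypothetical; production pipelines typically express these rules as declarative configuration rather than code.

```python
import json

# Hypothetical raw log event; field names are illustrative only.
raw_event = {
    "ts": "2022-12-02T16:06:00Z",
    "level": "ERROR",
    "host": "web-01",
    "message": "checkout request timed out",
    "debug_payload": "verbose internal state not needed downstream",
}

def transform(event: dict) -> dict:
    """Normalize the schema and drop noisy fields before routing,
    reducing the volume (and cost) of data sent downstream."""
    return {
        "timestamp": event["ts"],
        "severity": event["level"],
        "source": event["host"],
        "message": event["message"],
    }

def route(event: dict) -> list[str]:
    """Decide which destinations receive the event: everything is archived
    cheaply in the data lake, while higher-severity events also go to the
    observability platform for alerting."""
    destinations = ["data_lake"]
    if event["severity"] in ("WARN", "ERROR", "FATAL"):
        destinations.append("observability_platform")
    return destinations

slim = transform(raw_event)
print(route(slim), json.dumps(slim))
```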

Many observability pipelines are stateless and simply control the flow of data and its in-flight transformation. More advanced, stateful observability pipelines offer additional benefits: they facilitate the identification of trends and anomalies through the unification and enrichment of data, and can analyze data in motion before it is ingested into downstream systems.
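The distinction can be sketched in a few lines of Python. This is a simplified illustration assuming a single latency metric; the sliding window, warm-up count and z-score threshold are arbitrary choices for the example, not parameters of any particular product.

```python
import random
from collections import deque
from statistics import mean, stdev

def stateless_transform(metric: dict) -> dict:
    """Stateless stage: each event is handled in isolation (here, a unit conversion)."""
    return {**metric, "latency_ms": metric["latency_s"] * 1000.0}

class StatefulAnomalyDetector:
    """Stateful stage: keeps a sliding window of recent values, so it can
    flag anomalies while the data is still in motion, before ingestion."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.values: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if the value is a statistical outlier versus recent history."""
        anomalous = False
        if len(self.values) >= 30:  # wait for enough history to form a baseline
            mu, sigma = mean(self.values), stdev(self.values)
            anomalous = sigma > 0 and abs(value - mu) > self.threshold * sigma
        self.values.append(value)
        return anomalous

random.seed(0)
detector = StatefulAnomalyDetector()
for _ in range(50):  # simulate a baseline of roughly 50 ms latencies
    event = stateless_transform({"latency_s": random.gauss(0.05, 0.002)})
    detector.observe(event["latency_ms"])

print(detector.observe(500.0))  # a 500 ms spike is flagged: True
```

The stateless stage would produce identical output wherever it ran, while the stateful stage depends on the history it has accumulated; that history is what enables trend and anomaly detection in flight.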

Observability pipelines also potentially lay the foundations for combining telemetry data with business events. The inclusion of workflows and templates that embody best practices has the potential to further lower time to insight, as well as to facilitate the consumption of telemetry data by employees outside the IT department.

Proactively protecting the business means not just accelerating the resolution of IT problems but also identifying and addressing related business service-level issues. While observability platforms help IT engineers to maintain technical operations, organizations should evaluate the potential role of observability pipelines to help close the gap between the generation of telemetry data and the delivery of business improvement.