Telemetry Pipelines: The Backbone of Modern Observability
We live in a data-driven world where organizations of all sizes are overwhelmed by the sheer volume, velocity, and variety of data they generate daily. This tsunami of data threatens to drown the IT teams responsible for availability, performance, and security, and it often places a heavy financial burden on the organization.
Enter telemetry pipelines.
Telemetry pipelines are designed to collect, enrich, transform, and route data from multiple sources to multiple destinations, in formats that cross-functional teams can analyze and use effectively. The benefits are significant: cost savings, optimization, and improved data fidelity. With the right data in the right place at the right time, the right decisions can be made at a reasonable cost.
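To make that pattern concrete, here is a minimal sketch of the collect, enrich, transform, and route stages in Python. Everything in it, including the record fields, sources, and destination filters, is hypothetical and for illustration only; it is not any particular vendor's API.

```python
# A minimal, hypothetical sketch of the collect -> enrich -> transform -> route
# pattern. Record fields and destinations are illustrative, not a product API.
import json
from datetime import datetime, timezone

def collect(raw_sources):
    """Gather raw records from multiple sources (here, plain dicts)."""
    for source_name, records in raw_sources.items():
        for record in records:
            yield {"source": source_name, **record}

def enrich(record):
    """Add context the downstream tools need, e.g. a normalized timestamp."""
    record["ingested_at"] = datetime.now(timezone.utc).isoformat()
    return record

def transform(record):
    """Normalize into one event format that cross-functional teams can use."""
    return {
        "event_type": record.get("type", "unknown"),
        "source": record["source"],
        "ingested_at": record["ingested_at"],
        "attributes": {k: v for k, v in record.items()
                       if k not in ("type", "source", "ingested_at")},
    }

def route(event, destinations):
    """Send the event to every destination whose filter matches."""
    for dest_name, wants in destinations.items():
        if wants(event):
            print(f"-> {dest_name}: {json.dumps(event)}")

# Example: one log source and one flow source, routed to two hypothetical tools.
sources = {
    "app_logs":  [{"type": "log",  "msg": "login failed", "user": "alice"}],
    "net_flows": [{"type": "flow", "src": "10.0.0.5", "dst": "10.0.1.9", "bytes": 4096}],
}
destinations = {
    "siem":          lambda e: e["event_type"] in ("log", "flow"),
    "observability": lambda e: e["event_type"] == "flow",
}
for rec in collect(sources):
    route(transform(enrich(rec)), destinations)
```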
The Growing Importance of Network Telemetry
Acquiring meaningful and actionable telemetry and getting it to the right analysis solution, be it observability or security, is more difficult today given the complexities of modern applications and hybrid cloud infrastructure. Historically, observability telemetry focused on gathering and analyzing MELT data (metrics, events, logs, and traces), but this has limitations because you’re only seeing a partial or top-down view of what’s going on. While dated, this blog from Forrester provides great insight into the limitations of logging.
Today, network data has become a critical part of the conversation and fills in the gaps of MELT or logging alone. Network data is not just about packets and packet captures; it also includes flow records and network-derived metadata that provide additional context for understanding and addressing application and infrastructure interactions, as well as performance and security issues. This same network context provides much-needed visibility into lateral traffic between VMs and containers, eliminating blind spots across your hybrid cloud infrastructure.
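As a hypothetical illustration of that added context, the sketch below pairs a flow record with the kind of metadata that can be derived from packet inspection. All field names and values are invented for this example.

```python
# Hypothetical example of network-derived telemetry: a flow record plus
# metadata derived from packet inspection. Field names are illustrative.
flow_record = {
    "src_ip": "10.0.2.14",      # VM in one subnet
    "dst_ip": "10.0.5.31",      # container in another: lateral (east-west) traffic
    "dst_port": 5432,
    "protocol": "TCP",
    "bytes": 182_340,
    "packets": 412,
}

# Metadata derived from packet inspection adds application-layer context
# that flow records alone do not carry.
derived_metadata = {
    "application": "postgresql",
    "tls_version": None,        # unencrypted database traffic: a security finding
    "round_trip_ms": 38.5,      # performance context
}

enriched = {**flow_record, **derived_metadata}
if enriched["application"] == "postgresql" and enriched["tls_version"] is None:
    print("Alert: unencrypted database traffic between internal workloads")
```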
It’s apparent to me that Gartner recognizes this shift based on some of the commentary in a few recent research notes. In research from June 2024, titled “Monitoring and Observability for Infrastructure and Applications,” Gartner says, “Network telemetry is data that comes from network infrastructure devices that is not in a format that can be properly read by the monitoring and observability solutions on the market today.
“A network telemetry pipeline processor processes that data and puts it into an event format that can be read by solutions from observability vendors. This includes packet inspection analysis data that is consumed by solutions, adding even more context to modern observability solutions.”1
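Here is a minimal sketch of that processor role, under the assumption of a NetFlow-style input record: it maps device-native fields into a flat, self-describing event that a downstream tool could ingest. The field names on both sides are illustrative.

```python
# Hypothetical sketch of the "telemetry pipeline processor" role Gartner
# describes: taking a device-native record that observability tools cannot
# read directly and emitting a generic, tool-readable event.
import json
from datetime import datetime, timezone

def to_observability_event(netflow_like: dict) -> dict:
    """Map a NetFlow-style record into a flat, self-describing event."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event.kind": "network_flow",
        "source.ip": netflow_like["srcaddr"],
        "destination.ip": netflow_like["dstaddr"],
        "destination.port": netflow_like["dstport"],
        "network.bytes": netflow_like["octets"],
    }

raw = {"srcaddr": "10.1.4.2", "dstaddr": "10.1.9.7", "dstport": 443, "octets": 60212}
print(json.dumps(to_observability_event(raw), indent=2))
```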
In the same note, Gartner also included network telemetry as part of its Monitoring Guidance Framework.
Finally, Gartner published a note in November 2024, titled “Network Monitoring and Analytics for Data-Driven Observability,” that says, “The telemetry pipeline capability lets network operators leverage the combined universe of network monitoring data to realize greater insight into their networks and respond more quickly and intelligently to events.”2
It also says: “The use of a third-party telemetry pipeline is becoming increasingly common for managing telemetry at scale.” In support of this, Gigamon has partnered with Cribl, and we have many joint customers who recognize the value of a unified telemetry pipeline that brings MELT plus network-derived telemetry together to deliver a panoramic view of what’s really going on in the most efficient way possible.
“Telemetry pipelines have become a key element in modern observability architectures,” said Nick Heudecker, Head of Market Strategy at Cribl. “By abstracting sources from destinations, telemetry pipelines improve how data is managed, stored, and governed, while giving organizations far more control over their environment and platforms. Cribl and Gigamon allow our joint customers to get the most value out of their network infrastructure data.”
The Gigamon Deep Observability Pipeline efficiently delivers network-derived intelligence and insights to security and observability tools. It is therefore tightly aligned with how the industry views telemetry pipelines: as the means for organizations to gain deep observability across their hybrid cloud infrastructure.
As businesses increasingly rely on modern applications deployed on a hybrid or multi-cloud infrastructure, network data has become a critical piece of the observability and security puzzle. As it’s been said before: You cannot secure what you cannot see. I will add to that: You cannot see where you don’t sit. Network telemetry across your hybrid cloud infrastructure is critical to bringing the complete picture into focus.
Cost Savings and Data Fidelity
Telemetry pipelines do more than just capture and route data. They also help optimize it. By improving the way data is collected and processed, these pipelines enable companies to save on tool capacity and data processing costs. The goal is to capture only the most relevant data, ensuring that it’s both cost-effective and high quality. A poorly managed data pipeline can result in inaccurate insights, which can have serious business implications. A well-managed telemetry pipeline ensures that data is correct and reliable, giving decision-makers the confidence they need to make informed and timely choices.
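As a simple illustration of that optimization, the hypothetical sketch below drops low-value records at the pipeline while preserving an aggregate summary, so fidelity is retained even as volume (and downstream cost) falls. The retention policy and fields are invented for this example.

```python
# Hypothetical sketch of pipeline-side optimization: drop low-value records
# and summarize them before they reach (and get billed by) downstream tools.
from collections import Counter

events = [
    {"level": "DEBUG", "msg": "cache hit"},
    {"level": "DEBUG", "msg": "cache hit"},
    {"level": "INFO",  "msg": "user login"},
    {"level": "ERROR", "msg": "db timeout"},
    {"level": "DEBUG", "msg": "cache hit"},
]

KEEP_LEVELS = {"INFO", "WARN", "ERROR"}  # policy: drop DEBUG noise at the pipeline

kept, dropped = [], Counter()
for e in events:
    if e["level"] in KEEP_LEVELS:
        kept.append(e)
    else:
        dropped[e["msg"]] += 1           # keep an aggregate count, not every copy

print(f"forwarded {len(kept)} of {len(events)} events")
print("suppressed:", dict(dropped))      # fidelity preserved as a summary
```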
Modern observability is all about understanding your workloads, platform health, performance, and security. As applications become more complex, collecting meaningful telemetry and getting it to the right analysis tools is more difficult than ever. This is where telemetry pipelines come into play.
They were created to help organizations gain better control of their data by managing telemetry from various sources, including traditional MELT data as well as network telemetry. By integrating network telemetry into the observability telemetry equation, these pipelines offer deeper insights and more context, ultimately leading to better decision-making, performance optimization, and security.
Based on our experience with customers who manage telemetry well, a best practice is to take a strategic and holistic approach to defining the telemetry your teams need to attain optimal observability of your systems. This approach will enable you to achieve maximum operational and cost efficiency today and to have the depth and breadth of telemetry on hand to respond to business, technology, and tooling changes in the future.
References
1. Gartner. Monitoring and Observability for Infrastructure and Applications, June 2024. gartner.com/en/documents/5486695.
2. Gartner. Network Monitoring and Analytics for Data-Driven Observability, November 2024. gartner.com/document-reader/document/5939507?ref=solrAll&refval=457280499.