Cloud / September 1, 2022

How to Navigate Multi-Cloud and Mixed-Network Challenges with Deep Observability

I was recently interviewed by Dana Gardner, longtime IT expert and analyst in the enterprise technology space, for an episode of his podcast, BriefingsDirect. Dana and I discuss how today's businesses continue to ramp up their cloud programs to gain agility and maintain a competitive edge. Specifically, we dissect the role of deep observability in securing these programs and how it provides the rich network insights needed to navigate their many complexities. Below I've highlighted some key takeaways from our conversation.

And if you want to jump straight to the podcast, you can find it here.

Metadata Is Key to Extracting Value from a Firehose of Network Traffic

The amount of network traffic flowing between users and applications has exploded over the past few years. As a result, organizations face massive streams of data that can easily overwhelm the cloud operations tools tasked with extracting the information needed to detect and eliminate security and performance blind spots.

To capture the right insights, or find the needle in the haystack, IT teams must first shrink the haystack they're dealt. They can do so by focusing on metadata intelligence extracted from network traffic rather than processing raw network packets. This type of visibility can cut the volume of data teams have to parse by more than 95 percent, while also reducing the cost and complexity that keep organizations from realizing the full benefit of their network tools. It's less about the intelligence of the network and more about intelligence from the network.
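To make the packets-versus-metadata distinction concrete, here is a minimal sketch that collapses raw packets into compact per-flow metadata records. It uses the open-source Scapy library rather than Gigamon's actual pipeline, and the capture filename is a placeholder:

```python
# Minimal sketch (not Gigamon's pipeline): aggregate raw packets into
# per-flow metadata records to illustrate the volume reduction from
# full packet payloads down to flow summaries.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, UDP

def flow_metadata(pcap_path):
    """Summarize a capture into (src, dst, sport, dport, proto) flow records."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in rdpcap(pcap_path):
        if IP not in pkt:
            continue
        proto = "TCP" if TCP in pkt else "UDP" if UDP in pkt else "other"
        sport = pkt.sport if proto != "other" else 0
        dport = pkt.dport if proto != "other" else 0
        key = (pkt[IP].src, pkt[IP].dst, sport, dport, proto)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += len(pkt)
    return flows

# Each flow record is a few dozen bytes of metadata, no matter how many
# megabytes of payload crossed the wire.
for key, stats in flow_metadata("capture.pcap").items():  # placeholder path
    print(key, stats)
```

A tool that consumes these summaries instead of the underlying packets is working with a haystack orders of magnitude smaller, which is the point of metadata-first visibility.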


Deep Observability Supports the Flexibility Needed When Plans Change

We’re seeing a resurgence of the colocation model as people combine public cloud, private cloud, and container-based technologies. The more broadly applications are deployed, the less flexibility IT teams have in working across the network. Yet IT professionals need the ability to take an application and put it back where it was before, or anywhere else they would like within their IT environments. Deep observability supports this flexibility by enhancing visibility and strengthening workload portability, enabling teams to place each application wherever it works best, on a case-by-case basis. These same deep observability capabilities help IT teams not only move workloads among hybrid cloud deployments successfully but also maintain a strong and proactive security posture, which presents a huge digital business and economic benefit.

Cloud Security Is About Building Defense in Depth

Securing cloud applications is a daunting task that ultimately requires a more advanced level of defense in depth. For example, when you move an application into a cloud-based environment, you don’t have the same capabilities you enjoy when the application sits within your own infrastructure. This raises the question for chief information security officers (CISOs) and security professionals: “How do I secure and maintain compliance for applications that development teams are rapidly deploying and continuously modifying?” As I explained to Dana, this has become the number-one issue facing CISOs today as they support the business and the organization’s desire to run fast and innovate. The key is doing so while staying secure.

Emerging technologies like deep observability can support cloud security by helping security and cloud operations teams both see and understand what’s happening within each of their applications. With this enhanced knowledge comes the confidence needed to detect and eliminate security and performance blind spots and to protect hybrid and multi-cloud infrastructures across the business ecosystem.

A deep observability pipeline goes beyond traditional monitoring approaches that rely exclusively on metrics, events, logs, and traces (MELT). Organizations can significantly extend the value of their existing security and observability tools through real-time, network-level intelligence derived from packets, flows, and metadata. CloudOps teams can now apply their New Relic, Dynatrace, Datadog, and Sumo Logic investments to new security use cases, such as discovering SSL vulnerabilities, unsanctioned applications and APIs, and rogue user activities such as crypto mining, and they can do so not just for agent-managed hosts but for all hosts across their entire infrastructure. As a result, organizations can realize the full potential of the cloud and accelerate their transformation initiatives.
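To illustrate that pattern in code, here is a hedged sketch of shipping a network-derived metadata record into an observability tool's log intake. This is not Gigamon's actual integration, and the ingest URL, token, and field names are placeholders; substitute whichever documented log or event intake API your tool provides:

```python
# Hedged sketch: POST flow-metadata records to an observability tool's
# HTTP log-ingestion endpoint so network-derived intelligence lands
# alongside MELT data. URL, token, and field names are placeholders.
import json
import urllib.request

INGEST_URL = "https://logs.example-observability.com/ingest"  # placeholder
API_TOKEN = "YOUR_API_TOKEN"  # placeholder

def ship_flow_record(record):
    """Send one flow-metadata record as a JSON log line."""
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(record).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
    )
    urllib.request.urlopen(req)

# A record like this lets a MELT tool alert on, say, a legacy TLS
# version observed on the wire, even on hosts with no agent installed.
ship_flow_record({
    "source": "network",
    "src_ip": "10.0.1.5",
    "dst_ip": "203.0.113.7",
    "dst_port": 443,
    "tls_version": "TLSv1.0",  # legacy protocol worth flagging
    "bytes": 48213,
})
```

Because the records originate from traffic rather than from an agent, the same feed covers every host the network sees, which is what extends existing tools to agentless security use cases.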

If you’re interested in learning more about the Gigamon team’s outlook on deep observability and hearing more about what I discussed with Dana, you can find the full conversation here or read a transcript of the conversation here.


