Cloud / September 8, 2020

Public Cloud Traffic Visibility Made Simple with Gigamon, Part 1

Updated October 28, 2021.

Editor’s note: Part 1 of this series covers the challenges customers face in getting visibility into cloud network traffic. In Part 2, we’ll look at solutions.

The current global situation and the shift to remote work have accelerated organizations’ journeys to the cloud. More of us are working from home and shopping for essentials online to minimize our exposure to the health risks posed by the COVID-19 pandemic, and many of the organizations that embraced digital transformation in recent years are successfully delivering digital services through cloud computing.

Nevertheless, challenges remain. Some organizations have been holding back from broad moves to the cloud because of the difficulty in getting full visibility into cloud traffic. For those who were still having second thoughts about embracing the cloud, the pandemic has been an eye-opener about cloud computing benefits — but, as we’ve discovered when we discuss the subject with our customers, those visibility challenges haven’t gone away.

Cloud Traffic Visibility: Not Just Good to Have, but a Must

In a Zero Trust world, we must assume that everyone is an outsider and that no one can be trusted without proper visibility. Cloud computing brings agility and scalability to business operations and service offerings, but the downside is that you cede some control over digital assets in the cloud. The elastic nature of the public cloud makes cloud infrastructure less predictable, as businesses continuously spin workloads up or down based on demand. Then there is shadow IT: the use of unknown and unapproved applications.

The attack surface also grows with the presence of digital assets in the cloud, since traffic now traverses between on-premises infrastructure and the cloud. Without a map of your digital infrastructure, including all of the applications running on it, it’s almost impossible to secure your cloud environments. A lack of mapping also makes it challenging to gauge the user experience of cloud-based services.

All of this makes the case for better visibility into cloud-based workloads: being able to see who is accessing your cloud workloads, what applications are running in the cloud and who is using them, and what type of traffic is being generated to and from these cloud workloads. Without traffic visibility, security threats remain hidden and service level agreements (SLAs) suffer due to lack of insight into network performance and outages. This negatively impacts businesses, resulting in customer churn and lack of trust.

Cloud traffic visibility helps businesses monitor application and network performance and troubleshoot network and application outages in the cloud. It also strengthens a business’s security posture by helping it detect and prevent security threats and attacks, and protect data privacy for compliance purposes.

Challenges in Acquiring Cloud Traffic

Acquiring network visibility in an on-premises physical environment is much easier than gaining visibility in a cloud environment. On premises, the physical networks are under the control of IT, who can acquire traffic by strategically placing network TAPs to eliminate blind spots. This takes care of North-South traffic visibility requirements and traffic traversing business-unit segments.

It is estimated that by 2021, 72 percent of network traffic will be East-West traffic* — that is, traffic between virtual workloads running inside hypervisor-supported virtual environments. For East-West traffic visibility, Gigamon virtual TAPs can be used to get a copy of traffic flowing between virtual machines. So far, so good.

The real challenge comes when IT needs to acquire traffic from workloads hosted in the public cloud, such as in AWS or Microsoft Azure, and send that traffic to various security and performance management tools. These tools can be either on-premises or in the cloud. So where does IT start in getting cloud traffic?

Option 1: Use a Cloud-Native Visibility Service

The first option is to work with your public cloud vendor and ask them to copy the traffic. But not every public cloud vendor offers a visibility service that can readily send cloud traffic to any tool of your choice. Some vendors, such as AWS and Google Cloud, do offer a VPC traffic mirroring service, but at the time of writing many other public cloud vendors do not. That limits your options for using a native cloud visibility service, and it also means being locked in with the vendors that do offer one.

Can a native cloud visibility service optimize your tools? Tools are expensive resources, and not all tools need to see everything, which means you need to be smart about sending the right traffic to the right tools. For example, an email security tool may need to process only email traffic and not the kind of high-volume, low-risk traffic generated by code deployments.

By separating the signal from the noise, you empower your tools, as they receive only traffic of interest. As we noted, AWS offers a VPC traffic mirroring service, but it can filter traffic only on attributes like protocol, port number, and source and destination IP addresses. It cannot generate NetFlow, mask sensitive information, transform headers for security and compliance reasons, or sample or slice payloads and headers. This means your tools are bound to receive irrelevant traffic. That’s a problem when the tools are hosted on-premises: You’ll be feeding those tools with traffic leaving the cloud, and public cloud vendors charge you for every gigabyte of egress.
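To make the filtering limitation concrete, here is a rough sketch of what a VPC traffic mirroring setup looks like with the AWS CLI, sending only SMTP traffic to an email security tool. The resource IDs (`eni-…`, `tmf-…`, `tmt-…`) are hypothetical placeholders, and exact parameters should be checked against current AWS documentation.

```shell
# Sketch only: all resource IDs below are hypothetical placeholders.
# Step 1: Create a mirror filter for the email security tool.
aws ec2 create-traffic-mirror-filter \
    --description "Mirror only email traffic"

# Step 2: Accept inbound TCP traffic destined for port 25 (SMTP).
# Note the filter vocabulary: protocol, ports, and CIDR blocks only.
aws ec2 create-traffic-mirror-filter-rule \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --traffic-direction ingress \
    --rule-number 100 \
    --rule-action accept \
    --protocol 6 \
    --destination-port-range FromPort=25,ToPort=25 \
    --source-cidr-block 0.0.0.0/0 \
    --destination-cidr-block 0.0.0.0/0

# Step 3: Start a mirror session from a source ENI to the mirror target
# (a network interface or load balancer in front of the tool).
aws ec2 create-traffic-mirror-session \
    --network-interface-id eni-0123456789abcdef0 \
    --traffic-mirror-target-id tmt-0123456789abcdef0 \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --session-number 1
```

Notice that every knob in the filter rule is a Layer 3/4 attribute; there is no option here for NetFlow generation, masking, header transformation, or payload slicing.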

Can AWS’s native VPC traffic mirroring service tap traffic from all cloud workloads? Unfortunately, no. AWS’s traffic mirroring service supports tapping traffic only from next-gen EC2 instances built on the AWS Nitro System, which account for only an estimated 30 to 40 percent of all AWS EC2 instances. For non-Nitro EC2 instances, you may require some other method, such as installing TAP agents on the workloads.
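One way to check which of your instance types are Nitro-based, and therefore candidates as traffic mirroring sources, is to query the hypervisor attribute with the AWS CLI. A minimal sketch, assuming the AWS CLI is configured with valid credentials; the instance types shown are just illustrative:

```shell
# Sketch only: list the hypervisor backing each instance type of interest.
aws ec2 describe-instance-types \
    --instance-types m5.large t2.micro \
    --query "InstanceTypes[].[InstanceType,Hypervisor]" \
    --output text
# Nitro-based types report "nitro"; older virtualized types report "xen".
```

Instance types that report `xen` fall into the category that needs an agent-based or other tapping method instead.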

Option 2: Install Tool Agents for Every Tool on Your Target Workloads

The second option is for IT to work with tool vendors to obtain visibility into its public cloud workloads. This approach requires installing an agent from every tool vendor on every monitored workload. That quickly becomes a resource hog for the monitored workloads, since each tool requires its own agent running on each and every workload, and the number of agents grows with the number of tool types. Then there is the pain of upgrading agents from multiple tool vendors across many monitored workloads. Depending on the workloads being monitored, the tool vendors must also support agents for every workload type, such as Linux and Windows.

What about agentless tools? Not all tools have agents, which leaves agentless tools out of the running for a copy of cloud traffic. For such tools, you must either use a native cloud traffic mirroring service or convince the tool vendor to develop agents that support the major public cloud environments and multiple workload types. That will increase the tool’s cost, as nothing comes for free.

What about multi-cloud environments? With multi-cloud adoption, tools need to receive traffic from multiple public cloud environments and from the multiple types of workloads running in them. This requires tool vendors to support tools for each and every public cloud environment, or at least to be able to acquire traffic from each of them. That is simply not available, feasible, or scalable today.

Can all your tools get cloud traffic, regardless of tool location? A tool vendor may not support a specific cloud environment. For example, a vendor may offer support for AWS but not for Azure, or may not offer a version of the tool that can be deployed on-premises. In such cases, the tool still needs to receive traffic both from on-premises digital assets and from monitored cloud workloads.

Gigamon Solves the Cloud Visibility Puzzle

With so many challenges in place, you may be thinking, “How am I going to acquire traffic from my workloads running in the public cloud and efficiently send that traffic to various tools, regardless of whether the tools are hosted in the cloud or on-premises?”

The Gigamon Visibility and Analytics Fabric™ is the answer to all these challenges. In Part 2, you’ll learn how the Visibility and Analytics Fabric helps you acquire traffic from all workloads, optimize traffic flow in the cloud to reduce tool load and network charges, and deliver traffic to the tools that need it, wherever those tools are.

*Cisco Global Cloud Index: Forecast and Methodology, 2016–2021. Cisco, November 19, 2018.
