Security / February 14, 2019

Know What’s on Our Networks, Part 2: Handling Objections

In a previous blog post, I talked a bit about the need to know what is on your network and about why analyzing network data gives security personnel context on what has really been happening — a source of truth, if you will. What I did not manage to address in that post was a number of common objections that crop up with regularity. I said then that I did not mean to dismiss those objections, but would instead address them in this second part, which is now here!

Objection 1: Not My Network

The first objection I often see to analyzing network data is the notion that “it’s not our network.” This is most commonly espoused by those with a public cloud presence, but it can apply to anyone who, for whatever reason, doesn’t own the underlying route/switch infrastructure.

“Not my problem,” a network analyst often responds with relish when discussing what may be happening on AWS, for example. Which is true to an extent: If they get that dreaded “network is slow” trouble ticket, they can easily reply that the network isn’t under their control, so there is no setting they can change that will dramatically impact performance.

Such is indeed Amazon’s problem in that example. However, the data traversing that network, from server to load balancer to database to application, is very much the company’s problem. The need to analyze that data hasn’t changed; the only thing that may have changed is that obtaining it without owning the infrastructure becomes more challenging. Whether it’s monitoring application performance or looking for indicators of compromise, the fundamental issue remains the same.

That, of course, is still within the purview of infrastructure as a service, but what of platform as a service? Again, I say the principle is the same: How is an application calling that instance of SQL as a service, for example? It’s still via Ethernet, and while responsibility for the underlying server and operating system has been removed, the availability, performance and security of that instance still play out within that Ethernet communication.

Objection 2: It’s All Encrypted Anyway

The second objection I sometimes hear is that network traffic is all encrypted anyway, so attempting to analyze that traffic is simply too difficult. Doubtless this stems from alarming statistics reporting that something like 60 percent of all traffic going across the network is encrypted, coupled with the fact that, so far, the do-nothing approach (that is, not attempting to inspect encrypted traffic) has not yet seemed to harm anyone.

Let’s break that down a couple of ways. First, doing no analysis on encrypted traffic means missing out on a lot of valuable information. In our metadata collection, for example, a lot of useful information about the nature of the communications is gleaned without seeing the payload: the validity of the certificate, the age of the certificate, the signing authority and, of course, the source and destination. Thus a C2 (command-and-control) callback looks suspect even though the payload is encrypted — you don’t need to see what data is being exfiltrated from that packet to know that exfiltration is taking place.
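To make that concrete, below is a minimal sketch in Python. It is not Gigamon’s implementation, and the hostname and the fields chosen are illustrative assumptions; it simply shows the sort of certificate facts an observer can collect from a TLS handshake alone, with no payload decryption involved.

# A minimal sketch, not any vendor's implementation: the kind of
# certificate metadata visible from a TLS handshake without
# decrypting a single byte of payload.
import socket
import ssl
from datetime import datetime

def cert_metadata(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Complete a TLS handshake and return facts about the certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed certificate fields only

    # The issuer is a sequence of relative distinguished names.
    issuer = dict(pair for rdn in cert["issuer"] for pair in rdn)
    not_before = datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notBefore"]))
    not_after = datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))
    now = datetime.utcnow()
    return {
        "signing_authority": issuer.get("organizationName", "unknown"),
        "cert_age_days": (now - not_before).days,
        "days_until_expiry": (not_after - now).days,
    }

if __name__ == "__main__":
    # A days-old certificate from an unfamiliar authority on an odd
    # destination is suspect on its own, payload or no payload.
    print(cert_metadata("example.com"))

Run against a suspect destination, output such as a certificate only a few days old, signed by an unfamiliar authority, is exactly the kind of signal that makes an encrypted session worth a closer look.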

Second, there is the possibility of decrypting the traffic in the first place. When people invariably quote the aforementioned statistic with regard to encrypted traffic, let’s face it — we’re talking primarily about a single type of encrypted traffic: TLS.

With the likes of Google and others pushing for sites without an https:// moniker to be labeled insecure, it’s no wonder that more and more of the sites our users visit use TLS encryption. And the technology (our very own “man in the middle” attack) certainly exists to perform that decryption. While it may be somewhat tricky architecturally, and any solution comes with some overhead, the primary use case is generally users heading outbound to the great unwashed internet, for which a relatively straightforward solution typically exists.

Also, there is seldom a need to decrypt absolutely everything; doing so tends to be impractical from the standpoint of overall throughput, as well as from the standpoint of security and data privacy. By all means, don’t decrypt users’ banking or healthcare sessions. And no matter how neat it might be to decrypt the YouTube traffic on your network (yes, even YouTube is largely https://), it typically carries little risk relative to the amount of bandwidth it is probably consuming.
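To illustrate, here is a minimal sketch of what such a selective-decryption policy could look like. The categories, risk scores and names are illustrative assumptions, not any particular product’s actual configuration.

# A minimal sketch of a selective-decryption policy. The categories,
# thresholds and names here are illustrative assumptions.
from dataclasses import dataclass

PRIVACY_BYPASS = {"banking", "healthcare"}   # never decrypt: privacy/compliance
VOLUME_BYPASS = {"streaming-media"}          # not worth the throughput cost

@dataclass
class Session:
    dest_host: str
    category: str      # e.g., from a URL-categorization feed
    risk_score: float  # 0.0 (benign) to 1.0 (high risk)

def should_decrypt(session: Session, risk_threshold: float = 0.5) -> bool:
    """Decrypt only the higher-risk sessions we are permitted to inspect."""
    if session.category in PRIVACY_BYPASS or session.category in VOLUME_BYPASS:
        return False
    return session.risk_score >= risk_threshold

if __name__ == "__main__":
    for s in (Session("mybank.example", "banking", 0.2),
              Session("youtube.com", "streaming-media", 0.1),
              Session("rare-domain.example", "uncategorized", 0.8)):
        print(s.dest_host, "->", "decrypt" if should_decrypt(s) else "pass through")

The point of the sketch is simply that decrypt-or-not can be a per-session policy decision, letting privacy-sensitive and high-volume traffic pass through untouched while higher-risk sessions get inspected.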

So, in practical terms, there is no need to discard the notion of network traffic analysis simply because a portion of that traffic is encrypted. Between the metadata itself and the decryption of higher-risk sessions, that analysis can still take place and provide valuable information.

Hearkening back to my earlier post, the two other primary sources of data that security personnel rely on to respond to an incident are just as unreliable when it comes to encrypted network traffic.

An endpoint solution works only so long as an agent is available for every endpoint in question and every endpoint is known — so, once again, IoT devices and imperfect asset control undermine this approach. The same can be said of log data from devices: Without decryption facilities of their own, they can merely record the nature of a session from header information, in a manner not dissimilar to what network metadata might achieve — and even then only if that traffic flows through the device in the first place.

While the methods will doubtless change and evolve, this is why I think we are going to see network traffic analysis become increasingly popular. There is a wealth of data flowing across the ether, and while the challenge becomes one of bandwidth and scale, increasing computing power and the flexibility of metadata generation make an architecture that enables this analysis practical.

No need to take my word for it, either: If you’re curious, reach out and see what an experienced incident responder can do with a platform like Gigamon ThreatINSIGHT. Even for something that may never have triggered a detection on another tool, the metadata is there for a responder to quickly determine the pattern of infection, the source, the users involved, the transmission method and much more (for example, the name of that bad Microsoft Word document they downloaded!). So, do we know what is really on our network in 2019?

Let’s continue the conversation! If you want to join me and other experts to discuss more about this topic, or any security-related item of interest, head over to the Security Group in our Gigamon Community.
