Networking / November 20, 2018

In the 21st Century We Can — and Should — Know What’s on Our Networks

Looking back on some of my posts in the Gigamon Community’s Security Group reminded me of recent conversations I’ve had with a number of security teams, conversations that left me with a sense of what they’re all struggling with. To paraphrase Patrick Gray, host of the popular InfoSec podcast Risky Business: “It’s the 21st century, do you know what’s on your network?”

The answers appear to range from “not really” to “yes, except on those networks that aren’t ours,” with a few blips in between. This surprised me. Network data has been a common factor in the eventual detection of numerous recent high-profile breaches; only through visibility into their own networks could the affected organizations know with certainty that they had been breached. All of which raises the question: What’s got these security teams acting so lackadaisical?

One key explanation is that the massive proliferation of tools, alerts and even breaches has brought us to a point where our brains simply cannot process the information overload. Summed up in a word, we are suffering from fatigue.

Tools Upon Tools Upon Tools

These last few years we’ve witnessed an explosion in the sheer number of InfoSec tools, all promising a wide variety of ways to help detect, alert, block, encrypt and remediate. Some seem like “features masquerading as companies” — to quote Amit Yoran, formerly of RSA — with the product proliferation in some areas being truly staggering.

Case in point: a number of years ago I worked for a reseller that sold approximately 50 different endpoint solutions. These could be categorized by functionality into six different classifications that, when subjected to a typical red-team attack with lateral movement and data exfiltration, behaved in vastly different ways. Knowing this, how could a security team realistically choose the appropriate tool, or tools, for their environment?

While we recognize endpoint protection as part of the solution, it can’t be the only line of defense, because you simply cannot control every operating system. Or, worse still, if there is a vulnerability in the supply chain — even if the recent Bloomberg bombshell was fabricated, it’s certainly still possible — exploitation can occur at the hardware level. From the BYOD policies companies have had for years, to IoT device proliferation, to Platform as a Service (PaaS), we simply can’t rely on a single agent being everywhere we wish it to be.

Regardless of the tools chosen, the latest breach news invariably reveals an organization that had plenty of tools in the first place but, through a lack of controls, misconfiguration or other factors, was not using them effectively. We know there will be another breach because some device somewhere was not patched — so why talk about it? Yes, here is another tool, but we already have 20–30 tools in our environment; do we really need whatever the new hotness is?

Alert Fatigue Sets In

In the security industry, as soon as spending pivoted from being primarily prevention-focused towards detection and response, the avalanche of alerts began. At first, in the early days of SIEM, alert fatigue was primarily a configuration problem: we had fewer tools feeding into the correlation engine to fire content and handle specific use cases. Through the magic of SIEM tuning, we could at least keep the alert avalanche under control.
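To make that concrete, here is a minimal sketch of the kind of tuning logic I mean: collapse duplicate alerts and escalate only when a burst crosses a threshold. The field names, window and threshold are all hypothetical, not any particular SIEM’s API.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # correlation window (illustrative)
THRESHOLD = 5                    # raw alerts per escalation (illustrative)

buckets = defaultdict(list)      # (source_ip, signature) -> timestamps

def tune(alert):
    """Return the alert if it should escalate, otherwise suppress it."""
    key = (alert["source_ip"], alert["signature"])
    now = alert["timestamp"]
    # Drop events that have aged out of the correlation window.
    buckets[key] = [t for t in buckets[key] if now - t < WINDOW]
    buckets[key].append(now)
    if len(buckets[key]) >= THRESHOLD:
        buckets[key].clear()     # escalate once per burst, then reset
        return alert
    return None

# Six identical alerts a minute apart: only the fifth reaches an analyst.
start = datetime(2018, 11, 20, 9, 0)
for i in range(6):
    raw = {"source_ip": "10.0.0.7", "signature": "ssh-brute-force",
           "timestamp": start + timedelta(minutes=i)}
    if tune(raw):
        print("escalate:", raw["signature"], "at", raw["timestamp"])
```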

Fast forward to today, and organizations have a score of different tools sending more and more alerts. As an industry we have focused on the detection side (send me an alert) far more than the response side (okay, now what do I do with that?). It is not surprising, then, that the last thing a security manager wants to hear about is yet another tool that will send their team more alerts while contributing little to their understanding of what to do or how to remediate.

Tool, breach and alert fatigue — all of them contribute to the malaise I sense when talking with security teams about some of the newer developments in the arena of network analytics.

Sidestepping the Onslaught

From asset discovery and control to security to application performance, there’s no denying that a wealth of data is going back and forth on the wire. What many of the innovators in this space have realized, however, is that we never needed the entire packet; quite a lot of information can be gleaned from the metadata alone.
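As a rough illustration, consider how much a handful of flow-level fields reveal without keeping a single byte of payload. This is a minimal sketch, assuming the open-source scapy library and a local capture file named traffic.pcap (both my assumptions, not anything from a particular vendor):

```python
# Pull flow metadata (5-tuple, size, time) from a capture, discarding payloads.
from scapy.all import rdpcap, IP, TCP, UDP

records = []
for pkt in rdpcap("traffic.pcap"):  # hypothetical capture file
    if IP not in pkt:
        continue
    l4 = TCP if TCP in pkt else UDP if UDP in pkt else None
    records.append({
        "time": float(pkt.time),
        "src": pkt[IP].src,
        "dst": pkt[IP].dst,
        "proto": pkt[IP].proto,
        "sport": pkt[l4].sport if l4 else None,
        "dport": pkt[l4].dport if l4 else None,
        "bytes": len(pkt),  # size on the wire; the payload itself is dropped
    })

print(f"kept {len(records)} metadata records and zero bytes of payload")
```

Those few fields are already enough to map who talks to whom, when, over what protocol and in what volume.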

One recent startup, for example, began simply as a way to solve the problem of identifying and tracking IoT devices on a network. Then they realized how much they could learn about the environment from the metadata alone: not only could they identify pretty much every endpoint on the network, there was also a lot of valuable security information to harvest.
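To give a flavor of how that identification can work, here is a minimal, hypothetical sketch: group flow records by source host and guess a device class from the ports each host touches. Real products use far richer fingerprints; the sample flows and classification rules below are invented purely for illustration.

```python
# Hypothetical endpoint profiling from flow metadata alone.
from collections import defaultdict

flows = [  # invented sample records
    {"src": "10.0.0.21", "dport": 1883},   # MQTT
    {"src": "10.0.0.21", "dport": 123},    # NTP
    {"src": "10.0.0.42", "dport": 445},    # SMB
    {"src": "10.0.0.42", "dport": 3389},   # RDP
]

ports_by_host = defaultdict(set)
for f in flows:
    ports_by_host[f["src"]].add(f["dport"])

def classify(ports):
    # Toy rules: MQTT hints at an IoT sensor; SMB/RDP hint at a Windows box.
    if 1883 in ports:
        return "likely IoT device"
    if ports & {445, 3389}:
        return "likely Windows host"
    return "unknown"

for host, ports in sorted(ports_by_host.items()):
    print(host, sorted(ports), "->", classify(ports))
```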

Another successful startup proved that network metadata could be harnessed not to send the security team yet another alert, but to give them powerful search-and-response capabilities for investigating and actively hunting threats. You can bet that sounds a lot more appealing to an overworked security manager than the prospect of installing yet another source of context-free alerts.
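In code, that kind of hunt can be as simple as asking a concrete question of the stored metadata. A minimal sketch, with made-up records and a made-up indicator:

```python
# Hunting sketch: which internal hosts talked to a suspicious address,
# and how much did they send? All data below is invented for illustration.
from collections import Counter

records = [
    {"src": "10.0.0.7", "dst": "203.0.113.50", "bytes": 48_200},
    {"src": "10.0.0.9", "dst": "198.51.100.4", "bytes": 1_200},
    {"src": "10.0.0.7", "dst": "203.0.113.50", "bytes": 91_000},
]
SUSPECT = "203.0.113.50"  # example indicator (a TEST-NET-3 address)

bytes_out = Counter()
for r in records:
    if r["dst"] == SUSPECT and r["src"].startswith("10."):
        bytes_out[r["src"]] += r["bytes"]

for host, sent in bytes_out.most_common():
    print(f"{host} sent {sent:,} bytes to {SUSPECT}")
```

The output is an answer the analyst asked for, rather than yet another alert they have to triage.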

Of course we will still need endpoint protection, and we will still need to see alerts. They’re not going away. Cloud topologies and encryption can complicate matters, too — I’m saving those for a forthcoming article. Roughly speaking, though, in the 21st century we can know what’s on our networks, and this presents an opportunity to make our work that much easier for ourselves.

Want to learn more? I’m giving an upcoming Tech Talk on metadata from a security use-case point of view. You can register for it here.
