Security / February 6, 2019

Applying Cyber-Intelligence in the Time of the Modern Gateway

A note from Gigamon CEO Paul Hooper: Below you’ll find an insightful guest blog by Chris Ebley, Director of Engineering at BAI. Blogs like this fit with our mission at Gigamon to provide unprecedented visibility into information traversing networks of any scale. We deliver the power to “see” data in motion, “secure” networks, and “empower” our partners such as FireEye to provide more meaningful, timely and informed insight into the security of our customers’ networks.

Given the rapid growth in the volume of network traffic, and the tangible value of those network communications to threat actors, the instrumentation deployed by many SOC teams generates a huge number of daily alerts for the teams to evaluate. Security tools need to be optimized both by seeing the right information at the right time and by tuning and focusing their attention on the most significant and meaningful events.

High-performance optimization is only possible when key technology providers partner to deliver synchronized, reliable defense in depth across network architectures. Our world-class visibility solution provides pervasive and intelligent reach across physical and virtual infrastructure, delivering the right network traffic at the right time to the FireEye appliance. By efficiently processing the right traffic in conjunction with external threat intelligence, FireEye is able to deliver the most robust and accurate picture of the security context of enterprise infrastructure.

By empowering our key partners such as FireEye with network traffic intelligently identified at speeds of 10Gbps, 40Gbps and 100Gbps, we are able to help provide the underpinnings of a world-class curated threat intelligence program.


So, What’s the Modern Gateway?

Chris Ebley, Director of Engineering, Blackwood Associates, Inc.

Gateways have long been an intentionally engineered and convenient chokepoint for policy enforcement for applications and users entering and exiting the network. As I sit now, that’s not really different. What is different is the number of gateways and, more importantly, what those gateways look like, not just in terms of location but also circumstance, as more and more organizations embrace IaaS, PaaS and SaaS solutions to modernize and enable business practices.

Whether on-prem or in the cloud, bastion architectures (got NAT?) are still best practice for reducing risk and enabling cyber insight into any and all traffic accessing your business’s critical applications and networks. I’m probably not the first person to tell you that your cloud architectures shouldn’t include any public IPs being directly assigned to application infrastructure — that’s what the bastion is for. 
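
If you want to sanity-check that rule in practice, a few lines of scripting can surface violations. Below is a minimal sketch for an AWS environment, assuming boto3 and configured credentials; the “Role: bastion” tag is an invented convention standing in for however you mark your approved bastion hosts.

    # Minimal sketch: flag EC2 instances that have a public IP attached but
    # are not tagged as approved bastion hosts. Tag names are illustrative.
    import boto3

    ec2 = boto3.client("ec2")

    def find_publicly_exposed_instances():
        exposed = []
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                    if instance.get("PublicIpAddress") and tags.get("Role") != "bastion":
                        exposed.append((instance["InstanceId"], instance["PublicIpAddress"]))
        return exposed

    if __name__ == "__main__":
        for instance_id, public_ip in find_publicly_exposed_instances():
            print(f"Review: {instance_id} is directly assigned public IP {public_ip}")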

So, now that I’ve tossed a few sentences at what we already know, let me address and get out of the way a few other things that fall into the world of best practices and, also, the known.

Block, Block, Block

Cyber-resources are limited, cyber teams are short-staffed, alert counts are increasing and threat complexity is scaling right alongside them. You’ve heard this before, and while it sounds old hat and gloom and doom, it’s not the end of the world. Modern-day threats have developed right alongside modern-day cyber solutions.

What’s new is the slew of high-horsepower and high-efficacy content- and malicious threat-detecting machines that are available to detect the known, unknown and advanced. You’ve probably invested in one or more of these in your environment. What you might not be doing, however, is deploying them inline. Do that…block stuff. If you spent months testing and validating the solutions described above…if you believe that the alerts and content being provided to you are accurate and actionable…and if that solution has the ability to sit inline and stop threats before they develop, then please, do that. Put these devices in front of the threats hitting your environments and let them TCP-reset and drop traffic until they’re blue in the face.
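
To make “TCP-reset” concrete, here is a toy sketch of the mechanism using scapy (an assumption on my part, not a feature of any product named above): an inline device that decides a session is malicious forges a RST so the endpoints tear the connection down. Real appliances do this at line rate in hardware or kernel space; this only illustrates the concept.

    # Toy illustration of the TCP-reset mechanism an inline device applies to a
    # flagged connection. Requires scapy and raw-packet privileges to run.
    from scapy.all import IP, TCP, send

    def reset_connection(src_ip, dst_ip, sport, dport, seq):
        # Spoof a RST toward the destination; seq must fall within the
        # receiver's window for the reset to be honored.
        rst = IP(src=src_ip, dst=dst_ip) / TCP(sport=sport, dport=dport, flags="R", seq=seq)
        send(rst, verbose=False)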

Taking Advantage

If you’re still reading this, then I assume you agree with the above. Realistically, I’m always open for a game of “convince me otherwise,” but I feel as though I’ve set the bar so low here that it’s not really a valid exercise. So, if we agree that we have growing counts of exposure points, and we agree that we’re architecting gateways to help enforce and reduce threats at those points, AND we understand that the cloud != on-prem and that we’re limited in what we can do in those environments, then let’s talk about taking advantage of the content and insight being provided by the gateways and security solutions we’ve developed and deployed. How do we do that? We build and deploy intelligence.

For many people, intelligence has long been a misunderstood and often neglected part of cybersecurity. For some, intelligence might just be a list of IPs that they spend a couple of dollars on per year and blindly ingest into their environment. Well, I’m here to tell you that it shouldn’t be. Realistically, I see intelligence as sitting in two worlds:

  1. A self-curated list that leverages the insight and exposure of known cyber professionals, modified to fit the specifics of the consuming organization.
  2. Immediately actionable intelligence that’s curated from the exposure of that organization itself.

For today’s purposes, we’re going to talk about the second.

Coming Full Circle

While not everyone has the luxury and ability to develop an internal intelligence team, everyone DOES sit on a repository of insight and intelligence that can be capitalized on every day. What am I talking about? I’m talking about the data coming out of the gateways and solutions we discussed earlier.

Why are those solutions so efficacious, and why was it worth blocking, for example, C2 activity at those points of enforcement? Because those solutions are leveraging advanced static analysis and detonation techniques to fully validate the threat prior to alerting. What’s more, the alerts they provide include in-depth information in the form of OS change reports that show not only that something’s bad, but also how we know.

In those reports we see everything from network-based DNS queries and IP transactions to endpoint-based registry changes and file writes. The point is to take that intelligence you gathered because a real threat attempted to get into, or got into, your environment and consume it immediately.
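
As a simplified illustration, imagine the detonation report arrives as JSON. The field names below are invented for the sketch; every vendor has its own schema, so treat this as the shape of the idea rather than a parser for any specific product.

    # Minimal sketch: pull actionable IOCs out of a detonation / OS change
    # report. The JSON field names here are hypothetical, not a vendor schema.
    import json

    def extract_iocs(report_path):
        with open(report_path) as f:
            report = json.load(f)

        iocs = {"domains": set(), "ips": set(), "hashes": set()}
        for event in report.get("os_changes", []):
            if event.get("type") == "dns_query":
                iocs["domains"].add(event["qname"])
            elif event.get("type") == "network_connection":
                iocs["ips"].add(event["dst_ip"])
            elif event.get("type") == "file_write":
                iocs["hashes"].add(event["sha256"])
        return iocs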

Update blocklists with domain and IP intelligence at all network points of enforcement and query endpoints for file hash matches or elaborate IOCs to validate threat spread and detonation. Oh, and while you’re at it, keep that information and use it to map to threat actors and campaigns. It’s nice to know if or how you’re being targeted.
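
Operationally, that can be as simple as publishing the extracted indicators in whatever format your enforcement points already consume. The sketch below writes plain-text, one-indicator-per-line blocklists (a format many firewalls and DNS filters can pull as an external feed) and hands the file hashes back for endpoint sweeps; the file names and the iocs dictionary shape are assumptions carried over from the previous sketch.

    # Minimal sketch: publish domain/IP blocklists for enforcement points to
    # pull, and return file hashes for EDR/endpoint hash sweeps.
    def publish_blocklists(iocs, out_dir="."):
        with open(f"{out_dir}/domain_blocklist.txt", "w") as f:
            f.write("\n".join(sorted(iocs["domains"])) + "\n")
        with open(f"{out_dir}/ip_blocklist.txt", "w") as f:
            f.write("\n".join(sorted(iocs["ips"])) + "\n")
        # Hand these to your endpoint tooling to sweep for matches.
        return sorted(iocs["hashes"])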

Wrapping Up

In the end, the points are still conceptually simple. Architect your networks (on-prem and in the cloud) with security in mind, to the best of your ability. Deploy solutions that provide advanced detection functionality, deploy them inline, and don’t let detection or prevention be where you stop leveraging those investments. Update your posture with the intelligence that your own exposure provides, and take it further by beginning any outside intelligence processes with what you already know about your own environment and the threats to which you’ve already been exposed.

