December 7, 2015

Visibility: Separating Fact from Fiction (Part 2 in a Series)

Preserving session integrity when dealing with application intelligence

The cybersecurity industry today faces several challenges. At a macro level, industrialized cybercrime and the rapidly growing number of cyber criminals are bombarding organizations with cyber incidents. At the organizational level, the massive growth in the volume of data that must be sifted through to find the needle in the haystack, combined with an explosion in the variety of applications running over an organization’s infrastructure, makes it increasingly difficult for security solutions to scale and keep pace with the rising number of cyberattacks.

To cope with this growth, it becomes necessary to peel away the haystack systematically and quickly in order to find the proverbial needle. In the visibility industry this is done by filtering flows and sessions and by applying specialized functions such as de-duplication and SSL decryption. The goal is either to strip away traffic streams that are not relevant to a given security solution or to steer very specific types of traffic to specific security solutions, thereby reducing the processing overhead on those solutions. For example, an email security solution may have no interest in database transactions; when overburdened, the same solution may want to prioritize emails containing attachments or hyperlinks over plain-text emails. Likewise, an IDS that is running out of processing capacity may choose to forgo certain types of traffic, such as streaming video from Netflix or YouTube, and focus on other application streams.

There are two approaches to filtering streams of data in order to offload security tools: filtering “in”, which forwards specific types of traffic and drops the rest (essentially a default drop policy), and filtering “out”, which drops specific types of traffic and forwards the rest to the security tools (essentially a default forward policy). Both techniques are widely used to reduce the processing overhead on security solutions.
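
For illustration, the following minimal sketch contrasts the two policies once a session has been labeled with an application. It is purely conceptual, not any product’s implementation; the application names and set contents are hypothetical placeholders.

```python
# Minimal sketch of filter-in (default drop) vs. filter-out (default forward).
# Application labels and set contents are hypothetical.

FILTER_IN_APPS = {"smtp", "imap"}         # default drop: forward only these
FILTER_OUT_APPS = {"netflix", "youtube"}  # default forward: drop only these

def filter_in(app):
    """Default drop policy: forward a session only if its application matches."""
    return app in FILTER_IN_APPS

def filter_out(app):
    """Default forward policy: forward everything except the listed applications."""
    return app not in FILTER_OUT_APPS

for app in ("smtp", "netflix", "custom-db"):
    print(app, "-> filter-in:", filter_in(app), "| filter-out:", filter_out(app))
```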

Traditionally, filtering was based on L2-L4 header information that is present in every packet, so entire flows or sessions could be filtered on that information alone. However, with the growing use of application tunneling and encapsulation within other protocols, the ability to identify applications based on L7 indicators has become a key requirement for filtering applications in or out.
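
As a rough illustration of why this matters, the sketch below contrasts an L2-L4 match on header fields present in every packet with an L7 check that must look into the payload. The field names and payloads are hypothetical and deliberately simplified.

```python
# Illustrative only: L2-L4 metadata alone cannot distinguish applications
# that share the same port. Field names and payloads are hypothetical.

def l4_match(hdr):
    """Classic L2-L4 filter: match on fields present in every packet."""
    return hdr.get("proto") == "tcp" and hdr.get("dst_port") == 443

def looks_like_tls(payload):
    """L7 indicator: a TLS handshake record begins with content type 0x16."""
    return payload.startswith(b"\x16\x03")

# Two flows to TCP/443: one is genuine TLS, the other is a custom application
# merely riding on port 443. The L4 filter treats them identically.
hdr = {"proto": "tcp", "dst_port": 443}
print(l4_match(hdr))                              # True for both flows
print(looks_like_tls(b"\x16\x03\x01\x00\xc8..."))  # True: real TLS handshake
print(looks_like_tls(b"CUSTOM-APP HELLO v2\r\n"))  # False: only DPI catches this
```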

Many applications tunneled or encapsulated within other protocols can be identified using Deep Packet Inspection (DPI) techniques, which look deep within individual packets and deep into the packet stream to identify the application. In many cases the application can be identified shortly after the initial TCP session setup, but in others, including some custom applications, identification may happen far deeper into the application stream. Consequently, when offloading application streams from security solutions, the decision to filter a session in or out may come many packets into the stream, only after the application has been definitively identified. This poses a dilemma: what to do with the packets that precede the decision point? Under a default forward policy those packets may already have been forwarded to the security tool, and under a default drop policy they may never reach the tool at all.

This can be a significant headache for security professionals because it results in security solutions seeing partial sessions. Many security solutions rely on seeing a complete application session, including the session setup, teardown, and all of the control messages in between. Missing the TCP session setup and then suddenly receiving a burst of packets once an application has been positively identified and filtered “in” can cause the security tool to generate false positives from the incomplete stream, making life difficult for security analysts and incident response teams. The same problem arises when the initial TCP handshake and the first few packets are forwarded to the tool, but the remaining packets are abruptly filtered out once the application has been identified. This kind of non-deterministic flow and session behavior, while intended to reduce the processing burden on security tools, can be a significant detriment to security analytics, forensics, and behavioral analysis, because it creates more confusion for information security personnel. Yet it is exactly the behavior that some test and measurement solutions, when re-purposed for application security, seem to exhibit.
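
To make the failure mode concrete, here is a toy sketch with a hypothetical classifier and a deliberately simplified packet model: a naive per-packet filter running under a default drop policy forwards nothing until the classifier makes up its mind, so everything before the decision point is silently lost and the tool receives a session with no handshake.

```python
# Toy illustration of the partial-session problem under a default drop policy.
# The classifier, banner, and packet model are hypothetical.

def classify(stream):
    """Toy DPI: the application is only recognizable once its banner appears."""
    if b"220 mail.example.com ESMTP" in stream:
        return "smtp"
    return None  # not yet identified

def naive_default_drop(session_packets, wanted_app):
    forwarded, seen, identified = [], b"", None
    for pkt in session_packets:
        seen += pkt
        if identified is None:
            identified = classify(seen)
            continue  # packets before (and at) the decision point are lost
        if identified == wanted_app:
            forwarded.append(pkt)
    return forwarded

session = [b"SYN", b"SYN-ACK", b"ACK",
           b"220 mail.example.com ESMTP", b"EHLO client", b"MAIL FROM:<a@b>"]
print(naive_default_drop(session, "smtp"))
# -> [b'EHLO client', b'MAIL FROM:<a@b>']  (handshake and banner never reach the tool)
```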

A true application session filtering engine should be able to forward or offload entire application streams, from the initial TCP session setup to the end of the session, including all control messages, ACKs, and so on, even when the application is identified several packets into the stream. While preserving session integrity, the application intelligence engine should also be able to identify custom applications using regular expressions, patterns, and other techniques.
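
As a conceptual sketch only, and not a description of any vendor’s engine, the following shows one way such behavior could work: buffer the session from the TCP setup onward, classify against a hypothetical regular-expression signature, and then forward or drop the session as a whole, buffered packets included.

```python
# Conceptual sketch: buffer until the application is identified, then either
# replay the whole session to the tool or drop it whole. The signature and
# packet model are hypothetical, reusing the toy example above.

import re

CUSTOM_APP_PATTERN = re.compile(rb"220 .*ESMTP")  # hypothetical custom-app signature

def session_preserving_filter(session_packets, forward_on_match=True):
    buffered, forwarded, decided, stream = [], [], None, b""
    for pkt in session_packets:
        if decided is None:
            buffered.append(pkt)
            stream += pkt
            if CUSTOM_APP_PATTERN.search(stream):
                decided = forward_on_match
                if decided:
                    forwarded.extend(buffered)  # replay from the SYN onward
                buffered.clear()
        elif decided:
            forwarded.append(pkt)               # rest of the session follows
        # decided is False: the whole session, past and future, is dropped
    return forwarded

session = [b"SYN", b"SYN-ACK", b"ACK",
           b"220 mail.example.com ESMTP", b"EHLO client", b"MAIL FROM:<a@b>"]
print(session_preserving_filter(session))
# -> the full session, handshake and banner included
```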

Here at Gigamon, our Application Session Filtering (ASF) engine has been designed from the ground up to preserve session integrity when filtering entire application sessions in or out, even when the application is identified deep within the session. ASF restores control to security professionals by allowing them to define custom regular-expression-based pattern matches and then offload entire application sessions while preserving the integrity of the full session. This patent-pending technology, along with several other foundational technologies, is an integral part of our GigaSECURE security delivery platform.

For a recent customer case study on this topic, please see: https://www.gigamon.com/sites/default/files/resources/case-study-use-case/cs-george-washington-university-improves-security-3177.pdf

Check out the rest of the blogs in our “Visibility: Separating Fact from Fiction” series:

Part 1
