BRICKSTORM Malware Report Highlights the Criticality of Network-Derived Telemetry
Bottom line up front: GTIG laments the lack of security telemetry in its analysis of the BRICKSTORM malware, but that assessment rests on traditional system/application logs and EDR tooling. Telemetry derived from the analysis of network traffic is a rich source of security data that can and should be leveraged by threat hunters and IR teams responding to a BRICKSTORM-like intrusion.
Introduction
Gigamon appreciates the September write-up by the Mandiant/Google Threat Intelligence Group (GTIG) on the BRICKSTORM malware, which it attributes to UNC5221, a suspected China-nexus threat actor.
It is the open sharing of such information, and notably the recommendations which arise from understanding attacker TTPs, that helps make the wider community safer. To this end, and in the spirit of collaboration, Gigamon offers a different but complementary perspective, and suggests alternative threat hunting and investigative approaches that leverage network-derived telemetry, which are not covered by the GTIG blog or the CISA-issued guidance derived from GTIG reporting. NetOps and application performance management teams, including those in multiple defence and intelligence organisations around the world, routinely leverage network-derived telemetry. This same telemetry can and should be leveraged for security operations, yet many threat hunters and incident responders are unaware of this rich source of data.
The statement “the actor employs methods for lateral movement and data theft that generate minimal to no security telemetry” is simply inaccurate given the ready availability of network-derived telemetry, which can be generated by many products, both commercial and open source.
What is Network-Derived Telemetry?
There are multiple sources for security telemetry, although the primary ones mentioned in the GTIG article are traditional system and application logging and traditional endpoint detection and response (EDR) tooling.
The reality is that these represent just two of many sources of telemetry. While traditional logs are useful, they are inherently vulnerable to adversary manipulation and should not be relied upon exclusively. See MITRE ATT&CK techniques T1070 (Indicator Removal) and T1562 (Impair Defenses). Mandiant/GTIG also describes adversary efforts to disable logs in a blog discussing APT29 attacks targeting Microsoft 365 (formerly Office 365). In one example, Mandiant observed APT29 disabling Purview Audit, an advanced log available with an E5 licence that allows organisations to determine whether a threat actor has accessed email.
To address such concerns, CISA’s Extensible Visibility Reference Framework (eVRF), released in draft in April 2022, documents a non-exhaustive list of additional sources of telemetry. Appendix B, “Cyber Observable Data”, describes “configurations or configuration settings, data flows, logs, packet data, etc., which describe an event (benign or malicious) or the state on a network or system”.
The bottom line is that not all telemetry is created equal, and no specific form of telemetry is superior or entirely trustworthy[1]. Indeed, adopting the principles of Zero Trust architecture, the assumption of compromise should be applied to telemetry itself. As GTIG notes in other contexts, the suppression and spoofing of telemetry is both possible and widely implemented by threat actors. The assurance provided by telemetry matters and should be considered[2].
One of the challenges of both EDR and traditional logs is that they run inside the blast radius of a system or appliance compromise. There may be a burst of telemetry while the threat actor works towards administrative access, but once that access has been achieved, EDR agents can be bypassed and logs can be replayed from copies stored on the appliance, deleted, or modified. These are critical weaknesses that UNC5221 is clearly exploiting: as the GTIG blog says, the actor generates “minimal to no security telemetry”.
Network-derived telemetry is different because it is observed externally and is not generated by a network device that is a part of the communication. Rather, it is derived from the analysis of network traffic generated by the appliance or targeted system and observed by a packet broker. As such, network-derived telemetry is an analysis of the behaviour of the system based on observed activities on the network. While EDR and traditional system and application logs document the internal state of a workload, network-derived telemetry documents the workload’s actual actions on the network.
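To make the distinction concrete, the sketch below shows one way flow-style telemetry can be derived purely from observed packets, independent of anything the monitored system logs about itself. This is an illustration, not a description of any particular product: it assumes the scapy library is installed and that "tapped_traffic.pcap" is a hypothetical capture taken at a tap, SPAN, or mirror point.

```python
# Minimal sketch: deriving flow-style telemetry from observed traffic.
# Assumes scapy is installed and "tapped_traffic.pcap" is a hypothetical capture
# taken from a tap/SPAN/mirror point, not from the monitored host itself.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, UDP

flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "first": None, "last": None})

for pkt in rdpcap("tapped_traffic.pcap"):
    if IP not in pkt:
        continue
    l4 = TCP if TCP in pkt else UDP if UDP in pkt else None
    if l4 is None:
        continue
    key = (pkt[IP].src, pkt[l4].sport, pkt[IP].dst, pkt[l4].dport, l4.__name__)
    rec = flows[key]
    rec["packets"] += 1
    rec["bytes"] += len(pkt)
    ts = float(pkt.time)
    rec["first"] = ts if rec["first"] is None else rec["first"]
    rec["last"] = ts

# Each record describes what the workload actually did on the wire.
for (src, sport, dst, dport, proto), rec in flows.items():
    print(f"{proto} {src}:{sport} -> {dst}:{dport} "
          f"pkts={rec['packets']} bytes={rec['bytes']}")
```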
BRICKSTORM TTPs and Observable Network Activity
For example, the GTIG blog notes the following adversary TTPs that result in very limited log and EDR data that can be used for security operations:
- Compromise of edge devices (very much the theme of infosec circa 2025)
- Use of SOCKS proxy functionality[3]
- Deployment on devices that do not support EDR, including both Linux and BSD-based appliances[4]
- Lateral movement from network appliances to Broadcom/VMware systems
- Memory-only custom droppers written in Java on vCenters, and long delays from infiltration to the start of beaconing
- Enabling SSH servers on systems that were not meant to be running SSH
- The use of mail, HTTP and (presumably) SMB access from appliances to enterprise infrastructure containing data of interest to UNC5221
- Logging into Windows machines from network appliances
- Unexpected VM cloning, generating the appearance of unexpected workloads
Similarly, CISA/NSA/CSE’s BRICKSTORM Backdoor Malware analysis report identifies additional TTPs that generate minimal to no useful log and EDR data (from a security perspective):
- Access to compromised web services via a web shell [T1505.003]
- Lateral movement from a web server to external and internal domain controllers using RDP [T1021.001].
- Exfiltration of the AD database, presumably externally to the organisation [T1003.003]
- SMB access from compromised web servers in the DMZ into ADFS servers, internal domain controllers and jump servers.
- Connections to potentially unexpected DoH servers that may be different to the ones used by the organisation itself.
- Doubly encrypted C2 channel, whereby an external TLS connection has a WebSocket TLS encrypted channel inside it[5]. The WebSocket connection has a self-signed 2,048-bit RSA certificate.
In direct contrast, however, all of these adversary TTPs generate observable, atypical network traffic that will produce easily identifiable telemetry that can be used by threat hunters and incident responders. Several of these behaviours, for example rogue DoH traffic, unusual protocol usage by appliances to unexpected systems, and embedded encryption inside other encrypted tunnels, should generate immediate high-priority incident responses.
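As a purely hypothetical illustration of how simple some of these hunts can be, the sketch below runs over flow/metadata records of the kind discussed later in this post (for example, exported from a packet broker or NDR tool into a SIEM). The field names, record source, and the approved-resolver and appliance-subnet values are all assumptions, not references to any specific product schema.

```python
# Illustrative hunt over hypothetical flow/metadata records.
# Field names and values are assumptions for illustration only.
APPROVED_DOH_RESOLVERS = {"10.0.0.53"}   # the organisation's own DoH endpoint (example)
APPLIANCE_SUBNET = "10.20.30."           # management/edge appliance addresses (example)

def hunt(records):
    findings = []
    for r in records:
        # 1. DoH to resolvers the organisation does not operate
        if r.get("app") == "dns-over-https" and r["dst_ip"] not in APPROVED_DOH_RESOLVERS:
            findings.append(("rogue-doh", r))
        # 2. Self-signed certificates on TLS sessions originating from appliances
        if r.get("app") == "tls" and r.get("cert_self_signed") \
                and r["src_ip"].startswith(APPLIANCE_SUBNET):
            findings.append(("self-signed-tls-from-appliance", r))
        # 3. Appliances speaking SMB or RDP to anything at all
        if r.get("app") in {"smb", "rdp"} and r["src_ip"].startswith(APPLIANCE_SUBNET):
            findings.append(("appliance-lateral-movement", r))
    return findings
```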
Network Telemetry Types and Detection Approaches
It is critical to note that this document does not, and never would, propose the deployment of network-derived telemetry as a complete replacement for traditional logs and EDR. Instead, it proposes supplementing existing log and EDR data with network-derived telemetry. This is a fundamental defence-in-depth strategy and reflects the reality of an earlier observation: No telemetry is inherently superior or entirely trustworthy. IR and threat hunting teams need as many sources as they can practically obtain[6]. Analysis of network-derived telemetry can be used to validate log and EDR data and vice versa[7].
There are many forms, protocols, and data formats for the delivery of network-derived telemetry including:
- Raw packet feeds (aka “PCAPs” in OMB M-21-31 terminology)
- NetFlow in various formats – essentially flow records with optional embedded metadata
- Metadata generated from application-aware deep packet inspection
Each has different bandwidth requirements, capabilities and, critically, privacy implications[8]. The actual benefits and downsides of each approach are beyond the scope of this blog post, but potential approaches to detect BRICKSTORM include but are not limited to:
- PCAPs into a network detection and response (NDR) tool
- NetFlow into a flow threat detection tool or SIEM
- Metadata into a SIEM, data lake, or extended detection and response (XDR) infrastructure
At a minimum, a typical metadata record will contain flow metadata (timing, source IP and port, destination IP and port, volume statistics), plus protocol-specific metadata, including application identification and, optionally, fields extracted by parsing the protocol itself. For example, an HTTPS metadata record would contain not only the flow data, but also extracted metadata such as certificate information[9], cipher suites, and application identification (e.g. this HTTPS connection was SAP, or YouTube, or Gmail).
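As a purely hypothetical illustration (the field names are not taken from any particular product or schema), such a record might look like this:

```python
# Hypothetical HTTPS metadata record; field names and values are illustrative only.
https_record = {
    # Flow metadata
    "ts_start": "2025-09-24T03:12:07Z",
    "duration_s": 41.8,
    "src_ip": "10.20.30.7", "src_port": 51842,
    "dst_ip": "203.0.113.25", "dst_port": 443,
    "bytes_out": 18_332, "bytes_in": 402_117, "packets": 512,
    # Protocol-specific metadata from application-aware DPI
    "app": "https",
    "tls_version": "1.2",
    "cipher_suite": "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "cert_subject": "CN=example.invalid",
    "cert_issuer": "CN=example.invalid",   # subject == issuer suggests self-signed
    "cert_key_bits": 2048,
    "app_id": "unknown-websocket",          # identified by inspection, not by port
}
```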
A critical observation is that, to be useful, application identification cannot rely on port alone, as running protocols over non-standard ports is a widely available technique [T1571]. Application identification must be more robust and identify protocols through actual data inspection, optionally combined with heuristic approaches.
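The toy sketch below illustrates what content-based (rather than port-based) identification means in principle; real DPI engines use far richer signatures, state tracking, and heuristics than these few byte checks.

```python
# Toy content-based protocol identification; real DPI uses far richer signatures.
def identify_protocol(payload: bytes) -> str:
    if payload.startswith(b"SSH-2.0-"):
        return "ssh"                  # SSH banner, regardless of port
    if len(payload) >= 3 and payload[0] == 0x16 and payload[1] == 0x03:
        return "tls"                  # TLS handshake record header, regardless of port
    if payload.split(b" ", 1)[0] in {b"GET", b"POST", b"PUT", b"HEAD", b"CONNECT"}:
        return "http"                 # Cleartext HTTP method, regardless of port
    if len(payload) >= 2 and payload[0] == 0x05:
        return "possible-socks5"      # SOCKS5 greeting begins with version byte 0x05
    return "unknown"

# SSH running over TCP/443 is still identified as SSH:
assert identify_protocol(b"SSH-2.0-OpenSSH_9.6") == "ssh"
```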
Again, the appropriate tool is organisationally specific and is beyond the scope of this blog post.
Why East–West Visibility Is Critical for Detecting Lateral Movement
Inherent in the value of the above approach is the fact that network traffic visibility is not limited to the network edge[10]. Detection of networking anomalies is a signal-to-noise problem, and evasive threat actors know this. This is why they try to make their C2 infrastructure as stealthy and low-amplitude as possible.
However, a threat actor in the core of your network attempting lateral movement is “noisy”. They are at an information disadvantage in that they are in an unfamiliar location and often generate significant evidence of their presence, especially if they think you are not looking.
The recent movement towards micro-segmentation does not change this. In fact, micro-segmentation increases the noise. Many network architects see micro-segmentation as a lateral movement denial technique. It is not. It is a lateral movement degradation technique. A good threat actor will still find a way to move laterally, usually through misuse of protocols allowed by the micro-segmentation policy. In mapping their environment and understanding how they are hemmed in by micro-segmentation, a threat actor generates far more noise than in a flat and open network environment.
In short, firewall visibility (North–South), whether that firewall is on the enterprise edge or between network zones, does not deliver sufficient network-derived telemetry to be completely useful. It’s not useless, but there are better approaches.
Ideally, any traffic within a network zone, from one system to another (i.e. lateral movement!), should be visible in generated network telemetry. This is called “East–West visibility”.
But, how?
How to Implement Network-Derived Telemetry for East–West Traffic
Modern networks are, with the rare exception of multicast, overwhelmingly designed to get traffic reliably from system A to system B. To achieve reliability, modern enterprise networks (on-prem, virtual or cloud) use many techniques, including but not limited to multipathing and dynamic, asymmetrical routing. They operate at speeds up to 800Gbps (IEEE 802.3df) as of the time of writing. 100G Ethernet is now a commodity, with 400G likely soon to be the same. 1.6T Ethernet (IEEE 802.3dj) is anticipated to be finalised in 2026, although widespread adoption is probably several years away[11].
The reality is that networks optimise for speed and resilience. The ability to reassemble a session at a point other than systems A and B is not a design requirement, yet generating network-derived telemetry requires exactly that. Doesn’t this make network-derived telemetry unreasonably difficult? Doesn’t this mean we have to observe network traffic at the edge, where session reassembly can be done relatively simply?
No, and no. Indeed, the technologies needed to do this have been around for over a quarter of a century.
Fundamentally, the core technology is called “tap and aggregation”, generally abbreviated to “tap and ag”. The tools used to implement this are referred to as “network packet brokers”, and exist in physical, virtual and cloud forms.
First, you need access to the traffic. This can be done using multiple techniques (this is not a comprehensive list):
- Passive optical taps (literally light splitters for fibre Ethernet)
- Active electronic taps (for copper Ethernet)
- Mirror or Switch Port Analyzer (SPAN) ports on switches
- Mirroring from virtual switch instances
- Mirroring from cloud platform traffic mirroring capabilities (an illustrative sketch follows this list)
- Endpoint agents that copy traffic and tunnel that traffic elsewhere[12]
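As one concrete example of the cloud mirroring option above, the sketch below uses AWS VPC Traffic Mirroring via boto3. All identifiers are placeholders, the mirror target (e.g. a packet broker or tool interface) and the mirror filter are assumed to already exist, and the current AWS API documentation should be checked before relying on this.

```python
# Sketch: enabling cloud-platform traffic mirroring (AWS VPC Traffic Mirroring via boto3).
# All IDs are placeholders; the mirror target and mirror filter are assumed to exist already.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0123456789abcdef0",      # ENI of the workload to observe
    TrafficMirrorTargetId="tmt-0123456789abcdef0",   # where the copied packets are sent
    TrafficMirrorFilterId="tmf-0123456789abcdef0",   # which traffic to copy
    SessionNumber=1,
    Description="East-West visibility for high-value subnet (illustrative)",
)
print(session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```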
Of note, copying traffic is a unidirectional operation, meaning that tapped traffic can cross a Bell–LaPadula boundary and be “written up” to a higher-classified environment, with no “write down” needed for it to operate. This is a critical capability not only for the defence and intelligence communities, but also for use cases such as ICS and SCADA monitoring.
This completes the “tapping” or access phase.
Once accessed, the traffic enters an aggregation stage, whereby packets from multiple tapping points are combined into a single coherent stream, collapsing traffic asymmetry, multipathing, multi-laning, and all of the other networking techniques that complicate East–West security inspection. While this may sound like a lot of data, it is well within the performance capabilities of modern switching ASICs, which can easily process tens of terabits per second. A modern tap and ag platform can also cluster multiple nodes to scale to petabits per second of traffic.
However, to be practically useful, a filtering and transformation stage is then needed. The same packet may be captured at multiple tapping points, so copies can be thrown away (“deduplication”). Removal of network encapsulations can also occur, as many security tools struggle with some network encapsulation protocols.
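The sketch below illustrates the deduplication idea conceptually only; real packet brokers do this in hardware at line rate, and typically hash on the invariant parts of a packet rather than the whole frame. The time window and hashing choice here are assumptions.

```python
# Conceptual deduplication: the same packet seen at several tapping points is forwarded once.
# Real packet brokers do this in hardware at line rate; this is only an illustration.
import hashlib

_seen: dict[bytes, float] = {}
WINDOW_S = 0.05   # treat identical packets within 50 ms as duplicates (tunable assumption)

def forward_if_new(packet_bytes: bytes, ts: float) -> bool:
    # Hash the packet; real implementations typically ignore fields that legitimately
    # change hop to hop (TTL, MAC addresses, checksums) before hashing.
    digest = hashlib.sha256(packet_bytes).digest()
    last = _seen.get(digest)
    _seen[digest] = ts
    return last is None or (ts - last) > WINDOW_S
```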
Finally, the resulting packet can be replicated once, or as many times as needed. If a single NDR tool is attached, it can be delivered to that tool alone. If an NDR tool and a PCAP recorder are attached, the packet can be duplicated into both. Filters can also be applied so that each tool receives a different packet stream, with protocols a given tool cannot process discarded. Gigamon has customers with over 50 different network analytic tools attached to their visibility infrastructure.
However, PCAPs – in other words, raw packet streams – are a traditional approach with a heavy footprint. A more sensible and modern approach is to feed those packet streams into an application-aware DPI capability, where they are summarised into a stream of privacy-preserving metadata for ingest into your centralised SIEM, data lake or other analytic tool. This reduces the bandwidth by up to 99%, meaning that a 10G network stream becomes a 100Mbit metadata stream: easily ETL’able by any SIEM or data lake. There it can be correlated with EDR and log data, making a threat actor’s presence significantly more visible.
Any technique that generates network traffic will generate network-derived telemetry. The claim that UNC5221 generates “minimal to no security telemetry” no longer holds, and the telemetry produced by the examples above would be very easy to see.
Additional Hardening Guidance for Network Devices
Gigamon thoroughly endorses the hardening guidance provided in the GTIG blog, but makes the following further observations:
- Most network devices have a separate control and data plane, and connectivity to the control plane is best isolated onto a separate management network. However, this management network should not be flat, because if it is, then it becomes a high-value lateral movement conduit. Gigamon recommends:
- Macro-segmentation of the management network at a minimum, but micro-segmentation is preferable.
- Close monitoring of the management network using the same techniques as described above. Traffic from this network should not be combined into the general network traffic, but isolated into its own NDR sensor or network telemetry stream that is closely observed. Legitimate traffic on this network should be highly regular, and deviations from that regularity – especially things like port scanning, which are trivial to spot (see the sketch after this list) – should result in an immediate incident response.
- There should be no direct connectivity from the appliance control planes to the general enterprise at all. This would have prevented many of the attacks documented in this GTIG blog.
- As has been observed above, attacks on network devices have seemingly been the theme of 2025. Tapping external and internal links of high-risk devices is relatively low cost, and generating metadata from them, which can be quickly correlated against threat indications upon receiving a report like this, is an excellent investment.
- Engineering full East–West traffic visibility into all high-value network zones, feeding into one or more tools which can perform analysis of the traffic and look for indicators that correspond to risk (e.g. NDR/NAV, metadata into a SIEM).
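As a toy example of the kind of regularity check that could run over management-network flow records, the sketch below flags sources that touch an unusually large number of distinct destination ports in a window. The field names and threshold are assumptions, not a prescription.

```python
# Toy detection of port scanning on a management network from flow records.
# Field names ("src_ip", "dst_ip", "dst_port") and the threshold are assumptions.
from collections import defaultdict

PORT_THRESHOLD = 25   # distinct destination ports per source within one window

def detect_port_scans(flow_records):
    touched = defaultdict(set)            # src_ip -> {(dst_ip, dst_port), ...}
    for r in flow_records:
        touched[r["src_ip"]].add((r["dst_ip"], r["dst_port"]))
    return {
        src: pairs
        for src, pairs in touched.items()
        if len({port for _, port in pairs}) > PORT_THRESHOLD
    }

# On a tightly controlled management network, any hit here warrants an immediate response.
```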
In Conclusion
Recent years have seen an increasing focus on the endpoint during incident response actions. This is totally valid, as understanding the internal state of an endpoint is very useful.
However, we have also seen a movement away from understanding the external behaviour of endpoints. Moving beyond BRICKSTORM, there are many types of attacks for which visibility from EDR or logging is poor or simply not possible. These include:
- IoT/OT/ICS/SCADA devices where internal access is not granted to end users
- BYOD devices brought into the organisation that may be infected by malware or have active intrusions on them
- Deliberately planted supply chain attacks
- “Below the firmware” or implant-style attacks, including ones that may be running not on the main processor of the device, but in the BMC, or Wi-Fi radio baseband DSP, or embedded in an Ethernet or USB block or cable
Where the C2 channel crosses the network, these communications will be visible. While evasion techniques, including steganography, are available, successfully deploying them at scale is harder than many people expect. For example, for steganography to be useful, a communication to or from the target system still has to occur in some form. A network appliance contacting a social media site and uploading a JPEG image containing a steganographic C2 channel still looks extremely atypical.
To reiterate, because this point is so critical, the wholesale replacement of all forms of telemetry with network-derived telemetry is not what this blog proposes. Instead, the argument here is simple: don’t ignore the network as a source of telemetry. Network-derived telemetry has significant advantages versus host-derived telemetry, a category that includes both EDR and system/application logs.
This is a complex area, and while this blog is quite long, it only scratches the surface. If you have any questions, or if further clarification is required, please contact the author or leave a comment in our Security group in the VÜE Community.
[1] Note that relying on a single form of telemetry inherently makes such a statement: the architecture either assumes that telemetry cannot be compromised or accepts the risk that it has been. In reality, a modern defensible security architecture should accept neither.
[2] As an example of how to do this, in Gigamon’s response to the eVRF draft, Orlie Yaniv and Ian Farquhar proposed a telemetry assurance structure to be incorporated into the standard, characterising telemetry types in terms of:
- Reliability (how comprehensive/accurate is the telemetry)
- Evadability resistance (how readily can an attacker evade detection using this telemetry source)
- Resilience (the ability to detect evasion if attempted)
- Stealthiness (can the threat actor detect the presence of the telemetry generation).
[3] I can locate no reliable source on the prevalence of the 30+ year old SOCKS protocol on modern networks in 2025, but the presence of unexpected SOCKS traffic – if it were visible – is surely a major indicator of compromise.
[4] Note that because it is generated externally to the appliance, network-derived telemetry does not care what OS the endpoints run.
[5] This may require break-and-inspect capability, which is inherently an inline capability for any modern TLS implementation (i.e. TLS 1.3, or TLS 1.2 with a forward-secret key exchange). This could be done at the edge of the network in a firewall or web proxy, although other approaches are available.
[6] A deep dive is beyond the scope of this paper, but the power of multiple forms of telemetry appears when a benign or malicious action can be correlated across multiple telemetry domains. Telemetry suppression is a strong indicator of compromise: by correlating actions across domains and then observing telemetry sources “drop off”, incident responders can gain early indications of a high-maturity threat actor in their environment.
[7] Note that network metadata is extremely amenable to AI-driven analysis, which is especially critical when combating automated AI-driven attacks.
[8] It should be noted that in 2025, the ubiquity of TLS, SSH and other encrypted protocols has largely pushed privacy concerns into relative corner cases.
[9] In TLS 1.2, the certificate is visible in cleartext; in TLS 1.3 it is conveyed encrypted.
[10] … and by the way, haven’t we been moving away from “perimeter security” since the early 2000s? As an industry, infosec keeps talking about deperimeterisation – indeed, this is a core component of the NIST SP 800-207 Zero Trust Architecture – but most organisations still rely on edge-based firewalls for their network visibility. Macro-segmentation is better, but a capable threat actor will assume the presence of inter-zonal monitoring and keep a low-amplitude, evasive C2 channel where it is visible, while generating a lot of noise inside the macro-segment, where that activity goes unobserved.
[11] It is a generally true observation that our network colleagues will move to a faster networking technology when it becomes economic to do so. The parameters driving that calculation may be quite complex, covering both capital and operational expenditure as well as more esoteric factors such as the availability of fibre, data centre locations, and so forth. Nonetheless, security practitioners who believe that they can delay an upgrade because their tooling is not yet ready rarely prevail.
[12] As noted above, most network-derived telemetry is generated outside the endpoint itself; agent-based traffic copying is the exception to that rule.