Zero Trust / December 6, 2023

Comments on Draft NIST Special Publication 800-92r1 “Cybersecurity Log Management Planning Guide”

Editor’s note: The mechanisms by which organizations derive observability and visibility generally fall under the title of telemetry, and the most prevalent form of telemetry is logging. As threat actors grow more adept at evasion techniques, we need to strengthen and protect logging, which means expanding it from the typical device- and application-based logs of the past to new types of telemetry that are significantly harder for a threat actor to evade. This document represents Gigamon’s feedback to NIST covering this argument. These comments were submitted to NIST during the public comment period that ended on November 29, 2023.

As with other responses, we publish our feedback here in the spirit of openness and with the aim of encouraging comments and feedback that will benefit industry and government equally.

Gigamon appreciates the opportunity to comment on the NIST initial public draft of Special Publication 800-92r1 “Cybersecurity Log Management Planning Guide”. Given that logs underpin the Nation’s threat detection and hunting capabilities and will likely be leveraged by artificial intelligence/machine learning tools, the security and integrity of log generation and log management, including log storage, are of paramount importance.

Our adversaries understand the value of deleting, modifying, and manipulating logs. The MITRE ATT&CK knowledge base of adversary tactics and techniques includes many examples of adversaries disabling and modifying logs; see, for example, Techniques T1070 (Indicator Removal) and T1562 (Impair Defenses). Similarly, Mandiant describes adversary efforts to disable logs in a blog discussing APT29 attacks targeting O365. In one example, Mandiant observed APT29 disabling Purview Audit, an advanced logging capability available with an E5 license that allows organizations to determine whether a threat actor has accessed email. As such, Gigamon recommends that NIST add plays for identifying and managing risk associated with log generation and log management, including mechanisms for assuring the integrity of an organization’s logs, which we discuss in greater depth below and refer to as “telemetry assurance”.

Gigamon further recommends that NIST incorporate a discussion on the importance of having comparable log coverage across an environment to facilitate better attack detection and response. Unless logs are captured consistently across a complex hybrid cloud environment, an analyst may lose the threat actor’s trail at some stage of the attack life cycle due to a gap in coverage or inconsistent log coverage. With proper planning, however, this risk can be mitigated.

In addition, Gigamon recommends that NIST incorporate a discussion on the importance of developing log generation and management systems in accordance with NIST cyber resilience principles. These systems need to anticipate, withstand, recover from, and adapt to cyberattacks. Redundancy and/or the ability to identify when a logging source has been tampered with is crucial for effective protection, detection, and response.

Tying these points together, Gigamon recommends that NIST consider developing a play that addresses organizational requirements for telemetry from public cloud/SaaS providers. At present, major providers manage and log network traffic in different ways (e.g., some providers mirror traffic while others do not; some providers use physical boundaries while others use data boundaries), resulting in inconsistent levels of security, which impact the integrity and consistency of logs. If an organization is operating a complex, multi-cloud hybrid environment, such inconsistencies increase the organization’s data normalization burden and make it more difficult to rapidly analyze and correlate data, thereby making it more difficult to rapidly detect threats. As such, organizations should consider data normalization, data correlation, and packet-level visibility up front.

Normalizing log data across a growing set of log sources will also be complex and burdensome when organizations try to leverage AI/ML-based tools, which will require the analysis of log data for anomaly-based threat detection and Zero Trust Architecture (ZTA) trust algorithm decisioning. Conceptualizing the need for normalized data as part of the planning process will reduce the complexity and burden.
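
As a purely illustrative sketch of the normalization burden described above, consider mapping two very different hypothetical log sources onto a single common schema. Every field name and source format below is an assumption for illustration, not a reference to any particular product or standard:

```python
# Minimal sketch of log normalization: mapping heterogeneous source
# formats onto one common schema before analytics or ZTA decisioning.
# All field names and record shapes here are hypothetical.
from datetime import datetime, timezone

COMMON_FIELDS = ("timestamp", "source", "principal", "action", "target")

def normalize_firewall(raw: dict) -> dict:
    """Map a hypothetical firewall record onto the common schema."""
    return {
        "timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc).isoformat(),
        "source": "firewall",
        "principal": raw["src_ip"],
        "action": raw["verdict"],  # e.g., "allow" / "deny"
        "target": f'{raw["dst_ip"]}:{raw["dst_port"]}',
    }

def normalize_idp(raw: dict) -> dict:
    """Map a hypothetical identity-provider event onto the common schema."""
    return {
        "timestamp": raw["time"],  # already ISO 8601 in this source
        "source": "idp",
        "principal": raw["user"],
        "action": raw["event_type"],  # e.g., "login_success"
        "target": raw["application"],
    }

# A single normalized stream lets one detection or trust algorithm
# consume events from every source without per-source parsing logic.
events = [
    normalize_firewall({"epoch": 1701820800, "src_ip": "10.0.0.5",
                        "verdict": "deny", "dst_ip": "10.0.9.9", "dst_port": 445}),
    normalize_idp({"time": "2023-12-06T00:00:00+00:00", "user": "alice",
                   "event_type": "login_success", "application": "mail"}),
]
for e in events:
    assert set(e) == set(COMMON_FIELDS)
```

Planning for a schema like this up front, rather than retrofitting per-source parsers later, is what reduces the complexity and burden.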

Gigamon further recommends incorporating the concepts and lexicon described in the Extensible Visibility Reference Framework (eVRF) published by CISA. Among other things, this framework defines the ubiquitous term “visibility” and helps organizations identify and mitigate visibility gaps that expose them to risk.  Since logs are the most prevalent “visibility surface” in most organizations, leveraging the eVRF lexicon would ensure greater clarity.

To help organizations analyze and validate the integrity of logs, Gigamon proposes the concept of “telemetry assurance”. We believe telemetry assurance is necessary because:

  • All network components, including endpoints and endpoint detection and response tools, can be compromised and, as a result, generate inaccurate or spoofed logs;
  • Sophisticated adversaries have demonstrated the ability to evade detection by modifying and spoofing logs (as opposed to stopping logs or changing the logging volume, which are louder actions that may be more easily detected);
  • The log integrity controls referenced in OMB Memorandum M-21-31, “Improving the Federal Government’s Investigative and Remediation Capabilities Related to Cybersecurity Incidents,” primarily protect the logging platform, not the components generating the logs, which creates a false sense of security; and
  • Information security professionals often “trust” logs implicitly because they have no other viable option.

For additional information on the concept of telemetry assurance and how it could be leveraged, please reference the Gigamon comments on the draft eVRF, which are published on the Gigamon website.

In addition to the Gigamon recommendations above, Gigamon offers the following responses to the specific questions posed by NIST on behalf of CISA and OMB:

  1. This revision is informed by NIST SP 800-207 and the NCCoE’s Zero Trust Architecture project calling out data analytics for zero trust purposes. Should the scope of this publication be expanded to directly support and map to zero trust?

Yes.

The fundamental concept of ZTA is a data-centric, risk-based architecture in which telemetry feeding a policy engine creates an understanding of posture and risk in an enterprise, driving dynamically assessed access decisions. Given that the primary source of telemetry in a ZTA is logs, and that all operational zero trust implementations leverage logs, Gigamon recommends expanding the scope to support zero trust implementations and map to SP 800-207 and SP 800-207A.
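
As a hedged illustration of this decisioning role (not the normative SP 800-207 trust algorithm), the sketch below shows how a policy engine might combine log-derived signals into an access decision. All signal names, weights, and the threshold are hypothetical:

```python
# Illustrative sketch of a ZTA policy engine combining log-derived
# telemetry signals into a dynamically assessed access decision.
# Signal names, weights, and threshold are hypothetical assumptions.
def trust_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted combination of normalized telemetry signals in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * signals.get(k, 0.0) for k in weights) / total

def access_decision(signals: dict[str, float], threshold: float = 0.7) -> str:
    weights = {
        "device_posture": 0.3,       # e.g., derived from endpoint logs
        "identity_confidence": 0.4,  # e.g., derived from IdP logs
        "network_behavior": 0.3,     # e.g., derived from network metadata
    }
    return "grant" if trust_score(signals, weights) >= threshold else "deny"

# Telemetry gaps lower the score: missing signals default to 0.0, so a
# subject with no corroborating logs cannot reach the threshold.
print(access_decision({"device_posture": 0.9, "identity_confidence": 0.8,
                       "network_behavior": 0.7}))    # grant
print(access_decision({"identity_confidence": 0.8}))  # deny
```

Note how the quality and coverage of logs directly bounds what the policy engine can decide, which is why log management planning and ZTA scope belong together.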

  2. Should this publication be expanded to include log management implementation guidance?

Yes.

Many organizations struggle with log management because traditional SecOps-focused SIEM infrastructures are often unable to meet the need for greater capacity in a cost-effective manner. We anticipate this problem will continue to worsen. We project that over the next five years:

  • Data ingest will increase by two to three orders of magnitude
  • Queries against the log store will also increase dramatically
  • Ingest and query-driven costing models will become untenable, as will the current database architectures on which many products are based
  • The granularity of logging will significantly increase, driven by:
    • The standard deployment of higher “logging levels” on existing applications and operating systems, mostly for infosec-driven outcomes
    • The greater instrumentation of applications and operating systems, aimed at providing greater visibility into threat actor behavior
    • The deployment of logging technologies like code tracing and network metadata, which generate higher volumes of raw telemetry
  • Modern log stores are already moving (and will continue to move) toward highly distributed streaming data warehouse architectures
  • Traditional SIEMs will be replaced over time by streaming data warehouses
  • The increase in query levels will be driven by the need to:
    • Run threat feeds and alerts/correlations against historical data stores, rather than just on incoming telemetry
    • Run predictive and other AI algorithms, on a fairly regular basis, against time windows of data that may be days, weeks, or even months wide
    • Run forensic investigations against log stores that may go back years
    • Generate baselining for user and entity behavioral analytics, as required by ZTA.

For reference, Gigamon is not a SIEM or log management vendor. 

  3. Are there additional considerations for different types of log sources that should be included in this publication (e.g., on-premises, cloud, managed services, or hybrid)?

Yes, though the considerations extend far beyond the identified examples.

The thinking and planning around logging need to expand beyond traditional approaches. While the acronym Metrics, Events, Logs, and Traces (MELT) has expanded the concept of a log from a programmatically inserted checkpoint to things like program traces and metrics, the expansion of what could and should constitute a “record of events” is much broader.

Right now, most organizations are planning logging in terms of a very narrow definition that fails to consider additional, non-traditional records of events such as tracing for security (following code paths) or using network metadata for threat detection. Although these new concepts correspond to eVRF visibility surfaces, they are not considered logs by many practitioners. NIST has the opportunity with this publication to expand what is considered a log to promote more robust cybersecurity outcomes.

As such, these and other sources of logs should be explicitly referenced. NIST should also establish a goal to develop telemetry sources that support innovation and R&D around threat detection and threat hunting. As stated above, because logs are inherently vulnerable to attack and this risk needs to be identified and mitigated, Gigamon recommends that NIST incorporate the security of log generation and log management as an additional play. The idea that different forms of logging (visibility surfaces) can have radically different resilience against attacker evasion makes sense but is rarely considered. This paradigm needs to shift.

In addition, Gigamon recommends that NIST emphasize that not all logs are created equal. Log data that is immutable or is not sourced from applications (which could be compromised and spoofed) should be prioritized for use and retention. Log data that is complementary and sourced from various systems, acting as a fail-safe against various types of breaches (insider threats, supply chain attacks, adversarial cyberattacks), should also be prioritized.

Finally, Gigamon recommends consideration of the benefits associated with the correlation of telemetry, where different visibility surfaces from the same infrastructure are compared and correlated. This provides an assurance mechanism designed to detect compromise of a security control and of the subsequent logging from that control. This is consistent with the approach described in the network pillar of CISA’s Zero Trust Maturity Model v2. For additional information on this point, please see the attached presentation delivered at IEEE MILCIS 2023 in Canberra.
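
A minimal sketch of such cross-surface correlation, assuming hypothetical connection records drawn from an endpoint agent and from network-derived metadata, might look like this:

```python
# Sketch of cross-surface telemetry correlation for assurance: compare
# connections reported by an endpoint agent against connections
# independently observed as network metadata. Record shapes are
# hypothetical (src_ip, dst_ip, dst_port) tuples.
def correlate(endpoint_conns: set[tuple], network_conns: set[tuple]) -> dict:
    return {
        "corroborated": endpoint_conns & network_conns,
        # Seen on the wire but never logged by the endpoint: possible
        # log tampering, rootkit, or agent evasion on that host.
        "missing_from_endpoint": network_conns - endpoint_conns,
        # Logged by the endpoint but never observed on the wire:
        # possible spoofed or fabricated log entries.
        "missing_from_network": endpoint_conns - network_conns,
    }

endpoint = {("10.0.0.5", "10.0.9.9", 443), ("10.0.0.5", "10.0.9.7", 22)}
network  = {("10.0.0.5", "10.0.9.9", 443), ("10.0.0.5", "203.0.113.8", 443)}
report = correlate(endpoint, network)
print(report["missing_from_endpoint"])  # {('10.0.0.5', '203.0.113.8', 443)}
```

Because the two surfaces are generated by independent infrastructure, a compromise of one does not silently corrupt the other, which is the essence of the assurance mechanism.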

  4. Should the standardization of log management planning to facilitate the sharing of cyber threats or incidents be included?

Yes. 

Aside from our belief that there will be a seismic shift in log management requirements over the coming few years, the sharing and correlation of data between cooperating agencies facilitates better and faster threat detection. The National Security Institute at Virginia Tech has explored these issues in a series of workshops with leading experts and published reports validating this concept.

  5. Should guidance on how to determine the purposefulness of logging categories and types be included?

Gigamon offers a wealth of experience supporting logging formats (e.g., ArcSight CEF), and that experience has demonstrated that vendor-designed, fit-for-purpose protocols are more difficult to adapt to newer uses and requirements. The development of taxonomies of logging categories and attributes, including the standardization and/or adoption of vendor-neutral exchange and transport formats (e.g., OpenTransport), would be a significant benefit for the industry.

Just as one example, ZTA policy engines are meant to ingest telemetry from all five CISA ZTA pillars, or five of the seven DoD ZTA pillars. Standardized categories and types of log data derivable from each of these pillars would greatly facilitate the ingestion of telemetry, ultimately driving better performance and results for the analytics engines relied upon to make access decisions in near real time.

Take the network pillar, for example. Logging of network transactions, generated by processing packets on the network (on-premises, cloud, virtual) into logging records representing processed protocol data, offers an ideal way of understanding network behavior. Do we take an established format, which is workable but not exactly ideal? Do we pursue a proprietary format, or shoehorn the data into a generic logging format like CEF? Instead, a NIST-owned category and data model would be extremely useful and would drive innovation and interoperability in ZTA.
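
To make the idea concrete, here is a sketch of what one record in such a data model might look like. Every field name is a hypothetical illustration, not a proposed standard:

```python
# Hypothetical sketch of a vendor-neutral data model for network
# transaction logs. Field names are illustrative assumptions only.
from dataclasses import dataclass, asdict
import json

@dataclass
class NetworkTransaction:
    ts_start: str      # ISO 8601 start of the transaction
    ts_end: str        # ISO 8601 end of the transaction
    src_ip: str
    dst_ip: str
    dst_port: int
    l7_protocol: str   # e.g., "tls", "dns", "http"
    bytes_sent: int
    bytes_received: int
    attributes: dict   # protocol-specific metadata, e.g., TLS SNI

txn = NetworkTransaction(
    ts_start="2023-12-06T00:00:00Z", ts_end="2023-12-06T00:00:02Z",
    src_ip="10.0.0.5", dst_ip="203.0.113.8", dst_port=443,
    l7_protocol="tls", bytes_sent=5120, bytes_received=48200,
    attributes={"sni": "example.com", "tls_version": "1.3"},
)
# Serializes identically regardless of which vendor produced the record.
print(json.dumps(asdict(txn)))
```

A shared model of this kind lets any vendor’s packet-processing engine feed any analytics engine without bespoke translation.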

  6. Should guidance for determining storage retention periods be included?

Most organizations tend to keep data for as long as they can afford to, rather than for as long as they need. We also suspect that this time frame will shrink as organizations’ log ingest rates increase, which would have a deleterious impact on security. As such, Gigamon recommends that organizations plan and scope for an increase in ingest rates and retention periods over time. In addition, the retention of logs should be specifically tied to what is necessary to meet threat hunting, digital forensics, and incident response requirements.

As Gigamon mentions above, not all logs are created equal. Different types of logs (visibility surfaces) can have different assurance levels and different values to the various business processes which utilize them. As such, the optimal retention periods for each use case will be different. As cost is always a consideration, different retention periods for different kinds of logs using different kinds of storage (presumably with different cost/performance/capacity ratios) will likely be utilized by most organizations. These varied approaches within a complex environment should be taken into consideration during the planning process.  
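
As a planning-stage illustration of these varied approaches, the sketch below maps hypothetical log types to tiered retention periods and storage classes. All values are placeholders, not recommendations:

```python
# Sketch of tiered retention planning: different log types (visibility
# surfaces) get different retention periods and storage classes based
# on assurance level and use case. All values are hypothetical.
RETENTION_PLAN = {
    # log type:           (hot days, cold days, rationale)
    "network_metadata":   (90, 365, "high assurance; forensics baseline"),
    "endpoint_logs":      (30, 180, "voluminous; spoofable if host owned"),
    "identity_logs":      (90, 730, "ZTA decisioning and investigations"),
    "application_traces": (14, 90,  "debugging and targeted hunts"),
}

def storage_tier(log_type: str, age_days: int) -> str:
    hot, cold, _ = RETENTION_PLAN[log_type]
    if age_days <= hot:
        return "hot"      # indexed, query-optimized store
    if age_days <= cold:
        return "cold"     # cheaper object storage, slower queries
    return "expired"      # eligible for deletion per policy

print(storage_tier("endpoint_logs", 45))     # cold
print(storage_tier("network_metadata", 45))  # hot
```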

  7. Should this publication address how new technologies may change log management planning (e.g., blockchains, zero trust, generative AI, quantum computing)?

Yes. Blockchains are useful for logging, and tamper-resistant logging is definitely an area of interest, particularly for evidentiary validity. Logging the data used in the training of AI/ML models is also crucial, as data poisoning attacks will be an attack vector. In addition, quantum computers will still need to log.
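
As a minimal sketch of blockchain-style tamper evidence for logs, each record below carries a hash over its own content plus the previous record’s hash, so any later modification breaks the chain on verification:

```python
# Minimal hash-chain sketch: each log record's hash covers its content
# plus the previous record's hash, making tampering detectable.
import hashlib, json

def append(chain: list[dict], entry: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev_hash, "hash": digest})

def verify(chain: list[dict]) -> bool:
    prev_hash = "0" * 64
    for rec in chain:
        payload = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if rec["prev"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True

log: list[dict] = []
append(log, {"event": "login", "user": "alice"})
append(log, {"event": "file_read", "user": "alice"})
print(verify(log))                   # True
log[0]["entry"]["user"] = "mallory"  # tamper with an earlier record
print(verify(log))                   # False
```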

  8. Should this publication address storage costs and offer guidance on prioritizations and trade-offs for cost-effective log management planning?

Yes. Comprehensive guidance should be developed with examples that describe the various types of log data, their use cases, and their applicable half-life by application. For example, endpoint log data is quite voluminous, and for large organizations it can be cost-prohibitive to use or store some or all of it for long periods of time. Guidance, by application/scenario, on which types of log data could replace or complement other log types, along with associated cost estimates, could be extremely helpful, as most organizations operate with constrained budgets.
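
As a back-of-envelope illustration of such cost estimates, assuming hypothetical volumes and storage prices:

```python
# Sketch for comparing retention cost across log types:
# daily volume x retention x per-GB-month storage price.
# Volumes and prices below are hypothetical placeholders.
def monthly_cost_usd(gb_per_day: float, retention_days: int,
                     price_per_gb_month: float) -> float:
    stored_gb = gb_per_day * retention_days  # steady-state footprint
    return stored_gb * price_per_gb_month

# e.g., verbose endpoint logs vs. more compact network metadata
print(round(monthly_cost_usd(500, 180, 0.03)))  # 2700
print(round(monthly_cost_usd(80, 365, 0.03)))   # 876
```

Even rough per-type arithmetic like this makes the replace-or-complement trade-offs concrete during planning.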
