SC Magazine published an article headlined "SIEM rules ignore bulk of MITRE ATT&CK framework, placing risk burden on users." In the article, Bradley Barth writes about a study showing that SIEM rules covered only 16 percent of the MITRE ATT&CK framework.

I take issue with the core premise of this article. MITRE ATT&CK is a framework for high-level planning and strategic thinking, not a series of checkboxes onto which to overlay a vendor product. We need to avoid turning cybersecurity into checkbox exercises. What do I mean? Read on to hear my thoughts on the SC Magazine article, and to see how we work with customers to improve observability without forcing them into a pre-defined mold.

Let's take T1133, "External Remote Services," as an example. What does this mean from a security data science perspective? The MITRE description is:

Adversaries may leverage external-facing remote services to initially access and/or persist within a network. Remote services such as VPNs, Citrix, and other access mechanisms allow users to connect to internal enterprise network resources from external locations. There are often remote service gateways that manage connections and credential authentication for these services. Services such as Windows Remote Management can also be used externally.

There are two very important words in the MITRE description that I would like to draw your attention to: "such as." It's impossible to buy an out-of-the-box solution that protects against this TTP, because it could involve any number of technologies within a given environment. Think about the unique tapestry that is IT within a given organization: a variety of vendors, all on different versions, glued together with in-house knowledge and scripts that keep operations running. That uniqueness prevents any out-of-the-box solution from covering it, but it's also a great strength. "Know thyself" is extremely important for defensive capability.
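What does "know thyself" look like in practice for T1133? Here's a minimal, hedged sketch in Python: the log shape, field names, service-account list, and internal network ranges are all hypothetical stand-ins for baselines only your organization actually has.

```python
import ipaddress
from datetime import datetime

# Hypothetical VPN gateway auth records; real field names depend on your vendor.
events = [
    {"time": "2023-04-01T03:12:00", "user": "svc-backup", "src_ip": "203.0.113.14", "result": "success"},
    {"time": "2023-04-01T09:05:00", "user": "alice", "src_ip": "198.51.100.7", "result": "success"},
    {"time": "2023-04-01T09:06:00", "user": "bob", "src_ip": "10.1.2.3", "result": "success"},
]

# "Know thyself": these baselines are environment-specific assumptions.
SERVICE_ACCOUNTS = {"svc-backup"}  # accounts that should never log in remotely
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]

def is_internal(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def suspicious(event):
    """Return the reasons an external remote-service login looks odd."""
    reasons = []
    if event["result"] != "success":
        return reasons
    if not is_internal(event["src_ip"]):
        if event["user"] in SERVICE_ACCOUNTS:
            reasons.append("service account logging in from external address")
        hour = datetime.fromisoformat(event["time"]).hour
        if hour < 6:  # arbitrary "outside business hours" cutoff
            reasons.append("external login outside business hours")
    return reasons

hits = {}
for e in events:
    reasons = suspicious(e)
    if reasons:
        hits[e["user"]] = reasons
print(hits)  # flags only the service-account login from outside
```

The point isn't the specific rules; it's that every line of the baseline section is knowledge a product can't ship in a box.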

What does it actually look like to utilize MITRE ATT&CK for cybersecurity? Let me share a bit about customer onboarding here at Gravwell, which really begins during the proof-of-value part of the sales process. The first step on the Technical Track of our PoV is "Test plan agreement use case confirmed." This is both parties working together to identify the type of data crunching we want to perform and what the success metrics are. Here's the slide we walk through to build up that use case:


[Slide: Typical POC Test Plan]


A MITRE ATT&CK gap analysis should look pretty close to this process: choose a TTP, then perform this loop for that TTP until all relevant data sources are being ingested for analysis, or until we accept the risk of leaving some uncollected.


  1. Identify assets, network devices, or data sources affected by the TTP
    1. List relevant assets
    2. Prioritize
  2. Collect events into analytics platform (or SIEM)
    1. Ingestion capacity
    2. Retention timeframe
    3. Indexing strategy
    4. Identify ingestion method (syslog? File? other?)
    5. Measure data rates and set parameters
    6. Validate search and hunting capability
  3. Simulate attacker using manual techniques or attacker simulation tools
    1. If manual, make sure activity is well documented and timestamped
    2. If attacker simulation, collect those events alongside those from #2
  4. Verify blue team observability of TTP
    1. Sources should reflect attacker activity
    2. Blue team timeline should match up with the attacker's
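Step 4 of the loop above boils down to a timeline comparison: every timestamped attacker action should have matching telemetry in the analytics platform. Here's a hedged sketch in Python; the record shapes, timestamps, and 30-second matching window are illustrative assumptions, not how any particular product implements this.

```python
from datetime import datetime, timedelta

# Step 3a: hypothetical red-team activity log, documented and timestamped.
red_team = [
    {"time": "2023-04-01T10:00:00", "action": "external VPN login"},
    {"time": "2023-04-01T10:05:00", "action": "lateral movement via WinRM"},
    {"time": "2023-04-01T10:20:00", "action": "credential dump"},
]

# Step 2: events actually collected into the analytics platform.
collected = [
    {"time": "2023-04-01T10:00:03", "source": "vpn-gateway"},
    {"time": "2023-04-01T10:05:02", "source": "winrm-host"},
]

WINDOW = timedelta(seconds=30)  # assumed clock-skew tolerance

def parse(t):
    return datetime.fromisoformat(t)

def observed(action, events, window=WINDOW):
    """Step 4: an action is observable if some collected event lands near it."""
    t = parse(action["time"])
    return any(abs(parse(e["time"]) - t) <= window for e in events)

# Actions with no matching telemetry: ingest more sources, or accept the risk.
gaps = [a["action"] for a in red_team if not observed(a, collected)]
print(gaps)
```

Here the credential dump produced no nearby event, so it would go back into step 1 of the loop: find the data source that sees that activity, or consciously accept the gap.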

We really like working with our customers to improve observability, and frameworks like MITRE ATT&CK are very useful in that process. The cybersecurity community needs to be careful about expecting out-of-the-box solutions to assume the "risk burden" when that's impossible. No two organizations have the same environments, data sources, or vendor overlap. I worry that the attitudes in the SC Media article encourage a "race to the bottom" and de-emphasize the importance of having great people who know the environment. Pre-fab searches and automations are tools to make great people incredible, not a way to replace them or to avoid investing in great people in the first place.

I've spent many an hour staring at IDA trying to track down a bug, and the same goes for the red-team side of things. Attempting to automate away red-team activity didn't work the first 30 times we tried it, and it's not going to work now, even with AI. While there are some really cool things being done on this front, I agree with Jorge at Scythe (shoutout to them for their work on step #3 above):


Organizations should be looking to analytics tools to enhance their people, and they should demand a sales process that's more consultative. We like to meet customers where they are and right-fit Gravwell to their unique environment; we don't force people into a pre-defined mold. That attitude is everywhere in the company and in the product. If you want to talk more about that with a human, press this button:
Schedule a Demo