Why is SCYTHE Building a CTI Team?

Over the course of my cybersecurity career, I’ve been fortunate to work in many different areas, spanning red, blue, purple, and even a little yellow. One thing I’ve seen consistently is that we as an industry place far too much value on our security controls. I don’t mean to imply that security controls are worthless, only that they are often profoundly misunderstood.

For those of us old enough to remember “the good old days” (despite what anyone tells you, they were in fact not that good), you were fortunate if you had a boundary firewall and antivirus with an up-to-date signature subscription. For organizations without even a perimeter firewall, I remember assisting with “net send” incidents where people were “hacked” with evil popup messages on their desktops. You didn’t need to be a rocket scientist to know the solution was to deploy a perimeter firewall or install a host-based one.

The security control landscape today is much more difficult and much more complex. As the control (and threat) landscape has matured, the need for penetration tests has become obvious. While penetration tests and red team exercises are great for control validation, they are unfortunately point-in-time operations. All but the largest and best-funded organizations get, at most, an annual penetration test or red team exercise (yes, I’m aware these are not the same thing). They are, however, changing their security control stacks far more often.

As I’ve noted repeatedly on my Twitter feed,

https://twitter.com/MalwareJake/status/1509893323242287116?s=20&t=5U2305T1UVTXVLfiyUkA_A
https://twitter.com/MalwareJake/status/1508512289623875592?s=20&t=5U2305T1UVTXVLfiyUkA_A

just deploying security controls and expecting to catch even moderately advanced threat actors is a bad plan. Organizations must tune their controls to their specific environment. But every tuning operation may break a previously functioning detection or prevention. If the next pentest is six months away, do you really know your controls are doing what they’re supposed to? 

It should come as no surprise that misconfigured (or simply misunderstood) security controls contributed to an overwhelming percentage of the intrusion investigations I’ve worked. But another extremely common theme is tuning controls for the wrong threats. In my consulting work, I regularly talked to organizations asking whether they should install multiple EDRs for overlapping protection (pro tip: don’t do this). Other organizations spent ridiculous resources on detecting esoteric threat actor techniques in the headlines while completely ignoring threat actors known to be targeting their vertical. It’s clear that security control capabilities are not well understood, even among enterprise stakeholders.

There’s no question about the value of security control validation tests - and that value is magnified exponentially when the tests are repeatable and easy to execute. The validation method gets bonus points if the tests don’t require some Purple Team wizard to execute properly. SCYTHE already has that platform, and I must say I’m excited to join the team.

So why am I here? Customers today are inundated with cyber threat intelligence (CTI) reports. Some detail adversary actions in specific intrusions. Others discuss new techniques that threat actors may be using. CTI reporting has come a long way in the last few years. A decade ago, any CTI report with even file hashes was a rare gem. Then, increasingly, CTI reports began to include Yara rules and other Indicators of Compromise (IOCs).

These are great when you want to detect a malicious file, but what happens when you need to detect a threat actor? When I brief executives and boards, I explain that a malicious file is like a weapon used by a murderer. Take away the knife and they’ll pick up a hammer. Take away the hammer and they’ll use a cheese grater (side note: yuck!). But generally, the threat actor will use the same techniques to wield these tools. CTI analysts needed a better way to report on threat actor behavior rather than simply saying, “look for their tools!”

MITRE addressed this problem with the ATT&CK framework. Almost every CTI report issued in the last few years (the good ones, anyway) includes a list of techniques used by the threat actor, complete with technique codes. This was a generational leap forward in communicating threat actor behaviors using a common schema, and MITRE should be thanked for their tremendous work.

Since we’re discussing techniques and procedures (which admittedly may sound very similar), it’s definitely worth mentioning SCYTHE’s own Christopher Peacock and his work on the TTP Pyramid, an extension of the Pyramid of Pain (https://www.scythe.io/library/summiting-the-pyramid-of-pain-the-ttp-pyramid).

Christopher does a great job of breaking down the difference between Tactics, Techniques, and Procedures - the three elements of the familiar acronym TTP. Check out his fantastic blog post to dive into the definitions of each (and why the differences matter).

(Image credit: David Bianco)

Unfortunately, ATT&CK isn’t optimally useful for validating your security controls against a specific threat described in a CTI report. Let me be clear that this is not an attack on MITRE ATT&CK (pun definitely intended). Trying to use ATT&CK technique codes to infer specific procedures taken by an adversary is like trying to hammer a nail with a snow shovel: it might work in some limited cases, but it’s definitely not the right tool for the job. That’s because MITRE ATT&CK operates at a technique level, but we need procedure-level data to properly emulate the adversary and validate our security controls.

Let’s take MITRE ATT&CK technique T1080 (Taint Shared Content, https://attack.mitre.org/techniques/T1080) as an example case - a very apt selection given all the recent concerns about supply chain security. If a CTI report indicates a threat actor targeted a similar organization using technique T1080, how do you create relevant procedures to validate your controls for this? You might put an .SCF file on a network share, either creating a brand new file or removing a legitimate file and replacing it with an SCF file of the same name to blend in. You might access a logon script in SYSVOL and change its contents to run your payload when a user linked to the logon script next logs on. You might replace an executable stored in a git repository that’s used in the software’s build process. All of these procedures would be included under the technique T1080 - and each likely tests a significantly different set of controls.
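To make the gap concrete, here’s a minimal sketch (in Python, purely for illustration - the procedure descriptions and control names below are my own assumptions, not a published mapping) of how the single technique code T1080 fans out into very different procedure-level tests:

# Hypothetical sketch: the same ATT&CK technique (T1080) expanded into the three
# procedure-level variants described above. Field names and control names are
# illustrative only - not SCYTHE's schema or any published standard.
T1080_PROCEDURES = [
    {
        "procedure": "Plant a .SCF file on a writable network share",
        "controls_exercised": ["File share write auditing", "EDR file-creation telemetry"],
    },
    {
        "procedure": "Modify a logon script in SYSVOL to launch a payload at next logon",
        "controls_exercised": ["SYSVOL change monitoring", "Logon script integrity checks"],
    },
    {
        "procedure": "Replace an executable in a git repository used by the build process",
        "controls_exercised": ["Repository branch protection", "Build artifact verification"],
    },
]

if __name__ == "__main__":
    # Technique-level reporting collapses all of this into the single code T1080;
    # each entry below would validate a very different slice of the control stack.
    for entry in T1080_PROCEDURES:
        print(f"T1080 :: {entry['procedure']}")
        for control in entry["controls_exercised"]:
            print(f"    exercises -> {control}")

A CTI report that only says “T1080” leaves you to guess which of these (if any) the adversary actually did - and therefore which controls you’ve actually validated.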

For large organizations with dedicated CTI teams and sufficient staffing (read: money), picking through CTI narratives to extract (and often infer) the procedure-level data necessary for control validation is within reach. But linking control validation to threat actors shouldn’t be something only larger organizations can do. It should be within the reach of all organizations. To the extent possible, cybersecurity should be democratized.

To that end, I’ll be working at SCYTHE to create an open source taxonomy and schema for describing threat actor actions at a procedure level. Initially, I expect the newly formed SCYTHE CTI team will extract technique-level details from CTI reports and infer the likely procedures in order to build tests that emulate adversary behavior. But over time, as we codify the new standard, we believe others will be more likely to publish procedure-level data natively in their reporting. We’ve observed the same evolution in CTI reporting before: first with IOCs generally, then Yara, then ATT&CK, and most recently we’re starting to see some Sigma adoption.
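To give a sense of where this could go, here’s a purely hypothetical sketch of what a procedure-level record embedded directly in a CTI report might look like. Every field name here is invented for illustration; it is not the schema SCYTHE is building or any existing standard.

import json

# Hypothetical procedure-level record that a future CTI report could embed
# alongside its usual IOCs and ATT&CK technique codes. All field names and
# values are placeholders, not a real or proposed standard.
procedure_record = {
    "technique_id": "T1080",         # ATT&CK technique the procedure rolls up to
    "procedure_id": "EXAMPLE-0001",  # placeholder identifier
    "description": "Replaced a legitimate file on an open share with a .SCF file of the same name",
    "prerequisites": ["Write access to a network share"],
    "observables": ["New or modified .SCF file on the share"],
    "emulation_notes": "For validation, use a benign .SCF pointing at an internal, monitored host",
}

# Machine-readable records like this would let a control validation platform
# consume the report directly, rather than inferring procedures from prose.
print(json.dumps(procedure_record, indent=2))

Whether the eventual schema looks anything like this is an open question - that’s exactly the community conversation I’m hoping to start.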

I’m excited to be involved in this initiative and recognize the challenges ahead. As I discussed this initiative with Bryson Bort and Jorge Orchilles, I noted that we’d need more than a typical CTI analyst to pull this off. We’d need people who can ingest a CTI report and fill in the gaps in a way that’s realistic. We’d need people who understand how attackers think and operate, who know what is (and critically, what isn’t) included in most CTI/incident response reports, and who can build out procedure-level tests that meet customer requirements for control validation.

Over the coming months, I’ll be building a herd (er, team), meeting with existing customers to better understand their CTI requirements, and working with leaders in the threat intelligence space to build a schema that makes sense for the broader community. As I’ve noted throughout, technique-level CTI has a place, but it’s insufficient for continuous emulation of threat actors. Organizations need to know they’re focusing on the right procedures - the ones threat actors are actually using. Anything less is like preparing for a soccer game by testing a bulletproof vest, only to learn later that you’re completely unprotected against the very real threat of a slide tackle.

Obviously, we’re building this standard because it benefits SCYTHE’s customers. But this is truly a rising tide that will lift all ships, SCYTHE customers and non-customers alike. We believe an open standard with community engagement is critical to adoption. If you’re a leader in the threat intelligence space or just have some really good ideas and want to collaborate, please reach out. Naturally, the first few weeks in the new position will be a bit of a whirlwind, so apologies in advance if it takes a few days or a week to close the loop with you.