Less is More…|more or |less
|less — In Detection Engineering and Threat Hunting, quality beats quantity every time.
|more — If the title of this blog post brings a smile to your face, chances are we have a lot in common. Back in my day, OS/2 Warp was one of the coolest operating systems around, Token Ring networks were the go-to topology, Lotus 1-2-3 was a "killer app", and TikTok was just the sound that clocks made.
This may feel a bit like another Boomer rant (maybe it is), but in today's age of hyper-consumerism, we've normalized the expectation that tens, hundreds, sometimes even thousands of choices are available to us whenever we're looking for a solution to any given problem. Whilst I'm not advocating for the polar opposite, I do find, at least for myself, that this can prompt a bit of 'analysis paralysis'.
This tendency is in our nature as humans; given options, we’re inclined to spend cycles going through them all and performing a mental |diff -u.
If it's just me shopping for the latest gadget, I can spend an almost unlimited amount of time 'researching' and diving deep into the weeds with no truly negative effect. However, if I'm at the supermarket and presented with similar options, my significant other would not be pleased if I spent two hours trying to choose between store-brand, name-brand domestic, and exotic imported goods.
When it comes to building or consuming new detection rules for our SIEM, or queries and searches for Threat Hunting, our manager will be even less pleased if we fall prey to the same paralytic behavior.
The question is: how many detections are enough detections? If you really think about it, you can only deploy so many detections in your SIEM/EDR before you either hit a hard-coded limit, hit a wall in system performance that brings the platform down (and gives you unlimited time off…), or overwhelm your SOC analysts.
Either way, you're not accomplishing your task, and so again, your manager will be less than pleased. So, in answering the question of how many detections are enough detections, we might conclude the correct answer is: the ones that actually work well (we'll cover 'just the ones that you need/fill in the gaps' in a future episode).
Now, this would appear to be an overly obvious statement, but how do we truly know which detections will actually work, and work well? To do that, we really need to define what we mean by 'detections that actually work' and 'detections that work well'.
For us (and hopefully you), ‘detections that work’ trigger on true-positive events and ‘detections that work well’ trigger those alerts in a manageable, actionable, and repeatable way.
Again, that seems basic, yet most detections we run across have never been truly tested, let alone gone through a standardized process for creation. This tends to lead to bloat in our deployed detections and our workflow (or an increase in cycles as we go about evaluating and refining those detections).
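To make those two definitions concrete, here is a toy Python sketch (it assumes nothing about any particular SIEM, query language, or SnapAttack's tooling): a detection modeled as a predicate over log events, checked against labeled sample events before deployment. The field names and the encoded-PowerShell scenario are purely illustrative.

```python
# Hypothetical sketch: a "detection" as a predicate over log events,
# validated against labeled sample events before it ever ships.

def suspicious_powershell(event):
    """Flag PowerShell launched with an encoded command."""
    return (
        event.get("process") == "powershell.exe"
        and "-enc" in event.get("command_line", "").lower()
    )

# Labeled sample events: one known-malicious, one routine.
malicious = {"process": "powershell.exe",
             "command_line": "powershell -enc SQBFAFgA..."}
benign = {"process": "powershell.exe",
          "command_line": "powershell -File backup.ps1"}

# A detection "works" if it fires on the true positive...
assert suspicious_powershell(malicious)
# ...and "works well" if it stays quiet on everyday activity.
assert not suspicious_powershell(benign)
```

The point of the sketch is simply that both properties are testable claims, not opinions: you need labeled malicious and benign events to verify either one.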
So, what can be done about that?
Detections That Work
Well, here at SnapAttack, our Threat Detection platform and in-house Purple Team have developed both standardized processes and tooling for creating detections that are tested from the start. We do this by creating detections directly from events captured as part of a Sandbox exercise, which we call Threat Sessions.
Figure A — Sandbox Threat Session

During these Sandbox exercises, our in-house Purple Team reproduces malicious activity to capture all the events, log files, keystrokes, and video. From there, they use this research data to A) create new detections based directly on the malicious events captured in the Sandbox exercise and B) re-test both new and old detections to identify which ones trigger on true-positive events.
Figure B — Sandbox-Tested Validated Detection

Ultimately, this empowers the SnapAttack team to produce better detections more quickly and enables the platform to indicate which detections, from SnapAttack's growing library of pre-written rules and queries, "actually work".
Detections That Work Well
Now I know what you're thinking: "Detections that trigger on way too many things, even if they are 'true positives', aren't that useful." We agree; noisy detections are not detections that "work well". It is for this reason that the SnapAttack platform also incorporates what we call 'Detection Confidence': a way to programmatically, and proactively, test and measure how noisy a specific detection is in your unique production environment, not some test lab.
Figure C — Confidence Tailoring Dashboard

By running all of our detections as searches against your existing events and data, a process we call 'Confidence Tailoring', our platform essentially "ranks" the performance of each detection, so that consumers of the detection library know exactly how a piece of detection content will perform in their own environment without having to embark upon a tedious, time-consuming testing process.
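As a rough illustration of the idea (the thresholds, tier names, and scoring below are hypothetical, not SnapAttack's actual algorithm), ranking detections by their hit volume against historical production events might look like this:

```python
# Hypothetical sketch of "confidence tailoring": run each detection as a
# search over historical production events and rank it by hit volume.
# Thresholds and tier names are illustrative only.

def tailor_confidence(detections, events, low_noise=5, high_noise=50):
    """Return a (tier, hit_count) pair per detection based on historical hits."""
    report = {}
    for name, predicate in detections.items():
        hits = sum(1 for event in events if predicate(event))
        if hits <= low_noise:
            report[name] = ("high confidence", hits)    # quiet: alerts are actionable
        elif hits <= high_noise:
            report[name] = ("medium confidence", hits)  # tune before deploying
        else:
            report[name] = ("low confidence", hits)     # too noisy as written
    return report

# Toy historical data: lots of routine activity, a couple of rare events.
history = [{"process": "powershell.exe"}] * 100 + [{"process": "mimikatz.exe"}] * 2

detections = {
    "any_powershell": lambda e: e["process"] == "powershell.exe",  # fires constantly
    "mimikatz_exec": lambda e: e["process"] == "mimikatz.exe",     # fires rarely
}

report = tailor_confidence(detections, history)
```

Here the broad "any PowerShell" rule lands in the low-confidence tier because it would bury analysts in alerts, while the narrowly scoped rule ranks high: same data, very different value per detection.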
Conclusion
So, the next time you're looking for detections, start by asking "do the detections actually work?" and "do they work well?"; chances are the answer will be 'More or Less'.
Of course, there’s a lot more that goes into creating Validated, High Confidence Detections but I hope this gives you some ideas on what to consider when reviewing a repository of detections and I hope that helps you see why, in Detection Engineering, Less is More…|more or |less .
If any of this sounds even remotely interesting —
please reach out for a demo of our platform today!
We’d love to show you exactly how we can help you detect more threats faster with SnapAttack.
Less is More…|more or |less was originally published in SnapAttack on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post Less is More…|more or |less appeared first on Security Boulevard.