Digital Security
Aggregate vulnerability scores don’t tell the whole story – the relationship between a flaw’s public severity rating and the specific risks it poses for your company is more complex than it seems
13 Dec 2024
Mention vulnerability and patch management to a cybersecurity team and they all have the same dismayed look of fatigue and exhaustion. The CVE database continues to grow at a considerable pace, with far too many of the known vulnerabilities starting life as zero-days. When Ankur Sand and Syed Islam, two diligent cybersecurity professionals from JPMorganChase, took to the stage at Black Hat Europe with a presentation titled “The CVSS Deception: How We’ve Been Misled on Vulnerability Severity”, the room was overflowing.
The presenters analyzed Common Vulnerability Scoring System (CVSS) scores to highlight how the pain point of vulnerabilities and patching could potentially be reduced. (Note that while their analysis focused on version 3 of the methodology, rather than the current version 4, they mentioned that, at a high level, they expect broadly similar conclusions.)
They covered six areas that need additional clarity to help teams make informed decisions on the urgency to patch. I am not going to repeat all six in this blog post, but there are a couple that stood out.
The hidden risks behind CVSS scores
The first concerns how a vulnerability's impact is scored: impact is broken down into confidentiality, integrity and availability, each is scored individually, and those scores are combined into the aggregate score that is eventually published. If one of the three categories receives the maximum score but the other two do not, the overall severity is pulled down. This can lower an otherwise high score – for example, in their analysis this typically takes an 8+ down to a 7.5. In 2023 alone, the team cited 2,000 instances where this happened.
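This dilution effect can be seen directly in the CVSS v3.1 base-score arithmetic. The sketch below uses the impact weights and the "roundup" rule from the published v3.1 specification, with an illustrative network-exploitable vector (AV:N/AC:L/PR:N/UI:N) chosen for simplicity; it is one way a score can land at exactly 7.5, not necessarily the presenters' specific example.

```python
# Minimal sketch of the CVSS v3.1 base-score calculation for an
# unchanged-scope vulnerability, to show how one maxed-out impact
# category is diluted by two empty ones.

# Impact sub-score weights from the CVSS v3.1 specification:
# High = 0.56, Low = 0.22, None = 0.0
WEIGHT = {"H": 0.56, "L": 0.22, "N": 0.0}


def roundup(x: float) -> float:
    """CVSS v3.1 'Roundup': round up to one decimal place,
    using integer arithmetic to avoid floating-point drift."""
    i = round(x * 100000)
    if i % 10000 == 0:
        return i / 100000.0
    return (i // 10000 + 1) / 10.0


def base_score(c: str, i: str, a: str) -> float:
    """Base score for an illustrative AV:N/AC:L/PR:N/UI:N, scope-unchanged
    vector, varying only the C/I/A impact ratings ('H', 'L' or 'N')."""
    # The three impact sub-scores are combined multiplicatively...
    iss = 1 - (1 - WEIGHT[c]) * (1 - WEIGHT[i]) * (1 - WEIGHT[a])
    impact = 6.42 * iss  # scope unchanged
    # ...then added to exploitability (AV:N=0.85, AC:L=0.77, PR:N=0.85, UI:N=0.85)
    exploitability = 8.22 * 0.85 * 0.77 * 0.85 * 0.85
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))


# All three categories High: a critical score
print(base_score("H", "H", "H"))  # 9.8
# Confidentiality maxed out, but integrity/availability None: drops to 7.5
print(base_score("H", "N", "N"))  # 7.5
```

The second vector is a total loss of confidentiality, yet because the other two categories contribute nothing, the published aggregate falls below a hypothetical "8+" patching threshold.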
For organizations with a policy prioritizing CVSS scores of 8+ in their patching queues, a 7.5 would not be a priority – despite the flaw scoring 8+ in a single category. And where that one category is the most important in a specific instance, the vulnerability may not receive the urgency and attention it warrants. While I have every sympathy with the issue, we should also appreciate that the scoring system has to start somewhere and be, to a certain extent, applicable to everyone; also, remember that it does evolve.
The other issue they raised, which seemed to spark interest among the audience, is that of dependencies. The presenters highlighted how some vulnerabilities can only be exploited under specific conditions. If a high-scoring vulnerability also requires X & Y in order to be exploited, and these don't exist in some environments or implementations, then teams may be rushing to patch when the priority could be lower. The challenge here is knowing, in granular detail, what assets are in play – something only a well-resourced cybersecurity team may achieve.
Unfortunately, many small businesses sit at the other end of the resourcing spectrum, with little to no resource available to even operate effectively. And maintaining an in-depth view of all the assets in play, down to the dependencies within each asset, may be a stretch too far. The mention of Log4j makes the point here – many companies were caught off guard and did not know they relied on software that contained this open source code.
Every company has its own unique technology environment with varying policies, so no solution will ever be perfect for everyone. On the other hand, I'm sure more comprehensive data and evolved standards will help teams make their own informed judgements on vulnerability severity and patching priority according to their own company policies. But for smaller companies, I suspect the pain of needing to patch based on the aggregated score will remain; automation, where possible, is likely the best answer.
An interesting angle on this topic may be the role of cyber-insurers, some of which already alert companies to the need to patch systems based on vulnerability disclosures and patches being publicly available. As cyber-insurance policies require more in-depth knowledge of a company's environment to ascertain the risk, insurers may have the granular insights needed to prioritize vulnerabilities effectively. This creates a potential opportunity for insurers to assist organizations in minimizing risk, which ultimately benefits both the company's security posture and the insurer's bottom line.
Discussions on standards such as CVSS show just how important it is for these frameworks to keep up with the evolving security landscape. The presentation by the JPMorganChase team shed light on some key issues and added real value to the conversation, so I applaud them on a great presentation.