A somewhat DREADful vulnerability scoring method
A critique of (and homage to) one of the original cybersecurity rubrics.
Following up on my previous analyses of the CVSS and Microsoft SDL bug bar vulnerability evaluation methods, I will now turn to a third and final existing model before starting the process of building my own: DREAD. Although this is a somewhat older framework that does not appear to be in common usage any longer, exploring some aspects of it is worthwhile, as they highlight important considerations when developing a cyber risk management system.
DREAD - another Microsoft framework introduced in 2002 - is an acronym representing the following characteristics of a vulnerability:
Damage potential
Reproducibility
Exploitability
Affected users
Discoverability
These five components appear to map roughly to the impact x likelihood model I have discussed previously: the first and fourth items approximate impact, while the remaining three combine to represent likelihood. At a surface level, this seems like a reasonable method for vulnerability evaluation, as these are all important factors to consider. The biggest problem with DREAD, though, is that it offers no easy way to reflect the fact that some attacks are not merely incrementally, but exponentially, worse and/or more likely than others.
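For concreteness, here is a minimal sketch of the scoring arithmetic in Python, assuming the common convention of rating each component from 1 to 10 and summing for a maximum of 50 (the function name and validation are my own, not part of any standard):

```python
# A minimal sketch of classic DREAD scoring, assuming each component
# is rated from 1 to 10 and the five ratings are summed (max 50).

def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> int:
    """Sum the five DREAD components, each rated 1-10."""
    components = [damage, reproducibility, exploitability,
                  affected_users, discoverability]
    if not all(1 <= c <= 10 for c in components):
        raise ValueError("each DREAD component must be rated from 1 to 10")
    return sum(components)

print(dread_score(10, 10, 10, 10, 10))  # 50, the maximum possible score
```

Because the components simply add, a single catastrophic factor can contribute at most ten points, which is the root of the problem I just described.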
Another aspect of DREAD that I take issue with is the fact that some - including the framework’s authors - suggest that the discoverability score always be set to 10. Microsoft notes that the “safest approach is to assume that any vulnerability will eventually be taken advantage of and, consequently, to rely on the other measures to establish the relative ranking of the threat.” While I would say that this is the simplest assumption, I would posit that it is not the safest one. Organizations do a terrible job evaluating opportunity costs, and if you assume every vulnerability has the same level of discoverability, you might very well mis-prioritize fixing them.
The genesis of this assumption is a long-standing tenet in the information security community that “security by obscurity” is necessarily poor practice. As several others have concisely noted, this is not always the case. From a purely probabilistic perspective, a more easily found vulnerability is more likely to be exploited than a well-hidden one, all other things being equal. Thus, the risk posed by the former is absolutely greater than that posed by the latter.
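A back-of-the-envelope expected-loss calculation makes the point; every figure below is invented purely for illustration:

```python
# Back-of-the-envelope expected-loss arithmetic for the discoverability
# argument. Every figure here is invented purely for illustration.

impact = 1_000_000      # assumed damage in dollars if exploited
p_easy_to_find = 0.9    # assumed exploitation likelihood when easily discovered
p_well_hidden = 0.1     # assumed likelihood when well hidden

print(impact * p_easy_to_find)  # 900000.0 expected loss
print(impact * p_well_hidden)   # 100000.0 expected loss
```

Setting discoverability to 10 across the board erases that ninefold difference in expected loss, which is exactly the mis-prioritization risk described above.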
Finally, despite being ostensibly quantitative, DREAD is by consensus actually a qualitative standard. This is because arriving at the value for each component relies on the subjective judgement of the analyst, guided by a set of suggested values. The lack of objectivity can lead to different outcomes when different people interpret the available information.
To give an example of how DREAD would apply to a real-world scenario, I will re-examine a scenario from a previous post. Check it out for a full review, but in essence I compare the relative risk posed to an organization by two security vulnerabilities: an unencrypted laptop containing protected health information (PHI) and an exposed application programming interface (API) that leaves some trivial data open to loss or corruption.
The unencrypted laptop with PHI on it would likely rank as a 41 on the DREAD scale:
Damage potential: 10 (losing PHI is the worst possible thing that could happen to a company entrusted with it)
Reproducibility: 10 (this attack vector works 100% of the time)
Exploitability: 10 (accessing the unencrypted information is a trivial exercise given publicly available information and tools)
Affected users: 1 (only one user of the system - the employee whose laptop is stolen - is impacted, although admittedly many more customers/patients are also affected)
Discoverability: 10 (a cursory review of the laptop would reveal that its contents are unencrypted)
Similarly, the exposed API on the web application would also score as a 41:
Damage potential: 1 (this information is extremely low value and its loss creates no liability for the company)
Reproducibility: 10 (same as above)
Exploitability: 10 (writing a simple script or crafting the appropriate URL allows for easy exploitation of the vulnerability)
Affected users: 10 (all users of the web application)
Discoverability: 10 (this could be easily discovered with automated tools)
As I pointed out previously, however, these are two very different vulnerabilities and not even on the same order of magnitude. Exploitation of the former could result in millions of dollars of fines while an attack targeting the latter might cause some minor nuisance for the victim.
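The tie falls directly out of the arithmetic. A quick check, using the component values listed above:

```python
# Scoring both scenarios with a straight sum of the five DREAD components
# (damage, reproducibility, exploitability, affected users, discoverability),
# using the values from the lists above.

laptop_phi = sum([10, 10, 10, 1, 10])   # unencrypted laptop with PHI
exposed_api = sum([1, 10, 10, 10, 10])  # exposed API with trivial data

print(laptop_phi, exposed_api)          # 41 41
print(laptop_phi == exposed_api)        # True: identical scores, very different risk
```

Nothing in the sum distinguishes a ten earned through catastrophic damage potential from a ten earned through broad but trivial reach.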
The authors of DREAD themselves acknowledge that these types of potentially erroneous outcomes are possible. As a result of this admission and the underlying facts, DREAD appears at present to be in less common usage than other methods of vulnerability scoring. By isolating the various factors that contribute to the risk profile of a given vulnerability, however, DREAD was an important milestone in cybersecurity history, and I am thankful to its creators for their pioneering work.
Regarding building vulnerability remediation timelines around DREAD, I have seen a few examples. Some are purely relative and qualitative, providing timelines such as “immediately” and “within a short time frame.” If you have stuck with me since the beginning, you will know that I consider such recommendations to be unhelpful. Other systems suggest deadlines based on the mapping of the numerical DREAD score to its qualitative description (e.g. anything with a score of 40-50 is “Critical”):
Critical: 14 days or less
High: 14-30 days
Medium: 90-180 days
Low: 180-270 days
Informational: developer discretion/customer requirement
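As a sketch of how such a policy might be encoded, note that only the 40-50 “Critical” band appears above; the remaining cut-offs in this example are my own illustrative assumptions:

```python
# A sketch of mapping a summed DREAD score to a severity label and a
# remediation deadline. Only the 40-50 => "Critical" band comes from the
# text above; the remaining cut-offs are illustrative assumptions.

SEVERITY_BANDS = [
    (40, "Critical", "14 days or less"),
    (30, "High", "14-30 days"),
    (20, "Medium", "90-180 days"),
    (10, "Low", "180-270 days"),
    (0, "Informational", "developer discretion/customer requirement"),
]

def remediation_policy(score: int) -> tuple[str, str]:
    """Return (severity, deadline) for a summed DREAD score (5-50)."""
    for floor, severity, deadline in SEVERITY_BANDS:
        if score >= floor:
            return severity, deadline
    raise ValueError("score must be non-negative")

print(remediation_policy(41))  # ('Critical', '14 days or less')
```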
Having numerical timelines is certainly preferable to ambiguous, qualitative ones. Additionally, remediation regimes such as this one reflect an important fact: organizations should focus disproportionate energy on remediating the top quartile of perceived risk stemming from known vulnerabilities.
Unfortunately, if the scoring rubric upon which you base your policy does not actually map to the true risk you face, then the timelines you draw up for remediation will be arbitrary at best and counterproductive at worst. Furthermore, as I have also mentioned previously, business leaders rather than engineers should have the authority to accept risk, meaning that resolving vulnerabilities should not be at “developer discretion.”
Thus, I would advise against building a vulnerability management policy or program on top of the DREAD system. Its development has contributed much to the field of cybersecurity and there is much to be learned from it, but it is dated and no longer very useful, taken by itself.
Borrowing from pieces of the three systems I have evaluated thus far - CVSS, SDL Bug Bar, and DREAD - along with some other inputs, I will eventually move towards building my own lightweight model for evaluating cyber risk.