In October, I mentioned the numeric precision presented in the Fall 2010 report, and that seems to have been addressed. Helpfully, some of the standard deviation values are now provided (see Figure 1 on page 3 of the report), and these show how careful you have to be about drawing conclusions from the figures, even from the rankings.
The report says the web sites tested "represent the most important and secure websites on the Web, owned by organizations that are very serious about their security", but it would still be useful to know the scale of the applications sampled (e.g. number of application entry points, estimates of functionality, lines of code). Is there a correlation between application size and the number of vulnerabilities found?
Nevertheless, this is a useful document to compare your own application penetration test results against, although any comparison depends on how you define a single vulnerability and how you assess the business risk.
The current report mentions it has changed the way vulnerabilities are counted. On page 4 in "Vulnerability Prevalence", the report says vulnerabilities are counted by unique web application (i.e. shared vulnerabilities are counted once for each application) and for each class of attack, though I am less sure what that means in terms of the numbers. But it also says that every individual parameter of a URL with vulnerabilities is counted as a separate vulnerability, and the count increases for every way the parameter can be "exploited" (see the sketch below). This change in method, together with the improvement in identification techniques mentioned on page 6, means it is difficult to compare this report with previous editions.
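To make the difference concrete, here is a minimal sketch of the two counting methods as I read them. The findings data and field names are hypothetical, invented for illustration, and not taken from the report:

```python
# Hypothetical findings: (site, vuln_class, url, parameter, exploit_method).
findings = [
    ("site-a", "XSS",  "/search", "q",    "reflected"),
    ("site-a", "XSS",  "/search", "q",    "DOM-based"),
    ("site-a", "XSS",  "/login",  "user", "reflected"),
    ("site-b", "SQLi", "/item",   "id",   "error-based"),
]

# Counting once per unique (application, attack class): shared
# vulnerabilities are counted once for each application.
per_app_and_class = len({(site, cls) for site, cls, *_ in findings})

# Counting every vulnerable parameter separately, and again for each
# way that parameter can be "exploited".
per_parameter = len({(site, url, param, method)
                     for site, cls, url, param, method in findings})

print(per_app_and_class)  # 2
print(per_parameter)      # 4
```

The same four findings give a total of either 2 or 4 depending on the method, which is why the change matters when comparing year on year.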
In terms of risk, the report uses "serious" to mean "vulnerabilities with a high, critical or urgent severity as defined by PCI DSS naming conventions" but also adds "Exploitation could lead to breach or data loss". In PCI, this relates to:
- Level 5 Urgent - Trojan Horses; file read and write exploits; remote command execution (Level 5 vulnerabilities provide remote intruders with remote root or remote administrator capabilities).
- Level 4 Critical - Potential Trojan Horses; file read exploit (Level 4 vulnerabilities provide intruders with remote user, but not remote administrator or root user capabilities).
- Level 3 High - Limited exploit of read; directory browsing; denial of service (Level 3 vulnerabilities provide hackers with access to specific information stored on the host, including security settings, and could result in potential misuse of the host by intruders).
PCI DSS is focused on protecting payment card data, so I wonder how well those severity levels map to the wider range of web application vulnerabilities. Also, the inclusion of undefined terms like "breach" and "data loss" worries me slightly, since there are business impacts, such as data damage, which do not involve theft and which are not financial losses. And what about impacts on users rather than the organisation: do they matter? It would be good to have this discussion.
While Number of Vulnerabilities and Remediation Rate (Figure 1) and Window of Exposure (Figure 2) are provided for these serious vulnerabilities, it is not clear whether the Overall Top Ten Vulnerabilities (Figure 4) includes only "serious" vulnerabilities. Figure 4 shows Information Leakage tying with Cross-Site Scripting in prevalence, where prevalence is the percentage likelihood of finding at least one vulnerability of that category in a web site (sketched below).
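As a minimal sketch of that prevalence calculation, assuming a hypothetical set of sites and findings:

```python
# Prevalence of a vulnerability class: the percentage of web sites
# with at least one vulnerability of that class. The sites and
# findings here are hypothetical.
sites = {
    "site-a": {"Cross-Site Scripting", "Information Leakage"},
    "site-b": {"Information Leakage"},
    "site-c": {"SQL Injection"},
}

def prevalence(vuln_class, sites):
    hits = sum(1 for found in sites.values() if vuln_class in found)
    return 100.0 * hits / len(sites)

print(prevalence("Information Leakage", sites))  # 66.66... (2 of 3 sites)
```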
It would be good to have some industry-wide guidance on counting and ranking vulnerability data for application testing (distinct from the conventions used for network/infrastructure penetration testing). The WhiteHat report is generously open, but it still leaves some questions in my mind.
Posted on: 05 April 2011 at 08:01 hrs