A global survey of 1,224 security, development, and IT operations professionals published today by JFrog finds that 60% typically spend four days or more remediating application vulnerabilities in a given month.
JFrog security researchers, however, noted it’s probable most of those vulnerabilities are not as severe as they have been rated. After analyzing 212 vulnerabilities, the JFrog Security Research team downgraded the severity of 85% of the vulnerabilities rated as critical and 73% of those rated as high.
JFrog researchers also found that 74% of the reported common vulnerabilities and exposures (CVEs) with high and critical CVSS scores assigned to the top 100 Docker Hub community images weren’t actually exploitable.
Only 17% of the vulnerabilities analyzed enabled remote code execution, compared to 44% that could enable a denial of service (DoS) attack, the researchers noted.
Stephen Chin, senior director of developer relations at JFrog, said that analysis suggests organizations are arguably spending too much time on application security at the expense of productivity. Tools that generate alerts based on severity alone, rather than also rating exploitability, create too much noise simply because they don’t provide enough contextual analysis, he added.
Conversely, in the absence of that level of detailed analysis, a vulnerability rated low that is actually especially severe in a specific IT environment might never be investigated further, resulting in a false negative, noted Chin.
Security concerns, however, also limit innovation, with 40% of the survey respondents noting that because of security reviews, it typically takes a week or longer to get approval to use a new package/library.
Nearly half of IT professionals (47%) say they use between four and nine application security tools, with 90% reporting they use some type of tool that relies on machine learning algorithms or other forms of artificial intelligence (AI) to scan, identify and remediate vulnerabilities.
Less than a third (32%), however, work for organizations that use AI to write code. That relatively low level suggests many organizations are still not comfortable with the quality of the code generated, noted Chin. Of course, individual developers may still be using those tools without approval.
More than half (53%) say their organization uses between four and nine programming languages, with nearly a third (31%) reporting they use 10 or more.
The most frequently used tools are static application security testing (61%), dynamic application security testing (58%), software composition analysis testing (56%), and application programming interface (API) security (56%).
A full 89% said their organization has adopted a security framework such as OpenSSF or Supply-chain Levels for Software Artifacts (SLSA).
Finally, the survey finds little consensus concerning when to run security scans: 42% say it’s best to scan as code is being written, versus 41% who prefer to scan new software packages before installing them. A total of 41% said runtime is the least desirable place to run scans. More than half (56%) said their organization applies security scans at both the code and binary levels.
The degree to which DevSecOps best practices are being adopted will naturally vary from one organization to the next. The challenge is to prioritize remediation efforts in a way that enables developers to spend more time writing, for example, business logic versus patching vulnerabilities that are not actually a material threat. In that context, the quality of the vulnerability analysis generated by scanning tools matters because of the unnecessary toil it might eliminate, noted Chin.
It may be a while before most organizations are able to strike the right balance when it comes to DevSecOps, but as a greater appreciation for the nuances of application security takes hold, there should one day be less stress for all concerned.