A survey of 800 DevOps and application security operations (SecOps) leaders published today found that nearly all of them (97%) are using generative artificial intelligence (AI).
Conducted by the market research firm Sago on behalf of Sonatype, the survey found that, beyond using it to write code, 45% of the 400 security operations professionals polled have already embedded generative AI into their workflows, compared to 31% of the 400 DevOps professionals.
Security operations teams also reported greater time savings than their DevOps counterparts: 57% said generative AI saves them at least six hours a week, compared to only 31% of DevOps respondents who reported the same benefit.
The top benefits cited by security operations teams are increased productivity (21%) and faster issue identification and resolution (16%), while DevOps professionals most often cited faster software development (16%) and more secure software (15%).
It’s still early days for embedding generative AI in workflows, but it’s clear the ability to write code faster is trumping concerns about the additional vulnerabilities that might find their way into application environments. The most widely used generative AI platforms are trained on a massive corpus of data that includes code of varying quality. As such, a lot of the code generated by a general-purpose AI platform will likely have as many errors, including vulnerabilities, as the code used to train it.
The level of faith organizations place in that code will be heavily influenced by the use case. A script that automates an internal DevOps workflow that is not externally facing is one example of where a general-purpose AI platform can boost productivity with little risk; vulnerable code that finds its way into an externally facing web application is another matter entirely.
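To make that distinction concrete, here is a minimal sketch, not drawn from the survey, of the kind of internal housekeeping task where AI-generated code is comparatively low-stakes. The directory path and retention window are hypothetical placeholders:

```python
# Hypothetical illustration of an internal, non-externally-facing automation
# task: prune build artifacts older than a retention window from a local
# directory. A defect here wastes disk space; it does not expose users.
import time
from pathlib import Path

RETENTION_DAYS = 14                      # assumed retention policy
ARTIFACT_DIR = Path("build/artifacts")   # hypothetical artifact directory


def prune_old_artifacts(directory: Path, retention_days: int) -> int:
    """Delete files older than retention_days; return how many were removed."""
    cutoff = time.time() - retention_days * 86_400
    removed = 0
    for path in directory.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed


if __name__ == "__main__":
    if ARTIFACT_DIR.exists():
        count = prune_old_artifacts(ARTIFACT_DIR, RETENTION_DAYS)
        print(f"Removed {count} stale artifacts from {ARTIFACT_DIR}")
```

The same class of error that is a nuisance in a script like this could be a breach in code that handles untrusted input on the web.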
Despite those concerns, however, the survey finds three-quarters (75%) of all respondents feel pressure to use generative AI.
There are, of course, already large language models trained on code that has been vetted by software engineers. In the months ahead, the code generated by AI platforms specifically designed for that purpose should significantly improve.
Sonatype CTO Brian Fox said it’s only a matter of time before IT organizations focus more on ensuring that generative AI platforms are tuned for specific tasks. In the meantime, however, application security may get worse before it gets better: More than three-quarters of DevOps respondents expect generative AI to result in more vulnerabilities in open source code, while only 58% of security operations professionals share that concern.
It will be up to the engineers who make up these teams to bring discipline to code quality practices and make sure the code running in production environments is of high quality, regardless of how it was created, Fox noted. For the moment, the issue is that AI augmentation is occurring at a faster pace than the AI advances that should one day help DevOps teams cope with ever-larger codebases, he added.
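One way teams apply that discipline regardless of a change’s origin is to gate every merge behind the same automated security checks. The sketch below assumes Bandit, a real open source Python security linter, is installed; the "src" default directory is a hypothetical placeholder, and any comparable scanner would serve:

```python
# Minimal sketch of an origin-agnostic quality gate: run the same static
# security analysis over every change, whether a human or an AI wrote it.
# Assumes the Bandit CLI is on PATH (pip install bandit).
import subprocess
import sys


def scan(paths: list[str]) -> int:
    """Run Bandit recursively over the given paths; return its exit code."""
    result = subprocess.run(["bandit", "-r", *paths])
    return result.returncode


if __name__ == "__main__":
    targets = sys.argv[1:] or ["src"]  # hypothetical default source directory
    code = scan(targets)
    if code != 0:
        print("Security findings detected; blocking merge.", file=sys.stderr)
    sys.exit(code)
```

Because the gate inspects the code itself rather than its provenance, it holds AI-generated contributions to the same bar as human-written ones.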
Survey respondents are also concerned about who owns AI-generated code. A total of 40% said developers or the organizations they work for should own the copyright for AI-generated output, and the overwhelming majority agreed developers should be compensated for their code if it’s used in open source artifacts in LLMs (93% of DevOps respondents versus 88% of SecOps respondents). Meanwhile, 42% of DevOps respondents and 40% of SecOps respondents said a lack of regulation could deter developers from contributing to open source projects.
Both DevOps and SecOps leads want more regulation: Asked who they believe should be responsible for regulating the use of generative AI, 59% of DevOps leads and 78% of SecOps leads said both government and individual companies should share that responsibility.
At this point, the proverbial generative AI genie is out of the bottle, and there is no putting it back. What remains to be seen is how DevOps and security operations teams will adjust to that new reality.