AI-Generated Code Poses Security, Bloat Challenges

Dark Reading (1 month ago). Updated 1 month ago.

Development teams that fail to create processes around AI-generated code face more technical and security debt as vulnerabilities get replicated.

October 29, 2025

Robert Lemos, Contributing Writer

Developers using large language models (LLMs) to generate code perceive significant benefits, yet the reality is often less rosy.

Programmers who adopted AI for code generation estimate, for example, that their individual effectiveness improved by 17%, according to the "State of AI-assisted Software Development" report published by Google's DevOps Research and Assessment (DORA) team in late September. Yet the same report finds that software delivery instability climbed by nearly 10%. Overall, 60% of developers work in teams that suffer from lower development speeds, greater software delivery instability, or both.

The problem? AI tends to amplify flaws in the codebases it was trained on, and because it produces a greater volume of code, developers do not have time to scrutinize the output the way they would if they had written it themselves, says Matt Makai, vice president of developer relations at cloud platform DigitalOcean.

"If you have technical debt or security vulnerabilities, how you use the tools has a big impact on whether they're going to replicate those same problems elsewhere," he says. "They absolutely are verbose on their first shot, in part, because they're trying to solve the stated problem. The thing that's been missing from a lot of the practices today is ... what's your checklist after you've solved the problem?"

Using AI to generate code has already become nearly ubiquitous among developers. Depending on the study, 84% to 97% of developers use AI to generate code. (The Google DORA report found that 90% of developers use AI in their work.) Yet generating code with AI without adequate scrutiny and testing can easily lead to bloated codebases and software with significant vulnerabilities. These two outcomes are examples of technical debt and security debt, respectively, because they represent negative productivity and extra work that must eventually be done.

For coders who rely on LLMs to produce significant portions of their codebases without firm oversight, the quality and security implications have become all too apparent: more code, more vulnerabilities, and more security debt.

In 2025, the average developer checked in 75% more code than they did in 2022, according to an Oct. 1 analysis of GitHub data conducted by software engineering platform vendor GitClear. The same analysis concludes that while a "10% productivity gain look[s] real ... so are the costs," and that the increase in output "applies as much or more to the metrics that quantify 'how much code will the team need to maintain?' as it does 'how much output will each developer gain?'"

[Chart: While the syntax of AI-generated code has improved greatly, security vulnerabilities continue to be a problem. Source: Veracode]

For the most part, AI-generated code increasingly passes both syntactic and functional inspections. However, research conducted by application security firm Veracode found that 45% of the code generated by AI models had known security flaws.

Two years ago, Chris Wysopal, chief security evangelist for Veracode, predicted that the 45% vulnerability rate would improve. It hasn't, he says.

"It's been completely flat," he says. "So that study is still applicable today — the developers using AI-assisted coding are creating slightly worse code than the ones that are not."

Social media has been overrun by AI-generated content, dubbed "AI slop." Workers are increasingly seeing "work slop," AI-generated work delivered by co-workers and managers that passes as a reasonable deliverable but fails to advance a given task. Similarly, poor development practices can result in "code slop," code that may compile and produce output but is verbose, brittle, and flawed.

One reason for these issues: LLMs cannot keep the context of large codebases in memory. As a result, developers are seeing massive duplication of code, such as importing an entirely new package (for logging, for example) even when another package is already being used to accomplish the task, Wysopal says.

"That's one of the worst engineering things you could do is start to duplicate all of that code," he says. "Now I have to keep two packages updated. Now I have to fix things in two places. And so the [code] volume problem is there, but I think it just manifests itself a little bit differently."
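Wysopal's logging example is easy to picture. The sketch below is a hypothetical illustration, not drawn from any real codebase: a project that already configures the standard library's logging module gets a second, AI-suggested logging stack bolted onto the same feature (loguru is used here purely as an example third-party logger and would need to be installed). The module, logger, and function names are invented.

```python
# Hypothetical illustration of the duplication Wysopal describes.
import logging

from loguru import logger as ai_log  # second logging stack pulled in by an AI-generated addition

# The project's established logging configuration.
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    level=logging.INFO,
)
log = logging.getLogger("billing")


def charge_card(order_id: str) -> None:
    # Existing code path, using the project's configured logger.
    log.info("charging order %s", order_id)


def retry_charge(order_id: str) -> None:
    # AI-generated code path for the same feature, logging through loguru
    # instead of reusing the configuration above. Formats, levels, and
    # destinations now have to be kept in sync in two places.
    ai_log.info("retrying charge for order {}", order_id)
```

Nothing in the snippet fails to run, which is exactly the point: the cost of the duplication only shows up later, when both logging stacks have to be updated and fixed in parallel.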
Without processes in place to reduce the volume of code produced by AI systems and to scan code before commits, developers will find themselves dedicating their time to rework, says Sarit Tager, vice president of product management at Palo Alto Networks.

"AI has enabled developers to move faster than ever, but security hasn't been able to keep pace," she says. "The 'shift-left' movement — intended to bring security earlier into development — has mostly concentrated on detection, not prevention. Many teams hesitate to enforce guardrails and prevention rules for fear of slowing down innovation."
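What "scan code before commits" can look like in practice is deliberately mundane. The sketch below is a minimal, hypothetical git pre-commit hook that runs a static analysis scan over staged Python files and blocks the commit if issues turn up; Bandit stands in for whatever scanner a team has actually standardized on, and the hook assumes it is installed.

```python
#!/usr/bin/env python3
"""Minimal sketch of a pre-commit hook that scans staged Python files.

Hypothetical example: Bandit is used only as a stand-in scanner (pip install
bandit). Save as .git/hooks/pre-commit and mark it executable to try it.
"""
import subprocess
import sys

# Paths staged for this commit (added, copied, or modified files only).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True,
    text=True,
    check=True,
).stdout.splitlines()

python_files = [path for path in staged if path.endswith(".py")]
if not python_files:
    sys.exit(0)  # nothing to scan; allow the commit

# Bandit exits nonzero when it finds issues, which blocks the commit here.
scan = subprocess.run(["bandit", "-q", *python_files])
if scan.returncode != 0:
    print("Security scan flagged the staged changes; review or fix before committing.")
    sys.exit(1)
```

Running the same check again in CI catches changes merged from machines where the hook was never installed.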
"While achieving this state is clearly difficult, these groups serve as a powerful testament to the fact that high-velocity, high-quality software delivery is not a theoretical ideal but an observable reality."Digital Ocean's Makai calls this shift in dealing with AI-generated code moving from "vibe coding" to "vibe engineering.""Make sure that you are asking these tools not just to spit out some code to create a feature, but, hey, what are the potential security vulnerabilities of this feature? How do I rewrite it? How do I make this code more efficient?" he says. "The tools are all capable of that, but if you don't prompt the tool for the security review or to make your code more efficient or optimize that database query, it's not going to do that for you."Robert Lemos, Contributing WriterVeteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.2025 DigiCert DDoS Biannual ReportDigiCert RADAR - Risk Analysis, Detection & Attack ReconnaissanceThe Total Economic Impact of DigiCert ONEIDC MarketScape: Worldwide Exposure Management 2025 Vendor AssessmentThe Forrester Wave™: Unified Vulnerability Management Solutions, Q3 2025How AI & Autonomous Patching Eliminate Exposure RisksThe Cloud is No Longer Enough: Securing the Modern Digital PerimeterSecuring the Hybrid Workforce: Challenges and SolutionsCybersecurity Outlook 2026Threat Hunting Tools & Techniques for Staying Ahead of Cyber AdversariesYou May Also LikeAI Security Agents Get Persona MakeoversSora 2 Makes Videos So Believable, Reality Checks Are RequiredOperational Technology Security Poses Inherent Risks for ManufacturersAI App Spending Report: Where Are the Security Tools?Copyright © 2025 TechTarget, Inc. d/b/a Informa TechTarget. This website is owned and operated by Informa TechTarget, part of a global network that informs, influences and connects the world’s technology buyers and sellers. All copyright resides with them. Informa PLC’s registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. TechTarget, Inc.’s registered office is 275 Grove St. Newton, MA 02466.

Source: This article was originally published on Dark Reading
