Dear CIO,

Generative AI promises dramatic efficiency gains: developers describe the functionality they want, and large language models (LLMs) generate it. However, that speed comes with a catch. The 2025 GenAI Code Security Report from Veracode reveals findings that every CIO should take seriously: nearly half of AI-generated code contains detectable security vulnerabilities when developers do not explicitly request secure implementations.

Best Regards,
John, Your Enterprise AI Advisor


The Hidden Risk in AI-Created Code

What CIOs Must Address Now

What the Data Shows

  • Security has not improved: While syntax accuracy has increased in the past year, the security pass rate has remained flat at 55%. That means 45% of the generated code contains OWASP Top 10 vulnerabilities.

  • Bigger is not better: Larger and more expensive models did not produce more secure code than smaller ones. The research shows that model size is no guarantee of safer output.

  • Language choice matters, especially for Java: security pass rates were 61.7% for Python, 57.3% for JavaScript, and 55.3% for C#, while Java trailed badly at 28.5%. For CIOs running large Java-based platforms, the risk is notably higher.

  • Some vulnerabilities fly under the radar: LLMs perform well at avoiding SQL injection (80.4% pass rate) and insecure cryptographic algorithms (85.6%), but fail badly on cross-site scripting (13.5%) and log injection (12.0%), with performance declining in both areas.
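The two weakest areas above have simple, well-known mitigations that models tend to omit unless explicitly asked. A minimal Python sketch of both (illustrative only; the function names and wording are ours, not the report's):

```python
import html

def render_comment(user_input: str) -> str:
    # XSS mitigation: HTML-escape untrusted input before embedding it in a page.
    return "<p>" + html.escape(user_input) + "</p>"

def safe_log_line(user_input: str) -> str:
    # Log-injection mitigation: strip CR/LF so a user cannot forge log entries.
    return user_input.replace("\r", "").replace("\n", " ")

print(render_comment("<script>alert(1)</script>"))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
print(safe_log_line("admin\nFAKE ENTRY: login succeeded"))
```

Both fixes are one line each; the report's numbers suggest models simply do not apply them unless the prompt demands it.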

Why This Should Be on Every CIO’s Desk

  • Operational risk and vulnerability debt
    AI-assisted coding can accelerate delivery, but it can also inject security flaws that increase patching workloads, slow down releases, and build up long-term security debt.

  • False Sense of Security
    Teams may assume that newer or larger models must be more secure. This report makes it clear that without explicitly asking for secure code, you won’t get it.

  • Compliance and Regulatory Exposure
    SQL injection and XSS can trigger data privacy violations, regulatory fines, and insurance disputes.

  • Tooling without guardrails
    Many development teams are adopting LLM-based tools with little or no security oversight. CIOs need to assess how these tools are used, not just which ones are chosen.
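On the compliance point, the gap between vulnerability classes is worth seeing concretely. SQL injection, where models now score well, reduces to a single pattern: parameterized queries. A self-contained Python sketch using sqlite3 (illustrative; the schema and inputs are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "alice' OR '1'='1"

# The vulnerable pattern (string interpolation) shown only as a comment:
#   conn.execute(f"SELECT role FROM users WHERE name = '{attacker_input}'")

# Parameterized query: the driver treats the input as data, never as SQL.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)
).fetchone()
print(row)  # None -- the injection string matches no real user
```

XSS and log injection have equally mechanical fixes, yet score far worse, which is why output-focused scanning still matters even when the input-handling numbers look good.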


Bottom Line

Generative AI can be a force multiplier for development teams, but without guardrails, it will just as quickly multiply your vulnerabilities. It is your responsibility to ensure that AI-driven development aligns with your organization’s risk tolerance, compliance requirements, and security posture. That means embedding security into every step of the AI development lifecycle from the very first prompt.
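One concrete way to act on "from the very first prompt" is to bake security requirements into every code-generation request. A hypothetical sketch in Python (the preamble wording is our assumption; the report measures outcomes, not prompt phrasing):

```python
# Hypothetical preamble -- an example of explicitly requesting secure code,
# not prescribed wording from the Veracode report.
SECURITY_PREAMBLE = (
    "Write secure code: validate all inputs, use parameterized SQL queries, "
    "HTML-escape any output rendered in a page, and never log raw user input."
)

def secure_prompt(task: str) -> str:
    # Prepend explicit security requirements to every code-generation request.
    return f"{SECURITY_PREAMBLE}\n\nTask: {task}"

print(secure_prompt("Implement a login handler for our Java service"))
```

A wrapper like this can live in an internal CLI or IDE plugin so individual developers never have to remember to ask.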

Here is a link to the full report.


Deep Learning
  • Emma Woollacott shares survey findings exposing a gap in enterprise AI adoption, as most teams use AI tools without building reliable infrastructure.

  • Jeff Evans highlights an upcoming session on September 16, where security leaders will dissect AI’s dual role in cyber offense and defense.

  • Christofer Hoff criticizes overhyped reactions to OpenAI's open-weight models by pointing out that similar capabilities have existed long before.

  • Paul Wagenseil reports on Black Hat USA 2025, sharing that AI systems remain vulnerable due to neglect of long-established security practices.

  • Ian Thomas explores how generative AI is escalating cyber threats while also enabling defensive AI agents to automate routine security tasks.

  • Reuven Cohen warns against unchecked psychological dependence on increasingly capable AI models.

  • David Gratton recounts the rapid obsolescence of a custom-built PA app due to GPT-5’s out-of-the-box capabilities.

  • The Artificially Intelligent Enterprise shares a special edition examining ChatGPT updates, including the release of GPT-5 and Agent Mode.

  • AI Tangle covers GPT-5, Google’s debut of Genie 3, and Anthropic’s Claude Opus 4.1.

Regards,

John Willis

Your Enterprise IT Whisperer

Follow me on X

Follow me on LinkedIn

Dear CIO is part of the AIE Network, a network of over 250,000 business professionals learning and thriving with generative AI. Beyond Dear CIO, the network includes The Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals looking to learn how AI works.
