
OWASP Updates 2024/2025 Framework

What OWASP's Updates Reveal About AI Security

Dear CIO,

Welcome to this edition of Dear CIO, where we delve into the rapidly shifting security landscape of large language models (LLMs). As generative AI systems become cornerstones of enterprise operations and consumer experiences, understanding their vulnerabilities is more urgent than ever. Specifically, this newsletter will focus on the evolution of the OWASP Top 10 for LLMs, comparing the foundational 2023 framework with the recently updated 2024/2025 list. These updates reflect not only the growing complexity of adversarial threats but also the innovative measures needed to address them. Join me as we unpack the critical changes shaping the future of AI security and what they mean for CIOs at the forefront of safeguarding these transformative technologies.

Best Regards,
John, Your Enterprise AI Advisor

Brought to You By

The AIE Network is a community of over 250,000 business professionals learning and thriving with generative AI. Beyond the AI CIO, our network includes The Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-weekly update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals who are looking to learn.

Dear CIO

Comparative Report: OWASP Top 10 for LLMs - 2023 vs. 2024/2025

What OWASP's Updated Top 10 Risks for LLMs Reveal About the Future of AI Security

The security landscape surrounding large language models (LLMs) has undergone significant transformation over the past year, reflecting the rapid advancements in generative AI (GenAI) technologies. Understanding and mitigating security risks has become increasingly critical as these systems integrate deeply into enterprise workflows and consumer applications. The commercial ecosystem has evolved swiftly since OWASP introduced its Top 10 Risks for Large Language Models in Spring 2023.

OWASP has updated its framework for 2024/2025 to address the fast-changing adversarial landscape, introducing substantial changes that reflect new vulnerabilities and provide a broader, more detailed perspective on existing risks. This evolution underscores the need for proactive security measures to keep pace with the increasing sophistication of attacks targeting GenAI systems.

The recently announced OWASP Top 10 for LLM Applications (2025) highlights the expanding adoption of LLMs and the evolving threats these technologies face. While OWASP's initial 2023 list provided an early, invaluable foundation for identifying emerging risks, it has been enhanced to address the complexity of new attack vectors and better align with the challenges posed by GenAI's integration across diverse industries. This analysis explores the key updates in the 2024/2025 list compared to the original, offering insights into what these changes reveal about the fast-evolving nature of AI security.

Critical Changes Between the 2023 and 2024/2025 Lists

Each entry below pairs the 2023 category with its 2024/2025 counterpart and summarizes the key changes.

  • Prompt Injection: LLM01 (2023) → LLM01 (2024/2025). Remains number one on the list, now expanded to cover multimodal injection risks and adversarial inputs hidden in non-text modalities (see the sketch following this list).

  • Sensitive Information Disclosure: LLM06 (2023) → LLM02 (2024/2025). Elevated in importance, with broader examples of accidental disclosure and the inclusion of mitigations such as federated learning. This is one of the fastest-growing categories of threats.

  • Insecure Output Handling → Improper Output Handling: LLM02 (2023) → LLM05 (2024/2025). Renamed for clarity, with an emphasis on systematic validation of LLM outputs. The move from LLM02 to LLM05 reflects a relative shift in focus as newer vulnerabilities (e.g., System Prompt Leakage, Sensitive Information Disclosure) were prioritized.

  • Training Data Poisoning → Data and Model Poisoning: LLM03 (2023) → LLM04 (2024/2025). Broadened to include model tampering and backdoor risks beyond training-data manipulation.

  • Model Denial of Service → Unbounded Consumption: LLM04 (2023) → LLM10 (2024/2025). Renamed and expanded to cover resource-related risks such as cost escalation and API abuse. The drop from LLM04 to LLM10 reflects a relative reduction in urgency compared with risks like Prompt Injection, Sensitive Information Disclosure, and System Prompt Leakage.

  • Supply Chain Vulnerabilities → Supply Chain: LLM05 (2023) → LLM03 (2024/2025). Updated to address risks in LoRA adapters and collaborative development tools. The move from LLM05 to LLM03 reflects the category's heightened importance and broader scope: as LLM applications grow in complexity and adoption, managing the integrity and security of the entire supply chain has become critical to safeguarding AI systems against compromise and exploitation.

  • Insecure Plugin Design (LLM07, 2023) → System Prompt Leakage (LLM07, 2024/2025). The plugin-focused entry was replaced by a new category on securing system-level prompts, responding to real-world exploits. System Prompt Leakage represents a growing area of concern, reflecting the nuanced vulnerabilities introduced by the interplay of model design, application integration, and adversarial techniques. Addressing it is vital for the confidentiality, integrity, and reliability of LLM-powered systems, and its inclusion emphasizes the need for proactive defenses against both accidental and malicious exposure of sensitive internal instructions.

  • Excessive Agency: LLM08 (2023) → LLM06 (2024/2025). Expanded to include risks from agentic LLM architectures and autonomous actions.

  • Overreliance (LLM09, 2023) → Misinformation (LLM09, 2024/2025). Refined focus on misinformation and disinformation risks, emphasizing societal and operational consequences.

  • Model Theft (LLM10, 2023) → Removed as a standalone entry. Its concerns are consolidated into Supply Chain and the new Unbounded Consumption (LLM10) category, reflecting overlap and prioritization.
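To make the top risk concrete, here is a minimal, illustrative sketch of one common prompt-injection mitigation pattern: keep trusted system instructions and untrusted user input in separate message roles, and screen input for obvious override phrases before it reaches the model. The screening patterns and the OpenAI-style message format are illustrative assumptions rather than an OWASP-prescribed control; real defenses layer classifiers, output-side checks, and least-privilege design on top.

```python
import re

# Phrases that often signal an attempt to override the system prompt.
# Illustrative only -- production systems need far more robust detection.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (the|your) system prompt",
    r"you are now",
]

SYSTEM_PROMPT = "You are a support assistant. Answer only questions about our product."

def screen_user_input(text: str) -> str:
    """Reject obviously adversarial input before it reaches the model."""
    lowered = text.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("Potential prompt-injection attempt detected")
    return text

def build_messages(user_input: str) -> list[dict]:
    """Keep trusted instructions and untrusted input in separate roles."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": screen_user_input(user_input)},
    ]

if __name__ == "__main__":
    print(build_messages("How do I reset my password?"))
```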

Analysis of Key Updates

1. New Additions and Refinements

  • System Prompt Leakage (LLM07): This new category addresses vulnerabilities related to assumptions about isolating system-level prompts. It emerged from real-world breaches in which prompts inadvertently disclosed sensitive operational instructions.

  • Vector and Embedding Weaknesses (LLM08): Introduced to guide security in Retrieval-Augmented Generation (RAG) frameworks, a rapidly adopted technique in LLM applications.

  • Unbounded Consumption (LLM10): A broader redefinition of denial-of-service vulnerabilities, incorporating unexpected financial costs and resource abuse.
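As a concrete illustration of the Unbounded Consumption entry above, the sketch below caps per-user request rate and daily token spend before a request ever reaches the model. The limits and the `UsageGuard` class are hypothetical; production systems would typically enforce this at the API gateway and tie budgets to actual billing data.

```python
import time
from collections import defaultdict

# Illustrative budget values; real limits depend on your cost model.
MAX_REQUESTS_PER_MINUTE = 20
MAX_TOKENS_PER_DAY = 200_000

class UsageGuard:
    """Track request rate and token spend per user to cap runaway consumption."""

    def __init__(self):
        self.request_times = defaultdict(list)  # user_id -> recent request timestamps
        self.tokens_today = defaultdict(int)    # user_id -> tokens consumed today

    def check(self, user_id: str, estimated_tokens: int) -> None:
        now = time.time()
        recent = [t for t in self.request_times[user_id] if now - t < 60]
        if len(recent) >= MAX_REQUESTS_PER_MINUTE:
            raise RuntimeError("Rate limit exceeded")
        if self.tokens_today[user_id] + estimated_tokens > MAX_TOKENS_PER_DAY:
            raise RuntimeError("Daily token budget exceeded")
        recent.append(now)
        self.request_times[user_id] = recent
        self.tokens_today[user_id] += estimated_tokens

guard = UsageGuard()
guard.check("user-123", estimated_tokens=1_500)  # raises once budgets are exhausted
```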

2. Renamed and Expanded Categories

  • Improper Output Handling (LLM05): Formerly "Insecure Output Handling," this category now emphasizes comprehensive response validation frameworks to prevent cross-site scripting (XSS), server-side request forgery (SSRF), and other injection attacks (a minimal validation sketch follows this list).

  • Data and Model Poisoning (LLM04): Expanded to include risks from tampered pre-trained models and fine-tuning adapters, reflecting the complexities of modern LLM ecosystems.
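For Improper Output Handling, a minimal sketch of the validation idea: treat model output as untrusted, escape it before rendering into HTML, and allowlist any URL the application fetches on the model's behalf. The `ALLOWED_HOSTS` values and function names are hypothetical and only illustrate the pattern, using Python's standard library.

```python
import html
from urllib.parse import urlparse

ALLOWED_HOSTS = {"docs.example.com", "status.example.com"}  # hypothetical allowlist

def render_safely(llm_output: str) -> str:
    """Treat model output as untrusted: escape it before inserting into HTML."""
    return html.escape(llm_output)

def validate_outbound_url(url: str) -> str:
    """Only fetch URLs the application explicitly trusts, to limit SSRF."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"Refusing to fetch untrusted URL: {url}")
    return url

print(render_safely('<script>alert("xss")</script>'))  # rendered inert
```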

3. Elevated Priority

  • Sensitive Information Disclosure (LLM02): Moved higher in priority to reflect its recurring prominence in data breach incidents and its expanded coverage of user and proprietary data risks.
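A minimal output-side redaction filter for Sensitive Information Disclosure might look like the following. The regex patterns are deliberately simplistic assumptions; real deployments generally pair this with dedicated PII-detection tooling and data minimization at the retrieval layer.

```python
import re

# Simple, illustrative patterns; production systems typically use dedicated
# PII-detection tooling rather than hand-rolled regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> str:
    """Mask common sensitive values in model output before returning it."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Contact jane.doe@example.com, key sk-abcdefghijklmnopqrstuvwx"))
```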

Reflections on the Evolving AI Security Landscape

What We've Learned

  • Complexity Equals Vulnerability: As LLM systems incorporate multi-modal data and agentic architectures, their attack surface grows exponentially. Security measures must evolve alongside these advancements.

  • Community-Led Evolution: The OWASP framework reflects the active feedback loop from global stakeholders, demonstrating the value of collaborative vulnerability reporting and mitigation strategy development.

  • Proactive Security: The emphasis on system prompt isolation, embedding vulnerabilities, and LoRA risks highlights the necessity for forward-thinking security in a fast-moving field.

Why the Landscape Is Changing So Quickly

  • Rapid Adoption and Experimentation: As LLMs integrate into more applications, unforeseen vulnerabilities naturally emerge, driving the need for fast updates.

  • Adversarial Sophistication: Attackers have quickly adapted to exploit AI-specific weaknesses, prompting defensive innovations.

  • Scale of Impact: LLMs' widespread and critical use amplifies the consequences of vulnerabilities, necessitating heightened vigilance.

Conclusion

The updates in the OWASP Top 10 for LLMs from 2023 to 2024/2025 underscore the dynamic nature of AI development and the corresponding shifts in security priorities. Developers, businesses, and security professionals must remain agile, adapting quickly to new risks and leveraging the latest guidance to safeguard their systems. The focus on collaboration and proactive mitigation will continue to be pivotal as the AI landscape evolves.

How did we do with this edition of the AI CIO?


Deep Learning

  • Reese Rogers writes on how the US Patent and Trademark Office has restricted the use of generative AI due to security and reliability concerns.

  • Assaf Morag looks at how Aqua Nautilus researchers uncovered a new attack vector where threat actors exploit misconfigured JupyterLab and Jupyter Notebook servers to hijack environments for illegal sports streaming.

  • Steve Wilson covers the OWASP GenAI Project's updated 2025 Top 10 Risks for Large Language Models (LLMs), which highlight new and expanded threats like unbounded consumption, embeddings security, prompt leakage, and excessive agency.

  • Marco Figueroa explores the surprising capabilities of OpenAI's containerized ChatGPT environment, revealing how users can interact with its underlying structure through prompt engineering, file management, and sandboxed scripting.

  • A DoD Office of Inspector General report found delays in the Chief Digital and Artificial Intelligence Office's implementation plan and AI policy, and recommended publishing strategic measures to ensure effective AI adoption.

  • Claudio Diniz explores the challenges and key considerations for scaling GenAI applications from MVP to production.

  • Kyle Wiggers highlights the limitations of quantization in AI models, revealing that excessively low precision can degrade performance, particularly for large models, and suggesting a shift toward meticulous data curation and new architectures to balance efficiency and accuracy.

  • Ravie Lakshmanan writes that cybersecurity researchers have uncovered two vulnerabilities in Google's Vertex AI platform that allowed privilege escalation and model exfiltration.


Regards,

John Willis

Your Enterprise IT Whisperer

Follow me on X 

Follow me on LinkedIn