Dear CIO,

Cybersecurity has always been a game of cat and mouse, and right now, a new player has entered the field: Artificial Intelligence. As AI and large language models (LLMs) become increasingly integrated into our daily lives and critical systems, they create a new, complex, and rapidly evolving attack surface. We're witnessing a fresh wave of vulnerabilities, and the truth is, we're still learning how to identify, prioritize, and remediate them.

A new white paper, NIST CSWP 41, titled "Likely Exploited Vulnerabilities: A Proposed Metric for Vulnerability Exploitation Probability," introduces a Likely Exploited Vulnerabilities (LEV) metric. While the metric is designed to cover all software and hardware vulnerabilities, its principles could be compelling for navigating the complexities of AI and LLM security. In this newsletter, I will review what it means for you.

Best Regards,
John, Your Enterprise AI Advisor

Dear CIO,

New NIST Metric Could Help Us Tackle AI Vulnerabilities

A New Metric for a New Era: Introducing LEV

NIST recently released a draft white paper, CSWP 41, that proposes a new metric for security professionals called Likely Exploited Vulnerabilities (LEV). Instead of only predicting what might be exploited next, LEV looks for evidence that a vulnerability has already been exploited by analyzing trends in its historical exploitation probabilities.

Tens of thousands of vulnerabilities are published annually, but only a tiny fraction ever get exploited in the wild. The challenge for security teams is figuring out which ones to fix first, a task made difficult by the sheer volume and the cost of remediation.

Currently, organizations often rely on Known Exploited Vulnerabilities (KEV) lists, which are helpful but often reactive and incomplete, and on the Exploit Prediction Scoring System (EPSS), which tries to forecast future exploitation risk but can be unreliable, especially for issues that have already been exploited.

LEV adds a retrospective view, identifying the vulnerabilities that show signs of prior exploitation—even if they never made it onto a KEV list. It essentially turns EPSS into a rearview mirror, spotting patterns that point to real-world attacker interest.
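
In rough terms, LEV treats each historical EPSS score as the probability that exploitation occurred during that 30-day window and compounds those probabilities over the vulnerability's lifetime. The sketch below is a simplified, hypothetical rendering of that idea; the white paper defines the precise formula and weighting, and the function name and example values here are purely illustrative.

```python
# Simplified, hypothetical sketch of the LEV idea: fold a vulnerability's
# historical EPSS scores (each a probability of exploitation within a
# 30-day window) into one probability that it was ever exploited.
# NIST CSWP 41 defines the exact formula and weighting; this is illustrative.

def lev_estimate(epss_history: list[float]) -> float:
    """epss_history: one EPSS score per 30-day window over the CVE's lifetime."""
    prob_never_exploited = 1.0
    for epss_score in epss_history:
        # Chance the vulnerability was NOT exploited in this window,
        # compounded across windows (treated as independent).
        prob_never_exploited *= (1.0 - epss_score)
    return 1.0 - prob_never_exploited

# Modest EPSS scores across five windows still compound into a notable
# probability of past exploitation.
print(round(lev_estimate([0.02, 0.05, 0.10, 0.08, 0.03]), 2))  # 0.25
```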

What Makes LEV Worth Your Time?

In practice, LEV offers four key benefits:

  1. Signal Amid Noise: Helps teams focus on vulnerabilities that attackers have likely already touched.

  2. Augments KEV Lists: Fills in gaps by flagging risks that haven’t yet made it onto official radars.

  3. Boosts Confidence in Prioritization: When KEV and EPSS don’t align, LEV adds an extra layer of insight (see the short triage sketch after this list).

  4. Spotlights Missed Trends: Identifies false negatives and other high-risk flaws that EPSS might have missed the first time.
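
To make the KEV and EPSS points concrete, here is a minimal, hypothetical triage rule that layers a LEV score on top of KEV membership and a current EPSS score. The thresholds, function name, and verdicts are illustrative assumptions, not guidance from NIST CSWP 41.

```python
# Hypothetical triage rule layering LEV on top of KEV membership and EPSS.
# Thresholds, names, and verdicts are illustrative, not from NIST CSWP 41.

def triage(in_kev: bool, epss: float, lev: float) -> str:
    if in_kev:
        return "patch now"                  # exploitation is confirmed
    if lev >= 0.5:
        return "treat as likely exploited"  # strong historical signal despite a KEV gap
    if epss >= 0.2 or lev >= 0.1:
        return "prioritize this cycle"      # meaningful forward- or backward-looking interest
    return "standard backlog"

# A flaw with a low current EPSS score but a high LEV score still gets escalated.
print(triage(in_kev=False, epss=0.04, lev=0.62))  # treat as likely exploited
```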

Why LEV Could Be a Game-Changer for AI Security

Let’s bring this back to AI and LLMs. The threat landscape here is still being mapped. If we are being honest, prompt injection, data poisoning, training-data leaks, and model evasion are just the start, and we are in the early innings of understanding how to secure these systems. In such a volatile environment, being able to identify which AI-specific vulnerabilities are attracting attacker attention could be the edge we need.

Here’s how LEV might help:

  • Early Signals in a Noisy Field: For emerging AI threats, LEV might catch signs of low-level exploitation activity before the broader industry catches on.

  • Better Prioritization for Unfamiliar Risks: EPSS may lag when dealing with brand-new attack vectors. However, LEV, which builds on EPSS history, could help spot patterns in seemingly minor issues that later turn into major risks.

  • Noise Reduction: The rush to publish AI vulnerability research is flooding the ecosystem. LEV can help separate the theoretical from the operationally relevant.

  • Insights into Attacker Behavior: By tracking changes in LEV scores, teams can monitor how interest in specific attack types evolves (see the monitoring sketch after this list).

  • Support for AI-Specific KEV Development: As formal KEV lists for AI are still in their infancy, LEV could act as a stopgap to identify likely exploited issues in the meantime.
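
As a concrete illustration of the attacker-behavior point, a team could periodically snapshot LEV scores for AI-related CVEs and flag the ones that are climbing. The CVE IDs, scores, and threshold below are invented for illustration.

```python
# Hypothetical monitoring sketch: flag AI-related CVEs whose LEV scores are
# climbing quarter over quarter. CVE IDs, scores, and the threshold are invented.

lev_snapshots = {
    # cve_id: (LEV last quarter, LEV this quarter)
    "CVE-2025-11111": (0.05, 0.31),  # e.g., a prompt-injection flaw in an LLM gateway
    "CVE-2025-22222": (0.12, 0.14),
    "CVE-2025-33333": (0.40, 0.42),
}

RISING_THRESHOLD = 0.15  # minimum increase worth a manual review

rising = [
    (cve, round(new - old, 2))
    for cve, (old, new) in lev_snapshots.items()
    if new - old >= RISING_THRESHOLD
]
print(rising)  # [('CVE-2025-11111', 0.26)] -> check exposure to this flaw first
```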

A Useful Tool, But Not a Crystal Ball

I want to make one thing clear, though: LEV doesn’t predict future attacks, and it doesn’t replace human judgment. It is a backward-looking signal, not a forecasting tool. But in cybersecurity, where every insight can be a force multiplier, that backward look may be exactly what’s needed to improve forward-looking strategies.

NIST itself is calling for more industry participation to validate LEV using real-world exploitation data. For AI and LLMs, where ground truth is scarce and attacker tactics are still evolving, that validation will be critical.

The Bottom Line for CIOs

AI is becoming part of the infrastructure, and that means the vulnerabilities we’re seeing today are cracks in the foundation of tomorrow’s systems. CIOs can’t afford to wait for perfect tools or mature lists to start managing this risk. LEV is not the end-all, but it is a meaningful step toward smarter prioritization in a domain that desperately needs it.

How did we do with this edition of the AI CIO?


Deep Learning
  • Armand Ruiz gives further details on the launch of llm-d.

  • Ravie Lakshmanan looks at overly permissive default IAM roles in AWS services and the Ray framework that enable privilege escalation, cross-service manipulation, and potential account compromise.

  • James Coker dives into a report showing that 73% of organizations are investing in AI-specific security tools, making AI security the second-highest spending priority after cloud security.

  • Bob Violino advises on implementing a dedicated AI GRC framework to mitigate risks related to bias, cybersecurity, and compliance.

  • Marcin Niemiec highlights how Gemini 2.5 Pro, paired with the AI Security Analyzer tool, generated the most effective threat model yet on a vague architecture.

  • Walter Haydock writes that he is ceasing to use OpenAI for confidential information at StackAware due to a judicial mandate requiring indefinite retention of output logs.

  • Katharina Koerner covers a report on a detailed governance framework for AI agents, emphasizing five intervention areas.

  • Martin Bayer reviews a survey showing 73% of cybersecurity leaders faced incidents due to unmanaged or unknown IT assets.

  • Paulina Okunytė shares that researchers found a data leak that exposed over 5.7 million job seeker resumes due to a misconfigured AWS S3 bucket at HireClick.

  • Armon Rahgozar, PhD, emphasizes that choosing the right AI model depends on task fit, customization needs, security, deployment environment, performance, scalability, and cost.

  • Dan Lorenc shares Chainguard Libraries for Python to counter widespread package malware by rebuilding Python packages from source with verifiable provenance.

  • The Artificially Intelligent Enterprise shares how to troubleshoot anything with ChatGPT.

  • AI Tangle covers Microsoft bringing xAI's Grok 3 to Azure.

Regards,

John Willis

Your Enterprise IT Whisperer

Follow me on X

Follow me on LinkedIn

Dear CIO is part of the AIE Network, a network of over 250,000 business professionals who are learning and thriving with Generative AI. Our network extends beyond the AI CIO to The Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals who are looking to learn how AI works.