Dear CIO,
As Large Language Models (LLMs) and AI/ML tools flood into enterprise software delivery, the 2025 JFrog Software Supply Chain Report raises a critical concern: our ability to govern and secure these models has not kept pace. In this newsletter, I cover some of the report's most important findings, what they mean, and what to do about them.
Best Regards,
John, Your Enterprise AI Advisor

LLMs Are Entering Your Supply Chain
Analysis of the 2025 JFrog Software Supply Chain Report

The State of AI/ML Model Risk
In 2024, Hugging Face saw more than a 6.5x increase in malicious models, some of which execute code at load time and expose systems to backdoors (see the sketch below). And despite widespread scanning adoption (79%), the tooling is immature: current model scanners miss real threats while producing false positives at rates as high as 96%.
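To make the load-time risk concrete, here is a minimal sketch, assuming PyTorch (1.13 or later) and the safetensors package; the checkpoint file names are placeholders. PyTorch's default .pt/.pth format is a pickle archive, and unpickling an untrusted file can execute arbitrary code before you ever run inference.

```python
# Minimal sketch: why loading an untrusted model file can run attacker code,
# and two safer alternatives. File names are placeholders.
import torch
from safetensors.torch import load_file

UNTRUSTED_CHECKPOINT = "downloaded_model.pth"

# Risky: a plain torch.load() unpickles the file, so a malicious model can
# execute code the moment it is loaded.
# state_dict = torch.load(UNTRUSTED_CHECKPOINT)  # avoid with untrusted files

# Safer: weights_only=True restricts unpickling to tensor data and rejects
# arbitrary Python objects (available since torch 1.13, default in 2.6+).
state_dict = torch.load(UNTRUSTED_CHECKPOINT, weights_only=True)

# Safest: prefer the safetensors format, a plain tensor container with no
# code-execution path at load time.
state_dict = load_file("downloaded_model.safetensors")
```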
🛠 Governance Gaps
64% of organizations use commercial models (e.g., OpenAI, Claude), but nearly half also self-host open source and proprietary models.
49% have no reliable way to control model usage, and 58% lack formal policies for sourcing or licensing training data.
Manual model approval processes dominate, with 37% curating model lists by hand.
⚠️ What This Means
AI supply chains are fragmented, opaque, and easy targets. Without centralized controls or automated policies, enterprises risk unknowingly integrating vulnerable or malicious models into production.
✅ What to Do Now
Establish model provenance: Track where models come from and their dependencies, just as you do with code (see the pinning sketch after this list).
Integrate AI/ML into SBOM and SDLC: Extend supply chain security policies to cover training data, model updates, and inference artifacts.
Don’t overtrust scanners: Critically evaluate your AI security vendors; many aren't ready for prime time.
Models are not code: Model weights are opaque numeric data, and vulnerabilities can hide in serialized weights and embedded metadata where traditional code review won't find them.
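As a minimal illustration of provenance tracking (one possible approach, not a workflow prescribed by the report), the sketch below pins a Hugging Face download to an exact commit hash and records what was pulled. It assumes the huggingface_hub package; the repository ID and revision are placeholders.

```python
# Minimal provenance sketch, assuming the huggingface_hub package.
# The repo id and commit hash are placeholders, not real artifacts.
import json
from huggingface_hub import snapshot_download

REPO_ID = "example-org/example-model"                   # placeholder
REVISION = "0123456789abcdef0123456789abcdef01234567"   # exact commit hash

# Downloading a pinned snapshot means a later, possibly tampered, upload to
# the same repository cannot silently replace the model you reviewed.
local_path = snapshot_download(repo_id=REPO_ID, revision=REVISION)

# Record what was pulled, alongside the rest of your supply chain metadata,
# so the model can be traced and audited like any other dependency.
provenance = {
    "repo_id": REPO_ID,
    "revision": REVISION,
    "local_path": local_path,
}
with open("model_provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)
```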
You can examine the report yourself using the following link:


Evan Powell criticizes traditional security architecture for relying on static rules and trusted tools that attackers now exploit.
Ivy Diaz reports on practical AI and machine learning applications being explored in unconventional fields at URTeC 2025.
Beatrice Nolan covers NVIDIA CEO Jensen Huang’s critique of Anthropic CEO Dario Amodei.
Nikki Davidson highlights the need for public-sector employees to develop AI communication skills.
Greg Otto reports on how AWS is leveraging large language models to turn real-time attack sensor data into security insights.
Bill Toulas looks at 'EchoLeak,' the first zero-click AI exploit targeting Microsoft 365 Copilot.
Europol shares its Internet Organized Crime Threat Assessment for 2025.
Rob Bowley gives his thoughts on Google’s report of a 10% boost in engineering productivity from AI, based on increased developer capacity.
Stephen Schmidt argues that defenders currently gain more from AI advances than attackers.
The Artificially Intelligent Enterprise covers how to start actually using AI agents today.
AI Tangle looks at Apple's annual WWDC 2025 event, OpenAI hitting a milestone of $10 billion in ARR, and more.

Dear CIO is part of the AIE Network, a network of over 250,000 business professionals who are learning and thriving with Generative AI. Our network extends beyond the AI CIO to the Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals who are looking to learn how AI works.