Dear CIO,
Every major technology shift comes with a period of turbulence where innovation races ahead while oversight tries to catch up. We have seen this before with cloud, DevOps, and mobile. In 2025, with the rapid adoption of autonomous AI agents and generative systems, that gap has widened dramatically. The audit function is lagging, and in an era of real-time threats and self-evolving systems, that delay is becoming dangerous.
Auditors, CISOs, and regulators are still trying to retrofit frameworks built for a different era. Meanwhile, AI systems are morphing, learning, and making decisions on their own. This disconnect creates fertile ground for threats to flourish, especially ones that look nothing like yesterday’s attacks.
Best Regards,
John, Your Enterprise AI Advisor

Audit Cannot Keep Up With AI
Why Audit Keeps Falling Behind and What You Can Do About It

AI-Powered Social Engineering
In ISACA’s 2026 Tech Trends and Priorities Report, AI-driven social engineering has overtaken ransomware and supply chain attacks as the top cybersecurity concern for 2026. The reason is that generative models have made it trivially easy to mimic human tone, timing, and even emotional nuance. We are no longer talking about clumsy phishing emails. These attacks are polished, persuasive, and often delivered via synthetic voice or video. Scarier still, they are now fast, cheap, and incredibly hard to detect.
Despite the growing sophistication of these attacks, ISACA found that only 13% of organizations feel “very prepared” to manage AI-related threats. A full quarter admitted they are “not very prepared” at all. Policies are being written after the fact, not in anticipation of what is coming. Once again, audit is reacting to the smoke, not tracking the fire.
The Rise of Autonomous Agents And The Collapse Of Static Defenses
Agentic AI is moving into production. These systems don’t just follow rules. They reason, learn, and adapt. They use tools, write code, and in some cases, modify their own behavior. What happens when those agents go rogue? In one of my recent presentations, I discussed the rise of polymorphic AI, which can change tactics, rewrite code, and outmaneuver static defenses. By the time traditional audit controls, such as signatures, access lists, and rule sets, are in place, the threat has already evolved. Today’s AI can exploit a zero-day in minutes. Detection tools and compliance checklists designed for quarterly audits are simply outmatched.
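To make that concrete, here is a deliberately simplified sketch (all names and payloads hypothetical) of why static, hash-based signature lists collapse against polymorphic code: one trivial mutation produces a brand-new signature while the behavior is unchanged.

```python
# Toy illustration: static signature matching vs. a polymorphic payload.
# Payloads and the signature list are invented for illustration only.
import hashlib

# The static control: a list of known-bad payload hashes
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"exfiltrate(credentials); phone_home()").hexdigest(),
}

def signature_match(sample: bytes) -> bool:
    """The 'quarterly audit' control: an exact-hash lookup."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_SIGNATURES

original = b"exfiltrate(credentials); phone_home()"
mutated = original + b"  # harmless-looking padding"  # behavior unchanged

print(signature_match(original))  # True  -> the static control catches it
print(signature_match(mutated))   # False -> one trivial mutation and it slips through
```

A signature list only ever describes what the threat looked like yesterday; a system that rewrites itself invalidates that description as fast as you can publish it.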
Shadow AI And The Next Governance Crisis
Perhaps the most urgent threat is invisible AI. Shadow AI refers to agents and systems deployed without IT’s knowledge or approval. Think “shadow IT” on steroids. These systems are often installed by well-meaning employees to automate tasks, generate insights, or write code, but they frequently run with elevated privileges, process sensitive data, and make decisions that fall outside governance frameworks. When AI systems act autonomously and leave no trail, oversight becomes impossible. We have already seen real-world incidents where agents deleted production data or falsified logs. The line between error and malice is blurring fast.
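One practical starting point is simply inventorying what is already calling AI services. The sketch below is a minimal, assumption-laden example: the log format, domain list, and sanctioned-source set are all hypothetical, but the pattern, scanning egress logs for traffic to known AI API endpoints from unapproved sources, is a cheap first pass at surfacing shadow AI.

```python
# Minimal sketch: flag potential shadow AI by finding egress traffic to
# known AI API domains from sources not on a sanctioned list.
# Log columns, domains, and sanctioned hosts below are illustrative assumptions.
import csv
from collections import defaultdict

# Domains commonly associated with hosted AI services (illustrative, not exhaustive)
AI_DOMAINS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}

# Hosts or service accounts approved to call AI APIs (hypothetical)
SANCTIONED_SOURCES = {"ml-gateway-01", "ml-gateway-02"}

def find_shadow_ai(proxy_log_path: str) -> dict[str, int]:
    """Return unsanctioned sources and how often they hit AI endpoints."""
    hits: dict[str, int] = defaultdict(int)
    with open(proxy_log_path, newline="") as f:
        # Assumed CSV columns: timestamp, source_host, dest_domain
        for row in csv.DictReader(f):
            if row["dest_domain"] in AI_DOMAINS and row["source_host"] not in SANCTIONED_SOURCES:
                hits[row["source_host"]] += 1
    return dict(hits)

if __name__ == "__main__":
    for host, count in sorted(find_shadow_ai("proxy_egress.csv").items(), key=lambda kv: -kv[1]):
        print(f"Unsanctioned AI traffic: {host} ({count} requests)")
```

This will not catch agents that tunnel traffic or run local models, but it turns “we have no idea what is out there” into a ranked list you can start governing.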
What Happens When Audit Falls Too Far Behind?
Microsoft predicts that more than 1.3 billion AI agents will be in operation by 2028. Each one represents a new endpoint, a new vector, and a new challenge for security teams. The real underlying issue is that if AI systems become self-monitoring, they will effectively begin to audit themselves. Unless we rethink our entire approach, human auditors risk becoming obsolete. The deeper danger is AI with autonomy but no accountability. AI that can deceive, replicate, and reason will test every assumption we have made about digital trust.
Where We Go From Here
ISACA calls for better frameworks and more upskilling. That is a start, but it is not enough. We need to move from retrospective audit to real-time observability. In practice, this means designing systems that log not just outcomes, but the decision paths an AI takes to reach them. My “NORMAL Stack” approach offers a glimpse into what future-ready architecture might look like. We need to focus on building systems that are inherently auditable.
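What might decision-path logging look like in practice? Here is a minimal sketch under my own assumptions (it is not part of the NORMAL Stack or any standard): each agent step is captured with its inputs and output, and records are hash-chained so edited or deleted entries break verification, which matters given the falsified-log incidents mentioned above.

```python
# Minimal sketch of decision-path logging: every agent step is appended as a
# structured record, hash-chained to the previous one so tampering is evident.
# The record fields and chaining scheme are illustrative assumptions.
import hashlib
import json
import time

class DecisionTrail:
    def __init__(self) -> None:
        self.records: list[dict] = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def log_step(self, agent: str, action: str, inputs: dict, output: str) -> None:
        record = {
            "ts": time.time(),
            "agent": agent,
            "action": action,       # e.g. "tool_call", "model_decision"
            "inputs": inputs,
            "output": output,
            "prev_hash": self._prev_hash,
        }
        # Canonical JSON keeps the hash stable regardless of key order
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = record["hash"]
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted record breaks it."""
        prev = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True

trail = DecisionTrail()
trail.log_step("pricing-agent", "tool_call", {"tool": "get_inventory", "sku": "A123"}, "qty=4")
trail.log_step("pricing-agent", "model_decision", {"context": "low stock"}, "raise price 5%")
assert trail.verify()
```

The point is not this particular scheme. It is that the audit trail gets built at the moment the agent acts, not reconstructed a quarter later from whatever logs survived.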
Do Not Wait For A Breach To Get Serious
If your organization is still treating AI oversight as an afterthought, it is only a matter of time before you are blindsided. Yes, AI-powered social engineering will dominate headlines in the near term, but the real threat runs deeper. CIOs must recognize that the audit and compliance playbook needs a rewrite. It is time to move from reactive reporting to proactive sensing, from checklists to continuous validation, and from after-the-fact oversight to governance embedded by design. At the end of the day, when audit is always two steps behind, the breach is already in progress.

How did we do with this edition of the AI CIO?

John Rzeszotarski highlights overlooked safety and compliance engineering in agent deployment through an example of building a Highly Regulated Golf Course Agent.
KPMG shares its 11th CEO Outlook, showing caution yet optimism about the future.
Raghvender Arni argues that AI shifts software bottlenecks from coding speed to domain clarity, empowering small expert teams to outpace large orgs and rendering code-scribe roles obsolete.
Om Nalinde examines the new Claude Code sandboxing that slashes permission prompts by 84% through dual filesystem and network isolation.
Tapabrata “Topo” Pal shares how AI is reshaping the buy vs build debate.
The Artificially Intelligent Enterprise covers how to develop research using AI tools.

Dear CIO is part of the AIE Network, a network of over 250,000 business professionals who are learning and thriving with generative AI. Our network extends beyond the AI CIO to the Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals who are looking to learn how AI works.
