Dear CIO,
There’s a pattern I’ve noticed over the years in podcast interviews. I usually prepare far more material than we ever get to. We’ll have a strong conversation, but because of time, pacing, or where the host wants to go, half the questions never get asked. After a while, I realized those unused prompts weren’t wasted at all. They were often the beginning of something more useful: a deeper written reflection. So I decided to turn some of those unasked questions into newsletter posts. This one is about AI, leadership, and the recurring mistakes organizations make whenever a new technology wave arrives.
Best Regards,
John, Your Enterprise AI Advisor

From AI Hype to Human Systems
What Leaders Still Get Wrong

The Real Problem with AI Transformations Isn’t the Technology
When people talk about technology disruption, they often reach for familiar models like Rogers’ Diffusion of Innovations or Geoffrey Moore’s Crossing the Chasm. Those frameworks are helpful for understanding adoption patterns at a high level, but I’ve always preferred a different lens for understanding what actually happens inside organizations: the J-curve, and more specifically, the Satir Change Process Model.
It’s a better description of the messy middle. Every major technology wave creates the same dip: confusion, resistance, role anxiety, and temporary productivity loss. The bottom of that curve is where the real sociotechnical learning happens, and it matters more than most CIOs, vendors, and transformation teams want to admit.
I’ve seen versions of this before. I was around for the expert systems era in the 1980s. My first startup built an expert system for managing distributed systems as companies moved off mainframes: suddenly there were more knobs to turn, more complexity to manage, and an urgent need for new forms of operational understanding. Later, I dabbled with Watson in the 2010s. Different eras, different labels, same pattern.
These recurring behaviors are not unique to AI; they appear in every technological hype cycle. What organizations underestimate is rarely the technology’s capabilities; it is the capacity of humans to adapt to it. To succeed with new technology and get through the J-curve, organizations must become learning organizations. As Andrew Clay Shafer succinctly puts it, “You are either a learning organization, or you are losing to someone who is.” That truth does not change simply because an organization begins using AI.
Becoming a learning organization means scaling talent, building shared language, and increasing the organization’s capacity for innovation. It is not a nice-to-have in an AI era; it is the only way through. If talent development and innovation practices are not synchronized, the system will bottleneck. Goldratt would call that a constraint. In practice, it behaves a lot like drum-buffer-rope: the pace of transformation is limited by the weakest coordinated part of the system, not by the loudest executive commitment.
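If you want to see that constraint logic in miniature, here is a short sketch. The stage names and rates are hypothetical, invented purely for illustration; the point is only that the slowest coordinated part, not the most funded or loudest one, sets the pace.

    # A minimal sketch of Goldratt's drum-buffer-rope idea: in a dependent
    # chain of activities, throughput is set by the slowest stage (the
    # "drum"). All stage names and rates below are hypothetical.
    stages = {
        "executive sponsorship": 50,  # initiatives championed per quarter
        "talent development": 8,      # teams trained per quarter
        "platform enablement": 20,    # teams onboarded per quarter
        "innovation pipeline": 30,    # experiments supported per quarter
    }

    drum = min(stages, key=stages.get)  # the constraint
    throughput = stages[drum]           # the system's real pace

    print(f"Constraint (drum): {drum} at {throughput} teams/quarter")
    for name, rate in stages.items():
        print(f"  {name:22s} rate={rate:3d}  unusable capacity={rate - throughput}")

Buying more of a non-constraint (more licenses, louder mandates, a bigger steering committee) only adds unusable capacity; the pace changes only when the drum does.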
The Uncomfortable Truth Leaders Still Resist
One of the hardest truths for leaders to accept is that real transformation requires giving up the illusion of control. Great leaders drive intent and trust. They delegate control, permission, and ownership to the people closest to the work. David Marquet captured this well in Turn the Ship Around! Strong leadership is not about saying “never” or “don’t.” It is about clarifying the why and building the conditions for good judgment. I’ve seen this resistance repeat itself for decades. In the 1990s, leaders told me banks would never run tier-one and tier-two applications on Linux. In the 2000s, many of those same leaders insisted that regulated businesses don’t run in the cloud.
And now, in AI, we are watching the same script unfold again. Last year, I worked with a large manufacturer on their AI transformation. The company had an enterprise agreement with Microsoft for Copilot, yet more than half of its 4,000 developers were using Cursor. In another case, a CEO at a top-five U.S. bank saw a prototype of an AI code assistant, bypassed the CIO entirely, and bought a major enterprise agreement directly from the vendor. That tells you something important: when the official strategy and actual behavior diverge, the organization is already telling you where the real learning is happening.
The formula for durable transformation is more consistent than most people realize. Toyota designed the TPS house and left it to Ohno to furnish. Amazon designed the flywheel and let Jeff Wilke operationalize it. Apple designed the vision and let Tim Cook systemize it. The pattern is clear. Leaders must provide direction, constraints, and purpose. But they also have to trust the system to learn.
Why I Reject the “What Should CIOs Do in the Next 90 Days?” Framing
I understand why people ask for a 90-day AI plan; it sounds practical and decisive. I reject the framing anyway, because it manufactures artificial urgency and rewards short-term leadership. Learning does not adhere to executive timelines. Dr. Deming’s Theory of Knowledge, the PDSA (Plan-Do-Study-Act) cycle, the scientific method, and Toyota Kata are all grounded in iterative learning rather than arbitrary deadlines.
One of Mike Rother’s students (Rother is the author of Toyota Kata) created a game called Kata in the Classroom, and I’ve used it with leaders for years. It is a simple exercise: teams are asked to assemble an animal puzzle under intense time pressure. The leaders set aggressive targets. The teams make predictions. Almost universally, they miss at first. Then they iterate. Then they learn. Then they improve. The exercise exposes something executives often avoid confronting: the gap between mandated delivery and actual capability.
In the game, the CEO demands a result in 15 seconds: the puzzle must be solved in that time. The team’s first attempt usually takes about 60 seconds. Through repeated practice cycles, which Rother calls “katas,” the team learns and improves its speed. I have run this game with groups of operations staff, developers, team leads, and even C-suite members, and one consistent theme stands out: the only effective time-based plan is one that is learned.
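The shape of that learning curve is easy to simulate. In the toy model below, the 15-second target and the roughly 60-second first attempt come from the game itself; the assumption that each practice round closes about 20 percent of the remaining gap is mine, purely for illustration.

    # Toy model of the Kata in the Classroom dynamic. The target and first
    # attempt come from the game; the 20% per-round improvement is an
    # illustrative assumption, not a measured constant.
    target = 15.0       # seconds demanded by the "CEO"
    time_taken = 60.0   # seconds on the team's first attempt
    improvement = 0.20  # fraction of the remaining gap closed per kata round

    round_num = 0
    while time_taken > target * 1.1:  # stop once within 10% of the target
        round_num += 1
        time_taken -= improvement * (time_taken - target)
        print(f"round {round_num:2d}: {time_taken:5.1f}s")

Run it and the answer is about sixteen practice rounds, not a mandate. Change the improvement rate and the count changes with it, which is exactly the point: the schedule is an output of learning, not an input to it.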
I also use the OODA Loop with leaders. John Boyd’s model is a useful framework for understanding decision-making in uncertain situations.
Observe what is happening.
Orient using context, experience, and judgment.
Decide what to do, and in what order.
Act, then learn from the resulting mini J-curves.
That is what an AI strategy should look like: not a deadline-driven theater of urgency, but a disciplined system for sensing, learning, deciding, and adapting.
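For leaders who prefer to see a loop rather than read about one, here is a deliberately skeletal sketch of an OODA-shaped adoption cycle. Every function name and signal in it is a placeholder I made up, not a real framework or vendor API.

    # A hypothetical skeleton of an OODA-shaped AI adoption loop.
    import random

    def observe():
        # Gather signals: usage data, incidents, where shadow adoption is happening.
        return {"assistant_adoption": random.random(), "incidents": random.randint(0, 5)}

    def orient(signals, history):
        # Put signals in context against what earlier cycles taught us.
        history.append(signals)
        return {"trend": signals["assistant_adoption"] - history[0]["assistant_adoption"]}

    def decide(context):
        # Choose the next experiment based on orientation, not on a calendar.
        return "expand the pilot" if context["trend"] >= 0 else "pause and retrain"

    def act(decision):
        # Run the experiment; each act produces its own mini J-curve to study.
        print(f"acting on: {decision}")

    history = []
    for cycle in range(3):  # each pass is a learning cycle, not a deadline
        signals = observe()
        act(decide(orient(signals, history)))

The code is trivial on purpose. The discipline is in running the loop continuously and letting the Orient step, not the org chart, drive the Decide step.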
The AI Question Most Leaders Waste Too Much Time On
One of the themes I came away with after writing Rebels of Reason is that we still spend too much time on the wrong philosophical questions. The history of AI is often told as a march of technical milestones: Aristotle’s formal logic, Boolean mathematics, Lovelace and programming, Turing and computation, early neural networks, deep learning, and generative AI. But the throughline underneath all of it is our fascination with intelligence itself.
For 100 years, we’ve been trying to make a machine that thinks. What we discovered instead is that we made one that does not think like us. That difference matters. Edsger Dijkstra said it best: the question of whether computers can think is like asking whether submarines can swim.
Who cares if a machine “thinks” in the human sense? We may never know. But it is the wrong operational question anyway. The important question is whether it is useful, governable, and embedded in a system that humans can understand and improve. Time spent obsessing over AGI or ASI abstractions often distracts leaders from the concrete work of design, capability, and risk management. In fact, some of that abstraction can be dangerous, as it tempts organizations to anthropomorphize systems rather than engineer them.
That is one reason I often compare agentic AI to Deming’s Red Bead Game. If the agents are the workers, then everything else is the system. Blaming an agent for hallucination is like blaming a willing worker for drawing red beads. The issue is not the worker. The issue is the system design. It doesn’t matter whether the cause is defective beads or the wrong MCP call; it is always the system.
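The Red Bead Game is also easy to simulate, which is part of its power. In the sketch below, the 20 percent red-bead mix and the 50-bead paddle draw follow the classic setup; the agent names are hypothetical.

    # Deming's Red Bead experiment, recast with agents as the willing workers.
    import random

    RED_FRACTION = 0.20  # the system: 20% of the beads in the box are red
    DRAW_SIZE = 50       # each worker draws 50 beads per day with the paddle

    workers = ["agent_a", "agent_b", "agent_c", "agent_d", "agent_e", "agent_f"]
    for day in range(1, 4):
        print(f"day {day}:")
        for w in workers:
            reds = sum(random.random() < RED_FRACTION for _ in range(DRAW_SIZE))
            print(f"  {w}: {reds} red beads")

Run it a few times and you will see “star performers” and “problem workers” emerge and swap places, all of it pure sampling noise. Ranking, rewarding, or firing on those numbers rewards randomness; the only way to change the red-bead count is to change the system.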
The Ownership Question Is Usually the Wrong Question
When I hear a CIO, a CISO, and a Head of Platform Engineering argue about who “owns” AI, I usually assume they are already asking the wrong questions. Org chart ownership is rarely the real bottleneck. The better question is whether the people involved share the same understanding of the mission. This is why I come back to the old stonecutter story, the version popularized by Peter Drucker. A traveler asks three workers what they are doing. One says, “I’m cutting stone.” Another says, “I’m building a wall.” The third says, “I’m building a cathedral.”
Same work. Different levels of understanding. That is the real diagnostic for AI readiness. If one leader sees AI as procurement, another sees it as compliance, and another sees it as developer tooling, they are not yet operating as a system. They are cutting stone in parallel. Organizations scale AI when they align around the cathedral.
That does not mean centralizing everything under a Chief AI Officer. In fact, I’m skeptical of those structures when they become substitutes for actual capability. Readiness is not revealed by who owns the budget line. It is revealed by whether leaders understand the larger mission well enough to coordinate learning, constraints, trust, and execution across the enterprise.
What This Means for Leaders Right Now
The AI era will not be wasted because the models were weak. If it is wasted, it will be because organizations repeated the same behavioral mistakes they always make during technological change.
They will impose certainty where learning is required.
They will focus on tools before a common language is established.
They will centralize authority while real experimentation happens elsewhere.
They will blame outputs rather than redesign systems.
And they will argue over ownership rather than purpose.
The organizations that break the pattern will do something simpler and much harder: they will treat AI transformation as a human systems problem first. That means building learning organizations. It means aligning talent with innovation. It means leading with intent rather than control. It means understanding that the bottom of the curve is not failure; it is the work. And it means remembering that the goal is not to prove whether machines think like humans. The goal is to build systems that help humans think, learn, and work better together.


Artificial Intelligence didn’t start in Silicon Valley. It began with centuries of thinkers who refused to treat intelligence as something mystical. Inspired by Rebels of Reason, this live 8-part biweekly course (starting April 22nd) traces AI as a long intellectual journey, not a hype cycle, exploring how machines learned to count, reason, search, learn, and ultimately generate language through the ideas that made it possible. With no heavy math or prerequisites, it focuses on the breakthroughs that shaped modern AI, making it perfect for technologists, leaders, students, and anyone trying to understand what’s really behind today’s AI moment. Register here.
The Artificially Intelligent Enterprise writes about Meta’s release of Muse Spark.
AI Tangle covers OpenAI’s $122 billion funding round, Microsoft’s three new models, and Google’s embedding of Gemini deeper into its platform.

Dear CIO is part of the AIE Network, a network of over 250,000 business professionals who are learning and thriving with generative AI. Beyond The AI CIO, the network includes The Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals looking to learn how AI works.



