Dear CIO,
There is a funny thing about technological revolutions: we keep treating them like they are new. Of course the tools change, the vocabulary changes, the vendors change, and even the executive decks change. But we make the same organizational mistake again and again: we over-focus on the technology and under-design the system it is entering.
This is why some people are coming back to Eric Trist, Ken Bamforth, and the British coal mines. In 1951, Trist and Bamforth published their famous study of the longwall method of coal-getting, examining what happened when a new mining technology changed not just the work but also the social structure around it. Trist later said that sociotechnical systems were “found” “down a coal mine by people who were already doing them.” That is the part leaders often miss. Sociotechnical thinking was not invented as an academic abstraction; it was discovered in the real mess of work.
Best Regards,
John, Your Enterprise AI Advisor

The Lesson From The Coal Mines
The AI J-Curve And Sociotechnical Thinking

The coal miners had developed ways of organizing that gave teams shared responsibility, local knowledge, mutual adjustment, and social cohesion. Then new mechanized methods arrived that promised efficiency, but they were introduced in a way that broke the social system of the work. The result was not simply “better productivity.” It created new forms of isolation, dependency, stress, and coordination failure. The technical system improved on paper, while the work system degraded in practice.
That phrase matters: the work system. Not the tool, the machine, or the org chart, but the system. We are in the middle of this J-Curve with AI right now. The DORA ROI of AI report frames this well: AI acts as an amplifier. It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones. Without a strong underlying organizational system, AI creates localized pockets of productivity that get lost in downstream chaos. That is the coal mine lesson all over again. A new technology enters the workplace. Leaders assume the productivity gain is inside the machine. Then they are surprised when the system pushes back. They ask: Why aren’t people adopting it? Why did productivity dip? Why are developers using shadow tools? Why did this increase risk instead of reducing it?
The answer is usually not that the technology failed. The answer is that leadership optimized the technical subsystem while neglecting the social subsystem. That is the trap.
The J-Curve Is Not Failure. It Is the Work.
When a major technology shift enters an organization, there is almost always a dip. New practices are not yet reliable, so confusion goes up, confidence goes down, and old practices stop fitting cleanly. People spend time learning new interfaces, adapting workflows, and figuring out what “good” looks like now. The DORA report refers to this as the learning cost of transformation. AI adoption brings a learning curve, a verification tax, and pipeline adaptation. Developers may move faster locally, but the system now has to absorb more code, decisions, review burden, governance questions, and downstream load. That is not an implementation bug. That is the J-Curve. The mistake leaders make is interpreting the dip as evidence that the initiative is failing, so they pull back, centralize harder, add more controls, buy another tool, or demand a 90-day plan that makes the dashboard look better.
I reject that framing. A 90-day plan can be useful if it is treated as a learning container, but it becomes dangerous when it turns into theater. Theater is what happens when leaders confuse motion with learning: a roadmap is published, a tool is selected, pilots are announced, dashboards appear, and everyone can point to activity. But the harder questions remain untouched: What did we actually learn? What assumptions were wrong? What constraints did AI expose? What risks did we create? What work should we stop doing? Learning does not obey executive timelines. Deming understood that targets without methods distort the system. Ackoff understood that improving individual parts can still make the whole perform worse. Senge understood that learning organizations require reflection, shared understanding, and systems thinking, not just urgency. The Toyota Production System (TPS) was built on this. So yes, have a 90-day plan, but make it a 90-day learning agenda, not a 90-day performance promise. The question at the end should not be, “Did we finish the rollout?” It should be, “What did we learn about the system that we could not see before?”
Sociotechnical Does Not Mean “Humans Plus Tools”
We use the word sociotechnical a lot, but I am not always convinced we understand what it means. It does not mean “people should be involved.” It does not mean “do change management after procurement.” It does not mean “train the users once the vendor rollout is complete.” Sociotechnical systems thinking means that the social and technical systems must be designed together. You cannot optimize one independently and assume the other will adapt. Machines do not control humans, and humans do not fully control machines. The performance is in the interaction. This matters more with AI than it did with many previous waves because AI does not just execute instructions. Increasingly, it participates in interpretation, recommendation, generation, and action. The system is no longer simply “human decides, software executes.” We are moving toward read, reason, write, and execute. That changes the operational surface. It changes the risk surface. It changes the learning surface. And it absolutely changes leadership.
When leaders treat AI as a procurement event, they miss the point. When they treat it as a coding productivity tool, they miss the point. When they treat it as a headcount reduction strategy, they really miss the point. The question is not, “How much code can AI generate?” The question is, “What system will absorb that code, verify it, govern it, learn from it, and convert it into customer value?” DORA makes the same point in engineering terms: purchasing licenses alone will not guarantee a return. AI’s impact depends on the health of the environment in which it operates. Mature internal developer platforms and streamlined pipelines can turn AI speed into value. Bottlenecks, bureaucracy, brittle tests, and fragmented data can turn that same speed into technical debt. That is sociotechnical systems thinking in modern clothes.
The Coal Mine Is Now the Software Factory
The lesson from the longwall study was not that mechanization was bad, but that the design of work matters. The miners had a social system that carried knowledge, coordination, trust, and resilience. When the technology disrupted that system without redesigning the work, the organization paid the price. Today, the same thing happens when AI lands inside software organizations. A developer uses an assistant and writes code faster. Great. But now the pull request queue grows. Security review slows down. Architecture standards become inconsistent. Tests are brittle. Documentation is outdated. The internal platform is hard to navigate. Nobody agrees on what an “agent” actually means. The CIO, CISO, platform team, legal, audit, and engineering all have different mental models. Then someone says, “AI did not deliver ROI.” No. The system did exactly what it was designed to do. AI amplified it.
This is where Deming’s Red Bead Game is still one of the best models we have. If the agents are the workers, everything else is the system. Blaming an agent for hallucination without looking at context, data quality, workflow design, evaluation, guardrails, and feedback loops is like blaming a willing worker for drawing red beads. It is always the system. Wrong MCP call? System. Bad context? System. No evaluation harness? System. No kill switch? System. No shared definition of authority? System. No agreement between risk acceptance, risk delivery, and audit controls? System. The point is not to absolve people of responsibility. The point is to put responsibility where it belongs: in the design and management of the work system.
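To make “it is always the system” concrete, here is a minimal sketch in Python. Everything in it is hypothetical and not drawn from any particular framework; it simply shows what it looks like when the kill switch, the authority boundary, the context check, and the audit log live in the system around the agent rather than inside the agent itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class WorkSystem:
    """The 'everything else' around the agent: guardrails, logs, kill switch."""
    allowed_actions: set = field(default_factory=lambda: {"read", "write"})
    kill_switch: bool = False
    audit_log: list = field(default_factory=list)

    def run(self, agent_fn, action: str, context: str):
        # Kill switch: the system, not the agent, decides whether work proceeds.
        if self.kill_switch:
            return self._log(action, context, "blocked: kill switch engaged")
        # Authority boundary: reject actions outside the agreed envelope.
        if action not in self.allowed_actions:
            return self._log(action, context, f"blocked: '{action}' not authorized")
        # Context quality gate: a thin stand-in for a real evaluation harness.
        if not context.strip():
            return self._log(action, context, "blocked: empty context (system fault)")
        return self._log(action, context, agent_fn(context))

    def _log(self, action, context, outcome):
        # Every decision is recorded so it can be replayed and evaluated later.
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "action": action, "context": context, "outcome": outcome,
        })
        return outcome

# The "willing worker": a trivial agent that does only what the system allows.
system = WorkSystem()
print(system.run(lambda ctx: f"drafted summary of: {ctx}", "write", "Q3 incident report"))
print(system.run(lambda ctx: "should never run", "execute", "deploy to prod"))
```

Notice that the agent itself is a one-line lambda. When something goes wrong in a setup like this, there is nothing left to blame but the design of the system around it.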
Leaders Are Still Repeating the Same Pattern
I have seen this movie before. Banks would “never” run important workloads on Linux. Regulated companies would “never” use the cloud. Developers would “never” own operations. Security would “never” be automated. Now we hear the same script with AI. Meanwhile, real learning is already happening somewhere else. Developers are using tools before the official strategy catches up. Business users are experimenting before governance has a vocabulary. Teams are prototyping before the organization knows what production readiness means. This is where leaders need to be careful. The instinct is to clamp down. Centralize. Control. Create a committee. Debate ownership. Argue whether AI belongs to the CIO, CISO, Chief AI Officer, platform team, or procurement.
That is usually the wrong question. The better question is: do we have a shared understanding of the work system we are trying to create? Do we know where human judgment is required? Do we know where autonomy is allowed? Do we know what the agent can read, write, and execute? Do we know how decisions are logged, replayed, evaluated, and stopped? Do we know what “done” means? Because a prototype does not equal production. I still feel a little embarrassed that we have to say that out loud. But we do. The easier it is to start, the easier it is to think you are done.
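One way to turn those questions into something inspectable is a declarative authority policy per agent. The sketch below is hypothetical; the field names and resource names are illustrative, not from any real product, but they map one-to-one onto the questions above: read, write, execute, human judgment, logging, replay, stopping, and “done.”

```python
# A hypothetical authority policy for one agent. None of these field names
# come from a real framework; they mirror the questions above.
AGENT_POLICY = {
    "agent": "invoice-triage-assistant",          # hypothetical agent name
    "read":    ["erp.invoices", "crm.accounts"],  # data it may consume
    "write":   ["erp.invoice_notes"],             # artifacts it may produce
    "execute": [],                                # no autonomous actions yet
    "human_judgment_required": ["payments.approve", "vendor.create"],
    "telemetry": {
        "log_decisions": True,      # every recommendation is recorded
        "replayable": True,         # inputs stored so outcomes can be re-run
        "kill_switch": "ops.page",  # who can stop it, and how
    },
    "definition_of_done": "recommendation reviewed and accepted by a human",
}

def is_allowed(policy: dict, verb: str, resource: str) -> bool:
    """Deny by default: authority exists only where the policy grants it."""
    return resource in policy.get(verb, [])

assert is_allowed(AGENT_POLICY, "read", "erp.invoices")
assert not is_allowed(AGENT_POLICY, "execute", "payments.approve")
```

The design choice that matters is deny-by-default: the agent has exactly the authority the policy grants, and the policy, not the prototype, is what gets reviewed before anything reaches production.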
The Real Leadership Work
The organizations that make it through the AI J-Curve will not be the ones with the most licenses. They will not be the ones with the flashiest demos. They will not be the ones who replaced the most people. They will be the ones who understand the lesson from the coal mine. They will design the social and technical system together. That means building a common language. It means investing in internal platforms as products. It means making internal data AI-accessible and trustworthy. It means treating guardrails as part of the work, not after-the-fact compliance. It means measuring throughput and instability together. It means using the J-Curve as a learning model, not a panic trigger. It means freeing capacity for innovation, not using AI as an excuse to reduce headcount. It means leaders giving up the illusion of control and replacing it with intent, trust, and disciplined feedback. David Marquet’s lesson still applies: move authority to information. But in the AI era, we have to add something else: make sure authority has boundaries, telemetry, and accountability.
The Bottom of the Curve Is Where the Learning Happens
The most important part of the J-Curve is not the rebound. It is the bottom. That is where the organization learns what it really is. Not what the strategy deck says, what the vendor promised, or what the steering committee believes. The bottom of the curve reveals the actual system: the bottlenecks, the fear, the hidden work, the missing context, the brittle handoffs, the lack of shared language, and the unresolved risk model. That is not failure. That is information.
The leaders who understand sociotechnical systems thinking will use that information. The leaders who do not will blame the tool, blame the worker, or buy the next tool. Eric Trist found sociotechnical systems down a coal mine because the miners already understood something many executives still resist: work is not just tasks and tools. Work is relationships, knowledge, judgment, coordination, trust, and adaptation wrapped around technology. AI does not change that. AI makes it impossible to ignore. The future will not belong to organizations that simply adopt AI. It will belong to organizations that can learn faster than their J-Curves break them.
As Andrew Clay Shafer says, you are either a learning organization or you are losing to one. AI does not change that. It just speeds up the evidence.

How did we do with this edition of the AI CIO?

Artificial Intelligence didn’t start in Silicon Valley. It began with centuries of thinkers who refused to treat intelligence as something mystical. Inspired by Rebels of Reason, this live 8-part biweekly course (starting April 22nd) traces AI as a long intellectual journey, not a hype cycle, exploring how machines learned to count, reason, search, learn, and ultimately generate language through the ideas that made it possible. With no heavy math or prerequisites, it focuses on the breakthroughs that shaped modern AI, perfect for technologists, leaders, students, and anyone trying to understand what’s really behind today’s AI moment. Register here.
The Artificially Intelligent Enterprise covers Amazon, Google, Microsoft, and Meta’s reported $700 billion in projected AI spend for 2026, and more.
AI Tangle looks at OpenAI’s release of GPT-5.5.

Dear CIO is part of the AIE Network, a network of over 250,000 business professionals who are learning and thriving with Generative AI. Our network extends beyond the AI CIO to The Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals who are looking to learn how AI works.



