Coordination Theater: Why Your AI Strategy Looks Better from the Top
The dangerous perception gap between executive confidence and ground-level reality

Published: March 2, 2026
Author: Tommy Kenny
Category: AI Leadership
Reading Time: 6 minutes
The Illusion From Above
From the executive floor, AI adoption looks healthy.
Licenses have been purchased. Copilots deployed. Weekly active users are climbing. The quarterly deck shows engagement up and to the right. The story writes itself: we are modernizing.
But here's what new research reveals: there's a dangerous gap between what leadership sees and what's actually happening. Executives are significantly more likely than individual contributors to believe that AI strategy is clear and that adoption is widespread.
From the top, it looks coordinated.
From the floor, it often feels improvised.
This isn't just a perception problem. It's a compounding problem. And it's silently eroding your ROI.
The Data That Should Worry You
Section's 2026 AI Proficiency research, based on thousands of enterprise use cases, delivers a sobering finding: only about 15% of enterprise AI implementations are likely to generate measurable ROI.
The majority cluster around low-leverage tasks: rewriting emails, summarizing documents, replacing search. These activities feel productive. They generate activity metrics. They look good on dashboards.
But they don't compound.
The danger isn't disagreement between leadership and employees. It's divergence. When leadership believes transformation is already underway, the organization begins behaving as if coherence exists—even when underlying workflows remain unresolved.
Why Activity ≠ Transformation
This pattern isn't new. We've seen it before with every major technology wave:
ERP in the 1990s: Dashboards lit up, data became centralized, reporting improved. But companies that treated ERP as a reporting upgrade saw incremental gains. Those that redesigned supply chains and decision rights saw compounding efficiency.
CRM in the 2000s: Installing Salesforce didn't transform customer relationships. Redesigning sales processes, incentives, and forecasting models did.
Cloud migration in the 2010s: Simply lifting and shifting infrastructure did little until operating models changed alongside it.
In each wave, early success was measured in deployment milestones: seats provisioned, systems integrated, dashboards activated. The real economic lift arrived later—and only for organizations willing to rewire how decisions were made.
Technology diffused quickly. Structural redesign lagged. Compounding followed redesign, not installation.
AI is moving faster than those prior waves. But the pattern rhymes.
The Two Layers of Enterprise AI
Most organizations have successfully adopted what might be called the assistive layer of AI:
- Drafting is faster
- Research is easier
- Internal communication is quicker
- Analysis feels more fluid
This layer makes individual workers more efficient at producing outputs.
But there's a second layer most haven't touched: the decision layer.
Decision-grade AI changes:
- Prioritization
- Sequencing
- Trade-offs
- Resource allocation
- The recurring decisions that determine cost structures
Assistive AI improves outputs. Decision-grade AI reshapes decisions.
The difference isn't semantic. It's economic.
The Coordination Theater Trap
Here's what "coordination theater" looks like in practice:
The executive view:
- "We've rolled out Copilot to 80% of the organization"
- "Our AI champions program has trained 200 employees"
- "Weekly active users are up 40% quarter-over-quarter"
- "We have a clear AI strategy and governance framework"
The ground-level reality:
- "I use ChatGPT for emails but my actual workflow hasn't changed"
- "The tools are there but nobody told me what problems to solve"
- "My manager doesn't know how to evaluate AI-assisted work"
- "We're all experimenting in silos"
The metrics look healthy because you're measuring the wrong things. You're measuring adoption when you should be measuring transformation.
Three Questions to Expose the Gap
Before your next AI strategy review, ask these questions—not just to leadership, but to people three levels down:
1. "What decision has AI fundamentally changed in the last 90 days?"
Not "what task is faster" but "what decision do we now make differently because of AI?" If the answer is "none," you have adoption without transformation.
2. "Can you describe our AI strategy in one sentence?"
Ask your C-suite. Then ask middle managers. Then ask individual contributors. If you get three different answers—or blank stares—you have theater, not strategy.
3. "What's the measurable business outcome we're tracking?"
"Usage" isn't a business outcome. "Engagement" isn't a business outcome. Revenue impact, cost reduction, time-to-decision, error rates—these are business outcomes. If you can't name one specific metric that AI is measurably improving, you're watching a performance, not running a transformation.
From Theater to Transformation
The solution isn't to slow down adoption. It's to reconnect adoption to structural change.
Start with decisions, not tools. Identify the five recurring decisions that most affect your economics. Then ask: how should AI change how we make these decisions?
Close the perception gap. Create regular feedback loops between ground-level users and strategy owners. Not surveys—conversations. What's working? What's friction? What's actually changing?
Measure outcomes, not activity. Kill the vanity metrics. Weekly active users don't matter if they're using AI for the wrong things. Build measurement systems that track business impact, not adoption curves.
Redesign authority, not just access. AI proficiency should change who can make certain decisions and how fast. If your decision rights haven't changed, your AI is decoration.
The Bottom Line
AI adoption is widespread. AI transformation is rare.
The organizations that will compound value are not the ones with the most licenses, the highest engagement scores, or the best-looking dashboards. They're the ones willing to do the harder work: redesigning workflows, rethinking decision rights, and closing the gap between executive confidence and ground-level reality.
Coordination theater feels like progress. But when the dashboards glow and the economics don't compound, something structural is broken.
The question isn't whether your organization is adopting AI.
The question is whether anyone would notice if it stopped.
Tommy Kenny is the founder of Digital Executive Insight and author of Pragmatic Disruption. He advises executives on building AI strategies that create real value, not just impressive metrics.
Related Reading:
- The Integration Paradox — Why having AI tools ≠ having AI
- The Conviction-Execution Gap — Why knowing AI matters isn't enough
- The Agent Manager Era — Why the next executive skill is managing AI agents
Sources: Section 2026 AI Proficiency Report, Forbes ("Enterprise AI's Illusion of Progress: Coordination Theater," Jason Snyder, Feb 26, 2026), NBER firm-level AI adoption research