The revolution that never happened
AI was supposed to transform companies. Three years after ChatGPT's launch, the data tells a different story: the real revolution is individual, silent, and invisible to the boardroom.
The organizational revolution never happened. The individual revolution is already here.
Three years of empirical data now allow us to settle the question: generative AI has not transformed companies. It has transformed the individuals who use it. And this gap between the narrative and reality isn't a footnote. It's the key to understanding everything that's happening.
The great theatre
Let's start with facts. Four cases. Four promises. Four corrections.
In February 2024, Klarna's CEO announced that an AI chatbot had replaced 700 customer service agents in a single month. The global press ran with it. OpenAI published it as a showcase. The stock went up.[1] Fifteen months later, the same CEO told Bloomberg the strategy had "gone too far." Cost had become the only metric. Quality had suffered. Klarna started rehiring human agents. The chatbot handles simple questions, like an upgraded phone menu. Anything requiring judgment, empathy, or context reading goes back to humans.[2]
In May 2023, IBM's CEO announced a hiring freeze for 7,800 back-office positions replaceable by AI. Two years later, he corrected: AI had replaced "several hundred" HR employees. Not 7,800. Several hundred. IBM's total headcount actually grew. The savings were reinvested in hiring programmers and sales staff. AI hadn't eliminated jobs. It had shifted payroll.[3]
In October 2025, Amazon cut 30,000 corporate positions in two waves. Headlines read "AI replaces managers." But during the Q3 earnings call, Andy Jassy dropped a line no one picked up: "It's not even really AI, not yet. It's really culture." Amazon was correcting pandemic overhiring. AI was the packaging.[4]
In February 2026, Jack Dorsey laid off 4,000 employees at Block, roughly 40% of the workforce. He then co-published an essay with the managing partner of Sequoia Capital, Block's investor, arguing that AI makes middle management obsolete. The stock jumped 17%.[5]
Let's be honest. A CEO who just fired 40% of his staff co-signs a manifesto with his own investor to explain that the layoffs were visionary. That's not research. That's financial communications.
What the data actually says
You can say anything in a press release. You can't fake macroeconomic data.
In February 2026, the NBER surveyed 6,000 senior executives across four countries and asked a simple question: has AI changed anything in your company? 89% said no for productivity. Over 90% said no for employment.[6] These aren't outside observers. These are the leaders themselves.
MIT confirmed it from another angle: 95% of generative AI pilot projects produced no measurable return.[7]
Daron Acemoglu, the 2024 Nobel laureate in economics, produced the most rigorous calculation to date. He started with a straightforward question: of all the tasks humans perform in the economy, how many can AI actually automate cost-effectively? His answer: 4.6%. Not 30%. Not 50%. Four point six. The resulting productivity gain over a decade: under 1%. Better than nothing, he acknowledges, but not a revolution. A slow evolution, invisible to the naked eye in national statistics.[8]
Goldman Sachs had predicted in 2023 that 300 million jobs were "at risk." Two years later, Goldman published its own follow-up and found no significant correlation between AI exposure and anything at all: not unemployment, not hiring, not wages. The U.S. unemployment rate didn't budge.[9]
"AI is everywhere except in the macroeconomic data," says Torsten Slok, chief economist at Apollo.[9]
The organizational revolution doesn't exist in the numbers. It exists in the stock price.
Meanwhile, the individual is changing category
That's the organization side. Now look at what's happening to individuals. It's a different story entirely.
The strongest study to date, published in the Quarterly Journal of Economics, tracked 5,172 customer support agents using an AI assistant. The least experienced agents saw productivity gains of 30 to 35%. In practical terms: an agent with two months on the job reached the performance level of one with six months. As if AI compressed four months of learning into a few days of use.[10]
This isn't an isolated finding. Noy and Zhang, in Science, measured the same pattern in professional writing: the least skilled writers improved the most.[11]
In education, it's even clearer. A meta-analysis of 51 experimental studies, published in Nature in May 2025, measured ChatGPT's effect on student learning. The result: the average student using ChatGPT in a structured way outperformed 80% of students who didn't use it. And this wasn't about memorizing faster or cheating more efficiently. The strongest effect appeared in courses focused on practical skill development, where students learn to do, not to recite. Even critical thinking improved.[12]
What these studies document is a mechanism I've observed over two years of training: exposure to what's possible triggers the desire before motivation kicks in. The individual who touches the tool and sees what they can produce doesn't go back. They don't ask for a strategy. They don't wait for a roadmap. They keep going. When you can, you want to.
The silent vote
If AI changes individuals but not organizations, where is the transformation actually happening?
The answer is in a number everyone cites and nobody reads correctly. Microsoft surveyed 31,000 professionals across 31 countries and uncovered a massive phenomenon: BYOA, Bring Your Own AI. Employees are bringing their own AI tools to work, exactly as they brought their smartphones fifteen years ago, except this time the tool transforms what they're capable of producing. 78% of AI users at work do it without waiting for their company to provide anything.[13] When you can, you want to.
52% of AI users at work are afraid to admit it, because they fear being seen as replaceable.[13]
The individual opens a tab, tests, integrates. And stays silent. BYOA isn't an IT governance problem. It's a silent vote of no confidence in the organization's ability to understand what's happening. The individual who masters an LLM better than their organization holds an advantage the org chart doesn't reflect. They've already moved up in the real hierarchy. And nobody knows.
Fast Company calls BYOA an "asymmetry of power."[14] It is. And it runs in the opposite direction from what the CEOs imagine.
The exoskeleton and its blind spots
There's a deeper reading than productivity.
AI doesn't just make individuals faster. It changes their relationship to their own competence. The Microsoft New Future of Work 2025 report uses the right image: AI acts as a "cognitive exoskeleton." It doesn't replace the user's strength. It amplifies what the user can do with the strength they have.[15]
But this exoskeleton has blind spots.
Dell'Acqua's experiment with 758 BCG consultants using GPT-4 revealed what the authors call AI's "jagged frontier." The idea is simple: AI isn't equally good at everything. It has zones of strength and zones of weakness, and the boundary between them isn't a straight line. It's an irregular, unpredictable edge. Inside that frontier, consultants with AI were 40% better. But for a complex task outside the frontier, those using AI were 19 percentage points less likely to find the right answer than those working alone. AI made them confident and wrong.[16]
This frontier is a mirror. It reveals exactly where the individual knows and where they don't. For those who understand the map, AI becomes a tool for self-reconception. I know where I'm strong, I know where I'm fragile, I can work the gap. That's the shift from optimization to reconception, the critical threshold that most training programs never reach, not because the individuals can't, but because the organizational environment won't let them. When you can, you want to.
The barrier isn't cognitive. It's systemic.
The real question is about power
Here's the nerve.
If AI transforms the individual but not the organization, then CEOs announcing "AI-driven" restructurings aren't responding to a technological transformation. They're using a technological narrative to accomplish something else.
Marc Andreessen, co-founder of a16z, said it plainly: AI is a "massive silver excuse" to cut headcount accumulated during the pandemic.[17] Fortune 500 companies that mention "AI" in their quarterly results see their stock price rise by an average of 13.9%, compared to 5.7% for those that don't.[18] AI isn't a transformation tool. It's a financial communication tool.
But there's something deeper than AI washing.
When Dorsey announces that AI "replaces middle management," he's not talking about productivity. He's talking about power. Middle management isn't just a coordination layer that an algorithm could absorb. It's a protection layer. Middle managers are the organization's immune system: they translate between levels, absorb uncertainty, surface weak signals, negotiate meaning. They're the membrane that prevents power from flowing unfiltered from the top to the ground floor. Mollick showed they account for 22.3% of the variance in firm performance, more than market or leadership factors.[19] Foss and Klein, in "Why Managers Matter," argue that current challenges "make hierarchy and the managerial role more important than ever."[20]
Removing hierarchy in the name of AI isn't innovation. It's concentrating decision-making power in the hands of those who control the tools and the data. The CEO sees what the algorithm shows. Employees execute what the algorithm prescribes. The hierarchy doesn't disappear. It becomes invisible. And invisible power is uncontrollable power.
The Academy of Management Annals documents it: human-AI collaboration and algorithmic management are "two sides of the same phenomenon." On one side, you hear the empowerment story. On the other, control is being installed.[21]
The complete inversion
Let's recap.
AI is not transforming organizations. 89% of leaders say so themselves. 95% of pilot projects fail. The macroeconomic impact is indistinguishable from statistical noise.
AI is transforming the individuals who use it. Students who integrate it outperform 80% of those who don't. Novice workers gain four months of experience in days. 78% of professionals aren't waiting for their companies to get started.
Between the two, a chasm. And that chasm has a name: the question of power.
Organizations that understand AI serves the individual invest in skill-building, pedagogy, support. They accept that their employees are changing category, even if it means rethinking the org chart afterward.
Organizations that believe AI serves them use the technology to cut headcount, flatten hierarchy, and concentrate decisions. They confuse coordination with control. They dress up restructuring plans as visionary manifestos.
I'm often told that my view of AI, centered on the human, is utopian. It's the opposite. It's the only reading the data supports. The promises of organizational transformation are the utopia, refuted by three years of empirical evidence.
The question isn't what AI is doing to your organization. The question is: who in your organization has already moved up in the real hierarchy without the org chart knowing?
Sources
1. Klarna, "Klarna AI assistant handles two-thirds of customer service chats in its first month," press release, Feb. 2024.
2. CX Dive, "Klarna changes its AI tune and again recruits humans for customer service," article, 2025.
3. Entrepreneur, "IBM Replaced Hundreds of HR Workers With AI, According to Its CEO," article, 2025.
4. GeekWire, "'It's culture': Amazon CEO says massive corporate layoffs were about agility, not AI," article, 2025.
5. Fortune, "Jack Dorsey and Roelof Botha think AI can make middle management obsolete," article, Apr. 2026.
6. NBER, "Firm Data on AI," Working Paper, Feb. 2026.
7. MIT NANDA, reported by Legal.io, "MIT Report Finds 95% of AI Pilots Fail to Deliver ROI," article, 2025.
8. Acemoglu D., "The Simple Macroeconomics of AI," NBER Working Paper, 2024.
9. Goldman Sachs, "AI Labor Market Impact Update," reported by Fortune, Mar. 2025.
10. Brynjolfsson E., Li D., Raymond L., "Generative AI at Work," Quarterly Journal of Economics, May 2025.
11. Noy S., Zhang W., "Experimental evidence on the productivity effects of generative artificial intelligence," Science, Jul. 2023.
12. Wang J., Fan W., "The effect of ChatGPT on students' learning performance," Nature HSSC, May 2025.
13. Microsoft, "AI at Work Is Here. Now Comes the Hard Part," Work Trend Index, 2024.
14. Fast Company, "BYOAI: What it is and why it threatens your company," article, 2025.
15. Microsoft Research, "New Future of Work Report 2025," report, 2025.
16. Dell'Acqua F. et al., "Navigating the Jagged Technological Frontier," Harvard Business School, 2023.
17. Andreessen and Simon statements, reported by AI+ Info, article, 2024.
18. Stanford HAI, AI Index Report 2025; Fortune/FactSet, stock premium data linked to AI mentions.
19. Mollick E., "People and Process, Suits and Innovators," Strategic Management Journal, 2012.
20. Foss N., Klein P., "Why Managers Matter: The Perils of the Bossless Company," PublicAffairs, 2022.
21. Academy of Management Annals, "Managing with Artificial Intelligence: An Integrative Framework," article, 2025.