
There is a statistic circulating right now that keeps coming back: ninety-five percent of generative AI pilots fail to deliver measurable financial impact. Not because the technology doesn’t work. Because deploying it in a way that delivers results turns out to be an organizational challenge, not a technical one.
At EBG | Network, we have been gathering procurement and supply chain leaders across the Nordics for almost sixteen years. And if there is one thing that pattern of conversations reveals, it is this: we have heard variations of this story before. With ERP. With RPA. With e-procurement. Each time, the promise was real. Each time, the technology worked. And each time, the organization absorbed it, adapted it, and largely continued doing what it had always done.
So when the excitement about AI surfaces — and there is genuine excitement, and it is warranted — a quieter, more stubborn question follows. Not what can AI do? But what actually makes large organizations change?
The gap between wanting and doing
In conversations ahead of EBG | Xperience Stockholm this April, two very different perspectives on AI keep surfacing. One is deeply individual — focused on how a single person, once they truly start working with AI, begins to experience time differently, reclaim hours, and rethink what they are actually doing all day. The other is structural — starting always with the question: what problem are we actually solving, for whom, and does it hurt enough for the organization to act?
Both feel true. And both are confirmed by research published this spring from Stanford’s Digital Economy Lab, documenting fifty-one real enterprise AI deployments. The hardest challenges were not technical. Seventy-seven percent of what practitioners described as the most difficult part was invisible: process redesign, data quality, and getting people to actually change how they work. The technology, they said again and again, was the easy part.
The operating model is not a diagram
What we call an operating model in a large company tends to be imagined as an org chart, a set of processes, a governance structure. But what it really is, is a shared story — a living, daily-reproduced agreement about how things work, who decides what, what counts as success, and what happens if you step outside the lines. That story is stable not because people resist change, but because it reduces uncertainty. It tells people what to do tomorrow morning.
AI doesn’t rewrite that story automatically. It adds a new character to an existing plot.
The analogy to factories and electric motors is well-documented economic history. Stanford economist Paul David traced it in a landmark 1990 paper. Electric light bulbs were available by 1879. Generating stations existed in New York and London by 1881. Yet by 1900, electric motors still accounted for less than five percent of factory mechanical drive — and productivity had barely moved.
The delay was not primarily about resistance or lack of vision. It was rational. Factories built around central steam engines were still serviceable. The economics of replacing them did not yet make sense. The organizations that electrified first were not the ones that decided to change — they were the ones building new plants anyway, in expanding industries, who could design for electricity from the ground up. The rest waited for their old capital stock to wear out.
And even once the technology spread, the productivity gains were slow to appear in the statistics for a second reason: many of the real benefits — safer workshops, cleaner air, better machine control, improved working conditions — were never captured in conventional productivity measures at all. David’s point is not just that change takes time. It is that we systematically undermeasure the early value of a new technology, which makes the case for investment harder to make and the patience harder to sustain. That is a direct parallel to AI today.
The factories that eventually transformed did not do so through a single act of reorganization. They built up, over decades, a new cadre of architects and engineers who understood how to design for the new system. It was decentralized learning, slow by nature. David himself closes his paper with a warning that those who cite him rarely include: computers are not dynamos. The analogy illuminates but does not transfer perfectly. Unlike factory buildings, organizational structures do not physically depreciate — they can persist long after they have become economically obsolete, which makes inertia even more likely for AI than it was for electrification.
What the history does suggest, clearly, is that the productivity gains came four decades after the technology was commercialized — and that they required not just new tools but new skills, new organizational forms, and a new generation of practitioners who had grown up thinking differently.
Sweden understood this about computers in the 1990s. In 1998, the Swedish government introduced what became known as the Hem-PC reform (Home PC reform) — a tax arrangement allowing employers to provide employees with home computers as a tax-free benefit. In the first three years, the reform gave close to one million Swedes their first computer. That generation went on to build Spotify, Klarna, and a tech ecosystem that arguably should not have been possible in a country of ten million people. So notable was the outcome that Sweden’s AI Commission proposed an “AI for everyone” reform in 2024, explicitly inspired by the Hem-PC model. The proposal was not taken forward.
The reform was not a guarantee of transformation — a 2005 evaluation found that nine in ten recipients said they would have bought a computer anyway. But it changed the pace and the distribution.
Who actually resists, and why
One of the sharpest observations in the Stanford research challenges a common assumption about where resistance comes from. Conventional wisdom tends to focus on frontline workers fearing replacement. The data tells a different story.
The most frequent source of resistance — in thirty-five percent of cases — was not end users at all. It was staff functions: Legal, HR, Risk, and Compliance. As the report puts it: “C-level demands measurable proof of ROI, staff functions worry about process risks and blame, end users distrusted system inconsistency, and frontline workers feared replacement. Each group required a different solution.”
This matters because each of these groups has the organizational authority to slow or stop a project regardless of what the executive sponsor wants. Legal worries about liability. Risk and compliance worry about regulatory exposure. The CFO needs a clear line on the balance sheet before approving anything. And middle management — often overlooked — was described in one case as the most resistant layer of all, while senior leadership and junior employees were more willing to move.
The implication is that managing an AI deployment is not one change management challenge. It is four or five simultaneous ones, each requiring a different conversation, a different kind of evidence, and a different kind of reassurance.
What transformation actually looks like — in practice
The Stanford research includes three cases directly relevant to procurement. They are worth sitting with, because each one challenges a different assumption about what is possible and what is required.
Case 1: When an entire job category disappears
The first is a US-based logistics company receiving over 100,000 invoices annually — by fax, phone call, and email, all in different formats. Seven full-time employees did nothing but process them. Before a single line of code was written, the thousands of redundant invoice templates that had accumulated over the years, reviewed by nobody, had to be reduced to something workable. That invisible cleanup was the real work. The company’s president checked in every week throughout. The result: seven people became two. Processing time dropped to under twenty-four hours. Value created exceeded one million dollars. Not process optimization — an entire category of work, largely gone.
Case 2: When AI replaces the buyer entirely
The second is a regional supermarket chain operating at roughly half the industry margin benchmark with minimal negotiating leverage against suppliers. The company faced a classic procurement problem: too many SKUs, too many stores, too many variables for any human buyer to optimize simultaneously. Waste, stockouts, and timing decisions were made on gut feeling and manually compiled data.
The solution was not an AI assistant or a recommendation engine. The company deployed an autonomous procurement agent that replaced the human procurement function entirely — deciding what to buy, when, and from which supplier, across all stores simultaneously, without human approval in the loop. Waste down forty percent. Stockouts down eighty percent. EBITDA doubled.
As the project lead put it: “The market leader has much higher margins. These guys are super small. But they do almost as well, and their procurement power is zero compared to the big players. But what they have is that they don’t have waste.” For a small company with no scale advantage, AI became the substitute for scale.
Case 3: When the data is bad on both sides
The third is a large construction services company where field technicians submitted parts requests via paper forms, emails, and Excel spreadsheets. A team manually entered these into the procurement system and matched items to the parts catalog — slow, error-prone, and expensive.
The company deployed AI to extract requests from unstructured sources, match them to the catalog, and create requisitions automatically. The catch: data quality was poor on both sides. OCR failed. The catalog itself was inconsistent. The response was not to wait for better data. It was to design a four-stage pipeline that improved data progressively, with humans handling only exceptions rather than every transaction.
The project lead described the shift in mindset required: “We shifted from ‘this is your requirement; this is what it will do’ to ‘what does good enough look like?’ AI will improve if you monitor it and give it better data over time.” Projected productivity gain: thirty percent. Expected ROI over three years: ten times the investment.
Three procurement-related cases. Three different starting points. And three different lessons: that an entire category of work can disappear, not just shrink; that AI can substitute for scale when negotiating power is limited; and that waiting for perfect data is itself a decision — one that delays the learning that only comes from running the system.
The supertanker and the shrinking window
Large organizations move like supertankers. You turn the wheel now, and the direction changes a mile later. That has always been true.
What is new is that the technology no longer waits. The implementation timeline for an AI deployment now often exceeds the lifespan of the models it is built on. Organizations that took a year to deploy something in 2023 found it was already being superseded before it reached production.
The pressure this creates is real and largely unsolved: you cannot afford the long change cycles of the past, but the organization has not fundamentally changed how it moves. That tension — between the pace of technology and the pace of institutions — sits at the center of almost every serious conversation about AI transformation right now. It has no clean answer. But naming it honestly is at least a more useful starting point than pretending the gap does not exist.
The question worth sitting with
We at EBG | Network are not arriving at our April gatherings with answers. The most useful conversations over sixteen years have rarely started there.
What seems more productive to explore: what would have to stop being true in your organization for the change you describe wanting to actually become possible? What would have to be different about how decisions get made, what gets measured, and who has the authority to say that something is broken enough to rebuild from scratch?
The leaders joining EBG | Xperience are not observers of this question. They are living it. That shared reality — not a shared answer — is probably the most honest and useful place to start.
Learn more about Xperience Stockholm April 23rd
Learn more about Xperience Malmö April 28th
Written ahead of EBG | Xperience Stockholm, April 23rd 2026 and EBG | Xperience Malmö, April 28th 2026 — half-day gatherings for procurement leaders in large Nordic organizations, exploring leadership, AI and what organizational change actually takes. For further reading: The Enterprise AI Playbook, Stanford Digital Economy Lab (2026) · The Dynamo and the Computer, Paul David (1990) · Hem-PC reformen, report (2002)