Every AI rollout has a moment, usually in the first two weeks, where the team decides quietly whether to make it work or let it fail. Most leaders never see that moment, and by the time they do, the project has already lost.

70% of AI success comes from people and processes, not the technology. (BCG, 10-20-70 principle)

The numbers around abandoned AI projects keep climbing. S&P Global found that 42% of companies scrapped most of their AI initiatives in 2025, up from 17% the year before. The post-mortems almost always blame the technology, or the vendor, or the data, or the integration. The actual cause sits one floor up.

When an AI rollout dies, the cause is usually something nobody in the room is willing to say out loud. The team made a call about what the initiative is really for, and they made it before the kickoff meeting ended.

The second the announcement lands, everyone runs the same math

The instant a leader announces an AI initiative, every person in the room does the same private calculation. Is this going to make my job easier, or is this the start of the conversation that ends with my job? Nobody says it. Most people will not even admit it to themselves. The calculation still happens.

The team smells the company's real intention about AI before anyone says it out loud, and the rollout dies on contact with that smell.

This is why communication plans and training rollouts feel hollow when the underlying intent is replacement. The slides can promise empowerment and augmentation all day. The people in the room are reading something else: tone in the leadership meeting, what got cut from the budget last quarter, which roles stopped backfilling. They reach a conclusion, and then they protect themselves.

Protection looks like cooperation, until you look closely

The quiet sabotage rarely shows up as open resistance. Open resistance is easy to manage. What actually happens is more polite, and much more expensive.

Four signals that the replacement story has taken hold in the room:

None of these signals look like sabotage. Each one is rational self-protection from a person who has decided, correctly or not, that helping this initiative succeed shortens their own runway.

The narrative the company actually holds does the heavy lifting

This is where most change-management playbooks miss the point. The team is not reacting to the rollout plan. They are reacting to the story they believe leadership is privately telling itself about why AI is on the table at all.

If that private story is "we can ship the same output with fewer people," the team will smell it inside a fortnight, regardless of what the all-hands deck says. If the private story is "we can give our best people more leverage and stop losing the next hire to admin work," the team will smell that too, and they will lean in.

The story leadership tells itself in the closed-door meeting is the story the rollout actually runs on. Everything downstream (the training, the comms, the pilot scope) gets read through that lens. Top performers especially are excellent at reading that lens. They are the ones with the most options, the most context, and the most to lose if they read it wrong.

Where the operating model has to shift before any tool gets touched

The work that makes adoption stick happens before the first vendor demo. It is the conversation among the leadership team about what the company is actually buying with this initiative. Capacity for the existing team. Reduction in error rates that have always been treated as the cost of doing business. Speed on the projects that currently get bottlenecked through three people. Headroom for the kind of growth that used to require a hire.

When that conversation has happened, and the answer is honest, the rollout has a chance. The top performers feel it. They start surfacing the edge cases instead of hoarding them. They start asking for more pilot scope instead of less. The behaviour follows the belief, the way it always does.

When that conversation has not happened, no amount of communication discipline will close the gap. The team is not waiting to be informed. They are waiting to be reassured by what they observe.

The uncomfortable question

What would your top three performers say if you asked them, in confidence and with no consequence attached, what they think this AI initiative is really for?

If you can predict their answer and you are comfortable with it, the rollout has the foundation it needs. If you cannot predict their answer, or you can predict it and it is not the one you want, the rollout has a different problem than the one in the project plan. That problem is upstream of any tool, vendor, or training programme, and it gets solved in the same room where the initiative got greenlit in the first place.