The AI worked. That was never the issue. The manufacturing plant had deployed an AI quality control system. Error rates dropped 30%. The technical implementation was clean. The model performed exactly as designed.

Productivity stalled anyway. Employees who had previously taken pride in quality control had been sidelined by a system they didn't understand, weren't involved in deploying, and didn't trust. They disengaged. Not dramatically — no walkouts, no refusals. Quietly. The kind of disengagement that doesn't appear in any system metric until months later, when the expected productivity gains have failed to materialize and nobody can articulate exactly why.

This example, from Sandeep Girotra — Executive Director and CHRO at DCM Shriram — captures the central argument of his 2026 HR forecast: AI implementations will not collapse because of technical deficiency. They will stall because of human resistance — resistance that is predictable, detectable, and almost entirely preventable if HR is paying attention to the right signals.

The Thesis and Why It's Right

Girotra's argument is specific. "AI is reshaping work, but its deepest impact is undeniably human — jobs are evolving, trust is fragile, and employees are anxious about relevance." The anxiety isn't irrational. For employees watching roles compress around them without clear visibility into their own trajectory, anxiety about AI is a rational response to an opaque situation.

Organizations are dividing, Girotra argues, into two categories. Those that are aligning human systems with technology — building clear reskilling pathways, making AI decision-making transparent, embedding behavioral change into leadership evaluation. And those experiencing cultural friction despite technical functionality — where the AI works and the people around it don't, because nobody made the human case for the change.

"People adopt technology when they feel safe, informed and supported." — Sandeep Girotra, Executive Director & CHRO, DCM Shriram

That sentence is doing a lot of work. "Safe" — not just job-secure, but psychologically safe enough to experiment with and fail at using new tools without career consequence. "Informed" — not just trained, but genuinely understanding what the AI does, what it doesn't do, and how the organization intends to use the outputs. "Supported" — with clear reskilling pathways and career visibility in an AI-augmented role structure.

The absence of any one of those three conditions is enough to generate the manufacturing plant outcome: technical success, human stall.

Three Failure Modes HR Owns

Girotra identifies three specific failure modes, each one firmly in HR's domain to prevent:

No reskilling pathway. The employees most at risk of disengaging from AI implementations are those who can't answer the question "what happens to my role?" Not theoretically — specifically: which tasks will the AI take over, which will it augment, and what will I be doing more of as a result? Organizations that deploy AI without clear answers to those questions for their affected workers generate the anxiety that produces resistance. The failure mode is unclear capability pathways and absent reskilling programs.

Leaders who say empowerment and practice control. Girotra identifies a gap that AI makes visible and painful: leaders who claim to be empowering their teams while maintaining command-and-control structures. AI magnifies this disconnect because AI deployment decisions are often top-down — selected, procured, and implemented by leadership, then handed to employees to integrate. When the stated value of AI is "this will make your work more strategic" but the actual experience is "this system now monitors your work," the credibility gap becomes a trust gap. That trust gap is the human failure Girotra is describing.

Credential-based hiring in a capability-based world. The third failure mode is structural rather than cultural. Organizations that continue hiring on credentials — degrees, prior job titles, certification lists — are systematically underweighting the capability that matters most for AI-era work: adaptability. The employees who successfully integrate AI are not necessarily the most credentialed. They're the ones with the cognitive flexibility to work at the human-machine boundary as that boundary continuously moves.

30%
Error reduction achieved through AI quality control at a manufacturing plant — while productivity stalled because employee disengagement offset the technical gains. The model worked. The adoption didn't.

Workforce Confidence as the Real Metric

Girotra's forecast is precise: in 2026, workforce confidence becomes the true AI success metric. Not model accuracy. Not automation rate. Not cost savings per FTE. Workforce confidence — the degree to which employees feel equipped, informed, and valued in a workforce that includes AI.

The implication is significant. If workforce confidence is the metric, then the function responsible for that metric is HR — not IT, not the CTO's office, not the AI deployment team. Workforce confidence is built through reskilling programs, transparent communication about AI decision-making, leadership behaviors that model psychological safety, and career pathways that remain visible through the transition.
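If workforce confidence is to function as a metric rather than a slogan, HR has to measure it somehow. Girotra does not prescribe a formula; the sketch below is purely illustrative, scoring the three dimensions his forecast names (equipped, informed, valued) from a hypothetical 1–5 pulse survey, with equal weighting as an assumption:

```python
from statistics import mean

# Illustrative sketch only -- not Girotra's method. The dimension
# names come from the forecast ("equipped, informed, and valued");
# the 1-5 survey scale and equal-weight average are assumptions.

def confidence_score(responses):
    """Average the three dimension ratings per respondent,
    then average across respondents. Returns a 1-5 score."""
    per_person = [
        mean([r["equipped"], r["informed"], r["valued"]])
        for r in responses
    ]
    return round(mean(per_person), 2)

# Hypothetical pulse-survey responses from three employees.
survey = [
    {"equipped": 4, "informed": 3, "valued": 5},
    {"equipped": 2, "informed": 2, "valued": 3},
    {"equipped": 5, "informed": 4, "valued": 4},
]

print(confidence_score(survey))  # -> 3.56
```

Tracked quarterly and segmented by team, even a crude index like this would have surfaced the manufacturing plant's quiet disengagement months before the productivity numbers did.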

Spotify's CHRO Anna Lundström — who listed "getting AI-ready" as her first priority — took a specific approach: expanding Hack Week to all employees for AI experimentation and launching an AI Momentum Program to track adoption across departments. The focus wasn't on mandating AI usage, but on building the conditions under which employees would choose to integrate AI into how they work.

That's the distinction Girotra is drawing. The organizations that will extract value from their AI investments are those where employees adopted the technology because they felt safe, informed, and supported — not those where adoption was highest on paper because it was mandated and measured.

What This Means for the Flight Risk Equation

There is a retention dimension to this that most AI deployment conversations don't reach. The employees who are best positioned to thrive in an AI-augmented workplace — the early adopters, the integrators, the workflow builders — have skills the market is now pricing at a significant premium. They are also the employees most likely to leave if they don't feel the organization is keeping pace with their ambitions or providing the right environment for AI-era work.

Girotra's forecast, read in full, is not pessimistic. It's a roadmap. AI will fail humanly when HR isn't managing the human dimension of deployment. AI will succeed where HR builds the trust infrastructure — the safety, the information, the support — that enables employees to adopt, integrate, and ultimately extend the capabilities the technology provides.

The manufacturing plant's 30% error reduction was real. The stalled productivity was also real. Both outcomes came from the same deployment. The difference was human. It always will be.