AI and Change Management: The Human Side of AI Change

AI Knowhow: Episode 112


If AI adoption at your company feels slower than the hype suggests, it's not just your imagination.

According to new Gallup data discussed on this week’s AI Knowhow, nearly half of employees have used AI at least once, but only 10% use it daily. Even more striking: almost one in four employees don’t know whether their company is using AI at all.

For business leaders, this gap between ambition and reality should be a wake‑up call. The gap isn’t simply about whether your team has access to AI tools. It’s about whether that access is visible, intentional, and supported.

This week’s episode of AI Knowhow continues our four‑part mini‑series on AI change management by tackling the hardest part head‑on: the unspoken fears and behavioral dynamics that quietly determine whether AI initiatives survive the real world.

The Real Fear Isn’t AI. It’s Being Devalued.

We often hear that employees are “afraid of AI.” But as Knownwell CEO David DeWolf points out, that framing misses the mark.

Very few people are afraid of the technology itself. What they’re afraid of is being devalued.

Behind resistance are deeply human questions:

  • What happens to my role?
  • Do my skills still matter?
  • Am I still valuable here?

This fear isn’t irrational. AI does change workflows. Some skills will matter less over time. Pretending otherwise erodes credibility faster than any failed pilot.

Effective leaders don’t dismiss these concerns or gloss over them with polished reassurance. They acknowledge the coming changes honestly and invite people into the process of shaping what comes next.

Resistance Is a Signal, Not an Obstruction

One of the most common leadership mistakes in AI rollouts is treating resistance as something to overcome or eliminate. Instead, David argues, resistance should be treated as information that feeds your change management process.

Pushback often reveals:

  • Where workflows don’t actually work in practice
  • Where incentives are misaligned
  • Where trust hasn’t been earned yet

When leaders assume negative intent or willful disobedience, they miss valuable signals. When they listen, resistance becomes a diagnostic tool rather than a roadblock.

Why Psychological Safety Comes Before AI Literacy

Knownwell Chief Product & Technology Officer Mohan Rao adds another critical layer: psychological safety isn’t a “nice to have” for AI adoption. It’s a prerequisite. AI systems introduce uncertainty because it’s not always clear how LLMs arrive at the outputs they do. Hiding that uncertainty while pretending AI is just another SaaS rollout breaks trust.

Three leadership imperatives stand out:

1. Make Uncertainty Visible

AI’s probabilistic nature isn’t a bug; it’s inherent to how these systems work. Obscuring it breeds skepticism and quiet disengagement.

2. Put Humans in the Loop

People need to understand how systems work and where judgment is expected. Treating AI like a black box invites a binary choice: blind compliance or silent rejection. Either one is a recipe for disaster in a business world full of nuance.

3. Normalize Disagreement With AI

Psychological safety around judgment must come before AI fluency. Mohan says,

“AI doesn’t fail because people resist. It fails when leaders don’t create the conditions where people feel safe enough to exercise judgment.”

When teams believe it’s them versus the machine, adoption stalls. When judgment is honored, AI becomes an amplifier.

The Pilot Trap: Why AI Value Never Shows Up

For this week’s expert interview, Pete Buer continues his conversation with Tom Davenport, one of the world’s leading thinkers on analytics and AI and the co-author of more than 25 books and 300 articles on data-driven transformation.

Tom highlights a pattern he’s seen repeatedly:

“You don’t get economic value out of AI if you don’t move beyond pilots.”

Organizations that experiment endlessly but never operationalize will continue to fall behind the competition.

Common failure points include:

  • Lack of ownership for scaling successful pilots
  • Over‑indexing on individual productivity instead of enterprise impact
  • Poor data readiness, especially unstructured data
  • Insufficient human review of AI outputs

Perhaps most telling: 93% of data and AI leaders say their biggest obstacles are human—not technical—yet most spending still goes toward technology.

The mismatch is costly.

From Broad and Shallow to Narrow and Deep

The organizations seeing real returns are shifting their approach:

  • Fewer initiatives
  • Clear enterprise‑level priorities
  • Deeper integration into workflows

As Tom describes it, the move is from “broad and shallow” experimentation to “narrow and deep” execution. AI maturity isn’t about how many tools you deploy. It’s about whether the organization knows why it’s using them and what changes as a result.

Watch the Episode

Watch the full episode below, and be sure to subscribe to our YouTube channel.

Listen to the Episode

You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
