The AI Reset: When to Pause, When to Pivot, and When to Double Down

AI Knowhow: Episode 105


Episode Highlights

  • A recent study from S&P Global Market Intelligence found that nearly half of all AI POCs never make it to production
  • This finding ramps up the stakes on the question of which AI projects to pause, which to steer in a different direction, and which to shutter altogether
  • Find out the leadership mindset you need to navigate the “AI reset”

Remember when ramping up AI POCs was the hallmark of innovation? Of course you do. That was all the way back in the second half of 2024. With a new study from S&P Global Market Intelligence showing that nearly half of all AI proofs of concept never make it to production, however, it’s time to take a hard look at how we decide which AI projects to kill, which to pivot, and which to double down on.

The 426 Project Problem

To kick off our roundtable discussion, David shares a real-world example that puts the problem into stark relief. One executive he recently spoke with claimed to be tracking 426 active AI initiatives.

“It’s a perfect example of what happens when experimentation goes unchecked,” David explains. “At some point, activity becomes the enemy of impact.”

Mohan agrees and connects it to a pattern he sees in many organizations. When teams feel pressure to prove they are keeping up with AI, they often flood the company with small, disconnected projects. While these efforts may spark short-term excitement, they rarely deliver lasting business outcomes.

Instead, David encourages leaders to identify one to three AI initiatives that are tightly linked to the company’s core strategy. These are the projects that move the needle. “Top-down initiatives are where transformation happens,” he says. “That’s where the return on investment lives.”

The message here is not to discourage experimentation but to clarify its role. Once the learning phase is complete, leaders must narrow their focus and invest where they can achieve scale and measurable results.

Navigating the Dip

This leaves leaders in a bit of a conundrum. Keep standing still and the competition will pass you by. Focus on too many AI initiatives and you’ll spread your team far too thin. So what’s a leader to do in this situation?

Every meaningful innovation goes through what Mohan calls “the dip.” Borrowing from Seth Godin’s book of the same name, the dip is the tough middle stretch of a project — too early to celebrate, but too late to quit.

“The key,” Mohan says, “is knowing whether the dip is worth the climb.”

AI projects often reach this stage when the technical challenges get hard, the data becomes messy, or the early wins fade. Leaders who are overly focused on short-term returns may lose confidence and pull back just before breakthroughs happen. Others, stuck in the sunk-cost fallacy, keep investing even when the data clearly shows the project has plateaued.

David notes that this tension is at the heart of leadership in the AI era. “The hardest part,” he says, “is stepping back far enough to make a rational decision in the middle of uncertainty.”

The best leaders create checkpoints to regularly reassess progress against business goals. They invite diverse perspectives into the conversation, looking for signs of learning and adaptation. If a project is still generating new insights, even if the ROI is not yet visible, it may be worth continuing. If it is no longer moving forward, the right choice may be to stop and redirect resources elsewhere.

So, When Do You Pivot?

The conversation ends with a few practical signals leaders can watch for as they decide whether to persevere or pivot:

  • Focus drift: Too many initiatives without clear business outcomes

  • Cultural fatigue: Teams treating AI as busywork instead of innovation

  • Data plateaus: When new inputs no longer generate useful insights

  • Momentum loss: Projects that stall in pilot mode indefinitely

Knowing which projects deserve another round of investment and which should be retired is one of the defining challenges for leaders today. The companies that succeed will not be those that do everything fast, but those that learn to move deliberately, communicate clearly, and focus relentlessly on what works.
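For readers who like to operationalize a checklist, the four signals above could be sketched as a tiny self-assessment helper. This is purely illustrative: the class name, fields, and thresholds below are our own assumptions for the sketch, not anything prescribed in the episode.

```python
from dataclasses import dataclass


@dataclass
class InitiativeCheck:
    """One AI initiative, scored against the episode's four pivot signals."""
    name: str
    tied_to_core_strategy: bool      # False -> "focus drift"
    team_sees_value: bool            # False -> "cultural fatigue"
    new_insights_last_quarter: bool  # False -> "data plateau"
    advanced_past_pilot: bool        # False -> "momentum loss"

    def pivot_signals(self) -> list[str]:
        # Collect whichever of the four warning signs apply.
        signals = []
        if not self.tied_to_core_strategy:
            signals.append("focus drift")
        if not self.team_sees_value:
            signals.append("cultural fatigue")
        if not self.new_insights_last_quarter:
            signals.append("data plateau")
        if not self.advanced_past_pilot:
            signals.append("momentum loss")
        return signals

    def recommendation(self) -> str:
        # Illustrative thresholds: no signals -> invest more;
        # one or two -> reassess at the next checkpoint; more -> wind down.
        n = len(self.pivot_signals())
        if n == 0:
            return "double down"
        if n <= 2:
            return "pause and reassess"
        return "pivot or retire"
```

For example, an initiative that is aligned with strategy and valued by the team, but stuck in pilot mode and no longer yielding new insights, would trip two signals and land on "pause and reassess".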

Watch the Episode

Watch the full episode below, and be sure to subscribe to our YouTube channel.

Listen to the Episode

You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.

Courtney Baker: [00:00:00] Remember the good old days of 2024 when launching AI pilots was all the rage? Turns out not every one of those experiments was a hit. In fact, according to a new study, almost half of AI proof of concepts never make it to production. So how can you tell when to hit pause, when to pivot, and when to double down on your AI initiative?

Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.

As always, I’m joined by Knownwell CEO David DeWolf and Chief Product and Technology Officer Mohan Rao.

We also have a special guest that we’re welcoming back to the show in hopes that our second interview goes more smoothly than the first time around.

But first, David DeWolf has done such a good job breaking down some of the latest AI news that we’ve invited him back one more time.

David DeWolf [00:01:00] joins us again this week to break down the latest in AI news. David, welcome back. You’re a veteran now.

David DeWolf: I must not have failed. Thanks for having me back.

Courtney Baker: Good to have you. Today I wanna talk to you about Accenture. According to the Financial Times, Accenture is reportedly letting go of staff who can’t be retrained for the AI era, even as it continues hiring in AI related roles. What’s the takeaway here?

David DeWolf: You know, Courtney, I want to put this in perspective again, because this is coming off just talking about Walmart, right? Walmart we framed as 2 million employees around the world, 1.6 million in the US alone. Massive employer. Accenture’s not small, but it doesn’t reach that level. It’s about half the size of Walmart’s US workforce.

So just about, just shy of 800,000 people. So let’s now put this layoff in perspective. We’re talking [00:02:00] about 11,000 people now. It’s 11,000 jobs. It’s 11,000 people’s livelihoods. That’s a big deal, but it’s also only one and a half percent of their workforce. And so I think it’s important that we put that in perspective, but I do think it’s an interesting contrast to what we talked about.

When we talked about it, we talked about how did Doug McMillon roll this out at Walmart. It was all about opportunity. It was about how this would impact every single person’s job, and everybody had the opportunity to upskill and to think about their job differently, and that communication was so effective here.

The way Julie announced it for Accenture, the layoff of 11,000 people, you know, it was about certain people who didn’t make the jump, they weren’t able to be trained. There was a readiness issue. And so I think it’s really important for leaders to recognize what is happening and be very deliberate about the communication that they’re sending out.

The last thing you want as a leader [00:03:00] is to look like you’re changing course here and there, right? And so we wanna make sure as leaders that we’re thinking through deliberately. Is our strategy really not to cut headcount? Uh, is it really about opportunity and growth? And what are the success criteria gonna be?

And having a deliberate plan for rolling out these types of changes so that 11,000 people don’t get caught off guard thinking, Hey, I thought this was about all of us at all times. Right? And, and so I think it’s a great message, especially if you look at the reaction that happened, right? There was a lot of, Hey, see, I told you so, right? And especially in a world of professional services where there’s a lot of disruption, there’s a lot of question about the sustainability of the industry and all of those types of things, I think it just can be a warning sign for all of us to get ahead of our strategy, our communication, and make sure that we’re consistent in the way that we are leading through this change. [00:04:00]

I’m not saying Accenture’s done anything wrong, I’m just saying there’s some stark contrast out there right now, uh, that we need to pay attention to and really learn from, because we’re in the early phases of these types of changes rolling out throughout our companies.

Courtney Baker: You know, David, I’m really curious to get your opinion. I know this is a news segment, but I’m curious, how do you think they determined whether or not somebody could be reskilled? It kind of seems really arbitrary, you know, at a company that you would think at large is probably more technically inclined than most, that they’ve got 11,000 people that can’t be retrained.

David DeWolf: Yeah.

David DeWolf: I think if you look at some of the negative feedback that’s happened around this, Courtney, I think that’s exactly the problem, is that it feels arbitrary. It feels inconsistent with what has previously been said because there wasn’t a, Hey, here’s the benchmark of how we’re gonna determine whether or not [00:05:00] people make the jump or not, and now all of a sudden, you know what?

We found certain people can’t make the jump. Here’s how we’ve decided. It’s just kind of an arbitrary number that feels like it’s been put out there without any sort of explanation. And I think void of information, people start to make it up. Right? And so I don’t know how they assessed that and how they decided, um, are there some people, maybe it was simply these are the people that didn’t want to and resisted it.

Right? Maybe it really was a capability thing and they simply didn’t have the skills. I can’t really imagine at an Accenture what those skills may or may not have been. I wouldn’t be surprised if it’s also, you know what, we learned that a bunch of these jobs are redundant and we don’t actually need them anymore.

Um, and so when you’re starting to look at that, and, and especially because I think part of the announcement was around data professionals, I don’t know what that means, right? What was it about data professionals that made it something where they couldn’t make the jump? It feels to me like data professionals above everybody should have been able [00:06:00] to make the jump.

So I just think there’s lots of questions, and again, the takeaway for me is the consistency in strategy, communication, and execution.

Courtney Baker: Yeah. Such a great lesson on, maybe the biggest tech takeaway is communication here. Uh, David, thanks again for joining us.

David DeWolf: Thanks for having me back.

Courtney Baker: When you’re evaluating your AI initiatives, how do you know when it’s time to pour more fuel on the fire and when it’s time to just walk away? I sat down with David and Mohan recently to get their thoughts.

David, Mohan, welcome back. We’re going to be tackling a tough question: how should companies evaluate their AI projects? So, are you two ready? You ready to dive in here?

Let’s do it.

Okay. Everybody listening knows that companies [00:07:00] jumped into AI pilots and experiments a lot over the last two years. But not all of those initiatives are delivering. In fact, many aren’t. According to a study from S&P Global Market Intelligence, the average organization killed off 46% of their AI POCs before they reached production, with cost, data privacy, and security risk as the main reasons why.

So as we look forward to 2026, we, we’ve been in this AI era for a hot minute now. I’m saying hot minute a lot lately. It’s a tough call to, to make: which projects to scale, which to pause, and where to redirect resources. So David, Mohan, let’s, let’s talk a little bit about this AI reset. How should leaders out there look at this decision of when it’s time to [00:08:00] just press pause? Put down the controller. When is it time to turn off the Xbox and go back to the beginning of the game?

David DeWolf: Yeah.

Courtney Baker: I’ve never had an Xbox before, so not sure why I’m using that reference, but you know, Mario Brothers, uh. No, that’s Nintendo, I’m pretty sure. I don’t know. Do you have one of those?

David DeWolf: I have a Switch, so I, full disclosure.

Courtney Baker: Okay. When it’s time to stay up all night and see if they can get to, what’s the bad guy in Mario?

David DeWolf: Bowser.

Courtney Baker: Is that right? Bowser?

David DeWolf: Bowser. Mm.

Courtney Baker: For all at the end.

David DeWolf: Love Mario Kart. Okay, so I, I wanna start by scoping this a little bit. Um, I, I was in a conversation the other day

Courtney Baker: See me shocked.

David DeWolf: The, um, the, the executive I was talking to was bragging about the fact that he was tracking, in his [00:09:00] organization, 426 active AI initiatives going on, and, and I almost fell outta my chair.

Courtney Baker: Wow.

David DeWolf: 426. Yeah,

Courtney Baker: That…

David DeWolf: That was my reaction. So that’s an example of where you need to, like, yeah, focus is saying no. Right? And why are we not seeing ROI? Why are we not seeing success? Well, a lot of us advocated for experimentation two years ago when AI was six months old. Right? The democratization of AI, ChatGPT… and, and I think that has led to this mindset in some organizations that, you know, empowering our people to do all of these experiments and doing these things and fostering, I’m gonna call it busy work, right, is a good thing. And experimentation is good, until you take anything to the extreme, and then it becomes bad. And I, I would say that was an example of the extreme. I would [00:10:00] much rather, yes, give your employees an enterprise license to ChatGPT and empower them with the guardrails to use it.

But that’s not your corporate initiative. And micromanaging their experimentation is not your corporate initiative. Right. We have talked on this podcast about the grassroots, the bottoms up, but we’ve also talked about the top down. And top down is where you drive the transformation. Top down is where you drive the major ROI.

Right, and I think it is essential that you pick one, two, maybe three, depending on the size of your organization, truly transformational imperatives that are linked to business problems you have to solve. And that is what that initiative is. So the first question, before you go to which one do I turn off, which one do I turn on, which one do I double down on, is how many do you have?

Which one do I turn on? Which one do I double down on? Is how many do you have? And are you truly focused on moving the needle? Because if you [00:11:00] want to be focused on moving the needle, you have to focus. And focus is what you say no to, not what you say yes to.

Courtney Baker: I am like so blown away right now by this number that my wheels are spinning. I’m like, did people get, they’re like, Hey, I, I saw the news from Accenture and people are getting laid off ’cause they’re not retrainable. I better get 65 AI tools so that I can prove that I, I can hang in this AI era. That is just…

Mohan Rao: Yeah.

Courtney Baker: Are you, are you baffled here?

Mohan Rao: Totally baffled by that. Um, right. But you know, every, I mean, large AI project, I’m not talking about using tools and…

Courtney Baker: Right?

Mohan Rao: every large AI project goes through a dip. Uh, right. It’s that uncomfortable middle where it’s too early to celebrate and too late to quit sort of thing. So you are like, you’ve got a lot of sunk costs.

The key is knowing whether the dip is worth the climb. I use “dip” because one of my favorite books is, uh, Seth Godin’s The Dip. [00:12:00] Uh, right. And, and the idea about this is that, um, people who succeed are the ones who can tell the difference between a temporary struggle and a structural limit that you’re hitting.

Courtney Baker: Mm.

Mohan Rao: Right. So a structural limit, whether it’s market, technology, whatever it is, the same applies to AI. Uh, right. You, you’ll be in the dip. And the way you kind of get outta the dip in AI, I think, is to understand: are you learning and getting better faster than the cost at which you’re spending, right?

So it is hard to kind of, kind of think of that frame, but that’s exactly what you have to do to know when it’s worth it to keep pushing forward and getting there, versus when do you completely reset, go back to square one, and start over again. Right. But I think every project, every large AI initiative, is gonna go through this, where you’re not getting the business results for all the investments that you’re putting in.

And in fact, I would say that for any project of some complexity, this is gonna be common. The, uh, you [00:13:00] know, for example, what we are building is one of the toughest engineering builds, um, I’ve experienced in my career. Uh, right. So it is hard to do this. And, uh, the question is, when do you know that you are in the dip? When do you get out versus when do you reset? It’s a really good question, Courtney.

When do you get out versus when do you reset? It’s a really good question, Courtney.

Courtney Baker: Do you feel like some leaders, obviously we know when you bring on a new technology like this, there is some inherent risk to you individually, as ultimately somebody is the one bringing this in. And then you, you do the hard work. You make the call, you bring it in, but then you get to the dip.

And then, when you’re in that space, does the risk become even higher? It becomes easier to just say, no, I’m jumping out now, I’m starting over. You know, like how does that, how do you deal with that, just psychologically, as a leader?

Mohan Rao: Yeah, it is, it is so hard, um, especially in large companies. What happens is, when [00:14:00] you are in the dip, suddenly people start disassociating themselves from the project.

Courtney Baker: Hmm.

Mohan Rao: When things are going well, there are gonna be…

Courtney Baker: Like 25 co-sponsors of this. Yeah.

Mohan Rao: So, but yeah, that is really hard. I mean, you know, ultimately that’s what leadership is.

David DeWolf: You are kind of keeping your principles and, uh, your investments and what you believe in, uh, front and center. You’re justifying it to yourself. You are revisiting the assumptions that you might have made. Uh, but ultimately that’s what leadership is.

Courtney Baker: Yeah.

Mohan Rao: Understanding when to keep pushing forward, when to retrench.

David DeWolf: Uh, I mean, I think that’s such an important distinguishing factor here, because I think the tough part is, right, as, as humans, uh, we can easily get stuck in the sunk cost fallacy. Right. Um, which is, you know, it’s really hard to abandon what you’ve been working on, what you’ve already invested in. Right. But sometimes as leaders, we have to make that call.

At the same time, there are a lot more initiatives that [00:15:00] were given up on too early because you didn’t see them through. Right. Especially in the startup world. Right. Then there are those that suffered from going too long, right? And, and so this trade-off, and how do you make that call, how do you back up from the situation and deliberately assess that and make that call, um, I think is a really, really critical piece of leadership.

Totally.

Courtney Baker: It seems important in this AI era, when you have, as a leader, made the decision to proceed forward, you know, that you’re not gonna bail out, that you have to make a very intentional stand for where you’re going, or else you just stay down in that dip forever and then nobody wins.

David DeWolf: Mm-hmm.

Mohan Rao: I think, I think it really comes down to, you know, you can pause a project and go back to the principles and the assumptions of why you started the project, but what you can never pause is the [00:16:00] organization’s ability to learn, uh, from the experiences, right? So when you rewind and come back.

Hopefully you’re just gonna get it across, because you have all the learnings from the, the first attempt. This is gonna be a recurring pattern, with more pauses and, um, things where, uh, you’ve run outta gas on this project and now you need to kind of figure out how to, how to relaunch this. I think, um, this is gonna be a, a recurring theme, because you can see the advantage when you get there to the finish line, but it’s so hard to get to the finish line, uh, with large projects.

Courtney Baker: Well, David, Mohan, uh, thank you for this conversation. And for everybody listening, hopefully this is helpful as you’re wading through your 436 million AI experiments.

I did not say million. But next time I talk to you, I wanna hear three, no more. Hmm.

That’s right. Hopefully this conversation helps you figure out which ones you need to pause, which ones [00:17:00] you need to start from the beginning, and which ones you need to double down on. David, Mohan, thank you as always.

Courtney Baker: Stronger client relationships start with smarter insights. That’s why we created the Commercial Relationship Playbook. A practical guide for leaders who want to retain and expand client relationships by focusing on what really matters. Download your copy at Knownwell dot com slash relationship playbook and start turning relationships into your most reliable growth engine.

Courtney Baker: When ChatGPT first unveiled its voice mode feature last May, we tried conducting an interview with it, just to have a little fun. The results? Frankly, it was anything but fun. It was terrible. We had a very hard time getting ChatGPT to acknowledge an [00:18:00] ongoing controversy around a voice model that sounded suspiciously like Scarlett Johansson.

And the whole thing was really just a train wreck that felt disconnected from reality, frankly. It’s been more than a year now, though, and we know a lot can change in AI very quickly. So today we’re gonna take another shot and see how far ChatGPT has come.

Hey, ChatGPT. Welcome to the show. On this episode, we’re talking about a topic that many executives are wrestling with, especially with 2026 on the horizon. How do you know whether an AI project is worth doubling down on, or whether it’s time to walk away? What are the signals that leaders should be paying attention to?

ChatGPT: That’s such a timely and important question. Uh, as we look ahead to 2026, I think leaders [00:19:00] should definitely pay attention to a few key signals. Uh, for example, clear alignment with business objectives and measurable ROI are crucial. Uh, also the adaptability of the AI model, the quality of the data, and the level of internal expertise all matter.

And of course, it’s important to look at user adoption and feedback to see if the project is really making an impact. It’s definitely a mix of, uh, art and science.

Courtney Baker: You’ve seen thousands of AI initiatives across industries. What separates the projects that make it past pilot mode from the ones that quietly fade out?

ChatGPT: Oh, absolutely. Uh, one of the biggest differentiators is usually strong executive sponsorship and alignment across departments. Projects that succeed often have a clear vision, consistent stakeholder engagement, and, uh, a willingness to iterate based on feedback.

Uh, also having robust data infrastructure and a culture that embraces change definitely helps. [00:20:00] On the flip side, projects that fade often struggle with unclear goals, lack of resources, or limited buy-in from key stakeholders. It’s definitely a mix of the right people, the right resources, and the right mindset.

Courtney Baker: makes total sense.

Okay, so let’s talk red flags. What are some of the early indicators that an AI initiative might be heading for trouble before it’s obvious in the results?

ChatGPT: Yeah, definitely. Uh, some early red flags include things like poor data quality or data silos where the data just isn’t clean or accessible enough.

Uh, another red flag is a lack of clear objectives or changing goals, mid-project, which can cause confusion. Uh, also if there’s low engagement from end users or if stakeholders aren’t aligned, that can be a warning sign. And then of course if the team lacks the necessary expertise or if there’s inadequate resources and support, um, those are definitely signals that things might not go smoothly.

Courtney Baker: Okay. [00:21:00] Looking on the brighter side, when does it make sense to invest even more, when results aren’t yet visible but something underneath suggests there’s real promise?

ChatGPT: Absolutely. Uh, sometimes it makes sense to double down when you see strong early engagement from users, even if the final results aren’t fully visible yet.

Um, another great indicator is when you see incremental improvements in model performance or data quality over time. Um, also if there’s positive feedback from pilot users or if the technology shows potential scalability, that’s a great sign.

Courtney Baker: Okay. So how should leaders balance short-term ROI pressure with the longer-term learning curve that often comes with AI adoption?

ChatGPT: That’s such a great question. Uh, I think the key is setting realistic expectations from the start. Uh, leaders can definitely focus on quick wins and incremental improvements to satisfy [00:22:00] their short-term ROI needs, while also making sure that there’s a clear, long-term strategy. Uh, it helps to communicate that AI adoption is a journey and that the full benefits will unfold over time.

Also, building in flexibility and continuous learning can help balance both short-term gains and long-term growth. It’s all about, uh, keeping that steady, balanced approach.

Courtney Baker: Makes total sense. If you could give one piece of advice to an executive trying to decide whether to persist or pivot with their current AI initiative, what would it be?

ChatGPT: I’d say the most important piece of advice is to really listen to the data and the people involved. Take a step back and evaluate both the quantitative metrics and the qualitative feedback. Um, if the initiative shows potential and there’s a genuine commitment from the team, it’s often worth persisting.

Um, but if you find fundamental misalignments or the underlying assumptions just aren’t holding up, then it might be time to pivot. Uh, [00:23:00] ultimately. It’s about being adaptable and staying focused on the bigger vision.

Courtney Baker: Before we go, really quick, can you remind me about the drama around ChatGPT and Scarlett Johansson?

ChatGPT: Absolutely. The drama mainly revolved around Scarlett Johansson and OpenAI’s launch of ChatGPT’s voice assistant, which was named Sky. Um, there was some controversy because Scarlett Johansson felt that the voice and the persona of the AI assistant sounded a lot like her own voice and character, and she raised concerns about the use of her likeness without permission.

This led to a bit of a public back and forth and some discussions about ethical considerations in AI. Uh, it definitely got a lot of attention in the tech and entertainment world.

Courtney Baker: That is such a great synopsis. Thank you so much for joining us today.

ChatGPT: Of course it was my pleasure. Thanks so much for having me and uh, I’m always here if you need anything else.

Have a great one.

[00:24:00]

Courtney Baker: Thanks as always for listening and watching. Don’t forget to give us a rating on your podcast player of choice, and listen, we’d really appreciate it if you would leave us a review. It really helps more people find this show. At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand.

Hey, Claude, sorry for not having you on the show today, but wanted to get your opinion. Today we’re talking about AI resets. How can leaders know when it’s time to pivot versus persevere?

Cedric: It really comes down to whether you’re learning and making progress or just spinning your wheels. If the data keeps showing the same problems and your efforts aren’t moving the needle, that’s usually a sign it’s time to try something different.

Courtney Baker: Now you are in the know. Thanks as always for listening. We’ll see you next week with more AI applications, discussions, and experts.
