Making the Leap from AI POC to AI Product

AI Knowhow: Episode 94


AI Knowhow Episode 94 Overview

  • POC ≠ Product: the bar for testing out something new with AI is low; the bar for adoption is workflow-level usefulness.
  • Treat AI as a product: think real users, real stakes, and real incentives to change behaviors.
  • The product IS the workflow: integration into the day-to-day drumbeat of business beats revolutionary features.
  • People > tech: AI isn’t the “easy button.” Change management and incentives make or break launch.
  • Be “BORING” on purpose: aim for predictable value and reliability before you chase flash, says Jim Garrity of SingleStone.

Getting an AI proof of concept off the ground? That’s relatively easy. Turning that initial spark into a product people actually use, love, and adopt is where the real work happens. In this episode, the team gets practical about what it takes to move beyond demos or lightweight POCs: choosing the right problems, treating AI like a product (not a project), designing for workflow integration, and managing the human side of change. Jim Garrity, the Chief Client Officer of SingleStone Consulting, also joins us to explain why being a little “boring” about AI may not be a bad thing.

Roundtable Highlights

1) Stop thinking in terms of demos. Start shipping products.

POCs are cheap to start and easy to ignore. What separates teams that graduate to product? They pick a specific job-to-be-done, identify who actually benefits, and design for daily workflow. That framing raises the stakes and creates natural pull from the business.

If you’ve already seen your fair share of POCs that generate a lot of excitement up front, only to quickly fizzle out, then try this:

  • Start with a high-friction task owned by a named role.
  • Define “productive use” (e.g., replaced a manual step, reduced decision latency).
  • Instrument it: log usage, completion rates, and errors/fallbacks. (A minimal sketch follows.)
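
A minimal sketch of what that instrumentation can look like in practice. The event names, fields, and file destination are illustrative, not a prescribed schema; in production you’d point this at your analytics pipeline:

```python
import json
import time
import uuid
from pathlib import Path

# Illustrative destination; swap for your analytics/warehouse pipeline.
LOG_PATH = Path("ai_usage_events.jsonl")

def log_event(user_role: str, event: str, **fields) -> None:
    """Append one usage event as a JSON line: who used the feature,
    what happened (accepted / fell back / errored), and when."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_role": user_role,  # the named role that owns the task
        "event": event,          # e.g. "ai_suggestion_accepted"
        **fields,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: an underwriter accepts an AI-drafted summary,
# then a later request falls back to the manual path.
log_event("underwriter", "ai_suggestion_accepted", latency_ms=420)
log_event("underwriter", "fallback_to_manual", reason="low_confidence")
```

Completion and fallback rates then fall out of a simple count over these events, which is exactly the evidence a POC needs in order to graduate.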

2) The adoption triangle: problem × product mindset × incentives

Teams stall when any one of these core pieces is missing. Picking the right problem for your context matters as much as adopting a product mindset (deliver value quickly, learn fast) and aligning incentives (leaders get value from adoption, not just launch).

Signals you’re on track:

  • You have a one‑sentence problem statement tied to a business metric.
  • You can name the user and the moment of use.
  • Your rollout plan changes someone’s process on Day 1 (training, comms, SOP updates).

3) The product is the workflow

Model quality and a slick UI won’t save you if the AI product you’re imagining doesn’t fit how people actually work. The team underscores that workflow integration is the hardest part—and the real product.

Design checkpoints:

  • Where in the flow does AI enter? Who hands off to whom?
  • What’s the fallback when the model is wrong or uncertain? (See the sketch after this list.)
  • What telemetry proves it’s helping (time saved, accuracy, satisfaction)?
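
One way to make that fallback checkpoint concrete. This is a sketch under assumptions: the `model` and `escalate` callables and the 0.8 threshold are invented stand-ins, not any particular vendor’s API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelResult:
    answer: str
    confidence: float  # assumed to be normalized to [0, 1]

def answer_with_fallback(
    question: str,
    model: Callable[[str], ModelResult],
    escalate: Callable[[str], str],
    threshold: float = 0.8,  # illustrative cutoff; tune to your risk tolerance
) -> str:
    """Return the model's answer only when it clears the confidence bar;
    otherwise route the question to a human or a safer default path."""
    result = model(question)
    if result.confidence >= threshold:
        return result.answer
    # Below threshold: hand off rather than guess. This handoff is the
    # part of the workflow design worth specifying explicitly.
    return escalate(question)

# Stand-in callables to show the flow:
mock_model = lambda q: ModelResult(answer="Approve", confidence=0.65)
to_human = lambda q: f"[routed to reviewer] {q}"
print(answer_with_fallback("Renew policy 123?", mock_model, to_human))
```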

Expert Interview: Jim Garrity of SingleStone

Jim Garrity, Chief Client Officer at SingleStone Consulting, joins us to discuss their deliberately BORING framework for implementing AI and how they put it into practice with a number of specialty insurers. These are companies that have to underwrite risk on products with no real direct comparisons, so they need to consume vast quantities of information to predict risk accurately. Because AI can absorb far more information far faster than humans can, this is exactly the kind of work it is suited for.

Jim recommends starting with dependable, readily automatable work that compounds trust and ROI. In a hype‑heavy market, “boring is good” means:

  • Favor reliability over flash. Pick use cases with stable data, unambiguous outcomes, and clear owners.
  • Design for human + machine. AI augments judgment and prep, not just clicks.
  • Keep value measurable and repeatable. Leaders fund what shows up in KPIs and customer experience.

Why it resonates for services leaders: The path from idea → income is shorter when you solve frequent pains in client delivery, risk, and more.

In the News: AI flags hidden heart disease with 77% accuracy

For our In the News segment, Pete Buer and Courtney Baker unpack a recent story on an AI tool called EchoNext, developed by researchers at Columbia University and NewYork‑Presbyterian. It uses ECGs to triage who should get an echocardiogram, reportedly detecting “hidden” heart disease 77% of the time and far outperforming human cardiologists. The executive takeaway beyond healthcare? Early‑detection patterns like this can be far more broadly applicable (financial risk, supply‑chain bottlenecks, employee burnout) when you have signals and workflows ready to act.
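
To make the pattern concrete outside healthcare, here is a toy sketch of the generic shape: watch a signal, flag it when it drifts past a rolling baseline, and hand the flag to a workflow that can act. The signal, window, and cutoff below are invented for illustration:

```python
import statistics

def early_warnings(history, window=5, z_cutoff=2.0):
    """Flag points sitting more than z_cutoff standard deviations above
    a rolling baseline - a crude stand-in for an early-detection model."""
    flags = []
    for i in range(window, len(history)):
        baseline = history[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid divide-by-zero
        if (history[i] - mean) / stdev > z_cutoff:
            flags.append(i)
    return flags

# Hypothetical burnout signal: weekly overtime hours for a team.
overtime = [4, 5, 4, 6, 5, 5, 14, 15]
print(early_warnings(overtime))  # -> [6, 7]: weeks worth investigating
```

The detection is the easy half; the episode’s framing is that a flag only matters if someone owns the workflow that responds to it.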

Watch the Episode

Watch the full episode below, and be sure to subscribe to our YouTube channel.

Listen to the Episode

You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.

Show Notes

Getting started with an AI proof of concept has never been easier. Actually sticking with a proof of concept and getting it to the point where you’re creating something new and valuable that people are actually using, and hopefully loving?

That’s the hard part. But never fear, that’s what we’re gonna help you out with today. Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell.

Helping you re-imagine your business in the AI era. As always, I’m joined by Knownwell CEO, David DeWolf, Chief Product and Technology Officer, Mohan Rao, and NordLite CEO, Pete Buer.

We also have a discussion with Jim Garrity about why being boring can have its advantages when figuring out when and where to apply AI in your business.

But first, let’s welcome Pete Buer back to break down one of the latest big stories in AI and what it may mean for you.

Pete, a new AI tool (man, how many times have I said that?) was featured in the New York Post recently, claiming it can detect hidden heart disease with 77% accuracy. This sounds really interesting. What’s your takeaway here, Pete?

I know you and I love these kinds of conversations, AI making the world a better place.

Researchers at Columbia University and New York Presbyterian Hospital have collaborated on an AI model that analyzes ECG readings to detect heart conditions that would otherwise typically go unnoticed by human doctors.

The AI was able to catch hidden heart issues nearly eight out of ten times, as you said, 77% of the time. Pretty cool.

The story is cool in and of itself, but it also, I think, points to possibilities in other business settings across all of industry: using AI to explore all the different ways businesses can take advantage of early detection.

So in financial services, for instance, spot hidden risks in your portfolios. In supply chain, identify and avoid logistics bottlenecks before they happen.

In HR, spot troubles like burnout or other forms of disengagement before they fester and turn into attrition. That list can go on and on and on.

As a matter of fact, I think this is a great topic for leaders who are listening to run sessions with their ELT or to build a leadership off-site around. Whatever type of business you run, there’s an opportunity.

Where could early detection and the resolution of problems before they occur save us hundreds of hours or millions in operational cost, give us significant competitive advantage in the marketplace, or spare our employees heartache?

This seems to be a tool to enable leadership at its best. I would love to see companies chasing this down.

Really, really interesting. I hope we see more of these stories in the health realm, but also ones that professional services firms can learn from as well. Pete, thank you as always.

Thank you, Courtney.

Okay, let’s face it, it’s never been easier to spin up an AI proof of concept or try out a new tool.

So how can you take whatever shiny new toy you’re playing with and turn it into something that others see the value and utility of? I talked with David and Mohan about bridging the gap recently. David, Mohan, I think the story will resonate with you.

I don’t know, maybe, I’m going to lay this out.

Maybe you could tell me if you have any been-there-done-that stories, but a lot of times with pilots, or when you’re evaluating new platforms, you get going, there’s lots of excitement, but it never really makes it into operations.

It kind of dies a slow, painful death. Right now, obviously, lots of leadership teams are looking at different AI tools, platforms, lots of different things they’re evaluating.

Today, I want to dig into what separates the teams that excel at taking AI from an idea to actually operationalizing those things into their day-to-day business.

First of all, before I get started, do you have any horror stories, like a big thing that you like tried, thought it was going to be great and it just died?

You can just see how many AI startups were founded in the last several years and how many have survived, right? I don’t know what the numbers are, but I’d bet that of the AI startups founded in 2021, probably eight out of ten no longer exist.

Right? So, you can look at it that way.

The other examples I think of, Mohan, are some of the SaaS platforms, and it’s not necessarily the platforms themselves that fail but the operationalizing piece of it.

I have seen multiple failures of project management-like tools, like the Asanas of the world, the Mondays of the world. Everybody thinks, oh yeah, here we go. And there’s like 10 days of energy, and then the whole thing just falls apart.

Guys, yes, that’s a great one.

That’s the kind that I think of when I think about major failures.

Yeah, so I think today, I’d love to help not have that.

I mean, everybody has probably… They’ve got one story like that, where it’s like everybody was real excited until…

They had to do something or enter some data, right?

Yeah, exactly.

The data hygiene part that always messes these things up.

Oh, it’s so true. So help us out. How do we make the leap with these AI evaluations to actually implementing them in our businesses that drive real business results?

Yeah, I think there is an art to this, right?

So we all know that traditionally a majority of the IT projects in an enterprise fail or run over budget, right? So I always think of it as being in the right details, right?

There are lots of details, but you’ve got to choose the right details to be in. I think it starts off with solving the right problem. If the problem doesn’t happen to be important enough, at some point you’re going to lose steam on it, right?

But if it’s a compelling problem, you’re going to keep going, right? It really starts from there.

Then from there, there are many other things, right? So you got to build for production. You got to have a plan because productionizing is very hard, right?

So prototypes are a lot easier. So just thinking of data pipelines, thinking of the data itself, good data right from the start, and having a plan for that is super important.
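To make that concrete, a bare-minimum version of the “good data” gate Mohan is describing might look like the sketch below; the required fields are invented placeholders:

```python
def validate_records(records, required_fields=("account_id", "submitted_at", "amount")):
    """Split incoming records into clean rows and rejects before they
    ever reach the model - a bare-minimum data-quality gate."""
    clean, rejected = [], []
    for rec in records:
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            rejected.append({"record": rec, "missing": missing})
        else:
            clean.append(rec)
    return clean, rejected

# Hypothetical records: one complete, one missing a timestamp.
clean, rejected = validate_records([
    {"account_id": "A-1", "submitted_at": "2025-01-02", "amount": 1200},
    {"account_id": "A-2", "submitted_at": "", "amount": 900},
])
print(len(clean), len(rejected))  # -> 1 1
```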

Then there’s always the thing about having a cross-functional team that’s going to implement it, right? So otherwise, it just becomes a technology project that stops at the water’s edge and nobody’s using it.

Then there are all of the rollout considerations, which are super important and which we work with our customers on, right? And you have a lot of experience in that; I’d love to hear about it.

So just kind of thinking of at least these three factors, there are a couple more I can think of that are really important for turning this into practical value.

So when I hear you, stealing from Simon Sinek, we’ve got to start with why. The why has got to be good, but it doesn’t solve everything. It still could fail.

David, what do you think?

Well, you know, what Mohan made me think of was just the word change management, right? Change management is the problem with so many of these, and what he was touching on was some of the areas of change. It’s funny you bring up Simon.

I’ll pivot to another thought leader. A lot of folks know Patrick Lencioni for his pyramid of trust for building cohesive teams.

He actually has a new model that he calls the working genius, which is about different people’s personalities and strengths and how they bring those to work.

I think it’s actually a really interesting frame for thinking about change management too, because it starts with some people have this genius of wonder, right?

They just ponder the world and they can think of things that need to be solved and need to be fixed.

The next type of folk have the genius of invention, and this is actually solving the problem and looking at the pain and the problem and figuring out the how.

The third type of person has a genius of discernment, and this is actually hearing an idea and being able to poke holes in it, understand it and probe into it and help perfect it. Then there’s galvanizing.

This is rallying people to drive that change, right? Getting people motivated, getting the people moving. There is enablement, which is really creating all the support infrastructure and the processes and the systems so that you can execute.

And then there’s tenacity of just getting the work done and just driving it to completion. When I think of good change management and I think of successful implementations, I think you need every single one of those geniuses.

And so whether you’re talking about the team you assemble, which should have some representation of all of that, or you’re talking about the different focal points of your project plan, I think you can use that model to ask: are we really considering all aspects of what it’s going to take to do successful change management? And where I’ve seen these rollouts be unsuccessful is where you’re missing one or more of those geniuses.

And it just never comes to be, because maybe you don’t have any galvanization in there, and so there’s just no momentum created. Maybe you have no tenacity, there’s nobody driving it to completion, right?

Maybe you never thought about the problem, and so you just have a solution for the solution’s sake. Whatever it is, if you’re missing a component, it’s going to stall, it’s going to fail.

And you would say people with the galvanizing genius are, like, the top? The best people?

No, thank you for that. There is no best, Courtney, come on.

I know, I know.

All jokes aside, really interesting to think through.

I mean, I don’t think we think about that very often when we are rolling out something like this: the personalities it really takes to get something all the way from idea, through those early stages of excitement and evaluation, to it changing the way people work day in and day out. And that’s when the real gold happens, but that is hard. It is hard. I mean, I think we all acknowledge that it’s difficult.

Yeah.

Let me bring in another famous author here who wrote on product mindset, right? You have to treat these AI projects as a product, that they have real users and there are real stakes in using it. Right?

So that’s when, you know, there’s a built-in incentive to make this successful because it’s going to be useful for somebody. Right? So treating it as a product, aligning the incentives in an organization, and then solving just the right problem.

To me, kind of, that forms a nice triangle to think about this.

We’re talking broadly about change management here with software and everything. But I think one of the risks with AI is we often think of AI as the easy button, right? It does so much for us autonomously, right?

This agentic world is starting to appear and we’re starting to experiment more. We have these huge expectations. What we forget is that those expectations, in many cases, run directly counter to the way we’ve always done things.

And so, while the technology may be there to be able to use it to accomplish things in new ways that totally change the world and are easy, actually integrating that with people and getting our behavior to change, I think, is an even bigger lift than the technology itself.

Yeah, I completely agree with that.

If you think about what is the actual product here, right? Is it the model? Is it the UI?

So what is the actual product? I’d say that the workflow integration, where the tool is working with the humans, with your team members, that is the actual product. The workflow integration is the hardest part of the whole equation here.

If you can focus on that right from the beginning, it’s not just going to be a POC; it’s going to go into production.

Yeah. The worst-case scenario is your company lays out all this money, you’re in a year-long contract, and no one is using the platform. I mean, that’s worst-case scenario.

I think exactly what you’re saying, Mohan. In some ways, obviously, you’ve got to start with why, but you also have to ask what end result we want to happen and work our way backwards from that.

David, Mohan, really great conversation, hopefully for everybody listening. Really some helpful things to think about as they roll out new technology with their teams.

Go make it happen.

Awesome. We talk a lot on this show about how you can use AI in your professional services business. Do you want the playbook for scaling and growing your service company in this AI era?

Good news. You can download our brand-new white paper on AI-powered strategies for scaling professional services. Grab it now at knownwell.com/scalingwhitepaper.

Jim Garrity is the Chief Client Officer at SingleStone, where he and his team work with companies in financial services, insurance, the public sector, and other industries to develop strategic, data-driven AI solutions.

He sat down with Pete Buer recently to talk about SingleStone’s Boring Framework for AI and more.

So Jim, welcome to the AI Knowhow podcast. We’re so glad to have you on board.

Thanks, Pete.

Great to be here.

For a little bit of context so that we can get into it, could you provide a little bit of background on SingleStone, your role, and then, to the theme of the podcast, where AI fits in?

We are SingleStone. We’re a human experience and technology company that focuses on people. We study people and we respond with the best architecture, technology, or data solution.

And that takes shape in a few different ways. But we’ve been around a while, 27 years of consulting, based in Richmond, Virginia. My role, I’m a Chief Client Officer, so I work with our customers.

I love being with our customers and figuring out what the next problem to solve is.

And I kind of grew up through Agile project delivery, listening to needs, understanding what problems clients are really trying to solve. I thought of myself as kind of an activist scrum master, facilitating and servant-leading on teams, but then always trying to really think broadly about the business purpose and outcome and how it affects people on the other side, the teams that are using the software and building the software, and then connecting all those dots. And so that led me to customer relationship management and then moved me along to a business development role. So I think of myself as a business analyst who gets to sit with executives and think about what’s next.

Well, so we at Knownwell love ourselves a good framework. But I understand that yours is boring, which, of course, intrigued us. So tell us about it.

Tell us about the boring framework and how it works.

Yeah, great question. You know, boring is good. And that’s what we kind of stumbled upon amid the hype curve of AI. We’re thinking about AI a ton; that’s SingleStone.

We’re real progressive in the way we work, the way we think, the way we help our clients think.

But as people get into the AI conversation, there’s so much fear, uncertainty, and doubt, and concern about what does this mean for me, what does this mean for the future?

I watched The Matrix the other day with my kids, and, you know, that was made a lot of years ago, and they were kind of anticipating a world where AI takes over. So rightly so, right? It’s a scary proposition.

But what the hype curve has done for us is open up this reality that, hey, fairly rudimentary automation that maybe we could have done a bunch of years ago is now widely on the table, and everybody is expecting it to be true, expecting it to be there, expecting it to just work. And so consumer expectations have changed again. The speed of change has changed again. I think we’re in the most uncertain era we’ve ever been in.

So what can we anchor back to, I think, is this idea: can we reduce toil in our business by going after the boring stuff? And so what does boring mean? Well, it’s an acrostic.

Business valuable, Operationally important, Repetitive, Integral, Navigational, Growth enabling. Integral meaning kind of core to the business, right? It needs to be close to the core. Navigational, this is my favorite one.

When we’re introducing AI, there’s all this concern about hallucination and what if it’s wrong? Our expectation for humans being right versus our expectation for machines being right: I don’t think there’s parity there.

But this idea that if the machine can be asked to be directionally correct and help us be better, 68% rather than 63%, there’s a lot of latitude there to pick up the stuff we can be confident in and let the stuff where we’re less confident fade away. So navigational, like directional. And then lastly, growth enabling. Is it actually at a leverage point in your business?

And if it is, then that’s the kind of use case we should go after. So we want boring things and we think there’s a lot of money and a lot of value for customers in the boring stuff.
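
One way to put the acrostic to work is as a quick scoring rubric for candidate use cases. The 1-5 scale and equal weighting below are our own illustration, not SingleStone’s tooling:

```python
BORING_CRITERIA = [
    "business_valuable",
    "operationally_important",
    "repetitive",
    "integral",         # close to the core of the business
    "navigational",     # directionally correct is good enough
    "growth_enabling",  # sits at a leverage point
]

def boring_score(ratings: dict) -> float:
    """Average a 1-5 rating across the six BORING criteria;
    unrated criteria default to the lowest score."""
    return sum(ratings.get(c, 1) for c in BORING_CRITERIA) / len(BORING_CRITERIA)

# Hypothetical use case: email ingest for insurance submissions.
use_case = {
    "business_valuable": 5, "operationally_important": 4, "repetitive": 5,
    "integral": 4, "navigational": 5, "growth_enabling": 3,
}
print(f"{boring_score(use_case):.1f} / 5")  # -> 4.3 / 5
```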

I get it conceptually at the frame level. Can you make it a little more sort of real with a couple of examples of how you’ve used the framework with clients and what conclusions they’ve come to?

Yeah. Well, I spend a lot of my time in the insurance space, thinking about insurance. And Richmond is uniquely situated; they call it InsurTech’s Fertile Crescent for excess and surplus insurance, or some such mouthful.

I think there’s a better way to say that. But some of the weird insurances have a big presence here. And so stuff that as consumers we may never have heard of.

But there’s all this insurance being written on what they call the non-admitted market, where there are few rules and the government doesn’t have a lot of involvement, but somebody’s bearing the risk for these complex things.

Markel’s here in town, James River’s here in town, Richmond National’s here in town, Kinsale’s here in town, market leaders in the specialty and E&S zone. And a lot of those submissions, when somebody’s like, I need insurance, right?

For you, as a consumer, you go online and say, I need homeowners insurance and it’s fairly homogenous, how big’s your house, whatever. For these, it’s like we need to know a lot about the risk. What are you actually asking us to underwrite?

And so there’s a lot of back and forth, the submissions come in as like bundles of information from another business, it’s kind of a B2B2B to C maybe or B2B2B2B kind of transfer.

And so this big dump of information gets handed off and you’re asked as an underwriter in a business to consume all that, make sense out of it, and then write the risk.

Meanwhile, that market’s growing, and so there’s a lot of movement there, a lot more risks coming through. And so there’s a lot of toil in that work, and speed matters: how quickly they can turn around a quote matters a lot.

And so a lot of quotes fall on the floor, they don’t actually ever get responded to. The fastest in the market is responding to everything, but only binding a small proportion of what comes across.

So there’s a lot there and the opportunity for AI is just the interpreter thing. Can I interpret this bundle of information? So that a human doesn’t have to be just trying to interpret it.

I don’t know, I mean, all the quality metrics you might find in those businesses say like they do it really well. I just don’t believe it. How many things, how many, and I don’t have facts on this, but like how much information just slips through?

Like oops, we missed this thing. My perspective on this thing was different from that underwriter’s perspective or that triage agent’s perspective.

And so like the actual info that gets kind of crystallized is not homogenous, whatever, and then it’s also slow.

And so, entering in there with a partner of ours, we’ve looked at an opportunity to help play in that zone with a series of little bots that can do email ingest and just kind of suck that information out of email, that can structure that information in a meaningful way, and that can then start to find insights in that submission flow. What are the different kinds of submissions that are coming in? And are we choosing not to write certain ones?

And so, kind of a series of little jobs that these AI bots can kind of run. And it’s not the coolest work in the world, but it’s core to the business.

If you ask where the money gets made in insurance, it’s with the underwriters, on the underwriter’s desk.

And so this is a zone where we think there’s a ripe opportunity and we’ve gotten to go and play with some email ingests and sucking in files and just making sense out of them.

With some AI, some traditional techniques, some straight up brute force software engineering, but the appetite is so real that we’ve been able to run right after the boring stuff.
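
A stripped-down sketch of the ingest-then-structure step Jim describes, using Python’s standard email library; the extracted fields are invented stand-ins for whatever an E&S submission actually contains:

```python
import email
from email import policy

def ingest_submission(raw_message: bytes) -> dict:
    """Pull basic structure out of a submission email: sender, subject,
    body text, and attachment names for downstream extraction."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    body = msg.get_body(preferencelist=("plain",))
    return {
        "from": msg["From"],
        "subject": msg["Subject"],
        "body": body.get_content() if body else "",
        "attachments": [
            part.get_filename()
            for part in msg.iter_attachments()
            if part.get_filename()
        ],
    }

# Hypothetical submission; downstream, an extraction model (or brute-force
# parsing) would turn this into structured risk fields for the underwriter.
raw = b"From: broker@example.com\r\nSubject: GL quote request\r\n\r\nInsured: Acme Rigging"
print(ingest_submission(raw)["subject"])  # -> GL quote request
```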

That’s cool. And it would be probably superficial to interpret the impact as just speed and efficiency, because it sounds like the result also is insight and better decisions, right?

Yeah, right. So the ultimate play is growth.

Right.

And growth comes in a couple of ways, right? The cost savings, in insurance, is really just not the play. Like, you can start an insurance company right now with a smart person, some capital, and an Excel spreadsheet.

Like, it’s 2025 and you can still run that play. What you can’t do is intelligently grow. You can’t capture new markets.

You can’t quickly test growth hypotheses. You know, figure out new partnerships that maybe are a little bit less obvious.

And so those are the things that this intelligence will help them do and just create kind of these feedback loops of like business development. And maybe we bind six out of 10, as opposed to binding two out of 10. Like, what a world that would be.

So there’s direct growth, and then there’s kind of indirect growth.

Let’s shift gears a little bit and just talk about the services business in general. From some of your prep conversations with the team, I understand you’ve got some strong views on client retention.

There was a reference that you made to the awful moment where revenue goes from millions to zero. What makes retention so hard to anticipate in professional services consulting?

It’s really a good question. For us, I think it’s been two main things. I think it’s a little bit by design.

It’s that we are a capacity play for people, as a professional services capabilities organization. We aren’t close enough to the core. We are an augmentation, an extension of them.

So if you think about when they need to scale back, tighten the wallet, or hold up or pause: where do they go first? They’re going to say, hey, external partners, pause, and then, hey, internal partners, pause.

So they’re going to retain their people first. So kind of by design, we sometimes are staff augmentation, we are an extension of them, and we know we’re the first to go. Trouble is, you’re the first to go.

So you get the indication when you get the phone call saying, hey, we’ve got to pause. So that’s one. The other is that we’re a fairly small organization.

We just don’t do that many deals. We’re about 75 to 100 people; we ebb and flow a little bit in our scale.

And so the total number of deals per year is just not… Like it’s in the hundreds of deals. It’s not in the thousands or the tens of thousands of deals.

And so each deal is very material to the total revenue for the year.

And so how we are situated, not our situation, but how we are situated in an account, is that we’re working with one particular buyer at a Fortune 500 company, one line-of-business head or somebody. And we don’t have…

If we’re in that situation, we’re always so thankful to be there and have this great opportunity to be there. But landing and expanding doesn’t happen by accident.

It’s not like we magically end up with three to five projects across multiple lines of business in a diversified kind of way. And so we sometimes will have one buyer at a big account and then something changes or that project ends.

I mean, maybe it’s not even like there’s like an externality. It’s like that project ends. We do the next phase of it.

They’re like great. Like the other day, we got a call that said, Hey, great project. They were like, Jim, great project.

We are going to call it a win. We need that feather in our cap. We’re taking it to the board.

Like, well done. And we’re not going to do anything else with this for a while. Like, OK.

That’s a great version of “we’re done here for now.” But it’s the same result for a professional services flow business.

It’s like, oh, like, what’s next?

What’s the next thing?

And we don’t, we don’t want to just entrench and be like the lock-in partner. But we know these places have tons of humans, tons of legacy technology, tons of challenges, tons of opportunity. And so we know that there’s value to us being there.

And so when we end up single threaded, one and done or three and done or five and done, it’s like, ugh. And it’s often that single-threadedness that gets us where we don’t have enough kind of parallel tracks going on.

And are there lessons from your experience for other professional services leaders as they think about looking for indicators or taking operational approaches to maintain retention and head attrition off at the pass?

Yeah. I think one lesson is just never assume that your name is going to just… Never assume you’re famous, right?

Like, we’re just all… We just are… Nobody’s famous, right?

Like, these companies are huge and vast.

And I think that’s what I keep reinforcing with my own team and my own self constantly: if I think that there are no more places to hunt, or no more revenue to be generated, or no more value to be added at a Fortune 1000 account, I’m insane. I’m absolutely insane to believe that. And so, I’m not just insane, but I’m also wrong. And so, if I’m wrong, okay, where is that?

And who is it? And my name isn’t going to just get referred by accident. Like, it takes real work.

And I think real asks. We ask for warm introductions all the time. All the time.

Like, not like when a project goes well, but like, all the time. But intelligently, hey, I noticed that, you know, Dominic is doing this initiative over here, and Dominic seems like a kind of guy who, you know, makes waves wherever he goes.

You know, Dominic, you know, Joanna, you know, we helped you, can you, you know, introduce me?

Can the three of us get together? It’s just that the very referral-based, relational, warm-intro request is, like, the bread and butter of how we nurture and grow.

We used to laugh at ourselves, too, thinking that our strategy of, you know, our client leaves their company, goes to another company, and we end up going to work with them, that that was, like, a bad strategy.

It was like, no, it can’t be your only strategy, but it’s, like, it’s certainly part of the strategy. If that trust is real, like, that’s very much, that kind of continuity is also very valuable.

And so, the lesson learned, I think, is just, like, don’t, you can’t believe that anything just magically happens, and it takes real work, but also people know what value looks like, people know what it feels like to work with good people.

And so, like, whenever I ask for a referral, most of the time they would say, yeah, sure, help me do it; let’s make that introduction.

Insane and wrong, like, talk about the wrong or the bad cell in the four-square to be in. How about, is AI our friend in this process?

Is AI our friend? You mean in terms of, like, interpreting, like, anticipating churn? Can we see when…

Anticipating churn, identifying growth opportunities, yeah.

Yeah, I mean, I think it totally is.

I think, are we doing anything like fancy pants at SingleStone to figure that out? Not really.

We’re working with a couple of customers on it who have a significantly larger volume and some hypotheses, kind of bringing it to us as a data problem, saying: we have tons of data, and we think there’s signal in the noise. Can you help us find it? We have some hypotheses of where to look; can we get some predictive analytics here? At our smaller volume of deals, the question is, what is our leading indicator that can signal things?

I know we’re getting close on time, so I’ve got one more question for you.

You know, our platform crawls through structured and unstructured data inside the organization and in the news, and works backwards from an understanding of how the business runs and what your historical retention and growth drivers have been, to draw inference and proactively identify either risks or growth opportunities and feed that to your client management team on a daily (it’s even faster than daily now) basis. So they kind of always have at their fingertips the very best information that’s available.

What I love about that description, and you hit on it with the insurance example earlier, is that it’s not just finding the downside uncertainty of “these guys might churn.”

Rather, it’s finding the upside uncertainty of, hey, there might be more here based on the same signals, based on the same kinds of movements.

Look into the crystal ball and tell me, how is AI going to reshape consulting and client retention, and what should companies do to get their acts together?

Yeah. I mean, it’s going to reshape everything.

No matter where people are, wherever any listeners are on your comfort level with it, I think it’s going to reshape societies, it’s going to reshape constructs, it’s going to reshape business, it’s going to reshape how we do trade, and it’s going to change everything. It’s not because it’s some magic tool. The little magic emoji, it’s so cool that everyone has landed on the magic emoji, but it’s not magic.

But I like the analogy of the bicycle: people aren’t making a gajillion dollars selling bikes, but the bicycle transformed society. I live on a cul-de-sac because the automobile transformed society.

The Internet, when it was 56K and squeaky, I was like, this thing is never going to catch on, this thing’s stupid, it’s too slow, nobody’s going to use it. And it’s transformed society.

Here we are again, it’s not that AI is going to solve the problem, AI is just going to change everything. Is it going to change the way we anticipate what’s coming?

I think for sure. And you called it out as an efficiency play, as a growth play; I think it’s more like an effectiveness play all over the place. How do you become more effective?

As a B2B complex salesperson, how do I become more personal, smarter, more prepared, more empathetic, more thoughtful?

My brain is not that big, and so if I have my own analyst, and my own advisor, and my own sounding board, and it’s all my little robot, I mean, that’s just the LLM side, like wide open, that’s not even sophisticated, kind of knit together tooling.

So I think it’s going to change everything. And so I encourage folks to be way more open-minded about it: it’s not good or bad, it just is, and now what happens next?

Let’s put a pin in it there. I have a feeling we’ll connect again sometime to speak more. It’s been a pleasure, Jim.

Thank you for being here.

Pete, thanks for having me.

Thanks as always for listening and watching. Don’t forget to give us a review on your podcast player of choice. And hey, if you don’t mind, send this podcast to somebody you know will enjoy it.

At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand. So hey, Claude, today we’re talking about making the leap from AI proof of concept to an AI product.

I understand you’re a really good coder, which is why I’m coming to you for advice. Any recommendations?

The jump from proof of concept to production means solving reliability, scalability and edge cases your demo never encountered. Focus early on monitoring and fallbacks because AI models fail in unexpected ways with real user data.
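
As a minimal illustration of that advice (the window and alert threshold here are invented, not a recommendation from the episode), a fallback-rate monitor might look like:

```python
from collections import deque

class FallbackMonitor:
    """Track the share of recent requests that fell back to a human or
    default path; alert when it drifts above an acceptable rate."""

    def __init__(self, window: int = 100, alert_rate: float = 0.15):
        self.outcomes = deque(maxlen=window)
        self.alert_rate = alert_rate

    def record(self, fell_back: bool) -> None:
        self.outcomes.append(fell_back)

    @property
    def fallback_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def should_alert(self) -> bool:
        # Only alert once the window is full, so early noise doesn't page anyone.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.fallback_rate > self.alert_rate)

monitor = FallbackMonitor(window=5, alert_rate=0.2)
for fell_back in [False, False, True, True, False]:
    monitor.record(fell_back)
print(monitor.fallback_rate, monitor.should_alert())  # -> 0.4 True
```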

And now you’re in the know. Thanks as always for listening. We’ll see you next week with more AI applications, discussions and experts.
