AI Knowhow Episode 93 Overview
- Discover why your ability to spot subtle signals, often overlooked or ignored by traditional metrics, can make or break growth and retention
- Learn how AI can not only identify critical signals but also help business leaders determine which insights are actionable
- Explore real-world examples of how AI can enhance and inform human judgment rather than replace it
What do an Oura ring and AI have in common? More than you might think. Both provide early warnings of critical issues that might otherwise remain unnoticed, allowing for proactive intervention. Just as an Oura ring can detect subtle health anomalies before they become major problems, AI tools can identify hidden signals within client relationships that traditional metrics often miss.
Recognizing these subtle indicators early can prevent client dissatisfaction from escalating into attrition, fundamentally transforming how businesses approach driving strategic growth. On episode 93 of AI Knowhow, host Courtney Baker, David DeWolf, and Mohan Rao explore the subtle yet critical signals businesses often overlook that, if missed, can lead to silent churn.
They also give the audience a quick overview of some of the latest additions to the Knownwell platform. We recently made a big update that gives leaders even more tools to understand the health of their entire portfolio and where they should focus their energy and efforts to make the greatest impact.
The goal? As David says, “Telling you what you need to know before you even need to know it. It is answering the question you don’t have to ask. And that is the power of AI that we’re starting to see come to fruition.” If you’re interested in seeing the latest in Knownwell, you can schedule time to take a look here.
AI in the News: Mandatory AI at Microsoft?
Courtney and Pete kick off the episode looking at a recent story detailing how at least one division in Microsoft is being told to lean more heavily on AI…or else. While much of the logic behind the mandate is understandable, might there be a less heavy-handed way to increase adoption than baking AI utilization into performance reviews, as the memo suggests may happen? We’d like to think so.
Pete also asks a larger question that many are wrestling with: if we’re being instructed that turning to AI for help accomplishing work should always be the first resort, what does that do to our ability to think critically? The down-the-line implications and unintended consequences of such a mandate may not be pretty.
Expert Interview: Part 2 of Our Interview with Matt Stauffer
Matt Stauffer, CEO of Tighten, joins Pete Buer for this week’s expert interview segment. Matt and Pete continue their conversation on where and when to integrate AI into business workflows (or not). Matt emphasizes that successful AI adoption isn’t about wholesale replacement of human roles but rather targeted augmentation.
He shares Tighten’s approach, where AI tools handle routine tasks, enabling developers to dedicate their attention to complex problem-solving and strategic thinking. Matt candidly points out that they’re essentially treating AI as a “junior programmer” whose work still very much requires human review and intervention.
Pete and Matt also look at how board members can evaluate whether their portfolio company CEOs are up to the challenge of discerning when and where to deploy AI in their companies. One of the first questions Matt says he would ask CEOs is how they’re personally experimenting with AI. A CEO who has hands-on experience with AI is far less prone to make the mistake of trying to apply AI in every situation.
“As much as I think AI is a wonderful tool,” Matt says, “I’d prefer skepticism over magical thinking.”
Watch the Episode
Watch the full episode below, and be sure to subscribe to our YouTube channel.
Listen to the Episode
You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
Show Notes
- Connect with Tighten CEO Matt Stauffer on LinkedIn
- Learn about Tighten, Matt’s company that builds and rescues web apps and dev teams
- Connect with David DeWolf on LinkedIn
- Connect with Mohan Rao on LinkedIn
- Connect with Courtney Baker on LinkedIn
- Connect with Pete Buer on LinkedIn
- Get a guided Knownwell demo
- Follow Knownwell on LinkedIn
What would you say you do here? Well, we’re building an AI platform for commercial intelligence. Haven’t you been listening to all the ads I’ve been reading for 92 episodes?
No, have you been skipping ahead? I see what you’re doing.
Don’t worry if you have been doing that, because today we’re going to be peeling back the curtain just a bit on what the Knownwell team has been hard at work on, and why we’re so excited about it. Seriously, it’s a game changer.
You should definitely listen to this episode. Hi, I’m Courtney Baker and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.
As always, I’m joined by Knownwell CEO, David DeWolf, Chief Product and Technology Officer, Mohan Rao, and NordLite CEO, Pete Buer.
We also have the second part of our interview with Matt Stauffer, the CEO of Tighten, about why he thinks the mad rush toward AI everything is getting a little out of hand.
But first, put on your hiking boots and Windex off those binoculars, because it’s time for another installment of AI in the Wild.
Hey Pete, how are you?
I’m good Courtney, how are you doing?
I’m doing great, but I do have some bad news, Pete. There’s been another memo. This time it comes from Microsoft.
Everybody that had put that bet in, you got some money coming your way.
And according to the article in Forbes and Business Insider, at least one division in Microsoft is pushing staff to use internal AI tools more and they may consider AI use in reviews. Pete, what’s the takeaway here?
So you call it bad news. I got to say one of the big takeaways here is just how helpful these controversial corporate memos are to our podcasting agenda, right? I think it’s good news.
And there’s good learning in this one, as there have been in all those that preceded. The backdrop, the context on this one.
So the president of Microsoft’s Developer Division sent an internal memo recently saying, among other things, using AI is no longer optional. It’s core to every role at every level.
And we’re hearing echoes of this same sentiment in other places, in other memos even. It’s a little bit of a paradox for me in the Microsoft message. And let me sort of offer that up as our take.
On the one hand, of course, I agree with the notion of encouraging your teams to learn the mission critical skills associated with AI and avail themselves of every tool imaginable in the toolbox. So I’m down with the what.
It’s the how in the Microsoft case that I take issue with. One example, sort of Microsoft specific.
As you read between the lines in the article, you realize that one of the motivations behind the memo and the declaration of intent is that Microsoft is trying to curb its employees’ use of other people’s AI tools.
So they’re stipulating that employees use Microsoft’s own products: eat your own dog food, make it better as a team. I totally get that for Microsoft. I wouldn’t want to be limiting in that same way with my own teams.
Different tools are great at different things and you need to pick and choose among them according to what your team, your department, your mission requires.
And so in another example, Microsoft is considering baking AI utilization into its performance reviews. I totally support giving teeth to whatever mandate we believe as leaders we need to drive out with our teams.
I just wonder whether the follow on implications have been thoroughly thought through, maintaining critical thinking skills, striking the right human to machine balance in the work. That list kind of goes on.
As leaders, we have to remember the challenge isn’t just driving AI adoption in a vacuum. It’s creating an environment where employees embrace AI because it’s the best way to make their work more meaningful and their work product more effective.
Now, Pete, I don’t know if you’re okay with this, but you know what this reminds me of? What we were just talking about before we started recording.
Yeah.
You actually gave an example. Are you okay with me pulling this one out?
Yeah, sure. Of course. Yeah.
You gave an example of a situation where you wish that you had not used ChatGPT.
You would have pushed yourself to do the critical thinking beforehand because you felt like the end product would have been better. But you went immediately to ChatGPT to frame some things up.
It was like, is this hurting my critical thinking skills here?
It’s interesting that the unintended consequence of what we see here at Microsoft might be: I’m not even going to pause to ask, should I have taken the time to get my own thinking done before I used AI?
That just goes out the window, because now the performer in me takes over.
It’s like I’m going straight to AI every time, because my paycheck depends on it.
It’ll be interesting to see how that plays out. I think that story that you shared right before we got on air really, really showcases this. Pete, thank you so much for this one.
Thank you, Courtney.
The Knownwell team recently hit a big milestone when we released a huge update to our platform.
It was a big push, and it introduced a lot of new areas for leaders to dig into within the Knownwell platform.
I was excited to talk with David and Mohan to get their thoughts on what stood out most to them and how it could change the way professional services firms operate today. David, Mohan, today I want to talk about my Oura Ring.
Oh, you’ve been trying to get me to buy one, and I’m the only one on this podcast that doesn’t have one.
That’s right.
Everybody listening, I’ll send my affiliate link later, so thank you. But I recently had a little serious moment. Okay, for everybody listening: Mohan and I both have Oura Rings, and we drive David crazy, because we’re always talking about our Oura Rings.
Matter of fact, I’ll even go to Mohan, sometimes for advice for things happening on my Oura Ring.
I just sleep really well, apparently, compared to all of you.
Actually, this is exactly what I want to talk about right now, okay? Okay, what you just said. Recently, I had this moment where my biometrics, according to my Oura Ring, were not looking good. Like, something was off.
But I actually didn’t, I think without my ring telling me what was actually going on, for me to pay attention, I would have just plowed on as normal and been just moving along. But because I had-
Because you felt fine or because you didn’t know what was going on?
Yeah, I just felt fine.
I was like, I’m fine, I’m fine, nothing that you know.
So it was telling you something about yourself you didn’t know.
Exactly. Because I had this intelligence, I started to really pay attention and I actually took the time. I think David, I even said to you this week, I was like, I had to baby myself a little bit.
You did?
I’ve never heard that from you before.
I really had to pay attention and say like, why is this? It was like a heart rate thing, which was just weird. And I really took the time to say like, okay, what could be going on?
What could I do to kind of help with this? But because I had the intelligence, I was able to kind of get ahead of something that could have prolonged, could have gotten worse, things could have ended a different way.
But because I had the intelligence, I was able to be very proactive with what was going on.
But what I want to talk about today is what’s happening with my Oura Ring, but in companies, where things are going on, signals are happening, but nobody’s picking them up. They’re just like you, David.
They’re like, I slept great last night.
They’re like that call with the client was perfect. I don’t know what you’re talking about. But underneath, there are these little contextual clues firing off, but nobody’s seeing them.
And then what happens is firms silently bleed customers. And it’s painful, but there were things there early on that could have been caught. So today, that’s what I want to talk about.
So with companies, Mohan, I’d love for you to talk a little bit more about, why do you think this is happening with these, why do we silently bleed customers?
You know, mainly because, you know, we are all very metrics-oriented people. We want to see numbers. We want to see categories, high, medium, low, red, yellow, green.
But real life is messier than that, right? So there are these signals that are out there. There are critical client signals that haven’t yet shown up in any of the numbers or actually in the real relationship yet, right?
You know, the power of AI is always the power to predict what could be happening, right? So these are predictive machines.
For us, when we think about signals or contextual clues, they’re really clues to that something might be happening and allow you to connect the dots before they show up in various metrics.
And these are the signal sensing bots that we have, if you will, that go in and surface this to the user. That is the essence of these clues.
You know, the analogy that I think of for those of us that aren’t fortunate enough to be in the Oura Club, to me, it’s the weather, right?
When I go into my weather app, right, I’ve got a temperature, but then I have barometric pressure, and I have the wind direction, and like, there’s all these other signals, right?
A great example for me in the weather is, if you are looking at temperatures that are in the 60s, maybe they’re approaching the 70s, you’re probably thinking it’s a pretty nice day out.
And we all know, okay, maybe it’s going to rain, maybe it’s not, and so we’re aware of that.
But do you know that most tornadoes happen when you have the mixture of the atmosphere of warm air and cold air, and that most frequently happens in kind of the 50s and 60s, right?
And so you might be seeing a really nice day, but have no clue of all these signals going around. But if you are in a world where you understand weather, you may be able to surface that based on some of these other signals and clues, right?
And I love this analogy, despite not having a ring and feeling left out, because I do think in all walks of life, right, this is one of the powers of artificial intelligence is that we can start to surface signals from the natural information flows
that flow around us at all times, right? All of these data points, whether we’re talking about our health and Oura Rings, whether we’re talking about the weather and the latest weather app, whether we’re talking about commercial intelligence and the flows
that happen between our clients and our firms, these signals are there, but they have to be number one surfaced, right? It’s hard for human beings to sit in enough conversations to be aware of all the data points that are flowing around us all the
time to pick these up. And in many cases for these different things, you have to connect the dots. It’s not just about the temperature that creates the tornado. It’s not just about the warm air and the cold air.
They’re all sorts of things and you have to say, oh, because these seven things are happening, we have a chance. It’s a tornado watch now, right? Same thing with your Oura Ring.
Because these seven things have happened, Courtney, we’re going to alert you, your heart rate looks a little bit off, right? There’s something, you should pay attention. Something’s going on, maybe your stress is a little high right now.
Because we pick up multiple things, otherwise you’re saying, oh, of course my heart rate’s elevated. I just got done running. Yeah, like, right?
No, no, no. It’s the context of all these data points together. And I think that’s what we forget sometimes, right?
We think it’s all black and white. No, no, no. It’s the probabilistic nature of artificial intelligence.
It is putting all these signals together, first by collecting enough of them, and then secondly, by probabilistically determining that that means more likely another outcome that really matters.
And you know, what makes it harder is that in addition to all the collection and everything that David talked about, you have to ignore 90 percent of it and say, like, let me worry about the 10 percent of it, right?
Because if you start looking for signals, there are lots of signals out there, right? And then you’re like, my hand hurts. Why does my hand hurt?
I didn’t do anything. But you know, so you start looking, you start seeing signals, and then you’ve got to ignore 90 percent of them, right? Which is so hard.
And AI is so perfectly suited for that task of saying, what is the 10 percent that I need to pay attention to and ignore the 90?
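The aggregate-then-filter idea David and Mohan describe can be sketched in a few lines. This is a hedged illustration, not Knownwell’s actual model: the signal names, strengths, weights, and threshold below are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    strength: float  # 0.0-1.0, how strongly this signal fired
    weight: float    # how predictive this signal type has proven to be

def churn_risk(signals: list[Signal]) -> float:
    """Naive weighted combination of many weak signals into a 0-1 risk score."""
    if not signals:
        return 0.0
    total_weight = sum(s.weight for s in signals)
    return sum(s.strength * s.weight for s in signals) / total_weight

def worth_surfacing(signals: list[Signal], threshold: float = 0.6) -> bool:
    """Only alert on the small fraction of accounts that cross the bar,
    so the 90 percent of noise stays ignored."""
    return churn_risk(signals) >= threshold

# Three hypothetical signals on one client account:
signals = [
    Signal("budget_pressure", 0.8, 3.0),
    Signal("executive_turnover", 0.9, 2.0),
    Signal("slower_email_replies", 0.4, 1.0),
]
print(round(churn_risk(signals), 2))  # → 0.77
print(worth_surfacing(signals))       # → True
```

No single signal here would trip an alarm on its own; it is the weighted combination crossing a threshold that turns a handful of weak clues into the one alert worth a human’s attention.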
Great.
Well, and I think another part of this is even though it is like connecting seven things, the hard part in an organization, and those seven things could be in completely different departments, different people. Impossible. I could at least-
Two of them are in a system, three of them are in somebody’s head.
Exactly.
I mean, it’s just humanly impossible for us to see those seven things and see that, oh my goodness, it’s a tornado warning.
Totally.
We are about to bleed a client.
Yeah. No, it’s being able to track this at scale through functional silos, be able to say that this is important enough to act versus this is just a watch item.
It’s impossible for one human to do it at scale, and then it is even more impossible for multiple humans to do it at scale, just because of the nature of the problem.
Yeah.
It’s the one reason that I am so excited not to talk about ourselves here, but the next version of Knownwell, we’re in the middle of an upgrade to our user experience, where we start to surface not just the Knownwell score that gives you the health
of your client relationships, but these signals. Is there an active fire? Is there a fire burning in that account that something very strong has gone on that you need to be aware of?
Is there budgetary pressure that’s going on that may have gone unnoticed?
But surfacing that signal and putting those pieces together, and how do you differentiate that signal from noise when, all the time, clients are asking you for budget considerations and different things, that’s different than an actual signal that
there’s pressure. Is there executive turnover? Maybe somebody may be on the verge of leaving and you need to be aware that key person is or has just left.
Those types of signals oftentimes don’t really scream to you in the red, yellow, green score card that you have. They also don’t come out. They’re going to impact the commercial health score, the Knownwell score, but you may not notice it.
They may not be stark enough, but starting to surface, here’s what I love about AI, telling you what you need to know before you even know you need to know it. It is answering the question you don’t have to ask.
And that is the power of AI that we’re starting to see come to fruition through all of these platforms, as we see more and more enterprise platforms arise. But Knownwell, we’re taking that step right now.
Actually, by the time this podcast is released, those features will be out. So I’m super excited to see this vision that I think a lot of people have had for AI really coming to life.
David, Mohan, I’m really thinking the more we talk about signals and how really what Knownwell does is so similar to our good old Oura Rings here. I think we need a sponsorship. We may need to send this episode to them.
But I think the real thing for everybody listening is really thinking about having these signals, these things that are buried, these contextual clues with what is happening in the health of your relationships and how important that is to stop
silently bleeding, getting a year into the contract before you realize, oh gosh, we got a major issue and we had no idea. Well, David, Mohan, I think this was really interesting. Thank you as always.
Thanks, Courtney.
Good health to all.
Good health to all, yes. And it’s okay to baby yourself a little bit. The new era of commercial intelligence is already here.
If you’re interested in having real time, objective intelligence on the health of your commercial relationships, you might be interested in trying out Knownwell. Stop flying blind and having to just use your best intuition and start sprinting ahead.
Go to knownwell.com for a guided tour and to set up a time to speak with our Knownwell team. Matt Stauffer is the CEO of Tighten, a software development company that specializes in building the world’s best web and mobile apps.
Here’s the second part of his conversation with Pete Buer, where Matt shows us he’s a little more skeptical of all the AI hype than most.
When we opened, you referenced the hot topic that AI is coming to take all our jobs. And you have folks on the team who are builders. And that’s where a lot of the speculation in the early days about job loss was showing up.
How have you observed or maybe even how have you engineered changes in the roles of the folks who are building code, building product for your business?
We’ve changed a lot less than you would think. There was that time, for a couple of months there, when everyone said prompt engineer was the new job that everyone was going to hire for, and that fizzled.
Because in the end, all of my folks need to understand the potential benefits that AI can offer in two ways to our clients.
The first way is integrations for the clients where they’re going to use AI, like we were just talking about, and so they need to understand that.
That’s more of our project managers that are understanding that because they’re doing that product thinking.
Then the second one is the ways that we use AI in our actual day-to-day workflows to ensure that we’re not missing out on opportunities for performance improvements. That’s primarily the developers.
Although our product and leadership teams are also trying to figure out how to make meeting transcription better. We’re still doing all that just like any other business, right?
But our developers are constantly trying out what’s the best code generation AI, what’s the best IDE helper, which is our code tools that we use where it gives you a little hint saying, Oh, did you want to do this?
I’ll stub out a dumb version of this for you. So we’re always testing those, we’re trying to be at the forefront of that. But to me, this is simpler when you think of AI, not as some new way of looking at the world, but as a tool.
AI is a tool, and now it’s a suite of tools, and it’s unlike a lot of other tools, it can be used in a lot of different ways, right? So it looks bigger and more meaningful.
But in the end, AI in our world, in the professional service world, allows us to A, offer AI as a part of the deliverable, and B, allows us to use AI as a part of our development process. Both of those are just tools to do a specific thing.
So we look at the layout of our team, I want my individual developers to have tools to allow them to be productive. So yeah, those developers should be taking advantage of tools.
They should be taking advantage of the way that AI may allow them to deliver code faster or more effectively than they could before.
But it doesn’t change the fact that we need a human being to understand what am I building, is this deliverable good enough?
So one of the ways that’s allowed us to differentiate the way we’re looking at AI versus other folks and not worry about our jobs being lost is, in the end, AI is basically just like a junior programmer.
You can tell it to go do a thing, but you need to review what it does. You need to give it really great directions and you need to be very careful in reviewing what it does and make a whole bunch of changes to it.
Unlike a junior, it never grows beyond junior, but that’s fine. We work with juniors anyway. But you still need a senior developer somewhere in the chain, and that hasn’t changed anything there.
Maybe it makes our seniors a little bit more productive, but in the end, we haven’t really changed our structure that much otherwise.
That’s great. I wonder if there’s a nugget here for listeners’ best practice insight. Have you found any big wins from a productivity or enablement of the human perspective in integrating AI tools into workflow?
From a programmer perspective, the biggest win is when you come down to doing things that…
I’m not like everybody else. I have ADHD, and so my ability to do boring things without an external stimulus is difficult.
And so it’s very clear to me when I’m doing a thing that is using my full brain versus a thing that’s boring, because when my brain starts saying, let’s put on an old episode of Star Trek and a side monitor while we’re working, I’m like, this is a
boring thing, right? Those things are the ones that AI can significantly diminish the amount of grunt work I’m doing. If it’s grunt work, AI is incredible.
So in my particular area of the world, if I’m solving a complex technical architectural challenge, my full brain is there, AI is not going to do anything good for me. Maybe it can be a conversation partner like it can for any industry.
AI is a great conversation partner regardless. So that should be like kind of the first one.
But from our productivity standpoint, if it’s something that is rote, that is boring, that I don’t think I’m uniquely capable of doing, but somebody’s got to do it, AI is incredible for that.
So in my world, it’s if I’m building out HTML templates, that’s not where my usefulness lies. Any developer can build out HTML templates. But because I’m a developer, I have to do that as part of my job.
But as a senior developer, I’m doing really complicated things, and also HTML templates. So I’m going to offload the HTML templates to the AI. It’s going to do a 90% good enough job, and it just saved me four hours.
So what we found is that any part of our job that is rote could very easily be handed off to somebody else, but we just don’t happen to have a junior developer today.
AI is really incredible in taking that stuff off our plates, so we can do what only we can do.
Wonderful. Thank you for the explanation. It’s a good triage philosophy.
Good news for the business, probably not good news for catching up on the next generation, but.
You got it.
This has been incredible. I want to wrap up with a question for the CEOs listening.
If you were a board member sitting at a PE firm and you wanted to kind of look your PortCo CEO in the eye and get a feel for whether or not they really have their plan together on AI, how would you test that?
What’s the killer question or questions that you’d put to them?
To me, my ability to understand how AI should be used in a company context, in a product context for a specific client is all around my personal understanding about it.
And it’s a bummer because I don’t think everybody should always have to integrate AI and everything, but if you’re at a leadership level being asked to make those sort of decisions and you don’t have any practical day-to-day interaction or experience
with AI, that’s a concern for me. So if I say, hey, how are you using AI in your day-to-day life and the answer is, I’m not, then my thought about your ability to evaluate its usefulness or applicability is I’m concerned there.
And again, I feel bad because I want everybody to be able to make that decision for themselves, but if you’re leading a company that has the potential to benefit from AI and you’re not actively engaging with it on a regular basis, you’re not going to have that kind of well-reasoned, rubber-hits-the-road experience of what AI is good and not good at. And interestingly, I actually think that person is more likely to blanket-apply AI in places it shouldn’t be used, rather than, oh, I’m concerned about that person underutilizing it. Because unless you have experience with AI, whether it’s using ChatGPT to help you write things or, as a programmer, interacting with it day to day, you haven’t yet learned what it’s good at and what it’s not. So at worst, you have this kind of hyper-fantastical understanding that it’s just this magic tool that you can throw things into and get magic stuff out of.
At best, you’ll probably have just a blanket kind of skepticism.
As much as I think that AI is a wonderful tool, I prefer skepticism over magical thinking, but I’d much rather a nuanced understanding because I think that’s your best ability to do what’s best for the company.
We do our best learning by doing, I believe. So if you’ve got someone who answered, “I’m not using it,” I think the next stage that people are looking down their noses at is using ChatGPT as a search mechanism, right?
Like that’s the, maybe that’s 101. What’s the 201?
Like what should a CEO or an executive on a team be, what are the types of things they should be using AI for to get to a place where they have enough of a real understanding to be able to make good decisions?
So ChatGPT and tools like that, they’re large language models, which means they best understand all the content, all the questions and all the prompts were given as a language challenge.
And so anything in your world that is best understood by consuming a bunch of written or spoken content and then giving you answers, to me, that’s the sort of thing where you should reach for that as your 201.
So many, many, many businesses have an incredible amount of business documents. They’ve got meetings that they either do record or could record. They’ve got statements of work and professional service agreements and blog posts and content.
All that stuff exists there and that stuff can be used to generate useful written communication answers.
You have an entire history of communication with the client over the span of four years, but your PM or your AM who’s responsible for that client left two months ago. Did you just lose everything that there is to know about that person?
Finding ways to use those tools to do what LLMs are good at, which is consume large amounts of very specific content relevant to your communication and then gather useful things out of it.
Is it blog posts about, oh, these are the most common things that I write e-mails about? Is it listening to your business calls and realizing on every single business call you make the same pitch?
Maybe that’s a point for a blog post or a video that you should create or something like that.
There’s so many ways where an LLM can do things that we don’t have time, we don’t have energy or we don’t have consistency of recognition of the same language patterns to be able to do. An LLM excels at those.
Those are really great opportunities for any type of business to benefit from by slurping all that stuff up into some LLM tool, I don’t know if it’s Gemini or whatever else, and then trying to get useful information out of it that helps your
business. How are things going? What’s the health of the relationship with this client? What are the things I talk about all the time?
What’s some content that you hear me talk about in all my conference talks that you’ve never seen me write a blog post about? It’s really good at that kind of stuff.
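Matt’s “slurp it all up” suggestion is easy to prototype. The sketch below is purely illustrative: the folder layout, the prompt wording, and the `ask_llm` placeholder are assumptions standing in for whatever model and document store you actually use.

```python
# Illustrative only: gathers one client's saved communications (emails,
# call transcripts, SOWs exported as .txt files) into a single prompt
# that an LLM can answer questions against. ask_llm() is a placeholder
# for a real model call (Gemini, GPT, or whatever else you use).
from pathlib import Path

def build_client_prompt(folder: str, question: str, max_chars: int = 50_000) -> str:
    """Concatenate a client's .txt documents, truncated to a rough size budget."""
    docs = [
        f"--- {path.name} ---\n{path.read_text()}"
        for path in sorted(Path(folder).glob("*.txt"))
    ]
    corpus = "\n\n".join(docs)[:max_chars]
    return (
        "You are reviewing several years of communication with one client.\n\n"
        f"{corpus}\n\n"
        f"Question: {question}"
    )

# answer = ask_llm(build_client_prompt(
#     "clients/acme", "What's the health of this relationship?"))
```

Once the history is in one place, the questions in this conversation, like “how are things going?” or “what do I pitch on every call?”, become simple prompts over that corpus, and the knowledge survives even if the AM who owned the account leaves.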
What a wonderful answer. Like everything else that we’ve talked about. Clever in the first place, but that’s so pragmatic in the examples that you share and the guidance that you give.
Matt, I just want to say a great big thank you as we get to the end of the segment. This has been terrific.
It was a pleasure.
Thanks as always for listening and watching. Don’t forget to give us a five star rating on your podcast player of choice. Legitimately, it is really helpful when you take two seconds to do that.
At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand. So, hey, Mistral, or should I say bonjour? Can’t believe it’s taken us this long to have you on the show.
Today, we’re talking about what’s new at Knownwell. Can you tell?
Knownwell, the AI-powered enterprise platform, has some exciting updates. They’ve recently achieved SOC 2 Type 2 certification, showcasing their commitment to security and information confidentiality.
Additionally, they’ve secured $4 million in seed funding to accelerate the development of their innovative commercial intelligence platform tailored for professional services companies.
Okay, we’ll let those answers slide, I guess. Those are all news stories about Knownwell, but you can’t always be right. And now, you’re in the know.
Thanks as always for listening. We’ll see you next week with more AI applications, discussions and experts.