It’s easy for executives to feel like they’re always putting out fires—continually reacting to big problems that flare up instead of getting ahead of them. But what if you could predict issues before they happen? In this episode of AI Knowhow, Courtney Baker talks with David DeWolf and Mohan Rao about how AI can help leaders move from being reactive to proactive.
And in our expert interview, Pete Buer sits down with Jessica Hall, Chief Growth Officer at OpsCanvas, to dive into how businesses can build trust in AI writ large and how product teams can add “markers of trust” into AI products to ensure end users are comfortable using them.
Using AI to Stay Ahead
Most business leaders focus on solving the problems they already know about. But AI can help uncover problems they don’t even realize exist. David DeWolf explains that AI can pick up on small warning signs—like shifts in client relationships or employee engagement—that might be hard for humans to detect. Instead of just reacting to obvious problems, leaders can use AI to address issues early and prevent bigger challenges.
Mohan Rao connects this idea to the Eisenhower Matrix, a tool that helps leaders focus on what’s important but not urgent. AI can free up time for tasks like mentoring, strategic planning, and innovation by handling routine or repetitive work.
Building Trust in AI
Later in the episode, Pete Buer and Jessica Hall discuss a major issue: trust in AI. Drawing from her experience in product strategy and AI ethics, Jess shares a powerful framework for building trust in AI systems, explains why AI skepticism mirrors the early days of automated elevators, and offers practical steps for businesses to create AI-powered products that people feel safe using.
Jessica believes the Trust Triangle, a leadership framework introduced by Frances Frei and Anne Morriss, can be effectively applied to AI. The Trust Triangle comprises three components:
- Logic: Does the AI’s reasoning make sense?
- Empathy: Does the AI understand and respect the user’s needs?
- Authenticity: Does the AI act consistently and predictably?
For AI to be trusted, companies need to be transparent about how it works, where the data comes from, and what safeguards are in place. Just as people took decades to get used to automated elevators, and only grew comfortable once phones and emergency alarm buttons were installed, businesses and consumers need clear markers of trust to feel comfortable using AI. These markers of trust include things like the “chain of thought” reasoning you see in AI tools like Perplexity and DeepSeek, which makes it clear to users how outputs were derived.
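To make that concrete, here is a minimal, hypothetical sketch (not any specific product’s API) of what bundling markers of trust with an AI output might look like: the answer travels together with a readable reasoning trace, its sources, and a confidence level, and all of them are surfaced to the user.

```python
from dataclasses import dataclass, field

@dataclass
class TrustedAnswer:
    """An AI output bundled with the markers of trust a user needs to evaluate it."""
    answer: str
    reasoning_summary: str                            # readable trace of how the output was derived
    sources: list[str] = field(default_factory=list)  # where the supporting data came from
    confidence: float = 0.0                           # 0.0-1.0: how much weight to give the answer

def render(result: TrustedAnswer) -> str:
    # Surface the markers alongside the answer instead of hiding them.
    return "\n".join([
        f"Answer: {result.answer}",
        f"How this was derived: {result.reasoning_summary}",
        f"Sources: {', '.join(result.sources) or 'none provided'}",
        f"Confidence: {result.confidence:.0%}",
    ])

print(render(TrustedAnswer(
    answer="This client relationship shows early signs of risk.",
    reasoning_summary="Response times slowed and meeting cadence dropped over 60 days.",
    sources=["CRM activity log", "calendar metadata"],
    confidence=0.72,
)))
```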
Helping Employees Feel Confident About AI
It’s not just customers who need to trust AI—employees do, too. Many workers worry that AI could replace their jobs. Jessica advises leaders to be clear about their AI plans and encourage employees to use AI as a tool to help them work better, not as a replacement.
Many startups are already leading the way by building AI into their workflows from day one. They’re finding that AI lets them do more with fewer people, making their businesses more competitive. Established companies can follow their lead by creating clear AI policies and encouraging employees to explore AI tools in a structured way.
AI as a Tool for Proactivity, Not Just Productivity
AI isn’t just about saving time—it’s about helping leaders stay ahead, make smarter decisions, and focus on what really matters. Businesses that integrate AI effectively can shift from reacting to problems to proactively shaping their future. But to get the full benefits, companies must build trust by being transparent and helping employees embrace AI.
Want to hear the full conversation? Watch or listen to the latest episode below, or wherever you get your podcasts.
Watch the Episode
Watch the full episode below, and be sure to subscribe to our YouTube channel.
Listen to the Episode
You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
Show Notes & Related Links
- Connect with Jessica Hall on LinkedIn
- Learn more about OpsCanvas
- Connect with David DeWolf on LinkedIn
- Connect with Courtney Baker on LinkedIn
- Connect with Mohan Rao on LinkedIn
- Connect with Pete Buer on LinkedIn
- Watch a guided Knownwell demo
- Follow Knownwell on LinkedIn
You know the thing, the squeaky wheel gets the grease.
But in order for that to be true, you have to be a reactive leader.
You have to wait for that squeak.
What if you could know which wheel would squeak before it ever made a sound?
How would your leadership style and your organization as a whole be different?
Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.
As always, I’m joined by Knownwell CEO David DeWolf, Chief Product and Technology Officer Mohan Rao, and NordLight CEO Pete Buer.
We also have a discussion with Jessica Hall about how to establish trust in AI, both in individual products and in the technology as a whole.
But first, I talked with David and Mohan recently about how leaders can take a more proactive approach to leading their companies in the age of AI.
David, Mohan, I want to talk with you today about a topic that I think and hope will resonate with everybody listening, how they can utilize AI to drive proactive leadership.
So, we’ve all been there.
You know, those times where just everything is hitting the fan.
It’s just, you know, it’s constant reactive.
It’s all the things you didn’t think about.
You’re just putting out fire after fire.
You’re never able to move forward on the things that you know are like really important.
And so, that’s what I want to talk about today.
How can AI change that?
How can it help us as leaders change this reactive MO?
How can it help us more readily be in that proactive space, you know, the space we all want to be in?
You know, Courtney, one of the things that I think about is so often we talk about needing to be able to control the controllables, right?
We want to be able to execute on what we know.
But in business, there is a lot that we don’t know.
And we don’t even know we don’t know it, right?
And I think what sometimes gets us in trouble is not only the extreme of not executing on the controllables, but the other side of things, of not even knowing that you don’t know something.
I think artificial intelligence is great at identifying those things that I need to know, that I probably don’t know, and I may not know.
And I think by pushing into more of the operational use cases of AI in business, we will begin to see more and more of this. You know, in the old world, we called it proactive notifications.
Well, those notifications were very simplistic.
It was the things we knew to look for.
I think there’s going to be a new class of notifications for the things where I just can’t put all of the different ways a client relationship may sour into a rules engine and monitor for all of them.
But artificial intelligence absolutely can monitor for those warning signs and predict that, oh, this might be a way this client relationship is in trouble and can surface it for me.
And I think we’re going to see more and more and more of those.
And I think that is the genre, the class of issues that it can really help prompt us to be more proactive in.
Because now I have a situation that I didn’t even know to look for, that I’m able to be responsive to, I’m able to get ahead of.
It’s so interesting.
Obviously, you brought up clients.
The other area that that scenario feels like it comes up a lot with is with employees, with your internal team.
It’s like the thing you didn’t know you didn’t know.
And those are so painful.
They hurt so bad.
I actually think as AI plays out, there’s going to be a lot of human examples like that, right?
If you think of binary decisions and deterministic rules, a lot of them are systems-oriented, right?
We’ve seen great advances in IT and processes and those types of things.
I think the management and leadership of people, of relationships, of the soft side of business is going to come a long way in this new world.
Very interesting.
I think you can simplify proactive leadership.
And let me ask you both, in terms of how much time can you claw back to be in Eisenhower quadrant number two, right?
So, just kind of thinking of it that way and rephrasing this, of saying, what are the critical things that I need to be doing that are not urgent?
Some of the things that you mentioned, right?
You know, human relationships, people relationships, motivation, mentoring is a big category.
When you’re firefighting all the time, there’s no time to do that, right?
So, spending time with clients is another one that is so super important.
It is about, in multiple ways, essentially thinking about future progress rather than fighting about the past, right?
So, that’s what it comes down to.
The question is, how can AI buy you more time?
I think it can buy you time from both the efficiency and effectiveness angles, right?
So, efficiency in terms of work that you used to spend a few hours compiling, even if you had a chief of staff, that’s just brought together for you using AI.
But then also being able to be more effective in your job, being able to make some predictions in terms of what could be happening, and to just be there already.
I think this is a massive opportunity for proactive leadership and for us to increase our time more in the proactiveness side of things.
You know what somebody needs to build, and you two are smart enough to do it.
Okay, what we need is a platform where we can go in and use the Eisenhower Matrix to fill in all the things we did for the last month, and that can tell you the tools, the AI platforms, you should be using to complete those different tasks in the different buckets.
Don’t y’all feel like that’s half the battle right now?
Oh, there’s no doubt, right?
The number of times I get asked, what’s your favorite AI tool?
People are just groping for, how do I find these tools in a world where they’re being thrown at you all the time?
It’s just the practical, I just need the practical answer.
This is the thing I’m doing, what should I be using for it?
If y’all could fix that for us, y’all could do that in your sleep probably.
Courtney, I don’t have time to do that because I’m worried about the past and working on it.
That would be proactive.
That’s such a great suggestion.
I think all of us should be in Quadrant 2.
Just for listeners, a reminder: in the Eisenhower Matrix, Quadrant 2 is, I think, the critical, not urgent quadrant.
Also, I would like to say for the smart listeners that we have, if you want to take my idea, let me know, I will be one of your investors or early users.
Maybe I shouldn’t say I’m one of your investors.
I will be an early user.
Okay, any other ideas on making this turn for really helping us all be in the place we really want to be?
Quadrant 2, Quadrant 2, important but not urgent.
You know, I thought Mohan actually brought up a really interesting point.
Think about the positive aspect of, oh, it can help us get ahead, it can give us these dynamic notifications we didn’t even know to ask for, blah, blah, blah.
The other piece that Mohan mentioned, and I just really found compelling was, because it can help us to execute work and do more of the rudimentary work, it’s actually going to free up time for us to be able to be more proactive versus reactive.
And I think that is true.
We often talk about those operational use cases because that’s where we’re focused, and I think that’s where the most economic value comes in a B2B world.
But the execution-oriented use cases, I think, are going to be driving a lot of efficiency that frees time, that allows us to be able to spend more time there.
And so look at the existing tools that are there today, right?
Gemini in your Gmail account is writing emails for you and getting you a head start on that, and that saves you a minute and a half.
Then you go into a Word doc and you are typing and Grammarly is helping you to edit that doc and it saves you another seven and a half minutes.
And then you go from tool to tool and you are doing research and Perplexity is giving you the references to the answer that it gave you and it expedites that research.
If you start adding all of that up, there is a lot of work that AI is doing that is scraping minutes of time away in every task that you do that frees you up.
Now, the key is, you can’t use that time to get lost in your Instagram feed.
You’ve got to use that time to plow it back into productivity and actually being proactive and moving the rock forward.
I actually thought about this on our last episode, when we were talking about that operations layer, like the AI being able to tell you, hey, this is the enterprise view of the thing you should be working on now.
Like, here it is. And people not liking that.
There is a certain catharsis, just like, I looked at my email and did something.
There’s that dopamine thing.
It’s also the worst productivity hit too, right?
Totally.
But it’ll just be really interesting as we’re able to solve some of that with technology.
Will our human nature allow us to do the thing to really sit in quadrant two of eyes and hatred?
Why can’t I say his name?
The Eisenhower Matrix.
Oh, you did it.
Good job.
Yeah.
I can do it, but I need AI’s help.
I’m just kidding.
Another area where AI could be super helpful is in risk management.
Right?
So one of the things that happens, that chews up a lot of time, is not the things that you’re focused on, but the things that come out of left field, just because you weren’t focused on them.
But if there is a way to say, hey boss, you need to be looking at these things too, right?
So even if it were like a list that it could create that’s 80% accurate, I think it’s going to help so much, right?
So in just kind of managing your time and energy.
You know, the other thing, as you talked through that, that it makes me think of, is one of the areas where I think AI can help us a lot: as leaders, we don’t have time to research in depth all of the information that’s actually available to us to be able to make good decisions.
And we often get stuck doing the reactive, because everything that’s available to us is way too much to consume.
I think artificial intelligence can really help us with this problem. You see it over and over in research studies from the last few years: leaders are overwhelmed by data.
We talk about data, everybody wants their new BI tools, data up, they want more, yet it’s not helping them make decisions.
It’s actually muddying the waters and making it more difficult because they can’t parse through the noise.
I think artificial intelligence, number one, can consume infinitely more than we can, but number two, it can also synthesize infinitely more than we can.
And if we can go from a place of consuming data and information, and rudimentary knowledge assets, to consuming more intelligent information assets, and true conclusions, as opposed to raw data, I think that’s going to give us a lot of time back and help us be more effective in our decision making so we can get ahead of some of these things.
Well said.
David, Mohan, thank you as always.
Thanks David.
Thanks.
I’m not going to answer because I don’t want to be reactive to your comment.
It’s vital to have the right tools at your disposal if you want to become a proactive leader.
That’s why we created Knownwell, to help executives like you tap into the power of AI to stop reacting and get ahead of the curve.
Say goodbye to spending your days bouncing from fire to fire.
Visit knownwell.com to learn more and let us show you your data on the Knownwell platform.
Jessica Hall is the Chief Growth Officer at OpsCanvas and a member of Knownwell’s AI Advisory Board.
She’s also the co-author of the book, The Product Mindset, which she co-wrote with our very own David DeWolf.
Jess sat down recently with Pete Buer to talk about the importance of establishing trust in AI.
Jess, so good to see you.
It’s good to see you too.
You and I go way back having served now at two companies together.
What I’ve always appreciated about you as a leader and in your professional roles, is the emphasis that you place on the people side of what are otherwise tech questions.
So hoping to spend some time in that space together today.
Also worth mentioning, Jess has been on Knownwell’s Advisory Board for some time now, lending her talents and expertise as we grow the business.
And I understand that you have a new role.
Yeah, I’m the Chief Growth Officer at OpsCanvas.
We are a DevOps and cloud infrastructure platform company, and we help growing companies manage their cloud, manage their spending and automate their deployments.
Congratulations on all of the above and welcome.
Yeah, thanks for having me on the advisory board, and thanks for having me here today.
So, speaking of these people questions, the theme for our conversation today is trust.
Trust seems to be coming up a lot in the context of AI.
Starting at the top, why so much focus?
Yeah, so if you think about a typical computer system, if you give it the same inputs, you’re going to get the same result or the same output.
But when you have generative AI, it has what’s called a non-deterministic output.
When you hear non-deterministic, I want you to think toddler, because you can give the toddler the same inputs.
You can ask the toddler the same question three times, and the toddler is not going to answer it the same way.
So where traditional systems are deterministic, generative AI is non-deterministic.
So it’s not always going to generate the same result.
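To illustrate the distinction in code, here is a minimal toy sketch: a deterministic function next to a random choice standing in for an LLM sampling at a temperature above zero (an assumed stand-in, not a real model API).

```python
import random

def traditional_system(x: int) -> int:
    # Deterministic: the same input always yields the same output.
    return x * 2

def generative_system(prompt: str) -> str:
    # Non-deterministic toy stand-in for an LLM sampling with temperature > 0:
    # the same prompt can come back phrased differently on every call.
    phrasings = [
        f"Sure, here's one way to think about '{prompt}'.",
        f"Good question. On '{prompt}', I'd say...",
        f"Let me answer '{prompt}' a little differently this time.",
    ]
    return random.choice(phrasings)

# Ask the same question three times, like asking the toddler.
for _ in range(3):
    print(traditional_system(21), "|", generative_system("Why is the sky blue?"))
```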
And that makes you think, hmm, what’s going on?
Where are the guardrails?
How does this thing work?
And how does this thing operate?
And another concern that we have is we don’t necessarily know in a lot of cases what data was used to train the model, how the model was trained, what guardrails or controls are in place, and how the people who put those things in place thought about crafting them.
And I think we’re also pretty worried about handing over our data and special things about us.
And all those things together, I think, create kind of a low-trust environment in AI, even though we are super excited about the interesting things that it can do.
So when I think about trust, one of the frameworks I’ve been using in AI actually comes from your world, Pete, of leadership development.
And it’s from Frances Frei and Anne Morriss.
And they talk about the trust triangle.
And so there’s three corners of the trust triangle.
Logic, which is, does this kind of make sense logically, is it expressed clearly, which, both of us being former CEBers, logic was like a huge value in that organization.
And it was very much part of the culture to challenge logic, regardless of whose logic it was.
Another piece is empathy, which is, is this showing that it cares about me?
Does this person, or in this case, this AI system care about me?
Is it thinking about me as an individual and what my needs are?
And the last corner is authenticity, which is you are who you say you are.
And I actually think the trust triangle, while it was developed for humans, kind of works in similar ways for AI systems.
And it’s a good way of thinking, not just does it make logical sense, but how are we thinking about our users and how are we kind of showing up when we put this AI into a tool that people are using to do things?
How do I think about where to pick up the story on the trust question?
So is it more fear or mistrust of the unknown?
How are the models built?
How does this thing work?
Or are people bouncing off of it, getting different kinds of toddler responses and getting shaken up by that?
Where are we?
Where is, I guess, where is the rubber hitting the road in terms of our trust problem?
Yeah, it comes through a couple of different places.
And Daniel Kahneman did some work on this before he sadly passed away and was saying that what we would forgive in a human, we don’t forgive in a tool.
You know, autonomous vehicles compared to human safety records are probably way better, but we still don’t always want to give them the keys.
So I think there’s an inherent distrust.
There’s also a sense of like, this is new, and I’m not quite sure.
So elevators had been around since about the 1800s, and they had to be operated by humans until about the 1950s.
But the elevator operator didn’t actually leave the elevator for the most part until the 1970s, almost a full 20 years later.
Because what makes people comfortable here is knowing there’s a human being who’s in control, who can do things.
And in order to get people okay with using automated elevators, one, there was an elevator strike, that kind of helped.
And two, they put more into the design of the elevators and the experience of using the elevator that made people more comfortable.
So there’s a phone in there.
And there is an alarm bell that you can hit, so it will go off.
And I actually did get stuck in an elevator at my old building.
I actually had a phone on me at the time, so I called the non-emergency number and the Arlington Fire Department came and got me out of the elevator because nobody was there who could fix the thing.
So if you think about it, we have to put these things in place that will allow people to feel comfortable and safe, so they’re willing to kind of jump in.
And over time, we forget that there was a world where a human being had to operate an elevator.
Like I never have been in an elevator that didn’t just require me to hit a button.
Let’s talk more about getting comfortable with elevators.
So trust is a major challenge, but there are companies out there that have AI powered offerings that they need to bring to market and they need to get their teams and their customers to a place where they’re comfortable with the idea.
What does best practice look like in establishing trust with your targets?
It’s actually going to be a collection of things as opposed to one.
It’s not going to be one singular thing that’s going to get this done and that’s going to make people feel safe.
I think it’s going to be a collection of things working in harmony instead of disharmony that’s going to make things possible.
So, if you think about, if I’m at a shopping center in the middle of the day and the sun is up and people are around and I can see the staff, I’m going to feel very safe.
I’ve looked around and there’s enough markers there that make me feel safe.
When I’m there late in the evening, it’s dark, no staff, nobody there.
Now all those markers are gone.
So, where we feel safe and secure and trust is a combination of factors.
So, I think there’s a couple that you’re seeing.
Everybody is hot about DeepSeek, which is doing this chain of thought or if you’ve ever used Perplexity, particularly Perplexity Pro, I can’t talk today.
They show their work.
So, one of those is saying, let’s show our work, let’s help people understand how we got to this place and what it is I’m looking at, and what type of confidence I should put in it.
That’s one thing to do.
A lot of these organizations are also very public about how they work. Some of them, like Anthropic, spend a lot of time and a lot of focus on constitutional AI and how they train and how they do it in such a way that they feel is better for humans.
Then you have DeepSeek and Llama, who’ve said, we are open source, and so we’re going to allow people to see more things, where OpenAI has kind of said no. And ironically, OpenAI was very mad at DeepSeek for potentially taking things from them, which is funny because they basically stole the entire internet.
So, not a lot of sympathy.
So I think it’s those combinations of things.
How are you working on it?
How are you showing your work?
What types of results?
And also, you know, as an organization, are you investing in things like evals, monitoring, making adjustments, being open and transparent about what you’re learning about your models and how you’re making those adjustments and continually improving?
I think it’s a combination of those things that is going to generate trust.
So, a whole bunch of cool ideas on places where these markers of trust can be embedded.
If I’m trying to figure out, for the kind of offering I’m bringing to market, the most leveraged places to embed markers of trust, with your user experience expertise, is there a method for getting to that?
Is there a way to sort of divine the right places where I should be engaging customers in a way that causes them to have more rather than less trust?
Yeah, I mean, that’s where I think you need to spend a lot of time kind of understanding your customers and their mental model.
Kind of how they see the world and what their behaviors are and how they approach things and what are kind of things that are norms and ways of behaving within a certain organization.
So if you’re working on a financial services product, you’re largely going to see something that’s SOC 2, right?
So it’s going to have a higher degree of security present.
So you’re going to see more signs of, oh, this is encrypted and I need to authenticate more.
And there’s going to be a lot of error checking and business logic that says, let’s make sure that all the things you have in here make sense.
If you’re working in education, it’s: does it align to standards?
Can you explain what the role is?
So it’s going to depend a little bit on the situation.
And that’s where I think you need to kind of talk to customers and understand what are the norms?
What are the values?
How do people think about that?
And don’t be surprised when something that you wouldn’t have thought was important to them is deeply important to them because it’s a part of that culture and you don’t want to go against that culture.
You certainly don’t want to go against the culture without knowing it.
But if you’re going to do something where you go against the grain, yeah, that’s going to be something you really want to understand before you take it on.
Are companies spending enough time worrying about this?
No.
They have done the thing that they always do, which is we need all the data scientists.
We need all the engineers.
Let’s build a data lake.
What are we going to use the lake for?
I don’t know.
They worry about that later.
Yeah.
A lot of these organizations cut their UX research, and they are focusing more on these practical AI skills.
What I have heard from every single machine learning or AI expert I’ve interviewed in the last two years, and it’s been a bunch of conversations because I’ve been interested in this myself, is get the use case right.
Be really clear at explaining what is the input, what is the output, and the output is, while it might be expressed as a simple formula X to Y, underneath that needs to be a lot of rich understanding of people and how they think about it and what they value and what they’re going to do with it, and we are not spending a lot of time thinking about that.
If you get the thinking part of a project wrong, you’re definitely going to get the doing part of a project wrong.
It feels like there’s a role for the old jobs to be done framework here.
I love the jobs to be done framework.
I wrote something a while back where I applied some of Bob Moesta’s ideas.
A lot of the things he talks about that innovators do apply here.
It’s very, very simple, and we don’t actually have to reinvent product management or UX research for AI-enabled products.
We have to use what works.
Now, there’s a lot of tactical differences, and the order in which we do things and the way we staff things is different, but those core fundamentals of building something for someone that is useful and desirable and feasible, that’s always going to be the case.
Beautiful.
I’m going to change altitudes here on the trust question and go from a user’s trust of a resource to employees or society’s trust of the role that AI plays in work and career going forward.
There’s a lot of anxiety there.
How should companies be thinking about addressing that trust question?
I mean, I think it’s a pretty challenging thing.
There are a couple of CEOs who’ve been public about what they want to do with this and reducing labor costs.
And there’s a bunch who are kind of like, and I don’t know.
What I can say is this.
We talked about elevators and elevator operators earlier.
And there are no elevator operator jobs in the world today.
There are no phone operators either, where, if you ever saw an old movie, people are plugging things into a switchboard.
That job doesn’t exist today either.
Jobs change, and the skills necessary for jobs change, a lot.
And I don’t think people are wrong to worry.
Things will change.
And somebody is going to get the short end of that.
But if you think about elevators, elevators enabled so many other things.
So many other things became possible because this technology advanced and opportunities were created there.
And so how do you get to that opportunity?
For years and years, I’ve taught skiing on the weekends during the winter.
And in skiing, we have this concept of dynamic balance, that in order to stay in balance while you’re moving, you have to hold tension in different parts of your body, and you have to continually adapt and change, so that you stay in balance.
And I think that’s the kind of mindset you have to really approach these things with, is to say that there will be tension that you have to hold and manage, and you need to be constantly adjusting, because pretending like this isn’t happening and it’s not going to affect you, that’s not going to work very well.
Are there markers of trust, maybe, in how a company communicates with its teams and how HR thinks about workflows and work planning? Could there be ways to put into place a version of markers of trust on this question?
Yeah, I think there’s a few.
I think it starts with acknowledging the reality that chances are many people in your organization are using AI to do their jobs already.
Whether you have authorized it or not, whether it’s being done on your computer system or not, this is already happening.
I think, again, the smarter thing is to say, let’s put some rules of the road in place.
Some that I’ve heard other people using are: one, you are responsible for the quality of your work.
If you turn in work that hallucinates, you will be held accountable for that.
Another one is don’t put any of our proprietary or confidential or protected data into any of these tools, because it is not clear that it will be protected.
So we cannot expose our data to people who shouldn’t have it.
Another one is to say, leaders should be using these tools, and they should be using them regularly, and they should encourage others to do that, and we should have sharing around that.
Ethan Mollick has done some really smart work around that.
In the last couple of months, I’ve spent a lot of time with startups, and one thing that I’m really seeing with startups is that startups, from an HR, an internal perspective, are totally AI first.
They are doing a lot with these tools, and the investors are not saying, here’s money, go hire 100 people anymore.
It’s kind of like, here’s money, let’s scale this up.
And there’s the expectation that we’re not going to have these massive hiring sprees for these startups anymore, that you can actually do more with less.
And I think smaller companies coming up are realizing that one opportunity they have to compete is to be better at using AI inside of their organizations.
And the startups that I work with, and the people that I’m talking to, are using it all the time for lots of things, and they are doing a lot more with small teams.
And that’s probably going to grow.
They’ll have the trust advantage because the expectation is set from the word go, right?
That this is a fundamental part of how we do work, right?
And this is how we think differently about our staffing models.
And there are no surprises when you’re working from an agreed-upon foundation like that.
Yeah, I think so, as opposed to, let’s keep it away from us, and who’s doing it, you’re doing something behind my back, or maybe you’re going to use this to get rid of me.
It’s like, no, this is a tool.
We should use this to be competitive.
You should use this to extend your capabilities so that we can, you know, do more with less, which at the end of the day, if you do well, is more equity for you.
And that ends up just being advice for incumbents, you know, just act like a startup, right?
Like communicate transparently, use it everywhere, right?
Like, okay, awesome.
Jess, it’s a delight to be talking to you.
I’m so happy that you’re going to be engaged with Knownwell on an ongoing basis, but how about for the folks who are listening, who might not know how to find you, if they want to talk shop, where do they track down Jess Hall?
The bulk of everything I’ve done and written, and talks and things, is at hallwaystudio.com.
You can find me on LinkedIn and Instagram mostly, though I do occasionally dabble in some of the others, but not regularly.
It’s been wonderful.
Thank you so much for sharing your deep bench of expertise and insight.
We’re grateful.
Thanks a lot.
Thanks as always for listening and watching.
Don’t forget to give us a five-star review on your podcast player of choice.
And listen, we’d really appreciate it if you can leave a review or share this episode with someone that you know would love it.
At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand.
Hey, DeepSeek, what’s happening?
This episode, we’re talking about how to turn data into actionable insights.
And before we get your answer, I am concerned about whether or not we should actually be asking you any questions.
But regardless, tell us what you think.
First, clean up your data and look for clear patterns or trends.
Then ask specific goal-oriented questions to turn those findings into practical actions or strategies.
And now, you’re in the know.
Thanks as always for listening.
We’ll see you next week with more AI applications, discussions, and experts.