AI Knowhow, Episode 87 Overview
Skating to where the puck will be? Not enough. In professional services, it’s all about anticipating where your client—the skater—is headed next. But how do you stay ahead when clients start zigging, zagging, and getting slammed into the boards? How can you get out in front of these strategic pivots before they even happen?
This episode of AI Knowhow tackles that challenge head-on. Knownwell CMO Courtney Baker is joined by CEO David DeWolf and Chief Product and Technology Officer Mohan Rao to unpack how AI can empower service leaders to become true strategic navigators. They explore why staying close to clients, decoding digital breadcrumbs, and scaling the “trusted advisor” model are essential in today’s AI-driven landscape.
Anticipation vs. Reaction
David emphasizes the paradox inherent in professional services: becoming a trusted advisor makes scaling difficult. He highlights the necessity of intimately understanding client businesses, industries, competitors, and economic ecosystems. According to David, the key lies in synthesizing information across multiple sources, leveraging cross-industry insights, and proactively advising clients, even if the recommendations aren’t directly beneficial to your own services. “Those types of recommendations are what fuel great, trusted advisor relationships,” David says.
Leveraging AI to Anticipate Client Needs
Mohan highlights the role AI can play in anticipating client shifts through digital footprints. “The data is always there,” Mohan says. “The question is, can you take these signals, amplify them, and get them to the right people in time?” Digital traces such as industry disruptions, budget reallocations, and internal communications can be effectively harnessed by AI to reveal strategic opportunities or threats well in advance.
Practical Steps for Proactive Advising
The team shares clear, actionable insights professional services leaders can follow to ensure they’re passing the puck where the skater will be:
- Develop Deep Industry Insight: David recommends, “You’ve got to know your client’s business better than they do. Act like an insider studying their customers, competitors, market trends, and more.”
- Speak the Client’s Language: Clients trust advisors who demonstrate clear industry fluency and credibility. David noted the importance of understanding industry-specific terms and nuances to authentically engage clients.
- Proactive Engagement: Being proactive isn’t optional; it’s fundamental. Courtney reflects on the opportunity several providers she worked with missed when they failed to proactively address big issues on the horizon, like the advent of GDPR: “It felt like a missed opportunity. Their proactiveness and thought leadership totally would’ve shifted the game.”
Expert Interview: Ardy Tripathy of OpsCanvas
For this week’s expert interview segment, Pete Buer sits down with Ardy Tripathy, AI lead at OpsCanvas, to explore how his team is using AI to automate infrastructure intelligently. Ardy shares a refreshingly honest rubric for evaluating AI use cases and explains how he turns high-potential features into real, verifiable products.
He emphasizes evaluating AI based on clear data availability, contextual understanding, and ease of verification, stating: “Think of AI as an enthusiastic intern that somehow suffers from amnesia. They are very diligent. They’ll lift rocks around and do the thing you ask them, but they’re going to forget it five minutes later because their context window is limited.”
Ardy also highlights the importance of keeping AI use cases realistic and focused: “If you can verify the output easily, whether programmatically or through human oversight, that’s a potentially good use case for AI.” Noting the continuously evolving nature of AI, Ardy stresses the necessity for agility in decision-making, acknowledging the rapid changes in AI capabilities and costs that businesses must continuously monitor.
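Ardy’s verification criterion can be sketched in code. The example below is a hypothetical illustration, not OpsCanvas code — the function name, required fields, and escalation messages are our own. It shows the programmatic side of his rubric: accept an AI output only when it parses cleanly and contains every expected field, and escalate to a human otherwise.

```python
import json

def verify_ai_output(raw_output: str, required_keys: set) -> tuple[bool, str]:
    """Programmatic guardrail: accept AI output only if it parses as JSON
    and contains every required field; otherwise flag it for human review."""
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        return False, "escalate: output is not valid JSON"
    missing = required_keys - data.keys()
    if missing:
        return False, f"escalate: missing fields {sorted(missing)}"
    return True, "accepted"

# A well-formed response passes; malformed or incomplete output is escalated.
ok, status = verify_ai_output('{"step": "install", "version": "1.2.0"}',
                              {"step", "version"})
```

When an output can’t be checked with rules like these, the fallback in Ardy’s rubric is human-in-the-loop review rather than blind acceptance.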
Listen to the Episode
You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
Show Notes
- Connect with Ardy Tripathy on LinkedIn
- Learn more about OpsCanvas
- Connect with David DeWolf on LinkedIn
- Connect with Mohan Rao on LinkedIn
- Connect with Courtney Baker on LinkedIn
- Connect with Pete Buer on LinkedIn
- Watch a guided Knownwell demo
- Follow Knownwell on LinkedIn
The great Wayne Gretzky said that you’ve got to skate to where the puck will be.
But as a professional service firm, your role isn’t really to skate where the puck will be, but instead to pass to where the skater will be, aka your client.
And what if your client, the skater, started zigging and zagging and getting slammed into the boards by the defense?
And that’s gotta hurt.
You can see how things from your end would get really complicated.
Never fear though, by the end of this episode, you’ll have some clear directions on how you, as the service leader, can stay ahead of your client’s strategy shifts.
Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.
As always, I’m joined by Knownwell’s CEO, David DeWolf, Chief Product and Technology Officer, Mohan Rao, and NordLite CEO, Pete Buer.
We also have a discussion with Ardy Tripathy of OpsCanvas about using AI to automate infrastructure at scale.
But first, should you start confiding all of your deepest, darkest secrets to Meta’s new chatbot?
Let’s find out from Mohan and Pete.
Pete Buer is back again to help us separate the signal from noise in AI News.
Pete, are you ready to scare the audience a little?
I live for the opportunity.
The Washington Post ran an article recently titled, “Zuckerberg’s new Meta app gets personal in a very creepy way.”
Tech columnist Jeffrey Fowler tested the standalone Meta AI app and found it tracking and remembering just about everything.
What’s the real story here, Pete?
So, what you need to know, the high points first: Meta just basically stapled a surveillance engine onto their chatbot.
The app hoovers up decades of Facebook and Instagram data and fuses it with your new chats to create a rich, intimate profile of you, however private, that can be searched.
Memory is on by default and there’s no do not train switch.
So, deleting is a multi-step maze and it still doesn’t guarantee permanent destruction.
Privacy settings, in the words of the article, are laughable, insufficient and difficult to manipulate.
So, bottom line: absent your ongoing, active personal defense of your information on the platform, oversharing ends up being inevitable.
And all of your worries about paying the bills or that weird thing growing on your undercarriage become searchable by strangers.
So, great.
As a business leader, maybe not so great.
What do we take away?
If you’re building AI features into customer-facing products, I think the lesson is to treat the Meta example as worst demonstrated practice, a case study in how not to do it.
So, instead, make private use your default position.
Make the settings obvious, easy to find, easy to use, trustworthy, and keep all drafts private from the get-go with maybe an opt-in process for deciding what gets shared to what audience on what frequency, et cetera.
The whole thing only works for you as a business if your users trust it.
And the meta approach is not the way to earn trust.
In fact, Mark Zuckerberg had the chance to digest this kind of feedback.
It was fast in coming and could have made some changes in the name of trust with his community.
Instead, he’s kind of taken the sorry-not-sorry route, insisting that this is how Meta AI fills the quote-unquote friends gap that Americans are experiencing.
And as the saying goes, my feeling is: with friends like that, who needs enemies?
Thanks for the takeaway.
I hope all the business leaders take that advice.
Thank you, Pete.
Thank you, Mohan.
A big challenge for services leaders is staying ahead of their client’s strategic shifts.
I sat down with David and Mohan recently to hear how they think about solving this complex problem.
David, Mohan, so many firms talk about being trusted advisors.
But here’s the reality.
You can’t really advise well if you’re always reacting.
So, you know, I hear a lot of trusted advisors, but I also hear so many people just running around, putting out fires.
So today, I want to dig into what it takes to actually get ahead, to see a client’s strategic shifts before it’s spelled out in a QBR or an RFP.
Where should leaders be looking, listening, and learning to stay out in front?
And maybe more importantly, how can AI help us with that?
Courtney, I love this question because I think it’s one of the fundamental challenges in professional services.
At Knownwell, we’ve coined this idea of the paradox of professional services: the very thing that helps you grow and build a thriving firm is exactly what gets in the way of scale, right?
It’s one of the real complex parts.
Scaling professional services gets really, really hard because the more you do exactly what you’re talking about, be a trusted advisor, the harder it is to scale because not everybody can be a trusted advisor.
Not everybody can be in every conversation to have the context, to be able to connect dots and to make that piece of strategic advice.
And so, it’s really, really important that you begin to figure out different ways to be able to scale.
And I think ultimately when I think about being that trusted partner, being that strategic partner that is helping companies navigate through strategic shifts and take their strategy and advance it, those that become the trusted advisors are the ones that number one, they never cease to prioritize the client, right?
They are continually staying close to the client, understanding their industry, understanding their ecosystem, observing what competitors are doing, and observing what their economic ecosystem (vendors, clients, customers) is doing, continually synthesizing all of that information, and then they’re drawing conclusions from it and mapping it in many cases to outside perspectives as well, right?
Cross industry insights, for example.
This trend happened over here before, and this is what we learned from it, serving these types of clients.
If we apply that to your business, we think that it will help advance you.
Those types of realizations are really what fuel great trusted advisor relationships.
But that intimacy, that closeness, that ability to continually process and synthesize data points is so important to it.
But as you scale, it becomes harder and harder and harder to do.
Yeah.
You know, ultimately, it gets down to: how well do you know your client?
How well do you know the industry?
Right?
So what it boils down to is: can you anticipate these strategic shifts, as opposed to just always reacting to them?
All right.
So sometimes you have to react.
But hopefully, the percentage of times where you can anticipate goes up and up.
And there are ways to do it.
You’ve got deep industry knowledge.
You know what’s coming.
You have your delivery teams so tuned in to all of these subtle changes that are going on.
They report them back to you.
Right?
So the anticipation is really important, followed by just sort of positioning yourself as a strategic navigator of these changes, right?
In the client’s mind is so important, right?
And they look at you as an advisor who can help them get to wherever they’re shifting towards.
All right, so just being seen as the thought leader, as a strategic navigator is so important.
Helping them see around corners too, right?
There’s one thing to help a client get there.
It’s another thing to help the client figure out where they need to get to, right?
And I think that’s another piece of it.
Exactly.
It’s not like they know exactly where they should be going.
If you can be a trusted advisor who gives them a frame and views on it, that helps you get more information about the client in turn.
I think what would be helpful is really boiling that down into the practical application.
Like, what do we need to do to be that trusted advisor so that we get those signals early on?
Yeah, if I were training a new consultant, I would say first and foremost, you’ve got to know your client’s business better than they do.
And you’d be surprised that that’s very doable, right, as a consultant.
You’ve got to act like an insider, studying their customers, competitors, we talked about market trends, all these types of things, and really dive into that and take ownership for understanding it.
You might be surprised as an outsider, you’re able to observe bottlenecks, issues, competitive threats, those types of things, better than your client may be able to.
So that intimacy, knowing their business, I think is number one for me.
Number two, I think that what goes right along with that is speaking their language.
You know, this is one of the reasons why a lot of professional services firms home in on a specific industry, for example, right, or a specific domain.
Because you have to be able to speak the language that your client speaks, be able to identify with them, make them feel understood, make them feel like you truly do know it.
You don’t just know it artificially, but you are that insider.
And I think one of the ways you prove you’re that insider is by using the acronyms that their industry uses, right?
That would be an example, as Courtney twitches because she hates all acronyms.
Exactly.
I think the third to me is taking all of that.
And I think about the challenger sale.
I think this idea of challenging, giving advice, pushing back, really pushing them to improve, not just being the professional services firm that’s trying to sell something or sell my widget, right, my approach, but really doing your research and leveraging this inside knowledge to put forth real recommendations for how their firm improves, even if it is not to your best interest, right?
Like showing up, wanting them to get better, regardless of what that solution is, I think is really, really important.
You know, I have a little bit of an example there.
I remember when GDPR, just the initial rumbles of that were happening, and for marketers and a lot of business leaders, we were all like, what does this mean?
What is this going to look like?
There were just a lot of question marks of what that was going to mean for our businesses.
And I was shocked with all of the different partners and people, vendors that we worked with, how few reached out to us proactively about that.
Rather, I had to engage them and say, hey, have y’all thought about this?
Do you have plans?
Do you have any advice here?
And it felt like a missed opportunity that I had to go to them versus the other way around.
That could have been like, oh, wow, they’re a leader in this.
Even if that’s not something they would help us in, it just reframes my thoughts about them, their proactiveness, their thought leadership.
Just totally would have shifted the game.
Totally.
I’m sure you two have many, many more, but that’s one that comes to the top of my head.
Yeah, I think that proactiveness is key, is taking that leadership position.
In fact, that has shown up in our research.
When you look at service quality perception, which we talked about on a recent episode alongside responsiveness, one of the other drivers that sits right behind responsiveness is that proactiveness.
Are you just doing what I ask you to do?
Are you answering requests, or are you leading me where I need to go?
Are you taking me to the promised land?
That really drives the perception of service quality. It may not be embedded in the SOW you’ve signed, but it is part of how you shepherd and steward the client relationship to make sure you’re really building that trusted advisor relationship, exactly like you said.
I love that example.
Yeah.
There is the client relationship part, but as you see the shifts happening, it’s equally important to build an organization that’s flexible, that can upskill itself, and where you’re proactively doing talent development.
All of these things are so important, in addition to positioning yourself well in the minds of clients, because you’re seeing the signals in the industry, but you’re also preparing your own organization for the rapid pivots that might come.
I think that’s really interesting in so many ways.
It’s not like people call us up and say, hey, strategy is shifting, here’s what you need to know, so that you can get ready to show us how valuable you are to our business.
It’s in the margins.
So many times, it’s things that we’ve got to pick up on.
And so Mohan, where are the digital traces that can kind of come alongside us to help us clue in before the client actually says anything out loud?
Because they don’t always communicate it so clearly as soon as we would like it to serve our purposes.
Yeah, there are so many digital footprints from where you can get this, right?
So you can just divide this into macro and micro, right?
So in the macro sense, there could be major industry disruptions or regulatory changes or leadership transitions.
These things are happening.
You may see something in the budget reallocations.
You know, so there are these things that are happening in the macro environment that you need to be dialed into.
But also your teams, your delivery teams, are often noticing subtle client changes before they become formal strategic pivots because they’ve heard something in their all hands that they are discussing with you more at the delivery team level.
And if your delivery team level is able to pick these signals up and let you know as the leader, you can now get a jump on that and then go talk to the client.
So these things are all there.
The data is always there.
The advanced data is there.
The question is, can you harness it or not?
And these days, there are these digital footprints everywhere, right?
So whether it’s in a Zoom call or in an email or in a meeting you might have just had, where you just look at the meeting title, the question is: can you take these signals, amplify them, and get them to the right people, the right leaders, in time, so you can do something about it?
They’re all there.
It’s just a question of, are you open to receiving these signals or not?
So in the end, it sounds like staying ahead is part pattern recognition, these digital crumbs that you’re laying out, part intuition, and part willingness to act before you’re asked.
Kind of a choice to be proactive versus always just defaulting to, I’ve got too many fires, you get stuck in that loop and can never make that proactive decision to get out of it.
That’s a different type of leadership.
It’s a weird combination, I think, that really helps you move to a place of proactiveness ultimately to retain more of your clients over time.
Anything else as we wrap up this conversation?
I think this is one of those areas where we don’t necessarily think of AI helping us, but it really can take us to the next level.
AI isn’t just great for creating content, which is the default mode we think about, because of generative AI and how it’s taken off.
It’s also great at understanding natural language and synthesizing natural language.
And I think all of these digital breadcrumbs that Mohan’s talking about can be used to gather and synthesize and really summarize and bring to the forefront.
What do I need to be acting on before I even know I should be acting on it?
And AI can be the prompt, it can be the chief of staff to help drive us to the point where we can, with greater scale, be the trusted advisor.
And so as a leader of a professional services firm, I would encourage you, don’t just think about this as the chief of staff for you to do this.
How do you empower your entire workforce with artificial intelligence and platforms that can help them stay on top of all of it?
How do we stay close to these clients?
Know what’s going on, know what their customers are doing, know what their competitors are doing, synthesize all of that, and then go back and be their proactive trusted advisor?
The three things I had in mind are that you’ve got to anticipate these shifts that are happening and not always be reacting to them.
Positioning yourself as a strategic navigator of these various choices that are going on in the pivot, and then internally building a flexible service delivery model that can take advantage of these shifts.
Well, for everybody listening today, you’re probably someone who is very capable of getting out of that loop of just being reactive over and over again.
I hope some of these thoughts, ideas will help give you the inspiration to be proactive as you engage with your clients this week as you move to retaining more and more of them to grow your business.
David, Mohan, thank you as always.
Thanks, Courtney.
Thanks, David.
One thing that makes it easier to pass to where the skater will be is having the right intelligence at your disposal.
This is an area where the Knownwell platform excels.
It’s always listening for client and industry news that can help make sure you always have the knowledge and information you need to serve your clients effectively.
Go to knownwell.com to find out more and to find out how you can see your company’s data on the Knownwell platform.
Ardy Tripathy is the AI lead at OpsCanvas.
He recently sat down with Pete Buer to talk about what his team is building and how AI fits in.
Ardy, thank you so much for being with us today.
Thank you, Pete.
It’s a pleasure.
To get us started, can we hear a little bit about OpsCanvas and your role there?
Yeah.
OpsCanvas is a SaaS product that makes cloud deployments easier and faster.
They do that by giving you visibility and context about all of your cloud resources.
For example, if you’re an engineering manager or a director of finance and operations, this visibility gives you a bird’s-eye view of all the cloud resources that are managed or owned by the different teams and initiatives and the cost associated with them.
On the other hand, if you’re like an individual contributor or you are, let’s say a DevOps engineer, it frees you from the toil of having to constantly check the different requirements for your applications, ensure that the correct versions of the microservices are present, and it helps you perform the rollbacks and deployments in a repeatable manner.
Another persona that can be helped with OpsCanvas is the security or site reliability engineer.
Then what OpsCanvas really does is it enables them to know definitively what’s actually out there, and so that you can perform your audits, assessments, and implementations of security practices.
So how does OpsCanvas do it?
Like what’s the thing behind the scenes enabling all of this?
And I would say in a few sentences, it’s basically a GitOps approach.
And so what we do is we integrate with our customers’ CI/CD pipelines, so that we know about the cloud resources at the point of deployment, not after the fact.
So we use infrastructure as code to deploy and destroy resources.
And we have a record of all your deployments, so that we can give you one-click actions, like promote, rollback, clone, destroy application environments in a repeatable manner.
And as you know, this is the AI Knowhow podcast.
So let’s take one more step forward on the business and help us understand where AI fits in to your work.
Yeah, yeah.
So I am actually the AI lead at OpsCanvas and part of its engineering team.
So my role is to architect, scope out, and develop AI features in the OpsCanvas platform.
Nice.
And so, yeah, I mean, I can expand on that, or if you want to come back to that later, that’s fine.
Well, I would love to dig in on that right now.
And I’ve got a question I can start you with.
I guess one more bit of context.
AI applications exist all across the business.
And so, for the work that you do, where in the typical company’s business system are you helping with AI applications?
Yeah, so we’re a software company, and so I’m helping develop AI features in the backend, and so that our customers can achieve what they want to achieve in an easier manner.
So more specifically, I mentioned how we integrate with our customers’ CI/CD pipelines, and one of the AI features that we have, we call it the installer, basically installs the OpsRunner binary in the customer’s CI/CD pipeline.
And so that installation process and integration process is performed by AI.
So we don’t need customers to read and understand the docs behind OpsCanvas or OpsRunner, and then go and manually change their pipelines one by one.
They can use AI to do that automatically.
As I think you know, our leaders are heads of businesses and on the senior team, especially looking across operations, middle market, professional services.
And there’s something ironic about looking at the business case options around AI features.
It’s unusual because there are too many options to pick and choose from as opposed to too few or too many that can’t be justified, you know.
So I was hoping that you could help, particularly given your role, to understand what’s your rubric?
How do you go through the decision-making process to decide if a use case feature is a good one?
Yeah.
So there are different aspects to that question.
So let me first answer from a general perspective.
And with AI, I mean, the elephant in the room is that AI is constantly improving.
It’s constantly changing.
And to be honest, it’s very capable.
But we must remember at its core, it’s just a pattern matching machine that has been trained on tons of data.
So more broadly, if you have unique requirements for a particular use case that are not likely to be present in publicly available data, then you need to translate your requirements in a way using metaphors or using explicit steps to the kinds of data or reasoning it may have seen in its training data.
So think of AI as an enthusiastic intern that somehow suffers from amnesia.
So they’re very diligent.
They’ll lift rocks around and do the thing that you ask them, but they’re going to forget it five minutes later because their context window is limited.
As with any intern, you need them to have the context of your internal knowledge and company processes for them to be productive.
And you need to be able to check their outcomes.
So I guess this is a long-winded way of saying towards the rubric is, if you can verify easily an output, then that is potentially a good use case for AI.
And verification, yeah.
No, no, please.
Yeah, the last part is that verification can be human in the loop or programmatic, right?
So if you can write rules and guardrails, then that’s good as well.
So you have to have the right kind of contextual data, the right amount presumably, and it has to be verifiable either through technology solutions or by way of human.
I get that academically.
Can you make it real with an example, like?
Yeah, yeah.
So some of the most widely used applications of AI are programming assistants or writing assistants, like Copilot, Claude, and what have you.
And so what’s happening there is that you have an expert in the loop who is directing, let’s say, the AI to produce the outcome that they desire.
And that direction involves providing them the correct context, providing them the personal and professional choices, aesthetic preferences, and so on.
And once the output has come, it’s ultimately the responsibility of the professional to vet it and to make sure that it meets their standards.
So that’s a good example of an AI feature.
A bad example is somewhere, like I said, going back to the pattern matching analogy, where you constantly experience novel situations, which you cannot assume the AI to have encountered in its training.
So for example, customer service.
So end customer service is likely to encounter many novel circumstances, which may require a broad contextual understanding of the affordances before replying.
And just handing that over wholesale to AI is not a good strategy.
I mean, there is an example of a Swedish fintech company called Klarna, who tried to do basically customer service with AI chatbots.
And it sounds good in theory, but they ended up after having two years of this experiment, ended up rehiring many of their customer service representatives because the empathy was missing.
But even there, AI can be used to flag which are the more ambiguous interactions and then escalate to a human.
And so there are still efficiencies to be gained, but as I said, it’s constantly changing and you got to evaluate and re-evaluate, but with that basic setup in mind.
Super, super helpful.
Thank you.
I’d like to push on the rubric one more click, if I may.
Yeah.
Let’s say now I have 10 cases that have borne out, at least hypothetically, as being viable.
How do I decide which ones to tackle in what order?
What’s the strategic overlay once you’ve qualified a set of potential use cases?
Oh yeah, okay.
So of course, the use cases are helping your business in some way or other.
And there is an element of how much do we foresee the benefit to the business versus how much benefit do we get today by deploying it right now.
Because you have to balance the fact that AI takes time, even if yes, the AI models are present, it takes time to make it into a finished product.
We cannot just simply expose a ChatGPT dialogue to our customers.
That’s not a product.
For example, going back to our OpsCanvas case, we are using AI to install OpsRunner inside customer pipelines.
That is very different from a chat bot interface.
So the models have to be packaged, they have to have these guardrails in place.
So suffice to say, there is some amount of development time that takes before you reach and get that feature.
So there is a balancing act between how much potential benefit you would get from the AI feature and contrasted with the amount of development time that you need to be able to bring it to the market.
That calculus is going to be different for every company in their unique sector.
And so I think one thing that can make that calculus easy to happen is to have clear communication between engineering and product.
And then that is, I think, something that AI can help in fact.
Yeah, right.
Well, the process you’re describing sounds to me pretty aligned with how we think about prioritizing features generally in the context of business need and market signal and engineering capability and so forth.
Is there anything that’s different about AI use cases versus non-AI ones?
Yeah.
So, the only one difference that I can think of, like you absolutely pointed out, I completely agree, this calculus exists for every feature.
The one thing about AI features that is a bit different is the fact that the underlying models are just constantly changing.
For most businesses, they’re not training their large language models from scratch.
They’re using one of the existing open source or other commercial models, like OpenAI, Claude, and so on.
Those models keep changing and their price points keep changing.
Just yesterday, OpenAI reduced the price of their most premier model, called o3, by 80 percent.
So there’s an element, in that calculus, of: is it cost-effective today versus is it going to be cost-effective the day it goes to market?
I would say that there is a little bit more shooting in the dark with respect to the cost.
But the one thing is certain is that the costs are going to come down.
So something that you think is prohibitively expensive today might very well be very cheap, say, three months down the road.
Interesting, isn’t it amazing?
The market is changing so much.
That’s awesome.
Thank you, as always, for listening and watching.
Don’t forget to give us a five-star rating on your podcast player of choice.
At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand.
So, hey, DeepSeek, haven’t talked to you in a while.
How’s it going?
This episode, we’re talking about how services leaders can stay ahead of client strategy shifts.
What’s your advice?
Stay plugged into industry trends and keep a close eye on your client’s evolving goals.
Anticipate their needs before they even ask.
And don’t underestimate the power of regular check-ins to keep the conversation flowing and your strategy aligned.
And now, you’re in the know.
Thanks as always for listening.
We’ll see you next week with more AI applications, discussions, and experts.