AI Knowhow Episode 82 Overview
-
Learn why AI introduces new security risks that go far beyond traditional SaaS vulnerabilities
-
Get a clear, non-technical framework for evaluating the security posture of any AI vendor
-
Hear the specific red flags that signal an AI platform may not be ready for enterprise use
As organizations race to integrate AI tools across functions, one essential topic is rising to the top of every executive’s priority list: security.
In this week’s episode of AI Knowhow, host Courtney Baker is joined by Knownwell’s Chief Product and Technology Officer Mohan Rao and CEO David DeWolf for a deep dive into how C-suite leaders can think strategically, and be sure they’re asking the right questions, about AI platform security.
Why AI security is different—and riskier
Traditional SaaS security focused on data protection, governance, and operational hygiene. But AI introduces non-determinism—models that don’t always behave the same way. That variability expands the attack surface and introduces new vulnerabilities, including:
-
Prompt injection attacks (today’s equivalent of SQL injection)
-
Data poisoning, where malicious actors tamper with training data
-
Model inversion, which can leak private information from training data
“You’re no longer just protecting an app,” David explains. “You’re securing a dynamic system built on an evolving model, and that’s a big shift.”
The right questions to ask vendors
Executives don’t need to become security engineers. But they do need to be able to vet AI vendors with confidence. A big part of this is knowing what questions to ask when evaluating new AI products or vendors. According to Mohan, here’s what to ask:
-
What foundational model is your platform built on?
-
Have you incorporated any open-source models?
-
How was your model trained, and where did the training data come from?
-
What safeguards do you have against prompt injection and data leakage?
-
Can you provide documentation on your security posture and practices?
If a vendor can’t clearly answer these questions or provide supporting materials, that’s a red flag.
What good looks like when it comes to AI security
Security isn’t just about having the right tech. It’s also about organizational maturity. Look for signs of a serious commitment, including a named CISO or senior executive focused on security, proactive monitoring and clear architecture diagrams, and adoption of best practices like OWASP for LLMs — a new standard for evaluating AI application security.
“The reason you buy a product is that they’ve done the hard work for you,” Mohan says. “You’re paying for trust.”
Watch the episode
Watch the full episode below, and be sure to subscribe to our YouTube channel.
Listen to the episode
You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
Show notes
- Connect with David DeWolf on LinkedIn
- Connect with Courtney Baker on LinkedIn
- Connect with Mohan Rao on LinkedIn
- Connect with Pete Buer on LinkedIn
- Watch a guided Knownwell demo
- Follow Knownwell on LinkedIn
If AI uptake requires an abundance of trust, what security measures should you have in place to ensure that trust doesn’t get broken with AI?
And what are the top questions you should be asking when deploying AI into your organization?
Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.
As always, I’m joined by Knownwell CEO, David DeWolf, Chief Product and Technology Officer, Mohan Rao, and NordLite CEO, Pete Buer.
But first, put on those cargo pants and hiking boots because it’s time for another installment of AI in the Wild.
Hey Pete, how are you?
I’m good Courtney, how are you doing?
Doing good.
Salesforce CEO, Marc Benioff wrote about the rise of AI agents in a recent Wall Street Journal opinion piece with this subtitle.
Today’s CEOs are the final generation of executives who will lead exclusively human workforces.
Pete, I think this is so interesting.
What do you take away here?
Yeah, you love the subtitle.
It conjures images of General Grievous and Count Dooku leading drone armies in business.
In a way, that’s not a bad analogy really, is we’re kind of talking about the possibility of a widespread fundamental reallocation, reassignment of work between human and machine.
Given its provenance, the article focuses on the use of agentic AI at Salesforce, but it’s a good use case.
They’ve got AI agents working with 9,000 customer service reps who resolved, or I guess that resolved 84% of incoming service requests entirely on their own, with only 2% of the machine resolved requests requiring any kind of escalation.
Well crafted agentic AI solutions are really compelling.
They post excellent numbers as per the Salesforce example.
They do work independently.
They’re available 24-7.
They can be trained to be expert in just about any field.
And most importantly, they don’t steal your lunch from the refrigerator.
Most importantly, they change the math of scaling a business.
Small companies can now dress for battle with the might of a human army behind them.
And large companies can reallocate major resources to drive innovation and change and adjust to market realities.
I think that’s the biggest change for leaders in all of this.
It’s one of thinking differently about strategy, not working backwards from a theory of your current constraints, but working forward from the possibilities of what a different world can look like, a world that is accessible to you now.
Can we be creative enough, fast enough, to shape new, compelling journeys for our businesses?
Well said.
I think that’s a really big challenge for everybody listening to think through and to deploy in their organizations, if they’re up to it, that is.
I just keep throwing down the gauntlet more.
This is a challenge for you, if you accept the challenge.
Pete, thank you as always.
Thank you, Courtney.
Every silent customer departure leaves revenue on the table.
Knownwell’s AI driven commercial intelligence platform flags churn risk before they happen and prescribes the next best action.
So you can keep more of the clients you’ve already earned.
We’ve all heard the stat, just a 5% boost in retention can lift profits by up to 95%.
Ready to protect your revenue?
Visit knownwell.com today to see your company’s data in the Knownwell platform.
I recently sat down with David DeWolf and Mohan Rao to get their advice for all of us non-technical leaders on what questions we should be asking when we’re deploying AI initiatives in our businesses.
David, Mohan, welcome back.
Today, I want to talk about a topic that I think will be really helpful for our audience, and that’s security.
Super sexy topic, but it is a hot topic, it is trending.
And I think as we are bringing in more AI platforms and tools into our businesses, especially those of us that are not technicians, really understanding how do we ask the right questions?
How do we know that we’re bringing something into our organization that has solid security?
So Mohan, David, I can’t think of two better people to talk about this.
You know, what’s interesting about AI and security is that it goes both ways.
You need security, obviously, and build with the security foundations to build AI products, but then you can enhance security using AI.
So it can go both ways, which is the fascinating thing about AI and security.
Is it AI for security or is it security for AI?
It’s obviously both.
So Mohan, what are the questions that you would recommend or things that we need to be thinking about?
Again, can you break it down for someone like me, Mohan?
What would I need to know, you know, when I’m considering bringing in a new AI platform for, let’s say, the revenue team at my company?
What would I need to be thinking about when I consider it to feel confident that, okay, they have done the work, this is a secure platform, I can feel confident in moving forward?
Yeah.
You know, the way I think of it, Courtney and David, is sort of, there are all the things that we’ve always done with SaaS products before, right?
Where is it hosted?
Who has got access to it?
Generally, trust questions, confidence questions, legal questions, the frameworks under which the vendor operates.
All of these things are standard and we’ve done it for many, many, many years while buying products.
So all of those principles are still good, right?
It’s built on a foundation of, do they have good governance?
Do they test often?
What is their security architecture?
What is their security operations?
There’s a pyramid that you generally go through and you’ve got to do all of it.
What is different about AI is that the, as we’ve talked in this podcast a lot before, is that it is a non-deterministic system.
So the model, unlike the code that we used to have in traditional SaaS is not deterministic.
So you don’t know what output it’s going to provide.
And that leads to hackers being able to exploit it in different ways.
And that is because data is, data used to be something that was input, that got crunched in different ways and there was output.
As opposed to now, it is the fuel and the feedback loop of the whole system.
It is central to what we are building.
And therefore, the way you evaluate it and the supply chain through which this data comes in and how the model has been trained, there are some extra steps you’ve got to take on top of what you’ve always traditionally done while evaluating a new vendor, as you asked, with AI systems.
Yeah, Mohan, when you say that, where my mind goes is the expansion of the attack surface, right?
The model is now part of this attack surface, and there are some very specific things that come to my mind as I begin to contemplate that, right?
The ability to do data poisoning, right?
How do you corrupt the model with training data to be able to manipulate the outcome, the output of that model and the behavior?
Prompts, another piece of it, right?
How do you inject malicious input prompts to impact the generative models?
And that, you know, I think the equivalent in yesterday’s world was SQL injection, right?
And as you think about SQL injection, well now, it’s not just these tricks to get database queries, it’s actually the prompts into the LLM and how it’s being prompted.
What do those attack surfaces look like?
And then of course, there is the tricks to tease out private data from trained models and inverting kind of that input-output sequence.
You know, the private data, the things that’s fed it to train it are supposed to be for training it, not necessarily for getting down and finding personal information or those types of things, but folks are focused on that.
So those are three very specific examples to me that explain kind of how this attack surface has expanded, and we have to think differently in new ways about the types of security we’re thinking about.
That’s exactly right.
I mean, at a high level, those are the three major things to focus on.
So you hit all of it.
David, what makes this complex is, generally, you’re building on somebody else’s model, right?
There is no, there are only a few foundational model providers out there.
You trust them in different ways, because you’re building on the trust that they have done the right things.
So it is a foundation of trust that gets built up, and you’ve got to just make sure that your trust architecture is good.
And what makes this harder is when you start bringing in some of the open source models and you don’t know exactly what data it has been trained on.
And even if they’re trained on the right thing, if somebody got into that and poisoned the data, like you were saying.
So just understanding the supply chain of the whole of how you acquired the model and what was it trained on, and just being able to audit it end to end is really important.
So therefore, what you can do is you can essentially have control points either at the model entry and at the model output.
That’s how you build security architectures.
You say that, let me make sure that every prompt that goes in, not from the application, but at the moment that it’s going to go into the model, let me check that and make sure that it didn’t get modified from what I said is the prompt, and be able to tap into that prompt at that level, and be able to evaluate it.
Similarly, when the output comes out, you want to make sure that you’re evaluating it in some manner, so you know that the output is right, and building an architecture where you’re securing the model just like you were securing the database traditionally is going to be key here in building your security architecture.
So boil that down for me, Courtney Baker, CMO.
The question that I’m going to want to ask is, hey, what model is your platform built on?
Is that the question?
And then two, am I asking, hey, are those, did you build on any open source models?
Would that be a good question?
And then, how are you training your model?
Is that the third question?
I’d say those are really good starts.
So you want to understand that the vendor that you’re considering has thought through all of these questions deeply.
So ultimately, when you buy a product, the reason you buy a product is that they’ve done the hard work for you for the money that you’re gonna pay in return for the service.
For you, it is not so much that you have to actively worry about it every day, but as the customer who is buying this product, you ask these questions while you’re procuring.
How have you built this?
What safeguards have you taken?
And how do you prevent prompt injection?
As David said, how are you making sure that if you use fine-tuning, where is that data source coming from?
Just understanding the security architecture and building enough confidence that they’ve thought through these things and these LLMs are being, that they bring in a whole new set of security threats on top of the basics, and they’ve considered it well.
That’s what you’re really asking for.
Okay, let me ask you this in another way.
What are the red flags that I should be looking for?
I think if they’re not able to explain it well, right?
So in a simple document that they have, and they declaratively have said, this is how we do it, this is our position on this, right?
And if they’re not able to either say it or provide you a document, that should be a red flag for you.
That, you know, maybe they’ve not thought through all of this yet, right?
Then there is a question of, are they monitoring these things themselves?
How well are they monitoring it?
That’s really important.
Then there is a question of leakage.
Is my data going to leak out anywhere and have they thought about this?
Just asking these questions on top of all the things that you’ve traditionally asked a SaaS vendor is going to be what you should do.
On top of it, you can always look at the organization and say, do they have a CISO?
Do they have a CTO or a CEO who is totally in the loop and managing this as a first citizen problem of the company?
These are all the things that you can get clues from.
If they’re not able to answer this well or provide you a document, or you don’t get the confidence while procuring, those would be red flags for me.
Wow.
Listen, whatever vendor I get on with next is going to be like, wow, this girl knows a thing or two about security and AI platform.
So thank you, Mohan, for making us all a little bit smarter as we continue to bring more and more AI platforms into our businesses.
Any final thoughts?
This is my favorite episode so far.
I get to sit back and just learn a lot.
It was awesome.
Thanks for sharing your wisdom, Mohan.
I love it.
Don’t forget the basics.
There is something we’ve traditionally used called OWASP, which is a set of top 10 things that you look for.
We’ve had it for regular web products.
Now, there is something called OWASP for LLMs.
You’re just going to go through that list.
There are checklists available out there.
I’m sorry.
Can you break down what is it called?
It’s called OWASP.
O-W-A-S-P.
Listen, you all got to get some marketers up in here.
You love acronyms though, Courtney.
Stop it.
You know I don’t.
You do.
It’s your favorite thing.
You know I don’t.
You’re going to love the name even better.
It’s the Open Worldwide Application Security Project.
Isn’t that just a phenomenally named piece of work there?
Yes.
Yes.
You know what?
But here’s, okay, it’s OWASP.
Is that it?
OWASP?
It’s OWASP.
Every other person that’s not a technician listening to this podcast, we’re going to be dropping OWASP all the time now.
So just wait.
It’s going to be trending pretty soon.
Well, David, Mohan, seriously, thank you.
Awesome.
Thanks.
Thanks as always for listening and watching.
Don’t forget to give us a review on your podcast Player of Choice.
And listen, we’d really appreciate it if you’d share this episode with someone you think would enjoy it.
At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand.
So, hey ChatGBT, this episode, we’re talking about the top questions you should be asking when deploying AI into your organization.
So, what do you think?
First off, they should be asking where sensitive data is going and who or what has access to it.
Then they’ve got to wonder, if something goes wrong, who’s accountable, the humans, the AI or some ghost in the machine?
And now, you’re in the know.
Thanks as always for listening.
We’ll see you next week with more AI applications, discussions, and experts.