Working with AI: Why Security is a Must

watch-on-youtube-final
aaple-podcast
listen-on-spotify

AI Knowhow Episode 82 Overview

  • Learn why AI introduces new security risks that go far beyond traditional SaaS vulnerabilities

  • Get a clear, non-technical framework for evaluating the security posture of any AI vendor

  • Hear the specific red flags that signal an AI platform may not be ready for enterprise use

As organizations race to integrate AI tools across functions, one essential topic is rising to the top of every executive’s priority list: security.

In this week’s episode of AI Knowhow, host Courtney Baker is joined by Knownwell’s Chief Product and Technology Officer Mohan Rao and CEO David DeWolf for a deep dive into how C-suite leaders can think strategically, and be sure they’re asking the right questions, about AI platform security.

Why AI security is different—and riskier

Traditional SaaS security focused on data protection, governance, and operational hygiene. But AI introduces non-determinism—models that don’t always behave the same way. That variability expands the attack surface and introduces new vulnerabilities, including:

  • Prompt injection attacks (today’s equivalent of SQL injection)

  • Data poisoning, where malicious actors tamper with training data

  • Model inversion, which can leak private information from training data

“You’re no longer just protecting an app,” David explains. “You’re securing a dynamic system built on an evolving model, and that’s a big shift.”

The right questions to ask vendors

Executives don’t need to become security engineers. But they do need to be able to vet AI vendors with confidence. A big part of this is knowing what questions to ask when evaluating new AI products or vendors. According to Mohan, here’s what to ask:

  1. What foundational model is your platform built on?

  2. Have you incorporated any open-source models?

  3. How was your model trained, and where did the training data come from?

  4. What safeguards do you have against prompt injection and data leakage?

  5. Can you provide documentation on your security posture and practices?

If a vendor can’t clearly answer these questions or provide supporting materials, that’s a red flag.

What good looks like when it comes to AI security

Security isn’t just about having the right tech. It’s also about organizational maturity. Look for signs of a serious commitment, including a named CISO or senior executive focused on security, proactive monitoring and clear architecture diagrams, and adoption of best practices like OWASP for LLMs — a new standard for evaluating AI application security.

“The reason you buy a product is that they’ve done the hard work for you,” Mohan says. “You’re paying for trust.”

Watch the episode

Watch the full episode below, and be sure to subscribe to our YouTube channel.

Listen to the episode

You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.

You may also like