Explainable AI vs. Understandable AI

AI Knowhow Episode 61 Summary

  • Understandable AI and explainable AI are both important concepts in the field of AI
  • Explainable AI is a more technical term, whereas understandable AI is focused on helping end users know how/why an outcome has happened
  • Both are vital to developing trust with end users, which is a necessary foundation for AI uptake

Explainable AI vs. Understandable AI

What’s the difference between explainable AI and understandable AI? And why does it matter that you grasp the distinction, and know when each is necessary? That’s the topic of the roundtable discussion on this episode of AI Knowhow, with Knownwell CMO Courtney Baker, CEO David DeWolf, and Chief Product and Technology Officer Mohan Rao.

Mohan highlights that explainable AI primarily deals with the technical aspects of why a model behaves a certain way. “It’s like saying, ‘I did A and then I did B and then I did C to produce D,’” Mohan says. “Understandable is always the goal, but sometimes it’s a lot harder because the context matters a lot.”

Understandable AI is geared toward creating intuitive AI systems that end users can easily comprehend without needing technical knowledge or deep explanations. One of the difficulties of ensuring understandable AI is that even developers and data scientists aren’t always sure what the outcomes of an LLM will be.
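To make the distinction concrete, here is a minimal sketch of the two layers. This is our illustration, not something from the episode, and it assumes a scikit-learn decision tree, one of the few model families whose reasoning can be traced step by step. The decision-path printout is the “explainable” layer; the one-line summary at the end is the “understandable” layer.

```python
# Minimal sketch: an explainable trace vs. an understandable summary.
# Assumes scikit-learn; the episode names no specific tooling.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(data.data, data.target)

sample = data.data[:1]            # one flower to classify
path = clf.decision_path(sample)  # nodes visited, root to leaf
leaf = clf.apply(sample)[0]

# Explainable: the literal "I did A, then B, then C" trace of the model.
for node in path.indices:
    if node == leaf:
        continue  # leaf nodes have no test to report
    feat = clf.tree_.feature[node]
    thresh = clf.tree_.threshold[node]
    op = "<=" if sample[0, feat] <= thresh else ">"
    print(f"node {node}: {data.feature_names[feat]} "
          f"= {sample[0, feat]:.2f} {op} {thresh:.2f}")

# Understandable: the same outcome, restated for the end user.
label = data.target_names[clf.predict(sample)[0]]
print(f"Bottom line for the user: this flower is most likely '{label}'.")
```

The trace satisfies an auditor or a data scientist; the plain-language summary is what an end user actually needs. With large neural networks, producing even the trace is an open research problem, which is part of why the two concepts diverge.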

David underscores the importance of these concepts for business executives, emphasizing that trust in AI models will be a key hurdle to clear for them to achieve widespread adoption. Trust in AI systems is built through transparency, allowing users to know the inputs and processes that lead to an LLM’s decision while also fundamentally understanding its outputs.

He cites the Perplexity search engine as a real-world example of an AI platform that provides an experience that’s both understandable and explainable.
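The pattern behind that experience is easy to sketch. The toy formatter below is our own illustration, not Perplexity’s actual API or retrieval stack; it just shows the idea that every claim in an answer carries a footnote pointing back to a retrieved source.

```python
# Illustrative only: a toy "answer with footnotes" formatter. This is
# not Perplexity's actual API or retrieval stack.
def answer_with_citations(question, retrieved):
    """retrieved: list of (claim, source_url) pairs from a search step."""
    lines = [f"Q: {question}", "A:"]
    for i, (claim, _) in enumerate(retrieved, start=1):
        lines.append(f"  {claim} [{i}]")  # each claim carries a footnote
    lines.append("Sources:")
    for i, (_, url) in enumerate(retrieved, start=1):
        lines.append(f"  [{i}] {url}")
    return "\n".join(lines)

print(answer_with_citations(
    "What is explainable AI?",
    [("Explainable AI documents how a model reached its conclusion.",
      "https://example.com/xai-overview")],  # hypothetical source
))
```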

Practical Applications and Importance in Regulated Industries

Explainable AI is particularly important for companies deploying AI solutions in regulated sectors like healthcare and finance, where being able to document how an AI system reached its conclusion is critical. The primary aim of explainable AI is to demystify complex AI models so that their decisions are more transparent, auditable, and trustworthy.
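In a regulated setting, that documentation often takes the form of a decision log. Here is a hypothetical sketch of such a record; the field names and the model name are illustrative assumptions, not any regulatory standard.

```python
# Hypothetical audit record for an AI-assisted decision. Field names and
# the model name are illustrative, not a regulatory standard.
import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, prediction, top_factors):
    """Build one auditable record of a model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced this
        "inputs": inputs,                # what the model saw
        "prediction": prediction,        # what it concluded
        "top_factors": top_factors,      # why, e.g. feature attributions
    }
    # A real system would append this to a tamper-evident audit store.
    print(json.dumps(record, indent=2))

log_decision(
    model_version="credit-risk-v2.3",  # hypothetical model name
    inputs={"income": 72000, "debt_ratio": 0.31},
    prediction="approve",
    top_factors={"debt_ratio": -0.12, "income": 0.08},
)
```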

Both forms of AI are essential and interlinked, contributing to the development of trust and usability in AI-driven platforms, which in turn fosters broader AI adoption in businesses.

Expert Interview: Dom Nicastro on AI in Customer Experience

CMSWire Editor-in-Chief Dom Nicastro joins Chief Strategy Officer Pete Buer for an expert interview on the intersection of AI and customer experience. Dom shares valuable perspectives on the areas where he sees the greatest demand for AI to help drive improved customer experience.

The main use case where Dom sees active demand is using AI to empower customer service agents by taking over mundane tasks, freeing agents to focus on delivering exceptional service. And in this day and age, where the customer is king and choice is abundant, Dom says the role of customer support has never been more important. “I think the support team is more important than the CEO in a company,” he says. To that end, it’s vital to equip support teams with the tools they need to be “specialized, skillful, thoughtful human beings,” as Dom puts it.

From a journalism standpoint, Nicastro is optimistic. While AI can assist with generating content, the value of human insight and credibility remains irreplaceable. AI acts as an editorial assistant rather than a replacement, providing efficiency while maintaining the human touch necessary for credibility.

Watch the Episode

Watch the full episode below, and be sure to subscribe to our YouTube channel.

Listen to the Episode

You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.

Show Notes & Related Links

Quick question for you today, and for this one, you’re gonna be on the clock.

What’s the difference between explainable AI and understandable AI?

Go.

If you’re like most executives at a professional service firm, you probably aren’t aware of these terms.

But today, that’s exactly what we’re gonna get into, and by the end of this episode, you’re gonna leave sounding really smart in your next meeting about AI.

Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.

As always, I’m joined by Knownwell CEO, David DeWolf, Chief Product and Technology Officer, Mohan Rao, and Chief Strategy Officer, Pete Buer.

We also have a discussion with Dom Nicastro, Editor-in-Chief of Simpler Media Group and CMSWire.com.

He discusses the intersection of AI, customer experience, and journalism.

But first, congratulations, because you’re officially deputized to join Pete Buer and me for a brand new segment we’re calling Dragnet.

Pete Buer joins us as always to break down some of the latest and greatest in AI.

Hey, Pete.

Hey, Courtney, how are you?

Good.

This week’s story comes to us from CNN.

The headline reads, AI helped the feds catch $1 billion of fraud in one year, and it’s just getting started.

Pete, what are your takeaways here?

So, a great real world example of AI being used as a force for common good.

A little bit of background.

The Treasury Department is the organization that we’re talking about here.

They issue payments on a truly massive scale.

So, 1.4 billion payments a year to 100 million people, totaling something like $7 trillion for items like Social Security checks and Medicaid, paychecks to federal workers, tax refunds, etc.

Treasury used machine learning AI with humans in the loop, so, not completely ungoverned, to detect fraud in the payment process.

And so far in 2024, so a year not even ended, they’ve recovered a billion in check fraud alone and another 3 billion in other forms of fraudulent activity.

Cool example of the magnitude of impact possible from bringing AI to the sorting through of absolutely massive amounts of data.

Now, the one thing I did not see in the article is how all of this fraud detection is going to translate into savings on our 2024 tax returns.

I guess that’s just something the Treasury Department is still working out.

Yeah, that’s awesome.

I do love this though, because certainly you think about AI and how it might be used for those on the other side, on the fraudulent side.

It’s great to see that it’s already being deployed for the good of us all.

Pete, thank you as always.

Thank you, Courtney.

Why do you need to know the difference between explainable AI and understandable AI?

Well, you’re about to find out in a discussion I had recently with David and Mohan.

David, Mohan.

Okay, so I kind of want to just like hand the ball to you here and just say, can you out of the gate break down what we mean by explainable AI and understandable AI?

Because it sounds like the same thing, but it’s not.

Yeah.

You know, the first thing to understand is AI models are very, very, very, very complex.

Right?

So we all know that.

The term explainability is a more technical aspect of it, which is, you know, you’re saying why the model made the prediction the way it did.

Right?

Understandable is more about the humans understanding what does that mean.

Right?

And there’s a difference between that.

So it’s almost like, you know, there’s a difference between saying and hearing.

Right?

So a model can explain the technical aspects of how it reached a decision, but the way a human understands it is different.

So there could be aspects of translation.

That’s very, very important.

And that’s the fundamental difference.

OK.

So, David, help everybody listening know, why is this important for executives to understand what explainable AI is versus understandable AI?

Well, I think one of the important concepts here is trust.

Right.

So in this world where we have this artificial intelligence, which is a machine running very, very, very complex algorithms, it’s hard when you see a result that looks like magic to totally trust it.

Right.

And so explainable AI is all about trying to make sure that we have clarity around why did the machine process the algorithm in a way that produces a certain result?

What were the inputs that led to that?

Now, the hard thing here is, even the most genius AI data scientists don’t really know the answer to that.

There’s literally ongoing research being done into exactly how these neural networks work.

But what we can do is begin to distill and we can understand the basics, and then understand what is the corpus of knowledge that it’s looking at, so that we can point to, here are some of the factors that helped this machine, that’s leveraging this neural network that is modeled after our brains, process the information in a way that came to this result.

That’s what explainable AI is all about, is about helping to convey this level of trust or confidence, if you will, in the actual result.

Can you two explain when you would need explainable AI in a business versus when you need understandable AI?

For me, you know, you always strive to have understandable AI because, you know, that is the language of the humans, and ultimately humans need to understand what’s going on, right?

So humans need to be in the loop.

So understandable is always the goal.

But sometimes it’s a lot harder because the context matters a lot.

Explainable is a lot easier, right?

It’s more like a process answer.

It’s like saying I did A, and then I did B, and then I did C to produce D, right?

So it’s a lower bar for me, right?

So you should always be striving for the higher bar for the exact reason that David said.

It’s all about building trust.

It’s about knowing why the machine is making the prediction it is.

Do we take it, not take it, that sort of thing.

And if you think about domains like health care or finance, you just cannot go with what a model said and execute a really big trade that you didn’t understand, right?

It could be illegal for all we know, right?

So, the context really matters: why these predictions come out, what they mean, what the guardrails are, and how do I understand and interpret it.

I think you make a really good point about regulated industries there, right?

So, there’s one thing for trust and confidence to give users the ability to act upon knowledge.

There’s another level there that you’re talking about when it comes to things like accountability and just really having the confidence you need to take action when you’re in a health setting, right?

You’re literally dealing with somebody’s lives, right?

And so, we really need to understand that.

And that goes to one of the common themes between both of these is the difference in AI, really understanding that now we’re working in a probabilistic world versus a deterministic world, right?

And so, these two concepts really come from that in this probabilistic world, especially because these models, and this is why we can’t understand them, they are calculating these probabilities off of billions, like literally billions with a B.

That’s not a typo, billions of parameters.

And so, when you’re doing that much math and that many probabilistic calculations, you really don’t know exactly the why behind it, but it’s really important.

We have to have different levels of explainability in different types of use cases.

But the same thing can be true about usable experience, even though it’s not as technical, even though it is about the understanding.

Because this is probabilistic, we don’t know what the LLM is going to actually produce.

And so, one of the real difficulties in building one of these probabilistic AI-first systems is that we’re getting outputs that we have no idea what they’re going to say before we actually present them.

So, whether we present them in a context that makes sense to a human is really hard to understand.

And so, there are some emerging methods for dealing with that.

And I think the understandable is going to intersect with user experience a lot.

And so, one of the ways that I think of it is this: explainable AI is really about the technical side.

It’s about the regulation, the trust, the confidence. And then the usable and the understandable fit together and will really drive the future of user experience in AI-first platforms.

It also seems like understandable AI will be really important for having more adoption of AI within businesses.

Really both, but it seems like you’re not going to get the general user using it at the level we hope to see in businesses without it.

Yeah, I think both are important.

When you build applications on top of models, understandable is very important.

If you’re a researcher or if you’re a data scientist, explainable is as important because they’re trying to make sure that the wires are connected properly in terms of prompts and data and all of that.

It’s also important, Courtney, to know that sometimes these words are used interchangeably, and you have to know what they’re talking about.

I also hear the word interpretable AI.

That’s another word out there.

These are all related things, but generally the language is used loosely.

But if you’re building AI native applications, you have to strive for understandable AI so humans can understand it.

Okay.

I’m going to put you two on the spot.

Can you give us an actual real life example?

I know you all have hit on maybe something in healthcare or what roles use this.

Can you give a scenario of where these would be used in different contexts?

One of the examples that I like, that I think brings the concepts together a little bit, is Perplexity AI.

If you’re not familiar with Perplexity, it’s one of these modern AI replacements for a search engine slash research tool.

You can ask it a question, and it will come back not only with the answer, but actually with references, almost like footnotes.

It brings me back to my high school research paper days.

I think that was an innovation that came about because of the lack of explainability and understandability. It helps to answer the question, how did you come to that? Here are the references.

And it addresses understandability by providing some context for the answer and displaying those references in a front-and-center way.

I think those are the types of things that we’re starting to see in end user products that really come back to these two concepts.

The other example, from the work that we are doing, is when you look at it from one client, or even specifically one scenario within a client, and you look at it technically and say, can we just explain this?

So that’s kind of more on the explainable AI side of things.

But then when you look at it holistically and say, if I act on this, how does it affect all of my clients?

So that is an example of understandable AI.

That’s a much higher bar: it shows the inner logic and requires understanding the macro effects that decision might have on all my clients, as opposed to one scenario with one client.

Okay, this is really interesting and hopefully helpful for those listening, so if you hear these terms, you really understand the difference between explainable AI and understandable AI and... what’s the other one?

I’m just kidding.

Interpretable.

But is interpretable, which one is that?

Is it like explainable or understandable?

It’s more... it’s all used so loosely, the language is loose, but I think of it more as understandable.

That’s how I think of it too.

Yeah.

Interpretable is how you interpret it as a user; understandable is a similar concept, where the target audience is the end user of the output. Whereas explainability to me is, in many cases, about the researcher themselves, the data scientist, the technical side, and the target audience is more the scientist producing it, making sure they have that level of confidence, if that helps.

Okay.

To wrap this up, explainable AI is more technical than understandable, but there’s also a lower bar for explainable AI than understandable AI.

It’s not technically hard to say we got A input and so we did B, then C, then D.

And then understandable AI is an important component of the UX of AI, the user experience of AI, and is important for developing trust with end users.

David, Mohan, any final thoughts to this conversation?

You know, the thing that I would say is there’s a lot of research being done here at every moment, especially diving into the explainability of the large LLMs.

And I think as you see more and more enterprise level products especially, and consumer oriented products come out that leverage these LLMs, you’ll see more on the usability side of it and the understandable, interpretable side of it.

And especially when regulatory compliance is important, the context matters a lot, and that’s when understandable AI is super important.

Awesome.

David, Mohan, thank you as always.

Thanks.

It’s a pleasure.

Are you an innovator who’s looking for ways that AI can materially impact your business?

If so, we’re looking for people just like you to become part of Knownwell’s early access program.

If you’ve listened to the show before, you know that we think there is so much more to AI than the chatbots we’ve come to know, and frankly, love.

But we want your help in proving it.

Go to knownwell.com to sign up for our beta waitlist, watch a demo of the product, and more.

Go to knownwell.com.

Dom Nicastro is the Editor-in-Chief of Simpler Media Group and CMSWire.com.

He recently sat down with Pete Buer to talk about AI’s role in driving superior customer experience.

Dom, welcome.

So nice to see you.

Yeah, nice to see you too.

I’m so looking forward to this conversation.

You live at the intersection of two worlds that are both interesting and affected by AI: customer experience and journalism.

So let’s get into it.

Tell us about CMSWire and your role there.

Absolutely.

I started as a reporter about 10 years ago, worked my way up the chain, and I’m the Editor now, Editor-in-Chief.

And CMSWire, it started out catering to content management professionals, CMS, the FatWires, the Adobe AEMs of the world, Sitecore.

And we still do that coverage, but now we’ve really expanded our core coverage areas: customer support, customer experience, contact center leaders, that kind of thing, but definitely still dabbling in marketing too.

We want to talk about things that are happening in the world of the CMO, the Chief Marketing Officer.

So we’re very, very interested in what’s happening in that role and where they’re going with AI, what they’re doing with it, what the customer experience leaders are doing with it too.

So yeah, website, publish daily content, email newsletter, podcasts, got some video series, and that’s about it.

Well, as you know, Knownwell has an interest in the role of AI in enabling customer management activities to the ends of retention and growth.

So let’s start with customer experience.

What’s happening at the edge because of these two letters that we keep throwing around all the time?

What’s cool in customer experience driven by AI?

So here’s the synopsis of what I’m hearing, Pete, with customer experience in AI.

The leaders out there, the contact center managers, the VP of customer experience, the chief customer officers, they are trying to use it to build up their agents.

And if it can be infused into those scenarios where it’s gonna make the agent’s job better and make them smarter, empower them, then that’s where you can see some actionable results.

Agent experience is the number one thing I hear from these leaders on the ground floor.

I’ve been to a couple customer experience conferences this year, and that’s what they tell me.

It’s almost like an employee experience conference, in a sense.

They want to take tools, take technology, take processes, strategies back to their organizations and make things better for their support team.

Because, Pete, I think the support team is more important than the CEO in a company.

I mean, that’s the front line.

Working with the customer every day.

Are there particular use cases that are bubbling up as the most promising?

It’s avoiding the mundane tasks that they have to do, like giving them a SKU number.

Avoiding having the agent look up those kinds of things.

So, it’s in a broad sense, it’s moving them away from those mundane tasks and kind of empowering them just to be human beings.

And like, all right, you know what?

Because we talked to the Aflac customer support leader, one of their customer support leaders, on one of our shows.

And they have customers coming to them in horrible times of need.

Like, I have cancer, I went through chemo, what’s my reimbursement going to be and all that.

So, they want to have AI sort of handle the automation part of it, the mundane, and empower their agents to be specialized, skillful, thoughtful human beings on the call.

So, if AI can do that stuff, get out of the way and then let the agents do their thing.

Gotcha.

So, the Jarvis-enabled Iron Man image, I guess.

Are you seeing much that’s customer-facing or is it really mostly about powering the agents?

Agents, yeah.

It’s mostly back end, from what I’m seeing, even as a consumer.

I don’t know sometimes if it’s AI handling my stuff, but I think we’re still behind.

I think we’re worlds behind of getting to where we can be.

You know, I mean, my wife and I had an agent experience where the agent was a sweetheart, she was great and everything, but it took like 30 minutes just to tell me if I ordered a product, right?

Like I didn’t remember; I knew I had it, but I wanted to make sure they had it in their inventory so we could do things because of that.

And it took them 30 to 35 minutes to get me that answer.

You know, we were on hold, and my wife would just laugh and talk and dance to the hold music, like the commercial you see with that, I forgot his name, the Top Gun guy, he’s dancing to the hold music, fine.

On paper, you look at that and say, that’s a horrible experience.

It took you 35 minutes to get a simple answer, and most people might complain about that.

We were fine, but I think that’s where AI needs to come in, with some kind of automation, and help those agents, because the agents are the front face to people; they’re going to take the brunt of it if we don’t empower them.

So you’ve mentioned journalism a couple of times, and we do have some questions for you on that front.

I know that it’s a passion.

I read that you started in the fifth grade in Gloucester, writing about the high school football team.

So that’s very cool.

Tell us where journalism is taking you, and then I’d like to go down the AI rabbit hole as well here.

Where journalism is going with AI right now?

Well, I’ll tell you this.

I am absolutely unafraid of AI in terms of replacing my job.

Can it write articles?

Yeah.

You know, generative AI, of course it can.

Can it edit articles?

Yeah, of course it can.

Can it edit better than me?

Maybe, yeah, probably, right?

Because, like we said, generative AI is an amalgamation of all the other editors who have ever done the real work in the past on the internet.

Isn’t that funny how we look at AI now and how fascinating it is?

But yet the model is this, everything that’s ever been said on the internet, like that’s what we’re taking information from?

Yeah, right.

Didn’t they tell us not to do that when the internet came out?

Right, it used to be not reliable.

Haven’t we always said, like, don’t take anything for what it’s worth there.

Don’t use anything on the internet.

Now, we’re like, use it all.

But with journalism, it’s not scaring me because journalism is always going to need sources.

It’s always going to need backup.

It’s always going to need credible, thoughtful people that give you information and commentary that has no agenda.

You know, someone that’s just passionate about a topic.

You know: “This is where voice of the customer is failing, and I’m going to tell you why.”

And they’re not going to sell something, you know.

An AI right now cannot go interview someone in a live interview and then take that live interview and write an article on it, right?

Because what if cmswire.com wrote a story on how AI is infusing CX, right?

Five ways.

And we asked AI to do that because we needed it fast.

And we printed it, people liked it, it got great engagement, maybe whatever, 10,000 unique users, right?

Wow, this is amazing.

Now the one person that comes back and says, hey, I read the article, where did you get number four, that tip?

Because I want to challenge that.

I think, I think some of that’s wrong.

Pete, what am I going to say to them?

That AI wrote it?

Yeah, ChatGPT.

Yeah, go talk to them.

No, it’s on CMSWire, it’s our responsibility.

It’s our, that’s our ethics policy.

It’s our, you know, it’s everything we do, and we need to back it up, and it can’t do that now.

So, that’s the big picture for me, Pete, with AI and journalism is it’s not replacing, it’s an intern.

It’s like an editorial intern that, you know, you let do things for you, help you, but then you say, check that, check what it just did.

I find too, and I realize we’re still in early days, that generative AI is great at giving you the five-step list of what to do and in what order that the entire world appreciates and understands.

Whereas insight and distinctive writing and thinking kind of sit among the outliers, and it’s where gut instinct takes you that averaging doesn’t.

I would think that’s another courage-giving fact for journalism.

Yeah.

It’s almost like, don’t…

It’s like, let AI come into your world.

Don’t go in their world.

What I mean by that is, give me five takeaways on this 7,000-word podcast transcription.

Like, I’m sure you’re going to do with this one after.

And it just gives you those takeaways.

And of course, you’re going to read those takeaways.

You’re going to edit those takeaways.

Edit this…

What is this contributor of ours trying to say in this paragraph?

Can you help me out?

I can’t figure it out.

And they might help with that, right?

So that way, they’re not giving you outputs from the ground floor.

They’re taking what you had researched, you’re going to publish, and they’re just trying to answer questions that you have about it and refine it.

That’s it.

Last question.

Many of the folks who are listening are still relatively early on their generative AI journey, starting to discover the ways that it will impact, whether it be threat or opportunity, you know, their jobs and their domains, and they’re needing to engage.

What would you advise someone who’s listening if they want to get up to speed fast and kind of master it in the context of their world and the way you have done with journalism?

Yeah, I think they need to loop in their colleagues and have it be kind of like a team effort.

We have like Slack channels where we talk about our AI use cases, how we’re using it, we call them brown bags, where we’re getting people together and talking about what we’re doing with it.

We’re working on policies and stuff like that.

So my answer is, treat it like you do a lot of other initiatives.

Have policies and procedures around it, right?

What are we doing?

What are we trying to do with AI?

Answer that question.

And people can certainly explore, but I think at the end of the day, it’d be nice for companies to have some sort of standards in the company.

Like, your IT needs to be involved.

Like, what kind of data are you putting into these systems?

If you’re a marketing team, you’re a customer experience team, what are you looking at all day?

Customer data, are you just going to feed this private information into a tool like ChatGPT, you know what I mean?

Like, people’s phone numbers, their emails, what if you’re doing like a salary survey or something?

I just think the answer is, you know, the highest people in the organization need to be involved in setting policy, setting ground... not ground rules.

I hate to have restrictions, right?

But some kind of policies and procedures to get off the ground and now let’s go nuts with it and see if it can help the business use cases and really measure it too.

If you look at before ChatGPT and after ChatGPT for cmswire.com, I can’t tell you, Pete, and lie and say, we’ve broken through these major engagement levels because of ChatGPT.

It just hasn’t happened.

So measure, measure like you would with any other initiative, any other technology.

Is it giving you ROI?

Well, right now, it’s only 20 bucks a month.

So it’s definitely giving us some ROI.

But end of the day, what is it actually helping with?

Is it efficiency?

So you got to have those measurements and always kind of just checking what you’ve done.

Awesome.

Thank you, Dom, so much.

Yeah.

Thanks, guys, for having me.

Thanks, Pete.

Appreciate it.

Thanks as always for listening and watching.

Don’t forget to give us a five star rating on your podcast player of choice.

And we would really appreciate it if you can leave a review or share this episode on social media.

At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand.

Hey, Gemini, what’s happening?

This episode, we’re talking about the difference between explainable AI and understandable AI.

What’s the difference?

Explainable AI is about understanding how an AI model makes decisions, often for technical experts.

Understandable AI, on the other hand, is about explaining why an AI made a decision in a way that’s easy for everyone to grasp.

And now, you’re in the know.

Thanks as always for watching or listening.

We’ll be back next week with more AI applications, discussions, and experts.
