Are AI & ROI Incompatible?

Are AI and ROI incompatible? David DeWolf and Courtney Baker recently attended a conference where an OpenAI executive said the company is not really seeing the executives they work with looking for ROI from AI yet.

The comment sparked a debate between David and Mohan Rao in our roundtable during this episode on whether executives are expecting to see a return on their AI investments.

“Anybody who tells you that the cost of this is going to be $400,632.42 is wrong,” Mohan says. “Because there is a level of imprecision here that you’ve got to live with.” Where do we land? Even if executives can’t make a precise ROI calculation, they can at least get started with an ROI hypothesis that they test and learn against.
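To make the “pencil, not pen” idea concrete, here’s a minimal sketch of what an ROI hypothesis with explicit error margins might look like. The helper function and every number in it are purely illustrative assumptions for this post, not figures discussed in the episode.

```python
# A minimal sketch of an "ROI hypothesis" expressed as a range rather than
# a single precise number. All figures are hypothetical placeholders.

def roi_range(gain_low, gain_high, cost_low, cost_high):
    """Return (pessimistic, optimistic) ROI when gains and costs are only
    known as ranges -- written in pencil, not pen."""
    worst = (gain_low - cost_high) / cost_high  # lowest gain against highest cost
    best = (gain_high - cost_low) / cost_low    # highest gain against lowest cost
    return worst, best

# Example: a pilot expected to return $150k-$300k against $100k-$180k of cost.
worst, best = roi_range(150_000, 300_000, 100_000, 180_000)
print(f"ROI hypothesis: {worst:.0%} to {best:.0%}")  # roughly -17% to 200%
```

The point isn’t the arithmetic; it’s that a range like this gives executives a hypothesis they can test and refine as real costs and gains come in.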

Pete Buer also talks with board member and CEO advisor Anna Catalano to get her take on how boards are thinking about AI in terms of governance, enterprise risk, return on investment, and considerations for AI deployment within organizations.

Listen to the Episode

You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.

Watch the Episode

Watch the full episode below, and be sure to subscribe to our YouTube channel.

Episode Highlights

  • Courtney and Pete break down some of the week’s top news, including the week-long saga of Sam Altman’s full-circle return as CEO of OpenAI, as covered by Wired (OpenAI’s Boardroom Drama Could Mess Up Your Future) and The NYT (Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding)
  • David, Mohan and Courtney talk through the AI and ROI question, with David and Mohan staking out (relatively) opposing sides
  • Anna Catalano and Pete Buer dive into the idea that there’s risk on both sides of the AI equation. While it’s important to understand the security risks and risks of using incorrect or biased data, it’s also important to consider the risks of inaction

This transcript was created using AI tools and is not a verbatim, word-for-word transcript of the episode. Please forgive any errors or omissions from the finished product.

Courtney: [00:00:00] Are AI and ROI incompatible? And how should executives and boards be thinking about measuring the success of their AI initiatives? Hi, I’m Courtney Baker, and this is the AI Knowhow podcast from Knownwell, helping you reimagine your business in the AI era.

As always, I’m joined by Knownwell CEO David DeWolf, Chief Strategy Officer Pete Buer, and Chief Product Officer Mohan Rao.

We also have a discussion with Anna Catalano about how boards are thinking about AI investments and much more. But first, the news.

 

Courtney: Chief Strategy Officer Pete Buer joins us each week to break down the latest headlines in AI. This week, ooh, Pete, it has been a week. You know, who needs Taylor Swift and Travis [00:01:00] when you have OpenAI? Uh, by the way, first of all, before we get to that, Happy Thanksgiving, Pete.

Pete: Happy Thanksgiving to you too. And so much for Thanksgiving being a slow news week.

Courtney: Yes, totally. It has been anything but. There’ve been so many twists and turns in the OpenAI saga, Pete, that honestly it was hard to pick just two articles here for us. We looked for some that go beyond all of the twists and turns so we could focus on the business impact of the shakeout at OpenAI. The first article comes from Wired, and it’s titled, OpenAI’s Boardroom Drama Could Mess Up Your Future.

We chose this one in part because it’s the closest thing I’ve seen to an explanation of what in the world is happening over there with the board. Uh, Pete, what did you take away from this one?

Pete: Thanks to the article, understanding a little bit of the [00:02:00] history of OpenAI helps to clarify, um, what the board was thinking, to your question. You know, OpenAI started in 2015 as a research lab, uh, a not-for-profit, um, with a mission of building, um, artificial general intelligence, at a level on par with human intelligence, in a safe way.

A few years in, computing requirements and infrastructure needs caused them to set up a commercial entity and take on, um, investment. And those are two completely different environments to lead in and to govern in. Can you just imagine the conflict and the stress that must have been felt in leadership conversations throughout?

Are we trying to grow? What’s the balance of safety and execution? So I have to imagine it’s hard for a board set up with one mission to shift gears, turn around, and start, you know, governing [00:03:00] against, um, another. And I think, to your question, the board was probably driven by its original mission, uh, of safe development when they made the move on, on Altman. Of course, setting aside whether it was, you know, a mission-driven, purposeful decision or not, how they went about it was, um, terrible, right?

Anytime someone is dismissed summarily, uh, on no notice, you assume something nefarious and horrible is going on. And I don’t know yet that we, in fact, have all the facts on, on that. And it left everyone in disarray. The drama that ensued, most importantly, I think, left customers wondering what was going to become of the business and what was going to become of their services.

And that, at the end of the day, I think, is the takeaway for executives on this one: you know, pick your partners, pick your suppliers, like you pick your investments. Are they promising, but are they also stable? [00:04:00] And do you have confidence that you can get to the right balance of risk and reward working together?

Courtney: Yeah, I think this is a great example of, if you’ve just been following the story, you know, lightly, this is that layer of really understanding how OpenAI is set up structurally and what their mission is. Honestly, I did not know all of how they were set up before all of this broke.

And so it’s been really interesting to see, um, as the story has unfolded, kind of the clashing of those two things coming together. So next I want to dig into another article on the same story from the New York Times. The title is Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding. We’ve already kind of touched on why that might be.

I know, Pete, that you talked to Anna Catalano in this episode, and she’s a board member and CEO advisor. What does this article tell you about the importance of [00:05:00] the relationship between executives and board members?

Pete: So, Courtney, I think the answer to your question is that it’s everything, you know, it’s only through the strength of the relationship between the leadership team and, uh, an aligned board that companies get through difficult times.

And I think we’re seeing here an example of how things come undone when you don’t have that bond in place, that alignment. Worse, when you read through the article, you see that this has been going on for a long time. They’ve had a hard time replacing board members. You just get the sense that there’s been friction in this business forever.

If you think about it, the very best thing that OpenAI can do to fulfill its mission of developing prudent, safe AI that holds the potential to change the world is to be a going concern, you know, productively pursuing and executing its mission. And what a grand irony here that the thing came undone not [00:06:00] because of the scary technology that’s involved, but because of the humans.

Courtney: You know, I, I still find it ironic that, you know, even with these high-flyer companies like OpenAI, that seemed so strong, and on paper they have been, even for those companies, behind the scenes, all of this can be going on without any of us knowing. So maybe that’s some encouragement for companies that are, you know, having a hard time: it happens even to these high-flyer companies.

By the way, if you maybe were traveling this week and signed off, not paying attention to what was going on, uh, at OpenAI, PS, their employees had the whole Thanksgiving week off. And yet all of this still happened in spite

Pete: right.

Courtney: of that. Uh, but you can’t sign up for our newsletter and we include a lot of links like this to keep you informed and up to speed on what’s happening.

So you can sign up for that at [00:07:00] knownwell.com. Pete, as always, it was great to have you.

Pete: Great to be here. Love it. Courtney, take care.

 

Courtney: David DeWolf and I were recently at a conference where we heard an executive from a leading AI company say something that sounded a bit, well, heretical. He said that they weren’t seeing executives focused on driving ROI from AI. It was provocative enough that I literally could not wait to get David’s take on it here on the podcast. David, Mohan, thanks for joining us today.

David: It’s great to be back. You haven’t fired us yet.

Mohan: Hello.

Courtney: Mohan. Okay, David, you and I were at a conference recently and there was someone from OpenAI speaking and they said something that frankly was pretty [00:08:00] controversial. And they said that when it comes to, do you remember this?

David: Yeah, I think it’s when I kicked you under the table and I was like, What?

Courtney: Yes, it was that kind of moment for us. As soon as he said it, I thought, we’ve got to talk about this on the podcast. So, Mohan, in this moment, uh, the person from OpenAI said, hey, yeah, we have executives, they’re here all the time, but right now they’re really not thinking about ROI at all. Period. The end.

And, you know, it was kind of jarring. And so I wanted to bring that up to you two today, to ask: what are your thoughts when it comes to ROI? And, and certainly we’ve talked on this podcast about a sense of experimentation. How do these things work together? Obviously, I think we would all say we are very interested in ROI and ensuring [00:09:00] anything that we do has an ROI, so how do we balance these things?

But before we get there, do you agree or disagree with the fundamental statement that CEOs right now are not worried about ROI? ha ha ha!

Mohan: So I don’t know the context here, but I can tell you that it’s very hard to measure the ROI, uh, of these projects right now. Right? So everybody wants a straight formula that says what’s the gain relative to the cost, right? That’s not going to happen anytime soon.

So that makes it very challenging in that sense. You know, it’s, it’s hard. Uh, you, you have to have a lot of, um, margin for error here. Uh, but you still have to make an attempt.

David: Okay, I am so excited. I have been waiting for the opportunity where Mohan and I get to disagree and debate. Okay, because just because it’s hard, Mohan, doesn’t mean it’s not important. And I think that [00:10:00] in all of the conversations that I’ve had with CEOs, I’m actually hearing the opposite.

They’re like, I don’t want to do this thing we did with mobile again, and for the first three years, build a bunch of mobile apps that add no value to the business and just waste money. And I think especially because of the economy that we’re in right now, um, it’s not this wild growth economy where there’s tons of money sloshing around, right?

Organizations are tightening their belts, and I think they’re looking at every single penny. I was candidly floored to hear it, because it’s not the conversation I’m hearing at all. Yes, I think people understand it’s hard, but I also think there’s a maniacal focus on ROI, and the return, and why am I doing this more than I’ve seen in previous waves of technology adoption.

Mohan: I think these are the CEOs who are hanging out more with the CFOs, as opposed to kind of looking at how this is actually getting done.

David: Oh, now! Now we’re throwing down. This is good.

Mohan: I mean, just think about it, right? In an AI project, [00:11:00] to get to the cost side of the equation, you’ve got to figure out first of all what the business value is. You’ve got to, you’ve got to do a data exploration, you’ve got to do modeling, then you’ve got to do evaluation, uh, you know, evaluation is sort of testing, but it’s more than that, in that you’ve got to kind of keep iterating, and then you finally deploy, right?

Anybody who tells you that the cost of this is going to be $400,632.42 is wrong, uh, right? Because, because there is a level of imprecision here that you’ve got to live with. So that’s point number one. The second point is that the gains on the other side also accrue over time when it comes to AI.

So at what point you measure the gain is a question, because if done well, the gain should keep accruing. It’s not like a traditional IT project. And that’s what I mean when I say there is a lot of complexity here. It’s still important to measure, but you should know that you should write it down in [00:12:00] pencil and not in pen.

David: Okay, so I will give you the point on precision. I think you’re absolutely right that it is hard. Like, you just don’t know the costs right now. The training of these models and some of the complexities, it’s way too new, um, and it’s way too high right now too, right? So I will give you that point.

Mohan: Thank you, David. Is that the first time?

David: and the last.

Mohan, and the last. Um. That said, I don’t think that the mindset I’m hearing in the marketplace is, because it’s hard to measure, because we can’t be precise, we’re just gonna throw money at experimentation, right? The air I heard at the conference was: we’re in a world of experimentation, and you should just be doing it to do it, so that you come up to speed and you learn it.

And I’m just not hearing that at all. I see a lot of timidity. I [00:13:00] see organizations that are saying, give me a specific use case. I mean, you’ve even championed this on this podcast, right? Pick a very specific use case, and let’s look at which experiments are small enough and which ones are the more long-term ones.

We talked about the two-by-two, uh, in one episode, right? And, and over and over again, I have heard at least a spiritual commitment to ROI. Let’s find a specific use case that drives return for our business, and let’s not just play with technology for the sake of playing with technology.

Mohan: Okay, let’s think about what a spiritual commitment to ROI means. Alright, so, so,

David: As soon as that came out of my mouth, I knew that was not my best point.

Mohan: So, so one way to think about this is: how do you reduce risk? Right, when you reduce risk, your ROI is going in the right direction.

David: It’s true.

Mohan: two [00:14:00] fundamental ways of reducing risk in an AI project is to have a good methodology had known well have a methodology. So, you know subscribing to any methodology is So that’s going to be advantageous for you because it gets the whole organization in together all the way from what is the business value you’re trying to get to getting these into production to realize the value.

That’s one. The second is what you just said, David: starting off with proofs of concept, starting small, learning by doing, and then building on that. That is how you reduce risk, and therefore you are moving the needle on ROI. I think where you and I are apart, or maybe we are not, is around, um, the precision that people expect when you say ROI.

That is not going to be possible here. Um, so, so that’s the general sense of where things are. So ROI is very important. Uh, the focus should be on reducing risks.

David: You know what? One of the things [00:15:00] that we have lived through before is a world where all these large corporations are spinning off these labs to go play with ideas, right? And I think you and I have both seen that the downfall of that is, a lot of times, they’re detached from actual business results, and they don’t get embedded back into the business unit.

It’s interesting: when I think about experimentation in AI and playing with the technology for technology’s sake and, oh, we will figure it out, I’m not hearing a conversation about labs and siphoning things off. What I’m hearing is organizations saying, let’s find the use case. Where is this going to have an impact?

We hear a lot, right? OpenAI, and actually Microsoft was at this conference as well, they both framed the conversation as there being two things they’re seeing in the market. One is personal productivity, right? Um, the other one is productization: where can you use AI in your products and services themselves to add more value to the customer?

Those [00:16:00] were the two themes. Those in and of themselves are just dripping with ROI, right? The reason those are put forth as the genre you should be looking into is because we know, not spiritually, but directionally, right? If I am driving more productivity, there is going to be ROI there over time.

I can figure out how to measure that and what the precision is around that. If I’m delivering more value to my customer, right, I know that ultimately I will figure out a way to monetize that. That’s not just this void experimentation landscape of spin off a lab, do a bunch of work, and see if we can become geeks, um, that are steeped in the technology but haven’t found a real business problem to solve.

Mohan: Yeah, so one way, probably, to think of it is to do your traditional ROI calculation, but also figure out what margin of error you need to estimate here, both on the gain side and the cost side. [00:17:00] And I think that should be possible to do. And then that gives you a range of ROI that would make sense, that you could run with.

Courtney: It sounds like, through this discussion, our hypothesis is that OpenAI’s statement is incorrect. That maybe CEOs seem like they don’t care about ROI, but it’s because they’re focused on these specific use cases that they know ultimately they will figure out...

David: I’m going to interrupt you real quick, because I actually think the takeaway is a little bit different, and I don’t know if the way we think about it is actually the same. Here’s what I’m hearing. I’m hearing Mohan focus on the measurement and the precision of ROI. And I think I actually agree with that.

I’ve already conceded on the precision part of it. I will say it is really hard to measure ROI at this phase of adoption, at this phase of [00:18:00] experimentation. But I think that there is a fundamental difference between being able to measure ROI, especially precisely, versus actually having an ROI

mindset and building and doing your experimentation with an ROI hypothesis in mind. And what I would encourage executives to do is to look at and make sure they have a strong hypothesis. That’s different from measurement and precision and being able to do it. You’re only going to drive forward if you execute on that hypothesis.

That’s where you will begin to, to measure value. I would test that, I would monitor it as it goes along, but the experimentation can begin without the concrete measurement. The other thing that I would say is, OpenAI has raised how much, $10 billion just this year from Microsoft? Like, in that environment,

I don’t have to be capital efficient and there probably is [00:19:00] a lot of experimentation for experimentation’s sake there.

That’s not the rest of the world. There’s a limited number of companies living in that ethos. And so I would, I would challenge that premise. Yes, there are some companies there, but the vast majority of our listeners are coming from a different type of organization, and absolutely, I promise you, their organization cares about ROI.

Courtney: I think if you’re an executive listening and you’ve felt this tension between ROI and experimentation, hopefully this conversation has helped shed some light and shown that, for one, you’re not alone. This is something a lot of people are figuring out and thinking through. So, David, Mohan, thanks for wading into this little debate today.

David: Oh, that was so much fun.

Mohan: Great fun. See ya.

 

Courtney: Okay, everyone, this episode is dropping on November 27th. What’s on everybody’s mind with December right around the [00:20:00] corner? If you’re an executive, it’s likely strategy and planning for 2024. At Knownwell, we want to help you make 2024 the year that your organization takes your AI game to the next level.

And you can get started by taking our free AI assessment to see your company’s strengths, weaknesses, and AI preparedness. You can take that right now at knownwell.com, and you can, of course, reach out to us at knownwell.com if you’d like a hand getting your most impactful AI initiatives up and running. It’s literally why we exist, and we’re here to make sure you’re ready to hit the ground running in the new year.

 

Courtney: This week’s discussion is with Anna Catalano. Anna is a board member for a number of companies, an advisor to [00:21:00] CEOs and a sought after speaker. We couldn’t think of a better perspective to get than hers on whether AI and ROI are compatible or incompatible.

Pete: Anna, it is so great to have you on the show. Thank you so much for being here.

Anna: Well, thank you. Great to be here.

Pete: We, uh, as a service to our customers, um, run something called the AI transformation readiness assessment, helping, uh, leadership teams look across the organization at all the capabilities you have to marshal in order to be sort of ready to take on AI, um, as, as an execution factor in strategy. Consistently, we find the area where companies score the lowest in terms of their readiness is on, uh, governance and responsibility.

Um, does that worry you? Is there advice that you would have for a company to get themselves to a place where they feel buttoned up?

Anna: Yeah, I think that, you know, governance and responsibility, compliance, all of those areas that have to do with enterprise [00:22:00] risk, um, are a real big part of what boards are about and, and what we need to watch out for as stewards of, of investors’ investment in our, in our organizations. And I think the reason why it’s difficult right now is it’s still a moving target.

I don’t think, um, there’s been a lot identified yet in terms of what the requirements are going to be. I think the regulators are getting their arms around it. Um, but as with everything around enterprise risk, I think it’s important for people to understand: what are the inherent risks of AI relative to the information that you use and where it comes from?

Is it correct? Is it accurate? Is it biased? Does it, does it contain a bias that can cause you to make bad decisions? You know, what are the things that you’re doing to protect yourself against that? So, you know, I think from a risk standpoint, that’s very important for executives [00:23:00] and for directors to understand.

That being said, I think it’s also dangerous, um, when people think about risk, to only think about the what-could-go-wrong things, as opposed to what happens if we don’t get it right. And so I, I always like to bring up the flip side of risk, which is: there’s a lot of risk in doing things wrong, but there’s a lot of risk in not doing things

Pete: right

Anna: as well.

So, um, I think you need to make sure you’re, you’re having both sides of that conversation.

Pete: Yeah. I’d say AI is happening regardless of whether you take it on, uh, aggressively or not. And so in my mind, the cost of, um, inaction is kind of existentially high.

Anna: Yeah, exactly. Exactly.

Pete: On the question of risk, and you mentioned, uh, ChatGPT: uh, OpenAI recently ran its DevDay and launched, like, an unfathomable, uh, number of, um, innovations and updates.

And, [00:24:00] what’s your feeling, with risk in mind, on experimentation and sending everybody off to the races to play with, with ChatGPT in their role? Release the hounds, or tread lightly?

Anna: I think people need to be smart about it. I think they need to, I do, as I mentioned earlier, I think it’s important for people to understand how it works. So, you need to work with it, understand the power as well as the limitations. I think it’s important for companies to encourage employees to

do that. But I think people also need to understand, um, that you need to be careful about, um, policies and guidelines around the use of things like ChatGPT in an office environment. Do you want your employees using it on their company laptops? Do you understand why that’s even a question? Right?

I think there are people that don’t even understand why that’s a question.

So I think, I think that until you understand why that’s a question, you need to talk to experts to, [00:25:00] to learn about that. Right. But once you do, I think you need to establish very good parameters around security, around when it’s appropriate to use it, how it is used, who can use it,

and what kind of questions you’re asking. So, um, I think you need to, I wouldn’t say unleash the dogs and, you know, let everyone do anything. But I think that once you create some boundary conditions, then within the space inside those boundaries, I think you do need to let people, um, work with it.

Pete: Shifting gears, if I may, uh, to the topic of ROI: elsewhere in this podcast, there’s a discussion of ROI, and an OpenAI exec mentioned something to the effect of, companies aren’t worried about ROI yet when it comes to AI. Has that been your experience with the companies that you’re working with?

Anna: I have not had a single conversation in any of my companies about the actual ROI on AI. I,

Pete: yeah.

Anna: Um, I, I don’t, I don’t think talking about ROI as a starting point is the right thing to do, because I don’t [00:26:00] think people have enough information to know how to fill in the blanks to calculate the number.

Pete: So is there a different way to think about how to filter through the myriad opportunities to apply AI in the business? You know, I’ll say we worked with a company and laid out a set of benefits that AI can drive: it helps to execute strategy, it helps to, um, enhance human capital, it actually saves cost.

It does this, it does that, rather than driving to a full-on ROI statement. It’s just to get a sense for the magnitude of the types of benefits. But is there

Anna: Yeah,

Pete: some other clever way that, like, if we’ve got 50 ideas to pick among, how do we, how do we make decisions?

Anna: Well, I think AI is a tool. I, I think that it’s important to understand what the tool might do for you. And in some cases, there are ways that you can utilize AI to reduce complexity and time to deliver in terms of supply chain, for example. [00:27:00] And you might say, well, how does it do that? Well, it might be able to help you analyze, um, the purchasing behavior of your customers.

And when people, when people have a greater propensity to, to purchase, it may have an implication on manufacturing. Um, AI may help you monitor performance of manufacturing units, anticipate downtime, anticipate when you’re going to need routine maintenance done, even before humans can figure it out. AI may be able to help you identify customer, um, behaviors that cause them to buy at a certain time of year or not, or certain types of purchases that will cause other types of purchases to happen.

I mean, there, there are so many ways that AI might be able to help. I think it’s important for an executive team to really ask the question, where do we want to utilize this tool first to see what it can do? Is it to generate more [00:28:00] revenue? Is it to, um, create opportunity for more efficiency? Um, is it to eliminate some, some, um, headcount where routine jobs can be done faster, more accurately through AI?

I think those are the, the fulsome conversations that executives need to, need to have, and then present that to the board and find out if the board has other questions and other experiences from other industries that they might be able to add to that.

Pete: Thank you so much for joining us today and it has been an absolute pleasure.

Anna: Thanks. I really enjoyed it.

 

Courtney: That’s it for this week’s episode of AI Knowhow from Knownwell. Don’t forget to visit knownwell.com to help you get ready to develop your AI strategy for 2024. And of course, like always, we would really appreciate it if you would rate and review the [00:29:00] show wherever you listen. And by the way, you can also watch the show on YouTube. If you really want to see our faces along with hearing our voices, that’s where you go. Every episode, we love to get one of our favorite AI tools to weigh in on the topic that we’re discussing. So, hey Bard, we would really like to know what you think about AI and ROI.

And should we be expecting a return when we invest in tools like you? Now you’re in the know. We’ll see you next week with more AI news, round table discussions, and interviews.[00:30:00]
