Still think of AI as just a chatbot? It’s time to reframe that mindset. And have you thought about how AI can evolve your business beyond just “what you do”? Now’s the time.
In this episode of AI Knowhow, Logan Kilpatrick of Google DeepMind joins Mohan Rao to explore why UX—not chat—is the real frontier of AI, and how companies like Khan Academy are designing smarter, more human-centered experiences even while navigating the challenge of designing for non-deterministic outcomes. Logan also shares some of the latest on Gemini, including long-context reasoning and native multimodal capabilities, plus a look at Google AI Studio.
Later in the episode, Courtney Baker, David DeWolf, and Mohan build on the theme of reimagining how we think about UX, exploring how leaders must rethink how they deliver value in the AI era—from compounding intelligence to building systems that learn and adapt over time.
AI is not just a chatbot
Courtney kicks off the episode by challenging the audience to shift their mental model: “Let’s not limit ourselves to pecking into a machine for another generation of work.” It’s a theme that runs throughout the show. Logan Kilpatrick echoes this when discussing how much of the world still thinks of AI through the lens of a chatbot interface.
In reality, the future of AI-powered products and platforms will look very different. Logan points out that successful AI applications won’t rely on open-ended chat, but on user experiences tailored to specific use cases. One standout example he gives is Khan Academy, where the goal is not to provide direct answers, but to guide students through problem-solving. That’s a radically different kind of user experience—one where the AI doesn’t just respond but teaches.
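What might guardrails like that look like under the hood? Here's a minimal sketch of the pattern, assuming a generic chat-completion API; the prompt text and the `complete()` helper are our own illustrative stand-ins, not Khan Academy's actual implementation.

```python
# Sketch of UX-level guardrails: the product, not the user, fixes the
# model's role so it tutors instead of answering outright.
# `complete()` is a hypothetical stand-in for any chat-completion API.

TUTOR_PROMPT = """You are a math tutor. Never state the final answer.
Ask the student what they have tried, point out only the next step,
and if they are stuck after three hints, walk through a similar
example instead of solving their exact problem."""

def complete(messages: list[dict]) -> str:
    raise NotImplementedError("wire up your model provider here")

def tutor_turn(history: list[dict], student_message: str) -> str:
    """One turn of a guided exchange: the system prompt rides along
    on every call, so the student can't talk the model out of it."""
    messages = (
        [{"role": "system", "content": TUTOR_PROMPT}]
        + history
        + [{"role": "user", "content": student_message}]
    )
    return complete(messages)
```

The point is that the constraint lives in the product layer: the student only ever experiences the tutoring behavior, never an open-ended chat box.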
From research to real-world impact
One of the most compelling insights in the episode comes from Kilpatrick’s observation about how blurry the line has become between AI research and product development. In AI today, even frontline developers are discovering new insights simply by experimenting with the tools. “You might stumble upon something that no one else has figured out before,” he says.
That means the companies willing to dive in, experiment, and iterate—especially at the application layer—have the chance to build meaningful competitive advantages.
Reframing UX for AI-first products
A key theme from the conversation is that building great AI products requires rethinking UX. Mohan Rao describes the challenge this way: “Sometimes the output is so unpredictable that the user doesn’t know what to do with it. Our UX has to compensate for that.”
Logan adds that human-in-the-loop designs—where users collaborate with the model over time—are critical to getting the best results. In this kind of dynamic, the system prompts the user, accepts feedback, and improves iteratively. It’s not a one-shot question and answer, but a conversation aimed at reaching a better outcome.
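As a rough illustration of that loop, here's a sketch with the model call stubbed out, since the pattern is provider-agnostic: draft, show the user, fold their feedback into the next draft, and repeat until they're satisfied.

```python
# Sketch of a human-in-the-loop refinement cycle: the system drafts,
# the user reacts, and each new draft is conditioned on that feedback.
# `generate()` is a stub for whatever model call you actually use.

def generate(prompt: str) -> str:
    raise NotImplementedError("call your model of choice here")

def refine_with_user(task: str, max_rounds: int = 3) -> str:
    draft = generate(f"Produce a first draft for: {task}")
    for _ in range(max_rounds):
        print(draft)
        feedback = input("Feedback (leave blank to accept): ").strip()
        if not feedback:
            break  # the user is satisfied; stop iterating
        draft = generate(
            f"Task: {task}\nPrevious draft: {draft}\n"
            f"User feedback: {feedback}\nRevise accordingly."
        )
    return draft
```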
The new challenge: Systematizing and compounding knowledge
David, Mohan, and Courtney kick off the roundtable portion of the episode by reflecting on a LinkedIn post about the AI recruiting company Mercor, which is embedded below. What stood out to them was the idea of "compounding knowledge systematically," or building systems that learn from every interaction to deliver better outcomes over time.
Among the thought-provoking ideas in the post is that, in the AI era, companies will deliver value not just through what their product does but through what their product learns. As Usman writes in the post, "Tesla doesn't just make cars, it's learning with every trip. Spotify doesn't just play music, it learns taste." The best AI-native companies are those that not only deliver a product or service, but continuously improve it by learning from each user interaction.
Choosing the right tools and models
In our weekly news segment, Pete Buer covers Anthropic’s skyrocketing valuation, reminding leaders to look beyond the hype and understand what makes each large language model (LLM) unique. Claude, for example, has strengths in coding and evidence-based outputs, making it particularly useful in fields like healthcare and finance.
Pete’s advice? “As a business leader, you want to know what’s unique about each of these models and how their strengths align with your business needs.”
Wrapping up
Reframing how we think about AI is not optional—it’s essential. Whether it’s moving beyond chat interfaces, rethinking UX, or considering what compounding intelligence might mean in your context, business leaders need to evolve their thinking.
Watch the episode
Watch the full episode below, and be sure to subscribe to our YouTube channel.
Listen to the episode
You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
Read the LinkedIn post on Mercor
David references this LinkedIn post about Mercor during our roundtable segment. He cites some interesting thoughts from the post in making the point that AI requires leaders to reimagine how they think about their companies and the products or services they provide.
Show notes
- Connect with Logan Kilpatrick on LinkedIn
- Learn more about Google DeepMind
- Learn more about Google AI Studio
- Connect with David DeWolf on LinkedIn
- Connect with Courtney Baker on LinkedIn
- Connect with Mohan Rao on LinkedIn
- Connect with Pete Buer on LinkedIn
- Watch a guided Knownwell demo
- Follow Knownwell on LinkedIn
When you think of AI, and let's be truthful here, does your mind immediately go to ChatGPT or any of the myriad other AI chatbots?
If so, seriously people, it’s time to rewire that mental model before it’s actually too late.
Let’s not limit ourselves to pecking into a machine for another generation of work.
Hi, I'm Courtney Baker and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.
As always, I'm joined by Knownwell CEO David DeWolf, Chief Product and Technology Officer Mohan Rao, and NordLite CEO Pete Buer.
We also have a discussion with Logan Kilpatrick of Google DeepMind.
Mohan talks with Logan about the latest updates to Gemini, why we’re just beginning to understand what’s possible when we talk about the UX of AI, and why working in an Apple Store prepared Logan well for working in AI.
Hey Logan, welcome.
Thanks for taking the time and thanks for being with us.
Looking forward to this.
Yeah, I’m excited for this conversation.
Thanks for having me Mohan.
As I was looking at your background, you've spent time at NumFOCUS, in the Julia programming community, at OpenAI, and now at Google DeepMind working on Gemini.
What has drawn you to make technologies accessible to developers and businesses?
Why have you chosen this track?
Yeah, it’s a good question.
I think a lot of this is actually rooted in the fact that I started my early career working at the Apple Store. I spent about two and a half years helping folks there, and it was such a transformative experience for me in instilling this deeply rooted technology customer service mindset.
And it still has so much impact on my life today. One of the biggest challenges of working at the Apple Store, and I worked at one in California, was that one person would come in who was literally an engineer at Apple and actually worked on the iPhone or something like that.
And then the next person to come in would be, you know, someone like my grandma, who has no idea how phones work and can barely tap through screens.
Funny enough, I think there are a lot of parallels to today's AI world, where I spend a lot of time with people who are incredibly deep on the technology.
And I spend a lot of time with people who are truly at the very earliest stages of understanding what's happening in the AI ecosystem, how they should be thinking about their business, and where the opportunities are.
And being able to context switch to the right level of depth has been super important for me.
And that empathy for people at different parts of their journey is something that's kept me excited.
There are just so many people who feel, and I've felt this pain myself, that there's a lot of friction to building interesting stuff in today's world.
And I want to spend my time making it easier for people who want to build stuff.
I’m so glad you do what you do.
So super helpful for everybody else out there.
Thank you.
You also are at the intersection of a lot of research and practical business applications.
You probably see so much out there.
As people are using LLMs, what is the most surprising insight that you’ve gained?
I think mine is a meta-observation, which is that one of the interesting things about the language model space right now, and the AI space in general, is that the line between product and research is extremely blurry.
What I mean by that is, if you look at a lot of ecosystems, doing web development, for example, people are building websites and they’re doing stuff, and yeah, there’s a frontier of that technology.
But the random web developer is not stumbling upon net new knowledge that the rest of the world doesn’t have on any given day.
I think that's actually not true in the world of AI and LLMs right now.
If you’re just an engineer building something at some company or you’re putting AI into production at your company, you might actually be stumbling upon a bunch of things that no one else has figured out before.
First, that’s really cool.
It speaks to how early we are in this wave of AI.
But secondly, to me, that just means there’s all these opportunities.
If you're willing to actually put in the cycles of figuring out how to best leverage the technology, there are differentiated advantages, edges that you can have literally across the whole AI stack. Whether you're just using AI tools inside of your business or actually building something with AI inside of your business, on both ends of that spectrum, from the user side to the developer side, there are all of these really interesting edges.
The fact that there are more tools and more new models just exponentially increases the chance that you stumble upon something that a lot of people haven't figured out, and can actually have a differentiated business opportunity because of that.
So that's gotten me really excited.
The larger your enterprise is, though, the harder it is for those things to trickle up in a really material way from a product perspective.
But if you look at some of the most successful product surfaces that have gone viral in the last two years, I think a lot of them were actually the earliest to find some of those differentiated edges from a model capability standpoint and made it work because of that.
So fascinating.
You know, we play more in layer seven of the stack, if you will.
Like we’re building apps on top of these amazing models.
And the way it comes out for us is sometimes the output is so unpredictable that the user frankly does not know what to do with it.
And therefore, our UX has to compensate for it.
And there are so many challenges around UX to build on top of non-deterministic systems like you’re describing.
And that is such an interesting challenge, both with what we do, and I’m pretty sure you see this across the board as well.
Yeah, I think to me, that is one of the big opportunities of this AI moment is around how you can actually build the UX to make these non-deterministic systems easier to navigate through.
And I think there’s been a bunch of interesting examples around this.
If folks have listened to some of the stuff that Khan Academy has done, they actually have a really unique problem space. If you think about the core thing the models want to do, they want to answer user questions.
That is the whole thing that they're trained to do: given some input, provide an output.
And if you look at how Khan Academy, as an example, wants to deploy LLMs, their user base is students.
And the thing that students want is the answer to their problems.
And yet the core product experience is not just directly giving students the answer to their questions, but instead guiding them.
And I think they've done a bunch of really interesting things around how they put guardrails on the model through the UX, instead of the alternative option, which is just a chat interface where you can ask any question.
And there was so much in the last two years of it just being hammered into people's heads that AI is a chatbot and therefore that's the UX experience you should put in front of your customers.
And actually, that's not true.
The UX you should put in front of your customers is probably vastly different from what that default chat experience looks like.
But it takes iterations to figure out what the ideal experience is given the domain that you’re in.
You know, our customers are professional services companies. With Gemini 2.0 and the new reasoning models that you're bringing out, what specific capabilities do you see that could transform professional services companies?
And as you know, PS companies generally do things for their clients, right?
So they're running workloads or doing things, whether it's marketing or technology.
How do you think these companies can use these technologies to deliver more value to clients?
Yeah, this is a great question.
This is something that I think about more generally a lot.
But I think I’ll say two points.
One, the current paradigm of reasoning models in general is one of those situations where the model performance is now 20 percent better across a bunch of domains.
And what that actually means for you, if you're building with these models or using them, is that all of a sudden performance is better and things work that didn't work before, which is exciting.
In some cases, though, it's hard to be specific: if you already have AI in production and you switch to one of the new reasoning models, things just work better.
It's sometimes hard to quantify what working better actually means.
But I think that is the case, a lot of use cases work better.
I think the second piece of this is that the direction we're going with reasoning models actually ties back to this user experience perspective: today's AI experiences, independent of the reasoning model thread, are very real-time focused.
I'm not in the professional services industry, but I can imagine a lot of that work has real-time components, where you're talking to a client and doing something with them hand in hand, but there's also a lot of work that can happen behind the scenes that isn't latency- or time-sensitive.
You have a week to get back to the customer and provide them this amount of value or this artifact of work.
That bodes really well for this future of reasoning models, where all of a sudden, the more compute you throw at a problem, and the longer you let the model reason over that problem space, the better the answer actually becomes.
I think there's some updated thinking needed, just because the ecosystem has been so focused on this real-time nature today. We need to start building experiences that are a little bit more asynchronous, where you actually give the model time to solve problems, just like you give the humans on your team time to solve problems when they need it.
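Read in engineering terms, that suggests treating hard requests as background jobs with a generous time budget rather than blocking, real-time calls. A minimal sketch, where `reason()` and its budget parameter are hypothetical stand-ins for a long-running model call:

```python
# Sketch of an asynchronous "give the model time" pattern: submit the
# problem as a background job and collect the answer when it's ready.
# `reason()` and its thinking-budget parameter are hypothetical.

from concurrent.futures import Future, ThreadPoolExecutor

def reason(problem: str, budget_seconds: int) -> str:
    raise NotImplementedError("long-running model call goes here")

executor = ThreadPoolExecutor()

def submit_deep_task(problem: str) -> Future:
    # A deliverable due in a week doesn't need a two-second answer;
    # trade latency for quality with a much larger reasoning budget.
    return executor.submit(reason, problem, budget_seconds=3600)

# job = submit_deep_task("Draft the competitive analysis for client X")
# ...do other work...
# result = job.result()  # collect whenever it finishes
```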
Yeah.
Then allowing for the iterations.
You come back with partial answer, and then you go back with some feedback, and you refine the answer, so on and so forth.
Yeah.
Without a doubt, I think that's actually the best way to get value from the models.
And if you look at a lot of product experiences, not many people do that.
So if you are doing that, that is a frontier use of AI: having the human intelligently in the loop when it matters.
There are a lot of things that are not human parallels with AI, but this is actually one of the great human parallels. If you're an IC doing some job somewhere and you have questions or problems, you pull out of your work loop and go ping the people you work with, and you say, hey, I need help on this thing, please give me suggestions or guidance, and then you go back into your work loop after you've gotten that input.
The fact that historically models have been forced to just do the thing, without being able to actually pull out and get support, is actually crazy in hindsight as we look at where we're going.
Yeah, for the longest time, we’ve had people that we trust that are sounding boards, right?
So you go and chat with somebody and you unlock a problem just because you chatted with them.
Yeah, it makes total sense to me.
For the listeners who are thinking about building these types of solutions, from your perspective, what does Google AI Studio provide?
What do these frameworks provide as opposed to just custom building things?
Is it as simple as build versus buy, or is there something deeper to this?
Yeah, I think for us, AI Studio, if folks haven’t tried it, is literally just a thin wrapper on top of the Gemini models themselves.
So it isn’t intended to be a super robust tool that you would use and go and put into production.
It’s really meant to be a showcase of what the models are capable of.
Ultimately, the artifact that we care about is the Gemini model itself, and being able to take that and put that into production.
So all of the angles of the product experience are about how we showcase the differentiated capabilities of Gemini: long context, reasoning, and some of the native multimodal capabilities, both input and output.
I think that’s the angle in which we continue to push on.
I think the cool thing is, if you’re surveying the model ecosystem, today there’s a lot of these different trade-offs between the models.
Again, back to this point about this research versus product exploration, if you really sat down, there’s all of these interesting different product directions you could take based on the different capabilities of the models.
I think for Gemini, there’s a bunch of really differentiated product experiences you could build with long context as an example, or the native multimodal capabilities as an example that other models literally just aren’t capable of doing.
As you explore the model ecosystem, you see more of these across all the different providers, which is super interesting.
Logan, last question.
As you look to the next 18 to 24 months, what changes and capabilities, what do you see coming on the horizon?
What would you predict?
On this reasoning point, I think we're still in a world today where, to use an analogy, it's as if you were to take a human, put me in a little box, and say: here, human, go think and solve this problem, given this small amount of input you've been given about the problem to be solved.
You would then look at the output and you'd be like, Logan's pretty dumb.
He actually doesn't know what he's doing.
If you think about what we're doing with LLMs today, it's that very same example: we give the models basically no context about the problem we're trying to have them solve, and we give them no tools with which to solve it.
I think as we look toward the next 18 months, the problem I'm hopeful will be solved is actually removing some of this burden on the user to be the one who does all the work of providing the context.
Today, you actually can solve this problem.
You could have a really robust function calling infrastructure, and you could spend a long time gathering all the context for a given prompt, but basically all that work is on you when you send the initial query to an LLM. I'm hopeful that a role reversal happens, where humans can a little more lackadaisically ask models questions, and the models will just have the infrastructure to get the information they need to give a good response, and then make use of the tools they need to respond really smartly.
Because I think a lot of the expectation mismatch with AI actually comes from this point: people are promised this beautiful thing, and then when they actually get to it, they're like, I don't really want to do the work of giving the model all the right stuff for it to actually help me, and therefore the outcome is that AI doesn't really work that well.
I think it’s not the user’s fault.
I think the technology just needs to be better for people to actually be able to see that value.
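In code terms, what Logan describes is the tool-use loop: instead of the user pasting in all the context, the model is handed functions it can call to fetch what it needs. A simplified sketch, with hypothetical tools and a stubbed-out model step:

```python
# Sketch of a function-calling loop: the model, not the user, decides
# what context it needs and requests it via tools. The tool registry
# and `model_step()` protocol here are illustrative placeholders.

def search_crm(query: str) -> str:
    raise NotImplementedError("look up client records")

def read_calendar(date: str) -> str:
    raise NotImplementedError("fetch the day's schedule")

TOOLS = {"search_crm": search_crm, "read_calendar": read_calendar}

def model_step(messages: list[dict]) -> dict:
    """Stand-in for a model call that returns either a tool request,
    {"tool": name, "args": {...}}, or a final {"answer": text}."""
    raise NotImplementedError

def answer_with_tools(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = model_step(messages)
        if "answer" in step:
            return step["answer"]  # the model gathered enough context
        result = TOOLS[step["tool"]](**step["args"])  # run the tool
        messages.append({"role": "tool", "content": result})
    return "Step budget exhausted without an answer."
```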
Makes total sense to me.
It'll be fascinating when that happens, where you can ask a simple question and the context is derived from the question, or based on who you are and what you're doing.
So it understands the context in a more ambient way, as opposed to you having to grok and provide every bit of information.
That’s really fascinating.
Cool.
Logan, thank you so much for taking the time.
Really appreciate this.
Yeah, Mohan, this was awesome.
Hopefully, it was helpful and thanks for having me on.
On one of our recent episodes, we talked about the challenge of synthesizing all of the data at your disposal into something that’s both digestible and actionable.
Good news, it’s just one of the things the Knownwell platform can do for you.
If you’re ready to reframe how you think about AI and how you deploy it into your business, we’d love to show you some ways an AI native platform like Knownwell can help you do just that.
Visit knownwell.com to learn more.
If you’ve listened to enough episodes of the show, you’ll know by now that David and Mohan love a good frame.
Seriously people, they talk about frames all the time.
So I was excited to talk with them in the wake of Mohan’s conversation with Logan to help us reframe how we’re thinking about AI.
David, Mohan, welcome back.
Are you two ready to help listeners reframe how they think about AI?
So David, and actually, you’re the one that kind of prompted this.
Oh gosh, I hate it when she says that.
I always say things I don’t mean to say and she surfaces them again.
Well, you shared a LinkedIn post in one of our Slack channels earlier this week.
It's about a company called Mercor and the AI platform they're building to help companies hire smarter.
And I thought this may be instructive to help people understand what it means to reframe how they think about AI.
And a lot of that goes into, you know, the fact that we think about things based on what we've already experienced, but so much of AI is new.
So could you help kind of recap that post?
And by the way, for everybody listening, we’ll make sure to post this in the show notes as well.
Yeah.
So the core of it is that this company sits in the recruiting space, and they’re looking to leverage AI in order to reinvent that space.
What’s interesting about recruiting is that it’s a $500 billion industry.
But with knowledge work changing, it's being turned on its head, and I think there's a big challenge to it right now.
But it’s not happening by the big players, right?
It’s the startups that are thinking differently that are really challenging the status quo, and not just by trying to make recruiters more efficient, right?
We’ve talked on this podcast a lot about efficiency versus effectiveness, and there’s a lot of ways to drive efficiency and just make the execution of work faster.
But I think the core question so many of us have to ask is, what new value can we bring?
So, Mercor just closed a $100 million Series B at a $2 billion valuation, from what we understand.
And they’re building smart systems that learn from every single interaction in this recruiting space.
And that allows them to improve their intelligence over and over again, because they’re looking at all of these conversations, they’re looking at all these interactions, and they’re able to say not only who gets hired, but how do they perform afterwards, how long do they stay, all these types of things.
And that’s compounding their intelligence.
And they’re able to look at the different patterns and leverage AI to actually make sense out of it.
There are a couple of lines in there that I love, that I really think are illustrative of how people should be thinking differently about what makes successful companies in the age of AI.
One is Tesla doesn’t just make cars.
It is learning with every single trip.
We’ve talked before about how intelligence is the ability to learn and apply knowledge.
So take this recruiting example.
You’re learning with every single interaction with a candidate and every single interaction with a company.
Tesla, same thing.
Every single trip, the car and the platform is learning.
Spotify, same thing.
Another quote, Spotify doesn’t just play music, it learns your taste.
So I think this idea of learning and applying knowledge, not just leveraging intelligence as it exists but continually adding more and more intelligence as fuel, is a really important concept for folks to think about and really chew on as we get further and further into the AI world.
Yeah, and specifically there, David, I loved the phrase from the post that you shared about compounding knowledge systematically, right?
I just love the phrase there because a lot of times, I’ll say something to which Courtney will say something and you’ll say something and I’ll say something.
And just the knowledge compounds as we are just having these discussions, just loved it.
And if you can do that systematically, we all know the power of compound interest.
Yeah, and the key there, you know, so often we’ve also talked about the power of taking individual knowledge and institutionalizing it, right?
You start to tap on that idea, the systemization of that, whatever that word is, I got that wrong, I stumbled on that, we won’t go back there.
But systemizing knowledge and making it institutional knowledge, not just the recruiter or the partner at the recruiting firm that’s driving the firm, but how do you take that and truly pool it, compound it, learn from it?
I think that's one of the secrets, because people aren't going to just pay more and more money to a recruiting firm, or, as the firm leverages AI to do its work faster, give it more money just because it's being more efficient.
But if you can actually make hires more successful, find the right people, streamline the process, and collapse the time it takes to identify them, because multiple partners are now sharing all of their knowledge and you're collapsing it down into institutional knowledge that the firm can tap into...
And that is what is orchestrating your business to do recruiting.
Wow.
The whole game’s just changed.
The business model is different.
Right.
And so I think that’s the type of transformation that organizations have the potential to see in today’s AI era.
Yeah.
And there's a complete correlation with how consulting or professional services companies work.
Right.
So every engagement is siloed and there is not that knowledge that compounds through the organization.
So true.
Yeah.
We’ve seen over and over again, knowledge management is such a key problem, especially in professional services.
Right?
Absolutely.
So Mohan, I want to take this from a little different angle.
Earlier in this episode, you got to chat with Logan Kilpatrick of Google.
What stood out to you as you talked with Logan that listeners might want to key in on to rethink their approach to AI?
Yeah.
You know, we were talking, we talked about many things, but the thing that stood out for me is building UX on top of these models, and building great applications, right?
So one of the things that he mentioned was students go to Khan Academy to go get the answer, right?
But that’s not what Khan Academy wants to do.
They want to teach you how to get the answer, right?
And then how do you do that, right?
So you're teaching as opposed to giving the answer, right?
Because we're all used to this model of typing in a bunch of things and hoping to get the perfect answer back, right?
So it’s a different type of problem.
And we were talking about building sophisticated applications like that.
And at Knownwell, we have the same set of challenges.
And the next analogy that he made was around, how do you get human in the loop, right?
So you take the brightest person and lock them in a room and just say, go get the answer.
They’re not going to be able to get the perfect answer.
They need to be interacting with the outside world, right?
So they need to talk to their colleagues, to the customers and clients and so on and so forth.
So then the conversation went to human in the loop, where, if you can build the applications in such a way that you prompt, you get some things back, you provide more human input, and that gives you a more intelligent answer back.
So we were on this talk track about building applications, putting the human in the loop, getting to a higher-order intelligence.
Just kind of these concepts that really stood out to me, and that was very powerful.
And obviously to do that is not easy and takes several iterations of the application to get there.
Mohan, there was one line from Logan that just really struck home for me, and I want to play it here.
There was so much in the last two years of it just being hammered into people’s heads that AI is a chatbot, and therefore that’s the UX experience you should put in front of your customers.
And actually, that’s not true.
The UX you should put in front of your customers is probably vastly different than what that default chat experience looks like, but it takes iterations to figure out what the ideal experience is, given the domain that you’re in.
I thought that was really interesting when it comes to reframing how we think about AI, because for so many of us, our first exposure was ChatGPT.
And it kind of solidified in our head, this is how you interact, this is how this will work for us.
When in reality, the scope is much larger, but I don’t know for most people who aren’t working in an AI company that they ever stop to think about that.
So are there helpful tools that you two would advise executives to use to open up our mindset when we think about these applications?
I’m just so happy somebody from Google said this because they’ll get more attention than I do, but I could not agree more.
Can we actually just play that again, please?
Here’s the deal.
The chat experience is a horrible user experience.
There may be places for it, but where certain AI-native tools have really taken off is when they get away from this open, anything-goes chat experience and provide discrete controls that allow you to manipulate whatever you're doing and to engage with the system in ways that are specific to that circumstance in the application.
The great example I have is the image editors that are AI-first, right?
They didn't take off when they were just chat at first: type in and tell me what you want.
It was once they augmented that with control bars where you could play with the tone, play with the hue, play with the animation style versus a real-life style, and all these different things.
That's how we are as humans: we do better with simpler interfaces when we're dealing with computers.
I think there will be a lot of innovations and these controls will get more refined.
I don't think we have figured out what they are, but I think it is so important, because I fear organizations are just building chatbots and deploying them everywhere, spending millions and millions of dollars, or hundreds of thousands, whatever it is, it doesn't matter, just rolling out chatbots. I don't believe that is the future.
And I think it’s a really good point that he makes.
And you know, I agree with David.
The reason it’s a poor experience is because the burden of context is on the user and not on the model, right?
You have to put more and more information into the model to get the right answer, or an answer that's much higher-order than what you had conceived of through your prompts.
So in a way, kind of, the burden is on you.
As these things evolve, what will happen is that, based on the persona and the context, whether we're talking about the customer, the client, whoever it may be, the context can be self-derived by the model. When the burden of context is shifted, that's when we'll have more powerful interfaces and much more powerful systems.
Very interesting.
Well, David, Mohan, thank you, from me and all the listeners, for helping us reframe how we think about AI.
David, Mohan, thank you as always.
Thanks, Courtney.
Thanks, David.
Courtney, Mohan, you’re welcome as always.
Pete Buer joins us to break down the business impact of some of the latest and greatest in AI news.
Hey Pete, how are you?
I’m good, Courtney, how are you?
I’m doing well.
So it wouldn’t be a week in AI news without another big uptick in valuation for one of the LLM companies as they take on another round of funding.
This week, it's Anthropic's valuation of $61 billion, up nearly four times over their valuation of $16 billion from just over a year ago.
Pete, I know these stories of massive valuations for AI companies are starting to feel like Monopoly money, and it can feel far removed from our listeners' day to day.
But what's the takeaway we should be drawing as we hear this news?
Well, I think at the highest level as a starting point, these eye-popping valuations and investments, for me, are reinforcing of the promise of AI.
You know, when investors put their money where their mouth is, that tells you something believable is going on much of the time.
I think another takeaway as this market matures for business leaders who are listening, if you haven’t done so already, it’s time to evaluate each of these LLMs to understand what it is that makes them work, what it is that makes them unique.
So, Anthropic’s Claude, the subject of the article, for example, it sells at Coding.
It’s also got a focus on evidence-based answers in its returns, and so tends to be a good fit for businesses like healthcare and finance.
As a business leader, you want to get to a place where you know what’s unique about each of these, what their capabilities are and what their value propositions are relative to your business need.
Also worth noting, I read separately that Claude is one of several models that will be powering some of Amazon’s new Alexa Plus capabilities.
No surprise, as Amazon is one of their biggest investors, to the tune of 5, 6, 7, 8 billion dollars, but it may be useful to know you might already be getting a taste of Claude when you're making your dinner reservations or booking an Uber through Alexa.
Well, as a devout Alexa user, I can tell you that there has been no perceivable improvements on my end.
Actually, on pretty much a daily basis, I’m like, why is this not better already?
Well, if you’re not paying for the Plus, that’s probably why.
Like I don’t give Amazon enough money.
Okay.
Well, we’ll look into that more.
At the end of the day, whether it’s Anthropic or another homegrown system, the reality is as long as my groceries are delivered by drone on time, I’m good.
Pete, thank you as always.
Thank you, Courtney.
Thanks as always for listening and watching.
Don’t forget to give us a five star rating on your podcast player of choice.
And seriously, we would really appreciate it if you would leave us a review.
It is the number one way people find this podcast.
And at the end of every episode, we like to ask one of the AI large language models to weigh in on the topic at hand.
So, hey Gemini, we figured it would only be appropriate to talk to you for this episode.
Any advice for how business leaders should reframe how they’re thinking about AI?
Stop thinking of AI as just a tool and start seeing it as a strategic partner that can augment your team’s capabilities.
Focus on how AI can help you solve real problems and create new opportunities rather than just chasing the latest trends.
And now you’re in the know.
Thanks as always for listening.
We’ll see you next week with more AI applications, discussions and experts.