AI Knowhow Episode 64 Summary
- Agentic AI will dominate the conversation (and AI exploration) in the year ahead, with its potential to perform complex autonomous work
- In the doom-and-gloom department, are career displacement and rolling blackouts going to be byproducts of the AI boom in the coming year?
- And before you put too much stock in any of this…we look back to find out: how did David’s 2024 end-of-year predictions pan out?
With 2025 on the horizon, one of the final AI Knowhow roundtables of 2024 finds us looking ahead to the coming year and what it will hold for AI. The team also looks back at David’s four predictions for 2024 to see just how prescient his previous predictions have been.
The jury may still be out on some of David’s predictions, like that a Moore’s Law equivalent for AI will be established, but these predictions just go to show some things still take time to develop, even in the age of AI.
Is 2025 the Year the “AI Application Layer” Arrives?
One of David’s 2024 predictions that didn’t fully come to fruition this year was that we’d begin to see the advent of the application layer of AI. Expect anticipation surrounding the AI application layer to heighten in 2025, with its development poised to take significant strides forward.
Mohan Rao predicts that “Gen AI, which are mostly standalone or bolt-on tools, will now take the second half step that we missed in 2024.” He envisions a future where AI becomes an integral component of enterprise systems, enhancing the user experience and fundamentally rethinking application architecture. This advancement is anticipated to be a key theme throughout the year ahead.
Agentic AI on Deck
Mohan shines a spotlight on what will be one of the most talked-about AI advancements of 2025: Agentic AI. “There’s gonna be a lot of talk about it,” he predicted. “It won’t be fully implemented because it’s a big, big item, but it’s going to dominate the conversation.” Agentic AI, capable of performing complex tasks with a high degree of autonomy, promises to shift our understanding of the role AI can play in everyday tasks and decision-making processes.
Pete Buer and Courtney Baker also break down a recent Quartz article on agentic AI in our intro segment, including the news that OpenAI has announced its AI Agent tool, codenamed Operator, will be released in January.
Navigating Sustainability and Talent
While innovation opens new doors, it also presents fresh challenges. David DeWolf didn’t shy away from speculating on the broader impacts of AI’s evolution, positing that we might experience our “first rolling blackout due to AI energy consumption.” As AI technologies demand more from our infrastructures, this prediction emphasizes the critical need to balance progress with sustainability.
Perhaps more unsettling is the notion of AI contributing to substantial career shifts. David speculated that 2025 could herald our first “wave of career displacement,” where industries are noticeably impacted by AI’s capabilities. Mohan chimed in, acknowledging the potential for job disruption but also highlighting an opportunity for upskilling within the tech workforce: “I always say that software engineers, over a 35-year career, need to be ready for 7 major upskills,” he says. “If you have the ability to upskill, it’s gonna be a fabulous future. And this one probably counts for two upskills at once.”
Expert Interview: Dr. Andrew Abela
In an engaging and insightful interview, Pete talks with Dr. Andrew Abela, author of the new book Superhabits: The Universal System for a Successful Life and Dean of the Busch School of Business at The Catholic University of America. Andrew delves into the essence of cultivating superhabits—or habits that can significantly enhance our productivity and well-being. During the discussion, Andrew reflects on the evolving role of AI, noting, “AI may take away some of our skills, but we will gain new capabilities.”
This perspective encourages listeners to embrace change and leverage the intersection of technology and habit formation to unlock new potential and drive growth in an ever-evolving world.
Watch the Episode
Watch the full episode below, and be sure to subscribe to our YouTube channel.
Listen to the Episode
You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
Show Notes & Related Links
- Watch a guided Knownwell demo
- Listen back to David’s full 2024 AI predictions
- Connect with Dr. Andrew Abela on LinkedIn
- Connect with David DeWolf on LinkedIn
- Connect with Mohan Rao on LinkedIn
- Connect with Courtney Baker on LinkedIn
- Connect with Pete Buer on LinkedIn
- Follow Knownwell on LinkedIn
Hey guys, Courtney here.
I don’t know if you’ve figured this out if you’ve listened to AI Knowhow before, but David, well, he’s quite competitive.
And at the end of last year, we made predictions about what would happen in 2024.
And by we, I mean David.
And so today’s the day that we’re going to find out how right or wrong David was about those predictions.
So join me as we, I don’t know, have a little accountability and project what we think is going to happen in 2025.
Hi, I’m Courtney Baker and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.
As always, I’m joined by Knownwell CEO David DeWolf, Chief Product and Technology Officer Mohan Rao, and NordLite CEO Pete Buer.
We also have a discussion with Andrew Abela about his new book, Superhabits.
But first, climb out of your Aston Martin and get ready for a martini, shaken, not stirred, because it’s time to welcome Pete Buer to the studio for a segment we’re calling Secret Agent Man.
Pete Buer joins us as always to break down the business impact of some of the latest and greatest AI news.
Hey, Pete, how are you?
I’m good, Courtney.
How are you doing?
I’m good.
There’s a hot topic in AI circles these days that all the big players are getting in on: agents.
And there was a really good article in Quartz recently that broke down some of the latest developments in agentic AI.
So Pete, what do you make of this?
And what do we as executives need to know?
Well, so first, the CliffsNotes on this one.
We’ve heard it in other channels and from other people.
And we see it in this article as well.
2025 is meant to be the year of agentic AI.
Lots of companies, OpenAI and Microsoft among them, are referenced in the article as investing in the creation of AI agents, with offerings available as soon as January, as long as they’re not on an Elon Musk production deadline.
So what is an agent?
Let’s start there and I’ll try to describe it by mechanism of comparison.
We’re all now familiar with what an AI assistant is.
It’s the co-pilot that passively rides alongside you in your work and is available to take on tasks as you assign them with your prompts.
AI agents, by contrast, can be assigned not just tasks but entire jobs to be done, and they can handle them end-to-end independently.
The takeaway for business leaders then for me becomes, you have new resources in your workforce, how do you think differently about how work gets done going forward?
For instance, rethinking team roles: with virtual agents in the mix, how do you rethink your team structures and each of the jobs of the people on the team, onshore versus offshore, whatever?
This is the fabled moment we’ve been waiting for, where humans are freed up to do higher-value work.
It’s time to put pen to paper on what that actually looks like.
Secondly, rethinking contract labor roles, right?
Do you really need that next new contract worker?
Do you really need that next new consultant, that coach?
And if so, how do we hold them to a higher bar, if we have agents in our midst who can be independently helping us around the edges to get work done?
And what’s the greater expectation that we should have for the services teams that we pay big dollars to bring on board?
And lastly, implications for skill sets, right?
With new resources on teams, team leaders and team members need to be able to look at their work and ask themselves how does this happen differently going forward?
How do we map our processes?
How do we creatively think about problems we want to solve, pain points we want to eradicate, solutions that we want to develop?
All of these imply new skill sets for the people on the team and the people leading them.
I will mention in addition for anyone sort of intimidated by the notion of AI agents in our midst, the article offers a pretty fresh perspective on what these resources might be like to work with, right?
Speaking in multiple languages, working all hours of the day, best of all maintaining composure through thick and thin.
As I reflect on my time in work, you could have it a whole lot worse having resources on your team that show up with that kind of attitude toward work.
Pete, I couldn’t agree more.
Thank you for those words of wisdom like always.
Thank you, Courtney.
David and Mohan eat, sleep, and breathe AI.
So I was really excited to talk with them recently about what they see as some of the most likely big developments in AI in 2025.
David, Mohan, it is that time of the year again where everyone is peering into their crystal balls for 2025 to make predictions about what they expect will happen in the world of AI.
So obviously, we’re excited to get in on the action here again.
But before we do that, I want to take a look back at 2024, see how we did, see if we have any caveats to kind of what we thought was going to happen before we make our own predictions for 2025.
So the first prediction was there will be a Moore’s Law established for AI.
David, can you give a really just for people that may not be familiar with Moore’s Law, the quick synopsis of Moore’s Law?
So Courtney, Moore’s Law was the observation that Gordon Moore made that he was seeing a pattern whereby the number of transistors on an integrated circuit in a microprocessor would double every two years.
That he had seen that start and he predicted that it would continue to happen for the next ten years.
And it proved to keep happening, and Mohan, I think it was actually modified a couple of times from the time he first made the statement.
But basically, it was about the advancement of technology doubling at an unprecedented pace.
Exactly.
Faster, smarter, more efficient.
So you could build more complex applications on the same hardware cost base.
I don’t know if this one has really happened.
We’ve definitely seen some advances, but I think we’ve actually seen at least what is feeling like a slowdown in progress, right?
We’ve recently heard about the large LLM companies and how their next generation model doesn’t feel like as much of a breakthrough as the previous generation was to the one before it.
Now, at the same time, I’ve heard some recent commentary suggesting that what has actually stalled is the tests that we use to evaluate them.
When you have reached 90 percent plus, it’s really hard.
There’s a law of diminishing returns where you only have so much improvement left.
Maybe they are advancing just faster than our ability to test them.
But I think the key here was that Moore’s Law for AI would be established.
I don’t think it has been.
I think we’re still grappling with it and really dealing with the realities that we haven’t seen substantial hardcore breakthroughs.
I think we’ve seen some really interesting ones in the existing generations of the LLMs, but not necessarily that next law coming out of, oh, here’s how this is going to progress over the next 10 years.
There have been news reports in the last couple of months that it has slowed down a bit.
But the truth with how innovation works is there is always like a big bump in innovation.
And then there are what I’ll call micro innovations, right, that you don’t see.
That then add up to the next big thing.
I don’t think there is a slowdown here.
There are reports of a slowdown because of data availability and such.
But it’s probably more of the micro innovations that we don’t notice that are happening.
David, Mohan, the second prediction that was made about 2024 was that we’ll figure out how AI integrates with other technologies to drive true lasting change rather than being something that sits on the side.
So AI is clearly being integrated into more and more existing tools and technologies.
This one, I feel pretty safe to say, yeah, I mean, this has been true.
But I would love to get your thoughts on how it played out: is it close to what you expected, less so, more so, where are you landing?
I’d say on this one, we’ve taken half a step.
It seems like most products have a bolt-on LLM, right?
It’s truly not integrated into the user experience.
It seems like a bolt-on.
So there has been progress, but it’s half a step rather than a full step.
Yeah, I agree with that, Mohan, and I’ve been a little bit surprised with this.
I feel like we missed this.
I was fairly confident that we would see the advent of true AI-first enterprise products in 2024.
I expected some winners to start to appear and emerge, and I don’t think we’re there yet.
I really don’t.
When I look at some of the different areas where products have come out, they don’t seem breakthrough.
I don’t think we’ve seen the next level of platform company emerge, and there will be some that come out of this AI era.
So I don’t know.
Kick the can down the road.
Maybe that’s next year.
The third prediction was we’ll see the advent of the application layer of AI, and we’ll see a $1 billion valuation of an application layer company in 2024.
David, I think this was a correct prediction.
So some notable examples of this are DevRev.
This is more of a customer support software firm leveraging AI to integrate end-users, sellers, support and product teams, and developers all into a single platform.
DevRev secured a $100 million Series A funding round and had its valuation at $1.15 billion.
Glean Technologies, which specializes in enterprise-grade AI and search capabilities.
By February 2024, the company raised over $200 million in Series D funding and reached a valuation of $2.2 billion.
The next one is Cognition AI, founded in November of 2023.
Cognition AI developed Devin, an AI software developer aimed at automating coding tasks.
In April of 2024, the company raised $175 million, achieving a valuation of $2 billion.
Courtney, what I’d say on each of these is interesting that we started to hit them.
I think Glean probably hits closest to what we were thinking in terms of that application layer.
Interestingly, Cognition is focused on really technology-oriented, software-developer use cases.
We’re seeing a lot of those.
Those are much more of the market than I expected.
I thought we’d get to real business use cases beyond the technical side.
Glean, interestingly enough, I think probably closest to what Mohan and I have talked about a lot around knowledge management in the enterprise.
I definitely think we see that coming up.
We have some winners here, but like with the last question, I don’t know if it’s as fast and as transformative of companies as we expected.
I still think it’s going to get there.
I just think the application of AI is going a lot slower than we expected, compared to the adoption of just the core use of the LLM and the hardcore technology.
Okay.
So the next prediction that, David, you made in December of 2023 was that we’ll see the next level of UX evolve so that AI blends into our lives more and more, a la the transition from mainframe to mobile.
Yeah, you know, I definitely think that it has continued to blend in and become more available in existing things versus just be this chat bot we can go off and type into.
So I think it’s gone along this path.
Has it to the degree I thought or the way I thought?
I don’t know, that’s a good question.
I don’t think it has.
Until Alexa can answer my dang questions as well as ChatGPT, I don’t think it has.
That’s like my threshold, when that bar has been met.
But Siri does now, right?
You have your new iPhone, aren’t you taking advantage of that?
Honestly, as much as I’m an Apple fan, Siri and I just have never, we’ve never clicked.
You’ve never gotten along.
So we never got along.
So you’re working with the laggard.
That might explain your perspective.
I know, it’s so frustrating.
Yes.
Yeah, that might.
I don’t know.
What do you think, Mohan?
Yeah, I don’t think this came true.
Like we said, it’s more looks like a bolt-on to me.
So you look at some of the co-pilots, they have a chat interface now.
So I don’t think we saw the next level of UX yet.
The primary reason I think for that is, you have to rethink the application layer fundamentally with personas as opposed to just bolting it on.
And that’s a hard thing to do for the products that are at scale.
Well, and I’ll take this to the next level: I actually think we’re not going to notice this until we step back seven years from now and look back.
Because I actually believe the definition of user experience is changing fundamentally.
Right now, we think about this as the screen and how we click.
And success in early startups is often measured by usage statistics.
I actually think what we see, right, the most visible way we see this right now, is in a video call when the notetaker is there before anyone else, and it just starts doing its work.
We don’t have to do anything.
And then something shows up in our inbox, right?
That is the type of blending into our environment that I personally don’t even think we’re noticing.
But AI is being deployed all the time.
And so it’s not front and center top of our minds.
And we’re not thinking about how the screen has changed.
But the paradigm is shifting.
And so it’ll be interesting to see how that plays out.
Yeah.
So just to go back to your prediction, do you think we are seeing the next level of UX or is it just kind of getting subsumed?
That’s an interesting question.
Is it the next level of UX or is it a different X, right?
Is it a brand new thing?
It’d be interesting to see how it plays out, right?
I definitely, I think, was thinking about it as the next layer of user experience, but blending into our lives, right?
And so, you know, I think back a year ago, we had these devices emerging.
We haven’t heard about the wearable next-generation hardware in a long time.
The Rabbit is dead, I think.
I’m pretty sure.
Yeah, no doubt.
The rabbit is now in the stew.
But, you know, those types of things aren’t even talked about right now.
What does that mean?
I don’t know.
It’s interesting.
I still think there will be next generation devices.
But I think we tried to force it maybe too early.
What we’re seeing is the software layer starting to do some of that in just very, very subtle ways.
Okay, so really great to recap 2024 and kind of adjust in hindsight what actually happened, but I would love to go into predictions for AI in 2025, and we’re gonna do this snake draft style.
So yes, yes, love a snake draft.
So we are gonna go David, Mohan, Mohan, David.
All right, I don’t know why, you know, I could have like tossed a coin for who got first, but that’s just the way we’re gonna go.
So David, first up for 2025, and you know, based on this past year, that not only are you gonna have to make this prediction, but then we’re gonna talk about how well or not well your prediction was.
Yeah.
There’s a lot of accountability.
This isn’t like ESPN, you know, or the weather, where they only get 50% of it right?
Right, exactly.
So what, why don’t you throw out your first prediction?
Yeah.
So my first prediction is going to be a little bit of a hot take, I think, because I think one of the fastest growing areas of AI is the transcription services market, right?
How many video conference calls do you get on?
And before anybody else arrives, there’s a note taker there, right?
And these note takers are great, right?
They’re transcribing, but then they’re providing additional value, right?
You’re getting summaries and tasks lists, and depending on the one that you use, you know, doing different things.
And I think they’re wonderful.
But my prediction for 2025 is we’re going to start to see these commoditized a bit.
I think there’s a lot of competitors in the space.
I think it’s really hard to differentiate in this tool, and it’s ultimately going to be something that is just expected of any video conference tool.
And we’re going to start to see the beginning of the end of the separate company doing this, and we’re going to start to see them embedded within the video conference service itself as kind of an expectation.
I think of it kind of like when I first had the backup camera on my car, it was really cool and differentiated, and you could actually buy the add-on or the add-on GPS or whatever.
Now, if that doesn’t come with my car, I’m not buying the car.
And I really think that’s where this is going.
And I think it’s going to go there fast.
And so I expect 2025 to be the year.
All right.
First prediction in Mohan, you’re up next.
I’m going to predict that agentic AI is going to be a big theme in 2025.
There’s going to be a lot of talk about it.
There’s going to be a lot of buzz around it.
It won’t be fully implemented, just like many of the technologies, because it’s a big, big item, but it’s going to dominate the conversation.
So just as a refresher, an agentic AI is a piece of software that has a high degree of autonomy and can make complex decisions and perform tasks for you.
And then it comes back and says, I’ve done this: an agent that has done some tasks and made some decisions.
So we will see the rise of agentic AI in 2025.
And I would like for them to start with my laundry, if possible.
So I’d love to be your first use case.
I guess I need some robotics to make that happen, but that’s beside the point.
All right, Mohan, you get to go again, back to back.
What is your second prediction?
My second prediction is around gen AI, which are mostly standalone or bolt-on tools.
We’ll now take the second half step that we missed in 2024.
And you’ll see more of the products that have been built in a native AI way, right?
Meaning user experience is well integrated.
The engineering around it is kind of hard, right?
So the engineering and the data science and to build enterprise systems that take into account gen AI as an integral component of the architecture and build these systems and products, that’s going to be a very big theme in 2025.
So would you also advise everybody listening, if they didn’t budget for these types of things, that they might want to try to get a last-minute addition before next year?
Oh, there is time till December 31st, right?
And most companies don’t set their budget till the end of January or early February.
What?
That’s not true.
Great.
Great predictions.
David, last one here.
Yeah.
Well, I think it’s kind of unfair that Mohan got to do two in a row.
And I have these two hot takes in my mind that I want to add.
I really wanted to kind of provoke some thought here.
So, I’m going to add two more on because I can’t decide between these two, okay?
So, number one, I predict that in 2025, we are going to see our first rolling blackout due to the fact that AI energy consumption has caused some houses to not get the electricity that they need.
And then number two, I think on the same kind of shock-and-awe spectrum, I think we’re going to see the first wave of career displacement: an industry where a certain segment of jobs is going to be impacted in a way that makes the news and turns everybody’s attention and says, oh, this is happening.
Well, can I go back to, first of all, Mohan, are we going to let him get away with his double prediction?
It was so artfully done, right?
So you got three predictions, which means that there are three opportunities 12 months from now for us to score on.
There are three opportunities for failure, is what they are, based on last year’s predictions.
I do love that these feel really clear: when it hits the news that there’s been some kind of blackout, we’re going to be like, oh, that prediction did happen.
Yeah.
And actually, I can say that Mohan and I have heard a similar prediction to that at a live event, so there are other people talking about electricity and how it gets impacted by AI.
Yeah.
I tell you, I live in the heart of all these new data centers going up in Prince William County, Virginia, right next to Ashburn, where the old hub of the internet was, and I literally pass 50 new data centers almost on a daily basis.
Like it’s crazy, the number, and I just see them going up, and as soon as they come online, I’m thinking, I might be the one without electricity.
I would like to extend the invitation to move to Nashville to you two.
For everybody else listening, I’m sorry, it’s off the table for you all.
I really like both of David’s predictions.
The first one about job displacement, I think is going to be real in 2025, so I’d like to attach myself to that prediction.
Nice.
Because the stat in my mind that I’ve been thinking is, Sundar Pichai recently said 25% of the code in Google is being written by AI and not by humans.
Just think about how staggeringly high that number is, and then it’s going to continue to grow.
I mean, this would be so sad in a lot of ways, that AI would displace technology workers, because it’s kind of like you worked yourselves out of a job: all of you tech folks were so smart that you’re now going to be the ones to get displaced.
I do think there’s an element of truth there, though I also think that the technology workers, when you’re talking about using AI, have the ability to upskill themselves because they’re close enough to the technology to understand it, that I think it could be one of these very traditional ones where, yes, it displaces jobs, but it’s very easily creating new jobs that are adjacent.
And I think of all of the career segments that have the ability to be resilient in this AI era, it is the technology workers, right?
So to some degree, yes, but to a bigger degree, I’m not really worried about those folks.
I think they’re gonna land on their feet and be able to upskill.
You know, I always say that software engineers over a 35-year career need to be ready for seven major upskills, right?
So five years each.
So this is one of those things where if you have the ability to upskill, it’s gonna be a fabulous future.
Yeah, yeah.
And this one probably counts for two upskills at once, right?
Exactly.
David, Mohan, really strong predictions.
I can’t wait to talk about them at the end of 2025.
Thank you both.
Awesome.
Thanks, Courtney.
I didn’t say it earlier, but I do have a prediction for 2025, and it’s that you could really use an AI platform like Knownwell in your life.
No, seriously, stop getting surprised when a good client fires you.
You hate that pain, so let’s end it in 2025.
Start by getting real-time proactive intelligence on the health of your client relationships and recommendations on how you can keep and grow those clients you worked so hard to win.
Find out more at knownwell.com.
We’d love to show you what we’ve built here at Knownwell.
Andrew Abela is the author of the recently published book, Superhabits and the Dean of the Busch School of Business at the Catholic University of America.
He sat down with Pete Buer recently to talk about virtues, how technologies like AI can actually make us better humans and more.
Andrew, welcome.
I’ve been looking forward to this.
Same here.
To those listening, Andrew Abela joins us, author of the recently released Superhabits book, and we’ll talk a little bit about this, but he has a rich background relevant for our listeners beyond that.
Andrew, if you wouldn’t mind, could you start us off with some background on what listeners ought to know, and then we’ll get chugging?
Sure.
I run the Busch School of Business, a relatively new business school at the Catholic University of America, and we specialize in graduating students who are very successful in making lots of money by doing it with a conscience.
Perfect.
Awesome.
Thank you.
So we’ve talked about this, but recently you participated in a major event that was organized around the topic of what it means to be human, and you were a keynote in that session, bringing content and thinking of your own.
Can you tell us a little bit about the motivation for that event in the first place, and why this is a newly relevant topic, this notion of what it means to be human?
Well, a big part of the impetus, as you know, is the general topic of your podcast series, which is AI, and the astonishing progress that AI seems to have made in just the last few years, for those who haven’t been paying attention, is raising a lot of questions.
I was an undergraduate in computer science 40 years ago, and the Turing test was the gold standard for Can Machines Think?
We blew through that, what, 10 years ago, I think?
And now, it’s just, last year, it was, well, these things are great, but they hallucinate.
Well, they don’t hallucinate anymore, not very much.
So, the question starts to be, well, what’s different between the machines and us?
You know, what is?
So, that’s the big question.
I think it’s on so many people’s minds if they’re paying attention, right?
And recognizing we don’t have a week for the conversation, which is kind of what we’d need to tackle it properly.
Can I get your take on what it means to be human?
I suppose it’s a Douglas Adams sort of number 42 answer.
How do you think about it?
Right, right.
Well, there is…
That may be true also.
Yeah, yeah.
I honestly don’t have a pat answer.
If I had to give you a pat answer, it would be one in religious terms.
I work at The Catholic University of America, so you know I’m a practicing Catholic.
But the difficulty with that is it makes complete sense to somebody who shares a religious point of view, and it makes no sense to someone who doesn’t, you know?
It all comes down, I think, to the question of consciousness.
What does it mean to be conscious, to be self-aware, right?
The idea is that the poorest, weakest, least intelligent human being is more conscious than the most massive planet or supernova, you know?
Or the most complex and best AI tool we have.
There’s no LLM that is aware that it exists, or that is aware of anything in that sense, you know?
Now the difficulty comes, how do you prove that?
Because if you ask it, it will tell you, you know?
So as you say, we would need a week to really plumb the depths of this.
And actually, I would be out of my depth soon enough because I’m a professor of marketing, not a philosopher.
So you referenced AI, and of course, as you said, that’s the focus of the podcast.
As we introduced AI into the conversation, what role does AI play in our efforts to optimize our humanity?
Is it a threat?
Is it an enhancer?
How do you see it as an actor in the mix?
Like every tool, it has the potential to be both, and the stronger it is, the greater of both, right?
And so the thoughtfulness is how to make sure that it is constructive and not destructive.
And one way I heard it put: there was, just a couple of weeks ago, a major conference in the Vatican on AI, with a number of people, scientists and others.
And one of the best lines I heard coming out of it is, a good measure of good AI is that when the human being interacts with it, the human being leaves better as a result, right?
I love the book in a vacuum, just for guidance on how to think a little bit differently and more deliberately about how to improve ourselves day in and day out.
But let’s take it on behalf of the executive listener.
What’s the point?
Why are we working so hard at these things?
And how do we relate it back to the influence of AI?
The way I think about it is what this book helps us see. Now, the book is not about AI; it's about the virtues. But when you put the two together, it helps us see what is uniquely, distinctively human and what AI can do best.
So what’s the comparative advantage of human beings versus AI?
I had some fun discussing this with ChatGPT, looking at literally the different virtues, the different super habits that make up decision-making, that make up the virtue of practical wisdom.
And I was asking ChatGPT, OK, so for example, goal setting.
What are the advantages of human beings in goal setting and what are the advantages of AI?
And it was a really clever and insightful analysis.
And then I tried to trick it because I thought, well, one distinctive advantage of human being is we actually set the goals.
You know, like the AI just responds to the goals that we set.
So I just, as a prompt, said, okay, set a goal.
And it came back with a goal.
But it says, this is an example of a goal.
I said, no, no, I don’t want an example.
I want a goal.
And it came back and said, okay, the goal of my interaction with you is to make you more successful or whatever.
I was like, wow, okay, so it can set goals, you know.
So scratch that one off the list of uniquely human advantages.
But the responses I got from ChatGPT pointed part of the way toward an answer: it talked about the human ability to have intuition, a sensitivity to nuance, a sensitivity to other human beings, that AI has difficulty with.
But using practical-wisdom decision-making as an example, it's always the human being who initiates, right?
AI is never going to initiate except in response to something.
Right, unless it’s told to initiate.
It came up with a goal, right?
But I had to ask it to come up with a goal.
I was surprised that it came up with a goal, but I still had to ask it, you know?
So there has to be a prompt to start it.
Somebody has to do something.
And so the idea is, as you go through each of these virtues, you’re saying, okay, what should I be getting better at to become more human, to be more successful as a human being?
And in doing that, what am I leaving behind?
What can I leave behind for AI to fill in?
You know, because I think everybody who’s paying attention realizes that it’s the collaboration between human beings and AI.
That is the way forward.
It’s not a question of AI replacing human beings in the same way that calculators didn’t really replace human beings.
I mean, this is calculator on mega steroids, but nevertheless, we lose some skills, but we gain some capabilities.
So my dad, when he was a young man, was a banker.
Most of his life was a banker.
And they didn’t have calculators when he first started out.
So he has this incredible ability to scan a line of numbers and just add them up all in his head.
Now that’s a great party trick.
It’s not particularly useful.
It was very useful back then.
But I don’t feel bad that I can’t do that and my kids can’t do that.
You know, they can do other things.
One last question before we dismount.
And it’s a two-parter.
As you look to the future in the way we've described it, with work changing and the role of the human in work changing, what makes you nervous and what are you most excited about?
Nervous is the pace of change.
There’s a lot of stuff that we didn’t anticipate, can’t anticipate.
And there are always rogue actors, you know.
I think it was Reinhold Niebuhr, who’s a Protestant theologian, who said that the Christian doctrine of original sin is the one that is most empirically verifiable.
That's the doctrine that says we are good but flawed.
Just look around.
There’s just a lot of bad people out there or people doing bad things, you know.
Sometimes accidentally, sometimes intentionally, you know.
And so any tool magnifies human power.
AI is going to magnify human power tremendously.
So what worries me is the harm that could be done by malicious folks.
I am not worried for a second about AI taking over the world.
I am really worried about people, malicious bad actors, using it to do devious things.
Yep.
What am I optimistic about?
I’m optimistic about human flexibility.
I mentioned neuroplasticity before, you know, the ability to grow through, particularly through habit.
And there's good research that justifies all the excitement about habits we've seen over the last several years: the single most powerful method of behavior change is forming a new habit, ahead of emotional appeals, ahead of knowledge, skills, and training.
And so I'm hopeful that people will recognize that they're not limited to where they are right now. Talk about a growth mindset: we all have the ability to become our best selves, and the best version of ourselves is usually so far ahead of us that we have no clue until we start trying to grow into it.
And then you add the technological supports that we’re creating.
It could be terrific, you know?
Back to a phrase you used earlier about the best measure of AI's success: if we come away from our experience using it as better humans, then it's doing its job. Maybe that's a good place for us to wrap.
A good, optimistic place to end.
Awesome.
Andrew, it’s a delight, it’s wonderful to see you.
You may be older than me, but I’m doing everything I can to catch up to you.
Wish you all the best.
Thank you.
Thank you, Pete.
That was great.
Thanks as always for listening and watching.
Don't forget to give us a five-star rating on your podcast player of choice.
And listen, we would really appreciate it if you would give us a little Christmas gift: if you would share this podcast or leave it a review, it would be such a gift to us.
At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand.
So, hey Gemini, welcome to the show.
This episode, we’re sharing our predictions for AI in 2025.
Care to make your own prediction?
Hey there, 2025 is shaping up to be an exciting year for AI.
I predict we’ll see even more advanced AI tools for everyday tasks from writing emails to creating art.
Plus, AI will likely make significant strides in health care and sustainability.
And now, you’re in the know.
Thanks as always for listening.
We’ll see you next week with more AI applications, discussions and experts.