AI Knowhow Episode 97 Overview
- Why AI today feels like the “dial-up stage” of the internet, and what leaders can learn from that analogy.
- Why companies must plan to move beyond pilots, an increasingly urgent necessity as AI becomes more ubiquitous.
- Dr. Pia Lauritzen shares why the biggest risk of AI is forgetting what makes us human in the first place.
AI Roundtable: Drawing the Line from AOL to AI
As AOL shutters its iconic dial-up service, we reflect on what that moment means for leaders navigating the AI era today. What stage are we in, and what’s next? What can looking back on the last thirty years and evaluating how broadband and internet access changed the world as we know it teach us about what the next thirty years and beyond will look like?
Knownwell CMO Courtney Baker is joined by Knownwell CEO David DeWolf and Chief Product & Technology Officer Mohan Rao, who draw parallels between the internet’s evolution and AI’s trajectory. They look at why companies must begin to move past experimentation so they’re not sitting on the sidelines as AI becomes as interwoven in our lives as the internet.
“The thing that’s hard for all of us to get our heads around when we’re talking about innovation like this is, look at where we are today,” David says. “This is thirty years after the internet. And we can now imagine the progression that’s been made with always-on access in our homes because we’re living it today.”
One of the big challenges today is imagining a world where today’s LLMs and AI platforms are like the water pipes or electric utilities of the future: something everyone has access to and can build with (or upon).
Expert Interview: Dr. Pia Lauritzen
If the roundtable discussion and trying to imagine the world thirty years post-ChatGPT-release isn’t enough to stretch your brain, this week’s expert interview should help.
Philosopher and author Dr. Pia Lauritzen joins Pete Buer to share a different view of AI risk than the P(doom) declarations that dominate the headlines. Forget Skynet or AI-induced nuclear annihilation. The real threat is if we lose sight of what makes us human in the first place. What happens if we start thinking of ourselves as less than and start offloading too much to machines?
Here are some of the highlights from Pia and Pete’s discussion:
- The real existential risk of AI: “The biggest risk that faces humanity is if we stop realizing and reconnecting with what makes us human,” Pia says.
- Prompting vs. questioning: Pia warns that prompting a machine is not the same as asking questions. True progress comes when we question assumptions, use cases, and even whether technology belongs in certain domains.
- A reminder for leaders: Technology pushes for speed and productivity, but leaders must ensure it doesn’t overshadow values, relationships, and human connection.
This is part one of a two-part conversation with Pia Lauritzen. Part two will continue next week; be sure to subscribe wherever you listen for the rest of the conversation.
Watch the Episode
Watch the full episode below, and be sure to subscribe to our YouTube channel.
Listen to the Episode
You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
Show Notes
- Visit Pia Lauritzen’s website
- Connect with Pia Lauritzen on LinkedIn
- Read Pia’s regular column in Forbes
- Connect with David DeWolf on LinkedIn
- Connect with Mohan Rao on LinkedIn
- Connect with Courtney Baker on LinkedIn
- Connect with Pete Buer on LinkedIn
- Get a guided Knownwell demo
- Follow Knownwell on LinkedIn
Do you remember this sound? Well, it’s time to say goodbye as AOL announces, I know, you can’t believe this, the end of their dial-up service. What’s the connection between what AOL introduced and the coming years in AI?
And did anyone else’s significant other ask them out on AIM for the first time? Because mine definitely did. Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.
As always, I’m joined by Knownwell CEO David DeWolf, Chief Product and Technology Officer Mohan Rao, and NordLite CEO Pete Buer. We also have an interview with Dr.
Pia Lauritzen about why the existential questions around AI don’t necessarily involve Skynet and Terminator robots. But first, grab your climbing ropes and carabiners because it’s time for another installment of AI in the Wild.
Pete, Axios just reported that AI is already displacing jobs, but mostly in outsourced and offshore roles rather than US staff. What’s the takeaway here for leaders?
Courtney, you didn’t say hi to me, by the way.
That is so true, I didn’t. Hey, Pete, you’re so right. The story was so good.
I needed to just jump in. Hey, Pete, how are you?
I’m good, Courtney, how are you?
I’m doing well.
So Axios is citing an MIT study that’s been getting a lot of headlines, The 2025 State of AI in Business. And the big finding, as you mentioned on jobs, is that the majority of AI impact thus far has been in outsourced and offshore roles.
About 3 percent of total outsourced and offshore roles have been displaced by AI. So, at least in the near term, I guess that spells temporary good news for the US labor market. Focusing on external roles mitigates the impact on direct-hire employees.
But of course, as you can tell by the way I set it up, this shielding will be short lived as companies grow more proficient in deploying AI and teams grow more comfortable with the notion of augmentation and automation.
The focus will expand to other jobs closer to home. The MIT study in fact predicts ultimately 27 percent of jobs could be at risk. We’ve seen lots of other data sources that offer even bigger potential numbers than that.
There’s of course some bad news in all this for business process outsourcing and staffing companies in the US, if not US workers, especially those that are competing on the basis of cost differential as opposed to rare capability or skill provision.
All of a sudden that BPO company that’s been running your finance function or doing your recruiting and talent sourcing, isn’t the low cost provider anymore.
You can extend that logic to all our faithful offshoring partners that we’ve been working with for decades now, entire nations partnered with US companies across time.
Final thought, as I mentioned when I opened, this is the MIT study that’s been featured in the news elsewhere. This is where the bomb was dropped, about 95 percent of AI investments returning zilch in terms of ROI.
Once you get past the sensational headline, the TLDR of the conclusion is that the early plays that are in fact returning meaningful ROI are the ones that are focused on efficiency and cost savings.
The innovation strategies either are faltering or just haven’t played out yet.
This is exactly what anyone would have expected from the deployment of a new technology and the order of operations in business, especially when you’re answering to financial markets.
I don’t buy into any of the sensation around that 95 percent number for what it’s worth.
One other interesting thing that I found from the report was sales and marketing being the test bed for AI, which I just think is interesting. It makes a lot of sense that those revenue-facing roles have more leeway to take some risk and try new things.
It’ll be interesting to see how that plays out over time.
Yeah, absolutely. Because I think also that’s a place where you get the benefit of both efficiency and innovation. In the order of operations, I suspect we’ll be hearing from that space second.
Well, Pete, thank you as always.
Thank you, Courtney.
Internet pioneer AOL recently announced it’s shuttering its dial-up internet service.
Given the company’s role in bringing millions of Americans into the Internet era, I wanted to get David and Mohan’s opinion on what the death of AOL dial-up means, especially as AI has surged into the mainstream in the last few years.
David, Mohan, I don’t know if you two saw this, but this was, man, it was so nostalgic for me. But in the news, AOL has announced, which is kind of shocking, this is a thing, officially ending its dial-up service next month.
Co-founder Steve Case wrote what I really thought was a heartfelt LinkedIn post reflecting on the impact of pioneering Internet companies like AOL. For many who are of a certain age like me, that famous modem noise.
You’ll never forget it, will you?
I will never forget it, and I will also never forget sneaking downstairs, which is where the family computer was in the living room.
And trying to get on AOL without my parents hearing me in the middle of the night, that was the epitome of sneaking around for this.
How did you muffle the sound? That’s what I want to know.
Oh no, I didn’t muffle it. I just prayed hard, like, please don’t hear it, please don’t hear it, please don’t hear it. That was my jam.
There was no way to, I mean, could you muffle it? I don’t know. I don’t think so.
I thought you had the secret to life there for a moment.
No, I didn’t.
But obviously that sound, man, it is nostalgic, that slow, clunky, but opening up a new, entirely new world of possibilities, aka talking to our friends on AIM. That wasn’t a thing yet, but AIM, AIM, yeah, AIM.
Actually, my husband asked me out on AIM.
True story. Wow.
I don’t know what to say to that one.
Yeah, yeah. I think that’s pretty common these days, but back then.
Yeah, yeah.
Apparently, it worked.
Obviously, yeah, and we knew each other, but we went to college in separate towns. We were long distance. Broadband obviously arrived and obviously, being online was seamless and always on.
I think AI is mirroring that same inflection point where it’s moving from a thing you log in to, to something just embedded into the fabric of our work and our lives.
So today, I just, I wanted to, as we reflect on that nostalgia, to think about this shift, what it means for us today for leaders, especially those who may still be in that dial-up stage, still sneaking around to try out the AI without mom and dad
finding out. So David, what do you think, does this analogy hold up for you?
You know, one of the things we’ve talked about a lot on this podcast is our notion of ambient user experience, which is AI everywhere all the time knows what you need before you need it and meets you where you’re at, right?
We definitely have not realized that dream yet, though I think you see steps moving in that direction.
Yeah, can I, real quick, PSA, I’m afraid to say her name because she’s in this room, but Alexa has tried to step up the game.
I got to be honest, my children are really mad at it because it’s realized it can tell children and adults apart and it will not let them do certain things. I knew she was going to wake up. Okay, she’s talking now, but it’s tried to do better.
It’s made progress, if you’re wondering about the lady whose name starts with A.
Yeah. No, I think we’re moving in that direction, but I think the thing that is hard for all of us to get our heads around when we’re talking about innovation like this is, look at where we are today.
This is 30 years after the internet and we can now imagine the progression that’s been made with always on access in our homes being available because we’re living it today.
But to do the same thing and to forecast that forward is something entirely different. But I do think it is, in a very real way. We’ve talked before about how AI today in a lot of ways, going and asking ChatGPT, feels like a green screen.
Well, it’s come a long way even since then. We said that a year and a half ago. I don’t think we’d say that again still today, even though it’s still pretty rudimentary.
I think over time, yeah, I think there is a great analogy.
I think at some point in time, OpenAI will shut down ChatGPT as we know it because there will be a different interface, because there will be a different experience, because it will have matured so much.
But as you’re living through it, it feels so cutting edge that it’s kind of hard and you end up doing what you did and sneaking downstairs to use it. You just gotta have it, right?
And I think that’s the phase that we’re at right now is those early innings.
Yeah, you know, first of all, AOL was an amazingly groundbreaking company, right? So here, I live in Northern Virginia, and a lot of people, including my wife, will say it’s the best job she’s ever had.
At least, you know, so it was just a remarkable company and got millions and millions of Americans onto the Internet. Like it did to Courtney and also me and several others, right? So it was groundbreaking.
I think the analogy is right on. I think we’re going from AI tools being more gimmicky, experimentation that’s the equivalent of AOL mailing those CDs, to Verizon actually putting fiber in the ground, right?
So that transition that happened over three decades, like David said, I think we are in that process of early chatbot experience into more of the future and we can talk about what the future is, but that transition has started and is happening
already. Yeah.
And Mohan, to me, I think the point for us leaders to take away from that is, I think it was easy in the early days of the internet, even though we were excited about it, to kind of treat it like an experiment, right?
And I think we’re in a prolonged experimentation phase with AI for a lot of companies, right? Those that are truly leading are the ones that are not using this as an experiment anymore.
They have figured out how to fundamentally transform their business by leveraging the latest and greatest technology, and they’re pushing the envelope, realizing that, just like the internet brought about digital transformation in business
Exactly.
I mean, you know, more and more, the way we ought to think of AI is like Wi-Fi. Everybody has it, right?
The question is, how are you going to, you know, lay down the water pipes of the business that’s running the operations of the company, and that’s the transition that we’ve got to talk about, and that’s the topic a lot of the leaders are grappling with
right now. I think the early days of experimentation, a pilot here and there, I think we are done and dusted. We need to move on to the next things.
I love that you actually just said the water pipes, because I do think the utilities are a great analogy for one of the bottlenecks I’m seeing to this adoption and getting past experimentation, which is that I think a lot of folks are still stuck in
data integration, data engineering, data hygiene mode. It’s cliché to say, well, you have to have your data right first.
The brutal reality of AI is we can leverage the artificial intelligence not just to produce the output, but to actually understand the input.
In that world, allowing it to just passively listen and consume the natural information flows that are going through the organization, can dramatically expedite the process of getting plugged into the grid, to use that analogy of utilities.
And I think that’s a big mind shift that a lot of executives have to get around.
Yeah, more and more it’s not going to look like a tool, and it’s going to look like the ambience in which you’re working and conducting the business.
Yeah, I think it’s really interesting and I’ll be excited when we get there. Even more recently, for me personally, just, you know, we talked several episodes back or a few months ago about our shift to using an enterprise account of OpenAI.
Totally game changer for me in what it’s done for knowledge management. And it’s weird. It does like you have to change some patterns and how you’ve done things because you just kind of default to the way you’ve always done it.
Even we do. We’re an AI company. You would think we were like just naturally geared this way.
But even for us, there was a project that I took on that I was dreading. And I realized as I was getting into it, all of a sudden I was like, oh my gosh, wait, I should just feed all of this in.
But why didn’t I default to that from the beginning? It still took me a minute to think, what am I doing? Take all of this, get a better V1, V2, and especially now having access to our enterprise data.
The results were fantastic. It was probably more like a V2 that needed some cleaning up, that I was just really impressed with.
So even that, there are all these steps that are happening, but still, for me, I’ve just got to keep rewiring my brain to think about these new tools, these new technologies and how to deploy them more efficiently.
It’s cliché to talk about change management, but what you just described was change management for yourself. The brutal reality is, forget everybody else and influencing them to change.
It’s hard to get outside of the existing default mindset of how we work and how we operate. But that personal change management, I think, is one of the big challenges.
It’s hard to have organizational change management until you have a few champions that can actually lead the way, and it is simply thinking different. We just have habits that we get up and this is the way we’ve always done things.
I think that is probably one of the biggest hurdles that I’ve seen.
Well, maybe this is helpful, maybe if you’re like me, maybe this is a helpful reminder today to think about one process.
Maybe it’s something you are really dreading doing like I was in your business, and that still feels like dial-up, slow, manual, or siloed, and explore how AI could make it always on or just a lot easier. David, Mohan, thank you as always.
Great. Thanks.
Thanks so much.
Here’s a quick stat that might surprise you. Expanding existing client relationships is up to 95 percent more profitable than winning new ones. Yet most firms still celebrate new logos like it’s 1999 while revenue quietly leaks out the back door through churn.
That’s why we built the Commercial Relationship Playbook. It shows you exactly how to strengthen client partnerships, anticipate risk and turn relationships into your most reliable growth engine.
If you want to shift from chasing deals to compounding revenue, grab your free copy at knownwell.com/relationshipplaybook. Dr. Pia Lauritzen is a philosopher, keynote speaker and author of five books.
She writes a regular column for Forbes, asking big questions about tech and transformation.
She sat down with Pete Buer recently to talk about why the real threat of AI isn’t mass extinction, but resigning ourselves to the idea that better, faster and more innovative is always better.
This is such a rich, important conversation that we’re bringing you part one this week and part two next Monday.
Pia, welcome. We’re so happy to have you on the program.
Thank you, Pete.
May I start with a little bit of context for listeners on your background and where AI fits in? As you know, that’s the focus of the program.
Yeah, so I have a background in philosophy.
I’ve done university research on philosophy and I’ve been a tech founder for a while, building digital technology, trying to democratize the power of questions, as I typically say, because my philosophical research shows that when human beings ask
questions, that’s when we learn and grow and evolve. So if we are not allowed to ask questions, then we make it very hard for ourselves and each other to actually evolve as a species.
So I’ve been spending my life at the university doing research and also in the industry, trying to build technology, to design technology, to help people in organizations and in society to ask more questions, have more people asking more questions.
And it fits into AI in that, well, AI is the technology right now that everybody’s talking about, everybody’s trying to get used to and finding out how to deal with. And it’s very much designed around the questions we ask and don’t ask.
So it feels like my background in philosophy and in working with questions makes a lot of sense when it comes to understanding and using AI.
Awesome, and I’ll say it all comes together so perfectly in your recent Forbes article, which I loved and I’d like to spend a little bit of time on. The article is entitled, From Existential Threat to Hope, A Philosopher’s Guide to AI.
I highly recommend folks Google it and read it because it’s terrific. And if I could start off linking to the article, you reference threats that AI poses.
And of course, we’re all familiar with so many versions of the story from Hollywood and everywhere else. But you see a different type of existential risk on the horizon due to AI. Could you tell us a little more about that?
Yeah.
I think that coming from a philosophical perspective, the biggest risk that faces humanity is if we stop realizing and reconnecting, constantly reconnecting with what makes us human.
So it’s not something from the outside coming and wanting to wipe us out or giving us weapons that we don’t know how to use, or we use them against each other or something like that.
For me, it’s a bigger threat that we, without even noticing, forget what it means to be human.
That we start behaving more like machines, or that we stop behaving at all because we don’t think of ourselves as worthy of thinking or of doing anything or making decisions.
We just leave these things to the machines because we assume they are wiser than us and more capable than us.
And I think that is a big risk because humans, I believe strongly based on my research also, we are the animal that is not meant to do something specific, but to actually make up our own mind, to ask our own questions, to find our own position in
this world, to do what makes us able to connect and to commit to something bigger than ourselves. And making these decisions requires us to ask our own questions and to make up our own mind about what is and isn’t important.
And there’s something about AI and about the technology that is, it’s actually it’s a paradox because on the one hand, it has sparked a lot of questions. A lot of people are asking very philosophical questions that they didn’t use to.
If we only look 10 years back or five years back, it was only philosophers or regulators or very, very visionary politicians that were asking these kinds of questions. What do we want for humanity? What do we want in society and stuff like that?
What do we believe is good? But now everybody is talking about what does it mean to be human? And how are humans different from machines and from animals and from gods?
And we are asking very existential questions, all of us now. So on the one hand, AI has sparked this very, very important philosophical discussion.
And on the other hand, the way the tool is being used is to kind of put our own questions on hold while we’re busy prompting. And prompting is not the same as asking questions. Prompting is adjusting our questions to the machine.
But asking questions is something completely different. It’s about questioning the assumptions that were built into the machine, questioning the use cases that we are told we need, while in fact we don’t need all of them.
We don’t need technology to make friends or to feel less lonely or to connect with our loved ones or something like that. We don’t need machines for that. We need machines for something else.
And in a way, AI is making it more difficult for us to ask these questions that it cannot answer. It kind of wants us to ask the questions it can answer, the problems it can help us solve.
So there’s a risk that we forget asking the questions that only we can ask and only we can come up with a way to navigate.
So that’s a risk, as I see it, that we forget this basic human feature of asking questions that may not have answers, but we ask them anyway, because that’s how we grow and that’s how we learn.
I have so many follow-up questions, but let me see if I can get them in an order that makes sense. My first one is, and I hope this doesn’t sound oblivious or stupid.
What is the cost of not attending to the existential questions, of not asking important questions as we go?
I typically use this picture of, and now I’m from Denmark, so we see what’s going on in the US from a distance. But we hear a lot about the mental health crisis. We have that too in Denmark.
I think that’s the case all over the world. But we also talk about the crisis that has to do with the misuse of narcotics and the fentanyl crisis. And we see these things from across the pond.
And seeing these pictures of cities where people are zombies standing and not living, not moving, not talking to each other, not connecting, that is kind of, that’s not something we have in Denmark, at least not yet. But that’s what I fear.
Not necessarily that we numb ourselves with drugs, but that we numb ourselves with technology.
That we stop using our unique perspective, our unique ability to connect with each other and to constantly adapt and to see opportunities and see ways to deal with problems that we don’t, we will never be able to solve them on our own.
Not as individuals, not as societies, not as nations. We will need to come together somehow. And we can do that.
I believe very much in that. But if we are not asking these kind of questions, and we’re not trusting ourselves to do our own thinking, then we will be, yeah, a world of zombies. That’s a big risk, as I see it.
I love the related distinction that you drew in the article, about the fact that there’s not so much of a difference between good and bad applications of AI. Because even with good applications of AI, if that’s how we’ve conditioned ourselves to think, we are still unfree from technology, dependent on technology. And it is as much a problem as a bad use case.
Yeah, I think that is definitely another risk that we think that if we only think of the technology, and we build it to do good, and we practice and educate ourselves to use it to do good, then we won’t have a problem.
Thinking that way is being blind to the fact that technology has an essence of its own, so to speak. It has its own inner logic.
I also think about it in terms of when we talk about human-centered or human-centric technology, that we want to build technology that helps us do good. The thing is that the only thing that is human-centered is humans.
Everything else is everything else-centered. A dog is dog-centered, a bird is bird-centered, and technology is technology-centered. That’s just how it is.
And we think, no, it’s not, because we created it.
We can build it to do this instead of that, but all technology has this essence, you could say, of wanting to, or at least, it’s not like it has a will, but it is designed to make us believe that we can use it for whatever we want.
We can use it for good, or we can use it for bad. And that’s not the whole story. The whole story is that technology is meant to speed up things.
It’s meant to make us more productive. It’s meant to make us ask fewer questions about what is good and more questions about how we can be faster or better at doing whatever the technology is designed to do.
So we need to remind ourselves and each other to ask these kinds of questions, and basically also questions like, do we even need technology for this?
Does it even make sense to talk about technology when we are talking about mental health crisis, for instance, or loneliness, for instance?
No, it does not, because no matter how much technology, how good technology is to make me think that I’m not alone, to communicate with me as if I was surrounded by other human beings, it’s just a matter of time.
I think before it dawns on me that I’m not getting what I need, because I need to be touched by another human being, I need to be looked in the eyes by another human being, I need to be recognized as something that is human-centered because it is
human. So that’s kind of the idea.
It gives me hope that because you’re traveling in the right circles, to hear you say that everyone is asking these sorts of questions now.
I’m probably closer to business leaders and the rush to exit on behalf of your private equity investment in the next 18 months.
And so the decisions have such a maybe pragmatic or such an end-in-mind outcome framing, that I don’t hear the questions getting asked enough.
Is there something fundamentally different about business leadership versus philosophy or the incentive structures that keep business leaders from being as open to these kinds of questions as they should be?
Yes, I think there is. And I think that people doing research, whether in tech or in leadership or in education, for instance, they are probably… It’s kind of their job to ask more questions.
It’s not part of… Necessarily, it hasn’t always been, at least, part of an executive understanding of her or himself, that their job is to ask a lot of questions.
It’s more to know the answer and to make sure to make the decisions that enable the company to survive at a time where that survival is at stake in many cases, for many companies, in many industries.
So what I think, there are front runners in these kinds of discussions, and then something else will happen after that.
And I think we will see a lot of, as we all always do with new technology, we always see a lot of people being busy running and being first and showing that we can do it.
That we will be the one who utilize this technology in a way that makes us succeed. And then someone else is busy saying, hmm, let’s see, because maybe the first movers won’t be the winners.
Maybe it will be second or third movers, because they will be able to do it a bit differently. You could see it nationwide as well. The US and the EU have very different strategies in how to deal with AI.
And if you just look at it very quickly, it looks like, well, it will be the US and it will be China who will fight and one of them will win this AI battle or competition or race or whatever.
And I just think there’s a reason why the EU is doing something different, because it’s paying attention to the speed and listening to the history of technology and the history of everything else, basically, that if you just run without asking
questions, if you just assume that the first one is the good one, then you will pay a price later on. And maybe these things are connected, you know, what we are talking about, mental health crisis and we are talking about these things with
technology. And I think more and more leaders, all I can say is that I’m getting invited to more and more conferences with leaders to share some of these insights.
And I think it’s because more and more leaders are realizing that we also need to make room for some of the questions and not just assume that we have the answers and not just assume that the people who say they have the answers, because they have
the billions of dollars, that doesn’t mean that they have the answers that we need. Maybe we need something else. So you’re totally right. There is a basic difference, but there is also dialogue at the moment.
That’s awesome.
Thanks as always for listening and watching.
Don’t forget to give us a five-star rating on your podcast player of choice. And listen, we’d really appreciate it if you would share this episode with someone that you think would enjoy it.
At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand. So hey, Perplexity, how’s it going? Today, we’re talking about going from the dial-up days of AOL to always-on ambient AI.
What parallels do you see between the early days of the internet boom and where we are today with AI?
The vibes feel pretty similar. In the dial-up days, people were just figuring out what the internet could do for them.
And now with AI, we’re in that same messy, exciting phase of experimentation where it’s clear the tech is huge, but we’re still shaping how it fits into everyday life.
And now you’re in the know. Thanks as always for listening. We’ll see you next week with more AI applications, discussions and experts.