AI Knowhow Episode 70 Summary
- Companies looking to build out AI innovation teams can learn some important lessons from the Lean AI approach, which has its roots in The Lean Startup, by Eric Ries
- Once you chart an AI path, it’s vital to include the right data and technical talent from the jump, a change from the traditional order of operations
- DeepSeek has been all over the news for its performant (and most importantly, much cheaper to build) R1 model. How should business leaders see this news?
Building an efficient, effective AI team is no small feat, and it requires a different makeup than the product and digital teams that many companies have become adept at building in recent years. As businesses race to leverage artificial intelligence, often at great expense, they don’t have room for waste in what they build. This means having the right AI team in place from the start.
Pete Buer kicks things off with Ben Hafele, CEO of Lean Startup and host of The Lean AI Podcast. They cover:
- What skill sets are essential for success on AI teams
- How to balance technical and strategic talent
- Why AI teams need a different structure than traditional innovation groups
- The remedy for a leadership team that’s misaligned or swirling on its overall AI strategy
As always, host Courtney Baker is joined by CEO David DeWolf and Chief Product & Technology Officer Mohan Rao for a panel discussion where they further break down the key differences between AI innovation teams and traditional ones. Plus, we cover a few more burning questions about building the right AI team, like does Amazon’s famous “two-pizza rule” still apply in the AI era?
To kick the episode off, we dive into what might be the most game-changing week in AI since ChatGPT launched. With Producer Will Sherlin stepping in to help, Pete Buer unpacks the frenzy around DeepSeek’s new open-source AI model. How did this Chinese startup manage to train a cutting-edge model at a fraction of the cost of OpenAI and Meta? And why does this mark a “Sputnik moment” for AI development? From business leaders to investors, the ripple effects could be massive.
Watch the Episode
Watch the full episode below, and be sure to subscribe to our YouTube channel.
Listen to the Episode
You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
Show Notes & Related Links
- Connect with Ben Hafele on LinkedIn
- Subscribe to The Lean AI Podcast
- Connect with David DeWolf on LinkedIn
- Connect with Courtney Baker on LinkedIn
- Connect with Mohan Rao on LinkedIn
- Connect with Pete Buer on LinkedIn
- Watch a guided Knownwell demo
- Follow Knownwell on LinkedIn
Okay, listen, you and I know that you are super smart, right?
Like you’re listening to this podcast.
You’re doing things, you’re an executive at a really great company, you know how to drive gross margin and EBITDA, and you even get your quota of 75 uses of ICP into every conversation that you have.
We get it, you’re smart.
But do you know how to build the right AI team?
It’s hard, and we’re all starting from scratch here.
Well, we’ve got you covered.
We are not actually starting from zero because today we’re talking about how AI innovation teams differ from traditional innovation teams.
And we’re also going to talk about pizza.
Yep, that’s right, in this new era, does the two pizza rule still apply?
And if so, like seriously people, let’s make sure we get at least one Supreme.
Okay, it’s the best pizza.
Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.
As always, I’m joined by Knownwell CEO, David DeWolf, Chief Product and Technology Officer, Mohan Rao, and NordLight CEO, Pete Buer.
We also have a discussion with Ben Hafele about how to build the right AI team.
But first, let’s check in with Pete Buer for his take on what’s arguably been the most consequential week for AI news since ChatGPT was released.
And stepping in for me is our podcast producer, Will Sherlin.
Pete, break the glass, pull the fire alarm.
It’s an emergency situation here, all hands on deck.
Well, Will, I’m so pleased to be a first responder in this context.
So a quick rundown on why everyone has been losing their minds over the past few days.
A Chinese startup called DeepSeek released its new open-source R1 foundation model, which produces very good results as measured by all the established AI benchmarks, beating GPT-4o and many of its competitors on a lot of them.
So everybody panicked, including me, because suddenly there’s a lot less of my Monopoly money in Robinhood than there was, thanks to NVIDIA stock getting absolutely crushed.
But Pete, what’s the takeaway here for business leaders on the DeepSeek news?
Well, with apologies to you and your portfolio, Will, I see this as good news.
I think the way you need to be able to see this as good news is to take off your investor glasses and put on your economist glasses, because DeepSeek represents, for me, a pretty fundamental reset of our assumptions about the cost of AI as an input to all of our work.
Just for example, and at the center of a lot of the media flurry, DeepSeek claims it cost just $6 million in computing power to train its model.
That’s about one-tenth of what it cost Meta to train its newest Llama model.
Now Meta and its cohort of GenAI leaders from across the industry have a new, and importantly very public, competitive benchmark to work toward from a cost perspective.
That’s only ever good news for buyers in corporate America or in the corporate world.
If I’m a listener and I’m on a leadership team, or at a PE firm looking across my portfolio, maybe I’m a little more bullish now about our ability to pursue some of the more exciting AI transformation opportunities at a reasonable cost.
Maybe more broadly, this kind of eases what is currently a popularly dim view of ROI on AI overall.
I saw a reference in the press that I really loved.
This is kind of a Sputnik moment in the global race for developing the dominant AI.
It feels really apt, and now maybe countries and companies will have a little more incentive, a little more fire at their feet, to chase after the best economic model for delivering AI to business, which is consistent with the spy-versus-spy frame of a Sputnik moment.
There’s a New York Times article just today where OpenAI is basically accusing DeepSeek of stealing its data to develop its technology.
So we’re already into the Slugworth-from-Willy-Wonka competitive dirty play that you’d expect to see in a Sputnik moment, to mix metaphors.
Well, to achieve that moonshot, sometimes you need a good foil as we’ve seen from the past.
So maybe the OpenAIs and Claudes of the world now have that in DeepSeek.
Very nice.
Well, Pete, there’s nobody else I’d rather walk through the emergency fire with than you.
Thanks very much for joining us on short notice.
Thank you, Will.
Thank you.
Ben Hafele is the CEO of Lean Startup and host of The Lean AI Podcast, presented by Eric Ries.
He recently sat down with Pete Buer to talk about how to build the right AI team.
Ben, welcome.
So nice to have you.
Thank you.
Great to be here.
Your company is called Lean Startup and you run the Lean AI Podcast.
Tell us about your focus on Lean and why it’s mission critical.
Yeah, absolutely.
So I guess Lean, of course, has its roots in the Toyota Production System and manufacturing.
I spent 13 years at Caterpillar where there was a tried and true Lean group that does really good work there.
At Lean Startup, with Lean Innovation, we’re focused on the upstream component of Lean, which is: what’s the most efficient thing to do when you’re not sure what customers value in the first place?
In order to make something lean in terms of the supply chain, you have to know what creates value and then you can define what’s waste.
Well, what’s the most efficient way of determining what customers value in the first place?
It turns out going out and asking them is a really bad way of doing that.
I came out of the market research and product development space, moved over to the Lean Startup company five years ago, and now get to work with companies all over the world doing lean innovation work, which is basically trying to avoid the defect of building something nobody wants in the first place.
So we’re on a podcast, and you’ve got one too, that both have AI in the name.
Crosswalk me from where we are on lean in the innovation process to the role of AI.
Yeah, love it.
So innovation takes many forms.
So in some cases it’s, hey, we’re aware of a problem and we’re not sure how to solve it.
A customer problem out there and we’re not sure how to solve it.
Sometimes it’s, we have a solution idea and we’re not sure if it’s gonna address a customer need.
Sometimes there’s a strategic imperative where we need to win in China or we need to get into whatever and we don’t know what solution to offer to which customer.
And then the fourth kind of innovation entryway, entry point rather is, hey, there’s this new exciting technology and we wanna commercialize it.
We wanna create a product out of it.
And a technology is not a product.
It’s not an offering yet.
It’s a component of a product, but it needs to be woven together with a target user and a value proposition, a business model.
And so that fourth category of innovation, especially right now, is a hot topic.
And it’s not just that it’s a hot topic, it’s that there’s an incredible amount of waste going into AI right now.
So wasted time, wasted shareholder investment, where people are just kind of, their heads are on fire and they say, we have to do AI everything.
And there’s not a lot of discipline, there’s not a lot of strategic intent or alignment.
And so companies are just burning cash on AI right now.
And that’s the space where, with anything to do with innovation, that’s where we really help, is helping people to create more wins with a lot less wasted time and effort.
And so AI just happens to be about half of our portfolio of work now.
And there’s some specific nuances when it comes to AI.
The Lean AI approach is based on the general Lean Startup approach.
But there’s some key differences that are really important.
And so we decided to do the Lean AI Podcast to kind of bring those differences to light.
Keep going.
Can we hear sort of how the differences present?
Sure.
Generally speaking, for any innovation initiative, there’s some milestones that you want to establish.
And the milestones are kind of pre-revenue milestones.
Like, you can’t measure something with traditional accounting metrics in the innovation space because all the metrics are zero by definition.
If the metrics existed, then we wouldn’t be doing innovation.
We’d have data.
We’d already know.
And so that’s where the study of innovation accounting comes in, which asks: what are the leading indicators of those traditional accounting metrics that we could measure today to let us know that we’re on the right path?
And so traditionally, I’m going to keep it simple, right?
It would be like desirability metrics, feasibility metrics, and viability metrics.
So, you know, early milestones would be, is there really a customer problem worth solving?
Maybe another milestone would be, does our solution sufficiently address that need that we identified if we did identify it?
Then you’d get more into assumptions about how we’re going to grow it, how we’re going to move across the adoption curve, across the chasm, what the business model is.
Each milestone is a blend of desirability, feasibility, and viability.
It’s different for every company.
It’s different for use cases.
The principles are the same.
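(To make that concrete, here’s a rough sketch of how a team might capture one of those milestone blends in code. It’s a minimal illustration only; the metric names and thresholds are hypothetical, not part of Lean Startup’s framework.)

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    """One pre-revenue milestone, blending desirability, feasibility, and viability signals."""
    name: str
    desirability: dict = field(default_factory=dict)  # e.g. interviews confirming the pain
    feasibility: dict = field(default_factory=dict)   # e.g. can we even access the data we need?
    viability: dict = field(default_factory=dict)     # e.g. willingness-to-pay signals

    def passed(self, thresholds: dict) -> bool:
        """The milestone passes only if every measured leading indicator meets its threshold."""
        observed = {**self.desirability, **self.feasibility, **self.viability}
        return all(observed.get(metric, 0) >= bar for metric, bar in thresholds.items())

# Hypothetical early milestone: "is there really a customer problem worth solving?"
problem_validation = Milestone(
    name="Problem validation",
    desirability={"interviews_confirming_pain": 12, "waitlist_signups": 45},
    feasibility={"usable_data_sources": 2},
)
print(problem_validation.passed(
    {"interviews_confirming_pain": 10, "waitlist_signups": 30, "usable_data_sources": 2}
))  # True -> move on to the next milestone
```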
Now, in the AI space, it’s actually pretty different from normal, because for any general product, service, or software offering, those of us who are in this space and have these types of jobs are generally aware of what’s buildable, what can be built and what can’t.
That’s not the case when it comes to AI.
There are actually very few of us, and I’m certainly not one of them, who understand, for a specific use case, whether it’s even buildable.
We’re still trying to figure out what AI can do and what it can’t, and it changes on a weekly basis.
So having a data scientist and an AI engineer involved from the beginning, with a much higher percentage of dedication to the project, is really important.
Typically, we’d actually say, don’t do that, because teams tend to over index on feasibility and viability too early before they even know if there’s a problem worth solving or if the product is desirable.
But in this case, we actually go against that convention and say, you should actually have some of those technical specialists involved up front because we don’t want to get six months down the line and say, we’ve got this great idea and then find out it can’t be built.
So I would say different milestones for Lean AI.
And then I accidentally mentioned another one, which is the team members are different.
So the principle with innovation is: what’s the smallest number of team members you can put together who have the cross-functional expertise needed to start validating the most critical assumptions?
That’s why it’s called the two pizza team, because it can be fed by two pizzas.
That’s not always accurate.
But in this case, the principle is still the same, but the cross-functional expertise that you need in the beginning is different.
And so the two-pizza team, or that initial team, needs to involve more of the technical side than usual, for the reasons I already highlighted.
So the team members need to be different.
And then I would say just the last thing to keep it simple would be strategy.
And in Lean Startup, a lot of people kind of forget about this part of it, which is it’s really important that we know what we should be working on in the first place.
So we shouldn’t just go out and run experiments on anything.
So in the case of AI, for example, it’s what’s our company trying to do?
What are our business units’ goals?
And let’s make sure that we have an AI innovation strategy that ties back to what the company is trying to do.
So that if we do successfully incubate one of these AI initiatives, somebody is going to do something with it.
They’re going to scale it.
It’ll pass into a business unit or somebody.
There’s a team that’s going to take it and say, this is great.
We can use this now.
Too often, and this is true even outside of AI, we find that there’s incubation teams that do a great job incubating something.
And then they knock on the door of a business unit and say, hey, we’ve never met before, but we have this great thing for you.
You should scale this.
And they’re like, get out of my office.
I don’t know who you are.
I don’t know what, you know, get out of here.
And that’s certainly the case with AI as well, because everybody’s just excited about everything when it comes to AI.
So a really deliberate strategy around AI innovation is also super important.
That’s not necessarily different.
It’s just, I would say, almost always forgotten in today’s world of, let’s just go invest in AI anything.
So milestones, strategy, and teams: three key things to keep in mind that are examples of the general framework, just maybe a little bit different, or a little bit more emphasized, when it comes to Lean AI.
Can I double-click on the third point, around strategy?
So many companies are, as you described, sort of creating an AI strategy as opposed to thinking about how AI enables company strategy.
And you find across leadership teams that CXOs 1, 2, and 3 all have completely different understandings of where we’re going as a business and what our vision is, even though they’re arguably operating from the same strategic framework. What does best practice look like for getting the team on the same page and getting an innovation roadmap in place for the business that actually drives our strategy?
Yeah.
So I’ll talk about the strategy, not sure about the roadmap, right?
Cause that can mean different things in different contexts.
But I think when it comes to a strategy, especially when it comes to high level executives, you just have to get them in person.
You have to have a sprint or a workshop or a time where people are actually in the room, they’re not on their phones, they’re not on their laptops.
Good.
And you say, okay, let’s review the corporate strategy, let’s look at what we’re trying to do.
And then remind people that a strategy is mostly about what you’re not going to do.
I’ve been re-reading Melissa Perri’s book lately, Escaping the Build Trap.
It’s a great book.
And one of my favorite quotes in here is that a strategy isn’t a plan; you’re establishing a way of making decisions for people.
And I really like that.
And so, if you have a strategy, you can say, you know, with AI, hey, somebody came up with this idea for this cool thing we could do with AI.
You’re like, okay, does it fit these criteria?
No?
Okay.
Then we’re not going to invest in it.
It’s not a personal, nobody’s slighted by it.
Nobody feels insulted.
It’s just, hey, we established these criteria, it doesn’t fit those.
We only have limited bandwidth and budget, and therefore we’re not going to do that.
Or if we are, it has to meet these other criteria, and that kind of blends into the milestones that we talked about.
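(As a toy illustration of that strategy-as-decision-filter idea, here’s a minimal sketch. The criteria names and thresholds are invented for the example, not Ben’s.)

```python
# Hypothetical criteria a leadership team might agree on in that alignment workshop.
STRATEGY_CRITERIA = {
    "serves_existing_icp": True,     # does it serve the customers we already target?
    "uses_proprietary_data": True,   # does it lean on data only we have?
    "max_payback_months": 18,        # how quickly must it pay for itself?
}

def fits_strategy(idea: dict) -> bool:
    """Return True only if an AI idea satisfies every agreed-upon criterion."""
    return (
        idea.get("serves_existing_icp", False)
        and idea.get("uses_proprietary_data", False)
        and idea.get("payback_months", float("inf")) <= STRATEGY_CRITERIA["max_payback_months"]
    )

cool_ai_idea = {"serves_existing_icp": True, "uses_proprietary_data": False, "payback_months": 9}
print(fits_strategy(cool_ai_idea))  # False -> "we're not going to invest in it"; nothing personal
```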
And so what I found is it’s a problem even outside of AI, because like you said, even if AI didn’t exist, different levels of management have different interpretations of what the strategy is in the first place.
So it doesn’t matter if it’s AI or blockchain or 3D printing or drones or whatever was cool 10 years ago, radio in the 20s, right?
It’s been a problem for over a century and it’ll continue to be a problem.
And the best way to kind of iron it out is in person through a structured process.
And that’s the only way I’ve ever seen it work.
If I’m a CEO listening to this podcast and I’m feeling a little vulnerable, like you know what?
I don’t think if I quizzed all my leadership team that they would articulate strategy in the same way or place AI in it the same way.
Am I alone, or is this prevalent?
Yeah, I mean, obviously, it’s the norm, right?
It’s absolutely the norm.
And it just has to do with humans are busy.
We get distracted easily.
And, you know, communicating a strategy is difficult, especially in the biggest organizations in the world, where there are just so many layers, right?
It’s such a matrix organization and there’s constant reorgs and there’s constant leadership changes.
It’s just a, it’s a tough thing to do.
So feel no shame, but also be assured that it’s something that can be overcome very simply: just get your people together, align, and say, here are the themes, here’s what we’re not going to work on, and here are the criteria for making decisions.
By the way, we can revisit those criteria every six months or so, right?
For example.
But for right now, here are the criteria.
And if, if we’re going to invest in something that doesn’t meet those criteria, maybe that’s okay, but we have to like vote on it and say, yes, we know we’re out of process, but we’re going to let this one through because we think it’s important.
Even just that level gets you kind of like 80% of the way there, I think.
Of course, you can go through a week long workshop and all that kind of stuff.
But, but just keeping it simple, I think is always, always a good idea.
A personal interest I take in these conversations is the people factor.
At what stage in the innovation development process does the question of the people pop up?
How will jobs change?
How will teams change?
How will structure change?
Is that something that, from your perspective, as you advise companies or as you see companies show up, is that something that’s being discussed early in the process or is it toward the end?
It’s all about people and it’s getting people to work in different ways, right?
And what I found is it’s also not just about the teams, it’s also and even more so about the executives.
One of the things that we talk about a lot is that Lean Startup or Lean AI is a system.
It’s not just a process.
And so, if we have a Lean AI team, for example, that says, hey, there’s this new interesting use case and we’re going to go and run some experiments, we’re going to validate the problem to be solved, and then we’re going to go in and we’re going to do a front door test and then we’re going to do a concierge and they do all these cool things, right, in like a 10 week project increment.
And then they go to an executive and they say, hey, we ran six experiments.
We had 67 experiment participants and we’re going to make a pivot recommendation.
And the executive says, I have no idea what you’re talking about.
I just told you to go build something.
What’s an experiment and what’s a pivot?
Then the system falls over.
Out in the wild, in startups outside of corporations, the system works because venture capitalists want to know that there’s a there there, and they’re going to do the smallest round possible to give a startup a little bit of runway, so the founders can come back and say, here’s the evidence that shows we’re on to something.
If they can’t, they don’t invest again.
They don’t invest the next round.
That kind of metered funding is something we actually haven’t talked about yet: the metered funding methodology within a large company.
All of that comes back to people and the way that they behave.
Teams need to behave in a different way, leaders need to behave in a different way.
It’s not complicated, it’s actually simple, but that doesn’t make it easy.
Because if you’re an executive that’s been working in a certain way for 25 years, even though it’s simple to work in a new way, it’s still not easy, right?
It still just doesn’t feel normal until you’ve got a couple of reps under your belt.
Ben, thank you so much.
It’s been terrific to spend time with you and appreciate you sharing all your insights.
Absolutely, thank you so much.
We’re already a month into 2025.
It’s hard to believe.
And here’s the question.
Have you lost a once reliable client who suddenly decided they no longer needed your services?
Ouch, it hurts.
We get it.
It’s exactly why we created Knownwell: to deliver real-time insights into how your client portfolio is actually performing.
Say goodbye to guesswork, to driving your business on gut feel, and to taking every optimistic report at face value.
Visit knownwell.com to learn more and let us know if you’re interested in our AI powered platform for commercial intelligence.
Two people who have first-hand experience with building the right AI team are my AI Knowhow partners in crime, David DeWolf and Mohan Rao.
I was excited to talk with them to get their thoughts on the most critical things leaders can do when they set out to build their own AI teams.
David, Mohan, you two have first-hand experience building an AI team.
Congratulations.
You’re like, I mean, seriously.
I love how you give me credit for what Mohan’s done.
Thanks.
Appreciate it.
That’s kind of true, we’ve done it together.
I guess I could say that I also have.
Now that I said that, we’re just going to include all of us in that.
But Mohan, you have had very first-hand experience in building an AI team.
And we just heard Pete’s interview with Ben Hafele from Lean Startup.
So, my first question is, I’m curious to get your thoughts.
As executives who’ve literally built a team from scratch that’s developing an AI product from scratch, where would you recommend leaders start building their AI teams?
If they’re in the realm of considering, should we build that kind of team in our company?
How do they get started?
And I should caveat that, by the way.
Ben’s advice is largely around cultivating AI teams within an enterprise.
So, you may not have exactly the same advice for different levels of, or sizes of organizations.
Now, building a core AI team is probably the biggest differentiator there is.
First of all, they all make us look really smart, which is an important criterion.
But, you know, what’s different about building an AI team versus a regular SaaS team or an enterprise IT team is really that you’ve got to get people in who have a strong data culture, right?
So, it’s almost a cultural thing of saying it’s all about data.
So, if you look at our team, 80% of the people in the engineering team are some kind of data specialists.
They’re in data integration or data engineering or data science or whatever it is.
It’s data, data, data.
Of course, UX, app, all of those are important as well.
But it’s a very different way you build the team.
But this extends across into the enterprise as well, because data culture is super important.
I’ll start there, but there are many other differentiators as well.
Mohan, I would back up and say that I think this is a continuation, or maybe an exacerbation, of some of the fundamentals you have to look at when you’re building any product team.
So a couple of different frames and principles are just jumping through my mind as we talk about this.
First of all, there’s the age-old debate of, is software engineering an art or a science?
The way I would describe it is, the answer is it’s a craft.
There are tools and techniques you can learn to take more of a manufacturing approach.
But when you’re doing true innovation in R&D, you want the masters of the craft, not the ones that know how to do carpentry, but the ones that know how to make beauty out of wood.
There’s a difference between knowing the skills and knowing how to do the work of a master.
I think understanding that matters.
Related to that, there’s been this age-old concept of a 10Xer.
Those individuals in software teams that, for whatever reason, because they know how to abstract and because they know how to translate from concept to reality, all these different things, they literally perform 10X the average individual.
These types of concepts really, really matter more, not just because of AI, but because it’s new and emerging.
And I fear for a lot of people who are building teams and have built software teams before, if they didn’t go through the earliest stages of kind of the software wave as it was coming up, they may not remember what it’s like to build at that new innovation step.
And there are so many things that are new.
You talked about the predominance of data.
That’s one aspect that’s important here.
Another one that really comes to my mind, that I know we’ve had to grapple with and iterate on, is that the visualization and the user experience for these intelligence-first applications are very different from the user experience in a data-based application.
And you need talent at a different level to be able to do that type of innovation and lead an industry.
And what makes this particularly difficult is that you need the 10Xer, but you also need generalists over specialists in a startup.
Right?
So in some ways, it’s sort of oxymoronic when you think about it.
You need deep specialization.
I need to be able to get 10X productivity, a 10X way of looking at things and solving problems.
But also, you can’t have 100 people in the company, so you need generalists.
So building the AI team in a startup is hard, and there are very few people out there, and you’ve got to assemble them.
Enterprise is a little easier than SaaS startups, I believe, and we can get into that.
Yeah, yeah.
The other thing that I would say is, and it’s funny, Courtney, I don’t think you meant specifically this word, but the word I heard when you asked the question was “where,” like, where would you be?
And this actually comes down to, when you’re doing anything, taking the constraints off, right?
When you’re looking for a unicorn, don’t go look in Peoria, Illinois for a unicorn, right?
You want to search the whole globe for a unicorn, right?
And one of the decisions we made, that candidly was a hard decision for us, right?
Mohan, we’ve grappled with this over and over, but we took the constraints off and said, we need the best talent in the world.
We’re going to go find the best talent without borders, right?
And now we have a team of five in Romania, we have three in India, of course, we have folks here in the US, right?
We literally have people all over the world, and that’s an intentional decision that for a startup, there’s definitely pros and cons of that, right?
It’s not as easy as getting on a whiteboard together every single day being together, but the investment we’ve made in it has paid off.
Mohan, you and I have both been building product teams, literally hundreds of teams between us over the years, you know, decades of building product teams.
And we look at each other all the time and say, this is by far the best team that we’ve ever been blessed to work with, right?
And I think a big part of that is because we’ve been willing to take those lumps of, yeah, we’re going to run a global team, we’re just going to look for the best talent.
Every services company is going to be a technology-enabled services company in the future with AI, right?
Because there is technology that is the underpinning of how work gets done.
But you can generally break it into two parts.
Part one is about making the people that you have more effective, right?
So how do you give them tools to be so much more effective, carry large workloads in the background for the consultant to be able to provide better services to the client?
That’s one.
The other is AI itself can do some of the work.
So these are the digital consultants that you’re going to have.
So you can kind of start looking at your own professional services business.
Obviously, making your people more effective is the easier first step of the two, so start investing in both of them, but start with making your people more effective.
Lean AI is a spinoff of Lean Startup.
I’m sure many people have recognized the name from the book by Eric Ries that really took the product world by storm.
Mohan, are there any lessons from Lean Startup that you think are directly applicable to AI?
The lessons from it, in broad terms, that you can bring into the AI world are around how you build the data science models.
You’ve got to have a concept of a minimum viable model, because data scientists by nature are trying to solve for y, the variable y, which is what is the answer?
What’s the churn?
What’s the thing that I’m solving for?
So the concept of MVP becomes more MVM in terms of the minimum viable model.
When do you say, this is good enough to ship, and then you iterate on that forward.
You measure, iterate, measure, iterate.
So the loop of build, measure, learn still holds good.
Sometimes I quibble with it and say that shouldn’t learn come first, but that’s just an asterisk in this discussion.
But also, the fast feedback loops and the culture of experimentation that the book describes are all still valid.
The only addition that I’d make is around the modeling.
It used to be much more of an exact science.
In this world, you got to kind of get to a concept of MVM and iterate from there.
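(For readers who want to see what that loop can look like in practice, here’s a minimal sketch of a “minimum viable” churn model using scikit-learn and synthetic placeholder data. The “good enough to ship” threshold is illustrative, not a Knownwell number.)

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder churn data; in practice this would come from your own customer records.
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.8], random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=7)

SHIP_THRESHOLD = 0.75  # illustrative "good enough to ship" bar for the minimum viable model

# Build: start with the simplest model that could possibly be useful.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Measure: score the model against the agreed leading indicator.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC = {auc:.3f}")

# Learn: ship the MVM and keep iterating, or revisit the features, data, or model first.
if auc >= SHIP_THRESHOLD:
    print("Good enough to ship -- put it in front of users and keep measuring.")
else:
    print("Below the bar -- iterate on features or data before shipping.")
```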
Yeah, Mohan, the thing I would build on there is you brought up the minimum viable model.
I love that.
The MVP has been a word that has been so overused that it has no meaning anymore.
And when you read the book, like the fundamental principle makes a ton of sense.
And I think what’s important is for people to step back and look at AI now that real platforms and products are being built.
AI first now, right?
That’s what we’re talking about here.
The right way to do that is with extreme experimentation and being customer led, right?
I am shocked how many times I go tell people the Knownwell story, and people hear that we had a signed contract before we even had an engineer on staff, right?
Because despite talking about the Lean Startup and the Lean model, people go build a technology product, and then they take it to market.
Even, oh, little secret of Silicon Valley, the VCs expect it.
They won’t fund you until you have de-risked from a technology perspective.
What?
Like it’s the exact opposite of what they tell entrepreneurs to do.
And so I think it’s critical because of the phase we’re at with this being really, really emerging technology that we all step back and look at the principles, not the practices that have become popular.
Mohan, David, really interesting conversation.
Thank you both, as always.
Totally.
And I can’t wait for the episode where you say it wasn’t an interesting conversation.
I’m still waiting for it.
That might be next week.
Tune in to find out.
Thanks as always for listening and watching.
Don’t forget to give us a review on your podcast player of choice.
Legit, I say this all the time, it really means a lot to our team here at AI Knowhow.
At the end of every episode, we like to ask one of our AI friends to weigh in on the topic at hand.
So hey, ChatGPT, OG, what’s happening?
This episode, we’re talking about building an AI team.
What do you recommend?
Focus on a balanced blend of data-savvy pros, skilled engineers, and curious problem solvers who excel at collaboration.
Pair that with a culture of continuous learning, strong communication, and flexible infrastructure to unlock powerful AI innovation.
And now, you’re in the know.
Thanks as always for listening.
We’ll see you next week with more AI applications, discussions, and experts.