Ever wondered how AI can transform commercial intelligence? What does an AI platform for commercial intelligence allow you to do, and how can you get started using one?
This week, Knownwell’s CEO David DeWolf and Chief Product and Technology Officer Mohan Rao cover the private beta release of our AI-powered platform for commercial intelligence, which was recently rolled out to Knownwell clients. David and Mohan explore how this technology can give professional service leaders real-time, objective intelligence on the health of their commercial relationships.
They kick the episode off, though, with some exciting news about Knownwell’s recent $4 million seed round, which will further accelerate product development and grow the Knownwell team.
Diving into the Latest Knownwell Product Release
Along with the funding news, the backdrop for this episode is that we just rolled out an updated version of the Knownwell platform that includes a number of improvements and upgrades.
Mohan shares the breakthroughs that have led to the latest beta product, highlighting major improvements in user experience, scoring insights using LLMs, and the platform’s real-time, unstructured data analysis capabilities. Together, they unpack the complexities of integrating AI into commercial operations and why early customers are already seeing transformative results.
They also discuss some of the recent AI breakthroughs that are making it possible for a platform like Knownwell to even exist. “Now what we are doing is streaming this data in real-time, analyzing it in real-time based on the intelligence that an LLM is providing us, along with the domain expertise that we have in terms of developing the scoring rubric,” Mohan says. “That just would not have been possible even a couple of years ago.”
You can see a guided demo of the Knownwell platform here on our site.
JPMorgan Chase Revises Their Projection for Value Derived from AI Up to $2B
Courtney Baker and Pete Buer join us to round out the show with another installment of AI in the Wild. They cover the recent news that JPMorgan Chase is equipping 140,000 workers with generative AI tools. The company is projecting they’ll see $2 billion in value driven from AI initiatives, up from a previous estimate of $1.5 billion.
Pete notes that JPMC is putting their money where their mouth is, as their AI work is part of a larger technology transformation initiative that includes moving to the cloud, refactoring applications, and more. It has also included the addition of several new roles to the company’s C-suite, demonstrating their commitment to AI as a long-term, strategic play.
Watch the Episode
Watch the full episode below, and be sure to subscribe to our YouTube channel.
Listen to the Episode
You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.
Timestamps and YouTube Chapters
You can jump directly to key moments from this episode using the YouTube links below.
- 00:00 Introduction to AI Platforms for Commercial Intelligence
- 00:35 David and Mohan Discuss Knownwell’s Recent Funding and Beta Launch
- 01:47 The Importance of Validation and Funding
- 05:58 Deep Dive into the Beta Product
- 10:25 Technical Insights and AI Integration
- 17:51 Challenges in User Experience and Data Interpretation
- 24:12 Customer Feedback and Team Dynamics
- 26:48 AI in the Wild: JP Morgan Chase Case Study
Show Notes & Related Links
- Sign up for the Knownwell beta waitlist at Knownwell.com/preview
- Watch a guided Knownwell demo
- Connect with David DeWolf on LinkedIn
- Connect with Mohan Rao on LinkedIn
- Connect with Courtney Baker on LinkedIn
- Connect with Pete Buer on LinkedIn
- Follow Knownwell on LinkedIn
What does an AI platform for commercial intelligence allow you to do?
And how can you get started using one?
And what are the first people getting to use a commercial intelligence platform actually saying about it?
Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era.
As always, I’m joined by Knownwell CEO, David DeWolf, Chief Product and Technology Officer, Mohan Rao, and Chief Strategy Officer, Pete Buer.
Later in the show, Pete Buer will join us for another installment of AI in the Wild.
But first, tune in for a conversation between David DeWolf and Mohan Rao about our recent funding round and our AI-powered platform for commercial intelligence that’s now in private beta.
Mohan, we are back and better than ever, and as you can tell, I am not Courtney.
Courtney is out and about.
She was having in-person meetings, got caught in traffic, and is not back yet.
So we’re going to go without her because it has been a huge month, all sorts of news, and we want to talk about it and we want to get it out there.
So how are you doing?
Are you feeling as good as I am?
Yeah, absolutely.
September, we’re only a few weeks into it, and it already feels like the whole month has gone by.
So Mohan, two big announcements this month for us.
First and foremost, we are going to spend this episode really diving into the private beta that we now have in market.
Super exciting to go from proof of concept into a full beta.
But before we do that, right after we launched our beta, we announced a brand new funding round, and I think it’s appropriate for us to talk just briefly about that.
So give me a sense when you think about this funding, why is it that this new seed round that we have announced has been such a big deal?
You know, first of all, it’s validation of what we’re doing.
You know, when you start something, when you’re trying to create something out of nothing in a startup, you always look for validation points.
You look for validation points from customers, but you also look for it from investors.
Investors generally are well-read.
They do their diligence, and it’s fantastic to get that validation.
Furthermore, you know, we need funding to keep going, right?
So we need to build the next versions of product.
We need to scale up our go-to-market efforts, so on and so forth.
So for a variety of reasons, it’s such a momentous occasion for us.
How would you, what would you say?
Yeah, I think those are spot on, Mohan.
I think there’s a couple of things that strike me about it.
I think first and foremost, you know, this was a $4 million seed round.
When we first started the year, we thought we’d probably be seeking a $3 million seed round, and we ended up accelerating that because we have seen those validation points that you’re talking about in our commercial demand, in our beta customers that are ramping up.
We have, in spades, been able to validate the pain point of commercial intelligence, right?
That there’s this void of understanding of the economic relationship with clients in services businesses, in businesses where you are more, you have more of a relational business model than a transactional business model.
And that has not only resonated with the market, but it has resulted in these proof points, these early proof points that have said to us, accelerate, accelerate, accelerate, let’s go.
And so we are doubling down.
We’ve raised a little bit bigger of a round than we originally planned.
The second thing that really strikes me about this raise is that as we have been out in the market talking to VCs, what we hear over and over again is two things. Number one, our approach of solving an operational problem, versus just increasing productivity, is unique.
And it’s where a lot of VCs think the next enterprise platform will ultimately come from.
And I think that’s been a great kind of proof point of our hypothesis to hear that that resonates so much.
And the fact that the VC world feels like we’re a good four to six months ahead of the market, I think, is a great thing.
The other thing that we’ve heard is that our usage of the LLM is quite novel and advanced compared to what a lot of others are doing as well.
Those two things, I think, are areas that you really need to get the validation from the VCs.
You can’t make that up yourself, because nobody else really has the purview across innovators that they have.
Yeah.
And we also got our choice of investors, the ones we were impressed with during the process, the ones that asked the right questions and pushed us in the right ways, and we’re delighted to have ended up with the slate of investors that we did.
Sovereign’s Capital, we’re just so impressed by; they were a pleasure to work with through the process, and we feel good about that moving forward.
And then Studio VC out of New York as well.
So those two funds, as well as a couple of private investors have joined the mix as well.
So enough about the funding, but a fun thing and especially piggybacking off of the episode we did a couple episodes ago around the funding market, right?
So now we’re participating in that for Q3.
So next time we’re looking at those numbers, we can’t just call out, you know, OpenAI for its percentage or xAI for its percentage of funding.
We’ll have to say, well, caveat, Knownwell has 4 million of that too.
Maybe not 4 billion, but we get 4.
Kudos to you, David, and kudos to the entire team.
It’s a lot of fun.
Well, let’s turn our attention to what the funding is going to be used for.
So really, this is all about doubling down on product, right?
Product development is what it’s all about at the seed phase.
How do we accelerate those features?
How do we accelerate those integrations?
How do we accelerate the security infrastructure to really make sure that we’re not just validating a pain point and have an initial solution, but that we build a robust enterprise platform?
I think with this step that we have just taken of releasing our beta, we’re now there where we have the baseline for an enterprise platform.
Can you walk us through what that means?
What is the milestone that we’ve just reached in releasing our beta product?
We released very early on in our cycle an MVP product, and we put it out there and started collecting real customer feedback.
Using that feedback, we’ve been able to iterate to our next major release point, which is our beta product, a private beta that we have in the hands of our early customers, and we’re continuing to get great, valuable feedback.
As you know, David, when we started building the product, we had a few objectives in mind.
One is we wanted to solve some of the critical challenges for client-focused leaders, right?
How do they master their client relationships?
How do they optimize their efforts?
How do they provide strategic oversight?
These were some of the big questions that we asked ourselves.
And what the beta product does is move us in that direction of answering these questions in a services context, with our services customers, to be able to support that client-focused relationship.
Specifically, what we’ve built in terms of features is, first, a much, much improved user experience, right?
We got a lot of feedback, we were able to iterate on it, and we built a user experience that leapfrogs the earlier versions.
I’m very proud of the UX work that we’ve done.
In addition to that, we improved the scoring and the scoring rubric, and the use of LLMs to power the scoring as well as to provide insights and recommendations.
That was the second major leap forward.
We’ve also introduced a chat and recommendation capability.
You can ask questions about your clients, like when was the last communication with a particular client, or what was the last major status update.
You can ask the platform any natural language question, any open question that you’d like to ask.
We also improved the social map and the insights about who’s talking to whom, what is the quality of communications.
So it’s been an improvement across the board.
And the last thing I would say, David, is when you build an AI product, the underlying technology platform is the hardest thing to get right. You’ve got to get the data in a certain way, you’ve got to massage it a certain way, you’ve got to transform it, you’ve got to put it through the intelligence layer in a certain way, deal with all sorts of rate limitations, so on and so forth, and be able to deal with its non-deterministic outputs.
So the underlying technology has been another big leapfrog here.
So these are some of the things that we improved in the beta product.
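To make the rate-limit and non-determinism headache Mohan mentions a little more concrete, here is a minimal sketch of the kind of backoff wrapper a pipeline like this typically needs. It is an illustration only, not Knownwell’s code; `call_llm` and `RateLimitError` are hypothetical stand-ins for whichever provider SDK you actually use.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the rate-limit exception your LLM SDK raises."""

def call_with_backoff(call_llm, prompt, max_retries=5):
    """Retry a flaky LLM call with exponential backoff plus jitter.

    `call_llm` is a hypothetical wrapper around whichever provider you use;
    the retry pattern, not the SDK, is the point of this sketch.
    """
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return call_llm(prompt)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller decide what to do next
            time.sleep(delay + random.uniform(0, 0.5))  # jitter avoids retry storms
            delay *= 2  # exponential backoff
```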
Wow.
Okay.
So there’s a lot there to push in on.
You know, the first thing you talked about was the user experience.
And I think one of the ways we’ve talked about that internally is as we’re getting more customer usage, really figuring out what makes this not just a nice to have, but an operational necessity where it’s adding so much value that our customers have to have it.
Also making sure that it’s not just a way to test out technology, kind of a tool, but a true enterprise platform.
So that was the first thing you talked about was the user experience.
I want to dive into the second piece though, because I think it really sits at the heart of what makes this an AI-first platform.
You talked about the enhanced scoring.
You talked about how the Knownwell score has gotten so much more powerful, accurate, if you will, if you can be accurate in a probabilistic world, right?
Meaningful maybe is a better word.
Also the insights, right?
The LLM has come so far in the way that we leverage it.
Can you dive deep into the technology and talk a little bit about the iterations we’ve been through in leveraging AI, experimenting with the different types of analysis and approaches we have, to really get to where we are now? And give everybody a high-level, maybe architectural, view of how we’re leveraging these LLMs.
What does it actually look like under the covers?
Especially in light of the fact that, as I think we mentioned, the VCs have said that we’re further along than they see most organizations being.
Why are they saying that?
What is this approach that’s so novel that maybe others can learn from?
Yeah.
It’s one thing to use the chat applications that we all know about: with your fingers on the keyboard, you type in a question and out comes the answer.
Right here, what we’re doing is we’re using LLMs in the background via our APIs, essentially calling the LLM with the right set of prompts, and these are very complex prompts.
What makes this a fascinating application for LLMs is the nature of the problem that we’re trying to solve.
We have a lot of unstructured text that’s coming through, whether it is call transcripts, whether it is video conference transcripts, emails, other types of messaging.
You have a lot of unstructured data that’s coming in, that has to get analyzed.
Previously, it just wasn’t possible to deal with this unstructured data as a platform service.
It was just technologically not possible before.
At best, you could have built an expert system that said, let me just do a regex and get some of the strings out and analyze and score it in some way.
So you could have built an expert system.
I would call that AI 1.0 from 30 years ago.
That’s what you could have done.
And then the 2.0 version would have been to go to the clients and get a lot of data, like three years of data, know which clients have churned, and then fit a model to it and find patterns there so you can predict.
So that’s how we would have done it five to 10 years ago.
Now what we’re doing is we are streaming this data in real time and analyzing it in real time based on the intelligence that an LLM is providing us along with the domain expertise that we have in terms of developing the scoring rubric.
That’s what answers the “why now,” in terms of why it is possible to solve this now and how novel the application of this technology is. It just would not have been possible even a couple of years ago.
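To make the “LLM in the background” pattern Mohan describes a little more concrete, here is a minimal sketch of scoring one piece of unstructured text against a rubric over an API. The OpenAI client, model name, rubric wording, and JSON shape are illustrative assumptions, not Knownwell’s actual prompts or providers.

```python
import json
from openai import OpenAI  # pip install openai; other major LLM SDKs work similarly

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC_PROMPT = """You are scoring the health of a client relationship.
Rate the following communication from 1 (poor) to 5 (excellent) on each of:
sentiment, responsiveness, escalation_risk. Return JSON only, for example:
{"sentiment": 4, "responsiveness": 3, "escalation_risk": 2, "evidence": "..."}"""

def score_communication(text: str) -> dict:
    """Run one email or transcript chunk through the rubric prompt
    and parse the structured result."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # illustrative model choice
        messages=[
            {"role": "system", "content": RUBRIC_PROMPT},
            {"role": "user", "content": text},
        ],
        response_format={"type": "json_object"},  # ask the model for parseable JSON
    )
    return json.loads(response.choices[0].message.content)
```

In a streaming setup, a function like this would run on every new email or transcript as it arrives, rather than in a periodic batch job.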
If you’ve been listening to the show for a while, and if you’ve been paying attention, which I know you have been, you know that our team has been hard at work building an AI-powered platform to drive commercial intelligence.
Specifically, it’s going to drive up client satisfaction and reduce surprise churn for professional service firms.
Now, you can actually take a look at what we’ve been up to.
Go to knownwell.com/demo for a guided tour and to find out more.
I think that evolution of how we got to that place is a really interesting story, right?
So one of the things is that, from the get-go, we started experimenting with training our own LLMs.
We experimented with what you called the expert rubric, right?
And leveraging that with traditional machine learning, and really using the LLM more for the insights that we derived than for the core scoring itself.
It’s been fascinating to watch how much better the LLM is when you prompt it, right?
And when you structure the ask correctly at doing all of those things, right?
So one of the things we learned is you can’t just throw all of this at it and say, give me a score, right?
It doesn’t do a good job at that.
It’ll give you a score, but it doesn’t really mean anything.
But by breaking down what we mean by commercial health into these sub-components and getting specific about what you’re asking it to grade, right?
To give you sentiment and tone and, you know, topical analysis, and then leveraging this RAG architecture; RAG, for those that don’t know, is retrieval-augmented generation, right?
So you’re bringing in all this data with your very specific request that, as you said, is very, very complex and dynamically created in and of itself.
You then can prompt and then you can build up all of those answers together to create the score.
So it’s almost the intersection of what we’ve done before with what we do today, right?
This expertise, this domain expertise, still matters in a very, very real way.
And one of the things that differentiates us is that domain expertise, the data, the research that we have done in the area of professional services commercial health, which intersects with these LLMs and which we’re able to leverage.
Ultimately, we’re using the major LLMs everybody hears out there in order to do this prompting.
And so it’s the combination of all those things.
It’s not a one or the other approach.
And we went through that iteration cycle, kind of playing with these different things before we landed on this more mature usage of bringing all these worlds together, but really leaning on the LLM for what only it can do.
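David’s point about breaking “commercial health” into sub-components and then rolling them up can be sketched as a simple weighted aggregation. The dimensions, weights, and 1-to-5 scale below are invented for illustration; they are not Knownwell’s actual rubric.

```python
# Hypothetical sub-component scores on a 1-5 scale, e.g. produced by
# per-dimension LLM prompts like the one sketched earlier.
sub_scores = {"sentiment": 4.2, "responsiveness": 3.1, "escalation_risk": 2.5}

# Domain-expertise weights: made-up numbers standing in for a rubric
# a services firm would calibrate from its own experience and research.
weights = {"sentiment": 0.4, "responsiveness": 0.35, "escalation_risk": 0.25}

def commercial_health(scores: dict, weights: dict) -> float:
    """Weighted roll-up of sub-component scores into one 0-100 health score."""
    # Invert escalation_risk: a high risk score should drag health down.
    adjusted = dict(scores, escalation_risk=6 - scores["escalation_risk"])
    raw = sum(weights[k] * adjusted[k] for k in weights)  # still on the 1-5 scale
    return round((raw - 1) / 4 * 100, 1)                  # rescale to 0-100

print(commercial_health(sub_scores, weights))  # 66.0 with the toy numbers above
```

The point of the sketch is the shape of the approach: narrow, specific LLM judgments on the way in, and domain-expert weighting on the way out, rather than asking the model for a single score in one shot.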
Exactly.
Exactly.
You know, the analogy here is when you go to the doctor for a wellness check and they do all sorts of testing and out come the reports, right?
But you still need expertise to say, cholesterol under whatever 200 is normal, under 175 is desired, there’s HDL, there’s LDL, there’s triglycerides.
You need that expertise to still look at the data and say, what looks healthy, what doesn’t look healthy?
So I look at it as really a lab test results that come in, and on top of it, you have to add your domain expertise for this to make sense.
There is also complexity in building UX on top of a system like this, because with the LLM you inherently don’t know what it’s going to come out with, and therefore it’s sometimes hard to predict what exactly the UX should look like. And for the UX to be adaptable to handle the different types of outputs that come in, because you don’t control the inputs of data that are coming in, has been a particular challenge. It’s a really new type of product management that we’ve had to adapt to.
Yeah.
So I want to go there in a minute, the user experience, because there was so much that was complex about it that was really interesting.
There are a couple other challenges too, but before we do that, I actually want to add to what you said about the expertise in translating the outcomes, because I was actually just at the hospital last night.
My father got rushed to the hospital, and thank goodness he’s healthy.
But this actually struck me sitting there in the hospital.
The doctor comes in and orders a bunch of tests, and they were different than the initial ones that the nurse had ordered.
And I sat there thinking, I don’t get it, why would a urine test show us anything for what he’s dealing with?
But the doctor knew, right?
And was able to guide the inputs just as much as the doctor could process those outputs, right?
And I think that domain expertise, yes, LLMs can give us conclusions on different items if we prompt it the right way and know how to interpret it.
But we are still needed to prompt it the right way, to ask for the right tests, so that we can then feed the right inputs in.
And I think both sides of that equation are just as important; that’s what we’ve learned.
Totally, fully agree.
All right, so let’s go to some of the complexities.
You just mentioned this first one of the user experience.
What is the nature of the data, the information that we’re getting out of these LLMs that makes it so different from the past world?
And why does it change the user experience paradigm?
Previously, when we built SaaS products, it was a fairly deterministic process, right?
So you came up with the requirements, you said here are the thresholds or whatever it is, you specified the business rules and requirements, and it was a very A to B to C to D process, right?
So it was a very linear process.
I’m not saying there was no complexity, but it was a linear process, right?
So that’s how we built SaaS products for the last 20 plus years.
Here what happens is you don’t know exactly, you don’t control the data that’s coming in because it’s our clients’ emails and transcripts and others that are just flowing in.
Then that goes through a vector database and the various transformations that you talked about, which are fairly complex, and it eventually gets to the LLM in a particular way.
We score it based on the LLM output in a certain way and say that this thing is a strength in your client relationship, because that’s what the net result of scoring has been with the algorithms that we’ve developed.
But the LLM output, when you read through it, may not really be a glowing recommendation for a strength.
It may be much more lukewarm than that.
You don’t control the content of what’s come back.
Those are some of the types of challenges.
Then you had to think about what happens in all of these boundary conditions.
Usually, as you can imagine, this happens not when it is a strong strength or a strong weakness.
It comes when it is in the boundary area.
It’s a yellow.
It could be red or green, but it’s hovering in the middle.
You think it is green on some attributes, but it’s a red on some attributes, and that’s real life.
There’ll be some weaknesses, some strengths.
When you put it together, it doesn’t fully make sense to the user reading it.
So there is a user training component of this that we are working with our customers on to say, listen, some of this, it’s making predictions, and hopefully these predictions are super helpful to you, but occasionally they may be wrong, and that’s the grace that they will need to show because the positives outweigh these false positives that come in once in a while.
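One way to handle the boundary area Mohan describes is to make the yellow band explicit and route borderline or conflicting results to a person rather than forcing a confident red or green call. The thresholds and the “disagreement” rule here are assumptions for illustration, not how Knownwell actually handles it.

```python
def classify(score: float, evidence_sentiment: float) -> str:
    """Map a 0-100 health score to red/yellow/green, with an explicit
    yellow band and a review flag when the narrative evidence disagrees.

    The cutoffs and the 0-1 evidence_sentiment value are illustrative.
    """
    if score >= 70:
        label = "green"
    elif score >= 45:
        label = "yellow"
    else:
        label = "red"

    # If the numeric label says "strength" but the underlying narrative reads
    # lukewarm (or vice versa), flag it for a human instead of presenting a
    # confident verdict in the UI.
    if label == "green" and evidence_sentiment < 0.5:
        return "green (needs review)"
    if label == "red" and evidence_sentiment > 0.5:
        return "red (needs review)"
    return label

print(classify(72.0, 0.3))  # -> green (needs review)
```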
Yeah, and what it reminds me of is every now and then, you’ll have, let’s just take a simple example for the use cases we’re working on with commercial health.
You may have a more junior account manager working on an account, and they’ll come and they’ll tell you this story about something that happened, and they think it’s the best thing in the world, and you’re listening to the story, and you’re like, hold on a minute.
They said, what?
And you realize, oh, they’re just not tuned in to it.
That is where we’re at with the LLM, and why the doctor interpreting the results is so important, right?
And so as the product matures, I’m sure we will find ways to deal with that more and more, but it’s one of those early struggles that we have.
Exactly.
You know, you and I are both big baseball fans.
Sometimes you can have more hits than the other team and still lose the game, right?
So that’s the type of problem that you have, right?
So because the commercial relationship is a very complex one, it is not based on a single scoring model; it’s the aggregate of these things, in a certain way and in a certain sequence, that matters in the end.
And it is complex in real life.
Mohan, I could talk about this stuff forever.
It’s been so fun to be in the trenches with you and the team.
I mean, kudos to the team.
You and I get to sit here and talk about it, but they’re actually making it work and doing the R&D, right?
Literally, you and I have looked at each other so many times and said, this is the best engineering team either one of us has ever worked with, and that includes our data scientists. It’s been so much fun.
But I want to wrap up with one final question here.
What are customers saying about it?
What are we hearing as initial feedback from the beta product that’s been launched?
They love it, they’re intrigued, and they’re now starting to consume it and weave it into their operations.
But obviously, it’s not a straightforward system, like the system of record products that they use.
So it requires a different type of, it’s almost like a co-pilot application, right?
So you have to use this as your co-worker that is helping you.
So that is the process that we are going through.
And David, if I can also crow about the team, we built a purpose-built team with extreme talent and focus, right?
And it’s hard enough for this team to pull this off.
It’s virtually impossible, I think, for any team that has 10 other priorities to do this.
It requires a lot of focus and dedication.
Yeah, no doubt about that.
And I think the other piece of it, the one that hit me as you were saying that, was that extreme dedication and focused talent, obviously.
But we’ve also been really purposeful about building just an insanely good culture that is very collaborative.
We’ve said so many times, like, the humility of this team, they’re so brilliant, but there’s no pride, there’s no ego, right, in there.
I think that’s important because we’re dealing with hard problems that haven’t been grappled with before.
And so the ability to say, hey, I’m stuck, can you help me?
The ability to dive in and say, hey, I’m struggling with this task, let’s brainstorm. So important.
So I think we’ve learned a lot, not just about how to leverage the LLM, but all these things around the edges, around the importance of that team dynamic, the importance of translating the results and the user experience and integrating it all together.
It’s been so much fun and I hope everybody else listening is starting to see their experiments pay off as well, just like we are and seeing them come to fruition.
So congratulations Mohan, you have been the big driver of getting us to this point and it’s just been a great month at Knownwell.
So thank you everybody for your continued support and kudos to you and the entire product team.
Thank you David, thanks to the team and our customers.
Pete Buer and I sat down around the campfire recently, I wish we did, but we didn’t really, with some graham crackers, marshmallows, and Hershey’s chocolate for another installment of AI in the Wild.
Hey Pete, how are you?
Hey Courtney, how are you doing?
I’m doing good.
This week, I want to dig into a story from CIO Dive that’s headlined “JPMorgan Chase to equip 140,000 workers with generative AI tool.”
Pete, what are the highlights here?
This is such a great business case, and it’s hard not to start with the punchline because it’s kind of staggering.
President slash COO Daniel Pinto just revised upward his estimate of the value to be created in the business by the AI project rollout, from $1.5 billion to $2 billion.
These are projections based not on magic, but on concrete estimates from increased productivity, boosts to cost savings, and a huge chunk going to improvements in fraud prevention.
These are the kinds of ROI figures that investors and CFOs absolutely love to see.
It’s exciting stuff.
It will, of course, be no small task rolling all of the new capabilities out to 140,000 workers.
That’s a lot of change management across what the bank is estimating to be three to five years worth of implementation.
It’s all part of a larger, sweeping technology modernization initiative that the bank has underway.
And they put their money where their mouth is, not just committing a ton of resources to the rollout and development, obviously, but also they’ve gone so far as to reshape their C-suite.
They’ve got a new head of technology and chief data and analytics officer.
So change is driven all the way to the very top of the business.
What I love about the story is the comprehensiveness of the approach.
And this is really what we’ve been advocating all along.
Looking across the whole of the business for all the many ways that AI and other technologies can help drive major, game-changing improvements strikes me as the only real way to get to a thorough view and a real, believable prioritization of all the right projects in the right order.
Yeah, I love this.
I will say as a Chase customer, this really gives an extra layer of confidence in the sense of this being used for security.
So, I love that.
Really great to see how they’re doing this very holistically.
I think you’d also love for there to be a footnote in the storyline about how cost savings and productivity improvements are going to work their way back to reductions in fees for customers.
That is true.
I couldn’t find that in the article.
Well, Pete, thanks as always.
Thank you, Courtney.
Thank you as always for listening or watching the show.
Hey, really quick, would you do us a big favor?
Would you review the show?
It’s a great way to help more people find out about AI Knowhow.
We would really appreciate it.
At the end of every episode, we’d like to ask one of our AI tools to weigh in on the topic at hand.
Hey, Perplexity, what’s happening?
This episode, we’re talking about the AI-powered platform for commercial intelligence that we’re building at Knownwell.
What, if anything, do you know about what we’re up to here at Knownwell?
Knownwell is making waves in the AI landscape with its platform designed to synthesize communications and data into actionable insights for professional services firms.
Recently, they secured $4 million in funding to enhance their commercial intelligence capabilities, aiming to improve client satisfaction and reduce churn.
And now you’re in the know.
Thanks as always for listening.
We’ll see you next week with more AI applications, discussions, and experts.