How Clean Does Your Data Have to Be to Utilize AI?

We’ve all heard by now that data is the foundation on which all AI initiatives are built. But how clean does your data have to be to start utilizing AI? And what does it even mean to have “clean” data anyway?

Tune in to episode 30 of AI Knowhow for a discussion on those topics and more. Data cleanliness, as Knownwell’s Chief Product Officer Mohan Rao puts it, is about far more than making sure data is accurate. Data must also be secure and private, and it shouldn’t introduce any sort of bias into AI models that are trained on it.

Just because all of these things must be true, however, doesn’t mean companies should wait until all their data is absolutely pristine before getting started with their AI initiatives. As Knownwell CEO David DeWolf says, “Organizations are spending way too much time and way too much money trying to normalize, engineer, and get their data in a state where it can be used. It is a broken, fundamental problem with the ‘modern data stack’ and with our business intelligence tools.”

So what are business leaders to do? Mohan recommends that leaders look at ERPs, CRMs, and other core business systems to ensure the appropriate level of data hygiene is maintained throughout their organizations. And David suggests not just forcing more rigor on the humans managing data hygiene but also considering how AI tools might help with age-old problems like getting sales teams to enter all relevant data into Salesforce.

For this week’s guest interview, Pete Buer speaks with Jon Gillham of Originality.AI about the company he founded, whose AI-detection and plagiarism-checking tool publishers and media companies use to ensure the content they publish is actually written by humans. Among the reasons companies should pay attention to clean content as well as clean data: legal and reputational risk, plus the negative impact AI-generated content may soon have in areas like SEO rankings.

Listen to the Episode

You can tune in to the full episode via the Spotify embed below, and you can find AI Knowhow on Apple Podcasts and anywhere else you get your podcasts.

Watch the Episode

Watch the full episode below, and be sure to subscribe to our YouTube channel.

In the News Highlights

This transcript was created using AI tools and is not a verbatim, word-for-word transcript of the episode. Please forgive any errors or omissions from the finished product.

 

Courtney: [00:00:00] How clean does your data actually have to be to leverage AI? And before we get too excited, what does it even mean to have clean data? Let’s talk about it.

Hi, I’m Courtney Baker, and this is AI Knowhow from Knownwell, helping you reimagine your business in the AI era. As always, I’m joined by Knownwell CEO David DeWolf, Chief Product Officer Mohan Rao, and Chief Strategy Officer Pete Buer.

We also have a discussion with Jon Gillham of Originality.AI about why clean content is also important in the AI era. But first, the news.

AI News

Courtney: Pete Buer joins us as always to break down some of the latest AI headlines and how they apply to your business. Hey, Pete.

Pete: Hey Courtney, how are you?

Courtney: I’m doing good. For our first article this week, let’s look at a Wall Street Journal article from Christopher Mims titled “Want to Know if AI Will Replace Your Job? I Tried [00:01:00] Using It to Replace Myself.” Pete, what did you take away from this one?

Pete: So I like this article a lot on two levels: one part for the content that it shares in the way of background, and one part for the experiment that Christopher Mims went through. Some thoughts for you on both. On the content, there’s just useful clarity around what we can expect for, for instance, the hundred million knowledge workers in the United States, about how AI will cause an evolution of their work.

As we’ve talked about a million times, AI can handle a ton of rote tasks in knowledge work, and so jobs that are specifically built around executing rote tasks will of course be the ones most obviously and most early at risk. Referencing some McKinsey data, there’s an estimate that 30% of all work hours can ultimately be automated by 2030,

[00:02:00] with the outcome being that AI takes both tasks and, in some cases, complete jobs to the 80%-complete or 80%-reliable quality standard, and then humans are on the hook for going the last mile. I think that’s just helpful. That’s a logical progression, and for anyone who’s trying to think about how to reevaluate their talent strategy, their skills, their teaming, their workforce approach, that starts to give you a blueprint for doing the work.

But then, part two, the experiment that this author put himself through, I feel is just beautiful: spend a week starting every project that you undertake in work or life with the help of generative AI.

See all the places where you can get a fast start, get leverage in your thinking from the technology, and see on the far side of the project how much more [00:03:00] quickly you got it done, or how much better the quality of the work was by having that initial input. By the end of the week, you’ll have a visceral feel for what work in that McKinsey 2030 world ends up looking like. I think every leader listening who’s not using generative AI day to day to get leverage in their work should take this experiment and see by the end just how differently they feel about the work to come. The author references his first real experience of a smartphone as his corollary: just being able to see how different going through life will be.

And I think this is probably five or ten x that.

Courtney: I love the premise of this, and I feel like if we all looked at the technology with this framework right now, we’re the ones that benefit from the gains of it. No one is actually replacing us; we’re just able to get more done in less time, hopefully. That’s what you [00:04:00] find. So I love this framework.

I’m tempted to do it myself. So Pete, maybe you and I will challenge each other to see if we can replace ourselves as well. Up next, TechRadar reports that a New York business chatbot is sending out some particularly bad information. Pete, I bet you remember us talking about the news of this earlier this year. What’s your takeaway here?

Pete: Yep. So we were talking about the tax-assistance chatbots and how they were giving bad advice on how to fill out your taxes. We’ve now seen a whole bunch of examples of chatbots giving faulty advice, and in some cases life-altering faulty advice. The case they share in the article, from New York City’s MyCity portal, is a bit of advice around businesses being able to operate cashless establishments [00:05:00] when that practice was banned back in 2020.

I think there are two takeaways here. First, for businesses: always take the high ground on transparency and responsibility. On transparency, provide really, really obvious, that is to say not fine-print, guidance about the potential shortcomings or limitations of your new AI-powered offering. Rather than turn users away, you’re gonna build trust with them. And on responsibility, especially in cases where lives or livelihoods are on the line, or where precision really matters to the outcomes for a business or a person, keep your standards super high for completeness and QC before going to market. Not every new AI-powered application can be test-and-learn right outta the gates.

Second, for users: caveat emptor, right? Do your diligence on the offering so that you’re super clear on its limitations, [00:06:00] and shop around. There will be alternatives. I think maybe something that happens now is accuracy of outcome and precision of results start to work their way up the list of buying criteria we use when making a purchase decision. And good, that’s how it should be when you’re shopping around and looking at something new.

Courtney: Pete, thank you as always.

Pete: Thank you, Courtney. Love it.


Courtney: If data is the foundation of all AI initiatives, how clean does your data have to be to really start leveraging it? I was excited to talk with David and Mohan about data cleanliness, why it’s time to move past the concept of the modern data stack and more.

Panel Discussion

Courtney: David, Mohan, I’m not sure about today. It’s not that I’m excited or not excited; I’m just a little cautious walking into this one with you two...

David: Uh oh.

Courtney: ...because I have heard [00:07:00] David talk about a vision for the future when it comes to data, and I am not sure that Mohan agrees. So I’m kind of excited for it, but I wanna open a can of worms. That’s my best way to describe this. And it may not be.

David: Trying to instigate a fight. Mohan, don’t disagree with me spiritually, okay?

Courtney: Yes. So I wanna talk about the cleanliness of our data and what that means for AI, because I think you have a lot of companies right now that feel like they can’t even engage in what’s happening in the AI conversation. They’re looking at their current data and how it’s structured, it’s a mess, and they feel like, oh my goodness, we gotta go get that straight before we can even engage in what’s happening in the AI space. So, David, do you mind opening this conversation [00:08:00] up and talking a little bit about how you see AI helping with data cleanliness, and maybe a little bit of a technology jump? I don’t know if that’s how you would describe it. Share a little bit about that, and then Mohan, feel free to rebut.

David: She’s teeing this up as an argument. I’m gonna find a way to agree with you, Mohan. Alright, listen, here’s the deal. I think you have to take this from a business perspective, and I will just flat out say that organizations are spending way too much time and way too much money trying to normalize, engineer, and get their data in a state where it can be used.

It is a broken, fundamental problem with the quote-unquote modern data stack and with our business intelligence tools: there’s just way too much work that has to go into it. And I think it is such a big problem that it’s causing tons of waste, which means there’s a [00:09:00] huge opportunity there.

Data has become more and more necessary to operate a business, and that’s what’s happening with AI, right? To operate a competitive business, you’re gonna have to have data that’s consumed by artificial intelligence, and it’s becoming more and more important every day.

This problem has to be solved. There is no longer a “five years from now” option where an organization puts all of its operational objectives on pause while they go build these huge data lakes and warehouses, do all this data engineering, and wait a couple of years before they get any value and insights from it. I just think that has to change.

Oh, and by the way, the second problem is that because it all has to be this quote-unquote managed data, and I don’t just mean structured data, I mean data that’s actually put into the warehouse [00:10:00] and managed by your team, it’s ignoring 90% of the data that’s actually out there: all of the information that flows that we haven’t thought of, that isn’t caught.

And I think there has to be innovation in this space, and we’ve gotta make it easier for organizations to operationalize their data.

Mohan: Fundamentally, right? So when you think about AI technologies, they rely on data, right? And there is a version of garbage in, garbage out that you cannot dispute. So I understand the business urgency around “let’s get something out there,” but if the data is garbage, your output’s gonna be garbage. It’s really as simple as that. Then the question is how you deal with it if you don’t have great data, and how you get there sooner.

The other thing I’d like to lay out before we start thinking about this more is that people think of clean data as simply something that’s accurate or not accurate. You [00:11:00] know, people think of it as cleaning your room: let me just get everything off the floor and put it back where it should be. But there is more to data cleanliness. Yes, it has to be reasonably accurate, but it also has to be secure. You gotta think about the privacy aspects.

You gotta think about the fairness aspects, right? Otherwise you’re gonna train something that’s not fair in its output. And then you gotta think about how you can add to the data you have through external sources, through licensed data, and so on, to make it into a more robust data set. So these things are super important.

And the way that organizations go wrong, and here I’m gonna support David’s point of view a little bit, is they try to boil the ocean. Like, I gotta get every room in the home clean before I can invite anybody into my home, right? It doesn’t have to be that way. You just have to clean up [00:12:00] your family room, living room, and kitchen before you invite somebody over.

David: Put the mess in the closet and close the door.

Mohan: Close the door. Exactly right. So for whatever reason, people are not able to bring what we do every weekend when we invite people over, how we clean our homes, into this space. It’s really important to be able to pick a use case, pick data that’s reasonably, in quotes, clean (and I’ve already described what clean means),

David: Mm-Hmm.

Mohan: then start with that, and then the rest can be done in parallel.

David: So, Courtney, I gotta say, I think the disagreement, the fight, is gonna have to be with you here, because I don’t hear Mohan saying a single thing that I disagree with. What I think is that we’re coming at it from two different angles, right? I’m coming at it saying there’s an imperative that, as technologists, we have to get better at this. We have to solve this. It does not make any sense for us to be spending this time and this money delaying the use of the data. Mohan’s [00:13:00] coming at it saying, here’s what practically needs to be done. And I would agree with that. I say yes, all of those factors are true. Now, given those two realities, we’ve gotta figure out how the bridge meets in the middle.

And I think this is the innovation that’s going to happen: there is enough waste there, there’s enough dollars for the taking, that companies are gonna start to figure out how to use technology to innovate here and to solve some of the manual, heavy-lift, onerous work that’s being done, in order to bridge this gap and allow us to, yes, have clean data, but at the same time do it in a way where it’s not such a drain.

Courtney: So the Jetsons promised me a flying car as a child. I have yet to see my flying car; it hasn’t happened. But what I hear you saying is that Rosie the robot in my house, in my data [00:14:00] house, the Mohan data house, is what we actually need and might have much sooner. And you two are in agreement with that.

David: Yeah, you know that just yesterday SpaceX announced that the flying car company it invested in said it hit almost 3,000 pre-orders, right? So, like, I think the...

Courtney: Oh, so yeah, so you’re saying I’m gonna get both.

Mohan: Courtney, since you’re a keen observer of humans and organizations, let me ask you a question. Even though we agree, you know, we come at it from two sides, but David and I fundamentally agree, why do you think organizations fall into this bad pattern of trying to boil the ocean and getting everything

David: Hmm.

Mohan: clean and so on before they can launch projects? Why do you think people fall into this bad pattern?

Courtney: You know, it’s interesting. [00:15:00] I don’t know if you’ve ever been in an executive team meeting, but it seems like as soon as you find one thread of data that seems wrong, it feels like all the data must be wrong. I thought your analogy was a good one, because it feels like, oh, on this report, this doesn’t make sense, it can’t be accurate, and so it feels like the whole house is in disarray. I feel like there’s a sensitivity that things must be perfectly accurate 100% of the time. I don’t think we have much tolerance for it to be spiritually

David: Yeah, I, I...

Courtney: ...correct.

David: Totally agree with that. I totally agree with that. And I think, on top of that, as AI begins to consume it, and it is a black box we don’t understand,

Courtney: Right.

David: we’re [00:16:00] afraid that the conclusion that comes out is erroneous because of that one thread that we don’t even know is being used, that we don’t understand. And I think the scariness of AI is doubling down on that, but that’s always been true, absolutely, Courtney.

Courtney: Yeah.

Mohan: I mean, that makes sense to me as well. You know, it’s very hard for us humans to worry 1% about 1% problems and 10% about 10% problems. Instead, we worry a hundred percent about 1% problems.

David: Right.

Mohan: And that sounds like what you’re saying, which makes sense to me.

Courtney: I think for people listening: in our last conversation, our last episode, we really outlined this opportunity, this unique moment in time, that if you’re a smaller organization or a mid-market organization, you really have an opportunity. And I wanted to pull the thread on data [00:17:00] because I think it’s one of those things that can hold you back.

It’s like all of a sudden the speedboat has a big anchor, a really big anchor, and the cruise ship is passing you. Because you know what they have: they have data, they know how to do data governance, and they’ve structured their data really well. They’ve got all of that.

So is there some confidence we can give them about data? Again, I love your analogy of the house. It doesn’t have to be perfect. You don’t have to have every room cleaned up to really start engaging in AI and start deploying it in your organizations.

Mohan: I think a good place to start would be what many competent companies do all the time, which is look at their ERP, their CRMs, and their core business systems and practice data hygiene. If you don’t have a good sales pipeline, you can’t run even a slightly scaled sales organization.

David: [00:18:00] Mm-Hmm.

Mohan: If you don’t have good Jira tickets, it’s really hard to do product development at scale, and so on. So a good place to start is all the business systems you use: just clean them up, not for AI’s sake, but for your own operations.

David: I love that. I love those specific use cases, Mohan, because it starts to actually solve this problem, right? So let’s look at the example of the CRM and the sales data. Anyone that’s ever tried to run an organization and oversee sales knows how hard it is to get clean sales data in your CRM, right?

And it’s not because salespeople are bad people. They’re just wired in a way that we love, right? They are not the type of people that are gonna stop and take the time to record every single data point and make sure every email is uploaded to the CRM. It’s just hard. But technology can do that now, right? There’s no reason why [00:19:00] technology can’t monitor the communications they’re having and automatically translate them into CRM data.

So, yes, we could take your advice and go impose a new program that’s gonna totally tick off the sales team and get them frustrated at management, because now 30% of their time is not with their customers and not with their prospects, and it’s wasted overhead effort. Or we can say there’s new technology arriving, so why don’t we use it to solve this pain and this chasm that has existed between operators and sellers, make both of our lives easier, and help get clean data into the system? That’s the type of thing I’m talking about. I think we have to start looking at that.

Mohan: Completely agree, right? So if there’s an overlay, an AI program or function, whatever you call it, that is just flagging data that’s wrong or likely wrong, it’s so [00:20:00] much easier to go fix those things, because computers are good at figuring out anomalies. It can be something as simple as: a record has not been updated in 60 days and it is in the deal stage of the pipeline, so you know that something has to be done with it, or likely has to be done. So clearly I think that’s possible, and that’s gonna help with the friction you just mentioned.
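The kind of “stale deal” check Mohan describes is simple enough to sketch in a few lines. Here is a minimal illustration in Python; the record fields (`stage`, `last_updated`) are hypothetical stand-ins for whatever your CRM actually exposes, not any real CRM schema:

```python
from datetime import date, timedelta

# Hypothetical CRM records; field names are illustrative, not a real CRM schema.
records = [
    {"id": "opp-1", "stage": "deal", "last_updated": date.today() - timedelta(days=75)},
    {"id": "opp-2", "stage": "deal", "last_updated": date.today() - timedelta(days=10)},
    {"id": "opp-3", "stage": "lead", "last_updated": date.today() - timedelta(days=90)},
]

def flag_stale_deals(records, max_age_days=60):
    """Flag deal-stage records that haven't been touched in max_age_days."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [r["id"] for r in records
            if r["stage"] == "deal" and r["last_updated"] < cutoff]

print(flag_stale_deals(records))  # ['opp-1']
```

A real deployment would pull records through the CRM’s API and route flagged IDs back to an owner as a task, but the anomaly rule itself can stay this simple.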

Courtney: Yeah, thank you for that use case. I think that’s really interesting, and maybe a thought-provoking way for you as an executive to start thinking about how to clean the house in a way that doesn’t slow you down in deploying AI. You might actually deploy AI to help you solve the problem, which is a win-win all around.

David, Mohan, thanks for this.

David: Absolutely.

Mohan: Thanks.

Courtney: We recently published a white paper on AI [00:21:00] Transformational Readiness.

If you wanna find out if your organization is ready to transform with the help of AI, click the link in our show notes or the YouTube description to download the paper today.

Jon Gillham joined Pete Buer recently to talk about his company Originality.AI and how they’re helping organizations ensure the content that appears on their sites isn’t generated by AI.

Interview

Pete: Jon, hi. Welcome to the podcast. So glad to have you.

Jon: Yeah, Pete, great to be here. Thanks for having me.

Pete: If we could, let’s give listeners a little bit of context. Can you tell us about Originality.AI?

Jon: Sure, yeah. So Originality.AI came out of a need we had when we’d built up a content marketing agency, which we eventually sold, where we would have writers coming in creating content for us and us selling it on to clients. But our ability to know with certainty whether a customer [00:22:00] was receiving AI-generated content or human-generated content was always challenged.

We had an AI-generated content division, where we transparently communicated that the content was AI generated, and a human-generated division, but we didn’t have sufficient controls to make sure that all of our content was human generated. That’s where we built Originality.AI: to help anyone that’s publishing content on the web know if the content they’re publishing is AI generated or human generated.

Pete: Awesome, thank you. And can you take us a click deeper: how do we determine if content is AI generated or human generated?

Jon: Yeah. The simplest way to think about it is it’s kinda like the good Terminator, sort of like the second movie, when Arnold went good. That’s sort of how it works. It’s our own AI that’s been trained to detect what human content looks like and what AI content looks like, and to tell the difference between the two.

[00:23:00] So it’s what’s called a classifier, and it provides a probability score on whether that content was AI generated or human generated.
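For readers curious what “a classifier that provides a probability score” means mechanically, here is a deliberately tiny sketch: a naive Bayes model over word counts, trained on four made-up strings. This is not how Originality.AI actually works; it only illustrates the shape of the output, a probability that a text is AI-generated:

```python
import math
from collections import Counter

# Toy training data; a real detector trains on millions of labeled documents.
samples = [
    ("furthermore the aforementioned paradigm leverages synergies", "ai"),
    ("in conclusion it is important to note the key takeaways", "ai"),
    ("lol i tried that recipe last night and totally burned it", "human"),
    ("my dog ate half the draft so this is the rewrite honestly", "human"),
]

def train(samples):
    """Count word occurrences per class."""
    counts = {"ai": Counter(), "human": Counter()}
    for text, label in samples:
        counts[label].update(text.split())
    return counts

def p_ai(text, counts):
    """Naive Bayes probability that `text` is AI-generated (add-one smoothing)."""
    vocab = set(counts["ai"]) | set(counts["human"])
    log_odds = 0.0
    for w in text.split():
        p_w_ai = (counts["ai"][w] + 1) / (sum(counts["ai"].values()) + len(vocab))
        p_w_hu = (counts["human"][w] + 1) / (sum(counts["human"].values()) + len(vocab))
        log_odds += math.log(p_w_ai / p_w_hu)
    return 1 / (1 + math.exp(-log_odds))  # squash log-odds into [0, 1]

counts = train(samples)
score = p_ai("it is important to note the synergies", counts)
print(round(score, 2))  # well above 0.5 for this AI-flavored phrase
```

Real detectors replace the word counts with learned representations from large language models, but the interface, text in, probability out, is the same.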

Pete: Nice.

So we’re using AI to keep AI in check.

Jon: yep.

Pete: customers, uh, want to know in their content what is AI generated and what is human generated. Why, why is that important? I.

Jon: I mean, it’s really fresh news right now. Ultimately, publishing AI-generated content has a ton of efficiencies, a ton of benefits. The simple answer is that people are often happy to pay a writer, whether it be a hundred dollars or a thousand dollars, for an article.

They’re not super happy to find out that it was copied and pasted out of ChatGPT in five seconds. So if they’re okay with publishing AI-generated content, they wanna be the ones receiving the efficiency benefit of the use of AI, not the writer potentially misleading people into saying they’ve written it when, in [00:24:00] reality, they just copied and pasted it.

So then why do they care? Why do publishers care? For Google, it’s potentially an existential threat: whether or not their search results get overrun with AI-generated content. If they do, why would anyone go to Google? Why wouldn’t they just go to the AI, which would then have a better understanding of who they are?

And so Google, I mean, this is fresh news as of March 5th, with some manual actions starting to take place today, on March 6th, where sites that have mass-published AI-generated content are getting hammered by Google, getting de-indexed. So there’s a right way that Google says to use AI-generated content and a wrong way.

There’s certainly a risk to publishing AI-generated content on your site, and with our tool, we want the owner of the site, the publisher, the decision maker, to be the one accepting that risk and making that decision, not the writer.

Pete: If I’m an [00:25:00] executive listening, I might be tempted to cast an eye across the organization to figure out where people are using AI to create whatever their content might be. Based on this conversation, are companies at risk? Are they exposed?

Jon: Yeah, there’s a reputational risk. There’s potentially a legal risk; I think that one might be overblown, and it’s not my world, so I’m not as focused on that one, in terms of copyright ownership of the content that gets produced. But I think there’s a marketing risk if it’s published on their site.

There’s a reputational risk if the editorial process is not as thorough as it needs to be and the AI hallucinates. We’ve seen multiple examples of the misuse of AI for text generation resulting in significant harm to companies. Microsoft, in their news articles, published a tourism article for Ottawa that mentioned the [00:26:00] soup kitchen as a great place to go eat. Clearly a reputational harm occurred from that. With Amazon, there was a study done that we were involved in, published in The Guardian, where there were books on Amazon that were very likely AI generated that recommended determining if a mushroom was safe to eat or not by tasting a little bit of it.

Obviously significant harm. Fortunately that was removed, and it doesn’t seem like anyone was harmed. But where would the risk, where would the liability have landed, if that had gotten into someone’s hands and they had followed it?

Pete: Okay, so your mushroom story just took the executive who was listening, and a little bit worried about risk, to a new level of concern. What should that executive go do?

Jon: Get an understanding of where text is used within the company, then do a spot check on the rate of [00:27:00] usage of AI, and try to understand those risks. Are they concerned about the legal risk around copyright ownership? Are they concerned about the reputational risk?

Are they concerned about misinformation being injected into their communication and not being identified? Are they concerned about the marketing risk from Google? You know, AI is phenomenal and has so many benefits, but there are risks associated with it that need to be managed.

So what do they do? Evaluate their company on where those risks could exist. Probably not internal communications, probably not internal chat, but potentially their marketing team and their website would have a risk they should understand. And any sort of documentation that flows out of the company, again, there’s potential for risk there.

And then do a quick check with an AI detector like Originality to understand the probability that [00:28:00] content has been generated by AI or a human.

Pete: Elsewhere in this episode, there’s a panel discussion about how clean your data has to be to utilize AI. I feel like this conversation makes me want to rethink the definition of clean. Would you agree? There’s more to clean than just organization and tagging and so forth.

Jon: Yeah, it’s a question that we’re wrestling with right now. We’re running some studies around the prevalence of AI in different locations on the internet, and for the rest of humanity, the only clean data sets of known human writing have already been created, which is kind of a crazy thing to think

Pete: Wow.

Jon: about. Because as of right now, all data sets that come from publicly available sources, i.e. the internet, are becoming synthetic, with AI content being injected into them. That creates some questions that I’m not smart enough to [00:29:00] predict the answers to, but it definitely is redefining what clean data looks like.

It’s also redefining, and I think every company needs to come to their own decision on, the tolerance for use of AI. For example, internally, our marketing team is not allowed to use AI. Our research team, which has members for whom English is a second language, is absolutely encouraged to use AI to communicate what they need to communicate to us, because we can better understand what they’re communicating. So it’s a bit for every company to define where the use is appropriate. But then also, on the cleanliness side, it’s pretty crazy to think about how messy the data sets are now and are going to continue to get.

Pete: Jon, thank you. It’s been such a pleasure to have you here.

Jon: Great. Thanks Pete.

Courtney: Thanks as always for listening and watching.

Don’t forget to give us a five-star rating on your podcast player of choice, and we’d really appreciate it if you’d [00:30:00] also leave a review. At the end of every episode, we like to ask one of our AI friends what they think about the topic at hand. So: Hey, Claude, what’s happening? This episode, we’re talking about how clean a company’s data has to be to utilize AI.

What are your thoughts? And now you’re in the know. Thanks as always for listening. We’ll be back next week with more headlines, roundtable discussions, and interviews with AI experts.
