
Ops Cast
Ops Cast, by MarketingOps.com, is a podcast for Marketing Operations Pros by Marketing Ops Pros. Hosted by Michael Hartmann, Mike Rizzo & Naomi Liu
The Future of Martech: How AI is Revolutionizing Marketing Operations with Luke Crickmore
Unlock the secrets to transforming your marketing strategies with the power of AI, featuring insights from Luke Crickmore of Algomarketing. Learn how artificial intelligence is not just a tool but a game-changer that elevates data analysis and campaign strategy to new heights. Luke shares his expertise on automating the mundane, freeing up time for creativity, and crafting more personalized marketing experiences that cut through the noise. Discover how AI empowers marketers to concentrate on what truly matters: interpreting data to make informed decisions and fostering innovation in their campaigns.
Imagine a world where entire marketing campaigns are nearly complete with AI-driven insights, needing only the human touch to refine and finalize. That's the future we discuss, exploring AI's role in generating dynamic hypotheses for campaigns, moving beyond basic multivariate tests, and ensuring brand alignment through fine-tuned models. We delve into the crucial topics of intellectual property and data security, explaining how AI can enhance content creation while safeguarding brands' unique voices. Through our conversation, we paint a vision of a more efficient and effective marketing process, driven by AI yet validated by human expertise.
Episode Brought to You By MO Pros
The #1 Community for Marketing Operations Professionals
Connect Scattered Data with AI Agents
Explore how easy AI can be with Forwrd.ai for Marketing Ops
Build AI agents that can predict, forecast, segment, and automate the entire data processing workflow -- integrating, prepping, cleaning, normalizing, analyzing, and even building and operating your models. With Forwrd, you can get it done 100 times faster.
Hello everyone, welcome to another episode of OpsCast, brought to you by MarketingOps.com and powered by all the MO Pros out there. I'm your host, Michael Hartmann, once again flying solo. We're looking forward to Naomi and Mike rejoining, hopefully before the end of 2024, though it may be 2025 before we get that going again. Joining me today is Luke Crickmore from Algomarketing. Luke is the marketing technology practice lead at Algomarketing, and we are going to discuss what the future of martech will look like with AI as the backdrop, so I'm really excited about that. We've had fewer conversations about that with our guests than I would have expected. Prior to joining Algomarketing, Luke spent the bulk of his career in several roles at a technology consulting company with a focus on marketing automation and marketing and sales technology.
Speaker 1:Luke, thanks for joining me today.
Speaker 1:Yeah, so if you can't tell, Luke is not from the US; that means I'm probably very brief.
Speaker 2:Don't like to talk about myself.
Speaker 1:We'll have to draw that out of you, that's okay. So let's get started, because I always feel like I should know more about this whole AI revolution than I do. Someone was even posting on LinkedIn recently asking people whether they're using AI for recording and capturing notes for meetings and things like that, and I thought, oh, I've wanted to, but I just haven't. So I'm not opposed to it, I'm just a slow adopter, I guess. All right.
Speaker 1:So when you and I talked before, though, I think one of the things that we were pretty aligned on was the impact that AI can have on, dare I say, revolutionizing the way that we do analysis, especially in the marketing and sales context. My belief today is that there are two limiting factors, the primary one being that it takes time and effort to do analysis, for any person, even a skilled one, to do analysis well and identify interesting things that might be useful for an organization. So I'm curious: do you have a similar belief, or am I off my rocker?
Speaker 2:Personally, I think the thing that AI unlocks more than anything is time and expertise. It really gives people who don't necessarily have that expertise the ability to go beyond what they can do today, but it also unlocks a lot of time. So if we're talking purely about data analysis, AI can obviously find trends for you in that data, get you one step ahead of where you were without it, maybe two or three steps ahead, actually, now that I think about it, and then give you the ability to ask more questions, to upskill yourself, to go deeper into the data.
Speaker 2:And I think the other thing that it unlocks is really the opportunity to do more. If you think about a standard marketing campaign that we might run today, not that many people are running tests on those campaigns, and if they are, they're generally quite simple tests. What I think AI is going to do is enable people to do way more of that in an automated way, with lots of control over how they do it and deliver it, and then AI can give them the skills to report on those different tests that they run. So yeah, I think AI is going to really change people's day-to-day.
Speaker 1:Yeah, I'm going to paraphrase what you said, but changing the nature of what you do when you're doing analysis is really what I expect out of AI. It's not so much that there won't be more analysis, but having it do the heavy lifting of pulling the data together and running the analysis is going to leave more time for additional analysis, because that's typically what happens, right? You do one set of analysis, you get some outcome, and subsequent questions come up. But it also allows you to spend more time assessing what came out. I mean, the numbers are not all that interesting by themselves. They're more interesting in the context of: what does that mean? What can we do with it?
Speaker 2:Yeah, exactly right. I think if people are using AI right now to try and do analysis on their data sets, they're probably not going to have the best outcome if they aren't considering what they put into the AI. What I'm trying to say is, if you go to ChatGPT or Gemini, you upload an Excel sheet and you say, give me the trends in this spreadsheet, it'll give you some really broad-level trends.
Speaker 2:But really, what you need to be doing as a marketeer is saying what you want it to produce for you.
Speaker 2:So you go into it already knowing that there's an output that you need to produce.
Speaker 2:The other thing that you need to do is feed it the right information. If you're just chucking in a massive spreadsheet with tons of information it doesn't need, it's still going to try and use that information in its output, or at least consider it. So if you are doing analysis, really try and streamline that data so that it's specific to the task at hand, and then leverage the AI to get you to that next stage. Ask questions of the AI, ask what other things it might analyze in this data set, because that gives you a bit more creativity. Yes, you went in looking at this data in a certain way, but maybe there's then a reason for you to ask different questions of that data, and that leads to additional questions it can help you answer. So yeah, I'm in complete agreement. Really, I think, a great opportunity with analytics and AI.
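To make that concrete, here is a minimal Python sketch of the workflow Luke describes: trim the export down to the columns the task needs, then tell the model exactly what output you want. The `call_llm` helper is a hypothetical placeholder for whichever chat-completion API you use, and the column names are purely illustrative.

```python
# Hypothetical placeholder: wire up your preferred chat-completion
# client (OpenAI, Gemini, etc.) here.
def call_llm(prompt: str) -> str:
    return "(model response would appear here)"

# A full export often carries columns the question doesn't need.
full_export = [
    {"campaign": "Q3 Webinar", "owner": "a.smith", "cost_center": "MK-112",
     "sends": 5400, "opens": 1890, "clicks": 300, "conversions": 41},
    {"campaign": "Nurture #2", "owner": "b.jones", "cost_center": "MK-118",
     "sends": 12000, "opens": 2400, "clicks": 610, "conversions": 37},
]

# Keep only the fields the analysis actually needs.
RELEVANT = ("campaign", "sends", "opens", "clicks", "conversions")
trimmed = [{k: row[k] for k in RELEVANT} for row in full_export]

# State the output you want, rather than asking for generic "trends".
prompt = (
    "Using only the email campaign data below, rank campaigns by "
    "click-to-conversion rate and flag any campaign with a high open "
    "rate but a low conversion rate. Reply as a short bulleted list.\n\n"
    + "\n".join(str(r) for r in trimmed)
)
print(call_llm(prompt))
```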
Speaker 1:Yeah, at some point I want to come back to this idea of limiting the input data, because, maybe going back to my lack of understanding, I thought one of the benefits would be that you could chuck in a bunch of data and let it do its own thing. But I think you hit on another thing too, or maybe I was hearing what I wanted to hear. To me, the other limiting factor today, and I still think it's an important one even with AI doing the heavy lifting, is that from what I can see there are still not a lot of people, particularly in marketing, marketing operations, and revenue operations, who really understand statistics or analytics in a deep enough way. Maybe they know what questions to ask, but they don't know how to analyze the output. Part of that comes from my own experience in leading teams and coaching people, particularly when I've coached people on how to present data.
Speaker 1:One of the things I see a lot of people do is they get asked to generate a report. They generate the report, and there's an obvious anomaly, something that doesn't match the pattern, and they're not prepared to answer the question that's going to come up when they put it in front of somebody who's less familiar with it: what does that mean? And it can undermine everything in terms of the belief that you know what you're doing. So do you see the same kind of gap in skills and knowledge, those fundamental things that I think will still be important even if you've got AI, because you still need to interpret?
Speaker 2:I can relate to that, because I also struggle with my analytics skills, trying to bring things together, trying to analyze large data sets. I struggle with it, but it will become easier.
Speaker 2:And one of the ways that it becomes easier: we develop applications for our clients that enable them to do things like understand data and create narratives from that data using AI. What we do there is create really bespoke models that do a lot of that analysis on behalf of the marketeer. They understand the concepts that need to go into the data, they understand how the data needs to be analyzed, and then we basically tell the model what to look for and how to present that back to the marketeer.
Speaker 2:And one of the things that I think a lot of AI isn't doing at the moment, but that will become more regular, is explainability. When AI is making decisions, when AI is showing results to you, a lot of the time you won't necessarily know if those results are right, because you haven't been the person who did the work to get there. But AI will start to show you how it came to those results, and from that I think there is going to be an upskilling in people's understanding of how these types of analysis happen, which will benefit the marketing ops community in the long run. In the short term, though, absolutely, there is still a requirement for data science or data analyst roles to help you get where you need to go.
Speaker 1:Interesting. So, one of the things I've noticed the few times I've used some of these platforms, and I think I know why: seeing my nieces and nephews every once in a while, the changes are very obvious to me when they may not be to the parents, and I'm not in and out of AI platforms like ChatGPT every day. I've started noticing that when you put in a question and it comes back with something, it will now provide more references, like here's where I got this input from. Do you think that same kind of concept will come into analyzing data, where it says: this is how I came to this conclusion?
Speaker 2:okay has an ai element, always has an explainability layer which tells the end user what we did and why we did it to get to the result that we're showing you, especially when it's a recommendation.
Speaker 2:So if we do some analysis and then we make a recommendation, say, your best-performing campaign is X, this particular segment is oversaturated, we think you should build this type of campaign, then we will have an explainability element behind it that says we made all these decisions because of X, Y and Z, and shows the underlying data that got us to that point. For some things we've even built for the data analyst: we've written SQL queries that they can run on the data to do some of their own analysis. We're at an early stage, really, in a lot of this tooling, and a lot of the audiences that we work with are the data analyst types and the marketing operations types who are a bit more technical, because they're more willing to jump in and try these new things. For them we're creating tooling specifically so they can do some bigger, better analysis.
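One plausible way to structure the explainability element Luke describes is to ship every recommendation with its rationale, its supporting rows, and an auditable query. This is a sketch under my own field names, not Algomarketing's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A recommendation plus the evidence trail that justifies it,
    so the end user can see how the AI arrived at the result."""
    summary: str                 # the action being recommended
    reasons: list[str]           # plain-language rationale
    supporting_rows: list[dict]  # the underlying data points used
    audit_sql: str = ""          # optional query an analyst can re-run

rec = Recommendation(
    summary="Shift budget toward Campaign B",
    reasons=[
        "Campaign B converts at 4.1% vs. a 1.7% portfolio average",
        "Segment X shows signs of oversaturation (5+ sends in 30 days)",
    ],
    supporting_rows=[{"campaign": "B", "sends": 12000, "conversions": 492}],
    audit_sql="SELECT campaign, sends, conversions FROM campaign_stats;",
)
```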
Speaker 1:So I've become more convinced that AI is going to have more of an impact on analysis than it will on content, even though the hype was initially centered around content.
Speaker 1:I think we are on the same page generally about the ability of AI to really help with the analysis of data. You went so far as to say that at some point AI, and maybe this is what you were getting at with recommendations, or maybe not a recommendation, but that AI could even develop hypotheses. I've actually been talking on other episodes recently about applying the scientific method: if you think of tests in that mode, one of the first steps is to define a hypothesis, and then you do your analysis and test to see if the hypothesis holds true or not. But you think it's going to enable us to actually have it generate hypotheses? How would that work?
Speaker 2:Just stepping back to your previous comment about analysis and content, I think there is huge value to be had from both sides. One of the things that is probably less exciting about content is that we're just doing more of the same. If you think about any marketing use case, personalization, sentiment analysis, competitor analysis, any sort of market research that marketing teams might do, what AI is now enabling us to do is way more of the same, and that unlocks new use cases for us. We are able to do truly hyper-personalized content. We are able to launch more campaigns than we've ever been able to before. We are able to analyze when people are potentially oversaturated, or when they want emails to arrive in their inbox. All of this enables us to do way more of the same.
Speaker 2:Leading on to this hypothesis point, what I think we will be able to do with AI, and what we're actually already testing in our labs team, is this idea of generating hypotheses for marketing campaigns.
Speaker 2:If you imagine, at the moment, multivariate tests often happen on, say, the subject line, or maybe the hero image, and you test that one thing at that one time.
Speaker 2:What AI will enable us to do is actually write a hypothesis that showing someone a certain image at a certain time, and then having them land on a landing page and see a different image, or the same image with different content and a different call to action, is more effective than a different variant of that. So rather than us testing one thing and seeing if it's more beneficial, AI will enable us to test a number of things over time, automatically, and then we'll be able to see the value of that. It will be creating its own hypotheses about your personas, about the data you have, about the behavior it's seeing, and starting to react to that and provide best-in-class experiences at all times for all people. So one of the major benefits for marketeers is actually going to be seeing how effective their content is, and having AI enable them to produce far more effective content than they have today.
Speaker 1:So, okay, I'm still not 100 percent convinced about the content piece, especially today, but maybe go a little further. One of the concerns I would have with content, and why I think the ChatGPT stuff was overhyped, is that ChatGPT has access to enormous amounts of content to generate the next letter, next word, whatever. Whereas if you're concerned about IP, you don't want to put your stuff into that model, so you might want something that's limited, which now limits the ability of the engine to come up with something really new and creative. And I think what you're describing is a bit of a combination, with content somehow being part of the analysis as well. So maybe go a little further on what that means.
Speaker 2:On the model side, just to make people not worry if they're already doing some of this stuff: for any of these automation use cases I've been talking about, you'll need to use the ChatGPT API, or the Vertex AI API through Google, or Amazon's equivalent, and unlike the free version of ChatGPT, all of that is not trained on your data. So you can trust that they're not learning from your data and they're not storing anything about your data. But one of the ways that we can be super impactful with content, when we're talking about generating content, is to create our own fine-tuned models, trained on our own content. Think of the use case where you've got Marketo. With Marketo, you can use the API to pull out all of the data about an email, including the HTML. You could then train a model on the behavior data versus the HTML data and the values in it, to see which is better.
Speaker 2:And this isn't necessarily new to AI, right?
Speaker 2:This isn't using the generative element of AI, this is just machine learning. But what we are able to do now is generate content based on the best-in-class content that you currently have, and then layer on top what I was talking about with testing. You're then not only creating the best version of the content you have today, you're continually creating better versions of that content as you continue to test, leveraging AI. So, to be honest, I really do see one of the quickest wins here being the ability to create content that is really focused on your brand. It will enable you to really quickly create new types of content written in your brand voice, with your message, with your products at heart, just leveraging the data you already have sitting in your marketing automation platform and marrying that up with the behavior data. I think there's a really quick win for a lot of businesses there.
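For the Marketo piece, here is a rough sketch of pulling email assets and their HTML via the REST Asset API. The endpoint paths follow Marketo's public documentation as I understand it, so treat them as assumptions and verify against your own instance before relying on this.

```python
import requests

BASE = "https://YOUR-INSTANCE.mktorest.com"  # your Marketo REST endpoint

def get_token(client_id: str, client_secret: str) -> str:
    # Marketo issues short-lived OAuth tokens from its identity endpoint.
    r = requests.get(f"{BASE}/identity/oauth/token", params={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    r.raise_for_status()
    return r.json()["access_token"]

def list_emails(token: str) -> list:
    # Asset API: email metadata (id, name, subject, status, ...).
    r = requests.get(f"{BASE}/rest/asset/v1/emails.json",
                     params={"access_token": token})
    r.raise_for_status()
    return r.json().get("result", [])

def email_content(token: str, email_id: int) -> list:
    # Asset API: the HTML sections of a single email.
    r = requests.get(f"{BASE}/rest/asset/v1/email/{email_id}/content.json",
                     params={"access_token": token})
    r.raise_for_status()
    return r.json().get("result", [])

# Downstream, you would join this HTML with engagement data (opens,
# clicks, conversions) to build the training pairs Luke describes.
```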
Speaker 1:So you're matching up the actual source email content, HTML, subject line, maybe audience characteristics, along with the performance of said email, and maybe, if you can go a step further, conversion, if there's a landing page or something like that, and you're sort of marrying those two. So theoretically you could say: I'm looking at the performance and I'm looking at the characteristics of the email. Maybe it's the specific content; it could also be the structure, the elements that are used, whether it's using different fonts or things like that. I mean, is it getting to that level of stuff?
Speaker 2:To be honest, I doubt it. All of these things would have a priority, and I don't think we would really consider fonts a priority, but we would consider, say, the type of message. And one of the ways that we would leverage LLMs and generative AI here is to tag all of the emails we have by how much conversion we expect from them.
Speaker 2:For example, if it's an autoresponder that's saying download this, click this button, we expect that to have a really high conversion rate, and therefore it gets tagged that way, whereas a nurture email would be tagged as nurture. For the highest-performing nurture emails, we would then start looking at the characteristics driving that performance, and use that to train our model on what it should try to produce in the future. And then the vision is that the marketing team go into that tool, they click generate me a campaign based on recommendations provided through the analysis another AI agent has performed for them, and they've already got content that's 90% of the way there. They're then the human in the middle, checking and validating before it goes out to the end user. And you can imagine just how that would scale.
Speaker 2:You could really scale that. We're building some things like this in our labs at the moment, and it's proving to be really effective. Marketing teams have so much data. It's unbelievable how much data marketing teams have, how much metadata they capture, how much data is stored in marketing automation, in Salesforce, in GA. That is just ripe for building this automation on top of.
Speaker 1:Sure, yeah. I mean, I get the free versions of ChatGPT and the risks associated with those. But this is part of why I've believed for a while that even if you have a limited amount of content, you're still going to have a ton of data, and if you have it in a tool that is isolated to your data, it could learn from that. You mentioned along in there, I think, that this gets you 90% done. I think there's an implication, then, that there's still a human element in helping to build a model or train a model. What does that look like?
Speaker 2:Everything that we build uses human reinforcement learning, which is where humans are in the process determining what good looks like, and there is so much that we can understand from the data.
Speaker 2:So if it's behavior data, imagine we're still talking about email, we can obviously see from the behavior data which emails were performing well and which weren't.
Speaker 2:But when we create the content related to that, we still need a human to tell us if it reads well, if it makes sense, if it's related to the thing that we're sending out, and we use that feedback to fine-tune the model again, to determine what good looks like for whatever it is we're doing, whether that's creating insights using generative BI, building content, or automating some process via an agent. There's usually always a human involved in that process for validation. Over time, that human will get involved less and less, because you're able to train out some of the things that the AI is doing, but up front there will be this intensive process where humans are used at almost every stage to validate that everything that's happening is the right thing.
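One simple way to capture that human-in-the-loop feedback is to record each reviewed output with a verdict, then keep only approved examples as fine-tuning data. The chat-style JSONL shape below mirrors what OpenAI-style fine-tuning commonly expects, but check your provider's docs; everything here is a sketch, not Algomarketing's pipeline.

```python
import json
from dataclasses import dataclass

@dataclass
class Review:
    prompt: str      # what the model was asked
    output: str      # what it produced
    approved: bool   # human verdict: reads well, on-topic, accurate
    notes: str = ""  # optional reviewer comments

def to_finetune_jsonl(reviews: list, path: str) -> None:
    """Write only human-approved examples, in a chat-style JSONL
    format suitable as fine-tuning input."""
    with open(path, "w") as f:
        for r in reviews:
            if not r.approved:
                continue
            f.write(json.dumps({"messages": [
                {"role": "user", "content": r.prompt},
                {"role": "assistant", "content": r.output},
            ]}) + "\n")

reviews = [
    Review("Write a subject line for our Q3 webinar invite",
           "Join us live: scaling ops with AI", approved=True),
    Review("Write a nurture email intro",
           "Dear valued customer ...", approved=False,
           notes="too generic, off brand voice"),
]
to_finetune_jsonl(reviews, "finetune_data.jsonl")
```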
Speaker 1:It's a little bit like how the deduplication platforms that I've seen work, right? So again, this is another one where the nature of a marketer or content person changes over time based on this. But with that human element, is there a risk of biases driving the model or the engine towards things that are less productive? How do you account for that?
Speaker 2:I would say there is already a risk of bias in everything that happens today, and actually one of the really good things about AI is that you can unbias it. There are a few different ways that you can unbias models.
Speaker 2:Initially, models are already unbiased because they're purely looking at data. When you start to tune them and train them, with the people who are submitting the feedback, you need a wide enough range of people that you can determine where there are outliers among those humans who are potentially biasing the model. Obviously, there is a risk that all of the humans involved might be biased one way or another. That's where the testing comes in, so you don't rely only on human feedback. You're still testing, as you were saying around productivity, for example, what the output would look like even without that feedback occasionally, and then using that in your model to continue training it going forward. I'm not a statistics person, so that's probably the best I can represent it.
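As a crude illustration of catching rater bias, you can compare each reviewer's average score against the group and flag outliers. This is a first-pass check, not a proper statistical treatment; the threshold and data below are made up.

```python
from statistics import mean, stdev

def outlier_raters(scores: dict, z: float = 2.0) -> list:
    """Flag reviewers whose average rating sits more than `z` standard
    deviations from the mean of all reviewers' averages."""
    if len(scores) < 3:
        return []  # too few raters to say anything useful
    averages = {rater: mean(vals) for rater, vals in scores.items()}
    overall, spread = mean(averages.values()), stdev(averages.values())
    if spread == 0:
        return []
    return [r for r, avg in averages.items()
            if abs(avg - overall) / spread > z]

# With only four raters, a looser threshold flags "dana",
# who approves almost everything.
print(outlier_raters({
    "alex": [3, 4, 3, 2], "bea": [3, 3, 4, 3],
    "carl": [2, 3, 3, 4], "dana": [5, 5, 5, 5],
}, z=1.25))
```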
Speaker 1:No, and I barely am. So when it generates this content, can it also produce a predicted outcome? I said productivity earlier, but that's not really what I meant; I meant whatever metric, or set of metrics, you're trying to affect. Can you also generate expected results of a given variant?
Speaker 2:variant yes. Expected results of a given variant yes, by leveraging simulations. So with one of the apps that we built it's our generative BI tool this enables us to sort of look at data in different ways. We use at the very top level. We have target data data and attainment data, so how close someone is achieving that target. We then have funnel health data, uh, underneath that, and campaign data underneath that.
Speaker 2:By looking at the target and attainment data, we can tell the AI where to look next. Then it can go and look at funnel health, and it can determine, say, whether opportunities are moving through the funnel, whether there's one massive opportunity, whether there is a stall at some point in the funnel, whether sales are potentially not following up with certain leads. Then we can look at campaign data and determine whether campaigns are performing well, which campaigns are producing the most conversions, and we use that data to marry those two areas together. So we can say, for this particular region, there is a late-stage stall in the funnel.
Speaker 2:You should be building campaigns that are about late stage conversions for that particular marketing team, so we can make these recommendations. Then we run simulations based on other regions that are doing late stage campaigns, we can then run simulations for this region to determine what kind of impact that might have on the conversions that go through for the next phase. So like the fourth level of insight really that we create, if you consider first level is that top level. Second level is looking at new layers of data. Third level is like joining the data together and then fourth level is making the recommendation. Based on simulations, we can really show people really accurately, based on the data that they have, what the likelihood is of doing these marketing actions would have on potential pipeline and obviously for most marketing teams that is what matters. At the end of the day, it's that pipeline.
Speaker 1:Well, yeah, it should. So you used the term generative BI somewhere in there. Is what you outlined sort of the process, is that what it means? I'm not sure what that means.
Speaker 2:So generative BI is a broader term. It's using large language models to help you interrogate data and make sense of data. I guess there are a few different ways of doing it, but when we build generative BI platforms, we make them hyper-focused on specific use cases. The reason, coming back to the top of the call where you mentioned the idea of loading in lots of data, is that LLMs don't really perform very well when you upload a huge amount of data. Google Gemini will let you upload 2 million tokens, which is quite a reasonable amount of data, but the more data you load in, the more likely it is to hallucinate or not give you the right result. And it does make sense, because it's now having to consider a huge scope of data to answer your very streamlined, specific question. So we provide it with as little data as we can to get the output that we want.
Speaker 2:If, if we're doing data analysis, we provide in aggregated tables, summarized tables, we then provide a huge amount of context about what this data means and what kind of questions the AI should be asking on this data to create the insights that we need to create, and we do that at every level that I was talking about. So we do that at the top level and at our target level, let's say. Then we move down to the funnel level and we're asking it questions like are there any particular regions with huge opportunities? Which opportunities are at risk? Which ones haven't moved in the last six months? Ask it all of this information to give it a picture. We then start prioritizing the results that it's producing, before then giving the insight back to the user. So I guess in that use case, what we do is we do a huge amount of analysis and then we give the user back three sentences of the most primary thing that they should care about right now.
Speaker 2:And one of the great things that we can do with generative BI is we can that output that we're producing, we can make that as verbose as we want, so it could be three, five, ten paragraphs, all with rich information for that end user, or we could do what we do with this particular use case and we stick it straight into a PowerPoint deck, which then gets presented to the leadership team.
Speaker 1:So, just curious, and I probably conflate this a little bit: when I hear BI or business intelligence, I often think about visualization in addition to analysis. So are you able to do visualization stuff as well?
Speaker 2:visualization stuff as well. Yes, yes and no. Uh, yeah, yeah, out of the box. No, it is quite hard to do like get ai to produce you some really impactful charts that would make that are beyond what marketing teams normally have today, but what we do really well, there's two different things that we're doing at the moment. Actually, in our labs labs team, we're testing this as well.
Speaker 2:One of the things we're doing is giving people the ability, on their existing dashboards, to ingest the data relevant to the thing they want insights from, hit a button, and have it go through our insights engine and generate insights on that dashboard. Or even without hitting a button, when it refreshes, it will just generate insights related to the thing they're looking at, things they should care about, the so-what factor. We only want to show them things that make them think "so what", or give them a reason to ask "so what".
Speaker 2:The other thing we're doing is looking at integrating with Looker. Looker has a model in it called LookML, and that model lets us abstract the data from the underlying SQL queries you would otherwise need to run. So we can just ask the AI: go and look at this data, consider this, consider that, and it will produce us a dashboard alongside insights, which is where we're trying to take our generative functionality. But a lot of businesses already have really good dashboards; at least, a lot of the businesses we work with have best-in-class dashboards.
Speaker 2:So really, what they want is the additional layer on top, which is just providing them that real-time insight deep into the data that they need um, where they're looking at, uh, where they're already in. That's another thing with AI really is you should build it, yeah.
Speaker 1:Right, or it's like one of those typical complicated analyses that gets done one time because an executive asked the question, and now you can do that in a more automated way. So I like that. I'm still struggling with this, though, I think because of my own bias, the belief that you can throw a bunch of data at this and it's going to comb through it and find things that you wouldn't have otherwise found. I'm still not convinced that you can't do that; I'm just not saying that's where you should start.
Speaker 1:But maybe you have examples where you tried throwing gobs of data at it and it came out with something nonsensical. You walked through a bit of a process: start at the top level, then go a little deeper, and a little deeper again. If I understand it right, was that something you learned over time working with your clients at Algomarketing? How did you get to the point where you said, really, what we need to do is start with a smaller question, or a smaller data set, asking more specific questions is maybe a better way of saying it, and then go from there?
Speaker 2:Obviously this is all very new. We are hiring, and have hired, experts in particular domains: data science, AI, machine learning, prompt engineering. That helps us get to where we are now, which is having these tools working inside big enterprise businesses. But what we have to do is be unbelievably agile. Even today, Gemini 2.0 has come out and we're already testing how Gemini 2.0 might impact some of our models. We have worked in an extremely agile way: build, test, refine the outputs we've been able to produce. We have gone through many, many different stages. One of the things that's really hard when you're building this type of generative intelligence is the data validation. When the AI has said something, how do you validate that it is actually correct? A lot of that is hard hours and effort, looking into the data to determine whether it's correct or not, and if it wasn't correct, where it went wrong.
Speaker 2:So we've just put in a lot of time and energy to build these tools, as well as a lot of expertise and smarts. We are experts in sales and marketing, so we understand how that works, we've hired experts in AI, and we're really keen to drive sales and marketing AI forward.
Speaker 1:Yeah, that makes sense. When I've had responsibility, whether broad or specific, for reporting and analytics as a marketing operations leader, and I've had the opportunity to hire somebody like a data scientist, I've struggled. No problem finding people who have general data science experience and knowledge, but the domain of sales and marketing has been the hard part. I tell people all the time: I started my career doing database work in the financial world, where the data is pretty clean and has controls. When you move to sales and marketing data, especially in the B2B context, it becomes really messy really fast, because you don't have the controls. You have a lot of humans interacting with data and content who are not incentivized on the quality of it. So I think that's a big challenge.
Speaker 2:Yeah, data scientists come in with grand visions of what they want to do and then realize that the data isn't in any way clean, which then limits them. But yeah, absolutely Like finding someone with domain knowledge and that sort of understanding. Generally they're always a bit scrappier as well, so they're more willing to just try and get things done. Those have been the people that have definitely succeeded in their teams.
Speaker 1:Yeah, no. I think it's really easy to throw your hands up and say, oh, this data's a mess and we can't do anything about it. The premise that the data is going to be, quote, right or, quote, clean is the first mistake you can make. So assume the data is going to be messy and dirty. But I think that also goes back to why it's important, whether you're doing your own analysis or using AI to do it, to understand what comes out of it.
Speaker 1:I use the old term trust but verify. I mean that's yeah. I use the old term trust but verify right. I mean that's yeah. I think that's what I'm hearing and maybe over time, your trust seems to go up. Wow, my brain is on overload at this point now, luke. So this has been a great conversation. Thank you very much for sharing what you and Elgo Marketing are doing conversation. So thank you very much for for sharing what you and algo marketing are doing. Uh, if, if people want to follow up and hear more about what you're talking, what you're doing with your team or what algo marketing is doing, what's the best way you can visit algo?
Speaker 2:marketingcom. You can find me on linkedin luke crickmore on linkedin. Um, I'm yeah, you can't necessarily find me anywhere else because I'm british and I don't really like to share um but yeah, I'm really happy to have a conversation about anything martech, anything, ai, anything technology. Used to be a developer, so I'll be really happy to just like, yeah, talk about some proof of concepts and things just geek out on that.
Speaker 1:Yeah, I can't say I'm British, but I am old, so I limit what social media I'm on. This has been a lot of fun, Luke, thank you so much. Thanks for letting me throw some hard questions at you.
Speaker 1:Thanks again, appreciate it. Thank you to our audience, as always, for supporting us and providing your feedback. If you have feedback or suggestions, or you want to be a guest, feel free to reach out to Naomi, Mike or me and we will be glad to talk to you. Until next time. Bye, everybody.