Ops Cast

Email Deliverability in the Age of AI with Mustafa Saeed

Michael Hartmann, Mustafa Saeed Season 1 Episode 179

Text us your thoughts on the episode or the show!

The rise of AI tools has dramatically changed the landscape of email marketing and sales outreach, creating both exciting opportunities and significant risks. As Mustafa Saeed, co-founder and CEO of Luella, explains in this eye-opening conversation, many revenue teams are now "scared shitless" about how their reps might abuse these powerful new technologies.

When AI-powered automation tools are implemented without proper guardrails, organizations face serious threats to their brand reputation, domain health, and even compliance standing. The problem isn't AI itself, but rather "AI coupled with reckless automation" that floods inboxes with content that lacks genuine value. As email service providers like Google and Microsoft respond with increasingly aggressive spam filters, even legitimate messages from trusted senders are getting caught in the crossfire.

The solution isn't abandoning AI but reimagining how we use it. While many AI tools promise to remove humans from the loop, Saeed argues for bringing them back in through thoughtful collaboration between humans and AI agents. This approach combines the best of both worlds—AI's ability to analyze vast datasets and humans' talent for building authentic relationships.

Episode Brought to You By MO Pros 
The #1 Community for Marketing Operations Professionals

Support the show

Speaker 1:

Hello everyone, welcome to another episode of OpsCast, brought to you by MarketingOps.com, powered by the MO Pros out there. I am your host, Michael Hartmann, flying solo today. Mike is in the throes of getting Spring Fling 2025 off the ground, which is the week after the one we're currently in, in May 2025, so hopefully you're going. Naomi's off doing whatever she's doing. But joining me today is Mustafa Saeed, and we're going to talk about email deliverability in the age of AI. I think email deliverability is one of those mysterious topics anyway, so it's always good to cover it again. Mustafa is the co-founder and CEO of Luella. He's a serial entrepreneur who paid his way through university by creating and monetizing a YouTube channel, and he has also been a consultant across a variety of different industries and companies. So, Mustafa, thanks for joining me today.

Speaker 2:

Thanks for having me on the pod.

Speaker 1:

Yeah, this is going to be fun. Well, we talked about that before. I did the two-sentence thumbnail sketch of your career, which doesn't do it justice. So before we get into the core topic of email deliverability and how AI can be a part of helping with that, why don't you talk us through more of your career and life journey? I think our audience always finds that interesting. I find it interesting, even if they don't, to hear all the different stories about how people ended up where they are.

Speaker 2:

Yeah, absolutely, and, like you said, I'm an entrepreneur and revenue growth leader. I've been building ever since I was 16 years old. Like you said, I started a small YouTube channel that paid my way through university. My first job out of school was at a marketing agency that failed after Apple released iOS 14, and I got to be part of the journey of transforming that failed agency into a now successful SaaS that generated several hundreds of millions of dollars for a lot of really cool e-commerce brands like HelloFresh. I then joined Clearco, a fintech unicorn, as an account director.

Speaker 2:

I was pretty much a Swiss Army knife sales guy, and I've done a lot of growth consulting. I've worked with a lot of incubators and startup accelerators, largely from a go-to-market perspective as well. My team and I came together largely because of our shared experiences. We saw a lot of the same challenges. We saw a lot of new AI sales tools entering the market, a lot of AI SDRs that were making it easier than ever to spam seemingly infinite numbers of emails and content, and we started to see early signs of larger policy shifts from Google and Microsoft when it comes to their spam filters, and now LinkedIn when it comes to aggressive web scraping on their platform. That's why we wanted to enter the space and build Luella: to help revenue and marketing professionals better protect their reputation, improve their deliverability and scale their outreach volume safely, with guardrails to prevent AI abuse. So we want to help teams make the most of AI without abusing AI.

Speaker 1:

Yeah, that's interesting. My first job, pre-college, was actually very different from yours. I cleaned an auto body shop, everything from the shop floor to the bathrooms. The lessons learned there still haunt me, I think, is probably the best way to put it. But I've shared that with my kids. I have three boys, so I've shared that story with them, and I was like, this is why we push you to go to school and learn and be prepared to do something else. If you wind up wanting to work like that, I have no issue with it, but give yourself options, right? So that's great. Thanks for sharing that. Fascinating stuff.

Speaker 1:

So when you and I first chatted, you touched on this a little bit as well. The catalyst for you starting Luella is the way you're seeing revenue teams abusing AI tools, or I think even just automation tools in general, which has been an issue for a while, leaving the people we like to cater to, right, marketing ops and RevOps teams, in the position where they have to clean up after the fact. So I think I know the answer to this, but hey, do you still see this as an issue? And maybe elaborate a little bit on how big an issue you see it being in the market. And then, for people out there thinking this is a new tool that can really help us scale, which it probably can, right, what are the risks of doing that without, like you mentioned, guardrails?

Speaker 2:

Yeah. Most of the RevOps leaders we speak to see a lot of opportunity with AI and automation, but they're scared shitless about how their reps might abuse it.

Speaker 1:

In my opinion, they should have been scared shitless already.

Speaker 2:

Yeah, exactly, and it's really revenue operations and marketing operations that we're seeing step up to help implement these kinds of guardrails across their sales and marketing organizations. These are the teams that are more focused on operational efficiency and protecting go-to-market systems, and we're very grateful to have been working with a lot of incredible RevOps leaders. The senior VP of RevOps at Oracle is a strategic investor and advisor in Luella, and it was the RevOps leaders at TravelPerk that really helped take this over the finish line. That was our first really big enterprise client, and if it wasn't for their RevOps leaders, I really don't know where we'd be with that. But it really depends on every organization, right? I feel like the definition of RevOps changes with every single organization that we speak to.

Speaker 2:

In some organizations, RevOps is literally just Salesforce, anything and everything Salesforce; if it deviates from Salesforce, they won't do it. In other organizations it's a growth marketing leader or a VP of sales, very scrappy, that's implementing it. So it really depends on the organization, how many resources they have and how they're structured, as to who will own this initiative. But more often than not, it has been RevOps and marketing ops that's been stepping up. And in terms of the range of risks: brand reputation risks, domain reputation risks, compliance issues, spam and blacklist incidents, and account bans as well.

Speaker 2:

The last thing that you want is one of these AI agents misappropriating your brand. We work with a lot of health tech companies, for example, that are very sensitive when it comes to the claims they can make about their offering, about their software. Even just one word can land them in some compliance hot water. The last thing you want is for an agent to spam an insane amount of content, hurt your domain reputation and contribute to an account ban. So those are just some of the risks that we've seen in the wild.

Speaker 1:

I mean, have you... this is something I haven't really been paying that close attention to, but I can imagine there are scenarios where there's actual financial risk too, and by that I mean actual penalties and fees for not complying with regulations, so of course GDPR, CASL, whatever. Have you seen or heard any stories where some of these tools that enable scale are actually creating that kind of issue, or is it mostly more about the reputational stuff?

Speaker 2:

Yeah, it's mainly on the reputational side that we've seen it. Uncontrolled automation can contribute to violations of, you know, CAN-SPAM, CASL, GDPR and those kinds of compliance frameworks. We have seen examples of AI agents making, you know, very sexist remarks to a prospect, for example. So those kinds of things can land you in hot water, and there is a financial impact to that for sure.

Speaker 1:

Yeah, that's interesting. I mean, in some ways this is not a new problem. I think there's always been this risk of inappropriate stuff going out. It feels like the big difference now is twofold: one, the ability to scale and do more faster, but also this belief that there can be personalization that takes context into account. It's not just rules-based, right, it's kind of based on context and has some more insight. But it can still generate stuff that's inappropriate, and then, because it's not rules-based, it's a lot harder to diagnose how it happened.

Speaker 1:

Right, exactly. So I can imagine that being a problem. I mean, it seems like it would be an obvious potential risk to me, right, if someone's considering that. So why do you think that was not something that was anticipated when these companies were rolling out these kinds of tools? Is it because they were so focused on the benefits, or potential benefits, that they hadn't considered the risks?

Speaker 2:

I think, you know, whenever we get new technologies, there will always be risks that people are unaware of. These technologies are continuing to improve over time, and we're always staying on top of that to continue to build guardrails as well. So I'm not surprised; there's a lot that's unexpected, and I think we're going to continue to see that as well. So it's a matter of experimentation, right? Taking these tools, experimenting with them, seeing the pros and cons, and mitigating any problems we see during that process.

Speaker 1:

Yeah, maybe it's as simple as expectations being maybe too optimistic about it all. That's interesting. So again, I can imagine there are multiple components to how this could be a risk to reputation, since that seems to be the main thing. So is the issue with AI, LLM technology, being used to create content that is bad, wrong, inappropriate, whatever? Or is it with the automation component that can really drive the scale of this with limited oversight and intervention? Or is it a combination? What's the mix? Or is there something else that I haven't even really considered?

Speaker 2:

Yeah, so AI by itself isn't the problem. It's AI coupled with reckless automation.

Speaker 1:

Yeah, that's a good way of saying it, yeah.

Speaker 2:

Exactly. It's one thing to use AI to, you know, move faster and sharpen your thinking; that's something we very much encourage. People should be experimenting with these new tools and finding ways to innovate and incorporate them as part of their workflow. But it's another thing to flood the internet with AI-generated emails, LinkedIn posts, comments, content in general that is just going to annoy the end user, especially with zero oversight and zero limits. That's the kind of AI spam that Google, Microsoft and, recently, LinkedIn are really focused on combating, and that's why we believe we're in urgent need of AI systems that do enforce guardrails. Things like human oversight: while everyone else is trying to remove humans from the loop, we want to find ways to bring them back in, right?

Speaker 2:

We've seen what happens when you let an AI agent just go completely autonomous. Human-to-agent collaboration is something that should be encouraged. Things like approval processes: certain pieces of AI-generated content should be approved by an admin on the account or by the sales rep themselves. Things like automation limits, right: testing your messaging at a much smaller volume before pressing your foot on the gas and scaling further, just to make sure that people are responding favorably and you are providing value to the people you're creating that content for. Abuse monitoring, so we're able to catch issues before they become a problem. And educating your reps on the ethical boundaries of these technologies and how you want your brand represented in this AI-native world. Anything less than those things, I believe, is just a risk to your brand.
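
To make the first two guardrails concrete, here is a minimal sketch of an approval-plus-limit gate in code. The cap value, field names and function are assumptions for illustration only, not Luella's actual implementation.

```python
# Illustrative guardrail gate (hypothetical names and thresholds, not Luella's code).
DAILY_SEND_CAP = 50  # assumed per-rep automation limit

def can_send(rep: str, message: dict, sent_today: dict, approved: bool) -> tuple[bool, str]:
    """Block a send if the rep has hit their daily cap, or if AI-generated copy
    has not yet been approved by an admin or by the rep themselves."""
    if sent_today.get(rep, 0) >= DAILY_SEND_CAP:
        return False, "automation limit reached for today"
    if message.get("ai_generated") and not approved:
        return False, "awaiting human approval"
    return True, "ok"

# Example: an unapproved AI-generated email is held back.
print(can_send("rep_42", {"ai_generated": True}, {"rep_42": 12}, approved=False))
```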

Speaker 1:

Yeah, it's interesting to me, sort of just in general. My view of this stuff has been evolving, but I think where I land right now is that I try to make a distinction between the AI technologies, because that encompasses a number of different things, LLMs probably being the most common one people think of, but there are others, and automation, because I think we've had automation for a long time. We had a great guest a few episodes ago where this really hit me, right, the distinction between automation and AI. AI can be a part of automation that might replace something that was very rules-based in the past, you know, where it can be "intelligent" in quotes. The other thing you touched on, and I think this is a really interesting one, is that there's maybe an assumption that this is replacing the human components, and I've gotten to the point where I think the true value in all this is going to be where the AI tools, agents, whatever, are going to be great tools that can augment and accelerate someone's ability to get things done.

Speaker 1:

I'm on an advisory board for an engineering school here in Dallas that I went to, and we just had a meeting recently with a new dean and a new department chair. He was talking about this in particular. There's actually research that has already been done showing that students, and this is more like middle school students, who use AI as a tool actually tend to perform better in the outcomes, so grades, scores, whatever. And interestingly, there was a difference by sex, gender, right: girls got a bigger bump than boys at that age, which I think to some degree I could attribute to school at that age being better adapted to the way girls work than to the way boys do. But that's probably a subject I don't want to get into because I'm not an expert. It was interesting to me, though, another sort of aha moment, that the tools themselves are not necessarily the replacement, but they can really augment. Which is kind of like...

Speaker 1:

I think it's been in the back of my mind for a while because, even if you think about analytics and reporting, I'm really bullish about the idea that AI can have some huge impacts there. But you still need somebody who can interpret the output, because sometimes it could generate, say, an insight that says go do X and you're going to get a better result, but X is actually not a realistic or practical thing you could do. So you go, okay, well, that's not going to make sense. Sorry, long diatribe. But it seems like that's really where we're at, this stage where the idea that AI and automation could replace is now becoming one where it could augment, but the initial steps toward that really were focused on replacement, and it feels like that may be the root cause of this.

Speaker 2:

Yeah, we've seen these AI agents act independently and we've seen sales reps act independently, and we're seeing more success combining the best of both worlds. I hate some of the messaging we're seeing from some of these AI SDRs, where it's very aggressive, you know, trying to replace sales reps. I don't think that's the direction we're moving in, even if these AI agents are going to continue to get better. There's a lot that we humans can do that these AI agents can't. So blending the best of both worlds, that's the direction we're seeing this industry going in.

Speaker 1:

Yeah, maybe I just haven't been able to. It's interesting, because on LinkedIn I see a lot of people going, oh, I can tell when something is AI-generated, and I'm like, I actually don't think you can. I mean, maybe it's obvious when someone is really bad at prompting, but if they're good at prompting, I think it can generate pretty good stuff. So I'm not sure people could really distinguish between the two if we really did a test and they were really focusing on it.

Speaker 2:

There are some telltale signs. ChatGPT very often uses em dashes, that very long dash, but at the same time, people have been using those since the 1800s, right? So I feel like people are starting to move away from unique forms of grammar just because ChatGPT does it. There are some telltale signs, but even then, it's very difficult to know for sure.

Speaker 1:

Yeah, and I think the reason I bring that up is that when I get outreach, mostly on LinkedIn, but I get it over email too, I find myself wondering about this, because I haven't really noticed much of a distinction in the bad ones, right? They're just bad. And to me it doesn't really matter if it was AI-generated or human-generated or even a collaboration; if it's bad, it's bad. So is that kind of where you're at, that it doesn't really matter how it's done, it's the quality and the effect on reputation that matters, as opposed to how much is automated or developed from AI?

Speaker 2:

Ultimately, we want to provide value to the people that we're reaching out to, and you know there are forms of spam that are created from AI agents independently and sales reps independently, so that's why we need to leverage these technologies to move faster in areas that are currently slow, just so we can increase the rate at which we are providing value to the people that we're reaching out to in any form of the content that we are creating.

Speaker 1:

Interesting. You touched a little bit on LinkedIn and some of the things that recently happened to some of the big data providers. I don't know if we should mention names; I think people will already know if they're paying attention to this at all. And I actually don't know, it's been long enough that I don't know if they're back on LinkedIn or not. But anyway, they're not? Okay. Yeah, I actually meant to go look at this recently, not related to this, because there was something else that came up and I wanted to look anyway. So how is that tied to this? Because, I mean, this is a reputational thing, yeah?

Speaker 1:

Is it a reputational thing for those data providers, or is it a reputational thing for the brands that use those data providers, or is it both?

Speaker 2:

There are unique challenges for both. Like you mentioned, LinkedIn has removed the company pages of several very large companies in this industry that I will not name. And I believe this is really just the beginning, right? This is going to get worse before it gets better, just like with a lot of policy changes, and we're going to see a lot more scrutiny from these platforms, especially LinkedIn. But unless you are scraping LinkedIn aggressively, I wouldn't worry about your company page being removed.

Speaker 2:

The reason certain companies had their company page removed is because they've been abusing the living hell out of LinkedIn's systems for the last several years, even before GPT-3.5. Especially one of the larger ones; they've been in court with LinkedIn over this for the last several years, so this is really nothing new. What is new is their company page being removed, a unicorn in this industry having their company page removed. That is very eye-opening. But for the majority of brands, the risks are more related to brand erosion, compliance and legal exposure, blacklist incidents and deliverability destruction. I wouldn't worry about company pages being removed; that is just something we're seeing from the companies doing insane volumes of scraping, especially for personally identifiable information.

Speaker 1:

Right. I can imagine a scenario, though, where someone, probably a large enterprise, goes, oh, we've been spending X amount of money with one of these data providers; now, with these new technologies, we can just replace that on our own, build something bespoke. It could run into the same issue, right?

Speaker 2:

Yeah.

Speaker 1:

I mean, exactly. Maybe, maybe not as likely, like it's not going to be at the same scale these companies are operating at.

Speaker 2:

They're doing it with literally hundreds of millions of contact details that they are refreshing on a very regular basis. But who knows where LinkedIn is going to go when it comes to controls over web-scraping automations; there is a potential they could get that aggressive. And there's a lot of speculation that I won't mention because it is just speculation, but potentially LinkedIn venturing into this area as well and kind of building a Sales Nav 2.0. So, yeah, a lot is still up in the air when it comes to this, but we're tracking it very closely.

Speaker 1:

Yeah, okay, interesting. So one of the other things we talked about, and you said something to me that kind of stuck in my head, and maybe I'm going to tie it back to my mental model of rules-based versus, I was going to say intuition-based, but it's not really that, more like intelligent automation, is the idea that one of the other challenges is that this adoption of AI as part of the automation didn't really take advantage, and I think I'm putting words in your mouth here, so correct me if I'm wrong, of the AI component of the automation to be smarter. It was essentially just laid over already existing static flows. I can imagine there are nurture programs, things like that, that are out there, sales cadences, et cetera. What do you see? Is that what you're seeing as well, and is that a contributing factor, that it's maybe not just the technologies themselves, but the way it's been adopted, without really changing the way they're approaching it?

Speaker 2:

So the way that millions of sales reps all over the world are currently operating, and when I was at Clearco, our team operated this way as well, is we'll take several thousand leads from some stale database like Apollo or ZoomInfo and follow a static sequence where, on day one, you send an automated email.

Speaker 2:

On day five, you send an automated email.

Speaker 2:

On day 10, you send an automated email.

Speaker 2:

That in itself is a very robotic way of doing business.

Speaker 2:

Now imagine throwing LLMs at these legacy workflows, making it easier to add contacts to a campaign, making it easier to generate copy, and making it easier to write personalization that really isn't personal and really isn't pulling the right context that you actually need to provide value. That is also contributing to this broader problem of spam.

Speaker 2:

So, rather than just throwing AI at these legacy workflows, we urgently need to build the right infrastructure for this AI-native world, and that requires more comprehensive data integration. There are thousands of data sources that give us an understanding of who the people are that actually have a problem your company can solve for, instead of spamming some database. That also requires algorithms that move away from those rule-based systems and are actually useful. And there is a lot more work that needs to be done by the broader industry just to make sure the content we're putting out is contextually relevant and, rather than following a static sequence, we're doing a better job of nurturing those prospects over time.
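
As a rough sketch of the contrast being drawn here, this is the difference between a static day-offset sequence and a signal-triggered touch expressed in code. The signal names and the fallback logic are illustrative assumptions, not Luella's workflow.

```python
from datetime import date

# The legacy pattern: fixed day offsets, no context.
STATIC_SEQUENCE_DAYS = [0, 5, 10]

# An illustrative signal-driven alternative: only reach out when something relevant happens.
RELEVANT_SIGNALS = {"raised_funding", "expanding_internationally", "switched_vendors"}

def next_touch(enrolled_on: date, today: date, prospect_events: set[str]) -> str:
    """Prefer a contextual note tied to a real buying signal; fall back to the
    static cadence only when nothing better is known about the prospect."""
    triggered = RELEVANT_SIGNALS & prospect_events
    if triggered:
        return f"send contextual note about: {sorted(triggered)[0]}"
    if (today - enrolled_on).days in STATIC_SEQUENCE_DAYS:
        return "send scheduled sequence email"
    return "wait"

# Example: a funding announcement beats whatever day of the sequence we happen to be on.
print(next_touch(date(2025, 5, 1), date(2025, 5, 6), {"raised_funding"}))
```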

Speaker 1:

Yeah. So I'm going to throw in something totally unplanned here, but it popped into my head because I was recently talking to a young salesperson who was picking my brain about the mind of a buyer. I've been in sales where I had BDRs, SDRs, things like that. One of the things I always made sure I did, or tried to do, is, if I get into an interaction with a prospect and it becomes obvious that our product or service, whatever, is not going to be able to help them, I always want to try to do something to help them along the way, because I think then, when the next thing comes around, they might think of us. That was my thought process. But do you think that...

Speaker 1:

I mean, that's one of the things about the traditional approach, whether it's rules-based or very regimented: it has always felt like "this is what you need from us," as opposed to trying to understand what it is you need as a buyer that we might be able to solve, and if we can't, how can we help you solve the problem you have, right, put you in contact with someone who can. So do you think there's an opportunity? Is this a kind of rethinking of how to approach this? Because I could imagine AI, LLMs, actually could be really good at, first, identifying when that's the scenario and then, two, tapping into, I'll call it a knowledge base of maybe not direct competitors but alternative solutions that you could then offer. Do you think that might be something we could see in the future?

Speaker 2:

That is going to be crucial on the nurturing piece, because the overwhelming majority of the people you're reaching out to are not going to be ready to buy right away. And if you're just going to continue spamming them over the course of the months or years until they are ready, you're not going to be top of mind. But if you are providing value and you're solving a problem that is more relevant to them, that is how you stay top of mind. One of the biggest deals I closed in my career took over a year to get over the finish line.

Speaker 2:

When they announced a round of funding, they got an email from me congratulating them and letting them know how I've supported other venture-backed startups in scaling cost-effectively. When they announced they were expanding internationally, they got an email from me letting them know how I've supported other companies expanding into English-speaking international markets, and I went into detail on the vendors that worked for me, vendors that wasted my time, and tactics I A/B tested and saw a measurable improvement in performance from. And when they started complaining about their agency, they got an email from me, and that's when we finally managed to get them on a call and over the finish line, over a year of nurturing, of relationship building. But I provided value along the way; I didn't spam them following some sequence.

Speaker 1:

Yeah, I mean, one of the things I told this early-career salesperson was: make it interesting and helpful, and if you can do that, you're going to earn the right to have the next communication. All right, so one of the things we talked about, and I think this is maybe part of what you are trying to solve here, is trying to integrate the human and the AI-slash-automation elements of these, specifically, go-to-market motions. How are you thinking about that? How do you approach that challenge?

Speaker 2:

Yeah, while so many AI SDRs are looking for ways to remove humans from the loop, we're looking for ways to bring them back in, because we see the value in it. We're going to see a lot of agent-to-agent collaboration very soon, and we would also encourage human-to-agent collaboration, just so we can make the best of both worlds. There's a lot that AI can do that we humans can't. AI can analyze enormous datasets very quickly; me, looking at a spreadsheet for a couple of hours, I'm tapping out. My short attention span can't handle it.

Speaker 2:

Luella, on the other hand, did not complain. But there are a lot of things that we humans can do that these AI agents can't, right? I can meet a prospect in person at a conference or for coffee and do a more meaningful job of building a relationship with them. An AI agent is not going to do that. A robot can show up in person, but that's not going to be as valuable as a human being actually showing up and being present. And with the proliferation of AI, human connections are going to become increasingly valuable, and that's why we're working towards augmenting instead of replacing, just like you said. So the two aren't mutually exclusive, right? You can use both. These AI agents don't have to be entirely autonomous, and there's a lot of value in bringing humans back in.

Speaker 1:

Just for clarification, when you use the term agent, what does that mean to you? Because I hear that term a lot, and I think it's one where right now everybody kind of has their own interpretation of what it means to them.

Speaker 2:

So an agent is an AI system that can operate semi-autonomously, not entirely autonomously: it can do some tasks autonomously but can still take in inputs from other AI agents and from humans as well. Usually they'll have access to unique data sources and an AI model as well. So, for example, one of the AI agents we have is a deliverability agent that will protect your sender reputation, monitor your email deliverability, and prevent blacklist incidents and spam incidents over time.

Speaker 2:

There are a lot of things our deliverability agent does entirely autonomously for you, and there are a lot of things Luella will get your input on. So you're right, there are a lot of definitions, and a lot of the definitions I've seen out there define agents as being entirely autonomous, no human input whatsoever, and I don't believe that to be the case. I think this industry is moving to a world of more collaboration between agents, and with humans as well.
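
As a minimal sketch of that definition, some actions taken autonomously, others routed to a human (or another agent) for input, something like the following. The signal names, thresholds and callback are hypothetical, not Luella's architecture.

```python
# Hypothetical sketch of a semi-autonomous agent cycle, per the definition above.
def deliverability_agent_cycle(signals: dict, ask_human) -> list[str]:
    actions = []
    # Autonomous: purely mechanical fixes the agent is trusted to make on its own.
    if signals.get("spf_or_dkim_broken"):
        actions.append("re-issue DNS authentication records")
    # Human-in-the-loop: judgment calls get routed to a person before acting.
    if signals.get("spam_rate", 0.0) > 0.05 and ask_human("pause this rep's sending?"):
        actions.append("pause outbound for the affected mailbox")
    return actions

# Example run with a stubbed approval callback standing in for the human.
print(deliverability_agent_cycle(
    {"spf_or_dkim_broken": True, "spam_rate": 0.08},
    ask_human=lambda question: True,
))
```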

Speaker 1:

So maybe a very personal, real-world example might be: I want an agent that's going to go, hey, you haven't had a haircut in, you know, two weeks, three weeks, whatever your normal pattern is. Do you want me to schedule that for you? Yeah, yeah.

Speaker 1:

And then it goes, are there any restrictions? And then it will actually go book the appointment and get it put on your calendar, including drive time, et cetera. Is that the kind of example? Yeah, okay, that makes a little more sense. So maybe go a little deeper on this deliverability agent, because I'm imagining a lot of potential things that could mean. How does that work, maybe just at a high level? And how is that similar to or different from something like, I mean, when I've run into problems with potential risk to sender reputation, I've used something like Sender Score, right, to be able to keep an eye on it. Help me understand what role it plays and then how it works.

Speaker 2:

Yeah. So it starts with the infrastructure that you are sending from. A lot of legacy outreach tools on the market will use these shared servers with several thousand senders in them, all linked to just one IP address. And in these servers you're going to have both good actors that are running cold outbound tastefully and ethically, and those bad actors that don't give a shit and are spraying and praying. When you're using these shared servers, you're gaining exposure to those spammers and grifters. Google and Microsoft know that, and that's going to hurt your reputation, and you're more likely to land in spam because of it.

Speaker 2:

So for us, the first line of defense is, rather than using these massive shared servers, we will create your own mini server, and we call these mini servers clusters. Each cluster has its own IP address and each of your sales reps has their own cluster, so that if one sales rep goes rogue, it's isolated and doesn't contaminate your entire infrastructure. That is the first line of defense. The second has to do with deliverability monitoring. So, Michael, do you know what a placement test is?

Speaker 1:

So let me see, this is testing me. You have, I'll call it, a seed email address, and that inbox is then monitored to see: did it actually make it to the inbox?

Speaker 2:

Yeah, we're sending emails on a regular basis from your mailboxes to some of ours to see where they land: the inbox, the spam folder, the promotions folder. Sometimes they don't get delivered at all, and that is one of the best indicators we have of the overall health of your email infrastructure. Why? Because Google and Microsoft don't tell you what your true spam rates are. They only give you self-reported spam metrics, which is a small fraction of the data they actually have. And most outreach platforms will have a deliverability score or a sender score, and that's also a bullshit metric, because they have full control over what goes into it. It's not a real metric; they get to control the inputs, and it is, more often than not, aggressively inflated. So these placement tests give us that much-needed visibility we won't get elsewhere, and that allows Luella to diagnose your email infrastructure. And we do have an integration with Slack, just to be able to push notifications to your team and ours whenever there is an action that needs to be taken.
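
For anyone curious what a placement test looks like mechanically, here is a bare-bones sketch using Python's standard imaplib: send a message with a unique subject marker to a seed mailbox you control, then check which folder it landed in. Folder names vary by provider (Gmail exposes "[Gmail]/Spam", Outlook uses "Junk"), and this is a simplified illustration rather than how Luella actually does it.

```python
import imaplib

def find_placement(host: str, user: str, password: str, marker: str,
                   folders=("INBOX", "[Gmail]/Spam")) -> str:
    """Return the folder where the seeded test message (identified by a unique
    subject marker) landed, or 'not delivered' if it is nowhere to be found."""
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    try:
        for folder in folders:
            status, _ = conn.select(f'"{folder}"', readonly=True)
            if status != "OK":
                continue  # folder doesn't exist on this provider
            status, data = conn.search(None, f'(SUBJECT "{marker}")')
            if status == "OK" and data[0].split():
                return folder
        return "not delivered"
    finally:
        conn.logout()

# Example (seed-account credentials are placeholders):
# print(find_placement("imap.gmail.com", "seed@example.com", "app-password", "placement-test-8421"))
```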

Speaker 2:

Let's say your authentications break. Let's say Google and Microsoft make a change on their side that they are more vocal about, one that warrants action on yours. Let's say you have a sales rep that has gone rogue and is sending a bunch of BS. Luella is able to use those placement tests, and a bunch of other metrics as well, to diagnose your email infrastructure and surface alerts. We're also simulating natural mailbox activity. When you're sending a high volume of cold emails, you're going to have a very low response rate, and Google and Microsoft don't like that; it's not natural behavior. So Luella will send exactly the number of simulated conversations needed to balance your response rates, just to not trigger a red flag in the eyes of Google and Microsoft.
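
The "exactly the number needed" part is, at heart, simple arithmetic. Here is a back-of-envelope version of that calculation; the target rate and the assumption that every simulated exchange gets a reply are illustrative choices, not Luella's actual numbers or method.

```python
import math

def simulated_replies_needed(cold_sends: int, real_replies: int,
                             target_rate: float = 0.10) -> int:
    """How many seeded, always-answered exchanges would lift a mailbox's blended
    reply rate to target_rate. Assumes each simulated send receives one reply."""
    if cold_sends <= 0:
        return 0
    needed = (target_rate * cold_sends - real_replies) / (1 - target_rate)
    return max(0, math.ceil(needed))

# Example: 200 cold sends with 4 real replies (2%) would need about 18 seeded
# exchanges to bring the blended reply rate up to roughly 10%.
print(simulated_replies_needed(200, 4))  # -> 18
```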

Speaker 1:

Oh, I see what you're saying, yeah. It's interesting, because as we're talking about this, I'm reminded of Chris Arundale. He's in the marketing ops community; I've worked with him as a client of his and also had him come in and speak about deliverability. I'm now really remembering that conversation: he made a distinction between deliverability and delivered, right, which is, did it make it to the inbox, which to me is probably more important. Because deliverability by itself is kind of not really meaningful, though I think the reputation matters, which is what I think people equate with deliverability. But on the inbox side, there are so many ways you could have a good, say, sender score reputation, but things are not actually making it to inboxes, and unless the people you're sending to go, hey, why didn't I get that, like they were expecting something and didn't get it, that's almost the only way you know.

Speaker 2:

Yeah, and response rates as well. You run those kinds of A/B tests too. But yeah, it is a bit of a black box without those placement tests, because you don't have that visibility elsewhere.

Speaker 1:

Well, I think it's interesting. So, if I understand it right, not only are you doing the inbox placement tests and monitoring where an email landed to verify it made it to the inbox, you're also, in some cases, through automation, having the bot, or however you'd call it, actually take some actions on that email that send signals back to the email service providers that, yes, this is legitimate, improving your sender reputation, right?

Speaker 2:

Yeah. We're simulating natural email exchanges and also browser activity, just to make sure we're not triggering red flags with email service providers.

Speaker 1:

Okay, that's interesting, right. Do you think there's any risk of that being identified, you know, the email service providers or the big platforms going, oh, now we're seeing this happening, and then adjusting their algorithms, et cetera, for how they manage that? Or are you working with them? I'm just curious now.

Speaker 2:

We've had many conversations with current and former engineers at

Speaker 2:

Google and Microsoft specifically responsible for Gmail and Outlook spam, and they provided us a lot of inspiration, especially for the guardrails that we've built. It's really a necessary evil because of how aggressive Google and Microsoft have become with their spam filters. Even trusted senders, even the biggest enterprises in the world, are seeing more of their sales teams' emails land in spam. So, just to make sure more of your trusted emails actually get delivered, this is something that we do need to do, and there are several things that we are doing differently compared to our incumbents here. So most players in the space will simulate conversations by sending random templated messages that have nothing to do with your unique customer or buyer's profile.

Speaker 2:

We're actually ingesting that context yeah.

Speaker 1:

Okay, that makes sense. Okay, that's interesting. So one of the things I haven't heard people talk about, although I guess I've experienced it when I've used, say, ChatGPT on personal stuff, is this: I've learned two things. One, I give really elaborate prompts in a lot of cases, and I can think of several cases where the prompt is much more elaborate than the actual output because I wanted something concise. But when I've had something where it's like, oh, I need to add this next thing, through that interaction it feels like it gets smarter, or at least the context continues to improve. So how is that built into this too? Because I can imagine, kind of like that point, right, Microsoft and Google start going, we recognize you're doing this now. But even without that, how does it get better at formulating, or just monitoring, and things like that? Is there a built-in sort of improvement, or how does that work?

Speaker 2:

So, Michael, do you know what reinforcement learning is?

Speaker 1:

I could guess, but no.

Speaker 2:

So reinforcement learning is a branch of artificial intelligence that allows these models to learn and improve over time. LLMs are just one branch of AI; LLMs are really just glorified copywriting machines. Reinforcement learning is the optimization layer. That is what allows us to look into the past, understand what has worked and what hasn't, in order to better predict the future. And have you ever run Facebook ads before, Michael?

Speaker 1:

So it's funny, because I've been involved with that, but not hands-on, no.

Speaker 2:

So I've done it, yeah. We built Luella to operate in a very similar way to how Facebook does their ad targeting, so we've built a lot of the same algorithms. The way Facebook ads work is you give them 10 different ads. They'll test every single one of those ads against a small subset of your audience and will automatically place more weight on the version that is performing the best, the version that's getting the likes and clicks and people buying your product. Luella operates the same way, right? Luella will take, say, 10 different message variations that you provide her, test each and every one against a smaller subset of your contacts, and automatically place more weight on the version that is providing value to your audience and resonating the best. And over time, Luella will suggest new message variations, with new hooks and offers and lead magnets and CTAs, that are likely to outperform what you have tested in the past, especially once she has more of that data on what has historically worked and what hasn't.

Speaker 2:

We're using reinforcement learning throughout the platform: identifying who to reach out to and when, what we should be reaching out to that prospect with, and also the volume we should be sending out every single day. It's really the messaging that is going to be most important, even from a deliverability perspective. If you're blasting a bunch of AI-generated bullshit to your audience, you are going to land in spam folders, because if you're not providing value, you are a spammer and you deserve to be in the spam folder, right? So reinforcement learning is what we're using to do a better job of making sure your messaging is resonating and providing value over time, instead of just using LLMs independently.
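
The weighting described here, test several variants on small slices and shift volume toward the winner, is classically implemented as a multi-armed bandit. Below is one common approach, Thompson sampling, as an illustrative sketch; Luella's actual algorithms aren't public, so treat this as a stand-in rather than the real method.

```python
import random

def pick_variant(stats: dict[str, tuple[int, int]]) -> str:
    """Thompson sampling over message variants.
    stats maps variant name -> (replies, sends). Each variant's reply rate gets a
    Beta posterior; we draw one sample per variant and send the highest draw, so
    traffic drifts toward variants that are actually earning replies while the
    others still get occasional exploratory sends."""
    best, best_draw = None, -1.0
    for variant, (replies, sends) in stats.items():
        draw = random.betavariate(replies + 1, (sends - replies) + 1)
        if draw > best_draw:
            best, best_draw = variant, draw
    return best

# Example: after a small test, hook_b is winning and will be chosen most often.
stats = {"hook_a": (2, 40), "hook_b": (9, 40), "hook_c": (1, 40)}
print(pick_variant(stats))
```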

Speaker 1:

Interesting, yeah, okay, so we're kind of towards the end of our time here. I know you've got deliverability stuff and you've talked about some other things you're doing that are agent-based, that were maybe in the works still when we last talked. Any updates on those, and then any other last-minute things we haven't covered yet that you wanted to make sure we hit, and then we'll wrap up.

Speaker 2:

Yeah, we have two agents launched so far. The deliverability agent was the first one we went to market with; before you can send emails, you need to make sure those emails get delivered in the first place. We interviewed over 250 people in the space, enterprises, agencies, startups, just to validate that direction as well. We also have a prospecting agent, which is what allows you to scale your outreach volume safely, with those guardrails, without burning your TAM or your reputation.

Speaker 2:

The third agent we're still working on is the nurturing agent. This is the piece that will allow us to move away from static sequences and do a better job of providing value to the prospect over time. There's a lot more work we need to do to get that agent launched; we've really been going deep on the deliverability and prospecting side. But we've built the algorithms for it, we've built the workflows for it. The only missing piece right now is the more comprehensive data integration. There are, again, thousands of data sources on the internet, and we have over 150 of them currently on the roadmap. That is a larger priority for us, but those are the three agents that Luella will be orchestrating very soon.

Speaker 1:

Awesome. All right, Mustafa, I continue to be fascinated by all these things that are going on. I tell people all the time I was kind of a slow adopter, but I'm now fully on the bandwagon. This is not just a fundamental shift, it can be truly valuable if people use it in the right way. So I appreciate that. If folks want to learn more about you, what you're talking about or what you're doing, what's the best way for them to do that?

Speaker 2:

You can go to our website, luellaai. My calendar link is on the website itself, so feel free to book some time; always happy to geek out. And my name is Mustafa Saeed on LinkedIn as well. Feel free to connect.

Speaker 1:

Awesome. Again, thank you, Mustafa. Thanks to the support of our longtime listeners and new listeners; we are grateful for that. If you have suggestions or feedback for us, please reach out to Mike Rizzo, Naomi Liu or me. If you have ideas for topics or guests, or want to be a guest, also reach out to us. We'd be happy to talk to you about it. Until next time, bye, everybody.