Ops Cast
Ops Cast, by MarketingOps.com, is a podcast for Marketing Operations Pros, by Marketing Ops Pros. Hosted by Michael Hartmann, Mike Rizzo, and Naomi Liu.
How to Create a Culture of Experimentation with Brendan Burke
Unlock the secrets of a thriving startup culture with insights from Brendan Burke, Senior Marketing and Growth Manager at Baydin (Boomerang). Ever wondered how a company can successfully run 52 experiments in a year? Brendan reveals how Boomerang, a bootstrapped company since 2010, champions data-driven decisions and adaptability, creating an environment where ideas are tested quickly and inefficient ones are discarded without hesitation. His intriguing journey from politics to tech underscores the unexpected paths our careers can take, offering lessons in flexibility and growth.
Explore the art of managing experimentation in product development as we dissect Boomerang's 2024 initiative using a modified RICE framework. With guidance from Boomerang's CEO, we shine a light on the meticulous process of refining a long list of potential experiments into actionable strategies, focusing on reach, impact, effort, and reversibility. Discover the power of minimizing bias through structured frameworks and how iterative testing can enhance user engagement, all while keeping the experiment list fresh with new ideas and insights.
Finally, we dive into the world of marketing operations, discussing the critical role of specific metrics and the challenges of replicating results across different platforms like Gmail and Outlook. Learn about the surprising effectiveness of quick tests on seemingly minor details, and how unconventional strategies, like intentional misspellings in ads, can capture attention. We also highlight the importance of fostering a culture of experimentation that enhances collaboration, especially in remote work settings, where communication and team relationships thrive through continuous learning and adaptation.
Episode Brought to You By MO Pros
The #1 Community for Marketing Operations Professionals
Find your next great hire in the MarketingOps.com community. Reach specific members seeking new opportunities by region, years of experience, platform experience, and more.
Speaker 1:Hello everyone, and welcome to another episode of OpsCast, brought to you by MarketingOps.com, powered by all the MO Pros out there. I'm your host, Michael Hartmann, flying solo today. I'm sure Mike and Naomi will join again soon. Joining me today is Brendan Burke, and he and I are going to talk about creating a culture of experimentation. Brendan is currently Senior Marketing and Growth Manager at Baydin, slash, Boomerang. Maybe he can explain what the difference is there. Prior to joining Boomerang, Brendan did digital marketing and strategy in the world of politics, including working on campaigns. Brendan, thank you for joining us today.
Speaker 2:Thank you, Michael, it's my pleasure.
Speaker 1:Yeah, so I think I called it out there: Baydin, Boomerang. What's the difference there?
Speaker 2:Baydin is the legal name of the company. Boomerang is the product, so it's email productivity and calendar tools. Kind of old for a startup, founded in 2010, so it's been around a little while, but still going strong.
Speaker 1:Okay. Well, I did like the two-second version of your career. Anything else you want to fill in? And I think, if I remember right, unless something's changed since we last talked, you're the only marketing person at the company too. What's that like?
Speaker 2:I am, yeah. I guess a little bit of color on how I got started: it was kind of inauspicious. I was working for a member of Congress, and they were having trouble hiring a digital communications director, and someone brought up in one of the hiring meetings that I had a funny Snapchat, and that led to me getting the job. So I don't know what that says about technology and politics.
Speaker 1:So I don't know what that says about um, you know technology and politics, um, but uh, I, I don't even, I don't even know what snapchat is, my kids give me a hard time they're like oh, you're with the old person, you're still using facebook, dad, yeah, so it's a.
Speaker 2:It's a real career path um to you know, tools that that are uh just silly um can become career opportunities before you know it. Um, but uh, you know now I work uh in tech and I work specifically for a productivity company. At the same time that, um, the current congress is historically like unproductive. I've been reading stories about they've passed the fewest bills in 100 years or something. So I don't know. I'd like to say that they all should use Boomerang for email productivity, but I don't know that that would solve all the political problems in Washington with one app All right, all right, all right.
Speaker 1:I was a little bit worried we might have to wander into the politics realm, so I think I'm going to steer us away from that because I do not see any upside of that conversation. On this podcast Happy to chat with you offline, but maybe not for our audience. Sure, that's a separate podcast. Yeah, there's lots of podcasts that would. That would be a totally normal and expected topic. All right. So when we first talked, yeah, you did say that your, your company, has a culture of experimentation, which I thought was interesting, because I think a lot of companies say they do, but in my experience, most don't really, and in fact, the company's referring to 2024 as the year of experiment. So walk us through that and what it means and maybe a little bit of the background about how that came about.
Speaker 2:Yeah. Well, on the company having that culture: I'm coming up on three years now at Boomerang, but they have basically been bootstrapped over more than a decade, and so just making the most with what they had has really been the culture of the company.
Speaker 2:And that means investigating what is working and promoting that, and cutting what isn't working, in terms of tactics and things like that. So it's not that there was an experimentation framework at the company until this year, but it was definitely part of the culture that we were going to make data-driven decisions, make sure the results are measured, and go with what's working. But when this year came around, it was really just a conversation, and it was rolled out at our, we call it a Workaway. We're fully remote, so we were actually all together in person at the beginning of this year, and said, you know, we're going to set a goal of 26 experiments, and that was quickly doubled to 52.
Speaker 1:That's crazy. One a week.
Speaker 2:Yeah, and we can get into that pace, but that's kind of the beginning of it. It was just, you know, this is something that we tried to do on more of an ad hoc basis. We know it's been part of our success. What if we lean into that, set an outcome goal of 26 of these, and see where that leads us?
Speaker 1:So, a couple of things: one, a comment, and then a follow-up question. One, as you mentioned early on, what you were describing is that the company is willing to cut things that aren't working, or stop things that are not working. And my commentary there is that I think that's actually highly unusual, even though I don't think it should be, right? I've seen many places where you get into this sunk-cost fallacy: we sunk X amount of money and time and effort into something, so just a little bit more and I'm sure it'll work, when in fact it's just throwing good money after bad. So I applaud that. And just to be clear: this is a company-wide thing, not just a marketing thing or a sales thing or a development thing or a product thing, right? Is it the full company?
Speaker 2:That's right. It's definitely cross-functional. There ended up being sort of a core group of us who are working on it regularly, but it's touching many parts of the company. It's not just within marketing, it's not just within product, and it's definitely a collaborative effort involving people across the company, from all those departments.
Speaker 1:Gotcha. So how many people are with the company right now?
Speaker 2:Across the whole company, there's 18 of us right now.
Speaker 1:Yeah, but there's a subset of that, whatever. So I'm going to guess how many are on that core team.
Speaker 2:I'd say there's six or seven of us. It's mostly marketing, product, design, the leadership of the developers, and the co-founders.
Speaker 1:Okay, well, I'm going to not count the co-founders, right? So that's not a huge number, but given your overall size, it's a pretty significant investment to have that many people. One could argue that it's extra work that doesn't align with your core goals as a marketer or product person or whatever. So I think that's really interesting. So what kind of impacts have you seen from the experiments that you've done? You can keep it to marketing if you want, or overall, or both.
Speaker 2:Well, I think our win rate, in terms of experiments that succeed, is slightly over 50 percent.
Speaker 1:So that's win rate. Win rate meaning we set a goal and we achieved the goal? Or what do you mean by win rate?
Speaker 2:I guess, like, A/B tests: if the new variant that we're trying to introduce is beating the status quo, the control variant, making an improvement on what existed before. That's just over 50 percent. And then we did a sort of back-of-the-envelope tabulation earlier this year, and it was about [inaudible],000 in new ARR on top of our roughly 8 million ARR. So not insignificant in terms of growth on top of the company's existing revenue.
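A note for readers on what "the new variant is beating the control" typically means in practice: the lift should clear a statistical threshold, not just be numerically higher. Here is a minimal two-proportion z-test sketch of that check. The numbers and the 95% threshold are illustrative assumptions, not Boomerang's actual tooling.

```python
from math import sqrt

def variant_beats_control(conv_a: int, n_a: int, conv_b: int, n_b: int,
                          z_crit: float = 1.96) -> bool:
    """Two-proportion z-test: does variant B convert better than control A?

    conv_*: conversions observed; n_*: users exposed to each arm.
    z_crit=1.96 is roughly a 95% confidence threshold.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    return z > z_crit  # True only if B's lift is statistically significant

# Hypothetical numbers: control converts 400/10,000, variant 460/10,000.
print(variant_beats_control(400, 10_000, 460, 10_000))  # True (z is about 2.1)
```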
Speaker 1:How did you, did you just project what the baseline, the incumbent model, would do in terms of ARR, and then make a new projection based on the changes? Is that how you came up with the incremental ARR number?
Speaker 2:Yeah. And for some of these, that's more concrete than others. Like I said, it's across product, marketing, whatever, and so some of these are literally tweaks to our purchase flow. Others are much more top of funnel, like changing a landing page or a landing page template.
Speaker 2:So the proximity to revenue varies, but that's the top-line, or bottom-line, impact that we arrived at. And I should say it's a product-led company. That's how they grew the company. Now it's becoming more of a mix with B2B, but historically the user base and the customer base have been sort of prosumer customers. So that product-led motion is, I think, really ripe for this type of experimentation.
Speaker 1:Sure, sure. I hadn't heard that term, prosumer, but it's a good one. I like that. Is it a common one? Am I just out of it, like I am with TikTok and, you know, whatever?
Speaker 2:I don't think it's like TikTok. I don't know, there's probably a corner of TikTok dedicated to prosumer. I'm not on TikTok. But it's people who are purchasing for themselves, and it's not a hobby. It's people who may be independent contractors or freelancers, or they may work for a company, but they're buying it for themselves, for their own use, mostly for their professional life.
Speaker 1:Got it, okay. So I'm curious. I know you said this got kicked off at the beginning of '24 at, call it your all-hands kind of meeting. Was there something that was the catalyst, something that prompted this? And then, how did you go about even coming up with the initial experiments? Maybe you didn't start with 26, right, but where did the experiments come from?
Speaker 2:Yeah, I think we actually started with a whole lot more. I think we started with a list of over 90 ideas, and the way that was generated was one of the most collaborative parts of it: an all-hands brainstorm at our all-hands meeting. When they announced that this was going to be part of our focus for 2024, everyone was encouraged at the outset to contribute ideas for things they saw that could be improved, that could be tested, questions that they had. And that led to this huge list of over 90 experiments, or places that we could look at to make an improvement.
Speaker 1:Sorry to interrupt again, but so, 90. Was that after, I assume, some people had similar ideas or duplicates? So it's 90 unique? Okay.
Speaker 2:Yeah. So then, after we returned home from that get-together, a smaller group consolidated the list and then, from there, prioritized it, and our CEO was the decision maker on which ones we were going to select for execution.
Speaker 1:Right. So did you have some sort of model? What was the thought process, and how did you prioritize those ideas? Because 90 is a lot, even after consolidating. Let's say it's half that, 45 is a lot.
Speaker 2:Yes. We used the RICE framework, which I think comes from product management, which is, oh, let me see if I can remember them all now. It's an acronym. Reversibility... I know C is confidence, E is effort. The I is escaping me.
Speaker 1:I'm looking it up right now.
Speaker 1:So let's see: R is reach, impact, confidence, effort. Okay.

Speaker 2:Oh, okay, yeah, we add an R there for reversibility, because it matters whether it's a permanent change or something that, if it fails, can actually be reversed. So we use R-I-C-E-R.
Speaker 1:Yeah: reach, impact, effort, reversibility. RICER. This name does make me think of mashed potatoes, sorry.

Speaker 2:We actually omitted the C there, though, because normally you're making a guess on confidence.
Speaker 2:But in this case, the point of the experiment was to get that confidence, really just to get more certainty. I guess C could be for certainty, but it's not something that we used as part of the framework. And so each one was scored, and that led to the list that we started with.
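For illustration, the classic RICE score is reach times impact times confidence, divided by effort. Below is a sketch of the RICER variant described here, with confidence dropped and reversibility folded in. The exact weighting and the sample values are assumptions for demonstration, not Boomerang's actual scoresheet.

```python
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    name: str
    reach: int            # e.g., users affected per month
    impact: float         # e.g., 0.25 = minimal ... 3 = massive
    effort: float         # person-weeks to build and run
    reversibility: float  # 0..1: how cheaply the change can be undone

    def score(self) -> float:
        # Classic RICE divides (reach * impact * confidence) by effort;
        # here confidence is omitted (the experiment supplies it) and
        # reversibility is multiplied in, per the RICER variant above.
        return self.reach * self.impact * self.reversibility / self.effort

ideas = [
    ExperimentIdea("Trial-end email rewrite", 5000, 2.0, 1.0, 1.0),
    ExperimentIdea("Purchase-flow redesign", 2000, 3.0, 4.0, 0.5),
]
for idea in sorted(ideas, key=lambda i: i.score(), reverse=True):
    print(f"{idea.score():>8.1f}  {idea.name}")
```

Whatever the exact weights, the point is that every idea gets scored the same way before anyone argues for a favorite.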
Speaker 1:Gotcha. Yeah, I feel like I've heard of this RICE model before. I've used similar things. It's interesting. It's too bad Mike's not here, because he would probably jump right in and say this is another example of a product management framework that works well for marketing ops folks. He's a big believer that marketing ops is more aligned with product management than I tend to think. Although I'm coming around, I'm coming around.
Speaker 2:Big of you to admit that, even when he's not here to stand up for it.
Speaker 1:That's all right. I'm totally comfortable admitting when I'm wrong, because it happens too often not to be. So I do think the C one, if it's confidence in the ability to actually deliver, to execute on whatever the experiment or the idea is, to me that's an important one. But if you didn't use it in yours, I think that's fine too. I think the key idea is that you've got something you're using that takes some of the bias out of the prioritization, that's more than one person's opinion, which is often how these things happen. And sometimes you even have a framework, and then the person with the highest title gets to decide anyway, so it's like, why'd you do it? Mostly bigger companies, I think. But yeah.
Speaker 2:That's what I think is really helpful about this process, and why I think it can be really beneficial to run things this way, even if you're not going to do a year of experiments, because it does take that bias out of which direction you're going to go in, and the best idea wins.

Speaker 2:And with ideas that aren't working, you don't need to argue about whether something is going to be effective or not, because you can just measure it. And so, as long as people are willing to accept what the results are, it can take a lot of the politics, or whatever, out of the equation.
Speaker 1:Yeah, absolutely. So I'm just curious. I'm assuming that that 90 or whatever got whittled down at the beginning, and there have been additional ones added over time. So are you doing a regular review, applying that RICE model to the full list as it stands, after you've tried some so they're no longer on the list, or added ones that weren't originally on the list?
Speaker 2:Yeah, we did a refresh. There's really just been one refresh, as far as a comprehensive look at what we've done. That was around August. Because you're right, we definitely come up with more ideas as we've done them and as we get results. We've learned it's really helpful to kind of keep pushing in the same area. For example, when new users sign up, they're on a trial of our product.
Speaker 1:Sure.
Speaker 2:And we found some success changing our messaging and our communications at the end of the 30-day trial. I think we did three or four iterations, and they kept being successful. So it kind of revealed for us that that was a higher-leverage moment in the user journey, and we just wanted to keep iterating on it, a little bit better and better. And I think that's kind of been key: as you do these and you get the results, sometimes the thing that you're testing, the area of the website visitor journey or user journey that you're focusing on, is just a high-leverage spot, and so it's worth it to keep iterating in the same area, even after you get a win or two. It can really make a big impact if you keep going.
Speaker 1:Sure, you get a little bit of a multiplier effect. So, I mean, doing one of these every two weeks would, I think, be massive for most of the people listening to this, and you're now at a pace of one a week. How are you managing that whole effort, and how are you able to move that quickly?
Speaker 2:We were really fortunate that this came from the top down, because it meant that everything was resourced. We have developers who are able to work on these for us, to build these for us. It's prioritized in our company. And that's key for anyone who wants to do something like this: get that buy-in from the top, to make sure there are going to be resources for it.
Speaker 2:I think one thing that quickly became apparent, though, is that you can only run one test at a time on one thing. If you have a landing page and you say, oh, I want to convert more people on this landing page, well, you can probably only run one A/B test on it at a time, and that might take a couple of weeks. So if you want to do something in those two or three weeks, you can look at other areas, other parts of the surface area, to run a different test on. The way we're able to do so many over the course of a year is that there are a lot of different places, from lifecycle email to the website, landing pages, in-product user activation stuff. There are a lot of different places you can be running these, sometimes simultaneously.
Speaker 2:And so once you get started, there's really no end to what you can do.
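One practical detail behind running tests on several surfaces at once: each user needs a stable assignment per experiment, so the same person always sees the same variant. A common generic approach, not necessarily how Boomerang implements it, is deterministic bucketing that hashes the user ID together with an experiment key:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Deterministically bucket a user: same user + experiment -> same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Keying the hash on the experiment name keeps assignments independent
# across tests; running one test per surface keeps results from overlapping.
print(assign_variant("user-123", "landing_page_hero_copy"))
```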
Speaker 1:Sure. Are you just tracking all this stuff in a spreadsheet, or how does that work?
Speaker 2:Yeah. So, tactically, it started with a spreadsheet. We moved to Trello, so we have a kind of Kanban board.
Speaker 2:We find the Kanban most helpful because each of our columns is a status, so you can see what's active, what's being built. And then we also have a couple of automations, so if we know it's probably going to take three weeks to get enough data for one of them, the card will automatically move into a column that tells us it's time to decide. So, yeah, each experiment, each A/B test, is its own card in Trello, and that way we can comment on it in that one place. We use other tools to actually do the work. If there's development work required, that's going to go into our ticketing system. But as far as managing the experiments themselves, it's all in Trello.
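Trello's built-in automation can handle the "time to decide" move without any code, but the same idea is easy to sketch against Trello's public REST API. The list IDs, credentials, and the convention of setting each card's due date to the experiment's expected end are all hypothetical here:

```python
import requests
from datetime import datetime, timezone

AUTH = {"key": "your-api-key", "token": "your-api-token"}  # hypothetical
RUNNING_LIST = "<running-list-id>"   # "Active" column
DECIDE_LIST = "<decide-list-id>"     # "Time to decide" column

# Fetch every card in the "Active" column.
cards = requests.get(
    f"https://api.trello.com/1/lists/{RUNNING_LIST}/cards", params=AUTH
).json()

for card in cards:
    # Assumed convention: the card's due date was set at launch to the
    # point where enough data should exist (e.g., launch + 3 weeks).
    due = card.get("due")
    if due and datetime.fromisoformat(due.replace("Z", "+00:00")) < datetime.now(timezone.utc):
        # Move the card so the owner knows it's time to call the result.
        requests.put(f"https://api.trello.com/1/cards/{card['id']}",
                     params={**AUTH, "idList": DECIDE_LIST})
```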
Speaker 1:I'm only sort of serious. I'm not a huge fan of Trello. I know plenty of people like it, so it's hard for me to bite my tongue sometimes.

Speaker 2:There are worse tools. Basecamp comes to mind, but I haven't had the pleasure.
Speaker 1:Well, you're not missing much, Okay, so. So I think we actually have talked to somebody about this. We've had a couple people on recently who, I would say, came out of a science background, right so familiar with the scientific method, which should be of a science background, right so familiar with the scientific method, which should be a highly structured approach, right? You develop a hypothesis and then you develop a test, you run the test, evaluate the results. Does your hypothesis hold? So? And for anyone who's a scientist out there, if I butchered that a little bit or missed it by a little bit, please don't come get me. I think I'm close enough. But do you have a structured process like that you mentioned? Some of these happen simultaneously or overlap, I assume. What does that process look like for any given experiment, so that you can, like I said, ultimately you want to know at the end did it work or not and what was the impact, Right?
Speaker 2:Yeah, I hope your brief outline there was accurate, because that's basically the framework that we use: start with a hypothesis. And this is a lesson that we learned: it has to be one sentence long. It has to be really clear what change you're making and what you hope it will do. So we do have that at the top of each Trello card, the hypothesis for the experiment.
Speaker 1:Okay, I like that.
Speaker 2:Yeah. Another lesson that we learned is that you have to identify which number you're going to use to evaluate success, and whether it should go up or down. Because it's surprising how weeks go by and then you're like, wait a second, what exactly was this intended to do?
Speaker 2:Right, because you might be able to see other effects, but you need to be able to identify what success looks like, and whether it should make the number go up or go down. So all of our experiments have that: the hypothesis, the success criteria, and the instrumentation for how we'll actually measure the results. The instrumentation is another area that we've had to improve on, because there was some communication breakdown. Basically, we've been good at saying, okay, this screen looks like this, and we want to make it look like this other thing as part of this test. That part of the communication has been very straightforward. The part that we had to improve a bit when we got started was: how are we going to measure whether the first screen or the second screen converts more, or whatever result we're looking for?
Speaker 1:Sorry, I know I've got this weird look on my face. Do you mean, like, the mechanism by which you'll measure it?
Speaker 2:Exactly, the mechanism. Because we don't have an optimization platform. We're not using a third party to do this. Our developers are setting it up, and in many cases it's Google Analytics events. And so, as they're building it, being very specific: what are you going to name it? What is the action that you're measuring?
Speaker 2:And where can I find it?

Speaker 1:Yeah, okay. Okay, that makes sense.
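Since the instrumentation is mostly developer-defined Google Analytics events, the "what's it named and where do I find it" handoff can be made concrete with GA4's Measurement Protocol. The event name, IDs, and parameters below are made up for illustration:

```python
import requests

MEASUREMENT_ID = "G-XXXXXXX"    # hypothetical GA4 property
API_SECRET = "your-api-secret"  # hypothetical

def track(client_id: str, event_name: str, params: dict) -> None:
    """Send a named event to GA4 via the Measurement Protocol.

    Agreeing on event_name up front, and writing it on the experiment's
    card, is the instrumentation handoff described above.
    """
    requests.post(
        "https://www.google-analytics.com/mp/collect",
        params={"measurement_id": MEASUREMENT_ID, "api_secret": API_SECRET},
        json={"client_id": client_id,
              "events": [{"name": event_name, "params": params}]},
        timeout=5,
    )

# Both arms of a test emit the same event, with the variant as a parameter,
# so the analysis can segment on it later.
track("555.123", "purchase_flow_continue", {"variant": "treatment"})
```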
Speaker 2:No, that's a fair question. Because at the beginning of this year, if someone had told me, you have to run a year of experiments, my first question would probably have been, okay, what platform are we going to use? And we've done it without that. So I don't know, maybe we could do 104 with a platform or something.
Speaker 2:So instrumentation is a key part that we've learned needs a lot of focus and very explicit communication as an experiment moves from team to team. And then also guardrails, um, so there's the number that we want to see go up or down as an indicator of success. But you know, it's like, um, it's like sending an email. Yeah, you want clicks on that email, but you might also want to keep an eye on people who unsubscribe um, to see if the net effect is more harmful than is actually, uh, going to be helpful, right? So, um, for each one, just as a thought exercise, like, what is the guardrail that we want to prevent? Um, I think, uh, you know, email unsubscribed is is kind of the clearest thing there.
Speaker 1:Well, I can imagine, though: let's say you've defined the metric you want to increase, but you run the experiment and it starts to decrease. I assume you also have a floor, right? This is where the reversibility comes in. We're going to undo that experiment before it has a truly negative impact on the business. Is that what you're describing? Okay.
Speaker 2:Yeah. Even if you estimate it's going to take four weeks to get enough data to know whether we succeeded, within five days or so of launching we want to check on the results and make sure there's nothing totally crazy happening, so that we don't wait four weeks to find out that we got a bad result.
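That early sanity check can be as simple as comparing the live variant against a loose floor a few days in. The thresholds here are illustrative, not Boomerang's actual rule:

```python
def early_check(control_rate: float, variant_rate: float,
                floor_ratio: float = 0.8) -> str:
    """Day-five sanity check: kill a test early only on a large regression.

    Samples are still small this early, so the floor is deliberately loose:
    the variant is allowed to trail the control by up to 20%.
    """
    if variant_rate < control_rate * floor_ratio:
        return "revert"        # reversibility: roll back before real damage
    return "keep running"      # otherwise wait out the full data window

print(early_check(control_rate=0.040, variant_rate=0.029))  # -> revert
```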
Speaker 1:That's really interesting. Anything else that's part of that sort of structured process that's important?
Speaker 2:Those are the key parts. So there's one person who owns each experiment. For example, if it's in the marketing domain, that would typically fall to me, and I'm going to write all this up and then send it to our CEO, who's the decision maker, to make sure that I've covered all my bases. Then it will move on to a designer, if necessary, for the design, and then on to our developers, and then we will line it up to be launched.
Speaker 1:What was popping into my head a little bit is uh this came up recently and with people I've been talking to because it's that time of year right, goal setting is the idea of if you're her, smart goal setting, smart right and the, the. What you described is having like the one sentence description of what you're doing with the desired outcome and then the measurement or metrics behind it. Right, those, basically, are the first two letters in the smart rate, specific and measurable. Um, and and I think a lot of people miss that in how they do goal setting and probably would miss it in the context of doing experiments too yeah, yeah, yeah, the measurement I think can be intimidating, um, but it it's really, it's really key and say we want to do an A-B test and when they describe it I'm like that's not really an A-B test, it's like two completely
Speaker 1:different scenarios, right, because you're not really testing one small, I wouldn't even call it multivariate. Right, it's like we're going to do two completely different landing pages and see which one quote wins, but then they don't really do the test of the wins, they run the whole campaign. They run the whole campaign and then they may or may not take the insights from that campaign and apply it to a future campaign, which is not really an A-B test. I'm not saying you can't learn from it, but it's not really what an A-B test is.
Speaker 2:Yeah. I think this is how I end up aligning more with marketing operations and not something else within marketing. I can get energized by the creative process, but at the end of the day, I'm going to be disappointed if I don't know the impact or the results of the thing that I did, and I think that's one thing that sets marketing operations apart. But the repeatability that you mentioned is another thing that I think is tricky. Case in point: we actually have sort of two products, Boomerang for Gmail and Boomerang for Outlook, that people can use with either of those email accounts. And one of the surprising things is that we'll take a result from our Gmail user base and try to do the same thing with our Outlook product, and the results are not repeatable, to our surprise and dismay.
Speaker 1:Um to our to our surprise and dismay um, and it's I was cinder smiling as you were starting because I was like I'm pretty sure I know where this is going. Uh, yeah, because I've. I've worked working right now where I've got on one side a client or an employer that I've worked for that had outlook, but I use gmail as on personal stuff and they I mean they operate differently enough where, like I can imagine that how you, you would use a product like yours would also vary.
Speaker 2:Yeah, the way the products work, the people who use them. I wish I knew all the reasons for the differences.
Speaker 1:No, but I think this is interesting. My first exposure to being able to do something like this was when I was leading the charge on paid search marketing for a big company. This was back in the early days, when paid search was still relatively new. But I remember, and I still see this, people agonizing over the copy you put into a paid search ad: what's your headline going to be, what words are you going to use. And I remember one time, I can't remember the specific scenario, but it was for something really specific. It was for semiconductors, so not a consumer product by any means, although we did have a challenge with audio semiconductors being confused with search terms for people who are audiophiles. But that's a whole other story. One time my friend was like, look, we're debating one word in this line of copy, and the relative cost for us to just try both and see which one works is tiny, because we think they're synonyms. Everybody had an opinion, I had an opinion, about which one would perform better. Once we did it, I was wrong. And it was the first time I realized, oh, we don't need to agonize over this if we've got something that we can move quickly on.
Speaker 1:And also, I may have my opinion about what's going to work, but I need to be ready to admit when I'm wrong. Because at the end of the day, if I let my ego get tied up in whether or not my choice of, in this case, copy was right or better or whatever, and it's not, I'm going to go down a path where I suboptimize what we get to. So it was really important for me to get that, and I loved the idea that we could get that quick feedback. So, are you running into surprises too? I mean, you mentioned that one. Are there other examples of things where you did an experiment, you thought something was going to work or not work, and then you saw the opposite, or something totally different?
Speaker 2:Yes, definitely. And that's a great case in point for myself as I've gone on in my career. It's not that I'm indifferent or don't care anymore about the choices that are made, but I know that I might be wrong. I've had that happen to me enough that I don't want to get too wrapped up in thinking that one way or another is the right answer, even if I do have an opinion, because I just want to get it out into the world and see what happens. So, yeah, another example... I want to be glib and say another example of where the user was wrong.
Speaker 2:Uh, right, those pesky customers right uh, but no, the customer is always right, as the data informed us. Um, we tried to redesign some pages in our purchase flow that used an older, we would say outdated, form of our blend, our branding, and we could not, um for the life of us, improve the conversion rate on in some steps of the purchase flow. Basically, that we, you know, subjectively thought was um going to be an improvement uh, on the design, and I think, objectively, like, following design principles, actually like would have considered it uh a more intuitive, you know, higher converting um example of a of like a purchase page, um design, uh, but the numbers, you know I guess the numbers didn't lie Um, and so there's some, there's some compromises that you have to make there.
Speaker 2:But yeah, I think those are some of the most interesting examples, and really the ones that reinforce why it's important, or why it's helpful, to test everything, as time and resources allow.
Speaker 1:Yeah. I think one of the things I often try to tell leaders, especially at bigger organizations where I've led marketing ops, is this. We're there supporting all the go-to-market activity, and some of that includes content, whether it's emails, landing pages, web content, et cetera, and you get this never-ending, or really extended, review and approval process, right? What I have tried to do is educate the rest of those teams. Not, to your point, that I'm discounting the importance of caring about the work. But I tell them: you need to think about, if something is either wrong or not right, suboptimal, really, since I don't even know that there's a right or wrong, there are always trade-offs in these things, consider how easy it is to change it or fix it, should you realize that something has gone wrong.
Speaker 1:and if you think about it that way.
Speaker 1:Yeah, you want to invest a little more time and effort on uh email, because once it's gone, it's gone right especially if it's an email that has important dates or times, like invitations to a webinar or a you know event, that you're having something like that, um, and know that you're going to miss stuff, because it always happens. No, but if you're doing like we're putting up a landing page, right, well, you can fix a landing page like that, right, it's seconds, theoretically, so should you just publish whatever, right? No, but I'm like the level of review and approval should be aligned with the risk associated with something going out again. Now, if you have again, if you have content that's wrong or misleading or whatever, you should fix it and that needs to be fixed. And if you have to do some sort of communication, um, then do it. But otherwise, like, don't get so caught up in this and that's what I'm like.
Speaker 1:The other one that interests me, again going back to search marketing: it's been amazing to me how many times people get caught up in the grammatically correct copy for ads, especially in search ads, when what you find is that misspellings sometimes do better, because people don't know how to spell, or they're typing fast and they misspell. It's astounding to me. People don't really read this stuff, so I'm like, try misspelling. Intentionally put a misspelling in there and see what happens.
Speaker 2:Yeah. I mean, you mentioned email as an area where you want to get more right, and I don't disagree, but some of the best emails in terms of open rate have subject lines like, oops, you know, we sent that by mistake.
Speaker 1:Right, I suspect some of that is, well, it's why traffic always slows at an accident, right? People want to see the train wreck. Yeah. Human psychology is a whole other thing.
Speaker 2:Not something you can repeat every week. Fist bump.
Speaker 1:No, right. Okay, this has been fun. I want to hit a couple of other things real quick before we have to wrap up. The one in particular: you mentioned that this is a cross-functional thing and there are other teams. Do you feel like, and I know you're a small organization anyway, but do you feel like this culture of experimentation, this year of experimentation, has fostered more and better ability to collaborate across those functional, what are often, divides? I don't want to use the word, but that's what it is, I think, for a lot of people.
Speaker 2:Yeah, I think it's fair to say divide, especially because we are remote, and I would 100 percent agree with that. In the past, for me to be working with developers, like if we had a new feature or a feature update, those would be much more effort-intensive projects, on the scale of weeks and months, not days. And so being able to increase that cadence of collaboration through these experiments, small changes like tweaking landing pages, things like that, has definitely allowed for more collaboration, just better working relationships.
Speaker 1:Improved communication between these teams, you think?

Speaker 2:Yeah, knowing what's important to communicate cross-functionally. Just knowing what things they need to do their work, and letting them know what things I need to do my work, those kinds of process things. It provides opportunity for a lot more iteration when you're doing a lot of small changes, rather than a few big changes each year.
Speaker 1:Right. And I would expect, in your case, that otherwise you'd be sort of at the receiving end of, oh, we're launching this significant change to our product line, go start to market it, right? And you're playing catch-up, as opposed to being a part of that process. Okay, so one last question, and then we'll have to wrap up. What's the most painful lesson you've learned from this? It sounds like a lot of positives, but what's the biggest challenge you've run into as part of this?
Speaker 2:I do think it's that example about a design that everyone in the room agrees is, quote unquote, better than the thing you're trying to improve upon, but that does not convert as many customers. It's interesting to observe the strategic decision that has to be made there, about whether you're going to go with something that might seem more aesthetically pleasing but that, however small the difference is, could hurt the business. And, you know, this company's three co-founders bootstrapped for over a decade, now profitably.
Speaker 2:I think their success is enough evidence that you've got to be able to make those decisions. Back to one of our earlier points about letting things go: a rebrand might look nicer, but if it's going to harm the business, is that really the right thing to do? That's sort of been a learning experience for me, coming from the marketing side of things, where you want every I to be dotted and every T to be crossed, and everything picture-perfect. It's not easy.
Speaker 1:I keep I come back to this all the time is that the world is full of trade offs, right, the business world is no different, and I mean essentially what you described as a way of sort of forcing that tradeoff discussion to happen in a positive way. So I love it. I wish more teams did stuff like this. So, brendan, thank you for sharing what you've done, what you learned. I'm sure that our listeners have probably taken away some ideas that they can take and try to bring back to their organizations. If folks want to, you know, learn more about what you're doing or reach out to you or whatever. What's the best way for them to do that?
Speaker 2:you can find me on LinkedIn. It's just searching Brendan Burke, boomerang and I do have shareable versions of our templates. If anyone's interested in in trying to see some of this for themselves, you can email me. Brendan B at Badencom. B R E N D I N B atB. A Y D I Ncom.
Speaker 1:Got it Perfect. We didn't even get a chance to talk about the templates, so I knew we could have talked for longer. Well, again, Brandon, thank you so much. Thanks for for joining us. Thank you to all of our listeners out there for continuing to support us, as always. If you have ideas or feedback on topics or guests, or want to be a guest, feel free to reach out to Naomi, Mike or me, and we would be happy to talk to you about that. Until next time, bye, everybody.