Ops Cast
Ops Cast, by MarketingOps.com, is a podcast for Marketing Operations Pros by Marketing Ops Pros. Hosted by Michael Hartmann, Mike Rizzo & Naomi Liu
From Tasks to Transformation: Scaling AI Adoption in Marketing with Spencer Tahil
In this episode of OpsCast, hosted by Michael Hartmann and powered by MarketingOps.com, we are joined by Spencer Tahil, Founder and Chief Growth Officer at Growth Alliance. Spencer helps organizations design AI and automation workflows that enhance go-to-market efficiency, streamline revenue operations, and strengthen marketing performance.
The discussion focuses on how to move from experimentation to execution with AI. Spencer shares his systems-driven approach to identifying automation opportunities, prioritizing high-impact workflows, and building sustainable frameworks that improve strategic thinking rather than replace it.
In this episode, you will learn:
- How to identify and prioritize tasks for automation using a value versus frequency model
- The biggest mistakes teams make when integrating AI into their workflows
- How AI can strengthen strategic decision-making instead of replacing people
- Practical prompting frameworks for achieving accurate and useful results
This episode is ideal for marketing operations, RevOps, and growth professionals who want to turn AI experimentation into measurable, scalable execution.
Episode Brought to You By MO Pros
The #1 Community for Marketing Operations Professionals
Ops Cast is brought to you in partnership with Emmie Co, an incredible group of consultants leading the top brands in all things Marketing Operations. Check them out at Emmieco.com
Michael Hartmann: Hello everyone, welcome to another episode of Ops Cast, brought to you by MarketingOps.com, powered by all the MO Pros out there. I'm your host, Michael Hartmann, flying solo this episode again. Joining me today to dig into how to move from experimentation to execution with AI is our guest, Spencer Tahill. I should have asked beforehand so you could correct me if I mispronounced your name. Spencer is the founder and chief growth officer at Growth Alliance, where he helps companies design AI and automation workflows that drive growth across go-to-market, revenue operations, and marketing. His systems-driven approach focuses on using AI as a practical tool to free up strategic thinking and improve operational performance. Spencer, welcome to the show. And remind me, where in the world are you right now? I forget.
Spencer Tahill: So I'm usually in Prague, but currently I'm in Genoa, in Italy. So yeah, that's a little bit of a change-up. It's a pleasure to be here.
Michael Hartmann: Well, good. Glad to have you. I knew it was Europe, I just couldn't remember where, and it sounds like I'd have been off if I'd had to guess. What I do know is that it's at least early afternoon there, if not early evening, right?
Spencer Tahill: It's just about to turn 6 p.m. All right.
Michael Hartmann: There you go, so you're seven hours ahead of me. Good to know. All right, let's start with a little bit about your background. How did you get into AI and automation, especially for the go-to-market and RevOps domain?
Spencer Tahill: It's kind of a long story made short. I was always told that if you're going to do something, do it right the first time, even if it takes you longer. When I started working in the HubSpot sphere, and I think there are probably a lot of people in the Salesforce sphere listening, I did a lot of freelance work in the HubSpot world. It was setting up Marketing Hub, Service Hub, Sales Hub, all of these different modules, as well as connecting everything through everybody's favorite marketing operations platform, Zapier. At the time, that was kind of the leading edge of the market. As my freelance business evolved, I was able to step away from a full-time engagement where I was a marketing manager at a startup here in Europe, and I dove into full-time freelance work for a lot of American Fortune 500 companies, creating their HubSpot instances and helping them keep up to date with their modules and all their automations. So I fell into it very naturally. It was just playing around with the workflows and sequences features inside of HubSpot, and when I couldn't build something inside the platform, moving over to a no-code tool. Zapier was the first one I learned really well, and then I played with Make, but it wasn't until a few years ago that I really doubled down on AI development. That's what we do now. We build out HubSpot portals mostly, sometimes in Attio, but everybody wants to do more, and AI is kind of putting the pedal to the metal. What used to take a team of five or ten, you can do in a matter of minutes or hours if you get a nice prompt going or an AI agent chain going.

It's crazy the things you can build now, and it's really interesting. So that's the long story made short: I kind of fell into it. I love engineering, and this is business engineering.
Michael Hartmann: That's an interesting way of thinking about it; I hadn't heard that. I'm really curious, because I've had to start doing this in my own head: do you differentiate between AI and automation? My view is that automation is the broader thing, and AI can be one part of an automated workflow. I feel like a lot of people conflate the two.
Spencer Tahill: Yes, and it's a huge issue. The short answer is that they're two different things. The long answer is that there's a lot of misnaming in the industry, especially around anything regarding automation or AI. You've probably heard it: "I just built an AI agent" on LinkedIn, or "I've replaced a 20-person B2B sales team." No, you didn't. Sometimes you can do cool things; we've done Slack integrations and data across the world. But those are linear flows: single input, single output, or multiple inputs, multiple outputs. The more complex you go, the more essential AI is going to become. My definition of a workflow is A, B, C: you have an input, you have some sort of transformation or processing step, and then you have an output. Then there's an AI-enhanced workflow, where you have one or more inputs, then the processing, the orchestration layer, and the thinking that goes on. Instead of saying "if A, output one; if B, output two," imagine lead scoring where the scores have different weights that have to be calculated. If it's way too complex for formulas, that's a really good use case for AI, and that becomes an AI-enhanced workflow. Where the industry is going now, and where there's a lot more confusion, is that there are workflows, AI-enhanced workflows, AI agents, and then there's MCP, model context protocol. They're all different. I would say the more inputs you have, the more context you have, the more transformation steps you have, and the more output possibilities you have, the more AI is going to accelerate that.

So that's how I define it. A workflow is going to be linear: a single-path input, a formula, an output. An AI-powered workflow is going to do a lot more thinking behind the scenes.
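Spencer's A-B-C definition can be sketched in a few lines of Python. This is a minimal illustration, not any platform's real API: `linear_workflow` is the deterministic kind he describes (fixed rules, single path), while `ai_enhanced_workflow` keeps the same input-transform-output shape but delegates the transformation step to a model, stubbed here as an ordinary function argument.

```python
def linear_workflow(lead: dict) -> str:
    """A: input (lead) -> B: fixed-rule transformation -> C: output (label)."""
    score = 0
    if lead.get("title") in {"VP Marketing", "CMO"}:
        score += 50  # weighted rule: seniority
    if lead.get("visits", 0) > 3:
        score += 30  # weighted rule: engagement
    return "MQL" if score >= 50 else "nurture"

def ai_enhanced_workflow(lead: dict, classify) -> str:
    """Same A-B-C shape, but step B is delegated to `classify`, which stands
    in for an LLM call when the logic is too complex for explicit formulas."""
    return classify(lead)

print(linear_workflow({"title": "CMO", "visits": 1}))      # -> MQL
print(linear_workflow({"title": "Analyst", "visits": 1}))  # -> nurture
```

The point of the sketch is that "AI-enhanced" changes only the middle step; the inputs and outputs still need to be defined just as carefully.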
Michael Hartmann: Yeah, you brought up MCP, which I still haven't quite wrapped my head around, but maybe we'll get there in the course of the conversation. This leads to one of my next questions. I think a lot of people struggle with where to start. They think, "I need to learn AI, I need to do something." One of the things that was interesting to me when we talked before is that you said one of the first steps toward automation, especially AI-enabled or AI-powered, whatever you want to call it, is identifying the things that piss you off. I think that was the phrasing you used. What did you mean by that? Maybe expand on it a little bit.
Spencer Tahill: Yeah. Some people know this about me and some people don't, but at the end of the day, I want to do the least amount of work possible. That's probably not a very professional thing to say, but I'm being honest. It's a game of efficiency and a game of focus. When I used to work for a few different startups, even when I was freelancing, it was all about trying to optimize my day. If I can build something that takes somebody eight hours to do and I can do it in two, oh my god, the value goes up the ladder, right?
Michael Hartmann:Right.
Spencer Tahill: When I start to think about when I don't want to do something anymore, it's more philosophical. I just don't have a great feeling about it. Like, I don't want to wake up and turn the coffee machine on at six in the morning to have coffee at six thirty. You press the timer on your coffee machine the night before, and then it's done, right? That's a very simple use case. When you start talking about it in the business aspect, everybody has different pain points, and I'll get into that later, but I start by looking at the ground level: what do I just hate doing every day? Okay, let's say I'm a sales guy and I hate picking up the phone and dialing. Okay, maybe that's not the right industry for you to be working in. But if you're a sales guy and you hate going into Salesforce or HubSpot after every call, then what do you do? If you're a really good closer but you're terrible at putting stuff into the CRM, okay, that's your pain point. Let's solve that, let's automate that first, so that you can focus on what you're good at and be more efficient at your job. That's why I always ask: what pisses you off the most? "Ah, I just want to get on calls all day, I just want to close." Everybody wants to do what they want to do. For me it's very simple: I don't like taking notes. I enjoy the process of listening, I enjoy the process of conversing, but I'm not able to focus fully on you, Michael, if I'm off to the side writing my notes. So now you see all these AI note-takers, and that's why I use something like Fireflies or Fathom to take my notes for me, because then I can focus on what I'm good at.

I've gotten rid of everything that annoys me, so now I can give you 100% of my focus. When you start making automations for the processes that piss you off every day, that's where you should start. That's the zero to one: get the idea of what you don't want to do. Because eventually, once you get all the stuff you don't want to do out of the way, the path is going to light up. You'll say, okay, this is the direction we want to take it, this is what I want to build. And then you bring somebody in to build it, or you build it yourself.
Michael Hartmann: I'm with you on the things I don't like to do, but to some degree every job has things we don't like, right? You brought up a salesperson; in the marketing context, the one that pops into my head is cleansing, getting lists cleaned up to upload into a marketing automation platform or something like that, which is necessary in a lot of cases and is an important piece of work. I'm sure there's a handful of people out there who actually enjoy that work; I'm not one of them, and I would absolutely love to replace it in some automated way. So what I'm hearing from you is not just the things that annoy and bother you, but the ones that are still important yet are things you don't enjoy doing. Maybe that's a slight tweak on what you were saying.
Spencer Tahill: Yeah. If you've ever heard of it, I think it's called the Eisenhower matrix; it's along the lines of, is it urgent and is it important? Wherever a task falls on that matrix, if you would have to delegate that work, you should automate it to the best of your ability. You should save 80% of your brain function to focus on the tasks that are the most important and the most urgent, because nobody's going to be able to replace that, right? How I got into it, true story, is that my girlfriend at the time was a virtual assistant, and I was trying to figure out what a virtual assistant does so that I could work from my laptop. That's how it all started. In my silly head it was: you book the calendar meetings, you keep an eye on the email, you keep an eye on the Slack, maybe you take a look at the car reservations, you do your job. And I was trying to figure out: what if I just got notifications from all of these different platforms, all summarized in one place? Would it make my job easier as a person? So I built something for my own inbox that does that for me. So with these annoying, repetitive tasks: how many times a day, an hour, a month are you doing it? What is the impact it's going to have in the short term and the long term? I think that's the way you can start thinking about moving away from manual processes — like you just said, enriching your CRM. That is a pain in the butt, and it's something people come to me for. "We have 400,000 contact records in HubSpot, what do we do with them?" Oh, there's a strategy: let's go pay for ZoomInfo, let's use HubSpot Breeze to enrich it, right?

That's kind of the whole thing now. But to do it in a sequential order, and in a way that you can actually get money out of the project, is a completely different strategy. So before you do any automation, before you do anything, you should probably think about the strategy and how you might make the money back, if that makes any sense.
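The "how many times a day, and what's the impact" test Spencer describes is essentially the value-versus-frequency model from the episode notes. A minimal sketch, with task names and numbers that are made up for illustration:

```python
def automation_priority(tasks):
    """Rank tasks by (times performed per month) x (minutes each) -- a rough
    proxy for the hours an automation would buy back."""
    return sorted(tasks, key=lambda t: t["per_month"] * t["minutes"], reverse=True)

tasks = [
    {"name": "CRM data entry after calls", "per_month": 120, "minutes": 10},
    {"name": "List cleansing for uploads", "per_month": 4, "minutes": 180},
    {"name": "Quarterly board deck", "per_month": 1, "minutes": 240},
]
ranked = automation_priority(tasks)
for t in ranked:
    print(t["name"], t["per_month"] * t["minutes"])  # minutes/month recoverable
```

Here the frequent-but-small task (data entry, 1200 minutes a month) outranks the painful-but-rare ones, which matches the "automate what you repeat" heuristic; a real scoring model might also weight strategic value, not just time.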
Michael Hartmann: Yeah. Long-time listeners know I'm a big fan of thinking in financial terms, right? So there's an ROI. Whether that investment is time and effort or actual dollars doesn't really matter; the question is, are you going to get more back over the long term, or whatever time horizon you're looking at?
Spencer Tahill: Exactly. I do the same thing; everything has to be ROI-adjusted. Somebody comes to me and says, "We're going to pay $10,000, $20,000, $30,000 for something." That's the monetary cost, but what is the actual realized ROI? Okay, within three months your SDRs are going to be spending 35% less time on this, because we've timed them, we've looked at their data entry times, we've averaged it. That's the time they save. But what is the percentage of the salary you're paying them that gets freed up? What is that 25-35% of their salary that they can now go focus on something else with? That's your margin, that's your ROI right there. That's a whole different story, but I agree completely: everything needs to tie back to revenue.
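The SDR math Spencer walks through can be written out as back-of-the-envelope arithmetic. The 35% time-saved figure comes from his example; the salary, headcount, and build cost below are illustrative assumptions, not numbers from the episode:

```python
def automation_roi(annual_salary: float, headcount: int,
                   time_saved_pct: float, build_cost: float) -> dict:
    """Value of an automation = the slice of salary it frees up per year."""
    freed_value = annual_salary * headcount * time_saved_pct
    return {
        "freed_value": freed_value,                     # $/year of freed capacity
        "roi_multiple": freed_value / build_cost,       # first-year return multiple
        "payback_months": 12 * build_cost / freed_value,
    }

# Assumed: five SDRs at $60k each, 35% less time on data entry, $20k build cost.
result = automation_roi(60_000, 5, 0.35, 20_000)
print(result)  # freed_value 105000.0, 5.25x first-year return, ~2.3-month payback
```

The freed-up salary is only "margin" if that capacity is actually redeployed, which is Spencer's point about tying the project back to revenue before building.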
Michael Hartmann: It's interesting, the tie-back when you bring in other people. If you're freeing up resources to apply their skills and experience in ways that are more productive, or that provide a better return for your organization, it seems like it meshes really well with — I don't know, are you familiar with StrengthsFinder?
Spencer Tahill:No.
Michael Hartmann: Okay, so it's one of those personality-test kinds of things, right? The precursor to that assessment was a book called First, Break All the Rules. It was one of the first management books I read where I thought, oh, this actually really changed how I think about things. The basic premise is that a lot of modern management was about fixing people's weaknesses, and this flipped that on its head and said, really, what you should be doing is taking advantage of people's strengths and minimizing the impact of their weaknesses. Not ignoring them. But where I'm going is this meshing: if I could free up somebody who has a strength in a certain area, but they're not able to take advantage of it because they're spending a lot of time on stuff they don't like and aren't good at, even though it's a necessary part of the job, then you start to free up more of that time. Now you can apply that strength to something else where it's going to be a better return, and it's probably going to be more satisfying for them. I think this is a really interesting way of thinking about it, and it's literally hitting me in real time.
Spencer Tahill: This is very common, and I know we're tangenting a bit, but people will come to our door and say, "We want to take call intelligence and figure out which competitors are being mentioned." Okay, that's a whole call intelligence workflow that has to be built out. But why? Where is that revenue going to be made back? What type of insight does that data give you, and what are you going to do with that insight? What's the action you're going to take? Being able to steer the course before you start building is very, very important. Because what I see a lot, and I'm not sure if you see this too, is sales working in their little bubble, with the revenue team and marketing team sitting over here, and nobody actually sees each other's data. I think that's a huge problem. So when you start thinking about making AI workflows and agents inside a company — I work in mid-market — you have to connect the teams, you have to make everybody play nice together somehow, and it's not an easy job. It takes a lot of coordination. And there's a big issue in the industry where you just build one thing and call it an AI workflow, and that's just not what it is.
Michael Hartmann: Yeah, I think you're hitting on something important. I've always thought it's important to ask: what's the end result? What do you want to do with what you're asking for? Very often I push back — maybe that's not the right way to put it — but I want to understand the why when somebody comes to me as a marketing ops leader. An example I could use is actually from before marketing ops was really a thing, when I ran the website for a big global semiconductor company. One of the product marketers came to me and said, "Hey, we want to put up a form and ask for feedback from customers." Great, easy. What are you going to do with it? Because I'm going to tell you right now, if you open that up and people start providing feedback, they're going to want to know what you did with it. And they hadn't thought through that part. It feels like that's going to be even more important with these things that are less deterministic, especially if you're automating things where you've got some sort of LLM or AI component in the middle and you don't always know exactly what's going to come out the other side.
Spencer Tahill: Yeah, I think that comes down to operations. It comes down to processes, to nailing things down: if you do A, B is going to happen. But it's the edge cases. Humans make mistakes, but then we get reprimanded and we think, oh, we're not going to make that mistake again. The bigger issue I'm seeing now, when you start to replace humans with machines, is: how do you make sure the machine handles that edge case a different way the next time? You can tell a person what to do and they're going to learn not to shove the circle into the square hole, but a machine doesn't inherently know what it did wrong. So you have to tell it what to do before it tries to do that thing, right?
Michael Hartmann: Yeah, it can't be totally free-form; you need to provide it some rules. It's interesting, because in my own experience just using ChatGPT at a personal level, there have been several times where I've been frustrated. I get an output that is, say, 80 to 90 percent of what I was hoping for, I give it very specific instructions to change this or that in the output, and it does something completely different; it actually goes away from where it was heading. My fear would be having something like that in the middle of an automated process. Call it an edge case, call it a hallucination, call it whatever, right? That's a risk we have to deal with.
Spencer Tahill: Yeah, and I think that's going to happen anywhere: in organizations, at the user level, things like that. When I started using GPT — and no hate here, OpenAI did a great job getting GPT into the hands of the consumer market — behind the scenes at the organization level there were already intelligence layers crunching and formulating and spitting out insights. On the consumer level, I was not a power user at all. I was a power user of workflows, of formulas and API calls and sequences in HubSpot and things like that. But I hated the process of interacting with it, because, like you just said, a lot of people get frustrated. I was one of those people, you were one of those people: you type something in and it goes and gets something that's not at all relevant to what you asked. Now it's been two or three years since we've had this kind of power in our hands to tell it what to do, and you can do so many crazy things. But it takes so much precision. If you want it to do a very specific thing, you need to be super precise about it. And the problem is actually not producing a quality result once. On a consumer level, the hardest thing to do is to aim for consistency.
Michael Hartmann:Yeah.
Spencer Tahill:Can it do it again? Can you do it again, basically, right?
Michael Hartmann: Yeah. I heard someone the other day talking about training an AI for your own purposes, and it's almost like teaching a baby, right? Because that's kind of where it starts. What's weird, though, is that it already speaks your language, which is not what a baby can do, so that part is different. It's a very interesting comparison. You kind of hinted at something else: one of the other things you and I talked about before was systems thinking and how that needs to come into play when you think about AI and AI adoption. What did you mean by that? From your frame, what is systems thinking, and why do you think it's so important?
Spencer Tahill: Systems thinking is — I'll put it very simply — imagine you have a car, and each component of that car makes up the whole machine. You've got the wheels, the axles, the engine, whatever. If you take a wheel off the car, can it run? Maybe on three wheels. Depends on what you mean by run, exactly. But if you change one thing in a machine, if you take the belt out, if you take the engine out, it's probably not going to work as well as you designed it to. Systems thinking is the idea that before you change one component, you should look at the ramifications of that change across the whole system. You see this a lot in workflows, especially in Zapier. You've got one Zapier account, 20 folders in there, and 18 different zaps floating around; ideally they're organized. If you change one custom field inside of HubSpot, how many of those zaps break? If one of your interns goes in and breaks one thing, how many of those things get jeopardized? One minor change can domino-effect down the whole system and shut it down, and I'm sure that has happened to many organizations; I've seen it. So the idea is: don't change the micro until you understand the macro. That's how I try to formulate what systems thinking is — understanding the whole embodiment of an organization, of a system. Not just HubSpot, not just the ad campaign, but the whole creative process: creating the copy, going to Canva, posting on LinkedIn.

Okay, when you post it on LinkedIn, it's an ad, then you boost it. But how do you boost it? Do you boost it with a UTM? Where does that UTM get picked up? Does it go to HubSpot? Where does it go? Tracing the line back and mapping out the whole system like a skeleton, so you can see: if I change this little thing, that's how it changes everything, and that's the impact.
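The audit Spencer describes — change one HubSpot field and find out which zaps are at risk — amounts to building a reverse index from fields to the automations that touch them. A sketch, with hypothetical field and automation names:

```python
from collections import defaultdict

def build_index(zaps: dict) -> dict:
    """Map each field to every automation that reads or writes it."""
    index = defaultdict(set)
    for zap, fields in zaps.items():
        for field in fields:
            index[field].add(zap)
    return index

# Hypothetical inventory: automation name -> fields it depends on.
zaps = {
    "sync-leads-to-slack": {"lead_score", "owner"},
    "route-mql-to-sales": {"lead_score", "lifecycle_stage"},
    "weekly-report": {"lifecycle_stage"},
}
index = build_index(zaps)
print(sorted(index["lead_score"]))  # automations at risk if lead_score changes
```

Even this toy version makes the "don't change the micro until you understand the macro" rule concrete: the blast radius of renaming `lead_score` is a lookup, not a surprise.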
Michael Hartmann: Got it. This goes back to a guest we had on, probably a year ago now, who taught me about something called the Cynefin framework, which is a Welsh word. It's spelled C-Y-N-E-F-I-N, I think, but pronounced very differently. Long story short, it differentiates across different kinds of problems, and the distinction most relevant here is complicated versus complex. What you described — and a car is a great example of complicated — is a system where, if you have enough knowledge and training, you can understand how it all fits together, and if there's a problem you can usually figure it out and fix it, because it's a singular problem, even if it has downstream effects. A complex system, which it feels like we get into very quickly, is one you actually can't fully understand. And I think this is where AI starts to come in: you have one input over here, you change it, and you actually don't know what the impact is going to be in that one case. Then you get a new case where the input is slightly different; do you get the same kind of result? Maybe, maybe not. That's where I suspect we're headed: moving from complicated, where it's not easy but you can figure it out, to complex, where it will be really, really difficult to figure out what changed, because it's all hidden inside this quote-unquote black box.
Spencer Tahill: You nailed it. Okay. Yeah.
Michael Hartmann: But understanding how those pieces fit together is still important, right? I don't want to discount that. Systems thinking, now that you've described it, is going to be really important. But it feels like there has to be an acknowledgement that even with systems thinking, in this new world we're either in or headed toward, there's going to be stuff that is really hard to figure out — both to set it up, and to diagnose when something doesn't — I wouldn't even say breaks, but produces an output we don't expect.
Spencer Tahill: And that is probably the hardest part of my job: edge case handling. I feel confident telling you I can build and stress-test a research prompt in two or three days. But it's the testing period I tell people about. Listen, I can build it, but we're going to test it for two weeks, because I need to monitor the edge cases before I hand it over to you. So many people try to go so fast on building or implementing something that they don't do proper QA, and then you get issues, right? That's the way of the world now: everybody's going so fast, AI is here, it's at your fingertips, and you think it's going to solve everything. But a brand-new cooking pan is not going to make you a better cook, right? It's going to help — maybe a sharper knife will help — but it's inherently not going to fix the root issue. If more people tried root cause analysis, the systems thinking approach, they would understand that they have a lot of different problems than they actually think they do.
Michael Hartmann:Yeah, so do you think that's one of the big mistakes people make when they're trying to adopt AI, that they're moving too quickly? What are some of the biggest red flags, the things I'd call bear traps, maybe, that people should watch out for when they're trying to adopt AI?
Spencer Tahill:I don't really get it so much anymore across my desk, but a few months ago, even I would say within the past six months, what I noticed is that more and more people who came to me knew, okay, we're going to do something complex. Going back to complexity. For instance, I want to build a Slack bot that connects one community to another community, but also routes into Attio and uses call intelligence and everything like that to make content. Whatever. That's very complex, and you need different API nodes. But a lot of people come in and try to use a new tool with this shiny object syndrome, where they see AI and they're like, oh my God, I see all these things on LinkedIn. This is the stem of the problem: they see everybody else doing something super complex, and it looks crazy, and it somewhat sounds like it can solve their problem.
Michael Hartmann:Right.
Spencer Tahill:But they don't realize that every problem is unique. And I think that's the very pinnacle of the problem, not even regarding AI adoption. It's how you hire people. Why are you hiring them? Is the problem unique? Are you making a workflow or something like that? So not being able to understand the situation before you try to adopt something like AI, not being able to define the problem properly, and not being able to define possible solutions to implement, that's a huge problem. And I don't touch that, because it makes work very difficult when you've got a project manager. It makes everything slow down, and you're like, oh wait, uh-oh, I built something and you didn't want that. Everybody's been there, you know, and then it costs everybody time. So I think it's rather: slow down, define the problem, define the desired result, work backwards, look up any roadblocks, set up the task matrix of who's going to handle what based on their strengths, delegate, start at the beginning, and then say, okay, this is going to take this much time. Companies and people that fail to do this, in this order, are just bound to fail. Too fast, not enough thinking, it just fails.
Michael Hartmann:It's the whole trying to eat an elephant in one bite, it's doomed to fail, right? So, switching gears a little bit, I'm curious what your take is now. I think when we talked before, you said there's a lot of fear out there that AI is going to replace people, and if I remember right, your take was that AI wasn't going to replace people, it's going to replace people who don't know how to use AI. Is that a fair assessment? And has it evolved at all since then?
Spencer Tahill:Has it evolved? I'll probably double down on that statement. We are apex predators as humans because we know how to use tools and because we know how to think ahead. That is my double-down statement. AI is a tool that allows you to do more, way past your cognitive abilities. I don't have a PhD in microbiology, I don't have a PhD in mathematics or anything like that. But these LLMs, from OpenAI and Anthropic and any of the other providers out there that are amazing, they've been trained on this. They've gone through the steps, they've gone through the data. You have a billion perspectives and a billion different data points outside of your own cognitive ability to tap into. You just need to ask the right questions. And I think the people that know how to ask the right questions, the people that are inherently inquisitive about new tools, about how to become a better prompt engineer, how to use AI better, that's step number one. Don't ever just log into GPT and say, hey, help me do this. No, no, no. Go to GPT, open two GPT windows, and in the first say: I want to do this, write a better prompt for me that I can copy and paste. It's going to be way smarter than you about prompting. Take that, put it in the second window, do your task, and then after you're done with your task, say: this is what I did, this is the prompt I used, this was the original prompt I wanted to use, but help me learn how to go from my original idea, at my original level, through the process, and teach me how to do it at a higher standard.
Michael Hartmann:Yeah.
Spencer Tahill:And I think the people that know how to do this are going to soar over the people that don't know how to ask the right questions or don't adopt AI. The people that just don't adopt it, quite frankly, I think it's negligence. It's a tool. You don't need to use it to be great at what you do, but if you're trying to do more and accelerate, it's clearly the next step. And that's just my take on it.
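The two-window workflow Spencer just walked through can be sketched in code. This is a rough illustration, not an official API: the `ask` function is a hypothetical stand-in for whatever LLM client you use, and the message shape is an assumption.

```python
# Sketch of the "two GPT windows" workflow: window one rewrites your rough
# prompt, window two runs the improved prompt fresh, and a follow-up asks
# the model to teach you the gap between the two.

def ask(messages):
    """Hypothetical stand-in for a chat-completion call; returns the reply text."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def two_window_workflow(rough_prompt, run=ask):
    # Window 1: have the model rewrite your rough prompt.
    improved = run([
        {"role": "user",
         "content": "Rewrite this prompt so it gets better results. "
                    "Return only the improved prompt.\n\n" + rough_prompt}
    ])
    # Window 2: run the improved prompt as a fresh conversation.
    result = run([{"role": "user", "content": improved}])
    # Follow-up: ask the model to close the gap in your own prompting skill.
    lesson = run([
        {"role": "user",
         "content": "Original prompt:\n" + rough_prompt + "\n\n"
                    "Improved prompt:\n" + improved + "\n\n"
                    "Explain how to get from the first to the second so "
                    "I can write prompts at this standard myself."}
    ])
    return improved, result, lesson
```

Passing `run` as a parameter is just so the sketch can be tried with any client, or with a fake for testing.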
Michael Hartmann:I saw somebody post on LinkedIn recently, the last week or two, saying something that kind of echoes what you're saying: AI is a tool. He's like, as a writer, I didn't lose the ability to write when I went from pen and paper to a typewriter. I didn't lose my ability to write going from that to a computer. And it didn't change when I started using Grammarly or something like that. It just helped me get better and more efficient, as opposed to atrophying, which we could talk about. I think it's gone by the wayside now, but there were all those headlines about AI causing brain atrophy. I think that was just headline fodder, but I don't know what your take was on that.
Spencer Tahill:That's the MIT study, right? Yeah, the one that was covered a few months ago. That's a whole discussion, but I read the synopsis of that study a few months after it came out. For the people that didn't read it, it was a study by MIT with a control group, roughly about people that use AI versus people that don't, and the adoption and competence levels there. The way it was presented in the news, that GPT users have lower competence at XYZ, that's cannon fodder. But my personal beliefs aligned with the study in the sense that somewhere in there it said something along the lines of: higher competence users of AI will lap lower competence users at a baseline level, but lower competence users will fail to learn if they continue at this low learning aptitude. Long story short, lazy people get lazier because they depend on AI as a shortcut, but the higher performers, the ones that are more inquisitive, will use it as an acceleration jump in their learning. That's how I understood it.
Michael Hartmann:Yeah, I'm with you. And it's consistent with other stuff I've heard, more in the education realm, where the highest performing kids who started using AI as part of their learning process continued to distance themselves from those who didn't use it, and those who were not performing as well and used it also improved, their improvement was just smaller, right? So it's not quite what you were saying, but it's a very similar kind of result. I'm sure there are studies; that one I've just heard about anecdotally. Okay, so I'm trying to decide where to go here. We kind of talked about the system prompt, you know, using the tool to help you write better prompts. And one of the things you and I talked about, and I still feel like I don't quite understand, is the difference between what I think you called a system prompt and a user prompt. I'll let you explain. What's the difference?
Spencer Tahill:Yeah, so in the context of a user-level interaction, in layman's terms: you log onto your computer or your phone and you type into GPT. You're the user.
Michael Hartmann:Yeah.
Spencer Tahill:On the system side, that is inherently how the system thinks, right? It starts as a blank slate. Imagine you as a human, just in machine form. You're a system as a person: your brain is a system, it's wired to think a specific way. You've learned all these different interactions, you know how to speak, maybe French, Italian, Spanish, English, whatever. Maybe you have a doctorate, maybe a bachelor's degree in engineering, maybe you're in business. You have all of these unique ways of connecting the synapses in your brain. Everybody is unique. That's what makes us us. But we're not a machine. And if you want to replicate a way of thinking in a machine, this machine thinking, quote unquote, I'm going to simplify: a system prompt is the back-end design of what you're talking to. A user prompt is how you're actually interacting with the machine, if that makes sense. So when you put in, hey, help me do this research, you are prompting it as a user, not as the machine. The machine, the system, will think in a very specific way, and it's trained to do this. So you have very specific use cases, like: I want you to act like a market researcher, I want you to tell me how to make a podcast episode. But have you ever given it extra context right before you tell it what to do? That's contextual prompting.
Michael Hartmann:Yes.
Spencer Tahill:The system prompt is behind the scenes. That's where you can give it all of the intelligence and all of the use cases. You can few-shot it, you can give it edge cases to look out for, you can tell it how to think. Nowadays you can, but not a lot of models six, seven, eight months ago made it easy for you to actually design the system prompt. You were only able to interact with it from a user prompt point of view. So when you're building a workflow, and maybe there are some listeners who understand API calls and things like that, if you have the chance to put information inside of a system prompt window, that's where you're going to give it its thinking ability, its full context, all of its edge cases, its limitations, and everything like that. The user prompt is there to drive the conversation and the contextual output forward. Those are the two differences. And when you're just on your computer or your phone trying to find the answer to something, you're prompting it. You've not designed it.
Michael Hartmann:Right.
Spencer Tahill:The first iterations of a system prompt came with fine-tuned LLMs, where you would design it to handle these use cases, to think in a specific way. It's not thinking, I mean, it is, but it's connecting synapses, connecting nodes of data together. That's thinking. That's a system prompt, and those are two inherently different things.
Michael Hartmann:I was thinking of programming languages most people would use versus assembly language, but I don't think that's quite the right analogy. If I understand it right, it's almost like providing that baby as much experiential stuff and knowledge as you can before you then ask it to either answer something or do something, right? Is that a better way of thinking about it? Yes.
Spencer Tahill:I would up the complexity a little bit, but on the back end, yes. Imagine you have that one friend who's really annoying, who you just don't want to talk to about anything, but you know that if you tell him about this other thing, he's going to be there for you, XYZ, right? It's a very complicated scenario, but that person is a system, and they're going to handle stimuli in a very specific way. When X occurs, that happens, because you've already tried to talk to that guy twenty times, he comes over at two in the morning, whatever. The consistency, the training of this, that's the personality, that's the way the system operates. It's the operating model; it's not like assembly. The user input is the scenario. So instead of looking at it as, you're going to give the toy to the baby and it's going to try to shove it through the box, you teach it. You tell it what to do and how to do it, and what happens when it doesn't do that. That's a system prompt. And then when the scenario happens, when you give it the scenario and the context of what's going on at the user level, it'll know: oh, this matches scenario 47 that I already have a knowledge base about. Let me handle it in this specific way.
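In code, the system/user split Spencer is describing usually shows up as a messages array in a chat-style API call. This is a sketch of that common convention; the field names and the "assistant" role for few-shot examples are assumptions that may differ by provider.

```python
# Sketch of a chat-style request body: the system prompt comes first (the
# machine's "personality", edge cases, limitations), then optional few-shot
# example pairs, then the live user turn that drives the conversation.

def build_messages(system_prompt, user_prompt, examples=()):
    """Assemble a messages list in the common system/user/assistant shape."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in examples:  # few-shot pairs
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": user_prompt})
    return messages

# Hypothetical example: design the system, then prompt it as a user.
payload = build_messages(
    system_prompt=("You are a market researcher. Triangulate sources, "
                   "flag data discrepancies, never invent citations."),
    user_prompt="Research the mid-market CRM landscape.",
    examples=[("Research X.", "Summary of X with sources...")],
)
```

The point of the split: everything in the system message is design that persists across turns; the user message is just the scenario you hand it.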
Michael Hartmann:Interesting. Okay, I think I'm getting that nuance a little better now. So I think I mentioned the word hallucination at some point, and pretty much anybody who's used this stuff has experienced a hallucination from one of these LLMs. One of the things you and I talked about, you said there are five things you suggest you do with every LLM, every time you go through something. I don't know if this is a system prompt or user prompt thing, so maybe there's a different shade there. But what are those five things that our listeners or audience could benefit from?
Spencer Tahill:I'm going to explain it terribly, but I'll do my best. I always give it context, background, goals, constraints or limitations, and format. Give it those five things, and it doesn't have to be in that order, and you can look up on the OpenAI website and the Anthropic Claude website how to make a good system prompt and a good user prompt. But give it context: you are this, you have this data set, you're connected to the API, whatever, you've got all that data. That's context. Background: you've handled a hundred thousand iterations, these are the hundred thousand results you produced, and out of those, only fifty thousand should have been produced, whatever. That's the background: what is it? I'm a senior market researcher, and so on. That's step two. So, context and background. The third one is the goals, the directed output. What are you trying to achieve? Okay, I'm a market researcher, I have access to Google, I have web search capabilities, I have access to an API, all of these different web pages, whatever, but now my goal is to crunch data. I want to analyze the data, synthesize the data, triangulate the data sources, and see if there are any data discrepancies. Okay, awesome. Now the fourth step. We have context, background, and goals; the system knows very clearly what to do. Then you have limitations or constraints. This is the fourth one, and it's what a lot of people forget, and it leads into the discussion of hallucinations: what not to do. You summed it up perfectly. You were on GPT months ago and you were super frustrated because it produced something completely different.
Michael Hartmann:Yeah.
Spencer Tahill:Exactly. What you had forgotten, and naturally I have too, many times, is that you didn't tell it to retain its memory. You told it, hey, I want you to make these changes, but then it went, hey, I'm going to make these changes across the whole thing. What would have been better is: hey, great job. Give it feedback, just talk to it like a person. That's what I do. I hit Windows key + H on my keyboard, or you open up speech to text, there are a lot of speech-to-text apps, and I speak to it. I give it feedback. That's how I work. But with the limitations: do not do this, do not output a string, do not output an array, output rich text. Or: take out all the spaces, take out all the vowels, I don't know why you'd do that, but do not do this, do not list it down. Because it's not going to know unless you tell it. A child isn't going to know the alphabet starts with A unless you go A, B, C, D, right? So you have to tell it, you have to treat it like a child. It doesn't know. They're so smart now that most of them do, but if it's never seen the scenario before, it's not going to know. So now we have context, background, goals, limitations. Now you have the fifth one, which is almost the most important: the format. The formatting of the output is the result. That's the thing you need to get ROI from, right? If you're making a workflow or a transformation program, that's probably what you care most about as a result. And the result doesn't matter if it's not formatted properly. If you go into HubSpot and you have a first name field and it says Spencer Tahill dot whatever, but it's not spaced out, it's my full name, it's not parsed correctly, what's the point? You need to produce the output and then format the output in a way that makes sense.
So: I'm a senior researcher, do research about this, don't pull X threads, don't look at social media, and I want you to make the result a one-page summary in this format: executive summary, sources used, explanations, data discrepancies. That's the format. If you can nail that, you've pretty much got a great prompt on your hands.
Michael Hartmann:Yeah.
Spencer Tahill:Context, background, goals, limitations, format, and you're pretty much good. Outside of that, you can do all of your learning on the OpenAI site.
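Spencer's five-part structure can be sketched as a tiny prompt builder. The section labels and heading style here are illustrative assumptions; the framework itself just says to cover all five parts.

```python
# The five-part prompt framework from the episode: context, background,
# goals, constraints, format. Each part becomes a labeled section of one
# assembled prompt string.

def build_prompt(context, background, goals, constraints, output_format):
    sections = [
        ("Context", context),
        ("Background", background),
        ("Goals", goals),
        ("Constraints", constraints),     # what NOT to do; most often forgotten
        ("Output format", output_format), # how the result must be shaped
    ]
    return "\n\n".join(label + ":\n" + body for label, body in sections)

# Hypothetical market-research example, following the episode's walkthrough.
prompt = build_prompt(
    context="You are a senior market researcher with web search access.",
    background="You have produced thousands of research briefs in this house style.",
    goals="Analyze, synthesize, and triangulate the data; flag discrepancies.",
    constraints="Do not cite social media threads. Do not invent sources.",
    output_format=("One-page summary: executive summary, sources used, "
                   "data discrepancies."),
)
```

Keeping the constraints and format sections explicit, rather than implied, is exactly the hallucination guard discussed above.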
Michael Hartmann:Yes, yes, yes. I think the constraints one was the biggest help for me when we talked, because I started to see where the hallucinations come from. And I started doing things like: only use the material I have provided you specifically for this task, and don't use other resources. Because that's where I found it was pulling in stuff it had access to that wasn't really relevant. That was helpful. I had already started doing the context piece. The big thing I've learned: I've got some long threads where I've provided a whole bunch of context and examples of other work I've done to give it guidance, and it was really good, it's gotten better over time. But it's all in one big long chat, and the performance has been bad. So I actually did kind of what you said and went and asked it. I'm not advocating for ChatGPT, it's just the one I happen to use, and part of me feels like I should be looking at others more, but I've invested in this one. I said: the results are good, but the performance is getting worse every time I use it. How should I restructure this if I'm going to do it again? We went back and forth, and it gave me a great structure: do this, do this, do this. That's how I'm going to implement it in the future, so I get the benefit of all that context with continued better performance. So yeah, I love that idea of using the tool to help you get better at using the tool.
Spencer Tahill:That's that's how I learned.
Michael Hartmann:Yeah.
Spencer Tahill:I sucked at it and I got better at it. The first step of being great is sucking, unfortunately.
Michael Hartmann:Yeah. Well, this has been awesome, Spencer. Thanks for this. If folks want to continue the conversation with you or learn more about what you're doing with Growth Alliance, what's the best way for them to do that?
Spencer Tahill:Anybody that's listening, book a call with me, or message me on LinkedIn, whichever. I'm completely open. My eyes can be your eyes in the industry. I'm building a lot of complex things, and it takes a lot of time, so let's network and chat. Book a time on my calendar, tell them that Michael Hartman sent you from marketing ops, and we'll just catch up, we'll get a coffee. I like meeting people. Everything is on my LinkedIn; it's just slash Spencer Tahill or something like that. Gotcha. But yeah, don't think that you're alone in trying to learn a new thing, because I sucked at it too. So it's okay.
Michael Hartmann:Yeah, it's definitely been a case of you have to invest a ton, right? I wish I had more time for it, but I feel like I've gotten better and better over time. Anyway, it was a pleasure, Spencer. Thank you so much, and thanks for staying up late. I know this was a long time coming, so I appreciate it. Thanks also to our listeners and the rest of our audience; we always appreciate you. If you have suggestions for topics or guests, or you want to be a guest, you can reach out to Naomi, Mike, or me, and we'd be happy to get the ball rolling. Until next time, bye everybody.