OpsCast

The Value-Driven Path to AI Adoption with Aby Varma

MarketingOps.com Season 1 Episode 209


In this episode of OpsCast, hosted by Michael Hartmann and powered by MarketingOps.com, we are joined by Aby Varma, global business and marketing leader and Founder of Spark Novus. Aby helps organizations adopt AI strategically and responsibly, guiding leaders from early adoption to self-reliant innovation.

The discussion explores how marketing teams can move beyond experimenting with AI tools to building long-term, value-based strategies that drive measurable impact. Aby shares real-world examples of AI implementation, frameworks for defining a “strategic north star,” and advice for leading change across every level of the organization.

In this episode, you will learn:

  • How to apply a value-based approach to AI adoption
  • Why productivity is only the beginning of AI’s potential in marketing
  • How to build responsible-use guardrails that support faster innovation
  • The evolving role of Marketing Ops in AI strategy and execution

This episode is ideal for marketing, operations, and business leaders who want to use AI with purpose, balance innovation with responsibility, and prepare their teams for the next phase of intelligent marketing.

Episode Brought to You By MO Pros 
The #1 Community for Marketing Operations Professionals

OpsCast is brought to you in partnership with Emmie Co, an incredible group of consultants leading the top brands in all things Marketing Operations. Check them out at Emmieco.com


Michael Hartmann:

Hello, everyone. Welcome to another episode of OpsCast, brought to you by MarketingOps.com. I am your host, Michael Hartmann, flying solo. I am hopeful that Naomi and Mike will be joining again soon. In fact, I know we're planning to do a recap of MOps-Apalooza 2025 soon, so watch for that. Today I'm diving into one of the biggest challenges, and I guess opportunities, facing marketers and marketing ops folks today: how to adopt AI strategically and responsibly. Joining me for this conversation is my guest, Aby Varma. He is a global business and marketing leader and the founder of Spark Novus, where he helps marketing leaders navigate their AI journey from adoption to self-reliance, driving innovation and marketing and business growth. He also hosts the Marketing AI SparkCast podcast and leads the Marketing AI Pulse community, both focused on AI's real-world impact on marketing and business. So, Aby, welcome to the show. Thanks for joining.

Aby Varma:

Thanks, Michael. Thank you for having me. Such a pleasure.

Michael Hartmann:

Yeah, we may have to talk about that. I know we chatted briefly before this; maybe we can talk a little bit about the community you have as part of this. But let's dive into it. Let's talk about what you're doing with Spark Novus. What's the origin story of how you got started helping marketing leaders navigate the adoption of AI?

Aby Varma:

Yeah, great question. So this was all timed well, "well" in quotes, because I got into AI right before COVID. At a company I used to work for, we had hired, as part of our R&D department, a person who was very focused on machine learning, and I'm kind of a nerd at heart, so I got into it. Timing-wise, this was just before COVID, and that gave me plenty of time to dig into it. I went from a predominantly travel-heavy role to being completely in lockdown mode, like we all were, and it afforded me the opportunity to dig in and learn more about it. I was like, wow, this is game-changing; it's really going to change the way marketing works. At the time I was in a marketing leadership role in an enterprise organization, so it started off with just me thinking about how this was going to solve my problems, and the more I got into it, the more fascinated I was that this was going to be game-changing. That was sort of the quote-unquote spark, the Spark Novus.

Michael Hartmann:

Ah, okay. Love it. Nice connection there. For those who are watching this, now I'm wondering about that piece of art behind you, Aby. Is that AI-generated?

Aby Varma:

No, it is not. My sister is a commercial artist, and whenever she visits, I sort of twist her arm to create something.

Michael Hartmann:

There you go. It's great either way. I mean, I'm worried that at some point in the near future it's gonna be hard to distinguish. I see posts a lot now on Facebook or Instagram, I haven't seen as much on LinkedIn that I'm aware of, where the story seems too good to be true, and then you start reading, and yeah, it's definitely bullshit.

Aby Varma:

AI slop, yes. A lot of it is making its way in.

Michael Hartmann:

Well, so let's talk about something you and I discussed: a lot of companies, when they're starting to dip their toes in, or somebody says we need to go do AI, think about it probably like ABM, right? They think about it as technology first. But you said you think about, and advocate for, companies taking a value-based approach. So what do you mean by that? And what does that look like at the ground level, if you will?

Aby Varma:

Yeah, absolutely. So the value-based approach is really changing your perspective from saying, hey, what can this tool do, or what can AI do, to what can I do with AI to achieve my business goals, right? A very subtle shift. And when I say I, you can replace the I with me, or my team, or my organization. So it's really changing the framing: you are putting AI to use in a way that adds value to what is relevant to yourself, your team, your department, your organization at that point in time. As simple as that sounds, it is often lost in the noise of AI adoption. It's not uncommon, when we're working with CMOs, that they'll ask me, hey, can you help me with AI? Can you help me quote-unquote do AI? And I'm like, sure, what would you like to do? And there's an awkward silence. That's symptomatic of people getting overwhelmed. There's an assault on all the senses these days with AI. But really pausing and thinking about what the value is, and using it toward something that moves the needle for your department, your team, your organization, that is what we call a value-based approach.

Michael Hartmann:

Yeah, it's interesting. This is not a great analogy, but in my head I went to the John F. Kennedy line: ask not what your country can do for you, but what you can do for your country. In this case: ask not what AI can do for you, but what you can do with AI. There you go. You can steal that if you want.

Aby Varma:

I will, but you know, at the end of the day, as you said, AI is not the protagonist of the storyline, right? It's still very much human beings. The way I look at it, AI is an enabler, but what is it enabling? It needs to add value, it needs to move the needle, it needs to do something more than using AI for the sake of AI.

Michael Hartmann:

Yeah, the way I've been thinking about it is that it's almost a thought partner, an enabler, a multiplier maybe, in some way. That's how I've been starting to think about it both personally and professionally, but mostly personally, honestly. Although I happen to work at an organization now that is really, truly trying to adopt AI. I don't know that they thought about it as value-based per se, but I've seen our leadership talk about a time dividend, and I like that way of thinking about it. Still, it feels like there needs to be a thoughtful way of doing it, and finding early adopters, maybe. I have that example a little bit in my head from personal experience. Could you talk through some examples where you've seen this either not done well and you came in and helped course-correct, or where you were brought in and there was that silence, and you said, okay, let's talk about what your goals are and then figure out how to bring in AI as a part of it?

Aby Varma:

Absolutely. I'll give you maybe three examples. We work with all sorts of firms, so I'll give you an example of an enterprise company, a mid-market company, and a small business, because that value-based approach is agnostic to the size of the organization you work within. For the enterprise company, a Fortune 500 firm, we are working with one of their business units that launched a new product. There was a lot of urgency in terms of launching the product, trying it out, and proving product-market fit early on, before they put more investment dollars into it. It was a software-and-solution-consulting packaged offering within the sustainability industry, so longer sales cycles. But the idea was, hey, how can we do it? If you did it the traditional way, that process could easily be a one-year cycle: conceiving the idea, coming up with messaging, building your go-to-market strategy, activating your channels, and all that kind of stuff. We were brought in almost just for the content piece, to help with content generation. But when we started working with the executives, we raised the idea: you could use AI to accelerate the entire cycle. And the outcome was that within a four-month cycle, we were able to take the solution from concept to market with a qualified pipeline of over 2,000 leads. We compressed that into four months, which would just not be possible without AI, in terms of the mechanics that went into it.
So it's a great example where there were a lot of ideas on how to leverage AI, but we kept asking the question: so what? Hey, let's do content. Okay, so what, to achieve what? Okay, we want to make sure it lands with the right people. You keep asking that question until you get to the nugget of value you're trying to extract. So that's the enterprise example. A mid-market example is a professional services firm out of the Midwest that we're working with. In that example, 90% of their marketing dollars are spent on paid media, and their entire use case is that anytime they want to grow, the expectation is that more budget dollars are needed. So how can they get more out of the same amount of spend? The higher value there was leveraging AI to really improve their return on ad spend, ROAS, and help them grow without additional investment in their paid media strategy.

Michael Hartmann:

So you've been bending the cost curve, is the way I'd put it.

Aby Varma:

Absolutely. So from campaign ideas to content creation to imagery to analysis and analytics, the whole value chain for paid media is under review, and AI is already adding value. Then for a smaller firm, we work with a training and services company, a small company of 25 to 30 employees. In that example, we enabled the entire organization and rolled out the business version of ChatGPT. They don't have a lot of investment dollars for different types of technologies, so we just rolled out the business version of ChatGPT, which is 25 bucks per user per month, very affordable. Then we came up with a series of custom GPTs designed very specifically for different use cases. My favorite use case was improving their sales effectiveness. For the very first prospect discovery calls, we came up with a rubric for how to evaluate those calls. This was designed for the salespeople: they uploaded their transcripts at the end of the call, and it gave them guidance on what to do. The value there: prior to that, the CEO himself would be coaching the salespeople, spending 30 to 45 minutes each, which is just not humanly possible for him to do with all the salespeople while providing constructive feedback. So we were able to develop this, and now we're seeing better results: sales velocity is faster, close rates have increased, those sorts of things. Again, there were lots of use cases on the list, but this is a perfect example of being value-driven. We prioritized the sales enablement use cases and some of the delivery use cases, but we picked the ones that added the most value to the org.
So that gives you a sense of enterprise, mid-market, and small business, but in all of these cases, in the swarm of AI ideas, we really honed in on the ones that are going to move the needle for the business in some way.

Michael Hartmann:

I mean, the last one is interesting to me. It sounds like you get two pretty immediate benefits, right? One is sort of embedded coaching from the tools, as well as insights and guidance on the best next steps. And if you do that over time, you can continue to make it better. I'm curious, across those and maybe others: when you've gone in with a list of use cases and figured out the one, or ones, that look like they'll deliver the most value, have you been surprised either way? Either it didn't achieve the value you hoped, or it was off the charts, or something happened that you didn't expect? What were the surprises you ran into?

Aby Varma:

Yeah, great question. I think in some of the examples we see, there's an assumption that the concept of leveraging AI is going to automatically make everybody's life easier, faster, better in some way. And the surprising part is that it is so intrinsically tied to culture. The bigger the organization, the more egregious the resistance. Some of that resistance is subtle, right? People won't even acknowledge it. When we come in, we typically do some sort of baseline assessment of culture and knowledge and that sort of thing. But that, to me, has been really surprising: a lot of people just have a philosophical stand of no AI, or it's a habit thing. They're like, hey, my workflow is this, I've been with the company for 15 years and it's working, I don't know why I need to change my process. So that culture-based resistance is very surprising, even when you can prove out the value. Those are the cases where you've got to put in some extra TLC: really working with the team, making sure people are heard and understood. It's not a rip-the-band-aid-off thing; it's a cultural transition to help those people along. Most AI adoption follows a very traditional bell curve: you have the early adopters, then the majority of people join, and then you have the laggards. But I think the most surprising thing, pretty much across the board, has been that cultural resistance. The second part is the underinvestment in training. That's surprising to me too, where you have leaders who skimp on training and governance, right?
People are like, yep, I just want the value, and I don't want to spend too much effort, or too many dollars, training the team.

Michael Hartmann:

It's easy, right? Yeah.

Aby Varma:

Yeah, just give me the easy button. So as part of our implementations, we definitely try to impress upon the leadership making these decisions that those elements are important. And yes, it may not rear its head right off the bat, but I can guarantee these things will matter over the longer term.

Michael Hartmann:

Yeah, what I'm hearing is that change management, and tailoring the change management to match the cultural readiness, is still something that needs to be done here, like it does with any significant change, right? And this certainly would be a significant change no matter what you're focused on.

Aby Varma:

100%. And I see that change management element as so critical. When it comes to knowledge, there's just general insecurity about AI, based on a lack of knowledge about AI or about how the organization intends to use it. People are insecure: hey, if I use it, are they eventually going to replace me? Or, and I'm giving you real-life examples here, people are like, hey, I don't want to use AI because I've put in a budget request for two people to grow my team, and now I won't get that budget because they'll say AI already covers it, and somehow that's detrimental to their personal growth or whatever. These are hidden things in people's minds. Some articulate it, some don't. But until there's an effort put toward genuinely surfacing and understanding these thoughts, and then finding a strategy to address them, whether one-on-one or as an organization, these sorts of things come up pretty routinely.

Michael Hartmann:

So on the change management piece, it sounds like there are layers to it, too. You talked about senior leaders maybe not fully understanding it, but there are also people at the middle levels and at the individual contributor levels within organizations. There's resistance, or maybe beliefs about what's possible that aren't quite aligned with what's real today, that kind of stuff. Are you adjusting the change management at all those different levels? How do you approach that?

Aby Varma:

Yeah, as part of our process and methodology, definitely. We have a three-pronged approach to change management, and it's a tight feedback loop. The first is really discovery. If you don't ask, you'll never know, so you've got to have a formal mechanism for asking. We recommend anonymous input, which is much richer, because people are not afraid to communicate what they're thinking. Some of that feedback can be reflective of the leadership or the organizational culture itself, which is really good to know. For example, it could be: hey, we are a very conservative organization, we don't typically lean in on technology, so I am not optimistic about what AI can do for us, because I haven't seen it with past technology. And that has nothing to do with AI; that's everything to do with the culture of the organization. Some of these anonymous sessions will reveal that. Then we have pointed sessions tied to the nature and type of the org: is it B2B or B2C, services or product, global or local, size of team, workflows, and all that kind of stuff. Like the mid-market firm I was telling you about: 90% of their budget goes to paid media, so all the other channels are not a big thing for them. That has an impact on how AI would be leveraged and on the emphasis of the various AI-focused programs within the organization. So all of those things are a key part of it. One is listen and learn. The second, I already said it, is enable and train, making sure that enablement and training happen on an ongoing basis. Third is a culture of experimentation: allow people to fail, fail fast, and recover. There's an assumption of perfection, right?
To me it's like: there's AI, people are going to try it, and voila, that's the silver bullet, all my problems are going to go away. Definitely not happening. So just be realistic about that. All the technology providers' perspectives are definitely rosier than what real-world teams experience, and just acknowledging that will go a long way. And those three things need to be repeated. It's not one and done, especially when it comes to training. People are like, hey, if I train people, they're going to leave my team, and I've just funded their education and now they're going to leave the org. And my thinking is: is that worse than if you don't train them and they stay? To me, which is worse? My take is, invest in your team, empower them, let them play with it, get their hands dirty. And that could be something as simple as making time. I don't know of any marketing team that works 40 hours, right? That's a thing of the past. But if you are carving out time and articulating it, hey, take two hours on a Friday, or whatever is in alignment with your organization, let people play with it. And hackathons are not limited to technical teams; you can have marketing and sales organizations do them too. Encourage that culture of experimentation. It's very essential.

Michael Hartmann:

Yeah, I love that. Certainly, there are no 40-hour teams out there. I actually just talked to a coaching client recently, and I said, nobody on the executive team cares that you're busy. Everyone's busy, right? So get over it. Now, all this implies that there's, and I think you called it this, a North Star for AI, right? What do you mean by that? And when you go in and talk to these new clients, or others, how do you help them think about what that should be, identify it, and then articulate it well?

Aby Varma:

Yeah, so think of it almost like a pyramid. The value-based approach that we spoke about earlier is at the very tippy top, and that, to me, is your strategic North Star of what that value is. So, for example, the value in your organization could be all about business growth. Your value could be all about client acquisition or client retention. Or the value could be, and this is a real-life example from working with an agency, where their entire North Star for their AI strategy was not to get rid of people on their existing teams, but to reduce the headcount per new client. They wanted to pivot the entire organization. They get a new client, say a two-million-dollar account. What they don't want to do is go hire four people to service the account, which is what they traditionally did. They're like, hey, we want to add one person plus AI, or two people plus AI, where two people plus AI is more cost-effective than four people, and faster, and maybe with improved quality and insights in some instances. So you make sure that North Star becomes the underlying decision point. We physically do this exercise in workshops, but just for your listeners: imagine every use case is a Post-it note. In a half-day workshop, it is not uncommon for the wall to have 50-plus Post-it notes. And it's funny, because the moment you work with the leadership and add a strategic North Star, saying, hey, our focus is going to be on retention because our churn is really high, you will see all the non-retention Post-it notes magically wither away. I'm not saying they're not important, but their relative importance becomes secondary.
So when we say a strategic North Star for AI, it is really making sure you're answering the question of why, and why now, and what are you focusing on. A great example: we were working with a company whose CMO and marketing leadership wanted to redo their digital asset management system, which had been there for a decade. And if a DAM has been there for a decade, it needs some cleaning, right? They had millions of assets, and they thought, oh, perfect use case: can AI scan all these things and recategorize them based on our new go-to-market? Great use case. Definitely AI can do that. That's one. The second thing: they're entering brand-new markets in Europe, and they want marketing to support them in that go-to-market play. What do you think got the most value? Again, the digital asset management was not a bad use case, but relative to that North Star being business growth, all the effort where AI could play a part went toward the new-market go-to-market play. So that's an example of how your North Star can guide everything. It prevents random tool adoption, disconnected pilots, misplaced investments. Everything you do is aligned to something bigger.

Michael Hartmann:

Yeah, it makes sense. So another thing you talked about, and it feels like what you're describing helps with prioritization, which totally makes sense: one of the things you said when we last talked is that productivity improvements are sort of table stakes in the adoption of AI. So what should people be thinking about beyond that? You've touched on it, I think. And maybe that idea you just talked about with the DAM falls in that category: that would be a productivity benefit, at least short term. Maybe there's more too, but the other one felt like it was more of a high-value use case aligned with business goals. Is that what you were talking about there?

Aby Varma:

Yeah, I think when it comes to AI use cases, productivity, when I say table stakes, is pretty much embedded. That's why AI is already so popular. Regardless of whether it's generative AI or analytics or creative or whatever use case, it is essentially helping you do the same thing in less time, or at better quality, than you traditionally would. So when I said productivity is table stakes, that's what I meant. But I think teams should really think beyond that, like the value-addition discussion we had. To me, there are three pillars. Productivity is the underlying horizontal layer, and on top of that, think of three pillars: speed, precision, and data-driven insights. Those three things sound simple, but they are things we have chased in the past with moderate success at best. I think the best example that exemplifies all three pillars would be personalization. It's been the holy grail for marketers. Sure, we want to personalize things, but pre-AI it has not been easy: it's time-consuming, and it's not truly personalized.

Michael Hartmann:

It's like, maybe you're part of a group that's bigger or smaller, but it's not truly personalized.

Aby Varma:

Correct. It's been: we come up with segments, then we personalize messaging to the segments, then we do the outreach, and hopefully those segments line up with the messaging and the message resonates. That's been the traditional way of doing things. But now with AI, you can have a segment of one. You can really personalize things, and do it fast. So to me, if your AI use case is able, beyond productivity, to achieve all three pillars, speed, precision, and data-driven insights, amazing. But even if it achieves one or two, to me, that's a win.

Michael Hartmann:

Got it. So one of the things I've noticed, at least in my own life, and I think you've echoed this as well: it took me a little while, but on a personal level, unless I know something's going to have a very binary answer, I almost never use a search engine anymore. If it has even a hint of being complicated, or is going to require, I'll call it in quotes, research, I tend to use an LLM for that now. It's pretty much a normal thing for me at a personal level. But even at work, even at a place that's really pushing for adoption, I don't see a huge amount of adoption that's obvious, other than from a small set of people. What do you think is the challenge there?

Aby Varma:

You mean personal adoption versus organizational adoption?

Michael Hartmann:

Yeah, yeah, yeah.

Aby Varma:

To me, when you're playing with AI individually, experimenting with it, it's just liberating. There's freedom. You can do whatever you want, play with it, and get whatever results you can. There are no barriers, no box you need to fit into, no governance, no compliance, none of that. But the moment you extend that out to a larger organization, there needs to be a level of orchestration, because everybody's not doing things the exact same way. Say there are four copywriters using AI to write content. Well, you can't have the content sound four different ways. You want to make sure your brand voice and tone and messaging are all in alignment. So now you want to put in place a way where, regardless of how those four copywriters prompt, the output is consistent to some degree. Then you have to have your governance, your do's and don'ts: make sure our tone of voice is not bombastic, it's humble and fact-based, or whatever, I'm just making that up. Now you've got to come up with those rules and make sure you're following them. Then you start talking to compliance teams, and IT and legal organizations are very careful about what kind of input is going into your AI, right? So the challenges become very different. There are basically more strings attached to the way you use it. And success is predicated not just on individual performance; it's orchestrated performance, if you will, team performance. That, to me, is critical.

Michael Hartmann:

Yeah, that makes sense. Yeah, I mean, I'm thinking about this even at the personal level. I think I mentioned to you before we started recording that I'm doing some stuff with my wife, right? And just even adding a second person into the mix makes it complicated, let alone a broader organization. So that definitely makes sense. Um, like we talked about before, you mentioned another phrase that I want to get to: something you call the human-AI sandwich. I can see you smiling. So what is that? And how should we think about it?

Aby Varma:

Yeah, absolutely. So, shout out to one of the members of the Marketing AI Pulse community who used this term, so not my term, just acknowledging it, but I loved it. Because to me, AI for the foreseeable future is going to be the human-AI sandwich, where you really have humans at the start and the end of the process, and you have the AI in the middle. So humans are triggering and defining the problem and articulating what they want AI to do. AI does the heavy lifting, whether it's research, generation, analysis, what have you. And then you have the human in the loop at the end of it, looking at the output. And I feel that's the best way to leverage what AI can do for you today. It keeps the output intelligent and emotionally resonant, if you will, based on the values of the human who is driving and seeking the output. I referred to the term AI slop earlier when we were talking. And to me, I see AI slop where, quote unquote, the two slices of bread on either end fail, right? Where the input given is very generic and there's no human review at the end to guide it. So you're pretty much taking AI output and using it as-is, and that does not give you the best output. But I feel the human-AI sandwich premise is going to change in the agentic future that stares us all in the face, right? With agents, agents are going to be designed to handle what today human beings trigger in terms of what we want AI to do, and that is changing fast. I think humans will still have a role to play in it, but how that translates into the human-AI sandwich remains to be seen. Still, I certainly feel it is a critical recognition of what I said earlier: that humans are definitely the protagonist in the story.

Michael Hartmann:

Yeah. It's interesting, because I remember talking to somebody, it may have been on one of our episodes, where we got into a discussion of: if you were to, quote, onboard a virtual employee that's AI-based, how would you handle giving it more autonomy? And my take was, I don't think I would treat it much differently than a new employee, especially one who doesn't have a lot of experience, because early on I would probably spend more time with that employee, give it guidance and support, handle specific situations, and slowly give it more and more autonomy to make decisions independently. And I think I would do the same thing with an agentic kind of employee as well, because part of it is just trust. I need to trust it to make reasonable decisions, and I want to give it guidance, like a model for how confident it is about a decision or direction it's going to go, and whether that meets a certain threshold, right? And over time that threshold may get lower and lower just because it becomes better at doing that. So I don't know, I think that's where I'm kind of seeing it. It's interesting, because every time I say "for the foreseeable future," I think: I don't know how far the foreseeable future is anymore. It's just moving so fast.

Aby Varma:

It is true, and an interesting part is, I'm seeing this already. It's sort of a double-edged sword, because LLMs themselves, AI itself, are moving at a galloping pace, getting better, more parameters, and that sort of thing. But in addition to that, the more information and context it has, the more the output keeps improving, right? To a point where now people do have a little bit of trust in it, especially if you're using an app or tool where you've trained it on your brand voice and your style and all those things, and facts, and competitors, and keywords, and whatnot, and you suddenly start getting content which is more in alignment with what your brand would produce. But despite that, I feel that, more than anything else, there's a word of caution: humans get very complacent. You trust it three times, and by the fourth time, the risk is that the human-in-the-loop portion withers away. And I would caution people against that. You may have gotten amazing outputs the first ten times, but the eleventh time still needs a human in the loop. So every single time, human in the loop. You know, in the future that may change, but don't let your guard down after just a few times; still remain in the loop, especially if you're leveraging AI to represent your words and your ideas.

Michael Hartmann:

Yeah, it's funny, the analogy that just popped into my head from what you said: I'm in the middle of teaching one of my children to drive, and every once in a while you forget to look over your shoulder before you change lanes, right? You find out that somebody's there, and it's that reminder that, oh yeah, I need to keep doing that.

Aby Varma:

Right 100%. Love that, love that.

Michael Hartmann:

Yeah, so that's interesting. It feels like we've been a little bit of a downer here, but I still think there's positive stuff. Okay, so those were all the challenges, and we talked about some ideas on how to overcome them. How do you think about, or how do you guide organizations to build guardrails that are responsible but still enable teams to move quickly and adapt over time? What's your best recommendation on how to approach that?

Aby Varma:

Yeah, I think we have a very simple framework, and it is input, output, transparency, right? People get lost in the acronyms and the laws and the compliance, and anybody who's been through GDPR knows all about this. It's a slippery slope. But if you really abstract it and think about it in terms of input, output, and transparency, that serves as a framework for how to approach responsible use. So when we say input: establish rules for what kind of data goes into AI. If you're a marketing organization, some of that should come from your overarching company-level IT folks and compliance people. So PII data: don't download a CSV of your Salesforce and upload it into ChatGPT and ask it to do analysis. Don't do that, right? That sort of stuff. So establish rules for input. Then establish obligations for output, whatever comes out of AI, not ChatGPT alone, but any kind of AI. Make sure you're looking for compliance, for brand voice, for factual accuracy, and those sorts of things. So establish rules for what the output should be. And from a transparency standpoint, this is part governance but part organizational culture, where you work as a team. If you have an AI council or AI working team, and I've seen different names within organizations, like a working committee or whatever, whatever that is, establish rules for transparency. At what point, and how, are we going to inform people that AI has been used to produce this? I'll give you a very simple example with video. In the case of video, one of our clients, their CEO, is completely okay with an AI avatar.
And great, the avatar comes in, and he's okay with it as long as he's approved the content and it's his voice. He doesn't want to go through the logistics of recording and all that kind of stuff. So he comes on in the video, but at the bottom of the video, it says: hey, this is an AI-generated avatar of the CEO, approved and in alignment with approved messaging, and so on, some sort of language like that. So that's one way of disclosure. The other way of disclosure, same video: by the end of the video, it says, this video was made in alignment with our AI policy, go to website.com slash policy. Same video, but you've disclosed it in two separate ways. And that is a business decision that teams have to make. So input, output, transparency. Obviously I'm oversimplifying it for the sake of this podcast, but that serves as a very broad framework for every organization to think about, and then make sure you're formalizing it and implementing it. And especially within marketing teams, marketing ops folks have a great role to play in this. They should really make sure that there is a body within the marketing organization, like a small team, or even a single person.

Michael Hartmann:

Okay.

Aby Varma:

It depends on the size of the team. You know, we have a council where it's the CEO and us, so that's the AI council; in that case, that's the full organization. But in some of the bigger organizations, there is a marketing AI council where key members of the functional and regional leadership form that council. Any decision that needs to be made goes through that body. So make sure there is some structure, so that as you are deciding rules for input, output, and transparency, you can go to that body, have those open-ended discussions and debates, come out with something, formally publish it, and then spend time educating your organization on what that outcome is, what the policy is. And another thing I'm seeing is that those lines are moving fast, right? So again, it's not one and done. I would encourage teams to look at their governance every six months. I mean, when all of these AI note takers started, everybody was apprehensive about allowing a note taker in a meeting, or announcing, hey, I have a note taker, is it okay for me to record? Well, water under the bridge: a year later, there are four people and eight note takers in a meeting and nobody bats an eyelid, right? So to me, that line's moving fast. From a governance standpoint, make sure that whatever input, output, and transparency guidelines you put in place are reviewed on some sort of regular cadence, and I wouldn't go beyond six months at this point.

Michael Hartmann:

Yeah. So, marketing ops' role in this. As we finish up here, is there anything else, given our core audience of marketing ops folks, about what role they should be playing in this? Should they be proactively pushing it? Should they be following along? What's your suggestion on what they should do at this point?

Aby Varma:

I think, to me, the role for marketing ops professionals is critical. I really see them as the connective tissue between strategy and execution. They understand processes and data and governance and all these elements which are critical for AI to scale. And at the end of the day, marketing leaders, CMOs or CEOs, need ops people; they need a Sherpa to guide them along in this journey, right? And to me, ops people, just because of the nature of what they do, understanding processes, roles and responsibilities, the org, technologies, workflows, and all those sorts of things, are really well positioned to play that role of being, quote unquote, the AI Sherpa for an organization or a team. So to me it's critical. In a lot of organizations we work with, there are marketing ops leaders, and we help enable them to do that for their organization, and that works really well. And in organizations where marketing ops doesn't exist as a function, there's a little bit of a struggle as to who should own this.

Michael Hartmann:

Yeah.

Aby Varma:

And while there is a perspective that really everybody should be doing AI, which I completely agree with, because in five years it's going to be like the internet, where no one talks about it, and not doing AI would be what's newsworthy, at this point, without marketing ops, organizations suffer from that ownership gap. And that's where we work with a lot of CMOs and play that role, if you will, obviously in the context of AI, where there are no ops people. But I think it's the perfect kind of role, that connective tissue between strategy and execution.

Michael Hartmann:

Awesome. That makes sense. Well, Aby, I'm quite certain we could have gone on for another 40 or 50 minutes. But I appreciate it. Thank you so much for joining. If folks want to continue the conversation with you or learn more about what you're doing, what's the best way for them to do that?

Aby Varma:

Yeah, obviously I would love for people to hit me up on LinkedIn. But also you can go to sparknovus.com, that's S-P-A-R-K-N-O-V-U-S dot com, and in the top navigation you should see Community, where you can learn more about the Marketing AI Pulse community that we host. We definitely have events here locally in Atlanta, but it's not only a local community. We do a lot of online events, and we're doing events in different places. We did one in Charleston, we have ones planned in Miami and Chicago, so there's more coming next year. And then, yeah, we'd love to have folks tune into my podcast, the Marketing AI SparkCast. Again, if you go to sparknovus.com, you should see Podcast in the top navigation.

Michael Hartmann:

Terrific. Well, again, thank you, Aby. It's been a fun conversation. I've gotten some good ideas myself, so I'm sure it's going to help our audience. As always, thank you to our audience for continuing to support us and listen, and now view. If you have ideas for guests or topics, or want to be a guest, you can always reach out to Naomi, Mike, or me. We'd be happy to get that started. Until next time, bye, everybody.