Ops Cast

Building Trust in an Age of AI with Karen Kranack

MarketingOps.com Season 1 Episode 196

Text us your thoughts on the episode or the show!

In this episode of Ops Cast, Michael Hartmann and Naomi Liu are joined by Karen Kranack, Director of Applied AI Strategy and Experience, to explore the intersection of AI, brand strategy, and trust. Karen shares her insights on how AI is transforming marketing and operations, while emphasizing the importance of building and maintaining trust in this rapidly evolving field.

We dive into key considerations for marketing professionals as they navigate the challenges of implementing AI, from transparency in AI usage to addressing data privacy concerns and ensuring ethical AI practices. Tune in to hear real-world examples, including how AI-generated content impacts brand perception and how organizations can foster a culture of trust internally while driving AI adoption.

Key Takeaways:

  • The importance of transparency and honesty when integrating AI
  • How AI is reshaping consumer experiences and internal workflows
  • The role of ethical considerations and privacy concerns in AI adoption
  • Real-world examples of successful AI use cases in marketing

Join us for a discussion on how to leverage AI to enhance brand strategy while maintaining trust with your customers and employees.

Episode Brought to You By MO Pros 
The #1 Community for Marketing Operations Professionals

Visit UTM.io and tell them the Ops Cast team sent you.

Join us at MOps-Apalooza: https://mopsapalooza.com/

Save 10% with code opscast10

Support the show

Michael Hartmann:

Hello everyone, welcome to another episode of Ops Cast, brought to you by MarketingOps.com and powered by the MO Pros. I'm your host, Michael Hartmann, joined today by my co-host, Naomi Liu. Naomi?

Naomi:

Hi, it's been a minute, Michael. Yeah.

Michael Hartmann:

It has been. Rizzo is, I think... the day we're recording this is day two of Inbound, for...

Naomi:

I think so, yeah. In San Francisco this year, right?

Michael Hartmann:

Yeah, San Francisco. And getting ready for MOps-Apalooza, which is now less than two months away. So, yeah. I am still up in the air. I think I said on one of our recent episodes, I need to say it up front: I am open to doing something crazy and stupid. If somebody would pay for me to go, I will dress up in something, whatever. But I think we could do it. We could do a live episode maybe, right?

Naomi:

That would be great, yeah.

Michael Hartmann:

Yeah, so, all right. Well, let's get started. Today we are digging into the intersection of AI, brand strategy, and trust, a topic that continues to evolve rapidly. Our guest today to talk about this is Karen Kranack. She is Director of Applied AI Strategy and Experience at, is it Dept? Dept, okay. She's worked with major global brands like PwC and brings a deeply human perspective to AI adoption, from internal employee trust to external customer experience. So we're going to explore six key considerations for building trust in this new AI-enabled world and what it means for marketing and operations professionals. So, Karen, first of all, welcome. Thanks for joining us today.

Karen Kranack:

Thank you so much for having me, Michael and Naomi.

Michael Hartmann:

Well, good. So we framed this episode around building trust in the age of AI, and it's interesting: as I have conversations with people, I get really mixed signals. Some people are really like, "AI, do a bunch of my thinking for me," or it's a partner, and others are like, "I'm using it, but I don't trust it," right? So why do you think this is such an important and timely topic right now in the evolution of AI?

Karen Kranack:

I think it's super timely and topical because all of us are touched by it on a daily basis. So, for example, two days ago I was on YouTube, you know, where you always get served ads beforehand, right? And I saw Oprah selling Himalayan pink salt. And I thought, oh, it's Oprah selling Himalayan pink salt. And then I thought, wait, would Oprah sell Himalayan pink salt? And just in taking that pause, I looked more closely, and I could see that the audio track was not totally syncing with the video, and I realized that this was AI-generated.

Karen Kranack:

And so, you know, it's one of those moments where, if we aren't really paying attention anymore, we're just lost in what's real. With all the AI slop out there, that was a great example of AI slop, and we're being barraged with it now. That's the first reason why I think it's important to learn about these tools and pay attention to them. The second reason I think it's such a timely and important topic is because we were recently going on vacation, literally like a week ago, and I used ChatGPT's agent to both find me a gluten-free restaurant in Wellfleet on Cape Cod and book a reservation for me. And what was so interesting about it... I don't know if either of you have used that yet or played around with those?

Michael Hartmann:

I keep meaning to, but I've been playing with other stuff.

Naomi:

Yeah, I find that a lot of things that I ask ChatGPT to do, and the recommendations it gives me... From some of the vacations we've taken over the summer, half of the places are closed and it doesn't recognize that, or they don't exist anymore, or the hours are completely wrong, or they've moved locations, and it just doesn't seem to update that. And then if I call it out and be like, hey, this place actually closed: oh yeah, you're right, let me find something else. Come on.

Karen Kranack:

you can actually see on the screen that it's sort of you can see what it's doing as it's interacting with like resi or open table, and it was making tons of errors, like it was making a lot of mistakes like it, but it was catching itself for the most part and like going back and like retyping my name or selecting. You know it's first startup is selecting like one person for the reservation. It should have been two. So it was really underscoring for me too. Like it didn't make me think badly about Resi or OpenTable, but it did make me question obviously you know LLMs and like their actual abilities and where are we? And then the third thing and I think makes this so important is like I read an article in the Wall Street Journal two days ago that said there's recently been a study that proved that the less literate a person is, the more likely they're to be awed and wowed by ai and actually use it. So it kind of brings to mind interesting yeah, that old um.

Karen Kranack:

What was the? Uh?

Karen Kranack:

arthur c clark like uh, sufficiently advanced technologies are indistinguishable from magic yes, like so literally, like literally, people the more they think it's magical, the more they are likely to use it. And that brought up an important marketing question for me, which is should we be transparent if it reduces marketing efficiency? And I think, as human beings, I think one of the ways we can distinguish ourselves from machines is to be honest, right and actually try to actually provide truthful information for each other. So I think there's like this human element, but I also think the awe kind of passes sort of quickly because, like, once you start using these things, you start to get increasingly more used to it.

Karen Kranack:

It's kind of like television. The people who first saw television were blown away, and now it's just a box that sits in all our lives and does its thing whenever we want. So I think technology is very similar in that way, and I think we absolutely should continue to be transparent, because we're just going to continue to be bombarded. And brands themselves are kind of endangered, too. Thinking about the Oprah example: this is someone who is trusted; brands are trusted. What becomes that center of truth? And how do we support brands in actually promoting themselves as that truthful entity?

Michael Hartmann:

Yeah, it's interesting, because I think video is quickly becoming harder and harder to detect. But I've started noticing, I don't know if it's on Facebook or Instagram, but I get these ones that pop up, and it always begins with the same clip of Joe Rogan from his podcast going, "Have you seen this one, Jamie? Go pull this up," right, to his co-host. And it's like, wait a minute, I started noticing every one of these is starting with the exact same clip. I'm like, this is not real, right? There's just people trying to take advantage of how well known he is. So it's kind of like that too. I don't even know if that's an AI thing, but it's clearly... it took me, I don't know, ten to a dozen times of seeing that to kind of click and go, wait a minute, I keep seeing the exact same clip. It's weird.

Michael Hartmann:

So, all that being said, I don't know about you, but in the organizations I interact with, I keep hearing that there are actively people trying to adopt AI into their organization, and they're rushing, but a lot of them are stumbling. Some are more public than others. What are some of the lessons you've seen or learned from all that?

Karen Kranack:

Totally. It's interesting. One of the things I'm really seeing as I work with large-scale enterprise clients is they have this intense need to keep up with the Joneses, right? And so AI is top of mind. I mean, literally, when I go into meetings, it'll be the entire agency there, where we do builds from soup to nuts, and the first questions we get are: what AI are we including? So I think every organization, every company, feels this intense pressure. But what's interesting is that it still takes time to build those tools, and it's really important to assess whether these organizations actually need them. And so that's part of what my job is: to come in and say, you know, is it the appropriate time, either for a front-end experience or a back-end experience, to improve workflows?

Karen Kranack:

So what's interesting about technology and wanting to keep up with the Joneses is that a lot of things appear to be table stakes to clients.

Karen Kranack:

So I'm hearing them say things to me like: yes, we must absolutely have conversational search. Yes, we must have next-best-action opportunities for when people are looking at content or products on our site. But when I talk to them, I try to explain that these things really require thinking about what marketing platforms, tools, and technology are required to actually build them, and it's not a question of just plug and play. Adobe Experience Manager, for example, doesn't have a plug-and-play conversational search at this point in time. So, although they all want these things, there actually aren't that many that are really great out there in the marketplace, and although we need to build and work on these things, it's still a nascent technology. Even though all websites have a search on them, migrating over to, say, conversational search isn't necessarily the simplest thing to do.

Michael Hartmann:

Yeah, it's interesting. I mean, the thing that I've seen at an organization I'm at that's been most useful is implementation of an AI tool that can be added into meetings when they're recorded, and it transcribes, and it does a really good job of it. I've been truly sort of blown away at how accurate it is. I may get occasional, to me minor, things like people's names being misspelled, but that's an easy miss; humans do that all the time with my name, so I would hold no grudge. But beyond that, I haven't seen much. I'm starting to see more and more, but usually it's led by a handful of people really taking advantage of it and sort of leading the way on what has worked. So, yeah. At the beginning we talked about, I forget, was it six considerations for building trust in this world? What did you mean by that? Can you talk us through what that is?

Karen Kranack:

Sure. So really, the first one is kind of what we've led with, which is leading with transparency and honesty, and being really clear and forthcoming. One of the things we've seen in the marketplace, and I think the biggest story of the year, was probably Duolingo stating publicly, or actually it was an internal memo that got leaked, that they were going to start replacing their translators with AI. And there was a public outcry about this, especially because translation is actually something artificial intelligence can do really well for the most part, but there's the emotional, human piece of this, and the whole purpose of Duolingo is obviously to provide language training for people. Once this leaked, people were very angry and upset, and so there really was a lack of transparency and honesty about what Duolingo was trying to do in the background, and I think that's what really hampered them in that case. So, thinking about things like that: transparency and honesty, very important. Secondly, clearly articulating and demonstrating consumer value and benefits.

Karen Kranack:

You know, again, going beyond AI strategy and use and really thinking about: why would a person care about this? Why would your customer want this? A problem example was Air Canada. Remember this? I think this was a little over a year ago, but anyway, Air Canada had a customer who was interacting with their chatbot, and this customer was trying to get a bereavement fare, and the chatbot told them that they could, in fact, retroactively get a bereavement fare. They thought it was all good, they bought the full-price ticket, and it turned out that when they tried to do this later, they could not. And Air Canada said, oh, it's not our fault that the chatbot told you that, even though it's our chatbot. So, not good. Again, poor customer value, right? Obviously you can't tell your customer one thing and then undermine it in another way.

Naomi:

So I do have a follow-up to that, as we're talking about adoption within an organization. As someone in marketing ops, how do you balance the speed to adopt with building sustainable trust internally? For myself and the folks on my team, there are all these tools, and features of our existing tool set, that we want to implement, and it's just exploded in the last 18 months, right? But trust, I feel, as we've been talking about this year, hasn't really kept pace. There are concerns about misinformation, data privacy, job displacement. There are all these LinkedIn AI influencers talking about how they vibe-coded a platform in a week and it replaced their entire team, you know?

Naomi:

So it's just, how do you balance that internally? And have you seen examples of where a lack of trust potentially could have derailed adoption somehow?

Karen Kranack:

Totally. I have a couple of thoughts on that. The first is that I'm talking to a lot of different corporations, and what I've heard from most people is kind of like what you're saying: a lot of the people working in companies are trying out these tools on their own. So there's that piece of it, but that obviously potentially undermines confidentiality, if people are using this, you know, outside of school, so to speak. So that is a problem. And what I'm hearing is that a lot of employees are actually trying these tools and taking them back to their employers and saying, hey, we should investigate whether we can adopt this. And they're getting a lot of pushback a lot of the time. So it really seems to be cultural, top-down: a company or organization really needs their CEO at the top to be open to investigating and trying these things.

Karen Kranack:

Within our own organization, we're super into experimentation, but we're also codifying and collectively creating. The group I'm in is actually doing a large portion of our work on internal enablement: how do we pull in these tools to improve project management? How do we improve the creative tools we're using, our marketing operations tools, our media buying tools?

Karen Kranack:

So it really is about the organization's maturity cycle. For the most part, organizations know that AI is important, and most organizations are shifting. Now, in highly regulated organizations, like banks or healthcare, I'm still seeing actual enthusiasm for adoption, but the rollouts are kind of slow, where it might be: sorry, all you get is Microsoft Copilot, because it's already embedded in the Office suite, which we already have. I'm not really seeing resistance, which is super interesting, and the people I've worked with, even the people who don't really know AI or are a little scared of it, are still open to learning about it. So I think there is, overall, an openness and a willingness to learn these tools and try them, and it's really about the maturity of the organization and the willingness to adopt.

Michael Hartmann:

There's still skepticism about it, right?

Karen Kranack:

Totally.

Michael Hartmann:

Yeah, yeah.

Karen Kranack:

Which is well-founded. I mean, obviously we know that AI tools often train on your data. The enterprise models, like Gemini or ChatGPT, will actually say: this model is not training on your corporate data. So it's very important for corporations to get enterprise licenses.

Michael Hartmann:

Sure, yeah. Okay, so you've covered, I think, two of the six today, if my math is right. What are the other four?

Karen Kranack:

Okay. So number three is proactively addressing and managing privacy and data concerns, and explicitly communicating to people how their data is being used. The big stinker in the marketplace, I think, has really been Meta, because their algorithms are a black box. Nobody really knows what's happening with your Instagram, Facebook, or WhatsApp accounts; whatever their claims are, we don't really know. And I think that has really underscored a lack of trust, versus, say, Apple, which, interestingly, is falling behind in the AI race because of their emphasis on privacy. Apple's tools mainly run on device, so when you're using AI tools on your Apple phone, whatever personally identifiable information there is about you is staying on your phone. Whereas when you're using Meta, you're obviously helping feed the beast, as it were, and helping train all their models, no matter what you do on it. So definitely, proactively addressing privacy is important. Fourthly, committing to ethical AI and actively mitigating algorithmic bias.

Karen Kranack:

This is one where it's been scientifically proven and well documented that there is bias inherent in these systems. For example, Amazon got themselves into a pickle a couple of years ago. They had an AI-based hiring tool that would look at resumes, and because historically Amazon had gotten more resumes from men than women, it was deprioritizing women. If a resume said anything about women, like the candidate had been involved in women's sports or something, it would actually kick out the resume and not put the candidate forward. So obviously they recognized this bias was happening and it was a problem. And, interestingly, I just read that NVIDIA is using synthetic data sets to reduce bias. Because they know there was a lot of inherent bias in the way LLMs were trained, they're actually trying to come up with synthetic data that is more generic, that will create a more level playing field.

Michael Hartmann:

Very interesting.

Karen Kranack:

Fifth, emphasizing human oversight, empathy, and accountability: really just reassuring users and consumers that human oversight remains, that things aren't just going into a black box. And someone who's doing this well in the marketplace, especially for MarOps, is Salesforce. With Agentforce, they're really positioning their AI-based tools as your partner and emphasizing the human element in a way that I think is really refreshing: that they want you to be freed up to do more of your tasks. They're really emphasizing that empathy piece, which is pretty profound.

Michael Hartmann:

Yeah, I thought it was. I saw just today a headline, a Salesforce thing, where they described themselves as an AI-based CRM platform or something like that. I was like, oh, they're just going to throw AI into the name of this version of what they are, right?

Karen Kranack:

Totally.

Karen Kranack:

I mean, I think all of these major platforms are a little out over their skis in terms of what the AI can actually do, to some degree, sure. But yeah, the last thing, finally, is just really educating and empowering consumers.

Karen Kranack:

You know, to demystify AI. And part of that is, again, giving explanations for when and where it's going to be used. There was a lot of controversy over Midjourney and other image-generation tools just siphoning in artists' work from all over the world without any permission, right? And that's really undermining to people. So, looking for ways to actually say: hey, we are using this data to train, or we're not. Or, you know, Unilever has an AI called UniBot, which has a robust training blueprint that actually provides some education for people. So letting people know: this is why we're using this. It's not just some random feature that we're bolting on to keep up with the Joneses, because that's not helpful either.

Michael Hartmann:

Yeah, so I'm just curious, because my experience with AI being used inside organizations has been mostly for, as one CEO I know called it, a "time dividend," sort of leveraging it as a tool. It felt like a lot of what you described in those six things was mostly in the scenario where the tools are used either consumer-facing or close to consumer-facing. Do those same principles apply when you think about it from the standpoint of employees, who are the customer in this case?

Karen Kranack:

Totally. And I think employees have to really have a great reason for using these tools.

Karen Kranack:

We're doing, and I'm personally doing, a lot of implementations looking at internal workflows, particularly around CRM, and helping optimize those workflows for employees.

Karen Kranack:

And one of the things we're finding: there was one major technology company we worked with where there were basically a hundred steps to pushing these emails out through this marketing workflow. And we found there were literally about 30 places it could be enhanced with AI, but the other 70 were really about improving human communication. And so that's one of the ways we actually support organizations to help build that trust, because we come in and we say: this is something you can be improving on, but it's also about looking at your actual workflow and your human interactions. And people can then see: okay, you might be able to use this LLM to, say, improve your generative engine optimization so that you get better SEO-like traction on LLMs themselves. You can use this little tool to do that, but it's not going to take away your larger job, for example.

Michael Hartmann:

Right, okay, yeah, that makes sense. It's interesting, because I think it'll be these two scenarios, customer-facing (or client-facing, or consumer-facing, whatever you want to call it) versus internal, and how that plays out. Because it feels like a lot of the things are similar, and the nuances are where you get into the differences, I think.

Karen Kranack:

I'm sorry, do you mind if I...?

Michael Hartmann:

No, no, go ahead, go ahead.

Karen Kranack:

Oh yeah, I'm sure you saw the MIT study that came out last week that said 95% of AI projects fail.

Karen Kranack:

But what they did say does succeed, to your point, is internal operations optimization; that's actually the place to focus, more than the customer-facing side. And that's something I am seeing in the work that I'm doing: creating those relationships with the internal teams and then helping them level up has definitely seen more success than, say, some CPG saying, oh, we just need to stick a chatbot on our website.

Michael Hartmann:

True. It feels like MIT's got people out there researching why AI is a bad thing, because I think there was a study that came out of them, or maybe it was Harvard, so it's up there in Boston, right, where they had students come in and either not use an LLM at all, use it as a partner after writing something, or just use it to basically generate. And they basically said it makes you dumber; at least, that was the shorthand people used promoting it. And I was like, well, first off, that's a specific case that's probably not generalizable, but it also kind of matches what I would expect to some degree. When you use it as a tool, and I have found this with my use of ChatGPT, which is what I mostly use, if I don't actually spend time to do a good job of providing input context and reviewing what it generates, it's not great.

Naomi:

You know, Michael, I used to be amazing at math, and now I'm like, okay, what's 10%? Let me get my phone. It's true, you don't use it, right? And we have all these other tools. But at the same time, it's like, why do you need to do back-of-the-napkin math anymore? Longhand math, right? So if you don't use it, you lose it, right?

Karen Kranack:

It's true. We just have to hope we don't lose the technology.

Michael Hartmann:

Well, I think it's interesting. This is a total rabbit hole we could go down, but I get hooked into these videos of people doing survival-type stuff, right? And I think part of that is this desire to not forget how to do things by hand, or without all the technology and tools that we have. I don't know what it is, maybe I need some sort of psychologist to evaluate me, but I do fear that. I do think that if all we did was let AI do all this heavy thinking, then we would lose some of that. But I don't think that's the way we should use it, to your point. I mean, calculators came around, and the abacus was around before that, and all these things...

Michael Hartmann:

Right, they're all disruptive technologies. Tractors came around to replace, you know, cows pulling plows behind them.

Karen Kranack:

But I think you're making a good point about not losing critical thinking skills. I think that's the key. Whether you're talking about education or the workplace, we still need to be thinking big picture; we still need to be able to pull things together. And I just read another article, too, saying that taste, human taste, is actually something that AI can't replicate.

Michael Hartmann:

Yeah.

Karen Kranack:

So, you know, how can we...

Michael Hartmann:

Well, I was talking to somebody who's in a journalism role, and we were talking about something else, but at the end I was like, what do you all think about it? Because the people I know who are actual writers, they're not really fans of it. But she said something interesting: because they are in the business of breaking news, new stuff, AI can't really replace them. AI is dependent on other stuff to consume from, right now.

Naomi:

Right now, yeah. Interesting stuff.

Michael Hartmann:

Yeah, right now. Okay, so let's keep going. We were talking about customers versus internal. So, external customers, let's stay focused there. I mean, you touched on a few missteps, but are there major themes of missteps that companies are making when they're introducing AI into their customer experience or customer-facing interactions?

Karen Kranack:

I think the biggest missteps, honestly... first of all, it's expensive to implement these things, and I think it's that sort of willy-nilly not putting a lot of thought into it. We do hear from some clients, "we need AI," and it's sort of, to do what? And that's part of what we're trying to figure out: how do you really serve that customer value? So I think the missteps are really just putting the time, effort, and money into something that's not really going to go anywhere, which is why I often recommend doing a proof of concept or a pilot study, rather than sinking a ton of money into overtly doing things that may not matter. The other piece of this, too, is that it takes a lot of effort. People think that, sure, we have a search engine, now we can just plug and play conversational search. But the truth is, you need to actually set up that LLM, train that LLM.

Karen Kranack:

It needs to observe the interactive behavior of customers over time. And so I think there is this misunderstanding in the minds of some clients where they think, well, search is table stakes, therefore conversational search should be table stakes. I'm seeing that quite a lot: this misunderstanding of the technology itself, where we are, the abilities of that technology, and the time it still takes to make really great technology. Which kind of brings me back to what you were saying: you have to actually put a lot of thought into how you're engaging with AI. You can't just say, "write me a poem," and expect to get as good a poem as if you gave it something as a framework to work from.
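
To make "set up that LLM, train that LLM" concrete, here is a minimal sketch of retrieval-grounded conversational search in Python. The site_index object and its search method are hypothetical stand-ins for a pre-built index of site content, and the model name is illustrative; this is a sketch of the general pattern, not how any particular platform, Adobe Experience Manager included, implements the feature.

```python
# Minimal sketch: retrieval-grounded ("RAG") conversational search.
# Assumptions: `site_index` is a pre-built vector index of site content
# with a `.search()` method (hypothetical); the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def conversational_search(question: str, site_index) -> str:
    # 1. Retrieve the site pages most relevant to the visitor's question.
    top_pages = site_index.search(question, k=3)
    context = "\n\n".join(page.text for page in top_pages)

    # 2. Answer only from the retrieved content, so the bot can't invent
    #    policies the way Air Canada's chatbot did.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided site content. "
                        "If the answer is not there, say you don't know."},
            {"role": "user",
             "content": f"Site content:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Building and maintaining that index, and grounding answers in it rather than in the model's memory, is the work hiding behind "plug and play," which is why conversational search takes more than flipping a switch.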

Michael Hartmann:

I'm laughing because my first sort of playing around with ChatGPT was to get it to write rap lyrics for...

Karen Kranack:

Nice.

Michael Hartmann:

That was a long time ago; it feels like forever ago. So I'm curious, because, I know, we talked about this before we started going live here, but there's an organization I work with where AI is being really embedded into the way the company works, and it's from the top down, but there are dedicated resources for it too, to try to drive that adoption. Are you finding any difference? Are you seeing that much with your clients or the people you're talking to, and do you see it making a difference in the success rate?

Karen Kranack:

I see a huge difference, and it's interesting because it kind of depends. Especially with the highly regulated companies: if they really put forward a dedicated staff to work on it, I'm seeing them land on very successful use cases. Say, and this is not a client, but I've read about a mortgage lender creating a chatbot that can actually access secure data without pulling that secure data into the chatbot responses, so it can relay it, but it doesn't get trained on or stay with the chatbot. Things like that are very successful. But there are cases, too, where... I think it really is about the organizational maturity and organizational buy-in: are they willing to invest in the time and the people to solve those internal problems? If they're not doing that, if they think it's a band-aid, if they think, oh, we just need to throw something up on our website, that's not going to work.
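
Here is a minimal sketch of the relay pattern Karen describes, under stated assumptions: get_loan_balance is a hypothetical stand-in for the lender's secure system of record, the one-word intent routing is an illustration, and the model name is arbitrary. The model only classifies what the user wants, while deterministic application code touches the secure data and formats the reply, so the sensitive value never enters the model's context or any training set.

```python
# Minimal sketch: the model routes the request; app code relays secure data.
# `get_loan_balance` is a hypothetical stand-in for a call into the
# lender's secure system of record; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def get_loan_balance(account_id: str) -> str:
    # Placeholder: fetch from the secure system of record.
    return "$182,340.12"

def answer(user_msg: str, account_id: str) -> str:
    # 1. The LLM sees only the user's question, never the account data,
    #    and is asked to do exactly one thing: classify intent.
    intent = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify this banking request as BALANCE or OTHER. "
                        "Reply with exactly one word."},
            {"role": "user", "content": user_msg},
        ],
    ).choices[0].message.content.strip().upper()

    # 2. Deterministic code relays the secure value to the user, so it is
    #    never part of a model prompt or response that could be retained.
    if intent == "BALANCE":
        return f"Your current balance is {get_loan_balance(account_id)}."
    return "I can help with balance questions. For anything else, please call us."
```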

Michael Hartmann:

Yeah, that makes sense. So I want to go back to the internal folks a little bit, because I do think you've got a real mix of people. When you're rolling things out internally, what's your guidance for getting the best chance of success with rollout and adoption of AI systems internally, for, like I said, the time benefit there? And if you've got any examples, that'd be great.

Karen Kranack:

Yeah, success really comes with, again, looking at those workflows, like that marketing CRM workflow I talked about. Success there was, again, talking to the people. It's funny, I've taught qualitative research methods, and it's still one of the most important parts of this job. Actually, I was listening to another one of your podcasts, and it was about how doing this work is almost like psychology, being a therapist in a way. You've really got to bring in the human element and really talk to people. That's how I've found the most success: talking to clients, sitting down with them, and literally mapping out their workflows with them. I do a tremendous amount of that mapping, and they can see what it looks like, they give input to it, and they understand, oh, this is very clear. I think providing that clarity is what provides the most success for clients. You can't just say to a client, or anyone really, go adopt Adobe Experience Manager for the cloud and then figure it out.

Karen Kranack:

I actually did a keynote talk at Adobe about Adobe Experience Manager and GenStudio for performance marketing, and it was a hands-on learning session. I think that's the really important kind of work we need to be doing, that educational piece, and I think that's where I personally provide the most value. Because it is about coming in, working with people, being very one-on-one and collaborative, and then you get the organizational buy-in, and then those folks you're working with can obviously cascade that out to the other employees who need to learn.

Michael Hartmann:

Yeah. I know Naomi was probably reacting to the idea that you just roll out technology for technology's sake, because I know you still measure adoption on your technologies.

Naomi:

For sure, yeah. Because, well, especially if there are tools that are user-based licensing, you kind of need to be able to make sure that doesn't get overlooked. And I feel like we could have an entire episode just based on internal trust, employees, and adoption: technology adoption, change management, all of that stuff. And, you know, the thing that I have been finding that I'm coming up against right now, internally...

Naomi:

I think it would be interesting if there were some kind of playbook for building confidence with your colleagues who feel unsure about AI, or feel threatened by it, or where you have digital-native employees versus people for whom technology is maybe not their strong suit. It's kind of threading the needle, and I've asked myself and the team about this: how can we, as marketing ops leaders, champion adoption internally without making it feel like some kind of forced compliance? Because I just never think that's going to turn out the way you want it to.

Naomi:

I almost feel like every time I use the word AI internally now, I just never really know the reaction I'm going to get. So something that I've been thinking about as well is putting together an AI boot camp, almost like an internal training session for the people I work with on a daily basis, to make sure we're all on the same page. Because it's something that I consume and learn about quite extensively, but that knowledge gap needs to be bridged, right?

Karen Kranack:

Yeah, absolutely. And you can even do that with something fun to start. You know, like a mini vibe-coding session with Claude. I've done this with people, where it's literally: pick something you like. I had it make an interactive crossword puzzle based on the TV show Severance, or something like that. Just getting people to get familiar with something enjoyable.

Michael Hartmann:

Totally, yeah. I mean, I think it would be good to do. I know as an individual I've tried to do things like that. I wouldn't say I'm an expert, because I don't know what that even means for using ChatGPT, but I feel like I've gotten pretty darn good at it as an individual. I haven't tried the ChatGPT agent thing, but it's on my list. I've tried other things, and, like, I did one the other day where I wanted to...

Michael Hartmann:

I had an audio track for something, and I wanted to transcribe it. I was like, oh, there's got to be a free, low-cost option for that. And there is, but it requires coding in Python to use an AI API. I'm like, I'm not going to do that. So I tried something else with a kind of drag-and-drop solution, and it worked great.

Michael Hartmann:

But I'm on a free level and went right through all my AI credits in no time. And that's the part for me: I want to try doing that stuff, and if I had a company that was willing to support that and spend a little bit of money to try it, I can think of lots of things that would be super valuable, right? That's the part I'm struggling with. I want to learn, I want to try to do more, and try not to spend a lot of money on it as an individual. So if you have any suggestions for where people can try to do some of that, or how to convince their organizations to do it, I'm all ears.
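
For what it's worth, the Python-plus-API route Michael mentions is shorter than it sounds. Here is a minimal sketch using OpenAI's hosted Whisper model; the file name is a made-up example, and hosted transcription is metered, so it is low-cost rather than free.

```python
# Minimal sketch: transcribe an audio file with a hosted speech-to-text API.
# "episode_audio.mp3" is a hypothetical file name; API usage is metered.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("episode_audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # hosted Whisper speech-to-text model
        file=audio_file,
    )

print(transcript.text)
```

Running the open-source Whisper model locally (the openai-whisper package) avoids per-minute charges entirely, at the cost of setup and compute.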

Naomi:

Can you use ChatGPT? Sorry, I was saying, can you use ChatGPT to learn about ChatGPT?

Karen Kranack:

Right, yeah, I've done that.

Naomi:

Yep.

Karen Kranack:

Totally. And one of the things we've done, on our internal enablement team, is collect data by just surveying people internally: what are you using? And then going out and trying to get licenses for people to try, maybe the most popular tools.

Michael Hartmann:

A lot of schools are saying no LLMs, you know, you can't cheat, which is interesting. My son, who's in high school, we just got through his curriculum, and they're actually specifically calling out: we're going to tell you when and how you're allowed to use these tools as part of your work, which to me is a much more reasonable approach. What I worry about for my two who are in college is that when they come out, because they were told not to use it, they won't have that experience, but all these companies are going to be expecting it.

Karen Kranack:

Interesting, are they? What I tend to see at the university level, having taught at the university level, is that it tends to be instructor-based: whether they can use it, and to what degree.

Karen Kranack:

What I've basically seen is that, to your point, it can either be banned entirely, or there can be a hybrid approach, which is what I personally use. So I teach qualitative research methods, right? And one of the great things about qualitative research is that you can't fake it.

Karen Kranack:

You actually have to go out and talk to people. But I do allow my students to use LLMs to analyze that data, or at least get some ideas or thematic concepts from that data using LLMs. But they have to tell me and prove to me how they did it; you can't just go do it and turn in a paper and say that's that. And then, of course, there are those who allow unrestricted use.
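
As a concrete picture of the disclosed, bounded use Karen allows, here is a minimal sketch of an LLM-assisted first pass at thematic coding; the interview snippet and model name are invented for illustration, and a human researcher still has to review, refine, and defend the themes.

```python
# Minimal sketch: LLM-assisted first pass at thematic coding of
# qualitative interview data. The excerpt is invented; the point is that
# a student can show exactly what the model was asked to do.
from openai import OpenAI

client = OpenAI()

interview_excerpt = """\
P1: I tried the new tool, but nobody really showed us how to use it.
P2: Honestly, I worry it will quietly replace parts of my job.
P3: When IT rolled it out, there was no explanation of what data it stores."""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a qualitative research assistant. Propose 3 to 5 "
                    "candidate themes, each with one supporting quote."},
        {"role": "user", "content": interview_excerpt},
    ],
)
print(response.choices[0].message.content)
```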

Karen Kranack:

It's funny, I was thinking about this question, so yesterday I asked Gemini: what is the best path forward for students? And it said a hybrid approach is the only way forward, or something. You know what I'm saying? Of course the AI is like, you must use AI. But I think the reality is that it is a hybrid approach, and I think the most important thing is teaching people critical thinking skills. That has to come both from parents and from the schools themselves. And that's even thinking about, and this is getting esoteric, but, you know, logical fallacies. What is a red herring argument? What is a straw man?

Karen Kranack:

And starting to think, when you're using these tools: are you getting back content that actually makes sense? And then, for yourself, like my students, saying: I have to start with a draft of my paper; I can't ask ChatGPT to start the draft of my paper. It might be able to help me brainstorm ideas, but it can't be the main tool that I'm using. And I think it's unfortunately something that kind of has to be investigated and taught. Human nature pulls us toward a bit of laziness sometimes.

Karen Kranack:

You know, kind of like, sure, I'll just get rid of that. But I do think young people today have to learn to use the tools, maybe using them in very specific contexts: teach me how to code something that will analyze this data, or take my data and turn it into graphs and charts that I can then put into my presentations. I think it's about looking at things in manageable chunks. And I'm hearing, too, about instructors...

Karen Kranack:

...doing oral tests, where you can turn in a paper, but then you also have to make a sort of oral argument around your paper, for example. So I think it probably is a hybrid approach, but there's no easy solution. Actually, going way back to my Oprah example: it's almost like, when you're working with any of these tools, can you take a pause and bring more consciousness to what you're doing and how you're interacting with them, and really put some thought and care into it? I'm not sure of the exact way to teach that, other than to model it, right?

Michael Hartmann:

I think it's interesting. You've brought up the term critical thinking at least twice in this conversation. I have a big criticism of the educational system, at least in the US; I'm not going to say anything about Canada. I think our educational system has kind of lost that as a core skill; it's more about compliance and testing and things like that. But it's interesting: today I was in the car listening to a totally different podcast, and the guest, I forget her name, started a school that is using AI agents. Truly. It sounds like, and I haven't seen it, students get only two hours a day using some AI tools to do their core curriculum stuff, and it meets them where they are, regardless of their grade level. And then the teachers aren't teachers in the sense of up there lecturing; they're more like coaches.

Michael Hartmann:

And I thought it was interesting because she also brought up critical thinking; she repeated that over and over when she was talking about the way they do it. They're focused on teaching that, and discernment, and they're not just opening up ChatGPT for these kids. It sounded like they're using some bespoke solutions that pick and choose the engines behind what they're doing for each student. But I thought it was really interesting. This is the first time I've truly heard about AI being embedded into education where I thought, that's really out there. It's also a really expensive school, so it's not exactly scalable. That's part of the challenge.

Karen Kranack:

I mean, it sounds amazing. I have not seen that in action at the university level, but I'm sure there are some professors out there doing something similar. I think that's where we want to get to. Yeah, it's amazing; I'll have to look for that.

Michael Hartmann:

I mean, you've got one who's all the way early on. Are you seeing anything like that?

Naomi:

Yeah, I mean, she just started junior kindergarten. Yesterday was her first day. I know, right?

Michael Hartmann:

Nice. No, not at the age of four. But this woman talked about how they have kindergarten kids who are doing that, five-year-olds. So, you know, I'm just trying to...

Naomi:

It's interesting. As someone who works in tech and uses devices a little bit too much, I'm very much okay with not giving her a single device for as long as possible.

Michael Hartmann:

We did the same thing with our youngest. No, I think it's smart.

Naomi:

For as long as I possibly can, anyway. Especially after reading that book, The Anxious Generation.

Michael Hartmann:

So I just started listening to it and not reading it.

Naomi:

Yeah, it's actually a bit... you know, I'd love to talk to you about it after you're done.

Michael Hartmann:

Yeah. Well, for me, I grew up at the point where a lot of that stuff he describes started, right? The cusp of that.

Karen Kranack:

Yep. Well, one of the things to this point: I'm sure you've read about the unfortunate suicide that occurred, with ChatGPT, OpenAI, being sued. And I think one of the things that's really important is to help young people understand that it's not a person. It feels like a person, it acts like a person, it sort of seems like a person, but it is not. And obviously I saw that OpenAI is now going to institute some parental controls or whatever. But what we know about LLMs is that they can be jailbroken really easily, depending on how you talk to them; they don't always adhere to their guardrails. It's really not a totally safe space, and I think that's something we also need to emphasize.

Michael Hartmann:

Well, and I think the more they act kind of human... There was a post today on LinkedIn where someone said there's research, and I have not gone to validate it, so I won't vouch for it, but it basically said that some people did an experiment where they put LLM-based engines as people in a system that was like a social media platform, one intended to gravitate people toward controversial stuff, and it actually led to this really divisive thing with a bunch of chatbots, right? So you've got that, plus these engines that are designed to keep you engaged for as long as possible. Yeah, it's going to be an interesting time over the next few years. I was going to say the next five to ten years, but I suspect the window of time is actually shorter than that.

Karen Kranack:

So, yeah. Am I allowed to say I actually think no one should be on social media at this point?

Michael Hartmann:

I think you can; I think that's okay. There was a point in time where I deleted stuff off my phone, and then I slowly added it back on, and now, kind of on a daily basis, I'm like, what am I doing? Because, like we've told our kids: don't let it control you; you control it. That's the thing we need to do. Karen, this was lots of fun. I don't even think we scratched the surface of what we could have covered with you, so thank you so much for joining us. If folks want to learn more about what you're up to and what you're doing, what's the best way for them to do that?

Karen Kranack:

They can catch me on LinkedIn. It's just Karen Kranack, K-r-a-n-a-c-k.

Michael Hartmann:

Yeah, got it. Okay, perfect. All right, well, thank you again, Karen. Naomi, always great to have you, so thank you for joining. Thanks to our listeners and audience out there; we appreciate your support. If you have ideas for topics or guests, or want to be a guest, you can always reach out to Naomi, Mike, or me, and we'd be happy to talk to you about that. Until next time, bye everybody. Bye everyone. Thank you.