Ops Cast
Ops Cast, by MarketingOps.com, is a podcast for Marketing Operations Pros by Marketing Ops Pros. Hosted by Michael Hartmann, Mike Rizzo & Naomi Liu
The Future of AI in Search Marketing with Adi Abdurab
Unlock the secrets of AI's role in transforming search marketing with insights from Adi Abdurab, Senior Marketing Content Manager at KodeKloud. Explore how AI has evolved from the rudimentary visions of the 1970s to cutting-edge large language models like BERT, LaMDA, and MUM, which enhance our interactions with search engines. Adi and Michael dive into the progress of AI technology, shedding light on how it better understands human dialects and thought patterns, making it an integral part of our everyday digital experiences.
They explore the complex challenges AI faces in the realm of search marketing. From deciphering user intent to combating the flood of AI-generated content, Adi shares real-world examples and personal experiences, including the competitive landscape of SEO in the semiconductor industry. They reflect on the paradoxes of AI development, scrutinize historical AI failures, and discuss the implications of Google's content filtering on SEO strategies. This chapter promises a nuanced view of the hurdles and triumphs in AI-enhanced search marketing.
They also address the limitations and future prospects of AI in content creation and search personalization. Discover how AI is reshaping online privacy concerns and the evolving landscape of content generation, from improving technical accuracy to streamlining processes like translations. They ponder the potential advancements in voice recognition and localized content, emphasizing the enduring appeal of traditional reading formats. Engage with them as they journey through expert insights and future expectations for AI's impact on search marketing.
Episode Brought to You By MO Pros
The #1 Community for Marketing Operations Professionals
Meet Jeto, your new Marketo campaign co-pilot!
Jeto is an application that centralizes all your campaign intake into a single place by allowing marketers to easily create, launch, and manage campaigns without stepping foot in Marketo.
The best part is that it also fully automates the Marketo program builds, enforces governance, and integrates with your entire martech stack.
Ready to cut costs, speed up your campaigns, and make marketing operations a breeze?
MOps-Apalooza is back by popular demand in Anaheim, California! Register for the magical community-led conference for Marketing and Revenue Operations pros.
Welcome to another episode of OpsCast, brought to you by MarketingOps.com, powered by those MO Pros out there. I am your host, Michael Hartmann, flying solo once again. Hopefully Naomi and Mike will be back from summer breaks and just general busyness soon, but I get the pleasure today to talk with Adi Abdurab. He's currently Senior Marketing Content Manager at KodeKloud, a platform to teach students about trending technologies. Adi is a content expert who has written for many platforms, and we may let him talk about some of those, and he even has a Peabody Award to his name, so we're delighted to have him here. Adi has done all this at a number of different organizations, including founding and leading a few of them. He started his career as a television anchor and writer, among other things. He has a varied background and a passion for teaching, and today he is going to share his thoughts on how the world of search, SEO and SEM, is being affected by AI. So, Adi, thank you for joining us today and staying up late there in Pakistan.
Speaker 2:Thank you, Michael. Thank you for having me. Pleasure to be here.
Speaker 1:All right, and hopefully I didn't butcher too much of that you did not. You got all of that.
Speaker 1:I'll take that as a win, right, all these small ones each day. Okay, so when you and I talked about this, it's now been a couple months ago, I think, you said that you believe SEO is headed toward a better understanding of human dialect and thought patterns through AI technology. So for people like me who are still sort of learning what some of this means, what do you mean by that? It might even be helpful to start with what you mean when you say AI. What I'm finding is that people, when they say AI, mean lots of different things. Right, some just mean ChatGPT, and others mean it more generally. But maybe give us a little context there?
Speaker 2:Sure thing, sure thing. So AI as it was originally envisioned in the 1970s, the idea was to have computing, and generally all technology, be automated to a point where human input was unnecessary. That's the origin of the idea, and then science fiction had a lot to do with that. Science fiction has always been influencing AI, to the point that we expect there to be completely autonomous machines that are living, breathing creatures. Now, that reality might still be a couple of hundred years off, but what we have is a very close facsimile of that form of communication, where it seems almost natural to us. The origin of that, as we understand it today, comes through large language models. Before that, I mean, Apple was very proud to use the moniker of AI when they introduced Siri back in 2011. And a lot of virtual assistants were referred to as AI assistants. You know, there was Alexa, there's Google, there are all of those things, but none of them were actually truly AI. They were predictive models, trying to figure out what you're trying to say before you finish saying it.
Speaker 2:Now, as time marches on, we've become better and better at understanding humans, because we have a lot of data about different humans in different parts of the world, and we are now at a point where we expect that our interactions with search engines, and with most of these media, are going to be informed, are going to be better attuned to what we expect, to what we're intending to look for. Now, when we last talked, interestingly, I think we brought up a few models about how this works. You know, there's BERT, there's LaMDA, there's MUM. I'll spare people that detail because it becomes needlessly boring and unnecessarily complicated. But the idea is, first of all, it started off by just being able to predict the next word. So if you're asking, "where can I..." This is something we've always interacted with. There's entire shows about autocomplete. That's what BERT does, in a way: it's trying to figure out what we want.
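[Editor's note: the "predict the next word" idea Adi describes can be sketched with a toy bigram model. This is an editor's illustration, not how BERT itself works (BERT is a bidirectional masked-language model); the tiny corpus and function names are invented for the example.]

```python
from collections import Counter, defaultdict

# A toy autocomplete: count which word follows each word in a tiny corpus,
# then "predict the next word" by picking the most frequent follower.
corpus = (
    "where can i buy a mountain bike . "
    "where can i find audio amplifiers . "
    "where can i learn devops ."
).split()

# Map each word to a Counter of the words observed immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("where"))  # "can" — the only word ever seen after "where"
print(predict_next("i"))      # one of "buy"/"find"/"learn", by count order
```

Real systems replace the frequency table with a neural model trained on vastly more text, but the interface, context in, likely continuation out, is the same idea.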
Speaker 1:So just on these technologies like BERT, that's a good description of it. I know most people who are listening are going to be familiar with ChatGPT, and probably most will know that that's a large language model. Is BERT a more generalized kind of technology that is used by something like ChatGPT, and I know there are others, or is it a wholly separate thing?
Speaker 2:So it's basically a subcategory. Large language models are the umbrella term for trying to understand how we speak, how humans interact with machines, and these are subcategories of that: how it tries to understand what we're saying falls under these different terms. There's BERT, there's LaMDA, there's MUM, there's a few more, but these are the big ones we should be aware of. Now, when we originally talked about this, the expectation was that AI would evolve to a point where talking to it would be very natural and very comfortable.
Speaker 2:Now, I will probably get into that a little more as we start talking, but for now I think it's safe to say that we can confidently say AI has a mind of its own at this point on where it's headed. At one point we were controlling where it was going, but now everybody has their own idea of what a large language model looks like, and large language models are some of the most pertinent developments in AI that affect us. There's image generation, there's image recognition, there's pattern recognition. All of those things are great for medical and other applications. But as far as the average person is concerned, large language models are the way we interact with AI the most.
Speaker 1:Yeah, and it's interesting that you say that. Well, maybe I want to clarify. Are you saying that the "AI", in quotes, is sort of driving the direction of where AI is developing? Or is it that now there are more people interested and involved with it, and there are more people having an influence on where it's going?
Speaker 2:So, all right, I can see where I phrased it like AI has a mind of its own. What I mean is where AI is evolving into, where it's leaning into. Let me put it this way. First of all, let me just rephrase: I'm constantly saying AI, but what we actually mean is efficient pattern recognition, just efficient enough that it does the job we need it to do, because people can program AI to do all sorts of things.
Speaker 2:For example, there's a very famous example. There was a guy who programmed an AI to play chess, and the AI basically experimented with the rules of chess. As soon as it finds success, it keeps doing more of that. But the problem is, the rules of chess require the kind of computational power AI is not yet fully capable of. It's still just working off of limited sets. So what it started doing was removing pieces off the table and introducing pieces on its own. Just random pieces started apparating onto the table. This is why I was saying that AI's evolution only works in certain directions. In other directions, it's still pretty much stuck where we were five, six, seven years ago. So that's why I was saying that it's kind of only letting us go in a certain direction, because that's where we're seeing growth, and that direction is better understanding humans. That's where we're headed.
Speaker 1:Got it, okay, so that makes a lot more sense, right? You're right, I was thinking you said that AI had a mind of its own, essentially. But now, if I understand right, what you're saying is that the direction in which we are really going with AI is driven by the current capabilities of AI and the limitations of some of the technology underpinning it. Okay, that's really helpful.
Speaker 1:Okay, so you mentioned some of those models. And I think when we last talked, kind of getting into search engines specifically, you were expecting that search engines using AI are going to better understand our meaning. Which brings me back to my days of building out paid search campaigns at a semiconductor company, where we had lots of different kinds of little electronic components. I'll give you an example: an audio amplifier was a really hard one for us to target, because there was so much competition.
Speaker 1:As soon as you said it, I was like, high-end audio, as opposed to the people who are making the high-end audio equipment, which is what we were, you know what I mean. So how do you see AI helping search engines to better understand the searcher's actual meaning?
Speaker 2:Okay, so, if I may, let me just roll back really quickly on what basically constitutes SEO. So there's a bunch of things that we do that make up the whole process of SEO. The first thing is, there are two components: the people who are searching, that's us, the users, and the people who are giving us the results. At this point I'm saying people, but I mean search engines, because technically, if you go to a mall and ask somebody where's Gap, that person's your search engine at that point.
Speaker 1:Sorry, I was expecting a third component there, which would be the content that is being served.
Speaker 2:The main interaction we have with any kind of search engine is basically: we have a question, they have an answer. Now, that answer could be content, and we're a hundred percent coming to that point, but it can also just be a one-line answer. It doesn't have to be a whole thing, right? So basically, it starts off with the on-page component of search engine optimization. That's all the stuff that we do on our website to make sure that we communicate very effectively with the search engine about what we're selling or what we're offering, and the search engine is able to fully articulate what we are in fact talking about, and by talking about, I mean the content we're creating. Second comes the off-page. Then there's the technical side, and there's local, if it's applicable. All of these types of search engine optimization are dependent on what we're trying to convince the search engine to do, and that is, send us traffic for a certain keyword, right.
Speaker 2:Where content comes into play is when we create a ton of content. Let's take the example of where I work currently. I work at a DevOps training company, and whenever we create DevOps-related content, we want people to come to us to understand even the remotest of technical issues related to DevOps. Now, there's no way for us to directly communicate with all of our customers other than the ones we already have; the new ones we still have to reach through search engines. And that's where we run up against a brick wall every few months, when Google runs an update and decides to change its mind on what constitutes search engine optimization.
Speaker 2:It started a long time ago with context, right. To your example, if you're looking for audio equipment, let's say audio amplifiers, those two words: if you just filled your website with the words audio amplifier back in the late nineties, you'd get a hit from Yahoo. But with Google, it needed more information before it would give you a hit. As that evolved, they started leaning toward who can write the most informatively about this, who can give the most value, and then there were a bunch of other factors over how much authority you have, how good your content is. All of those things came into play. Now, with AI, the expectation was that a lot of this heavy lifting would be done for us. So people started creating content using ChatGPT. The idea was, we'd tell ChatGPT to write something for us, it would be in compliance with Google's guidelines, and there we go, problem solved. What happened was too many people.
Speaker 1:It always goes like that, right, yeah, no problems.
Speaker 2:Yeah, no problems, exactly, no evil portals will open. Unfortunately, a lot of evil portals opened up. Everybody started doing that, and at a certain point the first 20, 30 hits from Google were all AI-generated content. So Google had to step in, and they had to start filtering out AI content. And the biggest problem, I don't know if people remember this anymore, but when AI started coming out, the biggest concern in the content community was that AI is being trained on user-generated content. But now user-generated content is starting to take a steep decline while AI-generated content is on an incline, and we're just going to be confused over who's teaching whom. And we have historical examples of chatbots gone wrong. Microsoft's AI bot launch, Tay, was an atrocity. It started off as one thing and ended up this weird racist creature that nobody could communicate with, and they had to take it down within the evening because of how much weird data it collected and how poorly it had learned.
Speaker 1:So I know I had heard about, I don't remember the exact details, but about somebody doing a search about some historical event that we knew quite a bit about, right, not-that-long-ago history, like within the last few hundred years, and it clearly came back with information that was just not accurate.
Speaker 2:Exactly, exactly. So now it's become a joke again; it's circled back to becoming a joke on a larger level. When Microsoft decided to launch Copilot, everybody was expecting Bing to become the new Google, because Google at this point has devolved into a Reddit-redirecting tool. You know, we just go to Google to look up content on Reddit, because for some reason Reddit themselves don't have a good enough search engine, otherwise we'd go directly to Reddit. So now we go to Google, we type site:reddit.com plus whatever we're looking for, and it'll pop up some good results. We're done.
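[Editor's note: the "site:reddit.com" trick Adi mentions is Google's site-restriction search operator. As a sketch, here's how such a query URL can be assembled programmatically; the helper name is invented for illustration, and Google's `q` parameter is the standard search-query parameter.]

```python
from urllib.parse import urlencode

def site_search_url(site, query):
    """Build a Google search URL that restricts results to one site
    using the `site:` operator described above."""
    return "https://www.google.com/search?" + urlencode({"q": f"site:{site} {query}"})

url = site_search_url("reddit.com", "smartwatch not charging")
print(url)  # spaces become "+", and ":" is percent-encoded as %3A
```

The same operator works typed directly into the search box; the URL form just makes the encoding visible.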
Speaker 2:Now, Microsoft with Bing Chat and Copilot, they were expecting this to be this next-generation thing. That also kind of jumped the shark. They launched it way too soon, and it gives you results, but they're not really relevant. The best example is if you have a problem with a tool, let's say, my smartwatch isn't charging, what do I do? With Copilot, if you run this experiment right now, it will first try and sell you some smartwatches. And the problem with Google is, Google will try to sell you some smartwatches from somebody else. So you cannot trust the first five or six links from either engine.
Speaker 1:This is crazy, right? And we've all been trained over the last 20 years to trust the top five or ten links on a Google search result.
Speaker 2:We used to make the joke: where's the best place to hide a dead body? On the second page of search results.
Speaker 1:I've not heard that. I may steal it though.
Speaker 2:That was a thing our content instructors drilled into our brains. Nobody's looking at page two, so forget that it even exists.
Speaker 1:That's really interesting, and it's true.
Speaker 2:I mean, back in the days of Yahoo, I don't know if our listeners are that old, but back in the days of Yahoo search, going to page two and three was the surefire way of finding something relevant, because the first page was usually full of spam links. If you went to read a page, it would just be a bunch of pages with every word in the dictionary printed like 500 times, since that way, whatever you were looking for would appear in that search and pop up there. So yeah, the problem that we now face is, when Google launched their AI answers out of their labs, oh my God, to say it was a disaster is an understatement, but there isn't a bigger word. Without exaggerating, there's no bigger word for it, because of the first things it did. I'm sure everybody has seen those jokes.
Speaker 2:The jokes just write themselves. I'll give you an example. Somebody asked, what is the best temperature to set your air conditioner at? And it said, the best temperature to set your air conditioner at is 66 or 67 Fahrenheit, or, if you want to keep it really cool, you can set it at 260 Fahrenheit. At that point it just went for half the temperature of mercury.
Speaker 1:Right.
Speaker 2:And the reason it did that, you could see, was because it was pulling information from a Reddit post, and in that Reddit post, somebody, instead of using the symbol for degree, used the letter O. And the AI, despite knowing the difference between the two, did not have the context to make that distinction.
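[Editor's note: one contrived way a letter O typed in place of a degree symbol can turn into an absurd number is a normalizer that maps O to the digit 0. This is an editor's sketch of the class of confusion Adi describes, not what Google's system actually does; the function is hypothetical.]

```python
def sloppy_normalize(token):
    # Treat the letter O as the digit 0 — a reasonable rule for, say,
    # license plates, but disastrous when someone typed "26o" meaning
    # 26 degrees: the trailing o becomes a zero.
    return token.upper().replace("O", "0").replace("°", "")

print(int(sloppy_normalize("26o")))  # 260 — ten times the intended value
print(int(sloppy_normalize("26°")))  # 26 — the degree symbol is handled fine
```

The point of the sketch: without context, a mechanical rule can't tell "O as zero" apart from "O as degree sign", which is exactly the distinction the model failed to make.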
Speaker 1:That's really interesting. Yeah, I'm hearing a couple of things, right. One is that the AI still has a ways to go in terms of inferring context for the person who's searching, and inferring context for the content that it's matching against. The second thing, I don't know that you said it explicitly, but I know I've heard about this and you can probably confirm it, is that even ChatGPT, right, which is not really a search engine per se, although I think some people are using it that way, has limits.
Speaker 1:I'm trying to think of the right term here. I'd use a similar term to what my son talks about with how fast he can go in his car. Right, there's something that's throttling him back, not allowing him to go the maximum speed, for whatever reason. And I think I've heard a number of stories where things like ChatGPT are also being limited on what they can bring back. On some things it's probably reasonable. You don't want people necessarily going out there and building a plan for how they're going to be, say, a pedophile or whatever. Right, I think we can generally agree on some of those things, but pretty quickly it gets gray.
Speaker 2:Yeah.
Speaker 1:Right when it's like some people would be okay with it, some people wouldn't.
Speaker 2:Exactly, exactly. So, with humans, we're very quick at understanding patterns, and that's something we're trying to teach AI: to recognize those patterns in a way that's actually helpful to AI and humans. But this is where we run up against a brick wall every time. AI isn't a living, breathing thing, so it doesn't really have any goals, any agendas. The idea of success or failure is completely subjective for AI.
Speaker 1:I'll add one thing: I would argue it's probably subjective for humans too.
Speaker 2:Yes, but with humans it's subjective within reason, right? If somebody was to say something stupid, at the very least there would be five or six people who would acknowledge that that was not normal. With AI, there's nobody telling it that that's not normal. That's AI just teaching itself. It's just patting itself on its back like nice AI, good AI, good job, yeah, yeah.
Speaker 2:So, the expectation with AI, let's go over that. I mean, the idea was, AI at this point should have been able to understand us much better, and should have been able to give us personalized content, to the point that if I'm in Pakistan and I'm looking to buy a new bicycle, right now, if I look up mountain bike in Pakistan, it'll run me through the local websites. It'll just tell me what's available. It's not going to help me as much as I'd like. Now, if I look up the same thing in Thailand, it'll give me a different set of information.
Speaker 2:So now, with AI, this should have been easy. It should have been able to eliminate the results that have nothing to do with me. If I cannot access those products, it should at least be able to say: if you want to buy, these are your choices in column A, and if you just want to learn, these are your choices in column B. This kind of distinction is something we were expecting to have, but instead we have come to a point where there are no columns, there's no personalization, and it's trying to do the same thing over and over again. So our biggest search engine, Google, has for the past two to three years just been a medium to redirect us to a page that's selling us something.
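[Editor's note: the "column A / column B" split Adi describes, separating buying intent from learning intent, can be sketched with simple keyword rules. This is an editor's toy illustration of the distinction, not a real search engine's intent classifier; the signal lists and function name are invented.]

```python
# Crude intent signals: words that typically indicate a purchase query
# versus an informational query.
BUY_SIGNALS = {"buy", "price", "cheap", "deal", "shop"}
LEARN_SIGNALS = {"what", "how", "why", "learn", "guide"}

def classify_intent(query):
    """Route a query into Adi's column A (buy) or column B (learn)."""
    words = set(query.lower().split())
    if words & BUY_SIGNALS:
        return "column A: buy"
    if words & LEARN_SIGNALS:
        return "column B: learn"
    return "unclassified"

print(classify_intent("buy mountain bike in Pakistan"))    # column A: buy
print(classify_intent("how do mountain bike gears work"))  # column B: learn
```

A production system would use a trained model plus the user's location and history rather than word lists, but the output, a routing decision before ranking, is the personalization step the conversation says is missing.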
Speaker 1:Right, yeah, probably more often than not, yeah.
Speaker 2:Yeah, exactly. Let's say marketing operations, right. If you want to learn more about this, the likelihood of you running into a post or content that could actually help you is going to be two or three pages down. But first it's going to be some branded site like HubSpot that's telling you what marketing ops is, right, and then, if you want to learn more, sign up for their newsletter. Then they go into that tangent.
Speaker 1:But because they have the authority and the traffic and everything, according to Google, it's fine, you just go ahead and visit them. And some of those things might be useful, but they may not be the most useful, is what you're saying.
Speaker 2:They might not be what people are looking for, yeah. Because we know, historically, Google is capable of giving us exactly the results we are looking for, right. And even Google knows that right now we're looking at Reddit for most of our content ideas, most of our solutions, because it's user-generated content, and that's where it's most useful, that's where it's most valuable. The second option is Quora, but Quora, as we're discovering, is also less reliable as we go on.
Speaker 1:Okay, it's interesting, this reminded me of something. In addition to hosting podcasts, I listen to lots of different stuff too, and one of my favorites is the Freakonomics podcast, right.
Speaker 1:So related to the books, if you've read those. And they had an episode, I was just looking it up now, an update they did back in February of 2024. The title of it is "Is Google Getting Worse?" And I think there's a pretty strong case that, in many ways, it has.
Speaker 2:Yeah, absolutely.
Speaker 1:I'll have to drop that in the show notes or something.
Speaker 2:So, I mean, whenever we're talking about SEO and AI at this point, the biggest laughingstock is Google, on both fronts. Not because they're doing everything poorly, it's not that in every case. But if you go to the site, you're mentally already filtering out the first few results. We have now instinctively gone past the AI-driven results. I don't think people stop to look at those anymore. They just skip past to the next thing, and then the first few links after that, just by looking at them, we can tell they're not relevant to us. So, in that sense, we have filtered out Google for our own benefit, but Google didn't do any of that filtering for us.
Speaker 1:Right, yeah, it's interesting. I was just thinking of my own behavior. I think I've always been one who's been more inclined to go through multiple pages of Google search results if something didn't come up right away. So for me it doesn't feel a whole lot different. I'm trying to think anecdotally: have I, in the more recent past, been more inclined to scroll through multiple pages? It feels like I have, but I haven't been tracking it, so I don't have true data. So it's interesting.
Speaker 2:So have you used DuckDuckGo and Perplexity as of late?
Speaker 1:I've used DuckDuckGo, and I do use it sometimes; I have done searches on it after I've been a little bit frustrated with Google results.
Speaker 2:Yeah.
Speaker 2:And how do you find the results on DuckDuckGo? What do you think about them?
Speaker 1:So I don't use it all the time. I don't think it's overwhelmingly better. But part of why I got it was the supposed privacy component to it. I say supposed not because I'm saying they aren't fulfilling what they say they are, but because I have no idea. And anyway, if I'm on my phone, I'm on a Google Pixel, an Android phone, which means I'm being tracked anyway.
Speaker 2:I had a friend, sorry, yeah.
Speaker 1:No, so you brought up those other two search engines, but I'm curious what your point was there, right?
Speaker 2:So just before that, I have a friend who's a wonderful writer. He used to write for Cracked.com, I don't know if you remember, that was a good, funny site. He once said that, you know, as long as I have an Android, I'm pretty sure it's already aware of what I'm thinking before I am, so there's really no point in me pretending there is such a thing as privacy. Yeah, so, anyway, I've gotten to that point as well.
Speaker 2:Yeah, so we have another tool called Perplexity. It's not mainstream yet, and they still have a paid model. It's not completely free, but there is a free component to it. Basically, it's using GPT-4 and Claude, I don't know how to pronounce that, "Claude" or "Clawed", I'm not sure. Claude is the up-and-coming AI that's competing with GPT, right, so people consider them right there, neck and neck. If GPT is Microsoft Office, then Claude is Google Docs.
Speaker 1:Is that the one that Elon Musk has started investing in? Because I know he had been a supporter of OpenAI.
Speaker 2:He's been investing in literally everything. Yeah, I think it's Claude, because he did at one point. Yes, I remember he was trying to pull back funding from OpenAI, right, because they had gone from being truly open to being profit-driven.
Speaker 2:So with Perplexity, it's much more contextual and it's much more user-driven. Back when people used to ask questions of Google, at one point all of your search queries were posed as questions. So whenever we ask, let's say, what colors do colorblind people see, something like that, if the listeners do this experiment as we talk, comparing Google, Bing, and Perplexity, you'll find a much more favorable response with Perplexity. DuckDuckGo isn't for that stuff anyway. DuckDuckGo is basically, as you said, for when Google fails you. That's when DuckDuckGo will give you something usable. And Google, because of the whole corporate thing, blocks out a lot of websites.
Speaker 2:Let's say, for example, recently people are just vehemently detesting what Adobe has become as a company these days, for numerous reasons we needn't get into. But if you want to know what happened there, Google will take you so long to get to that solution. They'll take you to articles where people are criticizing it. They'll take you to videos where people are criticizing it. But if you want to know how to bypass it, there's no information, because that is bordering on software piracy.
Speaker 2:Because the question is, how do I use anything other than Adobe? Like, you want to look up Adobe alternatives?
Speaker 2:That's one thing, but if you look up, how do I solve this XYZ problem, Google won't give you the right result because of the implication of being too close to piracy. DuckDuckGo is able to give you that information without actually leading you down to a cracked or pirated site. So, anyway, circling back to AI and search engine optimization: the biggest application, as we've always seen, is writing content with AI. Second is understanding what to do with that content using AI, and then figuring out how that relates to you, the user, and that's where there's so much work left to be done. I mean, when we last talked, I feel like I was living in a fantasy land, thinking all of these wonderful things would happen. I remember we talked about how it would be contextual to us, it would be localized to an individual, and it would be customized to your search history, because it already has all of that data.
Speaker 2:It might eventually get there, but it's not getting there any time in the next couple of months, because of the direction it's currently taken. They're going to spend more time undoing all of this, and then a little more time finding the right solution.
Speaker 1:Yeah, and I think all that is spot on. I've thought for a while that the excitement about, I'll call it, large language models, when we're talking about ChatGPT from a content standpoint, was going to take a drop, just because I think people are realizing there are limitations to it. There's risk to it if you have IP that you're putting in there that now becomes public domain, all kinds of things. And then you have smaller sets of data and content for those models, the AI, to use, right, so you get less benefit. But I do think it'll eventually get solved. I mean, it sounds like you believe that as well.
Speaker 1:Where I'm going now is, having led SEO and SEM efforts way in the past, you're right, I think you could pretty much game that through a little bit. I wouldn't even say, I never was into keyword stuffing and things like that, but content was the key, right. There are small things you could do to tweak it with your URL and your title tags, et cetera. But if your content was not using the words you wanted people to find that bit of content for, it was going to be hard no matter what you did.
Speaker 1:But now, let's take a step forward. Say these platforms figure out how to get better at understanding the context of the person who's submitting a query. Now it's like, okay, how do our content teams build content that would hopefully best match up with the context of what the searcher has? That's the struggle I have. It's kind of like the struggle I have trying to wrap my head around websites with this theoretical hyper-personalization, where pages are almost dynamically built based on that particular user's behavior. I don't know where the content comes from. How do you build content for that? Is there another AI engine that does that? I don't know.
Speaker 2:Yeah. What I can tell you, from what I've recently observed, is we're becoming more and more visual as we evolve. So people will sit through a 10-minute video essay on, let's say, how to better cool your home, rather than read a 2,000-word blog on the same topic. If I want to know how to create a low-pressure system in my own home, a 10-minute video is 100% getting those clicks compared to a 2,000-word blog, no matter how visual it is.
Speaker 1:I have two personal examples just in the last month, where I had to deal with an old car that someone tried to steal out of our driveway. They didn't successfully steal it, but they caused enough damage. I tried to figure out how to fix it myself. That was one; I ended up having to take it to somebody, exactly. The other one is we inherited a really old, it'd be like a mantel clock, right, and we were trying to figure out how to prep it for shipping. Pieces had been taken apart, but we didn't have anything to go on. I'm talking about something that, I think we figured out, was made somewhere in the late 1800s. There's no user's guide.
Speaker 2:The user guide is good luck.
Speaker 1:Yeah, I went to doing searches and I looked at the videos, not the other stuff.
Speaker 2:Exactly. Because visually, at least, we can determine, you know, this is close enough, maybe we can go further into this. A good example of that for me personally: just recently I was looking at a friend's laptop. The fans were completely caked with dust. So I was helping him out with the repairs, and I just wanted to open up the hatch.
Speaker 2:But the problem is, this laptop was built in 2021, back when the right-to-repair movement was dead in the water and there was no option for you to have access to the internals of a laptop, so everything was hard. Now, in that situation, I preferred an iFixit guide over a YouTube video. The iFixit guide is step-by-step illustrations of what you need to do. So you do this, do this, do this, and you're done.
Speaker 2:The YouTube video at that point was actually detrimental to me, for the reason that there was a five-minute introduction over the history of this device: I bought this device, they relaunched this device, then this happened, then that happened. I think only recently YouTube's introduced the get-to-the-good-parts option, where you can just click and go right to where you need the information; you don't have to sit through the whole thing. So it's still a combination of that, but it's still visual. I did not sit down to read anything. It's either an image or it's a video. So I think that sort of stuff is still going to take a while, because AI image generation is nowhere nearly as good as it needs to be.
Speaker 1:It's still pretty darn good.
Speaker 2:I mean, it depends on what you're looking for, right? If you just want to make a picture of an old man shaking a fist at a cloud, yeah, you're fine, it'll do just fine. But if you want two old men talking to each other about a certain subject, that's never going to happen. It's going to be two old men doing something else.
Speaker 2:So I ran this blog, I guess I can plug it, it's called paloaltopost.com, and the whole idea was Onion-style news for startups, for Silicon Valley. Anytime we got AI to generate the image, the generated image itself was far funnier than anything we could have written. The old idea was that there was a whole article saying old man shakes fist at cloud, saying get off my lawn, and that was the joke: your cloud computing is now an old man's lawn. This was a throwback to an old Simpsons joke. But the way that panned out was it started drawing pictures of old men holding shotguns and firing them in the opposite direction: there's a cloud, and he's firing the other way. Anyway, to your point.
Speaker 2:Yes, AI image generation is so far away from being usable that we kind of have to just accept that this is going to be a while. And that means if we need something visual right now, it's not going to be AI-generated. AI is just going to look for something and give you something, and as long as that's happening, it's not generating anything for you, it's just redirecting.
Speaker 1:Yeah.
Speaker 2:Now, since I've spent most of this conversation complaining about AI, I can give you some ideas, to make it more valuable for the listener, about the good things we can look forward to, things we could probably see within 2024, possibly 2025, not too far off into the future. The biggest thing is content quality. That's something people are still struggling with, and AI is getting really good at it. We have to acknowledge that.
Speaker 1:Credit where credit is due. Meaning, like, providing feedback, kind of like being an editor? Yes, exactly, an editor in the newspaper publishing context, right, somebody reviewing and providing feedback and trying to improve the overall quality.
Speaker 2:And it's a wonderful brainstorming buddy. So if you want to talk about something incredibly technical, it'll give you the technical information and it'll verify it for you. You still can't trust it to do the whole thing by itself, because it will eventually do something you cannot use, and it'll probably get you arrested. But outside of that, if you're writing about, let's say again, stereo amplifiers, you can just ask it which technology was good, which technology was bad, what is good for people, what is compact, what uses less power. It'll give you that information, and you can still use that.
Speaker 2:So from a content generation perspective, it's only going to get better and better. And on the search engine result page, the SERP, we can start seeing a lot of Reddit-like voting systems based off of user-generated content. I think that's something we're going to see very quickly, where people can just decide this result does not work, take it away, which again kind of circles back into Reddit. But that will start giving us more information that is relevant to us and relevant to people in that same community. And the biggest thing I'm still hopeful for is cross-language optimization. The AI will need to get better and better at understanding local dialects from different regions and help translate that for other regions. So if somebody from South India is creating content on Microsoft Azure and somebody from Ireland is having a hard time understanding it, AI needs to be able to bridge that gap, and I think that is going to come in very quickly.
Speaker 1:I think that would be hugely valuable. If I were a content marketer with responsibility for building out my website, I would love the idea of having content that's more localized, with translation that at least tries to better match the local nuance of how a language is used. But that typically runs into two roadblocks. One is that it's still super expensive to do at scale, and the other, I've found, is that a lot of people don't trust the translations that come out for those regions.
Speaker 2:That's true. Both of those things are true.
Speaker 1:Yeah, so that means you want people who can then review the translations, which is also a challenge, right? You've got a resource limitation. So if those get better and better, that would be great.
Speaker 2:Right now, the way they're kind of addressing that problem is through intermediaries. So if you have content from South India, it's probably easier to understand for somebody from North India compared to somebody from, say, the UK. So it goes through a filtration process, and that is pretty much what they're doing: first standardizing it to the location, then standardizing it to a global level. Right now this is mostly limited to YouTube videos. So if you want to generate captions, it's going to get you 70% of the way there.
Speaker 1:That's a lot.
Speaker 2:It's a lot, yeah. But not enough for technical things.
Speaker 1:But for general entertainment, yeah, it's good. Well, we don't use YouTube today to distribute this podcast, but we use other tools, and we have at least two that could and do do transcription in the process. I don't think they're using AI much, though they might say they are, so I won't mention names. But better AI might make it better, and it's something I think would be helpful for us, to get that process to the point where we don't really have to review the transcripts seriously.
Speaker 2:You don't need to worry about that stuff. That would be the end goal, right? At one point you can just say, do this, and you don't have to worry about it. For example, right now, if I asked AI to write me a blog article on something, you best believe somebody will have to check it over and over again. But if it's just, write this person a thank-you note, there's next to no chance there's going to be something suspicious or weird in there. It's going to be pretty reliably good.
Speaker 1:You can just send it in, yeah.
Speaker 2:Still, I would recommend reviewing it, by the way. Interestingly, I had this conversation with the people from HubSpot about the content we generate through AI for our platform, and their first question was: how quickly can you use AI content as is?
Speaker 2:And I said, you know, there have been times where, without being prompted, it will generate something that is generally racist, and I've seen this happen a bunch of times. It just goes into, like, this region cannot do this, this region cannot do that. Nobody asked about anything to do with regions; we were talking technology, and it just introduced it. So that kind of stuff, I feel, is actually still important. That means you still have to stay on your toes.
Speaker 1:You cannot just put it out there as it is. Yeah, I've told a number of people this. Call it early 2024, maybe late 2023, I would have put myself squarely in the skeptic camp about AI, and I've gone through an evolution: oh yeah, I see value, I think there's value. Like I already said, my prediction is that the value, particularly with the development of new content, is going to have to wait; there are some things that have to get figured out to make that a reality. I do think it's useful as long as you don't mind your content being public. If it's already public content and you want to use AI to generate, say, social posts based on that content, it does a really, really good job of that. To your point, though, I would still review it, because in my experience, depending on the topic, it's anywhere from 70 to 90 percent there. So from that standpoint, I think it's really valuable. It was also valuable, and I've used it this way, where I have some data, again data I don't care about being made publicly available.
Speaker 1:Then I tried to write something around the data, and it helped refine what I wrote: it took the key points from what I had written, pulled in the data, and merged the two into something that was more meaningful.
Speaker 1:I still had to go edit the output, but it worked really well and saved time. It was done in less than an hour, when, if I had done all that myself, categorizing some of the data points into clusters and things like that, that by itself would have taken probably a day. So I think there are some benefits. I actually believe the next big wave for AI is going to be around insights. Even within a single entity, keeping it to just marketing and sales, there's going to be enough volume of data that those engines could provide some insights that just wouldn't surface otherwise. Humans are good at pattern recognition, but doing it at scale is not really doable right now without that kind of technology. I'm actually really excited about the potential for that.
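[Editor's note: the "categorizing data points into clusters" step Michael describes can be sketched without any AI service at all. This is a minimal, hand-rolled one-dimensional k-means; the engagement scores and cluster count are invented purely for illustration.]

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Group numeric points into k clusters by iterative refinement."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        # Move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# Invented example: scores that visibly fall into a low and a high band.
scores = [2, 3, 4, 30, 31, 33]
low, high = sorted(kmeans(scores, k=2), key=min)
print(low, high)  # the two bands separate cleanly
```

A real marketing dataset would of course be higher-dimensional and far larger, which is exactly the "pattern recognition at scale" point: the mechanics are simple, but humans can't run this by eye across millions of rows.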
Speaker 2:I know we're almost out of time, so I'd like to add a few more things about what we can expect with AI and search engines in the future, and then I'll hand it back to you. One thing is, right now search intent is the key string that we type into Google. It's going to evolve into voice recognition. So it's no longer going to be "what is inhaler XYZ." It's going to be you having a long conversation, as we always do: you know, that inhaler we used, it was used temporarily, it was good, then it stopped being good.
Speaker 2:All of those things are going to become part of the search data. It's going to take all of these ums and ahs and variables, and it's going to give you something you can use. That's very exciting. The other thing is user experience. If you go to a website and the user experience isn't good, people are going to hop off that website. AI is going to be very crucial in figuring that out, and that could be through simple stats, simple analytics, bounce rates and all of that, or it could even be heat maps, because we know GA4 is doing a little bit of that already, trying to figure out usage patterns on your websites. And then long-form content. That's the one I think is going to go away, and it's long overdue for going away. It keeps coming back because without long-form content the search engine doesn't understand the page, so it's less likely to surface it. But eventually we'll go back to the world of 300, 400 words.
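[Editor's note: the "simple analytics" signal Adi mentions, bounce rate, is commonly defined as the share of sessions that viewed only a single page. A toy sketch with invented session data:]

```python
def bounce_rate(sessions):
    """sessions: list of page-view counts, one entry per session."""
    if not sessions:
        return 0.0
    bounces = sum(1 for pages in sessions if pages == 1)
    return bounces / len(sessions)

# Invented data: six sessions, three of which viewed only one page.
page_views_per_session = [1, 3, 1, 5, 2, 1]
print(f"{bounce_rate(page_views_per_session):.0%}")  # → 50%
```

Note that GA4's own reported metrics are defined differently (around "engaged sessions"), so treat this as the classic textbook definition, not GA4's.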
Speaker 1:That's an interesting one. Actually, in a lot of places, especially on B2B website content, I think shorter content that's more succinct works better. This is going to sound bad, but don't write it only for people who are highly educated, and I see that all the time. So I'd be a fan of less content in that case. Yeah, it would work. It's more meaningful.
Speaker 2:And there's this demographic that just looks at a large body of text and shies away, not even getting into it. Oh yeah, that audience will certainly come back.
Speaker 1:Yeah, I am the old man on the porch, you know, saying get off my lawn sometimes. But I was having breakfast with some friends the other day, and one of them was saying that, before we had breakfast, he had been reading the paper. I was like, oh, you're the one who still gets the paper, right? I actually do prefer books over eBooks, and I like the experience of reading the newspaper, but I don't do it. I only do it if I'm somewhere else and there happens to be a paper and I'm waiting.
Speaker 2:Yeah, it's a nice ritual anyway. In the morning you set time aside to read something that isn't blasting light into your eyes.
Speaker 1:Yeah, totally agree. All right, well, we covered all kinds of ground, Adi, and you're right, we're running short on time here. I'm sure there's more we could have covered. If folks want to get more of your perspective or keep up with what you're talking about or doing, what's the best way for them to do that?
Speaker 2:You can reach out to me on LinkedIn. I've recently just paused all of my content; I used to write a lot about how SEO, AI, content strategies and marketing are evolving, and I've taken a pause just so we can find a new normal. So if you follow me on LinkedIn, I'd be very happy to talk to you about all manner of ways we can optimize content for everybody.
Speaker 1:Terrific. All right, well, as always, appreciate it, Adi. I know it's late there, so thank you for doing that; I know that's tough with family and everything. Thank you to our audience for your continued support and ideas and suggestions for topics and guests. If you want to be a guest or have a topic you want to suggest, feel free to reach out to me, Mike Rizzo or Naomi Liu, either through LinkedIn or through the MarketingOps.com Slack channel. All right, everyone. Until next time. We'll see you later.
Speaker 2:Bye. Bye, folks, have a wonderful day ahead.