How to get your brand mentioned in ChatGPT & Co. | Ethan Smith, CEO @ Graphite

THE ULTIMATE GUIDE TO AEO (DIRECTOR’S CUT) | Ethan Smith, CEO @ Graphite

Graphite is nothing like your typical agency. While most agencies sell slides and strategic decks, Graphite builds things. While others work on hundred-item SEO audits, they obsess over the 5% of work that actually drives results.

They’ve proven this approach in SEO with companies like Rippling, Webflow, Notion, and Upwork, and now they’re applying the same ruthless focus to AI search.

That’s why I’m excited to have Ethan Smith, CEO of Graphite, on the show today.

Ethan has led Graphite to the absolute forefront of Answer Engine Optimization, one of the largest new marketing channels we’ve seen in over a decade.

He already gave the ultimate guide to AEO on Lenny’s podcast a couple of weeks ago, but I felt like a lot of questions remained unanswered. So I asked him to do a follow-up deep dive, and now we’re here.

Full interview with Ethan Smith, CEO @ Graphite

Check out the episode on YouTube, Spotify, or Apple Podcasts.


Auto-generated transcript

Niklas (00:02.598)
Graphite is nothing like your typical agency. While most agencies sell slides and strategic decks, Graphite builds things. While others work on 100-item SEO audits, they obsess over the 5% of work that actually drives results. They’ve proven this approach in SEO with companies like Rippling, Webflow, Notion, Upwork, and many others, and now they’re applying the same ruthless focus to AI search. That’s why I’m excited to have Ethan Smith, CEO of Graphite, on the show today.

Ethan has led Graphite to the absolute forefront of answer engine optimization, or AEO, one of the largest new marketing channels we’ve seen in over a decade. He already gave the ultimate guide to AEO on Lenny’s podcast a couple of weeks ago, but I felt like a lot of questions remained unanswered. So I asked him to do a follow-up deep dive, and now we’re here. So welcome to the podcast, Ethan.

Ethan (00:56.878)
Thank you for having me. I could talk about this subject for days. So let’s answer some questions that haven’t been answered yet.

Niklas (01:03.75)
Okay, let’s try and see if we end up with a 24-hour recording. Maybe let’s start with a quick recap for everybody who has maybe not seen your episode on Lenny’s podcast, which I highly recommend everybody watch or listen to. Why should companies care about AEO?

Ethan (01:23.64)
I think that they should care about AEO because it’s a substantial channel, meaning the amount of impact you can get now is substantial, and it’s growing. There are many channels you could focus on; this is one of many. So I wouldn’t say that everyone should be focused on it, but it is one of the channels to consider, especially because the growth curve is steep.

Niklas (01:44.196)
And who would you say should focus on it? Because you said not everybody should.

Ethan (01:49.487)
I think it depends. So let’s talk about late stage versus early stage. When early-stage companies ask me what they should do for SEO, I say: do nothing at all, or just rank for your brand terms, but don’t invest in it, because it’ll take a long time to drive impact. And if you’re a seed-stage startup, you don’t have time to wait two years for impact. Whereas if you’re a late-stage company, probably most people should be focused on SEO.

For answer engine optimization, I think it depends. It probably wouldn’t be my number one channel; it might be my fifth most important channel. So if I’m early stage, I would probably do a small amount of stuff, but it wouldn’t be my top investment. For early stage, I would do nothing for SEO, but for AEO, I would do something. And the reason why is because you can win answers by getting mentioned by other websites.

It’s unlikely that your URL as a seed-stage company will suddenly become part of the citations, because answer engines have algorithms similar to search algorithms, where you need a lot of authority to rank. So the time for your website to rank might be long, but you could get mentioned tomorrow by a citation. So for seed stage, it could be interesting because you can get impact quickly. Now, I wouldn’t focus on thousands and thousands of prompts; I would focus on a small number of them.

So I’d do a light investment if I were a seed-stage or early-stage company. And then the larger the company, the more interesting it is, because typically you will have saturated at least the obvious things in search and ads and stuff like that. So it’s a new channel that’s incremental to what you’re already doing. So the later and larger you are, the more interesting I think it is.

Niklas (03:31.288)
Awesome, thanks so much. You also talked to Lenny about who should be responsible for AEO, and you gave a good answer there already, but I’d like to dive a little bit deeper into how you think companies should set up their teams. Should they rely on their in-house teams, like maybe the marketing team or the SEO team, or should they partner with an agency? What are your thoughts on that?

Ethan (03:58.719)
I think there are two different strategies: offsite and onsite. It could be the same person or group, or two different ones, but the skills to do both are fairly different. Onsite is just traditional SEO: make pages that target prompts, have the content be good, technical SEO, things like that. So that’s pretty straightforward. The offsite stuff is the stuff that’s somewhat new, and the strategies are actually not that different from SEO, but usually you wouldn’t have

Usually, at least today, you wouldn’t have a team spending a ton of time on backlink building. You might do a little bit of that, but it wouldn’t be a core focus of a team. Whereas with answer engines, getting mentioned by citations is actually probably more worth the time than it would be for link building. So essentially, when you ask ChatGPT, what’s the best credit card, you want the citations to say that you’re the best credit card. And if you are mentioned in the citations, you’re suddenly the best credit card. That’s not really true in Google, where you can’t suddenly

you know, outrank Chase and Wells Fargo for the best credit card, but that is true for answer engines. So having an offsite strategy, I think, is quite impactful. The kinds of strategies vary quite a bit: Reddit, YouTube, LinkedIn, Instagram, G2. These are pretty different from keyword targeting, content optimization, things like that. That doesn’t mean it needs to be a separate team, but frequently those are two different types of skill sets. So it may or may not be

a separate team. Now, your question was about agency versus in-house. I think it could be either; I don’t think there’s a real distinction. The one thing I’ll mention is that there are not that many people who are masters of Reddit and G2 and LinkedIn and these new channels. So you could either find an agency that does have experience there, or you could find a smart, resourceful, deals-with-ambiguity type of person,

which typically you would get in-house more so than from an agency. Agencies don’t tend to suddenly invent brand-new strategies, you know, in the middle of a project, whereas somebody in-house would. So if you have an agency that’s already good at these things, great. But if you don’t, I would expect that an in-house person would be more resourceful, more adaptive.

Niklas (06:19.161)
And how do you usually collaborate with companies at Graphite? I’m thinking about your collaboration with Webflow, which is pretty public; you’ve published a lot about how you increased the number of signups from LLMs, and I think Josh Grant, VP of Growth at Webflow, also posted about it a couple of times on LinkedIn. What does a good working relationship between an in-house team and an agency look like, from your experience at Graphite?

Ethan (06:51.832)
I think the main thing is, when we started with Webflow, it was about SEO, and the stuff that we’re doing now was not something we agreed to; it was not on the SOW, the scope of work. And if we had stuck to what was in the scope of work, we wouldn’t have gotten the impact. So we needed to be able to work with other people, other teams at Webflow, and we needed to be able to work on things we didn’t agree to and try out new stuff. And this is part of why I mentioned

that you usually would not do that with an agency. You would usually just have an SOW and stick to the SOW; you’re not adaptive. We try to be adaptive with Webflow, but we need Webflow to be open to us working with other teams and, you know, working on Reddit, working on YouTube videos, stuff like that, that was outside the scope of our original work. And that comes from building trust with Webflow, and also just building relationships with other parts of Webflow, and from us being adaptive as a company.

The way that I think about agencies is that it’s kind of like a boat: the bigger the boat, the harder it is to turn. If you’re in a tiny little boat with one person, you can turn very quickly. But if you’re on a cruise liner, you can’t turn that fast. You can do more with a giant cruise liner, but you can’t turn quickly. So we try to be as adaptive as we can. That’s how you can work well with someone like us.

Niklas (08:18.607)
Sounds good, I like the boat analogy. And whether someone is looking to hire people in-house or looking for the right agency, or maybe even a freelancer to collaborate with, what would you look for in people who are the right ones to drive growth from AEO?

Ethan (08:39.118)
There are two things: analysis and experiment design, and then dealing with ambiguity. And I would say that most people in SEO and AEO are not great at experiment design and analysis. That’s why you have so many best practices that are not true: either people never looked at it, or they didn’t look at it correctly and had a false positive, where they thought that something had a causal effect and it didn’t.

So being able to do rigorous analysis and set up an experiment to figure out what actually works is very important. And we don’t know most of what works in answer engine optimization; we’re still early. So there isn’t a bunch of best practices you can pull from. You have to do experiments. It’s the only way you know what’s working; otherwise you’re going to waste a bunch of time. So that’s skill one. And that’s hard to learn on the job. If you show up at a job without knowing how to do experiment design and analysis, you could in theory learn it, but I don’t know if I’ve ever seen that happen.

We try to look for people who already have that skill, and if they don’t have it, we assume they’re unlikely to develop it. And then the second thing is dealing with ambiguity. I think what makes a good person in answer engine optimization, and in growth generally, is the ability to deal with ambiguous problems. Like, if I said, why don’t you go figure out a Reddit strategy? That’s an ambiguous problem. Whereas if I said, I want you to take these 10 steps and follow this

rubric and replicate what I just showed you, that’s a different type of skill. And I would say most people are the second type, where they can follow a prescriptive process but cannot solve an ambiguous problem. And some problems don’t need to be solved; many things have already been solved. A lot of stuff around paid advertising, for example, has very established processes for exactly what you do. You don’t need to

reinvent how to do bidding. But with figuring out Reddit, you do need to invent that. So you need someone who can do experiment design and analysis and solve ambiguous problems.

Niklas (10:48.279)
Awesome. The ambiguity thing makes so much sense when I think about it. Let’s talk about another topic that I found very interesting. I think you already posted some things about it in reference to your work with Vimeo, which is video in AEO. I’d like to understand, from your point of view, why video is important when we talk about AEO.

Ethan (11:18.424)
I think that UGC and diverse opinions are really important, and video has a lot of that. If you look at Google, one of the biggest complaints is that the results are all derivatives of each other. Everyone is using a content scoring tool, looking at the top 10 results, and then rewriting them. And all the results are just rewritten versions of each other.

So you have a lack of diversity, and Google then wants to rank Reddit and Twitter and video to increase the amount of diversity. If you have one opinion on everything, that’s bad. You want many opinions; you want the wisdom of the crowd. The wisdom of the crowd says that the more opinions you have, the better the sum of the group will be, even compared to the best expert. So if you asked, how much does this person weigh, and you polled a thousand people,

the average would often be more accurate than the best guess of the best person within those thousand people. So the more diversity, the better. If Google’s results are all rewrites of each other, there’s no diversity of opinion, which is a problem we already have. So then you want UGC. UGC is a great way to get a much larger wisdom of the crowd.

So then, what do LLMs want? LLMs want the exact same thing, and LLMs are especially good at summarization. So the larger the data set, the more useful the LLM is, because it can summarize the opinions of many people. And this is related to why answers vary: it’s not because it’s unpredictable, it’s because it’s a probability distribution. So the bigger the distribution, the better the output. So then, coming to video: where do we have the most unique information? We have it in

text, and we have it in videos and images. In text, we have it with Reddit, Quora, X, LinkedIn, which is why those are huge inputs to LLMs. And then we have video. Video is already a large input to LLMs, but the majority of the information in videos is locked inside the video. Most of what’s used is the title and the description, maybe comments, but the majority of the context in the video is not accessible to the LLM. I think it will be at some point. And video is already a large data source.

Ethan (13:38.603)
Once you can extract more information from the video, I think that video will overtake Reddit, overtake X, overtake written content, because there’s so much context in videos. Vimeo, who we work with, just launched something where you can do a semantic search, saying: I want you to find this type of scene within a video. So you have the title and the description, which is mostly what’s used in the LLM now. Then you have the transcript, which you could make accessible. But there’s so much…

There’s so much information that the transcript doesn’t really give you, like the semantic understanding, the layer of abstraction above just the words that were said. The thing that Vimeo just launched, they’re not feeding that into an LLM, but it’s that type of thing: an LLM understanding what’s happening in a video. For example, we’re having this conversation right now. If you just looked at the words that I said, there would be a decent amount of information. But if you could have a layer of abstraction about

the types of things, the concepts, that I’m saying, like, I talked about prioritization, I talked about types of people you could hire. When you add a layer of abstraction above that, there’s so much more unique content that could be fed into the LLMs. And then you have this massive corpus of unique content. Rather than having the 10 rewritten articles that are all the same thing, you have this huge amount of UGC. So that’s why I think video is so important today and will be

even more important, maybe the most important thing over time.

Niklas (15:09.092)
Awesome. Let’s quickly tap into the rewritten content problem, because I think this is unfortunately still happening a lot. And since I know that you also create on-site content, for example for Webflow and other companies (I’m not talking about UGC and the off-site initiatives here), how do you ensure that you’re not part of the problem, creating rewritten content, but actually bringing in a unique point of view? How do you handle that operationally?

Ethan (15:48.367)
There are two kinds of content: human content and AI content. Most of what we do is human content. Part of it is diversity and quality. Quality is easy: you just hire domain experts. It can be expensive; we have 13 editors, and we have multiple rounds of editing for every single piece of content that we create. We hire

dedicated domain-expert writers for every project that we work on. I think we accept 5% of all applicants, so we’re very rigorous about sourcing the writers. And then multiple rounds of editing. So that’s the quality side. Then, on the uniqueness side, we try to have uniqueness and information gain. There are various ways to do that: you could have expert quotes, you could have unique metadata, you could have

stuff about how to use the product to do the thing. Like, how do I use Rippling to pay people in Argentina? So not just how to do it in theory, but how to use the product to do it. So we try to find unique hooks about the company to add some amount of uniqueness or information gain.

Niklas (17:06.084)
Great. Let’s go from there a little bit deeper into content workflows, because obviously we’re talking about AEO and AI as a marketing channel, but AI can also be a very powerful tool for a lot of people. I think in the how-people-use-ChatGPT study that OpenAI did with Harvard, for

professional users, the biggest category of usage was actually writing; the latest data point was that 40% of usage can be categorized as writing. But I know that you have done multiple studies, the latest one with Axios, about AI-generated content. So my question is: why does AI-generated content not work?

Ethan (17:59.885)
Yeah, so it does work, depending on how you do it. Usually people think of, I forget what it was called, the AI heist, the SEO heist, where I’m going to take all your content and rewrite it with AI. Or I’m just going to say, ChatGPT, write a million landing pages, launch all of them: here are all the keywords, make pages for them, done, free. That doesn’t work, and it doesn’t work for a few different reasons.

Reason one is that I think Google has an AI detector, and they just generally are sensitive to that. And really, that’s the main reason. Now, I’ll reframe the question: why should it not work? If you just say, ChatGPT, write an article about this thing, you’re essentially deriving that from information that everyone has access to. So you’re adding nothing on top of what already exists, and the similarity will probably be pretty high.

I’m guessing Google has some sort of similarity check plus an AI footprint. If it’s AI but it’s unique, it’s fine; if it’s AI and it’s not unique, it’s not fine. That’s my guess about how they’re doing it algorithmically, because we have multiple examples where AI content is working, but it works when you have a unique input that’s additive. So, a couple of specific examples. Example one is we work with Sermo, which is a

community for doctors where they can talk about giving their patients a particular drug and what happened. Like, I gave my patients this particular SSRI, they had these conditions, and this is what happened. And this is all proprietary; it’s closed, so it’s not public. And then we have landing pages that summarize what the doctors are saying is working and not working. That actually works quite well: they rank right underneath WebMD and drugs.com, and it’s AI-generated.

But it’s an AI-generated summary of unique, really high-quality information, and it’s additive; you can’t find that anywhere else. So that works. We also have examples with Webflow, where we have FAQs or product-related content, and that also works. But that’s because, again, it’s stuff that’s unique to Webflow. AI is great at summarizing things, and when you’re summarizing unique information that is not accessible to the public, you’re adding something.

Ethan (20:25.91)
So again, algorithmically, my guess at what Google is doing is: AI detector, yes or no; unique, yes or no. If unique and AI, good. If not unique and AI, bad.
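Ethan’s guessed decision rule can be written down as a tiny truth table. To be clear, this is a sketch of his speculation, not a documented Google algorithm; the function name and labels are invented:

```python
def guessed_verdict(is_ai: bool, is_unique: bool) -> str:
    """Sketch of the guess above: AI content is only penalized
    when it adds no unique information."""
    if is_ai and not is_unique:
        return "bad"
    return "good"

# AI-generated but summarizing proprietary data (the doctor-community case):
print(guessed_verdict(is_ai=True, is_unique=True))   # good
# An AI rewrite of the existing top-10 results:
print(guessed_verdict(is_ai=True, is_unique=False))  # bad
```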

Niklas (20:39.939)
Okay, makes sense. Can you give us a little bit more of an understanding of how you use, for example, AirOps with Webflow? Because AirOps, from my understanding, is also strongly pushing toward AI content generation and this content engineering movement. So how do you use it, and what are your thoughts on the whole content engineering movement?

Ethan (21:06.84)
I think it’s great. I also think AI content is the future. Clearly it’s better to have AI as a co-pilot than not to have it at all. And it’s also bad if you’re just rewriting each other’s content with no value add. So clearly you want an AI co-pilot. Just like with mobile and the internet, there’s a bunch of creator apps: there’s Instagram and TikTok, and it’s way easier to create videos now than it was in 1980, when you would need to spend hundreds of thousands of dollars; now it’s free.

So I think with AI there will be a whole suite of creator apps. AirOps is more on the professional side; there will probably be consumer ones, like Captions and others, but AirOps is focused on enterprise, professional content workflows. I think it’s very interesting. The kinds of things we’ve explored there are FAQs. I think also

things like category pages, where you could have something like an apartments-in-Santa-Monica page, and you have unique metadata about the prices and the inventory. You want a list of apartments, and you want to know the average price, inventory, kinds of features. You don’t need to hire someone from the New York Times to write what the average price of an apartment in Santa Monica is. But what you do want is unique metadata.

So there’s unique information around pricing that would be useful, and then you could use something like AirOps to summarize that metadata: feed in the unique metadata and get a summarization of it. FAQs and product content, I think, are also very interesting. A lot of what people ask in LLMs is, I want a product with these features and these attributes. Again, that doesn’t need to be someone from the New York Times writing a beautiful story about whether or not you have an integration with Rippling. You just need the information, and AirOps would be

good at that as well. So I think this space is quite interesting, but it shouldn’t just be some form on top of ChatGPT where you say, write a resume, and then suddenly it’s just this thing. You want it hooked into workflows such that you have that unique metadata and you’re leveraging it as much as possible.
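As a rough illustration of the workflow described above, summarizing unique first-party metadata rather than generating prose from nothing, here is a sketch in Python. The listing fields are invented, and the final LLM call (the step a tool like AirOps would handle) is deliberately left out:

```python
from statistics import mean

def build_category_page_prompt(city: str, listings: list[dict]) -> str:
    """Aggregate unique first-party metadata, then hand it to an LLM:
    the model supplies fluency, the metadata supplies the uniqueness."""
    avg_price = mean(l["price"] for l in listings)
    available = sum(1 for l in listings if l["available"])
    facts = (
        f"City: {city}\n"
        f"Listings: {len(listings)}\n"
        f"Average rent: ${avg_price:.0f}/mo\n"
        f"Currently available: {available}"
    )
    return "Write a short category-page intro using ONLY these facts:\n" + facts

listings = [
    {"price": 2800, "available": True},
    {"price": 3400, "available": False},
    {"price": 3000, "available": True},
]
prompt = build_category_page_prompt("Santa Monica", listings)
print(prompt)  # includes "Average rent: $3067/mo"
```

The point of the design is that everything the model is allowed to say comes from data competitors cannot scrape, which is the “unique and AI” quadrant of the guess above.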

Niklas (23:13.699)
Got it. We already touched on tooling with AirOps, and I’d like to go a little bit deeper into the whole AEO tool landscape, because I know you’ve also published a couple of things about that. So first of all, if people are thinking about getting an AEO tool, and I think marketers always tend to think, hey, I need a tool for that, if there’s a new channel, what should people look for?

Ethan (23:44.771)
I think you should look at what you want to do with it, then at the cost, and, if there’s a new feature, whether it works as sold. So, what do you need to do with it? SEO tools are a good example. There’s Screaming Frog, one of my favorite tools. The majority of the columns are not useful; there are a few columns that are useful. So when I use Screaming Frog, I’ll do a site crawl, and typically I’m just looking at the indexed pages and the internal link

coverage, so I look at unique inlinks; the rest of the columns are not useful to me. That doesn’t mean they’re never useful, but they’re not useful to me. So if there were some other Screaming Frog competitor at one-tenth the cost that didn’t have the meta-description character count, well, I don’t care about that; I don’t need it. So, what do you actually need? Don’t just look at a feature list as a series of check marks and say, well, I need that feature. What do you actually need?

That’s number one. Number two is: what’s the cost? I think the tool space generally is going to commoditize rapidly, and we’re already seeing that. Just like SEO tools: there’s no moat around SEO tools, so they’re generally a commodity, which is why everything costs between $80 and $130 a month. The exact same thing will happen with AEO tools. I think AirOps and workflows and stuff like that are different, but I’m just talking about

an SEMrush or Ahrefs competitor: it’s going to cost $80 to $150 eventually, because it’s a commodity. And then there are new features. There are four features that I think are interesting, and probably long-term there are only four: visibility tracking, prompt volume, workflows, and content scoring. So, visibility tracking. I think everyone’s generally familiar with this concept, but

for keyword tracking, you would use Ahrefs or SEMrush and ask, where do I rank? And you rank position three for content workflows, let’s say. So for LLMs, you would say, well, when I ask, what’s the best payroll management software, I want to know if my brand shows up. So this is: how often do you show up, and where do you rank? I think everyone’s familiar with this. The thing to consider here is where the data comes from. So there’s the API, there’s logged-out scraping,

Ethan (26:11.384)
then there’s logged-in scraping, and then there’s paid logged-in scraping. These are four different things, and they all give different types of data. So let’s say I’m calling the API. It’s going to give you citations that are very different from logged-in citations, and what you really care about is probably logged-in citations; nobody is actually using the API. The answers will probably not vary that much, but the citations will vary a lot. So if you’re doing offsite work

for the seed-stage company that I mentioned, and you’re using API data, it’s going to be way off: you’re going to be optimizing for citations that are not used logged in. So you want to consider where the information is coming from. And again, for any new feature or data source, I suggest getting multiple sources of truth and comparing them. Rather than just picking one thing and assuming it’s correct, get a few different sources of truth. I would probably use two tools, and then I would also compare with search data. And this brings me to

the manual check: for visibility tracking, I would manually take questions, log in, put the citations you see into a spreadsheet, and compare. You don’t need to do that for thousands of things, but I would at least sanity-check how close the tool is to your actual experience. So that’s visibility tracking. Then there’s prompt volume. Prompt volume is interesting: how many people are looking for this prompt? What are the most popular prompts? It’s kind of like search volume: how many people are searching for this particular keyword?
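The manual sanity check Ethan describes, comparing a tool’s reported citations against what you collect by hand in a logged-in session, can be reduced to a simple overlap score. The domains here are made-up examples, not real tool output:

```python
def citation_overlap(tool: set[str], manual: set[str]) -> float:
    """Jaccard overlap between citations a tool reports and citations
    collected by hand from logged-in sessions (1.0 = identical sets)."""
    if not tool and not manual:
        return 1.0
    return len(tool & manual) / len(tool | manual)

tool_reported = {"nerdwallet.com", "reddit.com", "forbes.com"}
hand_collected = {"nerdwallet.com", "reddit.com", "bankrate.com", "g2.com"}
print(citation_overlap(tool_reported, hand_collected))  # 0.4
```

A persistently low score on prompts you checked by hand is the signal Ethan is after: the tool’s data source (API versus logged-in) does not match your real experience.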

For search, this essentially comes from Google. The Google Ads API gives you search volume; Google Search Console gives you impression counts; Bing gives you impression counts; the Bing API gives you search volume. So we have actual first-party data from four different sources: Google and Bing Search Console, and Google and Bing Ads. We have a lot of good data there. We don’t have that for prompts: none of the LLMs give us prompt data. So then we have to get data that’s

as close as possible to that. So there’s panel data. Panel data means I have some subset of the total population, coming through browser extensions: you bought it from a browser extension, or you bought it from an LLM app; there are a bunch of LLM apps, and I could just pay them and say, please give me your prompt volume. There’s also ISP data. So there are all these different sources,

Ethan (28:36.238)
and each is a subset of the total population. So then the question is: how representative is the subset? The way I think about this is that it’s not just about the volume, but about how representative it is. Say I did a presidential poll and asked, who are you going to vote for? The voting population, I think, is 200 million plus, and most presidential polls are 1,000 people.

And, I mean, they’re not perfect, but they’re pretty close; you’re not going to be off by 20% in a presidential poll. That’s a thousand people out of 200 million, a tiny fraction of a percent, so the relative sample is extremely small. However, if it’s representative, it can get you pretty close. Now let’s say instead I poll all of California and ask, who are you going to vote for? That’s a much larger sample, but it won’t be representative. I think it’s around 30 million people in California, or something like that, for voting.

So a thousand representative people beat 30 million non-representative ones; it’s a way better answer. The way to think about panel data is not the size, but the level of representativeness. And that’s what people are generally using for prompt volume. Now, there’s discussion about how close particular prompt volumes are, and I come back to: you want multiple sources of truth. So I would compare it with search data. And the way to think about this is that Google gets,

or rather, search engines get, roughly 25 times more page views than LLMs. So you would expect prompt volume to be about one twenty-fifth of search volume, generally. It’s not exactly the same, but that’s generally what you would expect. So the closer a reported prompt volume is to the search volume, the more off it is. If you just want to sanity-check whether a prompt volume is reasonable, the expected relative size is the search volume divided by 25.

The further away it is from that, the more inaccurate it is.
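The divide-by-25 heuristic is easy to turn into a quick check. A minimal sketch, with made-up numbers:

```python
def expected_prompt_volume(search_volume: float) -> float:
    """Rule of thumb from the conversation: search engines see roughly
    25x the page views of LLMs, so expect ~1/25th of search volume."""
    return search_volume / 25

def deviation_factor(reported_prompts: float, search_volume: float) -> float:
    """How far a tool's reported prompt volume is from the expectation;
    1.0 means it matches the rule of thumb exactly."""
    return reported_prompts / expected_prompt_volume(search_volume)

# A keyword with 50,000 monthly searches should see about 2,000 prompts.
print(expected_prompt_volume(50_000))      # 2000.0
# A tool reporting 40,000 prompts for that keyword is 20x over the
# expectation, which per the heuristic suggests the panel data is off.
print(deviation_factor(40_000, 50_000))    # 20.0
```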

Niklas (30:29.28)
Very good. Let’s step into the visibility aspect a little bit deeper, because we also got a question beforehand from the community, and I’m trying to weave these questions in now: do you have a favorite tool to measure visibility?

Ethan (30:47.566)
A favorite tool? Tools that I use and like: we have our own internal tool, but we decided not to commercialize it, so that’s just for internal usage. Other tools that we use and like are AirOps, which has visibility tracking, PKI, Surfer SEO, and EverTune. I haven’t tried the rest, so the exclusion of the rest is not because I tried them and

didn’t like them, it’s because I have not tried them. So I would say, think I haven’t had a bad experience so far, that they’re all good. But again, I would compare the data and the citations with your actual personal experience. What we also do internally is we just have actual people logging in and copying and pasting things.

So if we really care, we’re literally just having humans copy and paste things into spreadsheets and that’s how we do tracking.

Niklas (31:49.527)
Refreshing. Another question we got beforehand: it’s basically a couple of questions, and I feel like they fit in very well at the moment. The first one is: how do you decide which prompts are worth tracking?

Ethan (32:05.774)
There are a few different ways to think about this. There’s prompt volume from panels, which we talked about; there’s search data; and then there’s other conversation data. I mostly use the second two. Search data, I think, is great because the relative volume of search is probably going to be pretty similar to the relative volume of prompting. The format will probably be different for prompting:

prompts are longer than searches, but the relative number of people looking for payroll management software in search is probably going to be similar in prompting. If people related to payroll management software are looking for integrations, or for the best ETF with these features, it’s probably going to be roughly similar. So for search data, what I do is take either my existing search data or my competitor’s search and paid data.

So if I’m a brand new credit card company and I want to know which prompts to care about, I’m going to go find another credit card company. I’m going to look at what keywords they’re bidding on, which are probably the valuable ones, and then transform them into questions. You just go into Ahrefs, put in your competitor, get the top paid keywords, export the CSV, then go into ChatGPT and say: make a table with these keywords and a question version of each. That’s actually pretty good. It’s not going to be exactly

what people are prompting, but it’s going to be directionally good. So that’s a shortcut for finding prompt volume, and that’s what I do. That’s option one. Option two is panel data, and I talked a bit about this. I don’t actually know how accurate it is, because I don’t know what the panel is. Back to the presidential poll: if I don’t know anything about the poll, it might be really accurate, it might not be. I just don’t know. So if I don’t know,

I’ll wait until I have some more confidence before I trust panel data. I know that eventually it could work; I just don’t know if it works really well today. Then the third would be other conversation data. And for prompts, the tail is larger and longer: there are really specific prompts that have never been searched for. So how do we know what that looks like? Search data necessarily won’t have it. So how do we get information about the tail?

Ethan (34:24.13)
The way to get information about the tail is to find conversation data in other sources you do have access to. You have conversation access to things like your sales calls, your customer support, Reddit. What are people asking about your brand on Reddit? What are they asking on YouTube? What are they asking on G2? These are all actual conversations, and they can help inform what the tail looks like.

What I would do is go into ChatGPT and say, summarize what people on Reddit are asking about my product. Or, Reddit actually has Reddit Answers, where you can ask: what are people asking about my product? And you’ll get a summarization of the kinds of questions people have. So that’s how I would fill in the tail. Just to summarize: start with search data, and that’ll tell you most of the information; then use Reddit Answers and G2 for the tail. And the more we know about how representative the panels are, the more we can shift to panel data,

whenever we have confidence around that.
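
The competitor-keywords shortcut (export paid keywords, turn them into questions) can be approximated offline. A rough sketch; the CSV contents and the template rules are made up for illustration, and in practice an LLM rewrite, as Ethan suggests, handles the long tail far better:

```python
# Turn a (hypothetical) paid-keyword CSV export into question-style
# prompts to track. The rewrite rules here are simple illustrative
# templates, not a real keyword-to-question model.
import csv
import io

CSV_EXPORT = """keyword,volume
best payroll software,4400
payroll software for startups,880
gusto alternatives,720
"""

def to_question(keyword: str) -> str:
    kw = keyword.lower().strip()
    if "alternatives" in kw:
        brand = kw.replace("alternatives", "").strip()
        return f"What are the best alternatives to {brand}?"
    if kw.startswith("best "):
        return f"What is the {kw}?"
    return f"What should I know about {kw}?"

rows = list(csv.DictReader(io.StringIO(CSV_EXPORT)))
prompts = [to_question(row["keyword"]) for row in rows]
for p in prompts:
    print(p)
```

The output is a tracking list you can feed into whatever visibility tool you use.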

Niklas (35:20.896)
And how many prompts do you typically monitor at once or what would you recommend people to monitor?

Ethan (35:28.128)
We work with later-stage companies, and they’ll typically do 1,000 to 5,000. You could also ask: how many keywords should you track? If you have no information, how many keywords should you track? I mean, ideally you’d probably be tracking tens of thousands if you really want a full picture. Like, how many keywords should Webflow track if they want to know how they’re doing? Probably tens of thousands. So how many prompts?

Probably ideally tens of thousands, but I think it’s too expensive to actually do that. So I would do at least 1,000 to 5,000. This is, again, for enterprise companies, which is typically who we work with. And then what I would do is take all of my search topics and make question versions of them. So for Webflow, they want to target product managers and designers and solopreneurs, and then there are the various features, so you create a feature-by-persona matrix.

And then you have your keywords: are you ranking in search? And you have your prompts: are you ranking for these prompts? So yeah, probably 1,000 to 5,000, and then you do something like 10 prompts per page and build this visibility matrix.
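
The persona-by-feature matrix can be sketched as a simple cross product. The personas and topics below are illustrative stand-ins for what a Webflow-style company might use:

```python
# Enumerate prompts to track by crossing personas with topics,
# the "visibility matrix" idea. Entries are illustrative.
from itertools import product

personas = ["a product manager", "a designer", "a solopreneur"]
topics = ["no-code website builder", "website CMS", "landing page tool"]

prompts = [
    f"What is the best {topic} for {persona}?"
    for persona, topic in product(personas, topics)
]

print(len(prompts))  # 3 personas x 3 topics = 9 prompts
print(prompts[0])    # What is the best no-code website builder for a product manager?
```

Scaling the same cross product to dozens of personas and hundreds of topics is how you reach the 1,000-to-5,000 range without hand-writing each prompt.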

Niklas (36:38.338)
Okay, great. I think this will surprise a lot of people when they hear a thousand prompts, because most start with 10 to 15. So what would you tell them?

Ethan (36:46.58)
I think it’s good to start with 10 to 15. For a brand new company, or let’s say Graphite: do we need to track 5,000 prompts for ourselves? Probably not. We probably only need to be looking at 10 to 15, so that’s totally fine, and it’s also a good way to get started. I’m speaking more to the case where you’re a Webflow, a multi-billion dollar company; then you probably want something

more substantial. It just depends on how big your company is and on the footprint of what you want to rank for.

Niklas (37:19.532)
Makes sense. Another question we got, and I found this very interesting: given that LLM outputs can vary significantly because of each user’s personalized layer, do you think tracking prompts is still meaningful, and if so, why?

Ethan (37:36.703)
I disagree with the premise. I have not seen a significant difference in answers based on the person. I think that will happen, but I don’t think it’s happening now. The majority of the variability in the answers comes back to what I described earlier: the answers are probabilistic. So if you asked something like, what’s the best website builder, there’s a distribution of potential answers. Webflow will show up some percent of the time, and Framer and Lovable

and Ghost; they’ll have a probability distribution. Maybe a better example is: what’s the best flavor of ice cream? Chocolate and vanilla are the most popular, but there are hundreds of flavors, so there’s a distribution. Depending on when you ask, you might get cinnamon and you might not. Maybe you get cinnamon 10 % of the time but chocolate 90 % of the time. It’s like a coin toss: depending on when you ask, you’ll get a different answer.

But that’s not because of who asked it. And it’s not because the model is changing constantly, or because it’s a black box and unpredictable. It’s because it’s a probability distribution, and you’re getting a different draw from it. I think someday you will get personalization, but I don’t think that’s the case today. So then how do you think about tracking? You ask the same question multiple times. The more you ask, the more of the distribution you see. We did a study that

showed that if you ask about seven to ten times, you’ll get a decent view of the distribution, depending on the answer. For ice cream, you probably need to ask more times; for website builders, seven to ten times is probably fine. So the short answer is: ask it multiple times, and for most prompts that’ll give you a pretty good sense of the distribution.
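
Sampling the same prompt repeatedly and tallying mention rates can be sketched as below. `ask_llm` is a simulated stand-in for a real API call, with invented brand probabilities; only the tallying logic carries over to real tracking:

```python
# Estimate a brand's mention probability by asking one prompt N times,
# per the finding that 7-10 repeats give a decent view of the
# distribution. ask_llm simulates the probabilistic answer; the
# brand weights are made up for illustration.
import random
from collections import Counter

BRAND_WEIGHTS = {"Webflow": 0.9, "Framer": 0.7, "Lovable": 0.4, "Ghost": 0.1}

def ask_llm(prompt: str, rng: random.Random) -> set:
    """Stand-in for an LLM call: returns the brands 'mentioned'."""
    return {b for b, w in BRAND_WEIGHTS.items() if rng.random() < w}

def mention_rates(prompt: str, runs: int = 10, seed: int = 0) -> dict:
    """Ask the prompt `runs` times and return per-brand mention rates."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(runs):
        counts.update(ask_llm(prompt, rng))
    return {brand: counts[brand] / runs for brand in counts}

rates = mention_rates("What is the best website builder?")
print(rates)  # e.g. a high rate for Webflow, a low one for Ghost
```

With a real API behind `ask_llm`, the same loop gives the "ask seven to ten times" view of the distribution.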

Niklas (39:20.322)
So I just asked ChatGPT what’s the best flavor of ice cream, and it said the objectively correct answer is vanilla. I asked a second time, and it also said vanilla. So my anecdotal evidence says vanilla is objectively the best flavor of ice cream. Do you agree?

Ethan (39:35.96)
Did it mention any other flavors?

Niklas (39:38.656)
Yeah, it mentioned chocolate, depending on your mood, and then I think it got a little fancy: it also said salted caramel, for when you want to feel a bit fancy without committing to chaos.

Ethan (39:54.265)
Salted caramel. And if you asked it again right now, without caching, I will bet you it will not say salted caramel. I bet it’ll say some other type of ice cream.

Niklas (40:02.613)
Yeah, yeah, I got cookies and cream, pistachio and then mint chocolate chip.

Ethan (40:08.652)
Yeah, so I got chocolate, mint chocolate chip, pistachio, cookies and cream, and yeah, that’s what I got. So mostly overlap, but not salted caramel.

Niklas (40:18.045)
Nice. Yeah, salted caramel was the fancy one, if you feel a bit fancy. I don’t know. So now the transition. I’m very proud of the transition that’s coming: we talked about ice cream, now let’s talk about pies, because I want to talk about incrementality between SEO and AEO. I saw your case study from Webflow where you basically doubled

Ethan (40:23.534)
Yeah, maybe that shows up 5 % of the time.

Niklas (40:45.063)
the percentage of signups from LLMs, which is obviously something a lot of people are very interested in, because I think it’s one of the first real, publicly documented cases of a company getting a lot of business value from LLMs. But my question is: isn’t the gain you get from LLMs the…

pain, so to say, that you lose from Google? So is the pie really getting bigger, or is it just a shift from one to the other?

Ethan (41:18.702)
The pie is getting bigger. So, based on my data, there are a few ways to look at this. Ultimately, what you would want to look at is incremental conversions across tens of thousands of companies, across every company in the United States or in the world: is the total number of conversions increasing? I don’t actually know the answer to that; I don’t have the data for it. I do have data for individual companies. One step before that would be: is the usage

incremental? That I do know. If you look at Similarweb data, you’ll see that LLM visits, or page views, are about one twenty-fifth the size of search, and they’re generally incremental. Search is not going down, or maybe it’s going down a tiny, tiny amount. But if you sum them, it’s incremental. The pie is getting larger.

Niklas (42:11.115)
Okay. And what about professional users? Because if we think about, maybe, you and me, I can see us working a lot with AI tools in our day-to-day, and I think the probability of us then also turning to Claude, ChatGPT, or Perplexity for research is higher. But isn’t it plausible that one prompt, or maybe two prompts, we do with ChatGPT is a substitute for, let’s say,

six, seven searches we would have done on Google earlier.

Ethan (42:43.554)
Definitely. So my statement is about the macro effect; you’re asking about the micro effect, about specific use cases. I definitely think that LLMs are used more for specific use cases. LLMs are very good at things like analysis, summarization of large data sets, research. And I would expect that people would use them instead of search because they’re much better at that than search is. Just like I would expect,

for video, that people would look for travel ideas on Instagram and TikTok and YouTube more than they might look in Google. They might look on Reddit more than in Google. Same with beauty and things like that. Different services are better at fulfilling different use cases. Same with LLMs. But I don’t think that search is generally going down. I think that these new surfaces, YouTube, Instagram, TikTok,

Reddit, LLMs, are additive to search. The pie is getting larger; the slice for search is the same size, and you’re just adding slices on top.

Niklas (43:53.43)
And would you say that AEO as a marketing channel has the same potential across different verticals? So if we think about beauty, tech products, B2B SaaS, or, for example, a doctor platform, so medical topics: would you say there are differences in how important or how impactful AEO can be across these verticals?

Ethan (44:25.4)
Definitely. I mean, the relative size of these is where the impact is. You mentioned a few specific examples. Also, looking at some Reddit data, it looks like LLMs are especially useful for the kinds of things people are discussing on Reddit: analysis, coding. Interestingly, mental health is another area where people are using LLMs; I’m trying to dig into that more. Help with education and learning, help with research, ideas,

brainstorming: I think LLMs are very good there. We talked about a sea of sameness, where LLM output is derivative of existing information, but it’s actually great at suggesting ideas for information gain. Like if you asked, what are some non-obvious questions about website builders? Or, what kinds of people aren’t using no-code website builders but should be? It’ll give you really good ideas. So it’s actually great at

brainstorming, coming up with new ideas.

Niklas (45:26.421)
Has your own research behavior shifted from Google to ChatGPT, Claude, or some other tool?

Ethan (45:36.437)
I use LLMs for research, for summarization. Actually, for some of my research, it’s great for checking whether there are academic journal articles about the thing I’m exploring. That used to be quite painful: there are probably millions of academic journal articles, and there’s no way I can search through all of them. But it’s actually great at answering: has there been any prior research on AI content? Not really? Okay, cool. Or: here’s what exists and here’s what it said.

I’m looking at evaluating the effectiveness of AI detectors, and there are actually very few studies on this, and the sample sizes are really low. But I created a table of all the different studies on the effectiveness of AI detectors, with the sample sizes and where the samples came from. That would have taken me hours in Google search, and now I can do it in a few seconds.

So these are some of the things I use it for. Also, I went to New York, and there are all these different things to do there, all these different shows. So I asked Reddit Answers: is this event good? I went to Masquerade, the Phantom of the Opera immersive theater, and the Reddit answer said, yes, this is a great event. Then there was another one,

a light show, and I asked, what does Reddit think about this? And it said, well, people say it’s overpriced and kind of lame. Whereas in Google, I wouldn’t get that. So Reddit is a great source of information for things to do. Similarly, I went to Hawaii and asked, what do people say you should do in Kauai? Everyone says do the helicopter, and make sure you do it with the doors off. So I did that and had a great time. These are some of the ways I use LLMs.

Niklas (47:26.912)
Okay, awesome. We already touched a little on the results you’ve gotten from AEO, for example at Webflow. So I’d like to focus on the topic of attribution a bit, because I feel like people are still having a hard time connecting showing up in AI answers to real business results in terms of signups and pipeline contribution. So,

From your perspective and your experience, what’s the issue with attribution in LLMs?

Ethan (48:02.094)
The attribution issue is that most of the answers are zero-click. So if you ask, what’s the best payroll management software? You probably aren’t going to click on Rippling. You’re probably going to open a new tab and either search for Rippling in Google and click on their homepage or their ad (they have to bid on their own brand term), or type in rippling.com. So either way you’re attributing it to direct, branded search, or branded paid search.

But it actually came because the answer said Rippling’s great, and this is really hard to track. Now, sometimes there’s something to click on, but most of the time there’s not. So in the majority of cases where you actually got a conversion, there’s nothing traceable and clickable. That’s the main issue. The second issue is similar to SEO: there’s a user journey. For Rippling, you’ve probably heard of them a hundred times, so there are many touch points. If you just looked at the last thing someone did before they converted,

you’re missing the previous 99 things that happened. So even though you can trace the click, you still don’t know what happened; you have a very skewed view of what happened right before conversion. That’s why you need to do things like multi-touch or media mix modeling, or just ask them: how did you hear about us? That’s the same as in search. Now, answer engine optimization gets even messier, because you don’t actually know the volume of the prompt. You could track that you’re showing up for a prompt, but you don’t know how many people are asking it.

You have to ask multiple times to see where you appear. There’s no click-through rate, so who knows whether somebody saw you, whether you were at the top or the bottom. We don’t know. So there are all these compounding errors on top of the fact that it’s not traceable. That’s why it’s hard. My suggestion is to focus on the beginning and the end. The beginning would be: did I appear for these prompts,

yes or no, and was I ranked high? And just make a guess about what the volume might be, probably based on search data. Then for conversions, ask the person where they came from, and they’ll tell you. Now, there are issues with self-reporting, clearly. They said it was this one source, but we know it was many different sources; what were the weights of all those sources? We don’t know. But that’s the closest you can get, I think:

Ethan (50:28.248)
did I appear for prompts with an assumed volume and a large error rate, and what did people tell me after they converted? Very messy.
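
The "beginning and end" approach reduces to back-of-envelope arithmetic: appearance rate times an assumed prompt volume (search volume over 25) on one end, self-reported source share on the other. All figures and names below are illustrative:

```python
# Rough AEO attribution bounds from the two ends of the funnel.

def estimated_prompt_impressions(appearance_rate: float, search_volume: float) -> float:
    """Appearance rate across test runs times the assumed prompt volume
    (search volume / 25). Carries a large error bar, as noted above."""
    return appearance_rate * (search_volume / 25.0)

# End 1: the prompt surfaced the brand in 6 of 10 test runs, and the
# matching keyword has 100,000 monthly searches.
print(estimated_prompt_impressions(0.6, 100_000))  # 2400.0

# End 2: self-reported "how did you hear about us?" answers.
survey = ["chatgpt", "google", "chatgpt", "friend", "chatgpt"]
chatgpt_share = survey.count("chatgpt") / len(survey)
print(chatgpt_share)  # 0.6
```

Neither number is precise; together they bound how much of the pipeline plausibly came through LLM answers.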

Niklas (50:38.272)
And how did you solve attribution at Webflow? Or what can you maybe share about the case where you were able to attribute a doubling in signups from LLMs?

Ethan (50:57.74)
I don’t have a full answer, though that is a good question. I kind of know, but I don’t have the full picture.

But I know that their data science team spent time on it and that they were thoughtful about it.

Niklas (51:11.272)
Yeah, probably they’re doing a good job there, and this is why I was genuinely curious to understand it in more detail. But I can also totally see why Webflow and you would not be comfortable sharing it publicly. So let’s talk about another aspect of the whole attribution thing. You already mentioned that most of the searches, or the prompt entries, are zero-click. Can you see

ChatGPT, and then other LLMs or AI chatbots, showing more links over time? Or do you think we are now locked in with the amount of links and the probably very low click-through rates we’re at currently?

Ethan (51:58.925)
No, I think it’ll become way more clickable, and I think LLMs will become more and more similar to search. We’re already seeing that. If you ask something like, where should I go in Hawaii, or what’s the best TV, you will see clickable shopping cards, just like in search. And it’s useful, because I don’t want to have to open a new tab, go find Rippling, and click on their

brand search ad on Google. That’s not the ideal experience; the ideal experience is to just click on Rippling. So I think it’s very likely that things will become way more clickable. And look at how Google handled this: in the early days they had the 10 blue links, and then they added maps and shopping and travel and events and flight booking. I think the LLMs will go one by one and start building vertical-specific experiences.

And as a user, I want to click on stuff; it’s better if I can just click on something and go somewhere rather than having to open a new tab. I also think there will be autonomous agents, where maybe I don’t need to click on anything to actually buy or book. We’re already starting to see that: I think ChatGPT wants to do native purchases within ChatGPT, so you don’t have to leave to buy a product or

to book something. So we might also see that. It wouldn’t be zero-click; it would be a click within the LLM.

Niklas (53:27.712)
And how closely do you watch AI Mode? Because, I mean, you’re in the US and we’re in Europe, and AI Mode is now also available here, but it was rolled out with a significant delay, partly due to EU regulation. How important do you see Google’s AI Mode already being alongside ChatGPT, or the more AI-native interfaces like Perplexity?

Ethan (53:58.709)
I think that when Google enters a new space, they have multiple teams working on the exact same thing, with multiple solutions that all do something similar, and then they merge them. We’ve seen that multiple times. So I think AI Overviews, AI Mode, and Gemini will eventually all merge, because they’re essentially doing the same thing; you don’t need three different products for it. They have multiple really good teams trying things out and

figuring things out, and then they’ll merge them into a single unified experience. Now, if you add up AI Overviews, AI Mode, and Gemini, that’s a really big footprint. So I am generally looking at that, but I expect it’ll be a converged union of all three. And then if you look at the data,

Gemini is not that far off from ChatGPT, and everyone else is far behind. So those two, OpenAI and Google, have by far the largest market share.

Niklas (55:04.916)
And do you think OpenAI, or ChatGPT, will be able to steal market share from Google, if we think one or two years into the future? Or do you think Google will be able to maintain the position they still have, or maybe even grow market share again, because of this whole bringing together of AI Mode, AI Overviews, and Gemini you just outlined?

Ethan (55:34.543)
So there are the surfaces, and then there are the companies. Do I think that search will go down? I don’t think so, but I think search plus LLM usage will increase: search will stay flat, and LLM usage will be incremental on top of that. Then the question is about the two companies: will OpenAI take market share from Google?

I could definitely see that happening, because Google has such a large market share in search; I think it’s 95 %, depending on what you look at. It’s hard to maintain 95 % market share forever. I would expect that some of it eventually moves over to someone: that could be OpenAI, that could be a combination of others, that could be Bing. But I wouldn’t expect anyone in any category to hold 95 % market share permanently; I would expect something that high to eventually come down.

Niklas (56:28.256)
Great. I think Google had a bit of an awakening moment when, I think it was Robby Stein, rejoined the company, and it was basically already being declared that ChatGPT had won market share and, what was it, 700, 800 million weekly active users. And now Google somehow comes back with AI Overviews and also Gemini, with the new models

suddenly being very popular, also in the app download charts. So have you been surprised by this comeback? Or would you even agree that it is a comeback by Google?

Ethan (57:10.318)
It is a comeback by Google. I’m not surprised, but I’m impressed. Why I say that: there’s the innovator’s dilemma, where big companies eventually get disrupted, and this is a perfect example of a category that would lend itself to it. New entrants, a small initial market, and a big company, a cruise liner that can’t turn its boat quickly because it’s not adaptive.

The tiny boat comes in and steals market share. This is exactly the setup the innovator’s dilemma describes. And that has not happened. You know, ChatGPT has blown up, but Google has adapted and is growing quite well. And other companies are not; Google is the only one that’s actually catching up to, or even getting close to catching up with, OpenAI.

I’ll add that it’s very hard. OpenAI had been working on this for, I don’t even know, ten-plus years. It’s very hard to build an LLM and to have the talent to do it. So the fact that Google has been able to is quite impressive. I am not surprised, and I am very impressed.

Niklas (58:24.074)
Great. Let’s go to some community questions again, because we got some really good ones beforehand that I definitely want your answer, or at least your ideas, on. The first one was from someone working at a review platform. For full context: a review platform in Germany, basically like Capterra or G2, called OMR Reviews.

So, a software review platform. And the person asked: what do platforms like review platforms need in terms of content and positioning so that users still actively come to their sites and don’t just consume the information indirectly via AI or LLM answers?

Ethan (59:11.49)
Well, what I would want: I mentioned Reddit Answers, which is an LLM summarizing what happened on Reddit. What I would want is a summary like that. I mentioned Sermo; Sermo could summarize what the doctors are saying on Sermo. I would want the review sites to have their own summarization. I don’t want to read thousands of reviews; I don’t have time for that. What I want to know is, generally, what are people saying? So what I would really like from a G2, or the site you mentioned, is

an AI summarization of the reviews. That would be super useful. And then maybe the review flow can also guide reviewers to add things that make the summaries even more useful, like Airbnb does a good job of asking about the cleanliness and the responsiveness of the host. So for a general review site about B2B products with lots of reviews: encourage

the reviews to answer the questions that people actually have, with some diversity of opinion, and then have your own AI summarization within your platform, so I don’t need to go somewhere else.

Niklas (01:00:14.708)
Great answer. Another question we got: do you think that what you shared on Lenny’s podcast has aged well, or would you change or add anything?

Ethan (01:00:28.622)
I think it’s aged pretty well. Would I add anything? I would add the stuff I’ve been sharing post-Lenny; I have more insights today than I did in September when I filmed it. I wouldn’t take back anything. So, after Lenny, I got a bunch of people adding me on LinkedIn and saying, you had a great podcast. And then I…

Ethan (01:00:58.464)
asked follow-up questions to each of the people who said the podcast was good. I said, I would love your feedback: is there anything you disagree with? What did you find most useful? Is there anything you’d like me to discuss more? I got about 50 answers, and then I did an AI summarization of the feedback on the Lenny podcast. The number one request was help center optimization:

show me how to optimize my help center content. And I’m still early on that; I haven’t mastered it yet. What I would want to add would be actual real case studies. The more case studies, the better; I shared what I had then, and I have more now. And the second thing would be more specific tactical advice around help center optimization and product content, content describing how your product works.

Niklas (01:01:55.232)
Can you share at least some of your early thoughts on help center optimization? Because I’ve also seen help center content show up in citations a couple of times now with clients we’re working with, when we talk about very specific, for example integration-related, prompts. We’re trying to explore this too, so, admittedly very selfishly, I’d like to get your insights.

Ethan (01:02:22.648)
Yes. Step one is: what are people asking about? We talked about how to find that out: customer support, sales calls, Reddit Answers, what are people asking about my product on G2? That gives you the product questions people have. Step two is answering the question, and I don’t think there’s anything non-obvious about it. If somebody asks, what meeting-note transcription tool integrates with Zoom,

have a page about that: say that you integrate with Zoom and explain how. That’s really all you need to do. Otter shows up for that particular example, and they show up because they have a page saying, we have an integration with Zoom, here’s how it works. The non-obvious thing in that example is to have multiple pages describing it. So if you ask which meeting-note transcription product integrates with Zoom: there’s a help center page on Otter, there’s a feature page

on something like Otter slash Zoom, there’s an article about how to use it, and then there are landing pages on Zoom’s side about the Otter integration. So there are all these different pages, multiple takes at answering this particular question. And especially have a page on the integration partner’s site. Let’s say I’m a brand new meeting-note transcription tool, I integrate with Zoom,

and no one’s heard of me. Assuming you’re friendly with Zoom, ask them to make a page about your integration. Do that with all your integration partners: have them put up a page about your amazing integration, and then you’ll show up twice.

Niklas (01:04:04.639)
That’s a good one; that makes a lot of sense. We have another question. We may have covered a bit of it already, but I still want to make sure to ask it: where and why do you see most AI content workflows fail?

Ethan (01:04:23.714)
It’s because you’re just prompting ChatGPT to derive information from the public web and nothing more. It’s just a wrapper around ChatGPT. I think a better version of that is you have something like, you know, the apartments-in-a-city example. People want to know about the price, the features, the areas to get an apartment, and the pros and cons. Okay, now I have unique metadata for each of those four different questions. Then I have a workflow and a prompt saying, summarize the cost, you know, the information about the price, and include this, this, and this. That’s what they’re lacking. It’s somebody being thoughtful about configuring the prompt and feeding in unique information, to get an output that is more useful than just a derivative of what’s already available.
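The workflow Ethan describes, a per-question instruction combined with unique first-party metadata, can be sketched roughly as follows. All names here (the question templates, `build_section_prompt`, the metadata fields) are illustrative assumptions, and the actual LLM call is omitted; the point is only that the prompt is assembled from data the model does not already have:

```python
# Minimal sketch of a "unique metadata per question" content workflow.
# Instead of a generic prompt, each section's prompt is built from a
# per-question instruction plus proprietary first-party data.
# Question keys and templates are hypothetical examples.

QUESTIONS = {
    "price": "Summarize rental costs for {city}, citing the figures below.",
    "features": "Describe typical apartment features in {city} from the data below.",
    "areas": "Recommend neighborhoods in {city} using the data below.",
    "pros_cons": "List pros and cons of renting in {city} based on the data below.",
}

def build_section_prompt(question_key: str, city: str, metadata: dict) -> str:
    """Combine a per-question instruction with unique first-party metadata."""
    instruction = QUESTIONS[question_key].format(city=city)
    data_lines = "\n".join(f"- {k}: {v}" for k, v in metadata.items())
    return f"{instruction}\n\nFirst-party data:\n{data_lines}"

# Hypothetical proprietary listing data, not derivable from the public web.
prompt = build_section_prompt(
    "price",
    "Austin",
    {"median_rent_1br": "$1,450", "sample_size": "312 listings", "month": "2024-05"},
)
print(prompt)
```

In a full pipeline, each of the four prompts would then be sent to the model separately, so every generated section is grounded in data competitors' ChatGPT wrappers cannot reproduce.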

Niklas (01:05:13.695)
Got it. This is a community question, and also one for myself: do you think more people should get a SuperMe?

Ethan (01:05:21.986)
Yes. Well, more people should use SuperMe for sure. I’ll explain what SuperMe is. SuperMe is my friend’s company where you can feed in your thought leadership, LinkedIn content, and stuff like that. So you can have a SuperEthan that takes all of my webinars, my articles, my LinkedIn posts, and things like that, and then you can ask AI Ethan, you know:

What tools do you like? How would you approach building an AEO strategy? And the answer is a summarization of my thought leadership. So it’s not a derivative of public information; it’s Ethan’s specific thought leadership. So it’s quite useful. But it’s only as useful as the input. So if you have a bunch of novel ideas that you’ve documented, then you should have a SuperMe. If you haven’t done that, no judgment, but there’s probably no reason for you to have a SuperMe.

So that’s how I think about it. But everyone should be using it. I’ve had over 500 questions asked to SuperEthan and the answers are pretty good.

Niklas (01:06:27.197)
Yeah, it looked very interesting. Probably you have to do interesting stuff and have interesting things to say so that you qualify for having a SuperMe. I actually have two more questions. One actually came from your colleague Emily, so don’t blame me, please. She wanted to know: what’s your favorite Backstreet Boys song?

Ethan (01:06:55.118)
Okay, here’s my answer. I have to say, NSYNC is my favorite over the Backstreet Boys, and my favorite song is Tearin’ Up My Heart. And my favorite performance is, I think, the 1999 MTV VMAs with Britney Spears and NSYNC doing Tearin’ Up My Heart. That’s one of the best performances of all time. Okay, but the answer for the Backstreet Boys is, I would say, Shape of My Heart is my favorite Backstreet Boys song. I very much want to see them at the Sphere in Las Vegas, so I hope that they continue their residency so that I can see them.

Niklas (01:07:33.311)
Okay. And in this case, you wouldn’t risk asking Reddit Answers whether it’s good, because if it says it’s not good, you don’t want that spoiled, since you definitely want to see it.

Ethan (01:07:47.777)
I’ve heard good things. I think I checked Reddit Answers and I heard good things. One of my biggest regrets is that I didn’t get to see Britney at her residency in Las Vegas, and I’ve never seen her perform. So I can’t make that mistake again, and I have to see the Backstreet Boys.

Niklas (01:07:49.768)
Ha

Niklas (01:08:02.608)
Okay, thanks. Then the last one. I think I actually just bluntly stole this from Lenny, but I also started asking it on my podcast, and the answers were so great that I just wanted to have it in here as well: what’s something that we didn’t talk about, but should have talked about?

Ethan (01:08:26.134)
Where to get information, like who to read, where to get information about the subject, is what I would say. Because there’s not that much information. I mean, there are ideas, there’s conversation, but there’s a lack of first-party experiment data.

Niklas (01:08:35.314)
Yeah, what can?

Niklas (01:08:48.37)
What would you recommend people go to, or where do you yourself get your information from?

Ethan (01:08:55.854)
Kevin Indig does good stuff. Lily Ray does good stuff. Ahrefs, Semrush, and Surfer all do pretty good thought leadership and studies. And there are probably others. I have a full-time job, so I’m not spending all day long reading other people’s stuff. So there are probably many people I haven’t read who are doing interesting things, but those are the people that I personally read and find useful.

Niklas (01:09:25.83)
Awesome. Ethan, this has been a very insightful conversation. I feel like it’s a very decent follow-up to Lenny’s episode. Thanks so much for sharing such in-depth, maybe also some raw, thoughts. I really appreciate it. I hope that everybody listening and viewing had a great time. I think it’s obvious: if you want to learn more

about what Ethan is doing, either follow him on LinkedIn or go to SuperMe, I think it’s superme.ai/esmith, so like Ethan, but just E Smith, and ask your questions to SuperMe Ethan before you ask Ethan, because he has a full-time job. So please respect it, people. But yeah, Ethan, thanks so much for coming on. It was a real pleasure.

Ethan (01:10:22.511)
Thank you for having me.