Mount AI, Fan-out research, Gemini vs OpenAI, Profound $96M Series C | #MonthlyLandwehr

Claude Cowork is replacing agencies. A Mac Mini is the new must-have for AI enthusiasts. And Bing just gave marketers something Google still hasn’t. A lot has happened in the last weeks, so it’s time for a new episode of #MonthlyLandwehr.

Malte is CPO & CMO at Peec AI, one of the leading AI search analytics platforms, and one of the brightest minds in the field. So I want to take all the news, new research and interesting developments and get his perspective on them.

What we covered in this episode

  • Why one-click AI content is creating “Mount AI”, and why the crash is inevitable
  • Bing’s new fan-out query data in Webmaster Tools: exciting or overhyped?
  • Why Perplexity is being left behind and how Gemini is catching up to ChatGPT
  • The hidden insight about English fan-out queries and what it means for non-English markets
  • Profound’s $96M Series C at unicorn valuation and why the math only works if they’re replacing agencies

Check out the full #MonthlyLandwehr episode

Check out the episode on YouTube, Spotify, or Apple Podcasts.

YouTube


Auto-generated Transcript

Niklas Buschner (00:01.942)
Claude Cowork is replacing agencies. A Mac Mini is the new must-have for AI enthusiasts. And Bing just gave marketers something Google still hasn’t. A lot has happened in the last few weeks, so it’s time for a new episode of Monthly Landwehr. My guest again is Malte Landwehr, CPO and CMO at Peec AI, one of the leading AI search analytics platforms. Malte is one of the brightest minds in the field, so I want to take all the news, new research and interesting developments

and get his perspective on them. Good to have you on again, Malte.

Malte Landwehr (00:36.812)
Happy to be here.

Niklas Buschner (00:38.088)
Nice. What was your highlight in the last four weeks?

Malte Landwehr (00:41.934)
My what? Ah, my highlight. A couple of X and LinkedIn posts from Lily Ray and Glenn Gabe talking about AI content generation at scale, where they said a lot of things that I have also said or thought before, and shared, I think, 20 examples of SEOs or GEOs

Niklas Buschner (00:42.982)
your highlight, what was the thing that excited you the most?

Malte Landwehr (01:11.694)
who thought that they could click a button in a tool, publish a lot of content, and then it works. And I think it was Glenn who coined this term of Mount AI, where your visibility increases and then decreases and it looks like a mountain. And of course, once that happens, once you don’t rank anymore in Google, you are also not showing up in the grounding process of any LLM-based answer engine. And that…

More examples of that have been shared. People are becoming more aware. That was my highlight. Also, a lot of people wrote on X like, hey, I’ve been using a certain tool, I’m not naming them here, and my visibility tanked. And then they also showed their internal Google Search Console screenshots. And then other people replied, hey, I use the same tool, the same thing happened to me. So I really hope that people now wake up a little bit and see that some of these shortcuts are only temporary measures.

Niklas Buschner (02:08.598)
Okay, so you’re saying one-click AI-generated content is not the loophole that a lot of people thought it was.

Malte Landwehr (02:18.175)
It can work crazily well short term. It’s not a good long-term strategy. And of course a lot of other things happened, but that one is my personal highlight.

Niklas Buschner (02:23.539)
Okay.

Niklas Buschner (02:28.414)
Okay, do you remember the SEO heist from Jake Ward?

Malte Landwehr (02:32.577)
Yeah, I mean, he was one of the first ones to do this, right? Create content at scale that on paper potentially has good quality if you go through a checklist, but actually adds nothing to the internet, has zero information-gain contribution. And that’s exactly the kind of content Google doesn’t need and doesn’t want to index. And if it’s not in the search engines, it’s not going to end up in the LLMs.

So yeah, I mean, he was like the first one who very publicly did this, I believe.

Niklas Buschner (03:06.774)
I just recently talked to Steve Toth from Notebook Agency. He also runs the SEO Notebook Newsletter and the AI Notebook Newsletter. And I also talked to him about the SEO heist and he said, and I think there’s a lot of reason to believe him, that Google wanted to set an example when they basically completely de-indexed the Causal app website, which was basically the website where Jake Ward put on all the content. And finally, he added that, as you also said yourself,

If you look at the content from a checklist perspective, it was okay, but actually the Excel formulas, et cetera, didn’t really work. So it looks like content, but the whole depth is completely missing. Yeah, I don’t know. Hopefully people will take this as a wake-up call not to engage in these short-term measures. Something we talked about last time was Perplexity basically

being left behind. How did your prediction there hold up? Did it age well?

Malte Landwehr (04:14.786)
Yes. I mean, just today when I scrolled through my X feed I saw a post from someone: hey, remember Perplexity, what happened to them? And also, I’m preparing for a couple of conferences right now, so I looked at market share data, and Perplexity is the biggest loser among the big search engines and big LLMs. Just in October they were bigger than Grok, and now Grok is 50% bigger than Perplexity.

Also Claude, which I would say is primarily used via the API: based on just website traffic, they are now bigger than Perplexity. So they are really being left behind. I don’t understand why people even talk about them anymore. I think they are not relevant. They are good storytellers still.

Niklas Buschner (04:59.55)
Okay, then once and for all: this was the nail in the coffin for Perplexity. This is the last time that we will talk about Perplexity in this format here. But basically the winner, you said, is Grok, but probably also Gemini, right? Taking the spot of Perplexity in some way.

Malte Landwehr (05:16.974)
Yes. Gemini. I mean, I think the question is even: is Gemini gonna take the spot of OpenAI? Because if you look at just monthly active users, they are catching up like crazy. If you look at the actual usage, ChatGPT is still very far ahead, more than twice the size of Gemini, as far as we can estimate usage on the web. Of course, there’s a lot of hidden Gemini usage

directly in Google Sheets, et cetera, et cetera. But the same can be true for OpenAI via various APIs. So it’s kind of difficult to understand total market share. But as a consumer product, yes, they are catching up like crazy. And the product is really good. Right now Gemini is in a very good spot, I believe. The UI, I believe, is still significantly worse than ChatGPT’s. Maybe that’s just because I’m so used to ChatGPT,

but I have also heard and seen it from others: how you can manage your chats, find chats, et cetera, et cetera, is all a bit less well-rounded in Gemini still, I would say. But these are things Google is actually good at, and they should be able to easily catch up if they prioritize it.

Niklas Buschner (06:35.092)
Not sure if you can share something about it, but maybe just a tendency. What do you see people being most interested in in terms of tracking? For example, on the Peec AI platform, which models or which chatbots do people want to track their visibility on? Is Perplexity still super relevant there, or has it also shifted towards Gemini, for example?

Malte Landwehr (06:58.894)
So we had Perplexity as a default selection when we started, and a lot of people want consistent tracking to do year-over-year comparisons, et cetera, et cetera. So it’s still very big for us. And since it was a default selection, there was nobody saying, I want it, right? Because you just got it. I believe that…

The demand from customers basically mirrors what we just talked about. So there’s more interest in Gemini than there was a year ago. There is a lot more interest in Grok than there was a year ago. That is, I think, the big shift.

Niklas Buschner (07:37.302)
I think you also recently launched Copilot, right? As a model that you can track. Because I remember we were talking to a client of ours, and they basically gave us the insight that a lot of their customers told them they use Copilot. And so we were obviously eager to also monitor the visibility there. Is this something you see more people being interested in, basically, let’s say, target-group-specific models and tools?

Malte Landwehr (08:07.414)
Yes. Companies that sell to large enterprises want Copilot, which is why we added it. And the reason is, of course, if you work at a very large enterprise, Copilot might be the only LLM that you have access to and the only LLM that you are allowed to use. So you use it to make your purchasing decisions, to research software, to research agencies. So anybody selling to these large enterprises should care about Copilot.

In terms of market share, I think Copilot is actually smaller than Perplexity. But Copilot is also very hard to measure in terms of market share, because whenever Microsoft 365 Copilot, formerly known as Microsoft Office 365, formerly known as Microsoft Office, does a rebranding, Copilot is suddenly available under a different subdomain.

So it’s kind of difficult to keep track of all the places where you can access Copilot. But yeah, in terms of total market share, I think Copilot is still pretty small. Also, almost nobody I know uses it privately as a consumer LLM, but there are people who do.

Niklas Buschner (09:26.742)
Do you think they do this on purpose to make it hard for people to assess the market share, or is it just something where… no, okay.

Malte Landwehr (09:32.678)
No, no, they just don’t care. And when there’s a redesign, it’s brand driven and they want stuff under a certain domain. And whether that is called edgeservices.office.com or copilot.microsoft.com or copilot.office.com or copilot.business.office.com, et cetera, they don’t care. They just do it and make my life more difficult when I try to measure the traffic and attribute it.

Niklas Buschner (10:01.27)
Okay, please stop doing this, Microsoft, please don’t cause more headaches for Malte. But speaking of Microsoft, Copilot, et cetera, there was another thing that was, I would say, somehow big on LinkedIn and in the whole AI search space, concerning Bing Webmaster Tools and something they introduced there. Maybe you can enlighten people a little bit who haven’t been aware of the announcement.

Malte Landwehr (10:27.926)
Yeah, so Bing launched a feature where you can see URLs that were used for grounding and you can see fan-out queries. I personally am not a huge fan of the product, for a big variety of reasons, but it seems to be important to a lot of SEOs. I’m a little bit shocked that many SEOs have apparently just never looked at fan-out queries at scale. There was no need to wait for Microsoft Bing.

You could have at any time looked in the JSON files that your browser is loading from ChatGPT. There are plugins for it. There are tools that extract these, like Peec AI, for example. And those are the real fan-out queries from ChatGPT, which are actually relevant and interesting. Bing does not clearly communicate what they show in the Bing Webmaster Tools. My assumption is it’s Microsoft Copilot,

Yahoo Scout, which is an LLM chatbot that is officially based on Bing for grounding, and probably a couple of other official partners. There are maybe some queries from OpenAI in there, if they are still using Bing, which I believe is unclear at the moment. What is clear is that they are using Google; that is known and has been proven again and again and again. Do they also still use Bing? I personally have no idea

as of today. So maybe there are some in there. But if you look at how the relationship between Microsoft and OpenAI has developed, I kind of doubt that they use an official API. I think if they were to use it, they would just do scraping, like they do with Google. And then Bing has no way of knowing that these are fan-out queries used for grounding and has no mechanic to report on them. So I would assume it’s just Copilot

and Scout. But this is the problem with the data: it’s not known, it’s unclear. And anybody who ever really cared about Google Search Console data knows how much time you need to invest to actually understand, for example, the difference between the URL-level and the keyword-level data. And I have noticed that even super experienced SEOs often have no idea what the numbers mean, because they don’t think about what happens if

Malte Landwehr (12:54.626)
two URLs from your website show up for the same search term. What happens if the same URL shows up twice, because it can be different sub-elements? How is the ranking calculated for certain carousels? There are many, many things where people are unaware of what they are looking at in Google Search Console. And this is after people have written hundreds of blog articles and whole books about Google Search Console.

And still the vast majority of SEOs can’t properly explain the data, especially not what happens once you start filtering, or once you start exporting the raw data and then putting it together again on your own. So I think it’s interesting what Microsoft is doing there. I personally don’t find it particularly actionable, because I have had access to the ChatGPT fan-out queries for a while now.

But it’s definitely good that more people are getting educated: hey, you can find out what URLs are used for grounding, or you can understand what fan-out queries exist. This is good education, but in the end you need different, better data sources than the Bing Webmaster Tools.
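As a side note to Malte’s point about the JSON files your browser loads from ChatGPT: the payload structure is undocumented and changes over time, so the field names below (`messages`, `metadata`, `search_queries`, `q`) are purely illustrative assumptions, not the real schema. A minimal extraction sketch in Python could look like this:

```python
import json

# Hypothetical example payload: the real ChatGPT conversation JSON is
# undocumented and its field names change, so this structure is invented
# purely to illustrate the extraction idea.
sample_payload = json.dumps({
    "messages": [
        {
            "metadata": {
                "search_queries": [
                    {"q": "best crm software 2025"},
                    {"q": "crm vergleich kleine unternehmen"},
                ]
            }
        },
        {"metadata": {}},  # messages without search metadata are skipped
    ]
})

def extract_fanout_queries(raw_json: str) -> list[str]:
    """Collect every fan-out query string found in the payload."""
    data = json.loads(raw_json)
    queries = []
    for message in data.get("messages", []):
        for entry in message.get("metadata", {}).get("search_queries", []):
            queries.append(entry["q"])
    return queries

print(extract_fanout_queries(sample_payload))
# ['best crm software 2025', 'crm vergleich kleine unternehmen']
```

In practice you would capture the real payload via browser DevTools or a plugin and adapt the field names to whatever the current schema actually contains.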

Niklas Buschner (14:08.15)
All right, there’s a lot to unwrap here. So first of all, I can 100% confirm: I myself spent a lot of time understanding the different dimensions. I remember setting up Looker Studio dashboards and then thinking about what the difference is between the URL dimension and the keyword-level dimension in the data from Google Search Console. So let’s assume that not everybody listening here has figured this out. Can you explain very simply

what the difference is between these two types of data, before we go into the Bing topic and the fan-out queries topic again? Because there is obviously a lot more to dig into here.

Malte Landwehr (14:50.518)
Yeah, so the most important thing, I believe, is that the URL-level data is normalized per URL, per search, per impression. Meaning that if you sum up all your URLs, you will count certain searches twice. And with sitelinks, for example, you might actually be counting the same search eight times. But then, on the other hand, the URL-level data is complete.

You should have every single impression of your website in the URL-level data, while the keyword-level data is not complete. Some people call it sampled. I believe sampled is wrong, because it’s not sampling. It is not a hundred percent documented; actually, it’s not documented at all what is happening.

Based on some things that Google employees have said to some of their business partners, et cetera, et cetera, it looks like there’s a minimum threshold for how often a keyword has to be searched per month until it is reported. It doesn’t even matter whether you showed up there; five or six was, I think, back in the day, the minimum amount. So you can see keywords where you only had one impression, but very, very surely these keywords had other impressions for other websites to reach that minimum threshold.

And when I was at Idealo, we even had cases where on one day, supposedly, we didn’t have any impressions for the keyword iPhone 15. Maybe that happened, maybe we didn’t rank for a whole day, but we should have seen that in the traffic pattern. So it’s very likely that there was an issue on Google’s side, where for that one day they classified iPhone 15 as a keyword for which people were not seeing data in Google Search Console, which is ridiculous. There are potentially also keywords with higher search volume where something looks like

a credit card number or password, where Google might also decide not to show it. And then, of course, some people might say maybe it’s not all about privacy; maybe Google also wants to take away data from people. So maybe they just have rules that are very likely to not show certain keywords. And then there are things like: you have a slider on position one that has, I don’t know, five elements. All of these are position one.

Malte Landwehr (17:06.542)
All of them are counted as position one. And this is especially annoying if you are in the European Union and you are in one of these DMA compliance mechanisms. So for example, above Google Maps, you sometimes have these little bubbles of, like, Facebook, Idealo. Those are also counted as position-one impressions. They have a click-through rate of 0.0001% or something like that, but they mess up your position-one CTR if you start filtering.

Or when you have the sidebar on the right on desktop, you actually first count the left column and then the right sidebar. So if you have a link in the knowledge graph, that can be, like, position 16 on the first page. That can also really mess up your reporting. And that’s just the data foundation. Then, when you start…

filtering. I mean, just two simple filters, like all keywords with your brand name and all keywords without your brand name, and those might not even add up to 100%. Then you can export the data, or you can call it via the API. But in the API there are empty entries, and sometimes you have a whole page of empty entries, and then on pagination page 53 there’s another little bit of data coming. You need to find that out.

Or you can do lots of searches and filtering on the API to get out more data. In the end, the only thing that really works is doing the export to… I just forgot the name of the Google database that you have access to… BigQuery, exactly, where you can export it. There you can actually get all the data and then analyze it. Or there’s also the option to get that data directly into Looker Studio, but that actually breaks for large websites,

Niklas Buschner (18:40.597)
query.

Malte Landwehr (18:56.396)
because for very large websites there’s a hard limit, and it takes the keywords by number of clicks or number of impressions. So you only get your high-traffic keywords, but that is also documented nowhere, and you don’t see it until you run the analysis and realize: wait a minute, why is my average click rate so good? Why is every keyword a high-traffic keyword? So there are a lot of things that are very difficult about Google Search Console data. And I would always do the

export and then do the analysis in BigQuery, at least for a large website. When I was at Idealo, whenever we saw a team member screen sharing or making a screenshot where they had applied a filter in the web interface, it was almost a meme that somebody in the team would speak up: just a reminder, whatever you are looking at there is kind of random, and we can’t actually make any interpretation out of it. And it’s true: for large websites, you need the

the BigQuery export and then analyze the data there.
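Malte’s point about URL-level normalization can be illustrated with made-up numbers: one search where two of your URLs appear in the results produces two rows at the URL level but only one row at the query level, so summing URL-level impressions overcounts the number of searches. The URLs and figures below are invented for illustration.

```python
# Toy illustration: 100 searches for "iphone 15" where two URLs from the
# same site both appeared. URL-level data counts each search once PER URL.
url_level_rows = [
    {"url": "https://example.com/a", "query": "iphone 15", "impressions": 100},
    {"url": "https://example.com/b", "query": "iphone 15", "impressions": 100},
]
# Query-level data: the same 100 searches counted once.
query_level_rows = [
    {"query": "iphone 15", "impressions": 100},
]

url_sum = sum(r["impressions"] for r in url_level_rows)
query_sum = sum(r["impressions"] for r in query_level_rows)

print(url_sum)    # 200 -- the same 100 searches counted once per URL
print(query_sum)  # 100 -- each search counted once
```

With sitelinks, where up to eight of your URLs can appear for one search, the same effect multiplies, which is why summed URL-level impressions and query-level impressions rarely agree.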

Niklas Buschner (19:57.306)
Now that’s what I call a rabbit hole that we just went into. No, I love it. Love it. That’s why we do this format. But still, even though probably a lot of people should spend more time in Google Search Console, why do you think people are so excited about the new Bing Webmaster Tools feature and the new data that’s popping up?

Malte Landwehr (20:02.06)
Sorry about that.

Malte Landwehr (20:21.004)
Because it has not been accessible to them previously. Not everybody has access to fan-out queries from ChatGPT. I mean, they could; it’s not difficult, but they never bothered or were never given the tools to do it. And now you see fan-out queries for the first time, and hey, of course that’s exciting. I would be excited as well. Or they have never looked at which of their URLs are actually being used for grounding, which you could, to a certain degree, estimate based on your log file analysis.

Or you could look at the sources in ChatGPT, or use software like Peec AI. And if you have never done that, seeing these are URLs that were really used for grounding, like, this is the truth, this actually happened, Bing says so: I think that is exciting, and it’s very easy to trust the data. If you have never looked at any GEO, AEO, AI search tools, of course getting the data from a trusted source like Bing is easy. You know it’s correct. I mean,

you actually don’t know what it is, so it’s very easy to say it is correct. But if you don’t care very deeply about how data is defined, where it’s coming from, and what it means, then it’s very, very easy to trust Bing, just like it’s easy to trust Google Search Console.

Niklas Buschner (21:31.744)
Do you think Google will follow?

Malte Landwehr (21:36.458)
I think Google has very few incentives to do so. I believe this is a decision made by some regulatory compliance and lobbying team at Google: do we need to offer this to make some stakeholders happy? Based on the current US government, I don’t think it is needed, because right now Google is protected from

the European Union. It’s not possible for the European Union to levy any fines against Google, at least not realistically. But in the future this might change, of course.

Niklas Buschner (22:20.054)
Let’s dive a little bit deeper into fan-out queries, because you already mentioned them, and I saw that you did some very interesting research on them that is, I would say, particularly relevant, at least in my interpretation, for non-English markets. So what did you find there?

Malte Landwehr (22:36.908)
Yeah, so it wasn’t me finding this. All of this research at Peec AI is done by Tom and Tomek, two of my team members. And in this case, what we found out is that when you write a prompt in German with a German IP address, very, very likely one of your fan-out queries is going to be in English. It’s usually the second fan-out query. And those, of course, then also pull in English-speaking websites as sources.

And we tested this for a bunch of different languages. Between 60% and 90% of the time, at least one of the fan-out queries is in English when the prompt itself was not English. From anecdotal evidence, I think for e-commerce fan-out queries this value might be even higher. But it’s only anecdotal evidence; I haven’t analyzed it.

And yeah, this is interesting, of course, right? Because it means that even if you are a German-speaking brand that only sells to German-speaking people in Germany, whatever is written about you on the English-speaking internet is relevant. And first of all, let’s maybe also talk about why it happens. To be honest, we don’t know. One hypothesis is that there is just more English-speaking content on the internet and it makes sense to do it.

And I think for many topics that’s actually true. If I ask in German, I don’t know, what’s the capital of the United Kingdom, how tall is the Eiffel Tower: these are things where this information is more often available in English than in German. Another hypothesis is that English is a very token-efficient language. If you express something in English, it’s generally a lot shorter than the same thing expressed in

German or Turkish or Thai or many, many other languages, and this also translates to a lot of token efficiency. And of course, many of the models we are using in Europe are trained primarily on English-speaking content by English-speaking data scientists, so even if you prompt in German, sometimes all the reasoning is happening in English. By the way, with the Chinese models it also sometimes happened that they start reasoning in Chinese when you ask questions, and then it’s weird, because you look at it and you don’t know what’s going on anymore.

Malte Landwehr (24:56.814)
So it’s unclear if it’s intentional or a side effect of something, but it is happening. And what does it mean for me as a brand? It means I should not be surprised if I see English-speaking sources, and I need to be aware that whatever is written there about me is relevant. And it gives a huge advantage to brands from English-speaking countries who are internationalizing, because

imagine you have a big footprint in the US and you come to Germany. You usually have zero footprint; it’s an uphill battle. But now in LLMs, if one fan-out query is English, you actually have a huge advantage, because for, let’s say, one third of the fan-out queries, or 25% or whatever, you are already there: you already have the English-speaking content and you are dominating. So I think it’s a huge advantage, especially for US companies.

Because when we say English-speaking internet, it’s the US-centric internet, at least. It’s a huge advantage for them and a huge problem for pure local players. Somebody like Audi will not suffer, because there is also English-speaking content about Audi already. But a pure local brand that is competing with international brands has a disadvantage,

because they most likely don’t show up in the English fan-out queries, because they don’t have content for them.
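The kind of analysis Malte describes, measuring how often a non-English prompt produces at least one English fan-out query, can be sketched as a small script. The prompts, fan-out queries, and the stopword heuristic below are all invented for illustration; a real analysis would use a proper language-detection library over real fan-out data.

```python
# Naive heuristic: a query "looks English" if it shares a word with a tiny
# English stopword list. Real work would use a language detector instead.
ENGLISH_HINTS = {"the", "best", "for", "how", "what", "in", "of"}

def looks_english(query: str) -> bool:
    words = set(query.lower().split())
    return bool(words & ENGLISH_HINTS)

# Invented sample: German prompts mapped to their observed fan-out queries.
prompts = {
    "bestes crm für kleine unternehmen": [
        "crm vergleich kleine unternehmen",
        "best crm for small business",   # the English fan-out query
    ],
    "wie hoch ist der eiffelturm": [
        "eiffelturm höhe meter",
    ],
}

with_english = sum(
    1 for fanouts in prompts.values()
    if any(looks_english(q) for q in fanouts)
)
share = with_english / len(prompts)
print(f"{share:.0%} of prompts had an English fan-out query")  # 50%
```

On the two toy prompts this yields 50%; the Peec AI team reports 60% to 90% across languages on real data.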

Niklas Buschner (26:28.456)
And would you recommend that a pure local player that has only published content in German now maybe look at their most important pages, whatever this means, let’s say by rankings, traffic, conversion impact, whatever we might find meaningful, and then create English versions of those, just to make sure you get better coverage by these English fan-out queries?

Malte Landwehr (26:57.914)
I would not check what my most converting, et cetera, pages are. I would check what kind of pages are being cited in the prompts that I care about. We actually did another analysis on this that we will publish soon. If you have an informational prompt, or a commercial prompt that is not yet transactional, then it’s very unlikely that a product page will be cited. So even though you make all your conversions on a product page,

you really don’t need to translate it to influence these kinds of prompts. So I would go the other way: see what kind of content is being cited and ask, should I create something similar to that? I wouldn’t say I can recommend this, because I would have to test it a couple of times to actually give a good recommendation. But I would consider it. If I were an in-house SEO right now, tasked with AI visibility in the scenario you described, I would…

think about this and probably run a test. I would also think about whether this content needs to live on my own website, or whether I can just publish it on Medium or on LinkedIn Pulse or on dev.to, or just make a profile on a couple of websites. There are many potential scenarios.

Niklas Buschner (28:14.559)
Maybe something I can add here, just from a very anecdotal experiment we did with a client. We had German content around, let’s say, mobile device management and onboarding automation solutions. And we didn’t do it on purpose as some GEO experiment, but we had to create a German and an English version just for the CMS setup. And interestingly, we also tracked it with Peec AI, and for some prompts both the German version and the English version

were cited. So we basically had double citations. I also did a post on LinkedIn. I think some people said, do you really feel like this is something that is sustainable? I’m not sure, but I found it interesting to see it happen, because I obviously immediately, as SEOs always do, thought about how to exploit this. Then I said, okay, let’s keep the users in mind. So is it valuable for the users? Because we have English-speaking target customers and German-speaking target customers.

Then I would say, yes, it makes sense from a user perspective. But if you would do it just to trick the fan-out queries, probably not a good idea.

Malte Landwehr (29:23.093)
Yeah, I think there are some things you can technically do to trick the fan-out queries. I’ve done that in SEO a lot. The important thing is that from the outside it’s not clear you are doing it. Without naming my employer now: at a company where I once worked, I was running a lot of SEO projects where people from the outside then asked me, hey, are you doing this for SEO? And I was like,

I have not really heard of that project. I think our sales team is doing it, or it’s something from the social media team or the content team or whatever. But in reality, it was a pure SEO project, just so well hidden that even SEOs from the outside were thinking, I’m not sure it’s an SEO thing they are doing there. Maybe it is. So I think it’s totally fine to create pages just to influence the fan-out queries. It just can’t look like that’s the case.

And even when someone looks very deeply, it cannot look like that. A very experienced SEO can have a doubt in their head, but if you look at it, Niklas, and you know it’s an SEO thing, then it’s not good enough. It has to be so good that you are asking yourself: this could be done for SEO, but I’m not entirely sure. So, for example, if the Excel formulas that are shown are not working, then it’s not good enough, right? It has to be really top notch, perfect.

So if it’s an English-speaking website, maybe also put up a video interview with the CEO in English, so that even if a Google quality rater goes on the website, they are convinced: yeah, this is a legitimate English-speaking version of their website. When your competitor looks at it, they should not think, I’ll report this to Google as spam. They should think, why do they have an English-speaking version? When a journalist who hates you

looks at the website, they should not be able to find anything sketchy about it. That is how well you have to hide your SEO measures, in my opinion, on very, very large websites, where most of my SEO experience is. But I wouldn’t say you can’t do it just to influence things. I’ve done so many things around cloaking, adding content, hiding content that I, in the end, did just for SEO, that sometimes also turned out to influence conversion rate and stuff like that.

Malte Landwehr (31:44.151)
But yeah, I think it’s sometimes okay to do things just for SEO. They just can’t be negative for the user experience, and people from the outside cannot notice.

Niklas Buschner (31:54.855)
Okay, if at some point you write your memoirs or your biography, I would suggest maybe Life of an SEO, just an idea for the title. I hope that you will share some of the behind-the-scenes from these experiments that you did, because, at least for me, you created insane FOMO now to learn more about this.

Malte Landwehr (32:22.903)
As long as any of my customers or former employers exist, I will take these secrets with me. If some companies cease to exist at some point, maybe I can talk about things. But I will never write my memoirs. Never ever. There’s no scenario where I would do that.

Niklas Buschner (32:46.023)
Okay, now, insane bridge: maybe you should let Claude Cowork or OpenClaw write your memoirs. I want a little clap-clap sound effect introduced here. No, just kidding. In the last two weeks or, no, I think it’s not two weeks, like in the last, I would say, four to six weeks, at least for the AI nerds on X. I think most of the listeners here are not on X all the time.

Malte Landwehr (32:55.843)
I don’t…

Niklas Buschner (33:15.349)
there are two new kids on the block. One is Claude Cowork and one is OpenClaw. Have you tried both yet?

Malte Landwehr (33:22.869)
I have not. I wish, like, I have this itching in my fingers to buy another MacBook, create a new virtual credit card, and just install OpenClaw and give it access to this credit card. But I currently work 70 hours a week on Peec AI and I have zero time for distractions. So no, I’m not working with them. I have some coworkers looking into both.

At some point somebody will sit me down and tell me: Malte, I know you’re almost 40 now, but you need to change how you work, and give me a quick introduction. But right now I just lack the time for it, unfortunately. But I would love to.

Niklas Buschner (34:05.043)
And do you use something else, not Claude Cowork, but maybe Gemini or whatever, as your own daily AI companion or AI driver?

Malte Landwehr (34:16.831)
Yes, like, my main tool is ChatGPT, just because it works well for my private and business use cases. I have more and more business use cases that I have moved to Gemini because it’s sometimes just better. And I use Manus a lot. I think this month alone, I upgraded my Manus account five times because it’s really powerful for complex multi-step tasks.

Niklas Buschner (34:46.005)
Okay, can you share one example for people to have a better visual imagination of it?

Malte Landwehr (34:52.855)
I mean, very complex research, for example, where you need to sift through, I don’t know, 500 documents, and only the first five documents will then tell you what the next steps are. This is something where Gemini is already so much better than OpenAI’s Deep Research. But something else for Manus: if you don’t have the biggest subscription to Similarweb,

then the Manus skill for Similarweb actually gives you access to a lot of Similarweb data. So you can give it a hundred domains and say: give me the traffic that these websites had in Germany last month. Or just conduct very, very deep research and then prepare a text or a briefing or something like that. For example, if I had not forgotten about it, I would have given Manus the task to create a briefing for me for this episode.

Niklas Buschner (35:40.061)
Okay.

Malte Landwehr (35:50.569)
I did it last time, I forgot to do it today.

Niklas Buschner (35:53.832)
Yeah, no worries. I mean, this is why I’m here doing the heavy lifting. And again, a reminder for everybody from last episode: if you have positive feedback, please send it to Malte. If you have any negative feedback, send it to me. So Malte keeps being super happy and super engaged in doing this. Just a quick favor you can do me.

Now let’s talk about something that was announced very, very recently, on the day that we are recording this, which is a huge funding round for, I would say, a competitor, or maybe just a tool that operates in a similar space as Peec AI, which is Profound. They announced a $96 million Series C at a unicorn valuation. Do you have thoughts on that?

Malte Landwehr (37:42.485)
Of course I have thoughts on it. The question is what are my thoughts that I can mention publicly.

Niklas Buschner (37:47.679)
I always like that you take my question and then turn it into an even better question.

Malte Landwehr (37:52.567)
I mean, they are the competitor, of course. Like, they were the first ones who started. They always say they started 18 months ago; based on the founders’ LinkedIn profiles, I actually think they started 24 months ago, but I’m not sure. So they have a head start over everyone else in the space. And they have been very aggressive in terms of the valuation and raising money. And if you logically think about it,

building an AI visibility and AI optimization tool cannot be enough to justify that valuation. Like, nobody invests at a 1 billion valuation because they hope the value climbs to 2 billion, right? That’s not how venture-scale investing works. And this was a VC; this was not a private equity buyout or something, who might be happy with the 2x. So if you then look at what they also announced, which is their…

what they call agents, which to me is a workflow builder like n8n in a more expensive version. The only reason they can justify that valuation is if they either replace marketing employees, or replace agencies, or automate significant parts of the marketing stack.

I’m not saying they are doing this, like, I have no knowledge of that, or if I did, I could not speak about it. But the only thing that logically makes sense to me is that their story is actually a lot more than AI search. And I think they will actually become less of a competitor because of it. At that valuation, probably not even SEMrush is a relevant competitor anymore for them. They need to take on

Salesforce Marketing Cloud, Adobe, these kinds of players. Otherwise the valuation and the amount of money they raised make no sense to me. Like, Profound has more open jobs right now than we have total employees. And I believe we are, based on team size, also the second largest in the space. Their marketing team is, I think, eight people.

Niklas Buschner (39:55.475)
And

Malte Landwehr (40:13.591)
That’s what they communicated today. We built our product with fewer than eight people in the whole company, and that includes interns. When I signed, there were fewer than eight people; I think when I joined, there were eight, nine people. So yeah, very, very… I mean, just compare it to current SEO companies.

Some of them would probably be at a billion-dollar valuation if they were in the venture game, or if anybody would still invest in them. But the only company that actually reached unicorn status as an SEO tool was SEMrush. And of course, Ahrefs would be there if they were a venture-funded business. And probably seven, eight, nine years ago, BrightEdge was there.

I don’t think BrightEdge is a unicorn nowadays, because where’s the growth? I also don’t think any of the other players in the space are at unicorn status. So yeah, it’s a crazy high valuation, but it shows how bullish people are on the space, how much change people expect to happen in the organic marketing space.

Niklas Buschner (41:34.485)
Hmm. And do you think… because there was a comment on LinkedIn that I read today about the funding announcement that I found interesting, where someone said, so I’m quoting or paraphrasing: I think this will help create more trust in the whole AI search industry. Do you also think this is something where you can see a connection, like these big funding rounds creating more trust?

Malte Landwehr (42:00.279)
Among enterprise buyers, yes. Among very small companies, I’m not sure. I think there might even be a bit of a negative reaction to so much VC funding, yada, yada, yada, especially in Germany, less so in the US, where people are much more open about VC funding and have more understanding that it’s the only way to build a software company in a competitive space nowadays.

But in Germany, I’m not sure that it actually leads to more trust. Also, we don’t know what Profound will do with that money. And without any judgment: Profound is the only company I know that has repeatedly deleted research that they previously published from their own website. It is up to anybody to interpret what that means. But it’s not something that we have ever done. It’s not something I see Sistrix or SEMrush or Ahrefs doing.

Profound has done it with multiple of their research articles after they have been cited by everyone on LinkedIn.

Niklas Buschner (43:06.485)
I feel like, on the part you’re stressing about the research, I also noticed that a couple of companies, not naming any names, but definitely not Peec AI, on this one I want to be very clear, did research, but I feel like it was just a marketing narrative with a sprinkle of data analysis here and there.

And it was just labeled as research to push the narrative, right? It’s something that has become way more popular over the last few years.

Malte Landwehr (43:39.817)
Yes, I’m also seeing it a lot from US-based competitors of ours, sometimes doing an analysis on something like 10,000 chats, which is like anecdotal evidence, I would say. Like, you could have an intern label 10,000 chats. It’s not in any way deeply relevant data.

And we usually try to run our analyses on significantly larger amounts of data. But even then, it’s always a challenge to normalize the data and make sure it’s actually representative, because we don’t have prompt volume. We don’t have one hundred percent representative real user prompts, right? Like, there are all these challenges in the GEO space. And I think when you do research, you need to acknowledge that.

There are players who might decide to ignore it and just publish anyway and say: this is how it is.

Niklas Buschner (44:38.389)
Malte, it has been insanely insightful. Again, thanks so much for taking the time. Already very much looking forward to the next episode. For people listening or watching: if you have any questions or anything, please put them in the comments, no matter where, YouTube or LinkedIn; I think in podcasting apps you can’t comment. If you have positive feedback, send it to Malte. If you have negative feedback, send it to me. Other than that, Malte,

already very much looking forward to the next one.

Malte Landwehr (45:10.079)
Likewise, and you should probably tell people who have positive feedback to give you a rating on Spotify or something.

Niklas Buschner (45:16.061)
Yeah, I added a prequel that will go before our talk where I have the call to action. I’ve been in marketing for 13 years and now I can’t come up with the words "call to action". I have a call to action to like, subscribe, leave a comment, blah, blah. But very, very kind of you to think about that. So, yeah. Thank you so much.

Malte Landwehr (45:46.775)
Thanks. Bye bye.

Niklas Buschner (45:47.029)
Thank you.