How to create listicles that survive Google’s crackdown

What happens when the content format that drives up to 50% of AI search citations is also the one Google just started cracking down on? If you’re a Head of Marketing at a B2B SaaS company, you might have seen the Lily Ray analysis, felt the fear, and now you’re stuck between two bad options: abandon listicles entirely, or keep publishing them and hope for the best. At Radyant, we think both options are wrong. Here’s the framework for the third path.

Key takeaways

  • Google didn’t crack down on listicles as a format. It cracked down on sites publishing hundreds of self-promotional listicles with AI-generated content, artificial year-in-title refreshes, and zero editorial depth. The format isn’t the problem. The approach is.
  • Listicles remain the #1 cited content type in AI search, accounting for up to 50% of top citations. Abandoning them means giving up the single most effective format for GEO visibility at the exact moment AI search is becoming a real pipeline channel.
  • The path forward is restraint plus depth: one excellent listicle with transparent bias disclosure, genuine alternatives, comparison tables with real data, and substantive methodology beats 200 thin ones. Every time.
  • The listicles growing fastest in AI citations in early 2026 share a specific pattern: detailed methodologies, external validation signals, and structured formats with features, pros/cons, pricing, and “best for” sections.

Looking for a shortcut to drive more organic growth from your content, SEO & AI Search efforts? Request a free growth audit from Radyant to get an honest assessment of your organic growth potential.

What actually happened: the listicle crackdown in context

In February 2026, Lily Ray published an analysis connecting sharp organic visibility drops to a specific pattern: SaaS companies publishing large volumes of self-promotional “best of” listicles where they ranked themselves as the #1 option. The data was striking. Multiple sites lost 30% to 50% of their organic visibility within weeks of the December 2025 Core Update and January 2026 volatility.

The industry response was immediate and largely binary: listicles are dead, or listicles are fine. Both reactions miss the point entirely.

Look at what the affected sites actually had in common:

  • Extreme volume: Lily Ray found examples with 76 to 340 self-promotional listicles on a single domain. One site had 191 listicles, all following the same template, all ranking the company’s own product first.
  • AI-generated content at scale: AI detection tools returned high confidence scores that much of the content was machine-generated. These weren’t carefully crafted comparison pieces. They were programmatic templates with product names swapped in.
  • Artificial freshness signals: 38 listicles on one domain were updated to include “2026” in the title with no substantive content changes.
  • Zero editorial methodology: No evaluation criteria explained. No transparent disclosure of bias. No evidence of actually testing or comparing the products listed.

This isn’t a “listicle problem.” It’s a content quality problem that happened to manifest as listicles. As ProgramBusiness noted, self-promotional listicles likely were not the only factor. Many affected sites also showed rapid content scaling, automation, and other tactics associated with algorithmic risk. But the consistency of self-ranking “best” content among the hardest-hit sites suggests this signal now carries greater weight, particularly at scale.

Glenn Gabe of GSQI confirmed the pattern independently, calling it “a cheesy tactic, and sort of embarrassing for companies doing that, but it works (at least for now).” The “for now” part is the key qualifier. What worked in 2024 stopped working in late 2025.

Why listicles still matter more than ever for AI search

Here’s where the conversation gets interesting. At the exact moment Google is tightening its standards for listicle content, AI search platforms are citing listicles more than any other format. If you abandon listicles because of the crackdown, you’re solving one problem while creating a bigger one.

The data across multiple independent studies is remarkably consistent: listicle-style “best X” pages are the most-cited content type in AI search, accounting for up to 50% of top citations.

There’s also a positional advantage that most practitioners overlook. Research shows that 44.2% of all LLM citations come from the first 30% of text. Where your brand appears in a listicle matters for AI citation probability, not just for user attention.

This creates a genuine strategic tension. Google is penalizing low-quality self-promotional listicles. AI search platforms are rewarding well-structured listicle content. The answer isn’t to pick a side. It’s to create listicles that satisfy both.

We’ve seen this play out directly with our clients. At Heyflow, AI-attributed trials are converting at 14.3% compared to an 11% channel average. That AI search visibility is partly driven by structured comparison content that LLMs can easily parse and cite. Abandoning the format would mean leaving real pipeline on the table.

The listicle citation trend is shifting, not dying

Seer Interactive published critical new data showing that not all listicles are trending equally in AI citations. Sites that scaled “best of” content quickly and experienced traffic spikes are now seeing that content decline in the same period that listicle citations dropped. But the listicles that are growing in AI citations share a specific pattern: clear numbered formats, standardized sections including features, pros/cons, pricing, and “best for” content, plus detailed methodologies and external validation signals.

Pages with detailed methodologies were accelerating the fastest into February 2026. This isn’t a coincidence. It’s a clear signal about what both Google and AI platforms will reward going forward.

7 rules for creating listicles that won’t get you penalized

Based on the crackdown data, the AI citation research, and our own experience creating listicle content at Radyant, here’s the framework that separates resilient listicles from risky ones.

Rule 1: A few great listicles, not 200 thin ones

The single clearest signal from Lily Ray’s analysis is scale. Sites with 76 to 340 self-promotional listicles got hit. Sites with a handful of genuinely useful comparison pages did not.

The math is simple. If you have 200 listicles on your blog, you almost certainly don’t have 200 categories where you’re qualified to provide expert evaluation. You have a content farm wearing a listicle costume.

When we created our own “6 best AI SEO agencies for B2B startups” listicle, it was the first of only a handful on the domain. Not because we’re afraid of the format, but because we only have a handful of comparisons where we have genuine expertise, real market data, and something substantive to add.

Rule 2: Disclose your bias upfront

Every self-promotional listicle is inherently biased. Pretending otherwise insults your reader’s intelligence and signals exactly the kind of manipulation that Google is now targeting.

Listicles that position a publisher’s own products as “best” inherently carry bias: they lack third-party validation, clear evaluation methodology, or evidence of objective testing. The fix isn’t to stop including yourself. It’s to be transparent about why you’re there.

In our listicle, we explicitly state: we’re an AI SEO agency, we’re including ourselves, and here’s exactly how we approached the evaluation. This isn’t just ethical. It builds trust with both users and LLMs. When an AI model encounters a listicle that transparently discloses its perspective, it has better context for how to cite and present that information.

Rule 3: Include genuine strong alternatives

The sites that got hit had a pattern: they always ranked themselves #1, and the alternatives were either obscure companies or thinly described competitors used as filler. Readers can spot this immediately. So can Google’s review quality systems.

Your listicle should include alternatives that could genuinely beat you for certain use cases. If a reader’s needs don’t match your product, the listicle should honestly point them elsewhere. This feels counterintuitive, but it’s the exact behavior that signals editorial integrity.

Think about it from the reader’s perspective: if they can tell your “alternatives” section is designed to make your product look good by comparison, you’ve lost their trust. And a listicle that doesn’t have reader trust has no business ranking.

Rule 4: Add comparison tables with real data

A listicle that says “Product A is great for enterprises” without supporting data is an opinion piece masquerading as a review. A listicle that includes a comparison table with pricing tiers, geographic coverage, client size focus, specific features, and “best for” criteria is a decision-support tool.

This distinction matters for three reasons:

  • Users can make faster, better decisions with structured comparison data
  • Google’s review systems explicitly look for evidence of evaluation, not just assertions
  • LLMs extract structured data 2.5x more effectively than unstructured prose, directly increasing your citation probability

In our agency listicle, we included a comparison table with pricing ranges, geography focus, and best-fit criteria for each agency. We pulled market context from Datos and SparkToro to ground the comparisons in actual data, not just our subjective impressions.

Rule 5: Use external validation, not just your own claims

Google has consistently emphasized that high-quality reviews should demonstrate first-hand experience, originality, and evidence of evaluation. External validation is part of that evidence.

This means incorporating G2 ratings, industry data, expert quotes, and third-party research where relevant. Not as window dressing, but as genuine supporting evidence for your evaluations.

If you claim a product is “best for mid-market SaaS,” back it up. What’s their average customer size? What do G2 reviewers say about their mid-market fit? Is there industry data supporting that positioning? The Seer Interactive data confirms this: the listicles accelerating fastest in AI citations are the ones with external validation signals and thorough methodologies.

Rule 6: Make substantive updates, not cosmetic year-swaps

This is perhaps the most straightforward lesson from the crackdown. Changing “Best CRM Software 2025” to “Best CRM Software 2026” without changing the content is exactly the kind of artificial freshness signal that Google is now penalizing. Lily Ray found 38 listicles on a single domain that did precisely this.

A substantive update means: re-evaluating your recommendations based on product changes, removing options that are no longer competitive, adding new entrants worth considering, and updating pricing and feature data. If nothing has changed in the category, don’t update the title. If things have changed, update the content to reflect reality.

The standard should be: would a reader who saw the previous version and the current version notice meaningful differences? If not, you’re gaming freshness signals, and Google can see it.
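That “would a reader notice?” standard can be approximated mechanically. Here’s a minimal Python sketch that strips four-digit years from two versions of a page and compares what’s left, so a “2025” to “2026” swap doesn’t count as a content change. The 0.95 similarity threshold is our own arbitrary starting point, not a number Google has published.

```python
import difflib
import re


def is_cosmetic_refresh(old_body: str, new_body: str, threshold: float = 0.95) -> bool:
    """Flag updates whose only real change is the year in the text.

    Years are stripped before comparing, so a "2025" -> "2026" swap with
    no other edits scores as nearly identical. The threshold is an
    illustrative heuristic, not a published signal.
    """
    def strip_years(s: str) -> str:
        return re.sub(r"\b20\d{2}\b", "", s)

    ratio = difflib.SequenceMatcher(
        None, strip_years(old_body), strip_years(new_body)
    ).ratio()
    return ratio >= threshold  # nearly identical once years are ignored


old = "Best CRM Software 2025. Our top pick is Acme CRM, from $29/month."
new = "Best CRM Software 2026. Our top pick is Acme CRM, from $29/month."
print(is_cosmetic_refresh(old, new))  # → True: a year-only swap
```

Run this against last year’s and this year’s versions of each listicle before republishing; anything that comes back as cosmetic either gets a real update or keeps its old title.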

Rule 7: Structure for both users and LLMs

The Seer Interactive analysis found that all listicles growing in AI citations shared a clear format: numbered lists with standardized sections including features, pros/cons, pricing, and “best for” content. This isn’t surprising. LLMs parse structured content more reliably than unstructured prose.

For each item in your listicle, include:

  • A clear “best for” statement (e.g., “Best for mid-market SaaS companies with 50-500 employees”)
  • Key features or differentiators (3-5 specific points, not generic claims)
  • Pricing information (ranges are fine if exact pricing isn’t public)
  • Pros and cons (honest ones, including real limitations)
  • A brief editorial take explaining your reasoning

This structure serves users who are scanning for quick comparisons and LLMs that need extractable, structured information to generate accurate citations. It’s not an “AEO hack.” It’s just good content architecture. As we’ve discussed on the Masters of Search podcast with Andy Muns from Telnyx, AEO done right is really just good SEO with better attribution.
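The per-item structure above can be encoded as a simple content model your editors or CMS can validate against. A minimal Python sketch, assuming our own field names (they’re a convention for illustration, not a required schema):

```python
from dataclasses import dataclass


@dataclass
class ListicleEntry:
    """One tool in a comparison listicle, with the sections Rule 7 calls for.

    Field names are our own convention, not a required schema.
    """
    name: str
    best_for: str        # e.g. "mid-market SaaS companies with 50-500 employees"
    features: list[str]  # 3-5 specific differentiators, not generic claims
    pricing: str         # a range is fine if exact pricing isn't public
    pros: list[str]
    cons: list[str]      # honest limitations, not token ones
    editorial_take: str = ""

    def gaps(self) -> list[str]:
        """Return the structured sections this entry is still missing."""
        missing = []
        if not self.best_for:
            missing.append("best_for")
        if len(self.features) < 3:
            missing.append("features (need at least 3 specifics)")
        if not self.pricing:
            missing.append("pricing")
        if not self.pros or not self.cons:
            missing.append("pros/cons")
        return missing


entry = ListicleEntry(
    name="Acme CRM",
    best_for="mid-market SaaS companies with 50-500 employees",
    features=["native dialer", "2-way HubSpot sync", "EU data residency"],
    pricing="$29-99/user/month",
    pros=["fast onboarding"],
    cons=["no offline mode"],
)
print(entry.gaps())  # → []: every required section is present
```

An entry with a non-empty `gaps()` list is the “paragraph of generic praise” anti-pattern in machine-checkable form.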

Anatomy of a resilient listicle: walking through our own example

Theory is useful. Seeing it applied is better. Here’s a transparent breakdown of the editorial decisions behind Radyant’s “6 best AI SEO agencies for B2B startups” listicle and why each choice was made.

Decision 1: Only two listicles on the domain

We didn’t create a listicle factory. This is one of two comparison pages on radyant.io (the other being 6 best GEO agencies in Germany). Both cover categories where we have genuine expertise and can add original perspective. We’re not planning to publish 50 more. If a category doesn’t warrant our direct involvement and unique insight, we don’t create a comparison page for it.

Decision 2: Transparent bias disclosure in the introduction

The page explicitly states that Radyant is included and explains the editorial approach. No pretending this is an “objective third-party review.” Readers know exactly what they’re getting, and that transparency creates trust rather than undermining it.

Decision 3: Genuine competitors, not filler

Every agency on the list is a real competitor that could genuinely be the better choice for certain buyers. We didn’t include obscure agencies to make ourselves look good by comparison. If a reader’s needs align better with another agency on the list, the listicle should help them reach that conclusion.

Decision 4: Market data, not just opinions

The listicle includes real market context pulled from Datos and SparkToro. This grounds the comparison in data rather than subjective claims. It also creates content that AI farms can’t replicate because they don’t have access to or the ability to interpret that data.

This connects to a principle we’ve proven across client work. With Planeco Building, we grew citation share from 55% to over 130% and achieved 5x organic leads in 10 months, all through owned content authority. The principle is the same: if your content is genuinely the best answer, built on real expertise and data, platforms will reward it. The same logic applies to listicles.

Decision 5: Structured comparison table

The page includes a comparison table with pricing ranges, geographic focus, and best-fit criteria. This serves the user who wants a quick overview before diving into details, and it gives LLMs a structured data format they can extract and cite directly.

Want to understand how your current listicle strategy stacks up against these principles? We can audit your content portfolio and show you exactly where the risks and opportunities are. Talk to us about a growth strategy audit.

The “should you create this listicle?” decision framework

Before publishing any listicle, run through these questions. If you can’t answer “yes” to at least 6 of 8, reconsider whether the listicle should exist.

  • Is this one of fewer than 10 listicles on your domain? If you already have dozens, adding another increases your risk profile significantly.
  • Do you have genuine expertise in this category? Not “we’ve used the product” expertise, but “we can evaluate these options based on deep knowledge of the buyer’s needs” expertise.
  • Are you including alternatives that could genuinely beat you? If every competitor on the list is weaker than you in every dimension, your listicle isn’t honest.
  • Do you disclose your bias transparently? A reader should know within the first 100 words that you have a stake in this comparison.
  • Does the listicle include real data? Pricing, feature comparisons, market data, G2 ratings, or other verifiable information. Not just subjective assessments.
  • Would you be comfortable showing the methodology to a prospect on a sales call? If the answer is “no, it would undermine their trust,” don’t publish it.
  • Is the content substantively different from last year’s version? If you’re just swapping “2025” for “2026,” stop.
  • Does each entry have structured sections? Features, pros/cons, pricing, “best for” criteria. Not just a paragraph of generic praise.

What the experts are saying (and where they disagree)

The industry discourse on listicles is unusually nuanced right now, with legitimate experts taking different positions. Here’s where the consensus is, and where it breaks down.

Where experts agree

Aggressive, scaled self-promotional listicles are increasingly risky. Lily Ray, Glenn Gabe, and Wil Reynolds of Seer Interactive have all flagged this pattern independently. Wil Reynolds has warned that this approach lacks authenticity and risks undermining audience trust. Nobody credible is arguing that publishing 200 self-promotional listicles with AI-generated content is a good strategy.

Where the discourse is incomplete

Most commentary conflates “all listicles” with “scaled, AI-generated, self-promotional listicles.” These are fundamentally different things. Nobody in the mainstream discourse is distinguishing between the number of listicles on a domain (76 to 340 on affected sites) versus having 1-5 genuinely useful comparison pages.

Lily Ray herself acknowledged this nuance: the affected sites had many other quality issues beyond listicles. Rapid scaling, AI-generated content, artificial freshness updates, and Schema.org violations were all present. The listicle was the most visible symptom, not necessarily the root cause.

The Ahrefs perspective is worth noting

Glen Allsopp at Ahrefs took a different approach entirely. Rather than optimizing self-promotional listicles, Ahrefs’ primary focus continues to be on things that naturally get them mentioned on other people’s lists: constantly updating their product, sharing original data and research, and being active where their audience is. This is a valid strategy if you have Ahrefs-level brand recognition. For most B2B companies, you need both: build a product worth recommending AND create your own comparison content with integrity.

Omniscient Digital’s Alex Birkett offered the most practical framing: “If the main goal is AI visibility in branded queries, start managing reviews, listicles, brand profiles, and comparison pieces.” The key word is “managing.” Not mass-producing. Not automating. Managing with editorial intent.

Connecting listicles to actual pipeline

The conversation about listicles too often stays at the level of rankings and visibility. For a Head of Marketing making resource allocation decisions, the question is simpler: does this content drive pipeline?

Listicle content targets comparison-stage buyers. Someone searching “best project management software for remote teams” or “top AI SEO agencies” is actively evaluating options. They’re further down the funnel than someone searching “what is project management.” This makes listicle traffic inherently more valuable per visit than most informational content.

The challenge is attribution. When someone reads your listicle, then asks ChatGPT about your brand, then types your URL directly into their browser, that shows up as “Direct” in your analytics. Not as “listicle drove an AI search recommendation that drove a direct visit.” This is exactly the attribution problem we help clients solve with the 3-layer attribution model: click-based analytics, self-reported attribution via a “how did you hear about us?” field, and verbal attribution captured by sales teams in CRM fields.
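The 3-layer model can be sketched as a simple resolution order per lead. The priority (verbal beats self-reported beats last-click) and the field names (`verbal_source`, `self_reported`, `last_click`) are our illustrative assumptions, not a CRM standard:

```python
def resolve_attribution(lead: dict) -> str:
    """Pick the best available attribution signal for one lead.

    Priority order is an assumption for illustration: what the prospect
    told sales (layer 3) beats the "how did you hear about us?" form
    field (layer 2), which beats click-based analytics (layer 1).
    """
    return (
        lead.get("verbal_source")     # layer 3: captured by sales in a CRM field
        or lead.get("self_reported")  # layer 2: "how did you hear about us?"
        or lead.get("last_click")     # layer 1: click-based analytics
        or "unknown"
    )


lead = {"last_click": "Direct", "self_reported": "Saw you in a ChatGPT answer"}
print(resolve_attribution(lead))  # → "Saw you in a ChatGPT answer"
```

This is exactly the listicle case from the paragraph above: analytics says “Direct,” but the richer layers reveal the AI-search recommendation that actually drove the visit.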

With ToolSense, a combination of organic content strategy including comparison-oriented content contributed to 10x inbound demo bookings over two years. The listicle format wasn’t the only driver, but structured comparison content that helped buyers evaluate options was a meaningful part of the funnel.

If you’re seeing pipeline from organic but can’t attribute it properly, your listicle strategy might be working better than you think. Let’s figure out what’s actually driving your growth.

What to do if you already have a listicle problem

If you’re reading this and realizing your site has 50+ self-promotional listicles following the exact pattern Lily Ray flagged, here’s a pragmatic approach:

Step 1: Audit your listicle inventory. Count them. Check whether they all rank your product first. Check whether they were generated from a template. Check whether the “2026” in the title reflects actual content updates.
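A first-pass inventory doesn’t need tooling beyond a page-title export from your CMS or sitemap. Here’s a rough Python sketch; the regexes are crude heuristics for spotting “best/top X” titles and year stamps, not a definitive classifier, so eyeball the results:

```python
import re

# Rough heuristics: "best"/"top" followed by a category word, and a 20xx year.
LISTICLE_PATTERN = re.compile(
    r"\b(best|top)\b.*\b(tools?|software|agencies|platforms?)\b", re.IGNORECASE
)
YEAR_IN_TITLE = re.compile(r"\b20\d{2}\b")


def audit_titles(titles: list[str]) -> dict:
    """First-pass listicle inventory from a list of page titles."""
    listicles = [t for t in titles if LISTICLE_PATTERN.search(t)]
    year_stamped = [t for t in listicles if YEAR_IN_TITLE.search(t)]
    return {
        "total_pages": len(titles),
        "listicles": len(listicles),
        "year_in_title": len(year_stamped),
    }


titles = [
    "Best CRM Software 2026",
    "Top 10 Project Management Tools",
    "What Is Account-Based Marketing?",
]
print(audit_titles(titles))  # → {'total_pages': 3, 'listicles': 2, 'year_in_title': 1}
```

The counts only start the conversation; whether each flagged page ranks your product first, and whether its year stamp reflects a real update, still takes a human read.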

Step 2: Identify the 3-5 listicles that genuinely serve your audience. These are the categories where you have real expertise, where comparison-stage buyers actually search, and where you can add unique data or perspective.

Step 3: Rebuild those 3-5 listicles to meet the 7 rules above. Add transparent bias disclosure, genuine alternatives, comparison tables, external validation, and structured sections. Make them the best comparison resources available for those categories.

Step 4: Decide what to do with the rest. Options include noindexing, redirecting to relevant product pages, or consolidating into fewer, higher-quality pieces. The right choice depends on whether any of those pages drive meaningful traffic or conversions. If they don’t, removing them reduces your risk without losing anything of value.

Step 5: Stop the production line. If you have a content calendar that includes 10 new listicles per month, pause it. Redirect that effort into making your remaining comparison content genuinely excellent.

This is the same quality-over-scale principle we applied with Planeco Building’s programmatic content. We launched 247 pages in 7 days, but each was built from a gold standard page with genuine depth. 140 ranked Top 3 within 72 hours because the content was substantively useful, not because we gamed a template. The same logic applies to listicles: scale works when quality is the foundation, not the afterthought.

FAQ

Will Google penalize all self-promotional listicles?

No. The data clearly shows that Google is targeting a pattern, not a format. Sites with dozens to hundreds of template-based, AI-generated self-promotional listicles got hit. A single well-crafted comparison page with transparent bias disclosure, genuine alternatives, and real evaluation methodology is a different thing entirely. Google’s review systems reward evidence of first-hand experience and evaluation. Provide that evidence, and the format works.

How many listicles is too many?

There’s no magic number, but the data gives us a clear range. Affected sites had 76 to 340 self-promotional listicles. Having 1-5 high-quality comparison pages in categories where you have genuine expertise is a fundamentally different risk profile. The question to ask isn’t “how many can I get away with?” but “how many categories can I genuinely provide expert evaluation for?” For most B2B companies, that number is in the single digits.

Can I still rank myself #1 on my own listicle?

You can, but only if you’ve earned it and you’re transparent about the bias. The problem isn’t self-ranking. It’s self-ranking across hundreds of listicles without disclosure, without methodology, and without genuine alternatives. If your product is genuinely the best option for a specific use case, say so. Explain why. Show the evaluation criteria. And include alternatives that might be better for different use cases. Readers and algorithms can both distinguish between honest self-advocacy and manipulation.

Do listicles work for AI search even if Google devalues them?

Yes, and this is the critical nuance. Google and AI search platforms evaluate content differently. The Ahrefs study of 26,283 ChatGPT source URLs found that “best X” lists were the most prominent page type cited, including those where brands ranked themselves first. However, the Seer Interactive data shows AI citation trends are also shifting toward listicles with detailed methodologies and external validation. The formats that will work across both Google and AI search are the same: structured, transparent, data-backed comparison content.

What’s the difference between a good listicle and a thin one?

A thin listicle reads like a product directory with opinion sprinkled on top. A good listicle reads like a buying guide written by someone who understands the buyer’s decision criteria. Concretely: a thin listicle has a paragraph per item with generic praise. A good listicle has structured sections per item (best for, features, pricing, pros/cons), a comparison table, transparent methodology, and data supporting the evaluations. The standard we use at Radyant: if someone wouldn’t reasonably pay for the information, it’s not good enough.

Should I create listicles specifically for AI search visibility?

Create listicles for your audience. Structure them in a way that AI models can easily parse. Don’t create content “for AI” that you wouldn’t create for users. The tactics that make listicles work in AI search (structured format, comparison tables, clear “best for” criteria) are the same things that make them useful for humans. If you’re creating a listicle solely because you heard it helps with ChatGPT citations, you’re starting from the wrong motivation. Start with: “Does my audience need help comparing options in this category?” If yes, create the best comparison resource possible. The AI citations will follow.