Your content calendar is full. Your pipeline is empty. You’re publishing two blog posts a week, rankings look healthy, traffic is climbing, and yet your CRM tells a different story: the same number of demos as last quarter. The disconnect between content output and business results is the most expensive problem in B2B marketing, and it starts with how you choose what to write about. Here’s the framework we use at Radyant to make every content slot an investment decision, not a calendar filler.
Key takeaways
- Most content plans are built on keyword volume. Pipeline-connected content plans are built on intent and product fit. The difference in conversion rates between these approaches can be roughly 25x.
- Every content slot is an investment. Before writing a single word, evaluate each topic against a decision matrix of commercial intent and product fit. If it doesn’t pass, leave the slot empty.
- Zero search volume does not mean zero demand. We’ve generated 2,190 net new clicks and 60+ leads from keywords that SEMrush showed as “0 volume.” Audience research beats keyword data.
- You can’t connect content to pipeline if you can’t measure it. Click-based attribution alone misses the majority of content-influenced pipeline. You need three layers: click-based, self-reported, and verbal from sales.
Looking for a shortcut to drive more organic growth from your content, SEO & AI Search efforts? Request a free growth audit from Radyant to get an honest assessment of your organic growth potential.
The content-to-pipeline gap is a topic selection problem
Let’s start with two real pages from our client work to illustrate the core problem.
Page A: A blog post about a niche regulatory topic in real estate. It got 89 clicks. It generated 5 qualified leads. That’s a 5.6% conversion rate from organic search, which is exceptional for B2B.
Page B: A blog post answering a common “what happens if…” question. It drove thousands of visits. It generated zero leads. Not one.
The difference between these two pages wasn’t writing quality, design, or SEO execution. It was the topic itself. Page A targeted people actively trying to solve a problem the product addresses. Page B attracted people who already owned the product and had a one-off question with no commercial intent.
This is the content-to-pipeline gap, and it’s not fixed by writing better or publishing more. It’s fixed by choosing different topics.
Data from Grow and Convert across 95 articles and 123,000+ organic pageviews quantifies this precisely: bottom-of-funnel keywords convert at roughly 4.78%, while top-of-funnel content converts at just 0.19%. That’s a 25x difference. Yet a CXL audit of 40+ B2B software websites found that over 70% of their content targets purely informational keywords.
Most companies are investing the majority of their content resources into the category with the lowest conversion potential. The fix isn’t incremental. It requires rethinking how you decide what gets published.
Four mindset shifts before you build the plan
Before getting into the framework, there are four beliefs that need to change. These aren’t abstract principles. They’re the operational assumptions that determine whether your content plan connects to pipeline or just fills a calendar.
1. SEMrush is a signal, not a veto
When a keyword tool shows 0 or 10 monthly searches, most teams skip the topic. This is a mistake. Keyword tools estimate search volume based on historical click data and panel data. They’re directionally useful, but they systematically undercount long-tail, niche, and emerging queries.
When we built a programmatic content cluster for Planeco Building, most of the 247 targeted keywords showed “0 search volume” in SEMrush. The result: 2,190 net new clicks in three months and 60+ leads in under six months. One specific page targeting a keyword with only 70 seed volume generated 5,400 impressions, a 77x multiplier over what the tool predicted.
“We can’t see it in the keyword tools” does not equal “nobody is searching for it.” Treat keyword volume as one data point among several, not as the gating criterion.
2. Specific beats generic, always
100 visitors with the exact problem your product solves will outperform 10,000 visitors with vague curiosity. This sounds obvious, but watch how most content plans get built: teams sort keyword lists by volume and work top-down. The highest-volume topics are almost always the most generic.
A topic like “KMU Digitalisierung” (SME digitalization) might show decent volume, but the people searching it could need an ERP system, a consultancy, a CRM, or an IT management platform. If you sell one of those things, you’re competing for attention with every other category. The topic has low product fit because it’s too broad to attract your buyer specifically.
3. Show the product from a thousand entry points, not a thousand topics
A pipeline-connected content plan doesn’t cover a thousand different subjects. It covers a thousand different situations where someone might need your product. The distinction matters. Every piece of content should have a natural, non-forced connection to what you sell. If you can’t draw a line from the reader’s problem to your product within the content, the topic probably doesn’t belong in your plan.
4. Every content slot is valuable. Don’t fill it casually.
Content production has real costs: writer time, design, review cycles, opportunity cost. A content slot wasted on a topic that won’t generate pipeline is a slot you could have used on one that would. The operating principle: better to leave a slot empty than to fill it with a topic you’re not convinced about.
This is the opposite of how most content calendars work. Most teams feel pressure to publish consistently, so they fill slots with whatever’s available. A pipeline-connected plan treats each slot as an investment decision with an expected return.
The six content buckets that actually drive pipeline
Not all content formats are created equal when it comes to pipeline generation. Based on our work across B2B SaaS, PropTech, and ClimateTech clients, and validated by Grow and Convert’s conversion rate research, these are the six content types that consistently connect to pipeline.
1. Listicle / comparison
Format: “Best [category] software” or “Top [category] tools”
Example: “Best maintenance software 2026”
Typical CR: 3-7%
When to use: Always. This is category table stakes. If you’re not ranking for your own category listicle, you’re ceding that decision-making moment to someone else. One of our clients’ comparison pages became one of their top three lead-generating pages within months of publication.
2. Versus
Format: “[Your product] vs. [Competitor]” or “[Competitor A] vs. [Competitor B]”
Example: “Jira vs. Monday”
Typical CR: 5-10%
When to use: When competitor awareness is high in your market. These searchers are in active evaluation mode. Grow and Convert’s data shows comparison and alternatives keywords convert at 6.94% on average, making them among the highest-converting content types in B2B.
3. Alternatives
Format: “[Competitor] alternatives”
Example: “Salesforce alternatives for small teams”
Typical CR: 5-10%
When to use: When a dominant player exists in your category. The person searching this has already decided the incumbent isn’t right for them. They’re looking for options. Your job is to be one of those options with a clear differentiation story.
4. Jobs to be done
Format: “How to [task the product solves]”
Example: “How to automate employee onboarding”
Typical CR: 1-4%
When to use: When your audience searches by problem rather than product category. This is where audience research becomes essential. For one of our clients, we used Claude to map Jobs to be Done for a funnel builder product and discovered that “phone number validation” was a high-frequency use case nobody was targeting. The audience wasn’t searching for “funnel builder.” They were searching for the specific task they needed to accomplish.
5. Conversion asset-driven
Format: Templates, calculators, checklists that map directly to the product
Example: “Maintenance plan template” or “Heat pump cost calculator”
Typical CR: 3-6%
When to use: When a tangible resource naturally connects to your product’s value proposition. One of our clients’ template pages generated 15 leads since launch because the template was a simplified version of what the product automates. The reader gets immediate value, and the product becomes the obvious next step.
6. Test / review
Format: Product-category evaluations, test results, reviews
Example: “Heat pump test results 2026”
Typical CR: 4-8%
When to use: When your product category requires evaluation content before purchase. One client’s test/review page generated 50+ leads since go-live, and the same topic was also their top-converting Google Ads search term, proving genuine commercial demand across channels.
The decision matrix: intent x product fit
Having the right content buckets is necessary but not sufficient. You also need a system for evaluating individual topics within those buckets. This is where most content plans fall apart: they identify a bucket like “comparison content” and then fill it with whatever topics have the highest search volume, without evaluating whether each specific topic will actually drive pipeline.
The decision matrix we use evaluates every content idea on two axes:
Commercial intent: Is the person searching this actively looking to solve a problem, evaluate a solution, or make a purchase decision? (High / Mid / Low)
Product fit: Can your product naturally solve the problem this person is searching about? (High / Medium / Low)
Here’s how the matrix works in practice:
| | High product fit | Medium product fit | Low product fit |
|---|---|---|---|
| High intent | Go. Prioritize. | Go. Find the product angle. | Caution. Only if SERP is weak. |
| Mid intent | Go. Build a conversion path. | Caution. Validate first. | No-go. |
| Low intent | Caution. Only for topical authority. | No-go. | No-go. |
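The matrix lends itself to a simple lookup. A minimal Python sketch of the evaluation (the verdict strings mirror the table above; the topics you score against it are your own):

```python
# Go/caution/no-go matrix keyed on (commercial intent, product fit).
MATRIX = {
    ("high", "high"):   "Go. Prioritize.",
    ("high", "medium"): "Go. Find the product angle.",
    ("high", "low"):    "Caution. Only if SERP is weak.",
    ("mid", "high"):    "Go. Build a conversion path.",
    ("mid", "medium"):  "Caution. Validate first.",
    ("mid", "low"):     "No-go.",
    ("low", "high"):    "Caution. Only for topical authority.",
    ("low", "medium"):  "No-go.",
    ("low", "low"):     "No-go.",
}

def evaluate_topic(intent: str, product_fit: str) -> str:
    """Return the matrix verdict for a topic."""
    return MATRIX[(intent.lower(), product_fit.lower())]

# The "Nutzungsänderung Ferienwohnung" example: high intent, high fit.
print(evaluate_topic("high", "high"))  # Go. Prioritize.
```

Note how few keys map to "Go": only three of nine cells, which is the selectivity the matrix is designed to enforce.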
The critical insight: most of the matrix is “Caution” or “No-go.” This is intentional. A pipeline-connected content plan is selective by design. If you’re saying “Go” to most topics, you’re not being rigorous enough.
Go vs. no-go: real examples
Abstract frameworks are useful, but seeing them applied to real decisions is what makes them actionable. Here are examples from our client work, with the reasoning behind each decision.
Go: “Nutzungsänderung Ferienwohnung” (Change of use to vacation rental)
Seed volume: only 210
Result: 89 clicks, 5 leads (5.6% conversion rate), 6,700 impressions
Why it worked: High intent (person is actively planning a building project), high product fit (the client’s platform handles exactly this type of regulatory process). The low volume scared off competitors, leaving an open SERP.
Go: “Padel hall conversion” (Umnutzung zur Padel-Halle)
Seed volume: only 70
Result: 318 clicks, 6 leads, 5,400 impressions (77x the seed volume)
Why it worked: Same logic. Hyper-specific intent, direct product fit, and a keyword tool that dramatically underestimated actual demand.
No-go: “Wärmepumpe Stromausfall” (Heat pump power outage)
Traffic: high
Leads: zero
Why we’d reject it: The person already owns a heat pump. They’re not in a buying phase. They have a one-off problem (power went out) with no connection to purchasing decisions. Low intent, low product fit.
No-go: “What is Remaining Useful Life (RUL)?”
Traffic: top 4 page by volume on the client’s site
Leads: zero
Why we’d reject it: Pure educational “what is…” query. The searcher is likely a student, researcher, or someone in early learning mode with no commercial intent. This is the classic trap: it looks great in traffic reports and contributes nothing to pipeline.
No-go: “KMU Digitalisierung” (SME digitalization)
Why we’d reject it: Too generic. Competitors for this term include ERP providers, consultancies, CRM vendors, and IT management platforms. Even if you rank, the traffic won’t be qualified because the search intent is too diffuse. Low product fit despite mid-range intent.
Notice the pattern: the go decisions have low volume but high specificity. The no-go decisions have high volume but low specificity. This is not a coincidence. It’s the core principle at work.
Want to evaluate your current content plan against this framework? Talk to us about a growth audit and we’ll show you which topics are driving pipeline and which are burning content slots.
How to research topics that connect to pipeline
The decision matrix tells you how to evaluate topics. But where do the topics come from in the first place? Most teams default to keyword tools. Open SEMrush, enter a seed keyword, sort by volume, export to a spreadsheet. This produces a list of topics optimized for traffic, not pipeline.
A pipeline-connected content plan starts with audience research and uses keyword data as a validation signal, not the foundation. Here are four specific methods we use, in order of priority.
Method 1: Audience research (customer interviews + sales intelligence)
This is the starting point. Not keyword tools. Not competitor analysis. The audience.
Talk to customers. Talk to churned prospects. Listen to sales calls. Read support tickets. You’re looking for three things:
What problems did they search for before they found you? Not “what keywords did you use?” but “what were you trying to figure out?”
What alternatives did they evaluate? This directly feeds your comparison and alternatives content buckets.
What language do they use to describe their problem? This is often different from the language your product team uses, and it’s the language your content should match.
For the Planeco Building SEO work, we conducted regular interviews with the co-founders to extract regulatory knowledge that created legally accurate content no AI tool could generate on its own. This expert knowledge became the content moat that drove 5x organic leads in 10 months.
Sales teams are an underused goldmine. They hear objections, questions, and competitor mentions every day. Create a simple system: a shared document or CRM field where sales logs the questions prospects ask before they become leads. These questions are your content topics.
Method 2: AI-assisted Jobs to be Done mapping
Once you understand your audience’s problems, use AI to expand the map of specific tasks and situations where your product is relevant.
The process: take your product’s core capabilities and ask Claude or a similar model to generate all the specific jobs someone might hire your product to do. Then evaluate each job against the decision matrix.
For a funnel builder client, this process uncovered that “phone number validation in forms” was a high-frequency use case that nobody in the market was targeting with content. The audience wasn’t searching for “funnel builder.” They were searching for the specific task. This is a Jobs to be Done content opportunity that keyword research alone would never surface.
Method 3: Reddit and community mining
Reddit, Quora, and niche community forums are where your audience describes their problems in their own words, without the filter of marketing language. Search for your product category, your competitors’ names, and the problems your product solves.
What you’re looking for:
- Questions with multiple upvotes and detailed responses (a signal of genuine demand)
- Specific problems described with specific context (these become JTBD content topics)
- Competitor complaints (these feed alternatives and comparison content)
- Language patterns that differ from what keyword tools suggest
For one client, a topic that showed 0 search volume in SEMrush was validated through Reddit threads where dozens of people were asking the exact question. The topic was real. The keyword tool just couldn’t see it.
Method 4: Google Ads converting search terms
If you’re running Google Ads (or your client is), the Search Terms report is a direct pipeline signal. These are the actual queries people typed before converting. If a search term is driving paid conversions, it’s almost certainly worth targeting organically.
This is one of the most underused research methods in content planning. The data already exists in your ads account. You’re looking for:
- Search terms with conversions (demo requests, trial signups, form fills)
- Long-tail variations you haven’t targeted with content
- Patterns in the language that reveal intent you might have missed
One client’s top converting Google Ads search term turned out to be a content topic that also became their highest-performing organic page, generating 50+ leads since publication. The paid data validated the organic opportunity.
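Filtering a Search Terms export down to converting queries takes a few lines. A sketch with hypothetical rows and assumed column names (adapt both to your actual export):

```python
# Hypothetical rows from a Google Ads Search Terms report export.
rows = [
    {"search_term": "maintenance software comparison", "clicks": 120, "conversions": 6},
    {"search_term": "how to automate onboarding",      "clicks": 80,  "conversions": 3},
    {"search_term": "company login",                   "clicks": 300, "conversions": 0},
]

def converting_terms(rows, min_conversions=1):
    """Return search terms with conversions, sorted by conversion rate."""
    hits = [dict(r) for r in rows if r["conversions"] >= min_conversions]
    for r in hits:
        r["cvr"] = r["conversions"] / r["clicks"]
    return sorted(hits, key=lambda r: r["cvr"], reverse=True)

for r in converting_terms(rows):
    print(f'{r["search_term"]}: {r["cvr"]:.1%}')
```

High-click, zero-conversion terms like the navigational "company login" drop out automatically, which is exactly the traffic-versus-pipeline filter the method is for.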
The SERP overlap test: when to combine vs. separate topics
A common planning mistake is creating separate pages for topics that Google treats as the same query, or combining topics that Google treats as distinct. Both waste content slots.
The test is simple: search both topics in Google and compare the top 10 results. If 7+ results overlap, Google considers them the same intent. Create one page. If fewer than 3 results overlap, they’re distinct intents. Create separate pages.
This matters for pipeline because splitting a topic that should be combined dilutes your ranking potential (and therefore your lead potential) across two weaker pages instead of one strong one. And combining topics that should be separate means you’re serving the wrong content to at least one audience segment.
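The overlap test reduces to a set intersection. A sketch with hypothetical URL lists (the 7-and-3 thresholds come from the rule above):

```python
def serp_overlap_verdict(urls_a, urls_b, combine_at=7, separate_below=3):
    """Apply the SERP overlap test to two top-10 result URL lists."""
    overlap = len(set(urls_a) & set(urls_b))
    if overlap >= combine_at:
        return overlap, "same intent: create one page"
    if overlap < separate_below:
        return overlap, "distinct intents: create separate pages"
    return overlap, "gray zone: check intent manually"

# Hypothetical top-10 lists sharing 8 of 10 results.
serp_a = [f"site{i}.com" for i in range(10)]
serp_b = [f"site{i}.com" for i in range(2, 12)]
print(serp_overlap_verdict(serp_a, serp_b))  # (8, 'same intent: create one page')
```

Gathering the actual top-10 lists is the manual part; the verdict itself is mechanical.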
How to measure whether the plan is working
A content plan that claims to connect to pipeline but has no measurement system is just a content plan with better branding. You need to close the loop. And this is where most teams get stuck, because click-based attribution is increasingly broken.
Over 60% of Google searches now end without a click, and AI answer engines are absorbing an increasing share of informational queries. When someone asks ChatGPT for a recommendation, gets your brand mentioned, then types your brand name into Google, that shows up as “Direct” or “Organic” in your analytics. Never as “ChatGPT recommended us.”
This is why we use a three-layer attribution model. No single layer gives the full picture. You have to triangulate.
Layer 1: Click-based attribution
Your CRM and analytics data. Track which pages generate form fills, demo requests, and trial signups. This is the baseline, and it’s useful for content that drives direct conversions. But it systematically undercounts content that influences pipeline through awareness and consideration.
Set up proper tracking so every conversion can be attributed to a landing page. If your analytics can’t tell you which blog post generated a lead, fix that before doing anything else.
Layer 2: Self-reported attribution
Add a “How did you hear about us?” field to every demo form and signup flow. Critical detail: make it a mandatory free-text field, not a dropdown. Dropdowns force people into categories you’ve predefined. Free text lets them tell you what actually happened.
“I read your article about maintenance plan templates” or “ChatGPT recommended you when I asked about onboarding automation” are signals you’ll never get from click data. LLMs now make analyzing hundreds of free-text responses trivial.
At Heyflow, AI-attributed trials (identified through self-reported attribution) convert at 14.3%, compared to the 11% channel average. Without Layer 2, those conversions would have been invisible or misattributed to Direct traffic.
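For a first pass over free-text responses, even a rule-based tagger surfaces the big buckets before you hand the long tail to an LLM. A sketch with assumed category keywords (the labels and patterns here are illustrative, not a fixed taxonomy):

```python
import re

# Assumed category patterns for "How did you hear about us?" responses.
CATEGORIES = {
    "ai_search": r"\b(chatgpt|perplexity|ai overview|claude)\b",
    "content":   r"\b(article|blog|guide|template|comparison)\b",
    "referral":  r"\b(colleague|friend|recommendation)\b",
}

def tag_response(text: str) -> str:
    """Return the first matching category label, or 'other'."""
    lowered = text.lower()
    for label, pattern in CATEGORIES.items():
        if re.search(pattern, lowered):
            return label
    return "other"

responses = [
    "ChatGPT recommended you when I asked about onboarding automation",
    "I read your article about maintenance plan templates",
]
print([tag_response(r) for r in responses])  # ['ai_search', 'content']
```

Whatever lands in "other" is the pile worth sending through an LLM prompt for classification.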
Layer 3: Verbal attribution from sales
What prospects actually say in discovery calls. “I found you when I was researching alternatives to [competitor]” or “Your comparison article convinced me to book a demo.” Most of this intel dies in the call unless you create a system to capture it.
The fix: add a custom CRM field that sales fills in after every discovery call. One question: “What did the prospect say about how they found us?” This takes 30 seconds per call and provides attribution data that no analytics tool can replicate.
When you combine all three layers, you get a much more accurate picture of which content topics are actually generating pipeline, and which are generating traffic reports that look good in slide decks but don’t move the business.
Putting it all together: the content plan as an investment portfolio
Here’s the complete process, from research to published plan:
Step 1: Audience research. Interview customers, mine sales calls, review support tickets. Understand what problems your audience is trying to solve and what language they use.
Step 2: Topic ideation. Use the four research methods (audience interviews, AI-assisted JTBD mapping, Reddit/community mining, Google Ads converting terms) to generate a list of potential topics.
Step 3: Bucket classification. Assign each topic to one of the six content buckets (Listicle/Comparison, Versus, Alternatives, JTBD, Conversion Asset-Driven, Test/Review). If a topic doesn’t fit any bucket, that’s a red flag.
Step 4: Decision matrix evaluation. Score each topic on commercial intent (high/mid/low) and product fit (high/medium/low). Apply the go/caution/no-go thresholds. Be ruthless. A plan with 15 high-conviction topics will outperform a plan with 50 mixed-quality topics.
Step 5: SERP validation. For each “Go” topic, check the actual SERP. Run the overlap test. Assess whether you can realistically create the best answer. If the SERP is dominated by massive authority sites and your domain can’t compete, reconsider the topic or find a more specific angle.
Step 6: Prioritize by expected pipeline impact. Not all “Go” topics are equal. Prioritize based on: conversion potential (bucket type), competitive opportunity (SERP difficulty), and speed to impact (how quickly can you rank and generate leads).
Step 7: Set up attribution before publishing. Ensure Layer 1 (click tracking), Layer 2 (self-reported free-text field), and Layer 3 (sales CRM field) are all in place before the first piece goes live. You can’t retroactively measure what you didn’t track.
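Step 6 can be made explicit with a rough scoring function. The weights and inputs below are illustrative assumptions, not a formula from our process, but they capture the trade-off: conversion potential, discounted by SERP difficulty and time to impact.

```python
def priority_score(bucket_cr, serp_difficulty, months_to_rank):
    """Higher is better. bucket_cr: typical conversion rate for the
    content bucket (0-1); serp_difficulty: 0 (open) to 1 (dominated);
    months_to_rank: estimated time to meaningful rankings."""
    return bucket_cr * (1 - serp_difficulty) / months_to_rank

# Two hypothetical "Go" topics: a high-CR versus page on a tough SERP,
# and a niche JTBD-style page on an open SERP.
topics = [
    ("jira vs monday",        0.07, 0.8, 6),
    ("padel hall conversion", 0.05, 0.1, 2),
]
ranked = sorted(topics, key=lambda t: priority_score(*t[1:]), reverse=True)
print([name for name, *_ in ranked])  # ['padel hall conversion', 'jira vs monday']
```

Under these assumptions the niche open-SERP topic outranks the higher-converting but contested one, which mirrors the go/no-go pattern from the real examples above.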
The result is a content plan that looks different from what most teams produce. It’s shorter. It’s more specific. It has gaps where topics were deliberately rejected. And every topic on it has a clear line to pipeline.
At ToolSense, this approach compounded over two years into a 10x increase in inbound demo bookings. Not traffic. Demo bookings. The content plan wasn’t bigger than competitors’. It was more targeted.
What about top-of-funnel content?
A common objection: “If we only publish bottom-of-funnel content, we’ll miss the top of the funnel entirely.” This is a valid concern, but it’s based on a flawed assumption about sequencing.
Most B2B content strategies start with top-of-funnel content and work down. The logic sounds reasonable: build awareness first, then nurture toward conversion. In practice, this means you spend months (or years) building traffic that doesn’t convert, hoping the nurture layer will eventually kick in.
The pipeline-connected approach inverts this. Start with mid and high intent content. Get pipeline flowing. Then selectively add top-of-funnel content where it serves a specific strategic purpose: building topical authority that strengthens your mid/high intent pages, or creating content for a specific awareness gap identified through sales intelligence.
Top-of-funnel content isn’t bad. It’s just not where you start. And when you do create it, it should still pass the product fit test. If there’s no natural connection to your product, it doesn’t belong in the plan regardless of funnel stage.
As Fullfunnel.io frames it, the KPI for content needs to shift from “generate traffic” to “get into consideration when strategic accounts evaluate solutions to their top priority problem.” That reframe changes everything about what you publish.
For a deeper look at how successful B2B companies approach organic growth strategy, check out the Masters of Search podcast where we break down what’s actually working.
The AI search dimension
Everything discussed so far applies to traditional Google search. But Gartner predicts traditional search volume will drop 25% as AI answer engines absorb queries. This doesn’t invalidate the content plan framework. It reinforces it.
AI search engines (ChatGPT, Perplexity, Google AI Overviews) are even more aggressive at absorbing informational, low-intent queries. The “What is Remaining Useful Life?” content that already generated zero leads from Google will fare even worse in AI search, because the AI answers the question directly without sending a click at all.
But high-intent, product-fit content is harder for AI to fully absorb. Comparison pages, alternatives content, and detailed JTBD guides require the kind of specific, opinionated, experience-backed depth that AI models prefer to cite rather than replace. When your content is the definitive answer, AI search becomes another distribution channel, not a threat.
The attribution challenge is real though. As we noted earlier, AI-referred visitors show up as “Direct” in analytics. This is why Layer 2 and Layer 3 of the attribution model aren’t optional. Without them, you’ll undercount your content’s pipeline impact and potentially cut the wrong topics from your plan.
Common mistakes that break the content-to-pipeline connection
Even with the right framework, teams make predictable errors. Here are the ones we see most often:
Filling the calendar reflexively. The pressure to publish consistently leads to filling slots with whatever’s available. One mediocre post on a low-fit topic costs you the slot that could have gone to a high-conviction topic next week. Build slack into your calendar. Empty slots are better than wasted ones.
Treating all “Go” topics equally. A comparison page and a JTBD article both passed the decision matrix, but the comparison page might convert at 3x the rate. Prioritize accordingly. Not all pipeline-connected topics have equal pipeline potential.
Ignoring the “best answer” requirement. Choosing the right topic is necessary but not sufficient. If your comparison page is a thin listicle with no original analysis, it won’t convert even though the topic is right. The content itself must be the kind of depth someone would pay for. Topic selection and execution quality are both required.
Not revisiting the plan based on data. A content plan is a hypothesis. After three to six months, your attribution data should tell you which buckets and topics are performing. Double down on what works. Cut what doesn’t. The plan should evolve quarterly based on real pipeline data, not annual planning cycles.
Measuring too early or with the wrong metrics. A new page needs time to rank and accumulate enough traffic for conversion data to be meaningful. Don’t judge a page’s pipeline potential after two weeks and 30 visits. Give it 90 days and evaluate on leads, not traffic.
FAQ
Should I start with bottom-of-funnel or top-of-funnel content?
Start with mid and high intent content. Always. Bottom-of-funnel and mid-funnel content (comparison pages, alternatives, JTBD articles) converts at 10-25x the rate of top-of-funnel content. Get pipeline flowing first, then selectively add top-of-funnel pieces where they serve a specific strategic purpose like building topical authority.
What if SEMrush or Ahrefs shows 0 search volume for a topic I believe in?
Validate it through other signals before dismissing it. Check Reddit and community forums for real discussions. Look at Google Ads converting search terms. Ask your sales team if prospects mention this problem. We’ve generated 60+ leads from keywords that showed 0 volume in SEMrush. The tools are useful but they systematically undercount niche and long-tail queries.
How many content slots should a pipeline-focused plan have per quarter?
There’s no universal number. It depends on your team’s capacity and the number of high-conviction topics you’ve identified. A plan with 8-12 high-conviction pieces per quarter will outperform a plan with 30 mixed-quality pieces. The constraint should be topic quality, not production capacity. If you only have 6 topics that pass the decision matrix, publish 6.
How long before a pipeline-focused content plan shows results?
Expect initial signals (rankings, early conversions) within 60-90 days for mid-difficulty topics. Meaningful pipeline data typically requires 3-6 months. The compounding effect is real though: at ToolSense, the approach generated 10x inbound demo bookings over two years, with the growth curve accelerating over time as more high-intent pages ranked and reinforced each other.
How do I measure content’s pipeline impact when attribution is broken?
Use all three attribution layers simultaneously: click-based analytics for direct conversions, a mandatory free-text “How did you hear about us?” field on every form, and a custom CRM field where sales captures what prospects say in calls. No single layer is complete. Triangulate across all three to get an accurate picture. This is especially important for AI search, where referred visitors are invisible to click-based tracking.
Can I apply this framework if I’m an early-stage startup with no existing content?
Yes, and you’re actually in a better position than companies with hundreds of existing pages. You have no legacy content dragging down your focus. Start with 5-10 topics that score “Go” on the decision matrix, prioritize the highest-intent buckets (Comparison, Alternatives, Versus), and build from there. The framework works whether you’re starting from zero or restructuring an existing plan.

