Most “data driven” marketers make the same content reporting mistakes.
An obsession with the extreme ends of the funnel, focusing exclusively on eyeballs or last-touch attribution.
Relying on backward-looking outputs, like traffic or leads, that don’t tell you anything about your current day-to-day decision making.
High-level, aggregate monthly or annual trends that only provide surface-level visibility without any answer to “why” those graphs are going up or down.
And worse, arbitrarily comparing yourself to competitor data points that are completely irrelevant.
These mistakes may not hurt as much in other channels or tactics, like paid. However, in content? In SEO or GEO or whatever the hell you want to call it now?
You’ll have already blown through the annual budget before you ever recognize that you had a problem in the first place.
Here’s why most content and organic-based reporting is poor, and how you can fix it.
Let’s not bury the lede. This is the most important step all brands miss (to their detriment).
But first, we need to set the scene a bit so it makes sense.
Yes. It can be helpful to benchmark competitor marketing data.
Specifically when you’re trying to close a Growth Gap. And when you’re analyzing your ability to rank for a particular keyword.
For instance, if you were trying to target a keyword like “perm timeline,” what is the range of brand site strength in the top ten (DR & UR), and also the number of unique referring domains to each individual page that you might have to compete with?

You can and should do this sort of analysis before selecting a keyword or topic or page to write, because it has a direct bearing on your “time to ranking.”
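To make that analysis concrete, here’s a rough sketch in Python. The numbers are made up, standing in for an export from whatever SEO tool you use; the `dr`, `ur`, and `ref_domains` field names are assumptions for illustration, not any specific tool’s API:

```python
# Hypothetical top-10 SERP export for a target keyword like "perm timeline".
# Each row: a ranking page's site strength (DR), page strength (UR),
# and unique referring domains pointing at that specific page.
serp = [
    {"url": "a.com/perm-timeline", "dr": 78, "ur": 34, "ref_domains": 41},
    {"url": "b.com/perm-guide",    "dr": 64, "ur": 28, "ref_domains": 19},
    {"url": "c.com/perms",         "dr": 52, "ur": 21, "ref_domains": 7},
]

def difficulty_summary(serp_rows):
    """Return the (min, max) range for each strength metric across the SERP."""
    summary = {}
    for metric in ("dr", "ur", "ref_domains"):
        values = [row[metric] for row in serp_rows]
        summary[metric] = (min(values), max(values))
    return summary

print(difficulty_summary(serp))
# e.g. {'dr': (52, 78), 'ur': (21, 34), 'ref_domains': (7, 41)}
```

Those ranges are what feed the “time to ranking” estimate: the further your own site sits below the bottom of each range, the longer the runway you should assume.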
However…
Benchmarking competitor reporting data is a giant waste of your time.
Businesses are unique. Sites are unique. Teams are unique.
They all start and mature at different times, possess different strengths and weaknesses, deliver “value” through their products differently, have different pricing schemes or models, are working with different budgets to different growth goals at different stages, etc. etc. etc. You get the point.
Consider paid. If your competitor’s CPC is $100 and yours is $200… that might sound poor on the surface. But what if your competitor’s ARPU is $2,500 and yours is $10,000?
Then who gives a shit! Let those campaigns rip! ‘Cause you’ll more than make up for it with the higher LTV, better retention rate. Etc. etc. etc.
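If it helps, here’s that paid math spelled out, using the same hypothetical numbers from the example above:

```python
# A raw CPC comparison looks bad until you normalize it against
# what each customer is actually worth. Illustrative numbers only.
competitor = {"cpc": 100, "arpu": 2_500}
you        = {"cpc": 200, "arpu": 10_000}

def clicks_covered_per_customer(brand):
    """How many clicks one customer's revenue pays for (higher is better)."""
    return brand["arpu"] / brand["cpc"]

print(clicks_covered_per_customer(competitor))  # 25.0
print(clicks_covered_per_customer(you))         # 50.0
```

Despite paying double per click, “you” in this sketch can afford twice as many clicks per customer won. Context beats the raw benchmark every time.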
Look:
Keyword rankings fluctuate. LLM visibility fluctuates. Traffic or lead totals fluctuate as well.
You know what else fluctuates?
Your weight. Your cash flow. Your relationship with your spouse.
Suck it up and deal with it.
The point is that we can’t always control the “outputs.” We want to. We strive to. But we can’t.
All we can control is what we can control. Meaning: the “inputs” like our attitudes and decisions and the work we’re doing today in the present to better set up and support our future growth goals.
No, this isn’t just some Stoic BS. It’s real life.
So reporting, like virtually every other area of life, should NOT be measured against competitors. But against yourself… six, 12, and 24 months ago.
If you can’t compete head-on with the Goliath in your industry today… then don’t bother. Instead, figure out how to stair-step your way to be able to compete with them a year or two from now.
I swear I’m bringing this back to content eventually.
The point is that before you ever target a single keyword or topic, before you ever write a single word, before you ever build a single link… you need to estimate your “time to results” and work backwards.
In search, that translates into: when can you expect to see those lagging indicators or outputs we mentioned at the top?
More specifically, when might you crack the first page or top ~5 positions and start seeing traffic (and then leads) potentially roll in?
After having done this hundreds of times over the last decade+, we’ve created models to help us analyze and predict future results based on extensive pattern recognition.
Whether that’s taking a giant category of 40,000+ keywords and prioritizing them (like in our AI Tools Topic Report). Or, whether it’s taking that starting point and then further personalizing it based on the underlying strengths and weaknesses of a specific site to estimate the “Time to Results.”
See an example of this below, highlighted in red and yellow, in addition to the content hierarchy recommendation to the right of it:

Keep in mind: we’re still researching and analyzing these ideas. But we now have a specific target or output that we’d like to hit in the future.
Why is this first step so important?
Because it gives us a pin on a map. It sets our direction of travel and shapes the decision-making criteria we’ll use to identify how we actually intend to get to that destination in the most effective way possible.
And most important of all, it allows us to create reporting that’s actually useful – allowing us to contrast and compare our decision making over the next few weeks (if not months) against these parameters.
To tell us whether we’re doing a “good” job or not. To tell us whether we need to shift gears or switch strategies. Or, whether to just stay patient and keep our heads down executing.
Still with me? Ok, let’s dive into the actual number-crunching part.
You publish (or republish) an article today.
It gets indexed (or reindexed). Then it moves up over time. (Assuming you executed well vs. over-relying on AI slop forever.)
These early factors, like the number of content pieces published (or republished) and ranking position changes are your inputs.
This is what you can see and control LONG before you ever see a single session or Key Event in GA show up.
The goal in this step (and the next one below) is to link your inputs to your eventual outputs.
In the example table below, we can see how a client’s traffic grew from 1,096 monthly visits to 62,058 from our content publications alone:

Studying those trends over a few months of data will give you a peek into how the compounding ROI of search-focused content works in practice.
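A minimal sketch of what tracking those inputs next to the outputs can look like, in plain Python. The figures are illustrative placeholders, not the client data referenced above:

```python
# Inputs (articles published per month) tracked alongside the lagging
# output (monthly organic visits). Placeholder numbers for illustration.
months    = ["2024-01", "2024-02", "2024-03", "2024-04"]
published = [8, 10, 12, 12]              # input: articles shipped that month
visits    = [1096, 2900, 7400, 15800]    # output: total organic visits

# Cumulative live articles is the input that compounds over time.
cumulative_published = []
total = 0
for n in published:
    total += n
    cumulative_published.append(total)

for month, cum, v in zip(months, cumulative_published, visits):
    print(f"{month}: {cum:>3} live articles -> {v:>6} visits")
```

Lining the two columns up month by month is what makes the compounding visible: the input grows linearly while the output (eventually) grows much faster.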
Publishing more gives you more targets to hit and develops topical authority faster, which means:
Reach and frequency. It’s almost like proven marketing and advertising principles over the past century still apply. Regardless of the channel.
Huh. Imagine that?
Now, let’s think back to the last example for a second. Imagine one of those ideas with an estimated “time to results” of 12+ months, so we can conceptually tie these two steps together.
Here’s an example of how that keyword’s ranking timeline might look:

So. Is the above graph “bad” now in this context? Does this mean you suck at your job, the sky is falling, or that SEARCH IS SO FINALLY FREAKING DEAD?
Nope.
‘Cause you should have already anticipated this exact problem with your estimated “time to results” in the first step before ever even hitting the Publish button.
Plus, check out that long-term top 3 trendline! That’s the money you were after all along. Sure, it may have taken a few months longer than you would ideally like.
We all wanted results yesterday. But this example basically performed exactly as you anticipated.
The lessons here: estimate your timelines up front, then judge performance against those estimates instead of panicking at the first flat stretch.
Making sense?
So. If your goals are to hit some magical numbers at the end of six or twelve months, you just work backwards.
You specifically allocate $X budget or Y time or Z content pieces or whatever — today! — to target the keywords whose estimated “time to results” lines up with those deadlines.
Now. Should you stop here, pat yourself on the back, and call it a day for reporting?
Nope!
Like the homie DMX said, we just gettin’ started up in here.
We’re going to start pushing the pace a bit.
Looking at any channel’s performance month-over-month is fine. Not bad, but also not great. Just… meh.
The only way you can start to draw any real insight is to continue breaking high-level trends down into smaller segments so you can analyze the pieces.
Doing this properly opens all sorts of doors.
For example, take the spreadsheet data from the last example above and split it out into URL cohorts by monthly publication date.
In other words, which content was published (or republished) during a certain month and year. Now, re-run the analysis to look like this:

The column on the left shows the number of publications for each month and year. Then, you can see how these individual cohorts performed over longer time periods.
Why is this useful? Because you’ll be able to see what’s performing as-expected, better, or worse than your predictions.
Once you have data organized like this, it’s easy to calculate cohort-specific performance to measure against your original predictions.
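If you want to roll your own version, here’s a minimal plain-Python sketch of that cohort roll-up. The data shape is hypothetical, loosely modeled on a Search Console export:

```python
# Group URLs into cohorts by publication month, then roll up clicks
# per cohort per reporting month. All figures are illustrative.
from collections import defaultdict

pages = [
    {"url": "/a", "published": "2024-01", "clicks": {"2024-02": 40, "2024-03": 120}},
    {"url": "/b", "published": "2024-01", "clicks": {"2024-03": 60}},
    {"url": "/c", "published": "2024-02", "clicks": {"2024-03": 90}},
]

# cohorts[publication_month][reporting_month] -> total clicks
cohorts = defaultdict(lambda: defaultdict(int))
for page in pages:
    for month, clicks in page["clicks"].items():
        cohorts[page["published"]][month] += clicks

for cohort in sorted(cohorts):
    print(cohort, dict(cohorts[cohort]))
```

Each printed row is one cohort’s performance over time, which is exactly the shape you need to compare against your original predictions.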
For example, check out the performance of these URLs by how long it ACTUALLY took to reach top three or first page:

Hopefully, you’re starting to understand this “payback period” idea now.
You’re able to see how long your predictions are taking to work. Which allows you to make better forward-looking predictions in the future.
And perform even more insightful analyses in the steps below.
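Here’s a small sketch of that payback-period check: actual months to hit the first page versus the prediction you made up front. Dates and predictions are hypothetical:

```python
# Compare actual "time to first page" against the predicted timeline.
from datetime import date

pages = [
    {"url": "/a", "published": date(2024, 1, 10),
     "first_page_on": date(2024, 4, 2), "predicted_months": 3},
    {"url": "/b", "published": date(2024, 1, 10),
     "first_page_on": date(2024, 7, 20), "predicted_months": 4},
]

def months_to_first_page(page):
    delta = page["first_page_on"] - page["published"]
    return round(delta.days / 30.44, 1)  # 30.44 = average month length

for page in pages:
    actual = months_to_first_page(page)
    verdict = "on track" if actual <= page["predicted_months"] else "slower than predicted"
    print(f"{page['url']}: {actual} mo vs {page['predicted_months']} mo predicted -> {verdict}")
```

Run this every month and your “time to results” estimates stop being guesses and start being calibrated.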
When you can start analyzing content performance to this degree of granularity, it becomes very easy to assess results by other dimensions like content type or topic area.
This is critically important on large sites, where you might have multiple product lines, or if you have a horizontal product that could cater to LOTS of different ICPs. In these cases, your site often has different levels of topical authority across vastly different categories.
So just because you can get content to perform quickly in one topic area doesn’t necessarily mean that you’ll also see the same “time to results” across other ones.
Once you have publication-based cohort data broken out, layering content type or topic is straightforward. You simply group all of the underlying data together across all page URLs to contrast and compare the relative performance of each sub-group.
Like this example below, that shows the ranking positions and clicks by pages relating to either “AI”-based topics vs. “business”-focused ones:

Now you can compare apples to apples. You can make future topic decisions with more clarity. And you can make faster predictions about the potential ROI for different content concepts.
You will also spot areas of opportunity faster.
Bonus tip: the “2-in-2” guideline. Can you go from “not indexed” to roughly second-page positions within about two weeks?
Like so, where the ranking position timeline goes from zero to the twenties within about a week of being published:

Our work (and analyzing performance like this) has helped us develop this simple rule of thumb, which correlates HIGHLY with first-page rankings within six months of publication.
Don’t worry, I can already hear the SEO nerds in the back of my head shouting: “But correlation doesn’t equal causation!”
Of course it doesn’t. But…
I also feel pretty comfortable trusting the high-hundreds of repeating patterns like this one I’ve seen over the past 15 years as a pretty damn reliable benchmark.
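The check itself is trivial to automate. Here’s a sketch, assuming you have a daily position series per URL (the sample data and the “position 20 or better” threshold for second page are illustrative):

```python
# "2-in-2" check: did the page reach roughly second-page positions
# (<= 20) within ~14 days of publishing? Sample series are hypothetical.
def passes_2_in_2(position_by_day, window_days=14, threshold=20):
    """position_by_day maps days-since-publish -> average ranking position."""
    return any(
        day <= window_days and pos <= threshold
        for day, pos in position_by_day.items()
    )

fast_mover = {3: 85, 6: 42, 9: 24, 12: 18}   # hits position 18 on day 12
slow_mover = {7: 90, 14: 55, 28: 31}

print(passes_2_in_2(fast_mover))  # True
print(passes_2_in_2(slow_mover))  # False
```

Flag the failures early and you know which pages need a second look long before the six-month mark.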
This sort of analysis might seem overkill for newer sites.
Again, I would argue the opposite.
If you’re using the Beachhead Principle to establish topical authority on a new site, you need to know whether (or not) your decisions are any good!
Well, a graph like this for a new site means you chose the right beachhead:

What you’re seeing here is the cumulative growth of first-page ranking positions on the first 50 articles for a brand new site. Again, you can and should show this compounding nature over time, like this:

And you should break out your page topics to assess the performance of your decisions from a few months back.
For example, on this site, we chose to target “homeschool” topics as one of the first beachheads to establish because these topics were already converting the best on paid search.
I refer to this idea as “reinforcing paid,” where you want to double down on the best performing paid areas with organic content positions to (a) exponentially increase the number of leads with multiple top positions (paid and organic) in the SERPs, and/or (b) be able to pull back on paid at some future point to reinvest in other areas. Two birds, one stone.
The table below shows that 56% of “homeschool” pages we produced are hitting the first page. That’s, well, awesome.

These aren’t some flimsy TOFU, vanity-metric-inducing, zero-click keywords, either. But the juicy ones that actually have buying intent.
Sane parents who actually care about their children’s future aren’t going to blindly follow some regurgitated LLM recommendation for their little special Keiki’s education.

BUT BUT BUT. WHAT ABOUT LLM CITATIONS AND VISIBILITY?!?!?!?!
After all, we gotta throw a bone to all those breathless LinkedIn marketing virgins and mention it at least once every single day.
Oh yeah. Those also look pretty good on only 50 pages on a new site with low DR and zero link or citation building:

Creating channel-specific content that can only perform in one place is a waste of resources. So stop blindly chasing “SEO” content tactics or “LLM” content tactics.
Instead, focus on content that builds trust and credibility with readers, positions the unique differentiating factors of your brand, and factors discovery and distribution (search + LLMs) into the equation at the same time.
The good news (or bad news, depending on how you look at it) is that we’re still not done yet!
There’s one more level to go in this game.
Monthly trends, broken down over the last few steps, allow you to operate with more certainty.
However, the “feedback loop” in search can be LONG.
Again, contrast with paid. You spend money today, run ads tomorrow, watch leads next week, see how many people buy over the next 30 days. Cool.
It don’t work like that with organic content performance. In fact, your ROI in year two should be significantly better than year one.
Problem? We talkin’ years now instead of weeks or months.
So we’re going to need to do some quarter-over-quarter analysis with all this underlying data. This will also help us rule out cyclical trends or seasonality that month over month data falls victim to.
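Mechanically, this is just rolling your monthly totals up into quarters and comparing the same quarter year over year. A plain-Python sketch with illustrative numbers:

```python
# Roll monthly click totals into quarters so month-to-month noise and
# seasonality wash out, then compare the same quarter year over year.
from collections import defaultdict

monthly_clicks = {
    "2023-07": 900, "2023-08": 950, "2023-09": 1000,
    "2024-07": 700, "2024-08": 720, "2024-09": 680,
}

quarterly = defaultdict(int)
for month, clicks in monthly_clicks.items():
    year, m = month.split("-")
    quarter = f"{year}-Q{(int(m) - 1) // 3 + 1}"
    quarterly[quarter] += clicks

q3_change = (quarterly["2024-Q3"] - quarterly["2023-Q3"]) / quarterly["2023-Q3"]
print(dict(quarterly))     # {'2023-Q3': 2850, '2024-Q3': 2100}
print(f"{q3_change:.1%}")  # -26.3%
```

Comparing Q3 to Q3 (rather than September to August) keeps seasonal businesses from false-alarming every fall.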
We’ll use another example in a totally different space to help illustrate.
Not all traffic losses are equal. And just because there’s a traffic loss doesn’t mean the sky is falling.
There’s a BIG difference between pages that have lost positions due to content decay (and need updating ASAP) and pages that are getting cannibalized by AIOs.
Read Traffic Recovery for more here. But long story short: a traffic loss caused by a lost top ranking position (ex. 1 → 7) on content that hasn’t been updated in two-plus years should be recovered immediately. Especially when you can tell that the content type will continue to work well for LLM discovery, too.

However. Then there’s the less fun kind of traffic loss to discuss. Yes, AI Overviews are impacting CTR. Yes, zero-click SERPs are on the rise. You already know all about this.
The clue here is that the ranking positions don’t actually change at all in most cases. If anything, they may have even grown during this period.
Tell me if the chart below looks familiar. It’s a quarter-over-quarter analysis over the past year.
Now, look at how total traffic has declined pretty steadily. Yet, the number of top three ranking positions and first page results has grown substantially.

These are your zero-click SERPs! This is your declining CTR impact.
More, better ranking positions. More AIO visibility. And yet, traffic is still down.
It sucks. I know. Is it time to panic right now based on this surface-level data? Maybe, maybe not.
If you can segment out your content types or topics (as we went through in the last few steps), then you can isolate the variables and apply critical thinking to your analysis.
In this example, the content type with (a) increasing top three positions and (b) the biggest traffic declines at the same time is glossaries. They account for a 36.6% traffic dip in our third-quarter, year-over-year analysis. That rate was significantly higher than for any other content type or topic we measured, so we can clearly isolate the problem child that makes the entire chart look UGLY AF.
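That isolation step is easy to script once your data is segmented: look for segments where top-three positions grew while clicks fell sharply year over year. The numbers below are illustrative, loosely shaped like the glossary example:

```python
# Flag segments showing the zero-click fingerprint: ranking positions
# up, clicks down hard. All figures are hypothetical.
segments = {
    "glossaries": {"top3_prev": 40, "top3_now": 55, "clicks_prev": 30_000, "clicks_now": 19_000},
    "how-tos":    {"top3_prev": 25, "top3_now": 27, "clicks_prev": 22_000, "clicks_now": 21_500},
}

def zero_click_suspects(segments):
    suspects = {}
    for name, s in segments.items():
        clicks_change = (s["clicks_now"] - s["clicks_prev"]) / s["clicks_prev"]
        # More top-3 positions AND a big click decline = likely CTR erosion,
        # not a rankings problem. The -15% cutoff is an arbitrary example.
        if s["top3_now"] > s["top3_prev"] and clicks_change < -0.15:
            suspects[name] = round(clicks_change, 3)
    return suspects

print(zero_click_suspects(segments))  # {'glossaries': -0.367}
```

The point isn’t the threshold; it’s that the diagnosis requires both signals together, which only segmented data can give you.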
If you’ve opened GA or GSC in the last year, you’ve probably seen similar trend lines and also sobbed silently on a Saturday night into your large glass of tequila. (Just me?!)
The point is that we need to dig beneath the surface. WHAT, specifically, are you trying to get out of each content type? What is the actual business problem you’re solving?
See? It’s nuanced.
This content type might suffer a drop in clicks. It might even tank. However, that doesn’t mean all is lost.
Most reporting data only focuses on the two ends of the spectrum: activities to make people aware of your brand, or activities meant to convert warmed-up users. These extremes are where everyone obsesses.
But what about, you know, the middle?
If you’re only measuring traffic, or only measuring lead conversions, then this content type isn’t worth doing. Obviously. However, if the sales cycle is longer and often requires a buying committee (like, you know, most expensive purchases by sophisticated customers), then it probably does make sense to continue.
Because trust and credibility and brand recognition are worth their weight in gold. These elements are significantly more important to converting larger customers than any other sort of shortsighted tactical tweak.
In this specific case, the catch is that we need to update reporting to help better capture “credibility” or “trust” or “brand recognition” in a more useful way. Like: content attribution through a CRM, so you can see which pages were visited over weeks or months prior to conversion.
Then we can measure the number of leads, MQLs, or sales that visit certain page types — using the same sort of cohort-based analyses described here — to truly answer whether this content is still a good use of everyone’s time (or not).
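A minimal sketch of that attribution idea: for each converted lead, count which page types appeared in their visit history before conversion. The visit logs and the page-type mapping here are entirely hypothetical; in practice this data comes from your CRM and analytics stitching:

```python
# Count content-type touches among converted leads. Hypothetical data
# standing in for a CRM export joined with page-visit history.
from collections import Counter

page_types = {"/glossary/churn": "glossary", "/blog/pricing-guide": "guide"}

leads = [
    {"email": "a@x.com", "converted": True,  "visited": ["/glossary/churn", "/blog/pricing-guide"]},
    {"email": "b@x.com", "converted": True,  "visited": ["/glossary/churn"]},
    {"email": "c@x.com", "converted": False, "visited": ["/blog/pricing-guide"]},
]

touches = Counter(
    page_types[url]
    for lead in leads if lead["converted"]
    for url in lead["visited"]
    if url in page_types
)
print(touches)
```

If glossary pages keep showing up in the paths of closed deals, a falling click count stops looking like a reason to kill the content type.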
Reporting should be actionable.
We should be using it to analyze what is working (as predicted or forecasted) and when we should double down ASAP. Versus, when it’s not working, why, and how we should adapt our decisions based on all of this additional context.
Start by estimating or predicting the targets you’re trying to hit. Then, create month-over-month views by publish date to help visualize trends by cohort. Once you’ve done this, you can further segment them into specific content types and topics and analyze performance over quarters.
Looking at random numbers going up or down on a monthly basis, or using completely arbitrary competitor benchmarks, is a giant waste of everyone’s time.
And the problem is that you won’t know whether it’s a waste of time until it’s already too late.
Every marketer thinks they’re data-driven.
To some degree that’s true. There’s no shortage of data points we can all obsess over on a daily basis.
But that’s also part of the problem. If you don’t take a step back and define what you’re trying to analyze or why, and how you’re going to do it to create actionable insights, you’re going to fall victim to the same biases as everyone else.
This is mandatory with content or search where feedback loops are long and there’s a lot of noise (and bullshit) spewed by people who’ve never actually lived in your shoes before.
The ever-shifting landscape only exacerbates the confusion.
Thankfully, it doesn’t have to be this way. It’s not time to throw the baby out with the bathwater or throw the toys out the pram.
You just need to set up a cohort-based reporting structure around your own objectives to gain perspective. Then, you can use data to make sober decisions. And avoid fear-induced decisions the minute one number goes up while another goes down.
You’ll blow through the annual budget chasing the latest shiny object from whatever faux influencer was cool at the time.
Traffic, leads, and sales are lagging indicators.
They’re outputs, only providing insight about decisions you made three, six, or even 12 months ago. Perhaps they were great. Or meh. Or poor.
Either way, the distinction is that you can’t control outputs directly.
LLM metrics are bullshit. “Visibility.” “Share of voice.”
They’re not actionable. They’re not insightful.
They don’t help you learn from the past, recalibrate in the present, and make better future decisions because of it.
And if you need help, we’re just a phone call away.