
Content decay detection

A weekly GSC pull that compares the last 7 days to the previous 28, classifies every page by decay severity, and sends a Slack + email digest of what to refresh.

What this does

Every Monday at 8 AM, the workflow pulls the last 7 days of Google Search Console data and compares it to the previous 28 days. For every page, it asks: did clicks drop? did position drop? by how much? Then it classifies each page as critical decay (big drops on a high-value page), decaying (moderate drops), early decay (early warning signs), stable, or growing.
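
To put numbers on it: assuming the 28-day baseline is normalized to a weekly rate, a page that averaged 100 clicks a week over the baseline but pulled only 60 last week is down 40% – past the critical threshold described further down.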

You get a Slack message + email every Monday morning with the breakdown: how many pages in each bucket, which pages are critical, what's growing. Critical pages also get auto-generated fix tasks (refresh content, audit backlinks, fix CTR) dropped into a separate Google Sheet tab.

Workflow runtime: 5-10 minutes per Monday run. Your time: 15 minutes to read the report.

The problem this solves

Content rot is a slow burn. A post you wrote a year ago is at position 4. Six months later it's at 14. Six more months and it's at 32 and the page is barely getting impressions. You didn't notice because no single week's drop was big – it was a slow, smooth decline across many months.

The fix is to refresh those pages. But before you can refresh them, you have to find them. And that's where most teams stall:

  1. The manual version is opening GSC, picking a date range, picking another date range, exporting to CSV, manually diffing, eyeballing what dropped, deciding which drops are real signal vs noise. That's 4-8 hours a week for a real content library.
  2. The "I'll check it next quarter" version is what actually happens. By the time someone looks, the drops are catastrophic – position 4 → 47, traffic down 90%, the page is essentially dead.
  3. The cheap-tool version is a SaaS that emails you "your traffic is down 12% this month". Useless. You need to know WHICH PAGES are decaying, not your portfolio average.

This workflow does the boring part – the data pull, the per-page comparison, the classification, the report – automatically every week. Your editor gets the same Monday-morning digest each time: here's what's critical, here's what's still growing, here's what to refresh first. No manual GSC drilling.

For a content library of 200+ pages this is the difference between catching decay in week 2 (fixable in an hour) vs week 12 (a real rewrite job).

What you put in

A one-time setup:

  • GSC property the workflow can read (service account access)
  • Google Sheet to log every weekly run (so you can see trends month-over-month)
  • Slack channel for the weekly digest (or skip Slack, just use email)
  • Recipient email address for the same digest

Per-run inputs: none. The workflow runs itself on the schedule.

What you get out

Every Monday morning:

  • Slack post with the headline numbers (e.g. critical: 4 pages, decaying: 11, growing: 8, stable: 192)
  • An email with the full breakdown – each bucket's pages listed, biggest drops first
  • Google Sheet log that records the full classification for every page (so month-over-month comparisons are possible)
  • A fix-tasks tab in the sheet, populated for critical pages – each gets auto-generated tasks like "audit backlinks", "refresh content", "fix CTR with new meta description"

You read the digest, pick the 2-3 critical pages worth attacking that week, and either refresh them yourself or hand them to a writer with the auto-generated task list as a starting brief.
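
For a flavor of how that task list gets built, here's a minimal sketch. The signal-to-task rules and the page field names (baselinePosition, currentCtr, etc.) are illustrative, not the shipped logic – the real mapping is tuned during setup.

```js
// Hypothetical sketch of the fix-task generator. The signal-to-task rules
// and the page fields are illustrative defaults, tuned per site in practice.
function buildFixTasks(page) {
  const tasks = [];
  const posDrop = page.currentPosition - page.baselinePosition; // positive = slid down
  const ctrDrop = page.baselineCtr > 0
    ? (page.baselineCtr - page.currentCtr) / page.baselineCtr
    : 0;

  if (posDrop >= 5) {
    // Rankings slid: usually a freshness or authority problem.
    tasks.push('Refresh content: update stats, examples, internal links');
    tasks.push('Audit backlinks: check for lost referring domains');
  }
  if (ctrDrop >= 0.2 && posDrop < 3) {
    // Position held but clicks fell: a snippet problem, not a ranking problem.
    tasks.push('Fix CTR: rewrite title tag and meta description');
  }
  return tasks.length ? tasks : ['Manual review: decay signals are mixed'];
}
```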

How long per Monday

Workflow time: 5-10 minutes Monday morning. Most of it is parallel GSC calls.

Your time: 15 minutes to read the digest, 30 minutes if you want to deep-dive into one critical page.

End-to-end weekly: about 45 minutes of human time, down from 4-8 hours of manual GSC drilling.

When this is a good fit

  • Your blog has at least 100 posts older than 6 months. Below that, you don't have enough indexed content to make weekly monitoring valuable.
  • You have GSC verified on the site and at least 90 days of GSC data. Without that the comparison windows are noisy.
  • You're already committed to refreshing content. The analysis is wasted if no one acts on it.
  • You want weekly cadence – not daily (too noisy), not monthly (you miss the early signal). Monday-morning weekly is the sweet spot for most content teams.

When this isn't a good fit

  • Your site is brand new. Decay analysis assumes a meaningful baseline.
  • You don't have GSC. The whole thing runs on GSC's API. Without it there's nothing to compare.
  • You write seasonal content (e.g. holiday gift guides). The comparison logic will flag everything as decaying every February. The workflow can be tuned to ignore seasonality but it adds complexity – flag this on the call.
  • You want a fully automated "detect AND refresh" pipeline with no human review. That's a different workflow and it's a bad idea even when it works – content decay decisions need editorial judgement.

What's actually under the hood

The workflow runs on n8n. Thirteen functional nodes:

  1. Schedule trigger – Monday 8 AM
  2. Calculate date ranges – recent 7 days, baseline 28-day window (date math sketched after this list)
  3. Fetch GSC for the recent window
  4. Fetch GSC for the baseline window (parallel)
  5. JavaScript code node: compare clicks + position per page, classify as CRITICAL_DECAY / DECAYING / EARLY_DECAY / STABLE / GROWING
  6. Log every result to the master sheet
  7. Filter to decaying-or-worse pages
  8. Build the weekly report (formatted text, includes top decliners per bucket)
  9. Send to Slack
  10. Send to email
  11. Check if any pages are in CRITICAL_DECAY
  12. For each critical page: generate fix tasks (backlink audit, content refresh, CTR optimization)
  13. Log fix tasks to the dedicated sheet tab
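
For a sense of what nodes 2-4 look like, here's a rough sketch in an n8n Code node. The query body is the standard shape for GSC's Search Analytics API; the variable names and row limit are illustrative:

```js
// Sketch of nodes 2-4 in an n8n Code node. Windows end yesterday; note that
// GSC data typically lags 2-3 days, so production setups shift both windows
// back accordingly (omitted here for clarity).
const DAY = 24 * 60 * 60 * 1000;
const fmt = (d) => d.toISOString().slice(0, 10); // YYYY-MM-DD

const recentEnd = new Date(Date.now() - DAY);                     // yesterday
const recentStart = new Date(recentEnd.getTime() - 6 * DAY);      // 7-day window
const baselineEnd = new Date(recentStart.getTime() - DAY);
const baselineStart = new Date(baselineEnd.getTime() - 27 * DAY); // 28-day window

// Body for POST .../sites/{siteUrl}/searchanalytics/query, one call per
// window; the HTTP node downstream supplies the service-account auth.
const recentQuery = {
  startDate: fmt(recentStart),
  endDate: fmt(recentEnd),
  dimensions: ['page'],
  rowLimit: 5000,
};

return [{ json: { recentQuery, baselineStart: fmt(baselineStart), baselineEnd: fmt(baselineEnd) } }];
```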

The classification logic is the part that earns its keep. The thresholds for "critical" vs "decaying" vs "early decay" aren't arbitrary – they're tuned to filter out GSC noise (small fluctuations week-over-week) while catching real declines. Default thresholds: critical = ≥30% click drop on a page with ≥100 baseline clicks; decaying = ≥20% click drop OR position drop of ≥5 spots on a top-20 page; early = position drop ≥3 spots OR click drop ≥10%.
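
In code, the classifier (node 5) is roughly this shape. It's a sketch built on the default thresholds above; how the 28-day baseline is normalized and where the growth cutoff sits are assumptions, since the defaults don't pin those down:

```js
// Sketch of the classification node, using the default thresholds above.
// Assumption: the 28-day baseline is normalized to a weekly rate so the two
// windows are comparable; the ≥10% growth cutoff is also an assumption.
function classify(page) {
  const baselineWeekly = page.baselineClicks28d / 4;           // weekly rate
  const clickDrop = baselineWeekly > 0
    ? (baselineWeekly - page.recentClicks) / baselineWeekly
    : 0;
  const posDrop = page.recentPosition - page.baselinePosition; // positive = worse

  if (clickDrop >= 0.3 && baselineWeekly >= 100) return 'CRITICAL_DECAY';
  if (clickDrop >= 0.2 || (posDrop >= 5 && page.baselinePosition <= 20)) return 'DECAYING';
  if (posDrop >= 3 || clickDrop >= 0.1) return 'EARLY_DECAY';
  if (clickDrop <= -0.1) return 'GROWING';                     // clicks up ≥10%
  return 'STABLE';
}
```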

Those numbers get tuned to your site during setup – sites with low baseline traffic need different thresholds than sites with high baseline traffic.

What you own at handover

  • The full n8n workflow file
  • The Google Sheet template (master log tab + fix-tasks tab)
  • A setup doc covering: service account creation, GSC permission, Slack webhook setup
  • A runbook for the common errors (GSC quota exceeded, sheet-too-large after a year of logging, false-positive seasonality flags)
  • A tuning doc – the threshold numbers that work for your site, plus how to retune them as the library grows
  • A Loom of one full Monday run
  • Optional add-on: auto-create a Linear / Asana / Notion ticket for each critical page (instead of just logging to the sheet)
  • Optional add-on: GA4 conversion data overlaid on the decay classification (so you can prioritize critical pages by revenue impact, not just traffic)

Why I can help

The data pull is straightforward. The hard part is making the report actually actionable instead of a wall of numbers.

Three things that took iteration to get right:

  • The classification thresholds. Most "content decay tools" use one global threshold across all pages, which means high-traffic pages get flagged every week (normal week-over-week noise) and low-traffic pages never get flagged (no signal). The workflow's thresholds adjust based on baseline traffic.
  • The fix-task generator. A "critical page" without context is just bad news. The auto-generated tasks turn the alert into a starting brief: which signals match a backlink problem vs a content-freshness problem vs a CTR problem. That cuts the time from alert to action.
  • The signal-vs-noise filtering. Pages can drop 30% in clicks because Google rolled out a core update over the weekend. The workflow has a "global drop" detector that, when most pages drop in tandem, suppresses individual page alerts and instead sends a single "looks like an algorithm update, here's the portfolio impact" message (sketched below).
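
The detector itself is simple in principle. A minimal sketch, with illustrative cutoffs rather than the shipped values:

```js
// Sketch of the global-drop detector. If most pages fall together, treat it
// as an algorithm update, not page-level decay. The cutoffs (60% of pages
// dropping ≥15% week-over-week, minimum 20 pages) are illustrative.
function isGlobalDrop(pages) {
  const dropped = pages.filter((p) => p.clickDrop >= 0.15).length;
  return pages.length >= 20 && dropped / pages.length >= 0.6;
}
// In n8n this feeds an IF node: true routes to a single portfolio-impact
// message, false routes to the normal per-page alerting path.
```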

The version you get already has those tuned for the kind of B2B content site this is built for. We re-tune them during setup based on your specific baseline.

What it costs to run

Per Monday run: free. GSC API is free. n8n hosting: about $5/month if self-hosted.

Build cost: 1-2 weeks of my time. Most of it is tuning the classification thresholds to your specific traffic profile and building the fix-task generator's prompts.

How to start

Book a call. Bring your GSC access + a rough sense of your portfolio size (how many pages, average traffic per page). We'll calibrate thresholds during the call and you can have the first Monday digest in your inbox within 10 days.

Want this built for your team?

Book a call and walk through what we'd adapt for your stack.
