Three releases. Twenty-nine corrections. One recurring theme: the data in AI-adjacent documentation degrades faster than the tools themselves.

Between v1.6.6 and v1.6.8, claude-blog went through two independent accuracy audits covering 88 items across API specs, content best practices, schema markup, and AI crawler documentation. Here is what changed and why it matters.

The Audit Process

Starting from v1.6.5, two separate audit passes reviewed every reference file in the skill. Each item was classified:

  • Incorrect: definitively wrong, needs fixing
  • Outdated: was correct, no longer is
  • Confirmed: verified correct
  • Unverifiable: cannot confirm from primary sources

The first audit (Areas 1-2) flagged stat attribution problems and unverifiable claims. The second audit (Areas 3-7) covered 88 items and found 7 incorrect, 24 outdated, 40 confirmed, and 17 unverifiable. What follows is a plain-language breakdown of the most important fixes.

v1.6.6: Stat Attribution and Unverifiable Claims

The first pass focused on statistics referenced in GEO optimization, content rules, and distribution playbook references. The pattern was consistent: stats cited without dates, attributed to secondary sources, or impossible to trace to a primary study.

Key changes:

  • Removed multiple statistics that could not be traced to a primary source
  • Added source and date to every retained statistic
  • Corrected attribution errors (stats credited to the wrong organization)
  • Replaced fabricated or misread data with verified alternatives

The "46% reading mode" stat was the most egregious case. It appeared in both the GEO optimization and AI crawler guides as a complete misquotation of Kevin Indig's work. The actual Indig findings replaced it: top 10 domains capture 46% of all ChatGPT citations per topic (Growth Memo, March 2026); 44.2% of LLM citations come from the first 30% of page content (Growth Memo, February 2026).

v1.6.7: API Accuracy, 29 Corrections Across 10 Files

The second audit was more technical, covering the Google API reference files and AI crawler specifications that blog-google and blog-geo rely on.

Critical: GA4 Quota Wrong by 8x

The GA4 Data API quota was documented as approximately 25,000 tokens per day. The correct figure is 200,000 Core Tokens per day for a standard property (2M for 360 properties), with additional hourly limits of 40,000 per hour per property and 14,000 per hour per project, plus 10 concurrent requests. A quota wrong by 8x is not a rounding error. It changes whether rate limiting is a concern at all for any realistic usage level.
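The corrected limits can be sketched as a simple budget check. This is an illustration, not part of the skill's code: the function name and structure are mine, and actual Core Token costs per request vary with query complexity.

```python
# Corrected GA4 Data API limits for a standard property, per the audit.
# These are the documented ceilings, not per-request costs.
DAILY_CORE_TOKENS = 200_000    # per standard property per day (2M for 360)
HOURLY_PER_PROPERTY = 40_000   # Core Tokens per hour per property
HOURLY_PER_PROJECT = 14_000    # Core Tokens per hour per project
MAX_CONCURRENT = 10            # simultaneous requests

def within_quota(day_used: int, hour_property_used: int,
                 hour_project_used: int, next_cost: int) -> bool:
    """Return True if a request costing `next_cost` tokens fits all limits."""
    return (day_used + next_cost <= DAILY_CORE_TOKENS
            and hour_property_used + next_cost <= HOURLY_PER_PROPERTY
            and hour_project_used + next_cost <= HOURLY_PER_PROJECT)
```

At the old (wrong) 25,000-per-day figure, a modest reporting workload would trigger rate-limit warnings; at the real 200,000, the hourly caps are the binding constraint long before the daily one.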

CrUX Data Pipeline Status

The reference stated CrUX data had migrated out of PageSpeed Insights. As of April 2026, the migration was signaled but not executed. CrUX data still appears in PSI responses. The standalone CrUX API is the recommended long-term path, but the PSI integration is still live.

Core Web Vitals vs. Diagnostic Metrics

FCP (First Contentful Paint) and TTFB (Time to First Byte) were listed under Core Web Vitals. They are diagnostic metrics. Core Web Vitals are LCP, INP, and CLS only. This distinction matters for how the skill interprets and reports PSI results.
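The corrected classification is small enough to state as code. A minimal sketch (the function is hypothetical, for illustration only):

```python
# Core Web Vitals are exactly LCP, INP, and CLS.
# FCP and TTFB are diagnostic metrics, not Core Web Vitals.
CORE_WEB_VITALS = {"LCP", "INP", "CLS"}
DIAGNOSTICS = {"FCP", "TTFB"}

def classify(metric: str) -> str:
    """Label a PSI metric as a Core Web Vital, a diagnostic, or unknown."""
    m = metric.upper()
    if m in CORE_WEB_VITALS:
        return "core-web-vital"
    if m in DIAGNOSTICS:
        return "diagnostic"
    return "unknown"
```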

Google Ads API Version

The reference listed an outdated API version. Updated to v23.1 (February 2026) with a note on the monthly release cadence introduced in January 2026. Any version below v20 is now sunset.

AI Crawler Three-Bot Frameworks

The crawler guide documented a simplified one-bot-per-company model. The actual architecture is more granular:

  • OpenAI: GPTBot (training) + OAI-SearchBot (search indexing) + ChatGPT-User (retrieval)
  • Anthropic: ClaudeBot (training) + Claude-SearchBot (search indexing) + Claude-User (retrieval). Deprecated: anthropic-ai, claude-web
  • Perplexity: PerplexityBot (indexing) + Perplexity-User (retrieval, may not respect robots.txt)

This distinction matters for robots.txt strategy. Blocking training bots is an operator choice with no search consequence. Blocking the search-indexing or retrieval bots means no appearance in AI answers.

Additional crawlers added to the guide: Meta-ExternalAgent, Applebot-Extended, Bytespider, Google-Agent (Project Mariner), CCBot, Amazonbot, DuckAssistBot.
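Under that split, a robots.txt sketch might opt out of model training while staying visible in AI search. The user-agent tokens follow the list above; this is an illustration, not a recommendation for any particular site, and vendor docs should be checked for current token names:

```
# Opt out of model training (no effect on AI search visibility)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# Leave search-indexing bots unrestricted so the site
# can still be cited in AI answers
User-agent: OAI-SearchBot
User-agent: Claude-SearchBot
Disallow:
```

An empty Disallow line means "allow everything" for that group, and a per-agent group overrides any wildcard group for crawlers that honor the Robots Exclusion Protocol.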

llms.txt: No Platform Has Confirmed Reading It

The reference treated llms.txt as a confirmed best practice. The correct disclaimer: no major AI platform has confirmed reading llms.txt. Google's Gary Illyes stated Google does not support it (July 2025). Semrush testing showed zero crawler visits to llms.txt. Low-cost to implement, but benefits are currently unproven.
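For reference, the llmstxt.org proposal describes a single markdown file at the site root: an H1 title, a blockquote summary, then sections of annotated links. A minimal sketch with hypothetical content and URLs:

```markdown
# Example Blog

> Practical guides on analytics APIs, schema markup, and AI crawlers.

## Posts

- [GA4 Data API quotas](https://example.com/posts/ga4-quotas.md): Core Token limits explained
```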

Schema Fixes (FAQPage, ClaimReview, AggregateRating, ProfilePage)

  • FAQPage disclaimer strengthened: since August 2023, FAQ rich results only show for government and health authority sites. The markup is still valuable for AI citation and Google says not to remove existing markup.
  • ClaimReview marked as deprecated (June 2025 structured data simplification) and removed from active recommendations.
  • AggregateRating: NOT supported on BlogPosting directly. Only on eligible entity types: Product, Recipe, SoftwareApplication, LocalBusiness, Movie, Book.
  • ProfilePage template added for author pages (fully supported since December 2025, helps E-E-A-T signals).
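A minimal ProfilePage sketch for an author page (person, dates, and URLs are hypothetical; property names follow schema.org, but validate against Google's current structured-data documentation):

```json
{
  "@context": "https://schema.org",
  "@type": "ProfilePage",
  "dateCreated": "2025-01-15",
  "dateModified": "2026-02-01",
  "mainEntity": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://example.com/authors/jane-doe",
    "jobTitle": "Technical Writer",
    "sameAs": ["https://www.linkedin.com/in/janedoe"]
  }
}
```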

Distribution Platform Updates

  • Reddit API: Became paid ($0.24 per 1,000 calls) in June 2023. Free API access is gone; strategy must rely on organic participation only.
  • LinkedIn: 360Brew AI algorithm (late 2024-2025): views down ~50%, engagement down ~25%, external links penalized ~60%. Native content strongly favored.
  • YouTube Shorts: Recommendation engine fully decoupled from long-form in late 2025. Shorts no longer boost long-form channel authority.

v1.6.8: Windows Installer Fix and CI Bumps

Windows PowerShell Installer Fix

The one-liner installer used irm | iex, a pattern that throws a ParameterBindingException on Windows PowerShell 5.1. The correct form is iex (irm ...). Both install.ps1 and docs/INSTALLATION.md were updated. Also corrected: the installer summary counted 20 sub-skills (now 22), and the Python version warning incorrectly flagged 3.12+ instead of 3.11+.
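The two forms side by side, with a placeholder URL standing in for the actual installer location:

```powershell
# Throws ParameterBindingException on Windows PowerShell 5.1:
irm https://example.com/install.ps1 | iex

# Works on Windows PowerShell 5.1 and PowerShell 7+:
iex (irm https://example.com/install.ps1)
```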

CI Action Version Bumps

  • actions/checkout: v4 to v6
  • actions/setup-python: v5 to v6

These supersede Dependabot PRs #5 and #6.

Why This Matters for a Blog Skill

claude-blog's reference files are the ground truth its skills cite when advising on SEO strategy, schema markup, and API integration. Wrong data in a reference file propagates to every blog post that references it: incorrect quota warnings, wrong schema recommendations, outdated crawler advice.

The audit process of systematically rating every claim as confirmed, outdated, incorrect, or unverifiable is worth running periodically on any documentation-heavy skill. Factual decay in API documentation, crawler specs, and platform algorithm behavior is fast enough that a 6-month-old reference file is worth auditing.

Get the Latest Version

/plugin marketplace add AgriciDaniel/claude-blog
/plugin install claude-blog@agricidaniel-blog-tools

Or via direct install:

claude plugin install github:AgriciDaniel/claude-blog

Full changelog: github.com/AgriciDaniel/claude-blog/releases