I Audited a Blogging Educator's Website. 310 Posts, But AI Can't See Any of Them.
By Lesli Rose · April 1, 2026 · 10 min read
This blogger has been at it for over 20 years. The site is a blogging education platform where solopreneurs and content creators learn how to build traffic, grow audiences, and make a living from blogging. New content goes up every Wednesday like clockwork. There are 310 blog posts and 9 pages live, plus multiple other niche sites -- all built on content. This person knows how to blog.
The problem is that every AI system on the internet -- ChatGPT, Claude, Perplexity, Google's AI Overviews, Apple Intelligence, Amazon, Meta, even TikTok's crawler -- is explicitly blocked from reading any of it.
Below, I'll walk through the key findings and why they matter for any blogger or content creator.
The Scores
Technical SEO: 55
On-Page SEO: 60
Content: 75
Schema: 30
AI Discoverability: 10
Content scored the highest at 75 -- because the content IS strong. But look at that AI Discoverability score: 10 out of 100. For a site with 310 posts full of expert blogging advice, that number should be alarming. Here's why.
Finding #1: Every AI Crawler Is Blocked
This is the critical finding. The site's robots.txt file explicitly blocks every major AI crawler on the internet:
ClaudeBot -- blocked. Claude can't read or recommend the content.
GPTBot -- blocked. ChatGPT can't cite the blogging advice.
Google-Extended -- blocked. Google's AI Overviews can't pull from the posts.
PerplexityBot -- blocked. Perplexity can't surface the answers.
Amazonbot -- blocked. Alexa can't reference the expertise.
Applebot-Extended -- blocked. Apple Intelligence can't find it.
Bytespider -- blocked. TikTok's search can't index the content.
meta-externalagent -- blocked. Meta's AI can't read the posts.
Here's the irony: this blogger teaches others how to get traffic. The entire business is helping people get discovered online. But the site itself is invisible to the fastest-growing discovery channel on the internet -- AI recommendation systems.
Someone asks ChatGPT "how do I start a blog in 2026?" This educator has probably written that exact post. Multiple times. But ChatGPT literally cannot read it. The robots.txt says no.
This likely wasn't an intentional choice. Many WordPress security plugins and AI-blocking plugins add these rules by default. The site owner may not even know they're there. But the result is the same: 310 posts of expert content, locked away from every AI system that could recommend them.
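If you want to check this on your own site, here's a minimal sketch using Python's built-in robots.txt parser. The domain is a placeholder -- swap in your own.

```python
# Minimal sketch: check which AI crawlers a robots.txt allows.
# "example-blog.com" is a placeholder -- swap in your own domain.
from urllib.robotparser import RobotFileParser

AI_BOTS = [
    "ClaudeBot", "GPTBot", "Google-Extended", "PerplexityBot",
    "Amazonbot", "Applebot-Extended", "Bytespider", "meta-externalagent",
]

parser = RobotFileParser("https://example-blog.com/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for bot in AI_BOTS:
    allowed = parser.can_fetch(bot, "https://example-blog.com/")
    print(f"{bot:20} {'allowed' if allowed else 'BLOCKED'}")
```

If every line prints BLOCKED, your content is invisible to those systems no matter how good it is.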
Finding #2: The Brand Identity Is Confused
The site has a clear brand name. But machines think it's something else entirely.
The Open Graph og:site_name tag -- which controls how the site appears in social shares, link previews, and AI summaries -- uses the owner's personal name instead of the brand name. The WebSite schema markup also uses the personal name, with a description from an older version of the site that no longer matches the current focus.
This creates a disconnect. A human landing on the site sees the brand and understands it. But Google, social platforms, and AI systems see a different name and an outdated tagline -- neither of which matches the current brand or content focus.
For someone running multiple content brands, clean brand signals become even more important. Each site needs to tell machines exactly what it is and who it's for. Right now, the signals are crossed.
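If you want to see what machines see on your own site, pull the homepage and read the og:site_name tag directly. Here's a rough sketch -- the URL and expected brand name are placeholders, and the regex is a quick check rather than a full HTML parser.

```python
# Rough sketch: read og:site_name from a homepage and compare it to
# the brand name you expect. Domain and brand are placeholders.
import re
import urllib.request

URL = "https://example-blog.com/"
EXPECTED_BRAND = "Example Blog"  # the brand name the site should present

html = urllib.request.urlopen(URL).read().decode("utf-8", errors="ignore")

# og:site_name is what social previews and many scrapers display.
# (Rough check: assumes property= appears before content= in the tag.)
match = re.search(
    r'<meta[^>]+property=["\']og:site_name["\'][^>]+content=["\']([^"\']+)',
    html, re.IGNORECASE,
)
site_name = match.group(1) if match else "(missing)"

print(f"og:site_name : {site_name}")
print(f"expected     : {EXPECTED_BRAND}")
print("MISMATCH" if site_name != EXPECTED_BRAND else "OK")
```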
Finding #3: Duplicate GA4 Properties
The site is running two GA4 properties simultaneously. Every pageview, every session, every event fires twice -- once to each property.
This means the traffic numbers in at least one of those properties are inflated. If the site owner is making business decisions based on analytics -- which posts to write more of, which topics drive traffic, whether the weekly publishing schedule is working -- those decisions are based on unreliable data.
The fix is simple: identify which GA4 property is the primary one, remove the duplicate tracking code, and verify in real-time reports that only one property is still collecting data. Until then, the numbers can't be trusted.
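A quick way to confirm this on any site is to scan the page source for GA4 measurement IDs -- each distinct G- ID wired into the tracking code is another property receiving every hit. A rough sketch (placeholder URL; IDs injected by Google Tag Manager won't show up in the raw HTML):

```python
# Rough sketch: find GA4 measurement IDs referenced on a page.
# Each distinct G-XXXXXXXX ID usually means another property
# receiving every pageview. URL is a placeholder.
import re
import urllib.request

URL = "https://example-blog.com/"
html = urllib.request.urlopen(URL).read().decode("utf-8", errors="ignore")

ids = sorted(set(re.findall(r"G-[A-Z0-9]{6,}", html)))
print(f"GA4 measurement IDs found: {ids or 'none'}")
if len(ids) > 1:
    print("More than one ID -- pageviews are likely being double-counted.")
```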
Finding #4: 310 Posts of Q&A Content, Zero FAQ Schema
The blog is full of posts that answer specific questions bloggers have: how to start, how to grow traffic, how to monetize, how to stay consistent. These are exactly the kinds of posts that qualify for FAQ rich results in Google and get cited by AI systems answering user questions.
But none of them have FAQ schema. Yoast SEO is installed and properly generating basic Article schema on blog posts -- that's working. But the FAQ layer is missing entirely.
This matters because FAQ schema does two things: it can trigger expanded rich results in Google search (more real estate on the results page), and it gives AI systems clean, structured question-and-answer pairs to cite. For a site with 310 posts answering common blogging questions, this is a massive missed opportunity.
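For the curious: FAQ schema is just a block of JSON-LD listing question-and-answer pairs. Here's a minimal sketch of what that layer looks like -- the questions and answers are placeholders, not content from the audited site.

```python
# Minimal sketch: build FAQPage JSON-LD from question/answer pairs.
# The Q&A text below is placeholder content.
import json

faqs = [
    ("How do I start a blog?",
     "Pick a niche, choose a platform, and publish on a consistent schedule."),
    ("How often should I publish?",
     "Consistency matters more than volume; weekly is a sustainable baseline."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in faqs
    ],
}

# Paste the output into a <script type="application/ld+json"> tag,
# or let an SEO plugin's FAQ block generate the same structure for you.
print(json.dumps(faq_schema, indent=2))
```

In practice, most WordPress sites add this through an SEO plugin's FAQ block rather than hand-written JSON-LD, but the underlying structure is the same.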
The homepage also has no semantic headings. Divi Builder modules aren't generating proper H1, H2, or H3 tags -- which means the homepage structure is invisible to crawlers looking for topic hierarchy.
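Checking this takes a few lines: fetch the page and list the H1-H3 tags that actually exist in the markup. A quick sketch, again with a placeholder URL:

```python
# Quick sketch: list the H1-H3 headings a crawler actually sees.
# URL is a placeholder.
from html.parser import HTMLParser
import urllib.request

class HeadingCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current = None
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.current = tag

    def handle_data(self, data):
        if self.current and data.strip():
            self.headings.append((self.current, data.strip()))

    def handle_endtag(self, tag):
        if tag == self.current:
            self.current = None

html = urllib.request.urlopen("https://example-blog.com/").read().decode("utf-8", errors="ignore")
collector = HeadingCollector()
collector.feed(html)

if not collector.headings:
    print("No H1-H3 headings found -- no topic hierarchy for crawlers.")
for level, text in collector.headings:
    print(f"{level}: {text}")
```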
What's Genuinely Strong
This is not a weak site. The content engine is real, and several things are working well:
310 blog posts is a massive content library. Most bloggers never get past 50. This site has more than six times that. This is a real, compounding asset.
20+ years of blogging experience is rare credibility. In a space full of people who started blogging last year and now teach it, two decades of real experience is a genuine differentiator.
Weekly publishing consistency builds trust. Every Wednesday, new content. Search engines and readers both reward consistency, and this site delivers it.
Yoast SEO is properly configured on blog posts. Article schema is firing correctly. H1 and H2 structure on individual posts is solid. Sample posts showed 26 internal links -- that's excellent internal linking.
Multiple successful brands show range. Running several niche content sites proves this is someone who can build and sustain content businesses.
The foundation is strong. The content is real. The expertise is legitimate. The infrastructure gaps -- AI crawler blocking, brand schema mismatches, duplicate tracking, missing FAQ schema -- are all fixable without touching the content itself.
Does This Look Like Your Site?
If you're a blogger, content creator, or online educator with a WordPress site, a solid content library, and years of expertise -- but you've never checked what your robots.txt is blocking, whether your schema matches your current brand, or how AI systems see your site -- these findings probably apply to you too.
The pattern is common: you focused on writing great content and building an audience. The technical layer happened in the background -- plugins installed, settings configured once and forgotten, default rules you never reviewed. And now AI systems are recommending other people's content because they have the technical infrastructure you don't.
That's fixable. Every single issue in this audit can be resolved without rewriting a single blog post. It starts with understanding what's actually there -- and what's blocking it.
Frequently Asked Questions
Why would a blogging educator's website have SEO problems?
Blogging educators often focus on teaching others how to grow traffic while their own site infrastructure goes unchecked. When plugins, themes, or default settings create technical barriers -- like blocking AI crawlers in robots.txt -- the site owner may not realize it because organic search traffic still works fine. The AI discoverability layer is newer and requires different checks.
What happens when robots.txt blocks AI crawlers?
When robots.txt blocks AI crawlers like ClaudeBot, GPTBot, and PerplexityBot, those systems cannot read your content. That means when someone asks ChatGPT, Claude, or Perplexity for blogging advice, your posts full of expert answers will never be cited or recommended. You become invisible to the fastest-growing discovery channel on the internet.
Does duplicate GA4 tracking inflate my analytics numbers?
Yes. Running two GA4 properties on the same site means every pageview, session, and event fires twice -- once to each property. This inflates your traffic numbers and makes it impossible to trust your data for business decisions. The fix is simple: identify which property is the correct one, remove the duplicate tracking code, and verify in real-time reports.
How do I get a free SEO and AI discoverability audit?
Visit lesli.com/ai-visibility-audit and submit your website URL. I'll audit your site the same way -- technical SEO, schema markup, AI discoverability, content structure, and a clear roadmap of what to fix first. Free, no commitment.
What's Hiding on Your Site?
I'll audit your website the same way I audited this one -- technical SEO, schema, AI discoverability, and a clear roadmap. Free, no commitment.
Get Your AI Visibility Audit