I Audited My Own Dog Registry. Here's What I Found.
By Lesli Rose · April 5, 2026 · 8 min read
I tell clients their sites have SEO gaps. I write audit reports that show exactly where structured data is missing, where AI crawlers are being ignored, and where schema doesn't match what's actually on the page. So I ran the same audit on my own site -- the American Bulldog Registry and Archives at abra1st.com.
ABRA is a real registry. 29,000+ dogs in the database. 5,400+ registered litters. 28 years of breed history. My husband Lance and I have co-owned it since 2005. It's not a side project or a demo -- it's a working registry that breeders depend on for dog registration, litter paperwork, and ownership transfers.
I built the site myself. Custom PHP, no framework, no WordPress. I control every line of code. And I still found five things that needed fixing. Some were small. One was embarrassing.
That's the point. If someone who audits websites for a living can miss things on her own property, it can happen to anyone. Here's everything I found.
What Was Already Working
Unique title tags and meta descriptions on every page. Because every page feeds its own values into the shared header.php template, each one gets its own title tag, meta description, OG tags, canonical URL, and Twitter card markup. No duplicates, no missing tags.
GA4 on every page. Google Analytics 4 tracking fires on every page through the same header template. No pages slipping through without analytics.
Organization schema present. The site had basic Organization schema with the registry name, URL, and logo. Not complete -- but the foundation was there.
Canonical tags on every page. Every page explicitly declares its canonical URL, preventing duplicate content issues across the site.
The template approach works. When you build SEO fundamentals into the shared header, every new page inherits them automatically. No plugin needed, no per-page configuration. This is one advantage of custom PHP over WordPress -- you control the template, so nothing gets missed.
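Here's a minimal sketch of the pattern -- not ABRA's actual code, and the variable names are just illustrative. Each page sets its own values, then pulls in the shared header, so the SEO fundamentals ride along automatically:

```php
<?php
// Illustrative sketch, not ABRA's actual code.
// Each page sets its own values before pulling in the shared header, e.g.:
//
//   $page_title       = 'Register Your American Bulldog | ABRA';
//   $meta_description = 'Register your American Bulldog with ABRA for $20.';
//   $canonical_url    = 'https://abra1st.com/register-dog.php';
//   require __DIR__ . '/header.php';

// header.php -- every page inherits the same SEO plumbing
?>
<!DOCTYPE html>
<html lang="en">
<head>
  <title><?= htmlspecialchars($page_title) ?></title>
  <meta name="description" content="<?= htmlspecialchars($meta_description) ?>">
  <link rel="canonical" href="<?= htmlspecialchars($canonical_url) ?>">
  <meta property="og:title" content="<?= htmlspecialchars($page_title) ?>">
  <meta property="og:description" content="<?= htmlspecialchars($meta_description) ?>">
  <meta property="og:url" content="<?= htmlspecialchars($canonical_url) ?>">
  <meta name="twitter:card" content="summary">
  <!-- GA4 tag lives here too, so tracking follows every new page automatically -->
</head>
```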
Finding #1: Schema Price Mismatch
This was the embarrassing one.
The register-dog.php page clearly shows "$20" as the registration fee. The page content says $20. The pricing table says $20. Every breeder who visits the page sees $20. But the Service schema markup on that same page said $50.
Two different prices. One for humans, one for machines.
Why this matters. When Google or an AI system reads the schema, it sees $50. When a breeder reads the page, they see $20. That mismatch erodes trust with search engines, and markup that contradicts the visible page can cost you rich result eligibility under Google's structured data guidelines. If someone asks ChatGPT "how much does ABRA dog registration cost?" -- the AI might answer $50 because that's what the structured data says.
The fix. Updated the Service schema price to match the actual $20 fee. A two-minute fix that I'd walked past for weeks.
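For context, here's roughly what the corrected markup looks like. This is a simplified Service schema sketch, not a copy of the live page's JSON-LD -- the point is simply that the price in the structured data now matches the price on the page:

```json
{
  "@context": "https://schema.org",
  "@type": "Service",
  "name": "American Bulldog Registration",
  "url": "https://abra1st.com/register-dog.php",
  "provider": {
    "@type": "Organization",
    "name": "American Bulldog Registry and Archives",
    "url": "https://abra1st.com/"
  },
  "offers": {
    "@type": "Offer",
    "price": "20.00",
    "priceCurrency": "USD"
  }
}
```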
Finding #2: No AI Crawler Directives in robots.txt
The robots.txt file was bare bones. Just a basic "Allow: /" with a sitemap reference. No mention of GPTBot, ClaudeBot, PerplexityBot, or any of the AI crawlers that are now a significant source of discovery and recommendation traffic.
In 2024, a basic robots.txt was fine. In 2026, AI crawlers are how a growing number of people find services and information. If you don't explicitly address them in your robots.txt, you're leaving AI visibility to chance.
What I added. Explicit Allow directives for GPTBot, ClaudeBot, PerplexityBot, Google-Extended, and other major AI crawlers. Blocked Bytespider (ByteDance's aggressive crawler that provides no value back to publishers).
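The additions look roughly like this. It's a trimmed example, not the full file, and the sitemap line is shown at its conventional location:

```
# AI crawlers: explicitly allowed
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# Aggressive scraper with no referral value
User-agent: Bytespider
Disallow: /

Sitemap: https://abra1st.com/sitemap.xml
```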
Why it matters for a registry. Breeders increasingly ask AI systems "what registries accept American Bulldogs?" or "how do I register my American Bulldog?" If AI crawlers can't confidently access the site, ABRA won't be part of those answers.
Finding #3: No llms.txt File
The site had no llms.txt file at all. This is a plain-text file at the root of your domain that gives large language models a structured summary of who you are, what you do, and what's important. Think of it as a resume for AI systems.
For a niche registry like ABRA, this is especially important. Most AI systems have limited training data about breed-specific registries. Without an llms.txt file, an AI has to guess what ABRA is based on whatever fragments it can scrape from the HTML. With one, it gets a clean, authoritative summary.
What I created. A comprehensive llms.txt with ABRA's full identity, all services and pricing, the Blue Ribbon verified breeder program, key statistics (29,000+ dogs, 5,400+ litters), and founding history.
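If you've never seen one, a pared-down version looks something like this -- the real file is longer, and the wording below is illustrative rather than quoted from it:

```
# American Bulldog Registry and Archives (ABRA)

> Breed-specific registry for American Bulldogs at abra1st.com, co-owned by
> Lance and Lesli Rose since 2005. 29,000+ dogs and 5,400+ registered litters
> across 28 years of breed history.

## Services
- Dog registration: $20 per dog
- Litter registration and ownership transfers
- Blue Ribbon verified breeder program

## Key pages
- https://abra1st.com/register-dog.php: register an American Bulldog with ABRA
```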
Time to implement. About 30 minutes to write, review, and deploy.
Finding #4: Sitemap Missing 7 Pages
The XML sitemap had 12 URLs. The site has 19+ public-facing pages. That means seven pages were invisible to search engine crawlers that rely on sitemaps for discovery.
The missing pages included the breed standard, studs directory, breeders directory, American Bulldog breeders page, puppies for sale page, verified breeder page, and register kennel page. These aren't thin pages -- they're core functionality that breeders and dog owners actively search for.
How this happens. You build a page, link it in the navigation, test it, and move on. The sitemap doesn't update itself. Unless you have an automated sitemap generator or a manual checklist, new pages quietly get left out.
The fix. Expanded the sitemap from 12 to 19 URLs. Every public-facing page is now included with accurate lastmod dates.
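Mechanically, the fix is just more url entries. Something like this, with the paths and dates shown here being illustrative rather than copied from the live file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per public-facing page; paths and dates are illustrative -->
  <url>
    <loc>https://abra1st.com/breed-standard.php</loc>
    <lastmod>2026-04-05</lastmod>
  </url>
  <url>
    <loc>https://abra1st.com/studs.php</loc>
    <lastmod>2026-04-05</lastmod>
  </url>
  <!-- ...and so on for every public page -->
</urlset>
```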
Finding #5: Organization Schema Was Too Basic
The existing Organization schema had three properties: name, URL, and logo. That's technically valid, but it tells search engines almost nothing about who runs ABRA, what credentials they have, or what area the registry serves.
For a niche registry, credibility signals matter. ABRA isn't AKC. When someone searches for American Bulldog registries, the structured data needs to communicate why ABRA is legitimate and who stands behind it.
What I added. Founder Person schema for both Lance Rose and Lesli Rose with credentials -- ABNA representatives, NKC judges, 20+ years in the breed. Added sameAs links to social profiles, areaServed for North America, and knowsAbout properties covering American Bulldog registration, pedigree databases, and breed preservation.
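In JSON-LD, the expanded shape looks roughly like this. It's abbreviated and illustrative -- the logo and sameAs URLs below are placeholders, and the live markup carries more detail:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "American Bulldog Registry and Archives",
  "url": "https://abra1st.com/",
  "logo": "https://abra1st.com/images/logo.png",
  "areaServed": "North America",
  "knowsAbout": [
    "American Bulldog registration",
    "Pedigree databases",
    "Breed preservation"
  ],
  "founder": [
    {
      "@type": "Person",
      "name": "Lance Rose",
      "description": "ABNA representative and NKC judge, 20+ years in the breed"
    },
    {
      "@type": "Person",
      "name": "Lesli Rose",
      "description": "ABNA representative and NKC judge, 20+ years in the breed"
    }
  ],
  "sameAs": ["https://example.com/your-social-profile"]
}
```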
Why founders matter in schema. AI systems evaluate authority partly through the people behind an organization. When GPT or Perplexity can see that the founders are credentialed judges and longtime breed representatives, it has stronger signals to recommend ABRA as a credible registry.
The Meta Lesson: Why Self-Audits Are Harder
When I audit a client's site, I see it fresh. I don't know why they made each decision. I just see what's there and what's missing. There's no emotional attachment to the work.
Auditing my own registry was different. I built every page. I wrote every line of schema. When I found the $50 price mismatch, my first instinct wasn't "good catch" -- it was "how did I miss that?" When I saw the sitemap only had 12 URLs, I remembered adding those 12 and thinking I was done.
Familiarity creates blind spots. You remember building the robots.txt and think it's handled. You remember adding Organization schema and assume it's complete. You remember deploying the sitemap and believe it's current. The work feels done because you remember doing it -- not because you verified the result.
That's the honest takeaway. Even someone who audits websites for a living misses things on their own properties. That's why external audits matter -- not because you're bad at SEO, but because objectivity is a tool that requires distance.
What Changed After the Fixes
All five issues were fixed in a single afternoon. The schema price now matches the page. AI crawlers have explicit directives. The llms.txt file gives language models a clean summary of ABRA. The sitemap includes every public page. And the Organization schema tells search engines exactly who runs this registry and why they're qualified.
None of these fixes required a redesign, a new tech stack, or a large budget. They required someone looking at the site the way a search engine sees it -- not the way the person who built it remembers it.
Frequently Asked Questions
Should I audit my own website?
Yes, absolutely. But know going in that you'll miss things. When you built the site, you made decisions that felt right at the time. Auditing your own work means questioning those decisions with fresh eyes -- and that's harder than auditing a stranger's site. Do it anyway. You'll find things that surprise you.
How often should I run an SEO audit?
At minimum, every six months. But if you're actively building pages, adding content, or changing your site structure, quarterly is better. AI crawlers and search engine guidelines evolve fast. What was fine six months ago might be a gap today. A regular audit catches drift before it compounds.
What's the most common issue you find on your own sites?
Schema mismatches and missing AI crawler directives. When you update pricing or add new pages, it's easy to forget the structured data layer. The page looks right to humans, but the schema still says the old price or the sitemap still has the old page count. These are invisible problems -- you won't notice them unless you specifically look.
Is it worth hiring someone to audit a site you built yourself?
Yes. Not because you're bad at SEO -- because you're too close to the work. An external auditor doesn't have the context of why you made each decision. They just see what's there and what's missing. That objectivity catches things your own familiarity blinds you to. I found five issues on my own registry site that I'd walked past for months.
What Would an Audit Find on Your Site?
I'll run the same audit on your website -- technical SEO, schema, AI discoverability, and a clear list of what to fix. Free, no commitment.
Get Your AI Visibility Audit