Nothing puts fear into the heart of a webmaster (see also in my case: amateur writer trying to run a website) like a sudden seemingly inexplicable drop in site traffic resulting from technical issues. Well, if we’re being honest, plenty of things are scarier, ranging from the decline of the west to eye contact with strangers, but complex tech issues are frustrating as heck.
That’s exactly what I ran into yesterday with my comic book blog, Comic Book Herald. After my daily Google Analytics check-up, I noticed Google organic search results were saying some of my pages had “No Information Available,” making my site look like a common porny mcspam bot. Since I couldn’t find many replications of this issue or detailed solutions online, below I’ve shared what I found, what I did, and how I (mostly) fixed the issue.
In short, a number of my Google search results had replaced the traditional meta description (the brief summation of the page content) with the dire Google warning “No Information is available for this page.” This warning extended across mobile and desktop search.
HOW DID I FIND THIS & WHY IS IT A PROBLEM?
The main reason I noticed this issue was a sudden 1,000+ visit decline in my overall site traffic. So to quickly answer why I’d even give much of a hoot: the lack of a proper meta description, and the implication that something was wrong with my search listing, was doing real damage to the number of visitors clicking through to my pages.
I’ll admit, I didn’t foresee the extent of this problem, and had actually noticed the same “No information is available for this page” warning on one particular landing page a week earlier (shout out to Jessica Jones content!).
Since my visibility was not impacted, and the page in question wasn’t a huge traffic driver, I chalked it up to Google being weird and moved on. This proved untenable as the problem spread to a wide array of traffic drivers across my site!
It is worth noting here that unless you intentionally run searches for your own site regularly (hey, we all like to see ourselves succeed), you’re unlikely to actually spot the warning in the search engine results page (SERP). The more likely indicators of an issue are:
- A large visits decline identified in Google Analytics as mentioned
- Page-level CTR declines identified in Google Search Console
APPROACH NUMBER ONE: FOLLOW GOOGLE’S WARNINGS
If you find yourself viewing similar results and click the “Learn Why” extension, Google will take you to their support page for robots.txt.
In short, the page tells you:
You are seeing this result because the page is blocked by a robots.txt file entry, which tells Google not to crawl your page
As I learned researching the issue, Google replaced their previous messaging that “Information from this page is blocked by robots.txt” (I’m paraphrasing) with the new, significantly more muddled message in late 2017.
When I saw the rationale for the error, I was simultaneously relieved and confused. The relief came because I work in SEO and deal with robots.txt files on a regular basis. The confusion came because I was quite certain my robots file did not contain information that should block any of my site content.
Obviously, this was worth checking out, and here’s the approach I took for what I would consider a typical robots.txt analysis:
- Open your site’s file by visiting www.yourdomainname.com/robots.txt
- Investigate all “Disallow” messages. The most common problem is “Disallow: /” which actually tells search engine crawlers to never crawl any pages on your site.
- Use Google Search Console’s Robots.txt checker tool on problem pages to confirm any “Disallow” messaging impact.
- Use Google Search Console’s “Fetch as Google” tool on problem pages to confirm Google crawlers (desktop and mobile) can access site content and render it on their end.
- Ensure your site does not have multiple or cached robots.txt versions – check https vs. http, www vs. non-www, and old robots versions that Google (and other engines) may still be referencing.
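If you want to confirm the “Disallow” logic programmatically rather than eyeballing the file, the steps above can be sketched with Python’s standard library `urllib.robotparser`. The robots.txt bodies and URLs below are hypothetical stand-ins — swap in your own file’s contents:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt bodies -- substitute the real contents of
# www.yourdomainname.com/robots.txt when checking your own site.
blocked = """\
User-agent: *
Disallow: /
"""

healthy = """\
User-agent: *
Disallow: /wp-admin/
"""

def googlebot_can_fetch(robots_body, url):
    """Return True if the robots.txt body permits Googlebot to crawl url."""
    parser = RobotFileParser()
    parser.parse(robots_body.splitlines())
    return parser.can_fetch("Googlebot", url)

page = "https://www.yourdomainname.com/some-landing-page/"
print(googlebot_can_fetch(blocked, page))   # -> False ("Disallow: /" blocks everything)
print(googlebot_can_fetch(healthy, page))   # -> True (only /wp-admin/ is blocked)
```

This mirrors what Google Search Console’s robots.txt checker does, just locally — handy for testing a proposed robots file before you ever upload it.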
APPROACH NUMBER TWO: PANIC, GET ANGRY, CONSIDER SITE CHANGES YOU’VE MADE
Following the above robots.txt analysis, I confirmed, really beyond a shadow of a doubt, that my robots.txt file was fine. There was absolutely nothing in the code telling search engines they couldn’t crawl any of the impacted pages (or any pages on my site at all, for that matter).
From here I had to back up and consider what else might be going on to cause this issue. If it’s not actually my robots file, then what is causing engines to think it’s my robots file? Here’s potentially related information I found:
- My number of indexed AMP mobile pages on CBH was cut in half over the course of 10 days, down 300+ pages.
- My overall indexed pages have been steadily decreasing since mid-Feb, down 300+ pages.
- Resubmitting an XML sitemap resulted in GSC reporting “URL restricted by robots.txt” for sitemap sections
- 10 days ago I updated my “SSL Insecure Content” plugin to force a greater number of assets to register as HTTPS
- 5 additional plugin updates occurred over this 10-day period, including AMPforWP twice.
All this information helped me narrow the possible causes down to three: a rogue robots.txt file (questions I had at this point: Does Genesis, my WordPress theme, create its own robots file? Does the All in One SEO plugin create a robots file? Where do I find these mystery files?), my SSL plugin update causing issues, or something going awry with my AMP content.
FIX ATTEMPT #1: SSL PLUGIN REVERSION (Day 1: 9 to 10 am)
As noted, I had made an update to my “SSL Insecure Content” plugin over this time frame in an effort to register the green check mark in Chrome for my https content. Prior to updating the plugin to the (admittedly not recommended) more extensive “content” setting, I was still getting the curious warning on all Comic Book Herald pages that “Your connection to this site is not secure.”
Since this was a simple fix, I started here and reverted the plugin settings to the recommended “simple” SSL.
I chased this update with the following moves:
- Fetching and rendering known “error” pages in GSC — Requested Google crawl and index linked pages (desktop and mobile).
- Resubmitted XML sitemap in GSC.
- Requested indexing of robots.txt to force Google to reevaluate the document (in theory).
- 11 a.m. Some site content serving the “insecure” Chrome warning, as anticipated
- 1:45 p.m. Two pages (Flash and Batman!) populate correct meta descriptions in desktop search. Both pages also have returned to serving AMP content, whereas they had been regular mobile results in the a.m.!
- 1:50 p.m. Batman already back to “no information” on Desktop.
- 4:00 p.m. First error page (Jessica Jones!) still resulting in “no info” warning. Same goes for 3 of 5 test pages.
FIX ATTEMPT #2: CREATE A NEW ROBOTS FILE (Day 1: 4:15 pm)
One problem continued to trouble me throughout the day: Where was my robots file actually located? While I could access my robots at www.comicbookherald.com/robots.txt, I couldn’t actually see the document in my file manager.
After some research (namely this robots info article), I uncovered that WordPress creates a “virtual” robots.txt file that you can’t actually edit manually. The same article also confirmed that I could easily create an actual file to edit using the All In One SEO plugin (the same applies for Yoast SEO).
I activated the robots feature, carried over the existing robots messaging verbatim, then saved the new file (which I could view in my root folder).
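I won’t reproduce my exact file here, but the carried-over contents looked a lot like the standard WordPress virtual robots.txt. The sitemap line is my illustrative addition, not a quote from my actual file:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://www.comicbookherald.com/sitemap.xml
```

The key difference after this step is that the file physically exists in the site root, so there’s no ambiguity about what crawlers receive when they request it.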
I chased this measure by:
- Requesting Google crawl and index the robots file (even though that doesn’t quite make sense, I wanted to force the issue)
- Requesting all test pages be reindexed via GSC “Fetch and Render”
- 8 p.m. (Day 1) — Still “no info” descriptions on primary test pages, with various fluctuations.
- 10 a.m. (Day 2) — All results showing meta descriptions, AMP listings populating for test pages!
And there you have it. Success!
At the end of the day, I don’t know exactly why Google search crawlers started seeing “blocks” in a robots file when there weren’t any. My best summation based on the test is that my overreaching SSL plugin was making it impossible for crawlers to access the file, so they assumed an issue with the robots content. Updating the plugin and then submitting an updated robots file fixed the problem relatively quickly.
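One related check worth automating is the “multiple or cached robots.txt versions” point from earlier: crawlers may be fetching a different scheme/host variant of the file than the one you see in your browser. Here’s a minimal Python sketch; the response bodies below are hypothetical stand-ins for actual HTTP fetches of each variant:

```python
def robots_variants(domain):
    """All four scheme/www combinations a crawler might request."""
    return [f"{scheme}://{host}/robots.txt"
            for scheme in ("http", "https")
            for host in (domain, f"www.{domain}")]

def find_mismatches(bodies):
    """Given {url: robots.txt body}, return URLs whose body differs
    from the first variant's body."""
    reference = next(iter(bodies.values()))
    return [url for url, body in bodies.items() if body != reference]

urls = robots_variants("comicbookherald.com")

# Hypothetical responses: imagine the https variants serve a stale
# "Disallow: /" copy while the http variants serve the healthy file.
bodies = {u: ("User-agent: *\nDisallow: /" if u.startswith("https")
              else "User-agent: *\nDisallow: /wp-admin/")
          for u in urls}

print(find_mismatches(bodies))  # the two https variants differ
```

In a real check you’d populate `bodies` by actually requesting each URL (e.g. with `urllib.request.urlopen`) and also note any variant that errors out entirely, since an unreachable robots.txt is exactly the kind of thing that could make a crawler assume a block.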
It was a frustrating experience given the loss in traffic for about a day and a half, but I’m very satisfied with the correction (for now!).
BONUS RESULTS #3: THE AFTER PARTY
Turns out I spoke too soon regarding “success.” Some additional site searches and GA digging revealed at least seven more impacted pages inside my top 50 landing pages. Fun!
I followed the same approach outlined above fetching and requesting indexation for 3 of the 7 on the evening of 3.12.
- Day 5 (8 a.m.) — All 7 newly identified pages are showing proper meta descriptions and mobile AMP listings, following the request to reindex three of them in GSC.
The following articles or forums all gave me some ideas and interesting insights to consider as I investigated the issue:
On plugin conflicts: https://productforums.google.com/forum/#!topic/webmasters/VF7fZqkMg5k
Google Support on “No page information in search results”: https://support.google.com/webmasters/answer/7489871?hl=en
On WordPress virtual robots files: https://kinsta.com/blog/wordpress-robots-txt/
Forum about Google Search Console updating info following changes to robots: https://productforums.google.com/forum/#!topic/webmasters/aKxkGmKMCMo