
Googlebot Blocked

What This Means

The robots.txt file blocks Googlebot from crawling the site. Google cannot crawl blocked pages and will generally not index their content, which severely harms organic search visibility.

What Triggers This Issue

This issue is triggered when robots.txt contains a User-agent: Googlebot (or User-agent: *) group with a Disallow: / rule, which blocks access to the entire site.
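For example, a robots.txt like the following would trigger this issue (a minimal hypothetical file):

```
User-agent: Googlebot
Disallow: /
```

Because the path `/` matches every URL on the site, this single rule blocks Googlebot from crawling all pages.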

How To Fix

Remove or narrow the Disallow directive for Googlebot unless you intentionally want to prevent Google from crawling your site. Prefer specific Disallow rules that block only the paths you don’t want crawled.
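You can verify the effect of a rule change before deploying it. The sketch below uses Python’s standard-library `urllib.robotparser` to compare a site-wide block against a targeted rule; the paths and URLs are hypothetical examples, not part of any real site.

```python
from urllib.robotparser import RobotFileParser


def allowed(robots_txt: str, agent: str, url: str) -> bool:
    """Parse a robots.txt string and check whether `agent` may fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)


# Site-wide block: the pattern this issue flags.
blocking = "User-agent: Googlebot\nDisallow: /\n"

# Targeted rule: blocks only a specific (hypothetical) path.
targeted = "User-agent: Googlebot\nDisallow: /private/\n"

print(allowed(blocking, "Googlebot", "https://example.com/"))            # False
print(allowed(targeted, "Googlebot", "https://example.com/"))            # True
print(allowed(targeted, "Googlebot", "https://example.com/private/x"))   # False
```

A fresh parser is created per check because `RobotFileParser.parse` accumulates rule groups rather than replacing them, so reusing one instance across different robots.txt contents would give stale results.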

