All Crawlers Blocked
What This Means
The robots.txt file blocks all crawlers from accessing the site by combining User-agent: * with Disallow: /. This prevents every search engine from crawling your pages, so your content cannot be indexed properly.
What Triggers This Issue
This issue is triggered when robots.txt contains a User-agent: * group followed by Disallow: /, with no more specific User-agent groups or Allow rules that would permit important crawlers to access the site.
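For example, a robots.txt file like the following triggers this issue, because the wildcard group applies to every crawler and blocks the entire site:

```text
# Applies to all crawlers; Disallow: / blocks every path on the site
User-agent: *
Disallow: /
```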
How To Fix
Remove the blanket Disallow: / rule unless you intentionally want to prevent all crawling. Instead, use specific Disallow rules for paths you want to block, while allowing access to your main content.
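A corrected robots.txt might look like the sketch below. The /admin/ and /tmp/ paths are placeholders; substitute the paths you actually want to keep out of search results:

```text
# Applies to all crawlers
User-agent: *
# Block only specific paths (example paths shown; replace with your own)
Disallow: /admin/
Disallow: /tmp/
# Everything else remains crawlable by default
```

Note that an empty Disallow line (or omitting Disallow entirely for a User-agent group) allows full access, so you only need rules for the paths you want excluded.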