Blocked by robots.txt in Google Search Console: Fix Guide

Updated April 2026 · By SitemapFixer Team

The "Blocked by robots.txt" status in Google Search Console means Googlebot wanted to crawl the URL, checked your robots.txt file first, and found a Disallow rule matching the URL path, so it stopped before making the request. Without crawling the content, Google cannot index the page with any real information. This is a common intentional block for admin pages - but it is also one of the most common accidental traffic killers.

Check your robots.txt against your sitemap
We flag URLs in your sitemap that are blocked by robots.txt - free scan
Analyze My Sitemap

What this GSC status means

Googlebot followed its normal process: fetch /robots.txt, check the rules for the User-agent: Googlebot section (or * catch-all), and compare against the target URL. Because a Disallow directive matches the URL path, Google respects it and does not crawl the page. The URL is kept out of the index because Google has no content to evaluate. Note this is different from noindex - with robots.txt blocking, Google never sees the page content at all.
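
You can reproduce this check locally with Python's standard-library robots.txt parser - a rough approximation of Googlebot's matching (Google's actual parser supports a few extensions), run here against an inline sample file rather than a live site:

```python
from urllib import robotparser

# Inline sample robots.txt. A crawler picks the most specific matching
# User-agent group: Googlebot uses its own group and ignores the * group.
SAMPLE = """\
User-agent: *
Disallow: /admin/

User-agent: Googlebot
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(SAMPLE.splitlines())

# Googlebot's own group does not mention /admin/, so it may crawl it...
print(rp.can_fetch("Googlebot", "https://example.com/admin/page"))    # True
# ...but /private/ matches its group's Disallow rule, so it is blocked.
print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
# A generic crawler falls back to the * group and is blocked on /admin/.
print(rp.can_fetch("otherbot", "https://example.com/admin/page"))     # False
```

To test your real file, swap the inline sample for the contents of your live /robots.txt.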

Common causes

The usual culprits: a staging or development robots.txt with Disallow: / deployed to production; WordPress's "Discourage search engines from indexing this site" setting left enabled after launch; overly broad Disallow rules (blocking /blog/ when only /blog/admin/ needed blocking); and a specific User-agent: Googlebot group that silently overrides a more permissive User-agent: * group.

How it affects indexing

Blocked URLs are not crawled, which means Google has no content to index and cannot rank the page for any query. However, Google may still index the URL itself if it finds external links pointing to it - showing just the URL in search results with no snippet. For pages you want out entirely, robots.txt is actually the wrong tool (use noindex instead). For pages you want to rank, any robots.txt block is a direct traffic killer.

How to diagnose

Open /robots.txt directly (example.com/robots.txt) and read every Disallow rule. In GSC, go to Settings > robots.txt to see the file Google last fetched and parsed. Run URL Inspection on an affected URL - it reports the specific robots.txt rule blocking it. Check the User-agent directives: rules under User-agent: * apply to Googlebot only when no User-agent: Googlebot group exists - if one does, Googlebot uses that group and ignores the catch-all entirely. Watch for Disallow: / and Disallow: /* - both block every URL on the site.
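
The user-agent override is the easiest rule to misread. In a file like this (a contrived example), the catch-all group looks harmless, but Googlebot ignores it and uses its own group - so Googlebot is blocked from the entire site:

```
User-agent: *
Disallow: /admin/

User-agent: Googlebot
Disallow: /
```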

How to fix

1. View your robots.txt at https://example.com/robots.txt and audit every Disallow line.
2. Check for Disallow: / entries under User-agent: * or User-agent: Googlebot - remove them if unintended (a common staging leak).
3. In WordPress: Settings > Reading > uncheck "Discourage search engines from indexing this site". Older WordPress versions enforce this setting through robots.txt; newer versions output a noindex meta tag instead - either way it suppresses indexing.
4. Narrow overly broad rules: change Disallow: /blog/ to Disallow: /blog/admin/ if only the admin section should be blocked.
5. Add Allow: rules for specific URLs inside a blocked directory if needed.
6. If you want the page hidden from search, replace the robots.txt block with a <meta name="robots" content="noindex"> tag instead - Google needs to crawl the page to see noindex.
7. Save changes, upload the new robots.txt, and in GSC go to Settings > robots.txt > Request a recrawl.
8. Run URL Inspection to confirm the URL is no longer blocked.
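
Steps 4 and 5 in practice (the paths here are illustrative, not from any real site):

```
# Before: blocks every URL under /blog/, including posts that should rank
User-agent: *
Disallow: /blog/

# After: blocks only the admin subsection, and re-opens one URL inside it
User-agent: *
Disallow: /blog/admin/
Allow: /blog/admin/public-help
```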

Frequently Asked Questions

Can a page be indexed even if it is blocked by robots.txt?
Yes - weirdly. If Google finds links to the URL elsewhere, it can index the URL without ever crawling the content. You will see a search result with just the URL and no description. The separate status "Indexed, though blocked by robots.txt" covers that case. If you truly want the page out of search results, use noindex, not robots.txt disallow.
Should I use robots.txt or noindex?
Use noindex when you want a page fully excluded from search results - Google needs to crawl the page to see the noindex tag, so the URL must not be blocked by robots.txt. Use robots.txt disallow only to prevent crawling of large, resource-heavy sections (admin panels, internal search, filter parameters) where you do not care if URLs appear URL-only.
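
For the noindex route, the tag goes in the page markup - and the URL must not be disallowed in robots.txt, or Google will never see it:

```
<!-- In the HTML <head> of the page you want excluded -->
<meta name="robots" content="noindex">
```

For non-HTML resources such as PDFs, Google also honors the equivalent HTTP response header:

```
X-Robots-Tag: noindex
```
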
How do I test my robots.txt rules?
In Google Search Console, open the robots.txt report under Settings. You can see the parsed version Google uses and test specific URLs. You can also use the URL Inspection tool - if a page is blocked, it will tell you and show which robots.txt rule triggered the block.
Find sitemap URLs blocked by robots.txt
Free scan - we test every sitemap URL against your live robots.txt rules
Analyze My Sitemap Free
Related GSC indexing statuses
All GSC indexing errors