Blocked due to access forbidden (403)

Updated April 2026 · By SitemapFixer Team

The "Blocked due to access forbidden (403)" status in Google Search Console means your server responded to Googlebot with HTTP 403 Forbidden - an outright refusal without asking for credentials. Most of the time this is a firewall, WAF, or CDN rule that fingerprinted Googlebot as unwanted traffic. Unlike 401, there is no auth flow to complete - you have to explicitly allow Googlebot to access the URL.

Detect 403 blocks on sitemap URLs
We crawl your sitemap URLs with a Googlebot user agent and flag any that return 403
Analyze My Sitemap

What this GSC status means

Googlebot made an HTTP request and the server (or, more often, a CDN or WAF in front of it) returned HTTP 403 Forbidden. That response tells Google the server understood the request but refuses to fulfill it - no authentication path will unlock it. Google cannot index the URL because it has no content to evaluate, and after repeated 403s the URL is dropped from the index. The underlying cause is almost always a security layer that does not recognize Googlebot as an allowed client.

Common causes

- A WAF, CDN, or bot-protection feature (Cloudflare Bot Fight Mode, AWS WAF rules, and similar) fingerprinting Googlebot as unwanted bot traffic
- Deny from entries in Apache .htaccess, or deny directives in nginx server/location blocks
- File or directory permissions the web server user cannot read
- Country or IP-range blocks that overlap with Google's crawler regions

How it affects indexing

URLs returning 403 do not get indexed and disappear from search results over time. If your entire site returns 403 to Googlebot, you eventually lose all organic traffic - catastrophic for e-commerce and content sites, where a common scenario is a new WAF deployment blocking Googlebot site-wide overnight. Even partial 403s (on specific URLs or sections) create dead zones in your index coverage.

How to diagnose

In GSC, run URL Inspection on an affected URL to confirm the 403 response. Then request the URL with a Googlebot user agent (curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" -I URL): if you get a 403 with that UA but a 200 with a normal browser UA, a WAF is fingerprinting the user agent. Check Cloudflare Security Events, AWS WAF logs, or your security provider's dashboard for blocked requests over the last 7 days, and look for rejections of IPs in Google's published crawler ranges.
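The two-user-agent comparison above can be scripted. A minimal sketch using only the standard library - the function names and the UA-vs-browser decision rule are this article's heuristic, not any official tooling:

```python
import urllib.error
import urllib.request

GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def fetch_status(url, user_agent):
    """Send a HEAD request with the given User-Agent and return the HTTP status."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent}, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # 4xx/5xx arrive as HTTPError; the code is what we want

def diagnose(googlebot_status, browser_status):
    """Classify the status pair per the decision rule described above."""
    if googlebot_status != 403:
        return "no-403"              # this URL is not the problem
    if browser_status == 200:
        return "ua-fingerprinting"   # a WAF is singling out Googlebot's UA
    return "blanket-block"           # deny rule or permissions hit everyone

# Usage sketch (network required):
#   gb = fetch_status("https://example.com/", GOOGLEBOT_UA)
#   br = fetch_status("https://example.com/", BROWSER_UA)
#   print(diagnose(gb, br))
```

Note that a "no-403" result from your own machine does not fully clear the URL: some WAF rules key on Googlebot's source IP ranges rather than the UA string, so also check the security provider's event logs as described above.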

How to fix

1. Cloudflare: under Security > Bots, disable Bot Fight Mode for verified bots, or add a WAF exception for cf.client.bot.
2. AWS WAF: add a rule-group exception allowlisting Googlebot IP ranges from Google's published JSON.
3. Any WAF: create an allow rule matching the Googlebot user agent plus verified source IPs (never UA-only, since anyone can spoof the UA string).
4. Apache: check .htaccess for Deny from entries - remove them, or narrow them to known-bad IPs only.
5. nginx: check for deny directives in server and location blocks.
6. File-permission 403s: chmod 644 for files, 755 for directories, owned by the correct web server user.
7. Verify real Googlebot with reverse DNS: the IP should resolve to *.googlebot.com or *.google.com, and a forward lookup of that hostname must resolve back to the same IP.
8. Remove country or IP blocks that overlap with Google's crawler regions.
9. After fixing, run URL Inspection > Test Live URL to confirm the URL now returns 200, then Request Indexing.
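For steps 4 and 5 above, a minimal nginx sketch of what to look for and how to narrow it - the location path and IP range are illustrative only:

```nginx
# Before: a blanket deny in a location block returns 403 to everyone,
# including Googlebot.
location / {
    deny all;             # <- this is the line producing the 403s
}

# After: deny only addresses you actually need to block; everything
# else (including Googlebot's published ranges) is allowed.
location / {
    deny 203.0.113.0/24;  # illustrative bad range (TEST-NET-3)
    allow all;
}
```

The Apache .htaccess equivalent is the same idea: replace a broad "Deny from all" (or "Require all denied" in 2.4 syntax) with entries scoped to the specific offending IPs.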

Frequently Asked Questions

Why is my WAF blocking Googlebot?
WAFs often fingerprint bot-like traffic and block anything that does not look like a real browser. Googlebot sends a specific user agent and originates from documented IP ranges - your WAF rules need an explicit allowlist for those ranges (verified via reverse DNS lookup). Aggressive rate limits and "bot fight mode" features are the most common culprits.
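In Cloudflare's rule language, for example, verified crawlers are exposed via the cf.client.bot field, so a custom rule can skip bot-mitigation for them. A sketch of such a rule - check your plan's dashboard for the exact action names available:

```
# Cloudflare WAF custom rule (sketch)
Expression:  (cf.client.bot)
Action:      Skip  - skip remaining bot-mitigation / WAF rules
```

This is safer than matching on the User-Agent header, because cf.client.bot is set only for bots Cloudflare has already verified by IP, whereas a UA string can be spoofed by anyone.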
How do I verify a request is actually Googlebot?
Do a reverse DNS lookup on the IP (host 66.249.66.1) - the hostname should end in .googlebot.com or .google.com. Then do a forward DNS lookup on that hostname and confirm it resolves back to the same IP. Google publishes current IP ranges in a JSON file you can use to automate allowlisting.
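The reverse-then-forward lookup pair described above can be sketched in Python with the standard library - the function names are illustrative:

```python
import socket

def has_google_suffix(hostname):
    """The rDNS hostname must end in .googlebot.com or .google.com."""
    return hostname.endswith((".googlebot.com", ".google.com"))

def is_verified_googlebot(ip):
    """Reverse-resolve the IP, check the Google suffix, then confirm the
    forward lookup of that hostname maps back to the same IP.
    Requires network access."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False          # no PTR record -> not verifiable
    if not has_google_suffix(hostname):
        return False          # wrong domain -> an impostor
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False
    return ip in forward_ips  # forward lookup must round-trip

# Usage sketch (network required):
#   is_verified_googlebot("66.249.66.1")
```

The suffix check alone is not enough - an attacker controlling their own PTR records could claim any hostname, which is exactly why the forward-confirmation step exists.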
What is the difference between 403 and 401 in GSC?
401 means authentication is required - the server wants credentials. 403 means the server understood the request and is refusing it outright, no auth challenge. 403 is typically a firewall, WAF, or permission rule blocking the IP or user agent. Googlebot cannot bypass either, but the fix paths differ.