Blocked due to access forbidden (403)
The "Blocked due to access forbidden (403)" status in Google Search Console means your server responded to Googlebot with HTTP 403 Forbidden - an outright refusal without asking for credentials. Most of the time this is a firewall, WAF, or CDN rule that fingerprinted Googlebot as unwanted traffic. Unlike 401, there is no auth flow to complete - you have to explicitly allow Googlebot to access the URL.
What this GSC status means
Googlebot made an HTTP request and the server (or, more often, a CDN or WAF in front of it) returned HTTP 403 Forbidden. That status tells Google the request was understood but is explicitly refused - no authentication path will unlock it. Google cannot index the URL because it receives no content to evaluate, and after repeated 403s the URL is removed from the index. The underlying cause is almost always a security layer that does not recognize Googlebot as an allowed client.
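You can see the refusal directly from the command line. A minimal sketch with curl, where the URL is a placeholder for an affected page:

```bash
# -I sends a HEAD request and prints only the response headers.
curl -I https://example.com/blocked-page
# HTTP/2 403
# ...
# A crawlable page would answer with HTTP/2 200 (or a redirect) instead.
```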
Common causes
- Cloudflare Bot Fight Mode, Super Bot Fight Mode, or "I'm Under Attack" challenging or blocking Googlebot.
- WAF rules (AWS WAF, Akamai, Imperva) blocking the Googlebot user agent or IP range as suspicious.
- Rate limiting triggering 403 responses when Googlebot crawls quickly on a small site.
- Country-based IP blocks that accidentally include Google's crawler IPs (mostly US-based, though Google also crawls from other regions).
- File or directory permission issues (chmod) on static files and assets.
- Apache or nginx deny rules (Deny from in .htaccess, deny directives in nginx) blocking specific IP ranges - a quick way to search for these follows after this list.
- Hotlink protection, referer checks, or cookie requirements blocking anonymous crawler fetches.
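To hunt for the server-level deny rules above, a grep over the usual config locations is a reasonable start; the paths are common defaults, not guarantees for every setup:

```bash
# Apache: deny rules in .htaccess files and vhost configs
# ("Deny from" is Apache 2.2 syntax, "Require not ip" is 2.4).
grep -RinE "deny from|require not ip" /var/www/ /etc/apache2/ \
  --include=".htaccess" --include="*.conf"

# nginx: deny directives anywhere in the config tree.
grep -RinE "^\s*deny\b" /etc/nginx/
```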
How it affects indexing
URLs returning 403 do not get indexed and disappear from search results over time. If your entire site returns 403 to Googlebot, you eventually lose all organic traffic. This is catastrophic for e-commerce and content sites - a common scenario is a new WAF deployment that starts blocking Googlebot overnight, after which URLs drop out of the index over the following days and weeks. Even partial 403s (on specific URLs or sections) create dead zones in your index coverage.
How to diagnose
In GSC, run URL Inspection on an affected URL to confirm the 403 response. Test the URL with a Googlebot user agent (curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" -I URL) - if you get a 403 with that user agent but a 200 with a normal browser user agent, a WAF is fingerprinting the user-agent string; a small script for this comparison follows below. Check Cloudflare's Security Events, AWS WAF logs, or your security provider's dashboard for blocked requests over the last 7 days, and look for rejected requests from Googlebot IP ranges (Google publishes its crawler IPs as a JSON list).
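Here is a small sketch automating that comparison; the URL and the browser user-agent string are placeholders to replace with your own:

```bash
#!/usr/bin/env bash
# Compare the HTTP status Googlebot's user agent receives with a browser's.
URL="https://example.com/affected-page"   # placeholder: use the URL from GSC
GOOGLEBOT_UA="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"

for ua in "$GOOGLEBOT_UA" "$BROWSER_UA"; do
  # -o /dev/null discards the body; -w prints only the status code.
  status=$(curl -s -o /dev/null -w "%{http_code}" -A "$ua" "$URL")
  echo "$status  <- $ua"
done
# A 403 for the Googlebot UA alongside a 200 for the browser UA
# points to user-agent fingerprinting by a WAF or CDN rule.
```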
How to fix
1. In Cloudflare: Security > Bots > disable Bot Fight Mode for verified bots, or add a WAF exception for cf.client.bot.
2. In AWS WAF: add a rule group exception allowing Googlebot IP ranges from Google's published JSON list.
3. For any WAF: create an allow rule for User-Agent matching Googlebot plus verified source IPs (avoid UA-only rules, since anyone can spoof the string).
4. Check Apache .htaccess for Deny from entries - remove them or narrow them to only truly bad IPs.
5. Check nginx for deny directives in server and location blocks.
6. For file permission 403s: chmod 644 for files and 755 for directories, owned by the correct web server user (see the sketch below).
7. Verify Googlebot with reverse DNS: the IP should resolve to *.googlebot.com or *.google.com, and the forward lookup on that hostname must resolve back to the same IP (see the sketch below).
8. Remove country or IP blocks that overlap with Google's crawler regions.
9. After fixing, run URL Inspection > Test Live URL to confirm the page now returns 200, then Request Indexing.
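For step 6, the conventional permission reset looks like this; the web root path and the www-data user are assumptions to adapt to your server:

```bash
# Reset ownership and permissions under the web root.
sudo chown -R www-data:www-data /var/www/html
sudo find /var/www/html -type d -exec chmod 755 {} +   # directories: rwxr-xr-x
sudo find /var/www/html -type f -exec chmod 644 {} +   # files:       rw-r--r--
```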
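For step 7, a sketch of forward-confirmed reverse DNS. The IP below sits in a published Googlebot range, but in practice you would take an address from your own access or WAF logs:

```bash
IP="66.249.66.1"   # example value - substitute an address you actually saw

# Reverse lookup: genuine Googlebot IPs resolve to googlebot.com or google.com.
PTR=$(host "$IP" | awk '/pointer/ {print $NF}')
echo "rDNS: $PTR"

# Forward lookup: the hostname must resolve back to the same IP.
host "$PTR"
# If both directions match and the name ends in .googlebot.com or .google.com,
# the request really came from Google and must not be blocked.
```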