Google Search Console Indexing Errors: Complete Fix Guide

The Page indexing report in Google Search Console groups every non-indexed URL under a specific reason. Each reason means something very different — from a harmless canonical match to a server error that is actively costing you traffic. Browse every status below with a full explanation, diagnosis steps, and exact fixes.

How to access this report: Google Search Console → Indexing → Pages → filter by “Not indexed” → click any reason to see all affected URLs.

Understanding the GSC Page Indexing Report

Not every non-indexed URL in GSC represents a problem. Some statuses — like “Alternate page with proper canonical tag” — are expected and correct. Others — like “Crawled — currently not indexed” — are warnings that need investigation. The critical skill is distinguishing between statuses that are benign (Google is doing exactly what you intended) versus statuses that are costing you traffic (Google is missing pages it should be ranking).

Start with the statuses covering the most URLs. If “Crawled — currently not indexed” affects 300 URLs and “Server error (5xx)” affects 5, fix the 5xx errors first — active server errors are more urgent than content quality issues — but then address the 300 crawled-not-indexed pages systematically. Each error type below includes the likely root causes, how to verify them using URL Inspection, and the specific fix steps.
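
If you want to verify URLs in bulk rather than one at a time in the URL Inspection tool, the Search Console URL Inspection API returns the same coverage data. The snippet below is a minimal sketch, assuming google-api-python-client and google-auth are installed and that a service account (the service-account.json filename here is a placeholder) has been added as a user on your property; SITE and URLS are also placeholders for your own property and the URLs exported from the report.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
SITE = "https://www.example.com/"   # your verified GSC property (placeholder)
URLS = [                            # URLs exported from the Pages report (placeholders)
    "https://www.example.com/page-a/",
    "https://www.example.com/page-b/",
]

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)
service = build("searchconsole", "v1", credentials=creds)

for url in URLS:
    body = {"inspectionUrl": url, "siteUrl": SITE}
    result = service.urlInspection().index().inspect(body=body).execute()
    status = result["inspectionResult"]["indexStatusResult"]
    # coverageState mirrors the reason shown in the Page indexing report,
    # e.g. "Crawled - currently not indexed"
    print(url, "|", status.get("verdict"), "|", status.get("coverageState"))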

Why Pages Fail to Get Indexed

Google has limited crawl budget and finite index capacity. It actively filters out pages it considers low value, technically problematic, or contradictory in their signals. The indexing pipeline has multiple stages where a page can be rejected: crawl access (robots.txt, authentication), rendering (JavaScript errors, server errors), and quality evaluation (thin content, duplicate content, canonical conflicts). A page that passes crawl access can still fail at the quality evaluation stage. Understanding which stage your pages are failing at tells you which type of fix is needed.
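
You can reproduce the first two stages yourself before digging into content quality. The rough sketch below uses Python with the requests library; the URL and user-agent string are placeholders, and the checks are simple heuristics for the signals you control, not what Google actually runs.

import re
import requests
from urllib import robotparser
from urllib.parse import urlparse

URL = "https://www.example.com/some-page/"  # placeholder: a non-indexed URL
UA = "Googlebot"

# Stage 1: crawl access - is the URL disallowed in robots.txt?
parts = urlparse(URL)
rp = robotparser.RobotFileParser()
rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
rp.read()
print("robots.txt allows crawling:", rp.can_fetch(UA, URL))

# Stage 2: fetch - does the server answer with a clean 200 and no noindex header?
resp = requests.get(URL, headers={"User-Agent": UA}, timeout=15)
print("HTTP status:", resp.status_code)
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag", "not set"))

# Stage 3 happens inside Google, but meta robots and canonical tags in the
# HTML are signals you can inspect directly.
html = resp.text
meta_robots = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.I)
canonical = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]*>', html, re.I)
print("meta robots tag:", meta_robots.group(0) if meta_robots else "not found")
print("canonical tag:", canonical.group(0) if canonical else "not found")
print("HTML size (bytes):", len(resp.content))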

Page with redirect
URL is a redirect, so Google drops it from the index and follows the redirect chain to the target page.
Excluded by noindex tag
Google sees a noindex meta tag or X-Robots-Tag header and deliberately keeps the page out of the index.
Alternate page with proper canonical tag
This URL declares a canonical pointing to a different page, so Google indexes that page instead. Usually intentional, but worth auditing.
Duplicate without user-selected canonical
Google found duplicates, you did not declare a canonical, and Google chose a different URL than this one.
Discovered - currently not indexed
Google knows the URL exists but has not crawled it yet - usually a crawl budget, quality, or server load signal.
Crawled - currently not indexed
Google crawled the page but decided not to index it - almost always a content quality or duplication problem.
Not found (404)
Server returned 404 Not Found. Google drops the URL from the index after repeated 404 responses.
Server error (5xx)
Server returned 500, 502, 503 or similar. Sustained 5xx errors cause deindexing and crawl rate drops.
Soft 404
Page returns 200 OK but looks empty or like an error page, so Google treats it as a 404 and excludes it (a quick status-code triage sketch follows this list).
Blocked by robots.txt
Your robots.txt file disallows Googlebot from crawling the URL, so Google cannot read the content it would need to index the page.
Blocked due to unauthorized request (401)
Server returned 401 Unauthorized. Googlebot does not log in or authenticate, so it never retrieves the page content.
Blocked due to access forbidden (403)
Server returned 403 Forbidden. Typically a firewall, WAF, or permission issue blocking Googlebot.
Indexed, though blocked by robots.txt
Google indexed the URL anyway because of external links, even though it could not crawl the content.
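
Several of the reasons above (401, 403, 404, 5xx, soft 404) come down to what the server returns when the URL is fetched. Below is a rough triage sketch in Python with requests; the URL list is a placeholder, and the 2 KB threshold for flagging possible soft 404s is an arbitrary heuristic, not a Google rule.

import requests

# Placeholder: URLs exported from the affected rows of the Page indexing report
URLS = [
    "https://www.example.com/old-product/",
    "https://www.example.com/blog/draft/",
]

def classify(url):
    try:
        resp = requests.get(url, timeout=15, allow_redirects=False)
    except requests.RequestException as exc:
        return f"fetch failed: {exc}"
    code = resp.status_code
    if code in (301, 302, 307, 308):
        return f"redirect to {resp.headers.get('Location')}"
    if code == 401:
        return "401 Unauthorized - authentication is blocking the crawl"
    if code == 403:
        return "403 Forbidden - check firewall/WAF rules for Googlebot"
    if code == 404:
        return "404 Not Found"
    if code >= 500:
        return f"{code} server error"
    if code == 200 and len(resp.content) < 2048:
        return "200 OK but under 2 KB - possible soft 404"
    return f"{code} OK"

for url in URLS:
    print(url, "->", classify(url))
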
See which of these errors affect your site
Free sitemap and indexing analysis - results in 60 seconds
Analyze My Sitemap