Discovered - currently not indexed

Updated April 2026 · By SitemapFixer Team

"Discovered - currently not indexed" is one of the most frustrating Google Search Console statuses because it means Google knows your URL exists (it found it in your sitemap or through a link) but has deliberately chosen not to crawl it yet. Google typically explains this by saying it delayed the request to avoid overloading your site. In practice, it almost always signals a crawl budget, site quality, or server capacity problem.

Fix sitemap issues that slow Google down
Clean sitemap = better crawl priority. Free analysis in 60 seconds
Analyze My Sitemap

What this GSC status means

Google knows about the URL - it is in Google's known URLs list - but it has not been fetched yet. Google explicitly states: "Typically, Google wanted to crawl the URL but this was expected to overload the site; therefore Google rescheduled the crawl." In reality, the rescheduling signal also correlates with low internal URL priority, poor site authority, and template-heavy or thin content patterns. Google is making a deliberate decision to not spend crawl budget on this URL right now.

Common causes

- Crawl budget exhaustion: Google knows about more URLs on your site than it is willing to crawl.
- Slow or overloaded server: high response times signal that crawling faster would strain the host.
- Weak internal linking: orphan pages (in the sitemap but with zero internal links) get the lowest priority.
- Low site authority: sites with few external links are allocated less crawl budget.
- Thin or template-heavy content patterns that lower Google's expectations for uncrawled URLs.
- Bloated sitemaps full of duplicate, redirecting, or non-canonical URLs.

How it affects indexing

The URLs do not rank at all because they are not in the index. If the status is just a temporary queue, they will be crawled and likely indexed within days or weeks. But when the bucket is large and stagnant (hundreds or thousands of URLs sitting there for months), it indicates Google has deprioritized most of your content. Publishing new content without growing site authority will just add to the queue.

How to diagnose

In GSC, open the Page indexing report and click "Discovered - currently not indexed". Look at the count and the affected URLs: are they your highest-value pages or low-priority ones? Check Settings > Crawl stats for average response time and host issues. Crawl your site with an SEO crawler and check for orphan pages (URLs in the sitemap with zero internal links). Finally, verify server response time is under 500ms on a representative sample of URLs.
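The server-speed check above can be partially scripted. A minimal sketch using only the standard library: it pulls URLs out of a local sitemap copy, samples a handful, and flags any that exceed a response-time budget. The `sitemap.xml` filename, the sample size of 20, and the 500 ms threshold are assumptions for illustration, not values Google publishes.

```python
# Sample URLs from a sitemap and measure time-to-first-byte (TTFB).
# Assumption: a local copy of the sitemap and a 500 ms budget.
import random
import time
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
TTFB_BUDGET = 0.5  # seconds; assumed target from the text above

def parse_sitemap(xml_text: str) -> list[str]:
    """Extract <loc> URLs from a standard sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")]

def measure_ttfb(url: str) -> float:
    """Time until the first response byte arrives."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1)  # read only the first byte
    return time.monotonic() - start

def slow_urls(urls: list[str], sample_size: int = 20) -> list[tuple[str, float]]:
    """Return (url, ttfb) pairs from a random sample that exceed the budget."""
    sample = random.sample(urls, min(sample_size, len(urls)))
    timings = [(u, measure_ttfb(u)) for u in sample]
    return [(u, t) for u, t in timings if t > TTFB_BUDGET]

# Usage (assuming a local copy of your sitemap):
#   urls = parse_sitemap(open("sitemap.xml").read())
#   for url, ttfb in slow_urls(urls):
#       print(f"SLOW {ttfb * 1000:.0f} ms  {url}")
```

Consistently slow samples here usually line up with host-load warnings in the GSC Crawl stats report.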

How to fix

1. Clean your sitemap: remove thin, duplicate, redirecting, and non-canonical URLs, and submit only high-quality canonicals.
2. Check Crawl Stats in GSC and work on server TTFB; aim for under 500ms average.
3. Add internal links from high-traffic, high-authority pages to each stuck URL (not just from the footer).
4. Consolidate or noindex thin pages so they do not dilute the quality signal for the whole site.
5. For critical URLs, use URL Inspection > Request Indexing (limit ~10/day).
6. Build topical clusters: a hub page linking to related posts lifts every URL in the cluster.
7. Earn external backlinks to the site overall; higher authority means a bigger crawl budget.
8. Split very large sitemaps by content type (products vs blog vs categories) to help Google prioritize.

Frequently Asked Questions

How long can a page stay in "Discovered - currently not indexed"?
Weeks to months. Google explicitly says it delayed crawling to avoid overloading your server, or because its internal priority for the URL is low. Pages can sit here indefinitely - the fix is to raise the URL's priority through internal linking and demonstrate server capacity.
Does "Discovered" mean my page is low quality?
Not necessarily. Google has not even crawled the page yet, so it has no opinion on quality. It is more often a crawl-budget or site-authority issue. That said, if Google has deprioritized the URL based on surrounding signals (thin neighboring pages, poor internal links), content quality can indirectly contribute.
Will Request Indexing fix this?
Sometimes for a single URL. But if hundreds or thousands of URLs are stuck in "Discovered", Request Indexing is a band-aid. The real fix is improving site architecture, internal linking, server response time, and overall site authority so Google prioritizes your URLs during normal crawling.