By SitemapFixer Team
Updated April 2026

Submitted URL Not Indexed: Why and How to Fix It

What This Status Actually Means

“Submitted URL not indexed” appears in the Google Search Console Pages report under the “Not indexed” section. It means Google crawled a URL that is listed in your submitted sitemap and made a deliberate decision not to include it in the search index. This is different from “Discovered — currently not indexed” (where Google knows the page but has not crawled it) — with this status, Google has visited the page and actively rejected it.

The key word is “deliberate.” Google is not ignoring this page due to a technical barrier — it fetched the content, rendered it, and concluded the page should not be in the index. This is always a quality or signal issue, not a crawl access issue. Fixing it requires addressing the underlying content, technical, or signal problem — not resubmitting the sitemap.

Cause 1: Thin or Low-Value Content

This is the most common cause. Pages with fewer than 300 words, no unique insights beyond what other pages cover, or content that is largely template boilerplate (e.g., auto-generated product pages with only a name and price) are routinely crawled and not indexed. Google evaluates the content against everything else it has seen on the web — if your page does not offer anything that a searcher could not get from the 10 existing indexed pages on the topic, it will be excluded.

Fix: add at least 500 words of original, substantive content. Answer questions your target audience actually asks. Use URL Inspection in Search Console > “Test Live URL” to see what Google renders — if the content preview is sparse, that is what Google assessed. For product pages, add unique descriptions, specifications, images with proper alt text, and user-relevant information like compatibility, usage instructions, and FAQs.
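If you want to triage at scale before opening URL Inspection on each page, a rough word count of the served HTML can flag thin candidates. The sketch below is a minimal TypeScript example: the regex-based tag stripping is approximate, it only reflects server-rendered content (not anything loaded by client-side JavaScript), and the URL and the 500-word floor are this article's suggestion, not a Google threshold.

```typescript
// Rough word-count check for server-rendered pages. Regex tag-stripping is
// approximate; 500 is this article's suggested floor, not a Google number.
const WORD_FLOOR = 500;

async function visibleWordCount(url: string): Promise<number> {
  const res = await fetch(url);
  const html = await res.text();
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop inline JS
    .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop inline CSS
    .replace(/<[^>]+>/g, " ");                   // drop remaining tags
  return text.split(/\s+/).filter(Boolean).length;
}

const url = "https://example.com/thin-page"; // placeholder URL
visibleWordCount(url).then((n) =>
  console.log(`${url}: ${n} words${n < WORD_FLOOR ? " (below floor)" : ""}`)
);
```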

Cause 2: Duplicate or Near-Duplicate Content

If your page is substantially similar to another page on your site (or another site), Google will typically index only one version. Common scenarios: paginated archive pages that mostly repeat content from page 1; product variant pages that differ only in color or size; thin location pages created for cities that all use the same template; and blog posts that cover the same topic as existing posts without meaningful differentiation.

Fix: check whether a very similar page on your site is already indexed. Run a site:yoursite.com keyword search in Google to see existing indexed pages on the topic. If you have near-duplicates, either consolidate them into one comprehensive page with a 301 redirect, or differentiate them with genuinely unique content per page. For location pages, this means real unique content per city — not just swapping the city name in a template.
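To put a number on "near-duplicate" before deciding whether to consolidate, you can compare the extracted body text of two pages with word shingles and Jaccard similarity. This is a minimal sketch; the 0.8 threshold and the sample texts are illustrative assumptions, since Google publishes no duplicate cutoff.

```typescript
// Near-duplicate check: 5-word shingles plus Jaccard similarity.
// The 0.8 threshold and sample texts are illustrative assumptions.
function shingles(text: string, size = 5): Set<string> {
  const words = text.toLowerCase().replace(/[^a-z0-9\s]/g, " ").split(/\s+/).filter(Boolean);
  const out = new Set<string>();
  for (let i = 0; i + size <= words.length; i++) out.add(words.slice(i, i + size).join(" "));
  return out;
}

function jaccard(a: Set<string>, b: Set<string>): number {
  let overlap = 0;
  for (const s of a) if (b.has(s)) overlap++;
  const union = a.size + b.size - overlap;
  return union === 0 ? 0 : overlap / union;
}

// In practice these would be the extracted body text of two sitemap URLs.
const pageA = "our denver office offers plumbing repair and installation services for homes across the metro area";
const pageB = "our boulder office offers plumbing repair and installation services for homes across the metro area";
const score = jaccard(shingles(pageA), shingles(pageB));
console.log(score > 0.8 ? "Likely near-duplicates" : "Below threshold", score.toFixed(2));
```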

Cause 3: A noindex Directive Is Present

Check the page source for <meta name="robots" content="noindex"> or <meta name="robots" content="noindex, nofollow">. Also check response headers for an X-Robots-Tag: noindex header — this is common in WordPress when staging environments are made live without removing the “Discourage search engines” setting, or in Next.js apps where a robots metadata field was set incorrectly.

Fix: in URL Inspection, click “Test Live URL” and check the “Coverage” section — it will explicitly state if a noindex was found. Remove the noindex directive, then request reindexing. Having a noindex page in your sitemap is a conflicting signal — your sitemap says “index this” while the page says “do not index me.” Clean up your sitemap too by removing all noindex URLs.
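A quick way to audit the whole sitemap for this conflict is to fetch every listed URL and check both the meta tag and the X-Robots-Tag header. A minimal sketch, assuming a flat (non-index) sitemap and the typical attribute order where name precedes content in the meta tag; the sitemap URL is a placeholder.

```typescript
// Flag sitemap URLs that carry a noindex in either the robots meta tag or the
// X-Robots-Tag response header. Regex <loc> extraction assumes a flat sitemap.
async function findNoindexInSitemap(sitemapUrl: string): Promise<void> {
  const xml = await (await fetch(sitemapUrl)).text();
  const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);
  for (const url of urls) {
    const res = await fetch(url);
    const headerNoindex = (res.headers.get("x-robots-tag") ?? "").includes("noindex");
    // Assumes name="robots" appears before the content attribute, as most CMSs emit.
    const bodyNoindex = /<meta[^>]+name=["']robots["'][^>]*noindex/i.test(await res.text());
    if (headerNoindex || bodyNoindex) console.log(`noindex found: ${url}`);
  }
}

findNoindexInSitemap("https://example.com/sitemap.xml");
```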

Cause 4: Canonical Tag Pointing Elsewhere

If the page has <link rel="canonical" href="https://example.com/different-url" />, Google will index the canonical URL instead of this one. This is correct behavior — Google respects your canonical declaration. The page in your sitemap will show as “not indexed” even if the canonical version is indexed and ranking.

Fix: verify the canonical on each affected page. In URL Inspection, the “Google-selected canonical” field shows which URL Google chose. If it differs from the page's own URL, you either have a misconfigured canonical or a legitimate duplicate where the canonical is correct. Update your sitemap to use only canonical URLs — never list a URL in your sitemap if its canonical points elsewhere.
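Checking canonicals programmatically catches these mismatches before Google reports them. A minimal sketch: the regex assumes rel appears before href in the link tag and that the canonical is an absolute URL, and the trailing-slash normalization is deliberately naive, so adjust it to your own URL conventions.

```typescript
// Compare a page's declared canonical against its own URL. A mismatch means
// the URL should not be listed in your sitemap. Assumes absolute canonicals.
async function canonicalMismatch(url: string): Promise<string | null> {
  const html = await (await fetch(url)).text();
  const m = html.match(/<link[^>]+rel=["']canonical["'][^>]*href=["']([^"']+)["']/i);
  if (!m) return null; // no canonical declared
  const canonical = m[1].replace(/\/$/, "");
  return canonical === url.replace(/\/$/, "") ? null : canonical;
}

canonicalMismatch("https://example.com/page-variant").then((c) => {
  if (c) console.log(`Canonical points elsewhere: ${c}`);
});
```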

Cause 5: JavaScript Rendering Problems

For JavaScript-heavy pages (React, Vue, Angular SPAs), Google must render the JavaScript to see the content. If your page returns a mostly empty HTML shell and relies on client-side JavaScript to load the main content, Googlebot may fetch and render a blank page — which it then correctly does not index. Use URL Inspection > “Test Live URL” > “View Crawled Page” to see what Googlebot actually renders. If your page content is missing from the rendered screenshot, you have a rendering problem.

Fix: implement server-side rendering (SSR) or static generation for critical content. For Next.js, ensure pages use getServerSideProps, getStaticProps, or the App Router's default server components so content is in the HTML before JavaScript runs. Avoid relying on lazy-loaded API calls for your primary H1 and body copy.
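As a concrete illustration, here is a minimal App Router page using Next.js 13/14 conventions, where the product data is fetched in a server component so the H1 and description are present in the initial HTML. getProduct and the API endpoint are placeholders for your own data layer.

```tsx
// app/products/[slug]/page.tsx (sketch). Server components render on the
// server by default, so this markup is in the HTML Googlebot fetches.
type Product = { name: string; description: string };

async function getProduct(slug: string): Promise<Product> {
  // Placeholder endpoint; revalidate keeps the static render at most an hour old.
  const res = await fetch(`https://api.example.com/products/${slug}`, {
    next: { revalidate: 3600 },
  });
  return res.json();
}

export default async function ProductPage({ params }: { params: { slug: string } }) {
  const product = await getProduct(params.slug);
  return (
    <main>
      <h1>{product.name}</h1> {/* present before any client JS runs */}
      <p>{product.description}</p>
    </main>
  );
}
```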

Cause 6: No Internal Links to the Page

Pages with no internal links pointing to them — orphan pages — receive low crawl priority and weak PageRank signals. Even if Google indexes them initially, it may de-index them over time if they remain disconnected from the site's internal link graph. A page in your sitemap with zero internal links is telling Google two different things: “this page is important” (sitemap inclusion) versus “nothing on our site links to this” (internal link graph). Google trusts the link graph more.

Fix: add contextual internal links to the affected page from at least 3 other pages on your site. Links from your most authoritative and frequently crawled pages (home page, high-traffic blog posts, hub pages) have the most impact. The anchor text should describe the destination page accurately. After adding links, request indexing via URL Inspection — Google typically re-crawls within days after new internal links are discovered.
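You can approximate an orphan check by diffing sitemap URLs against the internal link targets found on those same pages. The sketch below only scans pages already in the sitemap rather than performing a full crawl from the home page, so treat its output as a starting point; the URLs are placeholders.

```typescript
// Find sitemap URLs that no other sitemap page links to. An approximation:
// a full crawl starting from the home page would be more thorough.
async function findOrphans(sitemapUrl: string, origin: string): Promise<void> {
  const xml = await (await fetch(sitemapUrl)).text();
  const pages = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);
  const linked = new Set<string>();
  for (const page of pages) {
    const html = await (await fetch(page)).text();
    for (const m of html.matchAll(/href=["']([^"'#?]+)/g)) {
      const href = m[1].startsWith("/") ? origin + m[1] : m[1];
      if (href.startsWith(origin)) linked.add(href.replace(/\/$/, ""));
    }
  }
  for (const page of pages) {
    if (!linked.has(page.replace(/\/$/, ""))) console.log(`Orphan: ${page}`);
  }
}

findOrphans("https://example.com/sitemap.xml", "https://example.com");
```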

Cause 7: The URL Redirects

A URL in your sitemap that returns a 301 or 302 redirect will be flagged as “submitted URL not indexed” because Google indexes the redirect destination, not the redirecting URL. Your sitemap should only ever contain final destination URLs — every URL should return a 200 OK directly, with no hops.

Fix: crawl your sitemap URLs with a tool like Screaming Frog or SitemapFixer, filtering for non-200 status codes. Update your sitemap to replace redirecting URLs with their final destinations. This also catches chains where A → B → C — your sitemap should list C directly.
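If you prefer a quick script over a full crawler, following redirects manually makes each hop in a chain visible. A minimal sketch; the five-hop cap is an arbitrary safety limit, and the URL is a placeholder.

```typescript
// Trace redirect hops manually so chains (A -> B -> C) are visible. A clean
// sitemap entry should be a single 200 response with no hops.
async function traceRedirects(url: string, maxHops = 5): Promise<void> {
  const hops: string[] = [url];
  let current = url;
  for (let i = 0; i < maxHops; i++) {
    const res = await fetch(current, { redirect: "manual" });
    if (res.status < 300 || res.status >= 400) {
      if (hops.length > 1 || res.status !== 200) {
        console.log(`${hops.join(" -> ")} (final status ${res.status})`);
      }
      return;
    }
    // Location may be relative; resolve it against the current URL.
    current = new URL(res.headers.get("location") ?? "", current).toString();
    hops.push(current);
  }
  console.log(`${url}: gave up after ${maxHops} hops`);
}

traceRedirects("https://example.com/old-url");
```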

How to Request Reindexing After Fixing

Once you have identified and fixed the underlying cause, use the URL Inspection tool in Google Search Console: enter the page URL, click “Test Live URL” to confirm Google can now see the correct content, then click “Request Indexing.” Google typically re-crawls within 2–7 days for requested URLs. Do not request indexing before fixing the root cause — it has no effect if the underlying quality or technical issue is still present.

For pages that were previously indexing fine and recently dropped into “submitted URL not indexed,” check whether the page content changed recently — a template update, plugin change, or CMS migration can introduce noindex tags, canonical mismatches, or thin content rendering without it being obvious from the URL itself.
