Excluded by 'noindex' tag

Updated April 2026 · By SitemapFixer Team

This status means Google fetched your page, saw a noindex directive, and deliberately kept it out of the index. The directive can come from a meta robots tag in the HTML, from an X-Robots-Tag HTTP header, or from server-side rules. This is an intentional signal - the question is always: did you mean to set it?

Find unintended noindex tags on your site
We crawl every URL in your sitemap and flag pages with noindex directives
Analyze My Sitemap

What this GSC status means

Googlebot crawled the page successfully (no 404, no 403, no redirect) and found a noindex directive in the response: either <meta name="robots" content="noindex"> in the HTML head or an X-Robots-Tag: noindex HTTP header. Google respects the directive and removes the URL from the index. Unlike quality-based exclusions, this is a rule-based exclusion - Google is doing exactly what you told it to do.

Common causes

The usual culprits are a staging or development configuration that was never switched off, the WordPress "Discourage search engines from indexing this site" setting, an SEO plugin (Yoast, Rank Math) set to noindex a page or post type, an X-Robots-Tag header added by the server or CDN, and framework code (Next.js and similar) emitting noindex via route-level metadata.

How it affects indexing

Any page with a noindex tag will not appear in search results, will not receive organic traffic, and over time stops passing link equity to the pages it links to. If the tag is on a page you actually want indexed, the business cost is direct: that page cannot rank at all.

How to diagnose

In GSC, open Page indexing, click "Excluded by 'noindex' tag", and examine the URL list. Run the URL Inspection tool on a sample URL - it shows the exact reason. Then load the URL, view the source (Ctrl+U), and search for "noindex". Also check the HTTP response headers with curl -I URL to catch X-Robots-Tag. If the HTML shows no noindex but GSC still reports it, the directive is being injected by JavaScript after render or sent in the HTTP headers.
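The two checks above (HTTP header and HTML meta tag) can be combined into one helper. This is a rough sketch - the function name find_noindex is my own, and the regex is deliberately loose; a real crawler would fetch the page first (with urllib or similar) and use a proper HTML parser:

```python
import re

def find_noindex(headers: dict, html: str) -> list[str]:
    """Return every source of a noindex directive found in a response.

    headers: dict of HTTP response headers; html: the response body.
    """
    sources = []
    # X-Robots-Tag header, e.g. "noindex, nofollow".
    # Header names are case-insensitive, so compare lowercased.
    for name, value in headers.items():
        if name.lower() == "x-robots-tag" and "noindex" in value.lower():
            sources.append(f"HTTP header: X-Robots-Tag: {value}")
    # <meta name="robots" content="... noindex ..."> in the HTML head
    for tag in re.findall(r"<meta[^>]+>", html, flags=re.IGNORECASE):
        if re.search(r'name=["\']robots["\']', tag, re.IGNORECASE) \
                and re.search(r"noindex", tag, re.IGNORECASE):
            sources.append(f"HTML meta tag: {tag}")
    return sources
```

Calling find_noindex on a response that carries both the header and the meta tag returns two entries; a clean page returns an empty list.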

How to fix

1. Confirm you actually want the page indexed - many excluded pages should stay excluded.
2. View the page source and remove the <meta name="robots" content="noindex"> tag (or change content to "index, follow").
3. Check HTTP headers with curl -I URL and remove any X-Robots-Tag: noindex header from your server, CDN, or .htaccess.
4. In WordPress: Settings > Reading > uncheck "Discourage search engines from indexing this site".
5. In Yoast/Rank Math: open the page editor, find the Advanced section, and set "Allow search engines to show this page?" to Yes.
6. If your framework (Next.js, etc.) renders noindex via JS, make sure the route-level metadata emits index, follow.
7. Use the URL Inspection tool and click "Request Indexing" to speed up reprocessing.
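To verify your fix across every sitemap URL at once (rather than inspecting pages one by one), a minimal sketch along these lines works. The names sitemap_urls and still_noindexed are illustrative, not a real API, and the body check is crude - it will false-positive on a page that merely mentions "noindex" in prose:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Standard sitemap namespace per the sitemaps.org protocol
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text: str) -> list[str]:
    """Extract every <loc> URL from a standard sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(NS + "loc")]

def still_noindexed(url: str) -> bool:
    """Fetch a URL and report whether any noindex directive remains.

    Crude: checks the X-Robots-Tag header and then scans the whole body,
    so a page that only mentions "noindex" in text will false-positive.
    """
    with urllib.request.urlopen(url) as resp:
        header = resp.headers.get("X-Robots-Tag", "")
        body = resp.read().decode("utf-8", errors="replace")
    return "noindex" in header.lower() or "noindex" in body.lower()
```

Feed sitemap_urls the raw XML of your sitemap, then loop the result through still_noindexed and recheck any URL it flags before requesting indexing.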

Frequently Asked Questions

I removed the noindex tag - why is the page still excluded?
Google has to recrawl the URL before the status flips. Use the URL Inspection tool in GSC and click "Request Indexing" - it usually updates within a few days. Also double-check that the noindex is not coming from an HTTP header (X-Robots-Tag) rather than the HTML.
Can noindex come from somewhere other than the meta tag?
Yes. Google honors noindex from the X-Robots-Tag HTTP response header, from robots meta tags injected by JavaScript, and from CMS settings that toggle it on the server side. CDNs, SEO plugins, and staging environments are the most common hidden sources.
Does a noindex page still pass link equity?
Links on a long-term noindex page are eventually treated as nofollow by Google, so any PageRank flowing through them fades. Do not rely on noindex pages as internal link hubs.