Excluded by 'noindex' tag
This status means Google fetched your page, saw a noindex directive, and deliberately kept it out of the index. The directive can come from a meta robots tag in the HTML or from an X-Robots-Tag HTTP header set by your server, CDN, or application. This is an intentional signal - the question is always: did you mean to set it?
What this GSC status means
Googlebot crawled the page successfully (no 404, no 403, no redirect) and found a robots directive in the response: either <meta name="robots" content="noindex"> in the HTML head or an X-Robots-Tag: noindex HTTP header. Google respects the directive and keeps the URL out of the index (removing it if it was already indexed). Unlike quality-based exclusions, this is a rule-based exclusion - Google is doing exactly what you told it to do.
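As a concrete illustration, both directive forms can be detected with a few lines of Python. This is a simplified sketch: a production check would also handle reversed attribute order, comma-separated values like "noindex, nofollow", and bot-specific tags such as <meta name="googlebot">.

```python
import re

def has_noindex(html: str, headers: dict) -> bool:
    """Return True if either noindex directive form is present (simplified)."""
    # Form 1: X-Robots-Tag HTTP header, e.g. "X-Robots-Tag: noindex"
    for name, value in headers.items():
        if name.lower() == "x-robots-tag" and "noindex" in value.lower():
            return True
    # Form 2: meta robots tag in the HTML head (assumes name before content)
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    return bool(meta and "noindex" in meta.group(1).lower())
```

Either form alone is enough to exclude the page; Google applies the most restrictive directive it finds.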
Common causes
- Staging site noindex tags accidentally deployed to production (WordPress Settings > Reading > "Discourage search engines" left checked).
- SEO plugins (Yoast, Rank Math, AIOSEO) configured to noindex tag, category, author, or post-type archives without you realizing.
- CDN or reverse proxy (Cloudflare, Fastly) injecting X-Robots-Tag: noindex headers.
- Password-protected or members-only templates leaving noindex on for logged-out crawlers.
- Intentional noindex on thin or duplicate pages (filter URLs, internal search results, paginated archives) - this is usually correct.
- JavaScript-rendered noindex added by a client-side framework only on certain routes.
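To narrow down which cause applies, it helps to note where the directive actually appears: in the response headers, in the raw HTML, or only in the JavaScript-rendered HTML. A rough triage sketch (the bucket labels are illustrative, not an exhaustive taxonomy; raw_html is the server response body, rendered_html is what GSC's URL Inspection shows after rendering):

```python
import re

META_RE = re.compile(r'<meta[^>]+name=["\']robots["\'][^>]*>', re.IGNORECASE)

def likely_cause(raw_html: str, rendered_html: str, headers: dict) -> str:
    """Map where the noindex directive appears to a likely cause bucket."""
    def meta_noindex(html):
        m = META_RE.search(html)
        return bool(m and "noindex" in m.group(0).lower())

    if any(k.lower() == "x-robots-tag" and "noindex" in v.lower()
           for k, v in headers.items()):
        return "server/CDN header"      # check server config, CDN rules, .htaccess
    if meta_noindex(raw_html):
        return "CMS/plugin meta tag"    # check WordPress setting or SEO plugin
    if meta_noindex(rendered_html):
        return "JS-injected meta tag"   # check client-side framework routes
    return "no noindex found"
```

A directive only in the rendered HTML points at the last cause above: a client-side framework adding the tag on certain routes.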
How it affects indexing
Any page with a noindex tag will not appear in search results and will not receive organic traffic. Google has also said that a long-standing noindex is eventually treated as noindex, nofollow, so over time the page stops passing link equity to the pages it links to. If the tag is on a page you actually want indexed, the business cost is direct: that page cannot rank at all.
How to diagnose
In GSC, open Page indexing, click "Excluded by 'noindex' tag", and examine the URL list. Run the URL Inspection tool on a sample URL - it reports the exact directive Google saw. Then load the URL, view source (Ctrl+U), and search for "noindex". Also check the HTTP response headers with curl -I URL to catch X-Robots-Tag. If the HTML shows no noindex but GSC still reports it, the directive is being injected by JavaScript after render or sent in the headers.
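The view-source and curl -I checks can be scripted. Here is a minimal sketch using only the Python standard library; note it does not execute JavaScript, so a client-side-injected directive will show up only in GSC's rendered HTML, not here:

```python
import re
import urllib.request

def check_url(url: str) -> dict:
    """Fetch a URL and report which noindex form, if any, is present.

    Mirrors the two manual checks: curl -I for the X-Robots-Tag header,
    view-source for the meta robots tag.
    """
    with urllib.request.urlopen(url) as resp:
        header = resp.headers.get("X-Robots-Tag") or ""
        html = resp.read().decode("utf-8", errors="replace")
    meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html,
                     re.IGNORECASE)
    return {
        "header_noindex": "noindex" in header.lower(),
        "meta_noindex": bool(meta and "noindex" in meta.group(0).lower()),
    }
```

Running this over the URL list exported from GSC quickly separates header-level problems from HTML-level ones.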
How to fix
1. Confirm you actually want the page indexed - many excluded pages should stay excluded.
2. View page source and remove the <meta name="robots" content="noindex"> tag (or change content to "index, follow").
3. Check HTTP headers with curl -I URL and remove any X-Robots-Tag: noindex header from your server, CDN, or .htaccess.
4. In WordPress: Settings > Reading > uncheck "Discourage search engines from indexing this site".
5. In Yoast/Rank Math: open the page editor, find the Advanced section, and set "Allow search engines to show this page?" to Yes.
6. If your framework (Next.js, etc.) renders noindex via JS, make sure the route-level metadata emits index, follow.
7. Use the URL Inspection tool and click "Request Indexing" to speed up reprocessing.
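Once the fix is deployed, a small regression guard can keep a stray noindex from shipping again. The helper below is a hypothetical example, simplified to the two common directive forms; it could run in CI against saved responses for key production URLs:

```python
import re

def assert_indexable(html: str, headers: dict) -> None:
    """Raise AssertionError if a noindex directive is still present.

    Intended as a post-deploy / CI guard so a staging noindex never
    reaches production unnoticed. Checks only the two common forms.
    """
    for name, value in headers.items():
        assert not (name.lower() == "x-robots-tag"
                    and "noindex" in value.lower()), \
            f"noindex still set via {name} header"
    meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html,
                     re.IGNORECASE)
    assert not (meta and "noindex" in meta.group(0).lower()), \
        "noindex still present in meta robots tag"
```

Even with the tag removed, Google re-crawls on its own schedule, so "Request Indexing" in step 7 remains the fastest way to get the change picked up.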