By SitemapFixer Team
April 2025 · 5 min read

Noindex Fix Guide: Restore Indexing to Your Pages


An accidental noindex tag is one of the most damaging mistakes in SEO - it silently removes pages from Google's index without any crawl error or obvious warning. Unlike a 404 or a robots.txt block, noindexed pages get crawled normally, so the problem often goes undetected for weeks or months. This guide covers where noindex tags come from, how to find them at scale, and exactly how to fix each variant.

What a noindex tag does

A noindex meta tag (<meta name="robots" content="noindex">) tells Google not to include the page in its search index. The page can still be crawled, but it will not appear in search results. This is useful for admin pages, checkout pages, and duplicate content - but catastrophic when applied accidentally to pages you want to rank.

Where noindex tags come from accidentally

WordPress has a setting under Settings then Reading called 'Discourage search engines from indexing this site' - one checkbox that adds noindex to every page. This is commonly enabled during development and forgotten at launch. SEO plugins can add noindex to individual pages if the visibility setting is changed. Theme updates can sometimes reset these settings. Staging environments copied to production can carry over noindex directives.

How to find accidental noindex tags

Check the page source of important pages for <meta name="robots" content="noindex">. In Google Search Console, the Pages report shows 'Excluded by noindex tag' - click it to see all affected URLs. In Screaming Frog, go to Directives then Noindex to see all pages with noindex. Test individual pages with Google Search Console URL Inspection - it shows whether the page is excluded and specifically if a noindex tag is the reason.
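
These manual checks can also be scripted. The sketch below (Python, standard library only) fetches a page and reports any noindex directive found in a robots meta tag or an X-Robots-Tag response header. The URL, the noindex_reasons name, and the deliberately crude regex are illustrative assumptions, not a production crawler:

    import re
    import urllib.request

    def noindex_reasons(url):
        """Return a list describing any noindex directives found at a URL."""
        req = urllib.request.Request(url, headers={"User-Agent": "noindex-audit/1.0"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            header = resp.headers.get("X-Robots-Tag", "")
            body = resp.read().decode("utf-8", errors="replace")
        reasons = []
        # Crude check for <meta name="robots" content="...noindex...">; it assumes
        # name= appears before content=, so a real crawler should parse the HTML.
        match = re.search(
            r'<meta[^>]*name=["\']?robots["\']?[^>]*content=["\']?([^"\'>]*)',
            body, re.IGNORECASE)
        if match and "noindex" in match.group(1).lower():
            reasons.append(f"meta robots: {match.group(1)}")
        if "noindex" in header.lower():
            reasons.append(f"X-Robots-Tag header: {header}")
        return reasons

    print(noindex_reasons("https://example.com/"))  # an empty list means no noindex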

How to fix noindex tags

In WordPress: go to Settings then Reading and uncheck 'Discourage search engines'. In Yoast SEO: open the page editor, click the Yoast panel, go to Advanced, and set Robots meta to 'Default for post type' or remove the noindex override. In Rank Math: open the page, click the Rank Math panel, find the Advanced tab, and change the Robots Meta setting. After removing noindex, use Google Search Console URL Inspection to request indexing for the now-indexable page.

Sitemap and noindex conflicts

If a page has a noindex tag but is also in your sitemap, Google Search Console reports it as 'Submitted URL marked noindex'. This is a contradiction - your sitemap says index this page, the noindex tag says do not. Either remove the page from your sitemap (if the noindex is intentional) or remove the noindex tag (if the page should be indexed). SitemapFixer detects this conflict automatically.
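
A sketch of the same check automated, assuming a standard urlset sitemap (it does not handle sitemap index files) and reusing the noindex_reasons helper from the earlier sketch:

    import urllib.request
    import xml.etree.ElementTree as ET

    NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

    def sitemap_urls(sitemap_url):
        """Return every <loc> URL from a standard urlset sitemap."""
        with urllib.request.urlopen(sitemap_url, timeout=10) as resp:
            root = ET.fromstring(resp.read())
        return [loc.text.strip() for loc in root.iter(NS + "loc")]

    for url in sitemap_urls("https://example.com/sitemap.xml"):  # placeholder
        reasons = noindex_reasons(url)  # helper from the sketch above
        if reasons:
            print(f"CONFLICT: {url} is in the sitemap but noindexed: {reasons}")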

Noindex via HTTP header

A noindex directive can also be delivered via an HTTP response header rather than a meta tag: X-Robots-Tag: noindex. This is common on PDFs, images, and pages generated by server-side code where you can't add HTML meta tags. It's harder to detect because you won't see it in the page source - you need to inspect response headers using a tool like curl -I or the Network tab in Chrome DevTools. Check HTTP headers on any non-HTML resources in your sitemap that appear in GSC's noindex exclusions.
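
Because only the headers matter for these resources, a HEAD request is enough. A minimal sketch in the same vein - the PDF URL is a placeholder:

    import urllib.request

    def x_robots_tag(url):
        """Issue a HEAD request and return the X-Robots-Tag header value, if any."""
        req = urllib.request.Request(url, method="HEAD",
                                     headers={"User-Agent": "noindex-audit/1.0"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.headers.get("X-Robots-Tag")

    tag = x_robots_tag("https://example.com/whitepaper.pdf")  # placeholder URL
    if tag and "noindex" in tag.lower():
        print(f"noindex delivered via HTTP header: {tag}")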

Verifying the fix worked

After removing a noindex tag, don't just assume Google will re-index the page. Use Google Search Console URL Inspection to request indexing immediately. Monitor the Pages report under Indexing over the next 1-2 weeks - the page should move from 'Excluded by noindex tag' to 'Indexed'. If it stalls at 'Crawled - currently not indexed' instead, the noindex was removed correctly but content quality may need improvement before Google decides it's worth indexing.

Noindex on CSS and JavaScript files

A particularly damaging mistake is adding noindex to your CSS or JavaScript files via X-Robots-Tag headers - this can prevent Googlebot from rendering your pages correctly, causing a cascade of rendering failures across your site. Always verify that your server-side noindex rules apply only to HTML pages and not to static assets. Use the URL Inspection tool and click Test Live URL to see a screenshot of how Googlebot renders any page where rendering seems broken.
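
One way to audit static assets is to collect the stylesheet and script URLs from a page and check each one's headers, reusing the x_robots_tag helper from the previous sketch. A sketch using the standard-library HTML parser (the page URL is a placeholder):

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    import urllib.request

    class AssetCollector(HTMLParser):
        """Collects stylesheet hrefs and script srcs from an HTML page."""
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.assets = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
                self.assets.append(urljoin(self.base_url, attrs["href"]))
            elif tag == "script" and attrs.get("src"):
                self.assets.append(urljoin(self.base_url, attrs["src"]))

    page = "https://example.com/"  # placeholder
    with urllib.request.urlopen(page, timeout=10) as resp:
        collector = AssetCollector(page)
        collector.feed(resp.read().decode("utf-8", errors="replace"))

    for asset in collector.assets:
        tag = x_robots_tag(asset)  # helper from the previous sketch
        if tag and "noindex" in tag.lower():
            print(f"WARNING: asset {asset} carries X-Robots-Tag: {tag}")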

Using noindex correctly for staging sites

Staging environments should be kept out of Google's index entirely during development, but the mechanism matters. The safest approach is a blanket robots.txt Disallow on staging domains combined with basic HTTP authentication - this prevents crawling entirely rather than relying on noindex alone. When you deploy to production, run a post-launch checklist: verify robots.txt does not block crawling, check that no noindex tags remain on pages you want ranked, and submit your sitemap to GSC immediately.
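
Parts of that checklist can be automated. A minimal sketch, assuming placeholder values for the production domain and key pages and reusing noindex_reasons from the first sketch:

    import urllib.robotparser

    SITE = "https://example.com"  # placeholder production domain
    KEY_PAGES = [SITE + "/", SITE + "/pricing", SITE + "/blog"]  # placeholders

    # 1. robots.txt must not block crawling of pages you want ranked.
    rp = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
    rp.read()
    for url in KEY_PAGES:
        if not rp.can_fetch("Googlebot", url):
            print(f"BLOCKED by robots.txt: {url}")

    # 2. No noindex directives may remain on pages you want ranked.
    for url in KEY_PAGES:
        reasons = noindex_reasons(url)  # helper from the first sketch
        if reasons:
            print(f"STILL NOINDEXED: {url}: {reasons}")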

When to use noindex intentionally

Noindex is the right tool for pages that must exist but should not compete for search visibility: internal search results pages (low value, near-infinite variations), cart and checkout pages, user account dashboard pages, thank-you confirmation pages, and tag archive pages with sparse content. Using noindex on these pages focuses Google's crawl budget on your important content and keeps low-quality pages from diluting your site's overall quality signals. Be deliberate - document every intentional noindex and review the list annually.
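
One way to keep that documentation honest is to encode the intentional list as URL patterns and flag any noindexed page that is not on it. A sketch with placeholder patterns and URLs:

    import re
    from urllib.parse import urlparse

    # Documented intentional-noindex patterns - placeholders, adjust to your site.
    INTENTIONAL_NOINDEX = [
        r"^/search",     # internal search results
        r"^/cart",       # cart and checkout
        r"^/account",    # user dashboards
        r"^/thank-you",  # confirmation pages
    ]

    def is_documented(path):
        """True if a URL path matches a documented intentional-noindex pattern."""
        return any(re.search(pattern, path) for pattern in INTENTIONAL_NOINDEX)

    # URLs flagged as noindexed by a crawl or a GSC export (placeholders).
    for url in ["https://example.com/search?q=shoes", "https://example.com/pricing"]:
        if not is_documented(urlparse(url).path):
            print(f"UNDOCUMENTED noindex: {url}")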
