Noindex Tag: How to Use It and When to Avoid It
The noindex tag tells search engines not to include a page in their search results. It is placed in the HTML head as a meta tag: <meta name="robots" content="noindex">. When Google next crawls the page and sees this tag, it will drop the page from its index, even if the page was previously indexed. The same directive can also be delivered as an X-Robots-Tag HTTP header, which is useful for non-HTML resources such as PDFs.
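In context, the tag sits inside the document head like this (page title and structure here are illustrative):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Order Confirmation</title>
  <!-- Keep this page out of search results; crawlers can still fetch it -->
  <meta name="robots" content="noindex">
</head>
<body>...</body>
</html>
```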
When to Use Noindex
Use noindex on pages you deliberately do not want in search results:
- Admin and login pages
- Internal search result pages
- Shopping cart and checkout pages
- Thank-you and confirmation pages
- Duplicate content pages you cannot consolidate with canonicals
- Staging or preview pages accessible to crawlers
- Paginated archives beyond page 2
Noindex vs Canonical vs Robots.txt
Noindex removes a page from the index but still allows crawling. A canonical tag consolidates duplicate pages by pointing search engines to a preferred version. Robots.txt blocks crawling entirely but does not prevent indexing: a blocked URL can still be indexed via external links, and because Google cannot crawl it, it will never see a noindex tag on that page. For this reason, never combine a robots.txt block with noindex on the same URL. Use noindex when you want Google to crawl but not index. Use robots.txt when you want to conserve crawl budget on pages you are confident do not need indexing. Use canonicals to manage duplicate content.
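Side by side, the three mechanisms look like this (the URLs and paths are illustrative):

```html
<!-- noindex: the page can be crawled, but is kept out of the index -->
<meta name="robots" content="noindex">

<!-- canonical: ranking signals consolidate to the preferred version -->
<link rel="canonical" href="https://example.com/preferred-page/">
```

```
# robots.txt: crawling is blocked, but the URL can still be
# indexed if external links point to it
User-agent: *
Disallow: /cart/
```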
Noindex and Your Sitemap: A Critical Rule
Never include noindexed pages in your XML sitemap. Your sitemap signals to Google which pages you want indexed, so listing a noindexed page sends contradictory instructions: the sitemap says "index this" while the meta tag says "do not." Google will honor the noindex directive, but the conflict wastes crawl budget and surfaces as "Submitted URL marked 'noindex'" errors in Search Console. SitemapFixer automatically detects pages in your sitemap that carry a noindex tag.
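The cross-check itself is simple to sketch. The snippet below is a minimal illustration, not SitemapFixer's actual implementation: it assumes the sitemap XML and each page's HTML have already been fetched (no network calls), and it uses a deliberately crude regex that only matches the common `name="robots"` then `content="..."` attribute order.

```python
import re
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_xml: str) -> list[str]:
    """Extract the <loc> values from a sitemap XML string."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")]

def has_noindex(html: str) -> bool:
    """Crude check for a robots meta tag whose content includes 'noindex'."""
    pattern = r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex'
    return re.search(pattern, html, re.IGNORECASE) is not None

def conflicting_urls(sitemap_xml: str, pages: dict[str, str]) -> list[str]:
    """URLs listed in the sitemap whose HTML carries a noindex tag."""
    return [url for url in sitemap_urls(sitemap_xml)
            if url in pages and has_noindex(pages[url])]

# Example with hypothetical pages:
sitemap = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/thank-you/</loc></url>
</urlset>"""
pages = {
    "https://example.com/": "<head><title>Home</title></head>",
    "https://example.com/thank-you/":
        '<head><meta name="robots" content="noindex"></head>',
}
print(conflicting_urls(sitemap, pages))  # → ['https://example.com/thank-you/']
```

A production version would also need to handle sitemap index files, the reversed `content`/`name` attribute order, and the X-Robots-Tag header.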
Common Noindex Mistakes
- Accidentally noindexing your entire site during development and forgetting to remove the tag before launch.
- Including noindexed pages in your XML sitemap.
- Using noindex instead of canonicals for duplicate content: canonicals consolidate PageRank to the preferred URL; noindex does not.
- Noindexing important pages because their content is thin, instead of improving the content.
- Forgetting that noindex takes time to take effect: Google must recrawl the page before it processes the directive.
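A quick pre-launch check can guard against the first mistake. This is a minimal sketch, assuming you have already fetched a page's HTML and response headers; `noindex_signals` is a hypothetical helper, not a SitemapFixer API.

```python
import re

def noindex_signals(html: str, headers: dict[str, str]) -> list[str]:
    """Report which noindex signals, if any, are present on a page."""
    found = []
    # Crude check: matches the common name-then-content attribute order only
    if re.search(r'<meta[^>]+name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
                 html, re.IGNORECASE):
        found.append("meta robots tag")
    # HTTP header names are case-insensitive, so normalize before the lookup
    header = {k.lower(): v for k, v in headers.items()}.get("x-robots-tag", "")
    if "noindex" in header.lower():
        found.append("X-Robots-Tag header")
    return found

# Example: a staging homepage that still carries the development-time tag
print(noindex_signals('<meta name="robots" content="noindex">', {}))
```

Running this against your homepage and a few key templates before launch catches both the meta tag and the header variant of the mistake.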
Related Guides
- Pages Not Indexed by Google: Causes and Fixes
- Submitted URL Not Indexed: How to Fix in GSC
- Crawled Not Indexed: How to Fix It
- Discovered Not Indexed: Why It Happens & Fixes
- Why Are My Pages Not Indexed by Google?
- X-Robots-Tag: The HTTP Alternative to Meta Robots
- Self-Referencing Canonical: Why Every Page Needs One