Crawled - currently not indexed
"Crawled - currently not indexed" is Google's way of saying "I fetched your page, I looked at it, and I decided not to include it in my index." There is no tag to remove or server error to fix - Google just did not think the page was worth indexing. This status is almost always a signal that the content needs to be better, more unique, or backed by stronger demand signals.
What this GSC status means
Googlebot successfully fetched the URL (HTTP 200, crawlable, no noindex). It rendered the page, analyzed the content, and concluded that the page does not add enough value to warrant indexing. Google's documentation describes the status as a page that "may or may not be indexed in the future" - essentially a pending or rejected verdict. The decision is based on perceived content quality, demand for the topic, uniqueness, and how the URL fits into the broader site.
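If you want to read this status outside the GSC interface, the Search Console URL Inspection API reports the same coverage state. Below is a minimal sketch using google-api-python-client, assuming a service account key that has been granted access to the property; the key file name, property URL, and inspected URL are placeholders, and the method path and response field names should be verified against the current API reference.

```python
# Minimal sketch: read the coverage state Google reports for one URL via the
# Search Console URL Inspection API (google-api-python-client).
from google.oauth2 import service_account
from googleapiclient.discovery import build

SERVICE_ACCOUNT_FILE = "service-account.json"   # placeholder: key with access to the GSC property
SITE_URL = "https://example.com/"               # placeholder: property as registered in GSC

creds = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE,
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
search_console = build("searchconsole", "v1", credentials=creds)

def coverage_state(url: str) -> str:
    """Return the coverage state for one URL, e.g. 'Crawled - currently not indexed'."""
    response = search_console.urlInspection().index().inspect(
        body={"inspectionUrl": url, "siteUrl": SITE_URL}
    ).execute()
    return response["inspectionResult"]["indexStatusResult"]["coverageState"]

print(coverage_state("https://example.com/some-thin-page/"))
```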
Common causes
- Thin content - under ~300 words, mostly boilerplate, or template-generated pages with little unique value (this and the next cause can be screened programmatically - see the sketch after this list).
- Near-duplicate content - pages that are 80%+ similar to other indexed pages on the same or other domains.
- Low-demand topics - nobody searches for this, so Google sees no point in indexing it.
- AI-generated or spun content that lacks original insight, examples, or data.
- Pages with zero or very few internal links - Google interprets this as you not valuing the page either.
- Auto-generated tag, filter, or archive pages that list other pages without unique narrative.
- Localized or translated pages that just reshuffle the same content without real localization.
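The first two causes lend themselves to a quick automated screen. The sketch below assumes you already have the extracted main-content text for each URL from your own crawl; the 300-word and 80%-similarity thresholds mirror the rough figures above and are starting points, not rules. Pairwise comparison gets slow on large sites, where shingle-based hashing is a better fit.

```python
# Rough sketch: flag thin and near-duplicate pages from already-extracted page text.
# `pages` maps URL -> main-content text produced by your own crawler (assumption).
from difflib import SequenceMatcher
from itertools import combinations

MIN_WORDS = 300    # below this, treat the page as thin
DUP_RATIO = 0.80   # at or above this, treat two pages as near-duplicates

def audit(pages: dict[str, str]) -> None:
    # Flag thin pages by word count of the unique on-page text.
    for url, text in pages.items():
        words = len(text.split())
        if words < MIN_WORDS:
            print(f"THIN ({words} words): {url}")

    # Flag near-duplicate pairs; O(n^2), fine for a sample, slow for a whole site.
    for (url_a, text_a), (url_b, text_b) in combinations(pages.items(), 2):
        ratio = SequenceMatcher(None, text_a, text_b).ratio()
        if ratio >= DUP_RATIO:
            print(f"NEAR-DUPLICATE ({ratio:.0%}): {url_a} <-> {url_b}")

# Hypothetical example input.
audit({
    "https://example.com/red-widgets/": "Red widgets are great. Buy red widgets today.",
    "https://example.com/blue-widgets/": "Blue widgets are great. Buy blue widgets today.",
})
```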
How it affects indexing
The affected URLs are not indexed, do not rank, and receive zero organic traffic. Worse, a large number of crawled-not-indexed URLs acts as a negative signal about overall site quality - Google may start deprioritizing related URLs too. Sites with thousands of URLs in this bucket often see general ranking decay across the entire domain until the thin content is pruned or improved.
How to diagnose
Open the report in GSC and export the URL list. Group the URLs by pattern - are they all from one template, category, or content type? Pick ten sample URLs and open each one: is the unique on-page content under 300 words? Does it repeat large blocks from other pages? Is there real search demand for the primary query? Look for scraped or AI-generated patterns. Finally, run a few URLs through the URL Inspection tool to confirm the crawl succeeded and rule out other issues.
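A quick way to do the grouping step is to reduce each exported URL to its leading path segment and count how many fall under each prefix. The sketch below assumes the export is a CSV with a column named URL; adjust the file and column names to match what your export actually contains.

```python
# Sketch: group an exported "Crawled - currently not indexed" URL list by path
# pattern to spot the template or section producing most of the URLs.
import csv
from collections import Counter
from urllib.parse import urlparse

def pattern(url: str) -> str:
    """Reduce a URL to its first path segment, e.g. /tag/red-widgets/ -> /tag/."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    return f"/{segments[0]}/" if segments else "/"

# "crawled-not-indexed.csv" and the "URL" column name are assumptions about your export.
with open("crawled-not-indexed.csv", newline="", encoding="utf-8") as f:
    counts = Counter(pattern(row["URL"]) for row in csv.DictReader(f))

for prefix, count in counts.most_common(10):
    print(f"{count:5d}  {prefix}")
```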
How to fix
1. Audit the URL list - decide for each URL: keep and improve, consolidate, noindex, or 301 redirect.
2. For keep-and-improve: expand the content with unique data, examples, images, original research, or expert quotes.
3. For near-duplicates: consolidate with 301 redirects to the strongest version and strengthen that page (a redirect-map sketch follows this list).
4. For thin archive/tag/filter pages: add a noindex, follow robots meta tag if they should not rank but still help navigation.
5. Add internal links from high-authority pages to signal that the URL matters to you.
6. Improve E-E-A-T signals: author bio, publication date, sourcing, schema markup (see the JSON-LD sketch below).
7. Target queries with actual search volume - use keyword tools to confirm demand before publishing.
8. Request Indexing on the improved pages and wait 2-4 weeks for Google to reassess.
9. If the problem is a large pattern (thousands of URLs), prune aggressively - Google values 500 great pages over 5,000 mediocre ones.
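For steps 1 and 3, it helps to keep the audit decisions in a machine-readable file so the consolidation redirects can be generated rather than hand-typed. The sketch below assumes a hypothetical triage CSV with url, action, and target columns and writes Apache-style Redirect 301 lines; translate to your server's redirect syntax as needed.

```python
# Sketch: turn triage decisions (step 1) into an Apache redirect include (step 3).
# The triage.csv format (url, action, target) is hypothetical - it is whatever
# your audit spreadsheet exports.
import csv
from urllib.parse import urlparse

with open("triage.csv", newline="", encoding="utf-8") as f, \
     open("redirects.conf", "w", encoding="utf-8") as out:
    for row in csv.DictReader(f):               # columns: url, action, target
        if row["action"] == "redirect":
            old_path = urlparse(row["url"]).path
            new_path = urlparse(row["target"]).path
            out.write(f"Redirect 301 {old_path} {new_path}\n")
```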
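For the structured-data part of step 6, Article markup with an author and publication date is a common starting point. The sketch below just prints schema.org Article JSON-LD with placeholder values; swap in the real page data and embed the output in a script tag of type application/ld+json on the improved page.

```python
# Sketch for step 6: emit Article JSON-LD with author and publication dates.
# Every value here is a placeholder, not real page data.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline for the improved page",
    "datePublished": "2024-01-15",
    "dateModified": "2024-06-01",
    "author": {
        "@type": "Person",
        "name": "Jane Example",
        "url": "https://example.com/authors/jane-example/",
    },
}

print(json.dumps(article_schema, indent=2))
```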