By SitemapFixer Team
Updated April 2026

Google Search Console URL Inspection Tool: Complete Guide


The URL Inspection tool is the single most useful diagnostic in Google Search Console. It tells you whether a specific URL is in Google's index, exactly when Google last crawled it, what canonical Google chose, and what Googlebot actually rendered. This guide covers every field the tool returns, the difference between the indexed snapshot and the live test, how to use the URL Inspection API for bulk monitoring, and the diagnostic patterns that show up when something is wrong.

How to Access the URL Inspection Tool

The tool is built into every Search Console property. Open search.google.com/search-console, select the property you want to check, then paste the full URL into the search bar at the top of the page. The bar is permanent — it sits above the report navigation and is labelled "Inspect any URL in <your property>." Hit enter and Google fetches the indexed snapshot of that URL.

Two important rules. First, the URL must belong to the property you have selected. Pasting a URL from a different domain returns "URL is not on Google" with a hint that the URL is outside the property scope. Second, the tool only works for URLs you own — if you have not verified the property, you cannot inspect URLs on it.

You can also reach the tool from any GSC report. Click any URL in the Pages report, the Performance report, or the Sitemaps coverage drilldown, and the "Inspect URL" option appears in the right-hand panel.

What URL Inspection Returns

The result page is divided into status badges at the top and expandable sections below. Here is what each field means and why it matters:

URL is on Google / URL is not on Google — the headline status. "On Google" means the URL is currently indexed and can appear in search results. "Not on Google" means it is excluded for some reason explained in the Page Indexing section below.

Coverage — the indexing state with one of about 15 specific reasons (Submitted and indexed, Discovered – currently not indexed, Crawled – currently not indexed, Alternate page with proper canonical tag, etc.). This is the single most diagnostic field on the page.

Sitemaps — which sitemap (if any) the URL was submitted in. If this is empty for a URL you expect to be indexed, Google found it through internal links rather than your sitemap; not a problem, but worth confirming the URL is in your sitemap if it is important.

Referring page — one example of a page that links to this URL, used by Google as evidence of discoverability. If this is blank, the URL has no internal links and is technically orphaned, even if it is in your sitemap.

Last crawl — the timestamp of Google's most recent successful crawl of this URL. This is what your "URL is on Google" verdict is based on. If your last crawl is older than the date you fixed an issue, the indexed snapshot has not yet caught up.

Crawled as — Googlebot smartphone or Googlebot desktop. Almost every site is now crawled mobile-first; if you see desktop, your site is on the legacy desktop-first crawl, which is rare.

Crawl allowed? — Yes/No based on robots.txt. A "No" here with a URL that should be indexable means a robots.txt rule is blocking it.

Page fetch — Successful, Failed: not found (404), Failed: server error (5xx), or Failed: redirect error. The status of the actual HTTP fetch attempt.

Indexing allowed? — Yes/No based on the page's meta robots tag and X-Robots-Tag header. A "No" with an indexable page means a noindex directive is leaking from somewhere — often a staging environment template or a CMS plugin. (Both directives are easy to pre-check yourself; see the sketch after this list.)

User-declared canonical — the canonical URL declared in the page's rel=canonical tag (or HTTP header).

Google-selected canonical — the canonical URL Google actually chose for indexing, which may differ from the user-declared one. When these disagree, you have a canonical conflict.

Mobile usability, AMP, Breadcrumbs, Sitelinks searchbox — enhancement reports for the URL, only shown when relevant.
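
Two of these fields, "Crawl allowed?" and "Indexing allowed?", can be pre-checked from your own machine. Here is a minimal Python sketch, assuming the requests library and a placeholder URL; treat it as a sanity check rather than a substitute, since Googlebot may receive different responses than your client does:

# pip install requests
import re
import urllib.robotparser
import requests

URL = "https://example.com/blog/my-post"  # placeholder URL

# "Crawl allowed?" mirrors robots.txt
rp = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()
print("Crawl allowed:", rp.can_fetch("Googlebot", URL))

# "Indexing allowed?" mirrors meta robots plus the X-Robots-Tag header
resp = requests.get(URL, timeout=10)
noindex_header = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
noindex_meta = bool(re.search(
    r'<meta[^>]+name=["\']robots["\'][^>]*noindex', resp.text, re.I))
print("Indexing allowed:", not (noindex_header or noindex_meta))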

Indexed Status vs the Live Test Tab

This is the single most-misunderstood feature in URL Inspection. The default view shows the indexed snapshot — what Google has stored from its last crawl. The "Test Live URL" button (top-right) runs a fresh fetch right now. The two tabs almost always show different things.

Use the indexed snapshot when you want to know what Google currently believes about the URL. This is what affects rankings and impressions today. If it says "URL is on Google" with last crawl two weeks ago, that is your current ranking surface.

Use the Live Test when you want to know whether your most recent fix actually works. If you fixed a noindex tag yesterday, the indexed snapshot will still show "Indexing allowed: No" because Google has not recrawled — but Live Test fetches the URL fresh and confirms whether the fix is in place. After Live Test passes, click "Request Indexing" to nudge Google toward recrawling.

The button labels are easy to confuse. Beneath the indexed snapshot you see "View crawled page" — that opens the actual HTML, screenshot, and HTTP response Google has stored. Beneath the Live Test result you see "View tested page" — that opens the rendered HTML, screenshot, and response from the fresh fetch. Both are useful for different reasons; the crawled version tells you what Google is currently ranking, the tested version tells you what Google would see right now.

Inspecting Rendered HTML — What Googlebot Sees

This is the most underused feature in URL Inspection. After running a Live Test, click "View tested page" → "HTML" to see the raw HTML Googlebot ended up with after JavaScript execution and rendering. This is the single source of truth for "does Googlebot see my content?" questions.

What to check in the rendered HTML:

Main content presence. Search for a sentence from your visible page content. If it is missing from rendered HTML, your content is being injected by JavaScript that Googlebot did not execute (most often because of a fetch failure, a service worker bug, or content gated behind a user interaction).

Canonical tag. Search for rel="canonical". The canonical Googlebot sees in rendered HTML is what it uses for canonicalization decisions, regardless of what you have in your unrendered template.

Meta robots. Search for name="robots". If a noindex appears here that you did not author, find which JavaScript is injecting it (often staging-environment guard code that leaked into production).

Internal links. Use Cmd+F to count <a href occurrences. If your menu and footer links are missing, your navigation likely depends on JavaScript that did not execute, leaving the page orphaned in Google's link graph. (All four checks are easy to script; see the sketch at the end of this section.)

Then click the "Screenshot" tab. The screenshot is a real PNG of how Googlebot rendered the page. If it shows a blank screen, a cookie banner covering the content, or a layout shift that hides the main element, that is what Google ranks.

Finally check the "More info" tab. It shows JavaScript console messages and any failed resource loads. A failed fetch to your CMS API endpoint here explains an empty rendered HTML — Googlebot tried, the request failed, and you got nothing in the index.
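
If you copy the rendered HTML out of "View tested page", the four checks above take seconds to script. A minimal sketch, assuming the HTML is saved to a local file (the file name and the test sentence are placeholders):

import re

html = open("rendered.html", encoding="utf-8").read()  # saved from "View tested page"

# 1. Main content presence (use a real sentence from your page)
print("content present:", "a sentence from your visible copy" in html)

# 2. Canonical tag as Googlebot sees it post-render
canonical = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]*>', html, re.I)
print("canonical:", canonical.group(0) if canonical else "MISSING")

# 3. Injected noindex (simplistic regex; assumes name precedes content)
print("noindex present:", bool(re.search(
    r'<meta[^>]+name=["\']robots["\'][^>]*noindex', html, re.I)))

# 4. Internal link count
print("links:", len(re.findall(r'<a\s[^>]*href=', html, re.I)))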

Request Indexing — Flow and Rate Limits

The "Request Indexing" button below the inspection result asks Google to crawl the URL soon. The official flow:

1. Click "Request Indexing".
2. Google runs a quick live test (about 30–90 seconds) to confirm the URL is fetchable and not noindexed.
3. If checks pass, the URL enters a priority crawl queue.
4. Google typically recrawls within minutes to days, but with no SLA — popular sites recrawl faster.

The button does not guarantee indexing. It only guarantees Google will re-evaluate. If the URL has indexing issues (low quality, duplicate, soft 404), recrawling will not change that.

Rate limits are undocumented but well-observed. Most properties hit a daily ceiling of around 10–12 manual Request Indexing submissions, after which the button returns "Quota exceeded". The quota resets approximately every 24 hours. Submitting the same URL multiple times does not help — Google deduplicates within the queue.

Strategy: do not waste manual Request Indexing on bulk fixes. Use it for the 5–10 most important URLs after a fix, then rely on natural recrawl plus an updated sitemap submission for the rest.

The URL Inspection API for Bulk Checks

Google released the URL Inspection API in early 2022. It returns the same data as the manual tool, programmatically, with a quota of 2,000 inspections per property per day at 600 queries per minute (10 QPS). Critically, the API is read-only — you cannot trigger Request Indexing through it, only inspect status.

The endpoint is POST https://searchconsole.googleapis.com/v1/urlInspection/index:inspect. Here is a minimal request using a service account access token:

POST /v1/urlInspection/index:inspect HTTP/1.1
Host: searchconsole.googleapis.com
Authorization: Bearer ya29.A0ARrdaM...
Content-Type: application/json

{
  "inspectionUrl": "https://example.com/blog/my-post",
  "siteUrl": "sc-domain:example.com",
  "languageCode": "en-US"
}

siteUrl must match a verified property. For domain properties use sc-domain:example.com; for URL-prefix properties use the full URL prefix like https://www.example.com/. The inspectionUrl must fall under that property.
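
If your batch jobs mix properties, the scoping rule is simple enough to encode. A rough sketch (the helper name is ours, and it approximates rather than reproduces Google's exact matching):

from urllib.parse import urlparse

def property_covers(site_url: str, url: str) -> bool:
    # Domain properties cover every protocol and subdomain of the domain;
    # URL-prefix properties cover only URLs under the exact prefix.
    if site_url.startswith("sc-domain:"):
        domain = site_url.split(":", 1)[1]
        host = urlparse(url).hostname or ""
        return host == domain or host.endswith("." + domain)
    return url.startswith(site_url)

print(property_covers("sc-domain:example.com", "https://www.example.com/x"))  # True
print(property_covers("https://www.example.com/", "https://example.com/x"))   # False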

A successful response contains the same fields as the UI:

{
  "inspectionResult": {
    "inspectionResultLink": "https://search.google.com/search-console/inspect?...",
    "indexStatusResult": {
      "verdict": "PASS",
      "coverageState": "Submitted and indexed",
      "robotsTxtState": "ALLOWED",
      "indexingState": "INDEXING_ALLOWED",
      "lastCrawlTime": "2026-04-22T08:14:33Z",
      "pageFetchState": "SUCCESSFUL",
      "googleCanonical": "https://example.com/blog/my-post",
      "userCanonical": "https://example.com/blog/my-post",
      "sitemap": ["https://example.com/sitemap.xml"],
      "referringUrls": ["https://example.com/blog"],
      "crawledAs": "MOBILE"
    },
    "mobileUsabilityResult": { "verdict": "PASS" },
    "richResultsResult": {
      "verdict": "PASS",
      "detectedItems": [
        { "richResultType": "Article", "items": [{ "name": "My Post" }] }
      ]
    }
  }
}

Common verdict values: PASS, FAIL, PARTIAL, NEUTRAL, VERDICT_UNSPECIFIED. Common coverageState values match the indexing reasons in the UI: Submitted and indexed, Crawled - currently not indexed, Discovered - currently not indexed, Alternate page with proper canonical tag, Duplicate, Google chose different canonical than user, Soft 404, Page with redirect.
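
When consuming this response in code, two checks cover most of the diagnostics later in this guide: a verdict other than PASS, and a mismatch between googleCanonical and userCanonical. A minimal sketch (the helper name is ours):

def triage(status: dict) -> list[str]:
    # status is the indexStatusResult dict from the API response
    alerts = []
    if status.get("verdict") != "PASS":
        alerts.append(f"not indexed: {status.get('coverageState')}")
    google, user = status.get("googleCanonical"), status.get("userCanonical")
    if google and user and google != user:
        alerts.append(f"canonical conflict: declared {user}, Google chose {google}")
    return alerts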

API Authentication with a Service Account

Setting up service-account auth takes about 10 minutes and is the right path for any automated monitoring:

1. In the Google Cloud Console, create a new project (or pick an existing one).
2. Enable the "Google Search Console API".
3. Go to IAM & Admin → Service Accounts → Create Service Account. Name it something like gsc-inspector.
4. Generate a JSON key for the account and download it.
5. In Search Console, open your property → Settings → Users and permissions → Add user. Add the service account email (looks like gsc-inspector@your-project.iam.gserviceaccount.com) with at least Full permission — Restricted will not work for the Inspection API.

The JSON key contains everything you need to authenticate. Treat it like a password — never commit it to a public repo, and rotate it every 90 days.
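
Before running batches, it is worth confirming the grant took effect by listing the properties the service account can see. The sites.list method belongs to the same Search Console API; a short sketch using the same client libraries as the script below:

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "gsc-inspector.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
svc = build("searchconsole", "v1", credentials=creds)

# Each entry carries siteUrl and permissionLevel; your property should
# appear here before the Inspection API will accept requests for it.
for site in svc.sites().list().execute().get("siteEntry", []):
    print(site["siteUrl"], site["permissionLevel"])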

Batch Monitoring Script (Python)

Here is a working Python script that reads a list of URLs from a CSV, inspects each through the API, and writes results back to a CSV. It throttles to stay under the 600 QPM limit and handles the daily quota by stopping cleanly at 2,000 inspections.

# pip install google-auth google-api-python-client
import csv, time
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

SERVICE_ACCOUNT_FILE = "gsc-inspector.json"
SITE_URL = "sc-domain:example.com"
INPUT_CSV = "urls.csv"
OUTPUT_CSV = "results.csv"
DAILY_LIMIT = 2000
SLEEP_SECONDS = 0.12  # ~500 QPM, under the 600 cap

creds = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE,
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
svc = build("searchconsole", "v1", credentials=creds)

def inspect(url):
    body = {
        "inspectionUrl": url,
        "siteUrl": SITE_URL,
        "languageCode": "en-US",
    }
    return svc.urlInspection().index().inspect(body=body).execute()

with open(INPUT_CSV) as fin, open(OUTPUT_CSV, "w", newline="") as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout)
    writer.writerow(["url", "verdict", "coverage", "last_crawl",
                     "google_canonical", "user_canonical", "indexing"])
    for i, row in enumerate(reader):
        if i >= DAILY_LIMIT:
            print("Daily quota reached, stopping.")
            break
        url = row[0]
        try:
            res = inspect(url)["inspectionResult"]["indexStatusResult"]
            writer.writerow([
                url,
                res.get("verdict"),
                res.get("coverageState"),
                res.get("lastCrawlTime"),
                res.get("googleCanonical"),
                res.get("userCanonical"),
                res.get("indexingState"),
            ])
        except HttpError as e:
            writer.writerow([url, "ERROR", str(e), "", "", "", ""])
        time.sleep(SLEEP_SECONDS)
print("Done.")

For Node.js, the equivalent uses googleapis:

// npm install googleapis
import { google } from "googleapis";
import fs from "fs";

const auth = new google.auth.GoogleAuth({
  keyFile: "gsc-inspector.json",
  scopes: ["https://www.googleapis.com/auth/webmasters.readonly"],
});
const sc = google.searchconsole({ version: "v1", auth });

const SITE = "sc-domain:example.com";
const urls = fs.readFileSync("urls.csv", "utf8").trim().split("\n");
const out = fs.createWriteStream("results.csv");
out.write("url,verdict,coverage,last_crawl,google_canonical\n");

for (const url of urls.slice(0, 2000)) {
  try {
    const r = await sc.urlInspection.index.inspect({
      requestBody: { inspectionUrl: url, siteUrl: SITE, languageCode: "en-US" },
    });
    const s = r.data.inspectionResult.indexStatusResult;
    out.write([
      url, s.verdict, s.coverageState, s.lastCrawlTime, s.googleCanonical,
    ].join(",") + "\n");
  } catch (e) {
    out.write([url, "ERROR", e.message].join(",") + "\n");
  }
  await new Promise((r) => setTimeout(r, 120));
}

Comparison: curl vs API vs Manual

For one-off debugging from the terminal, a raw curl call works once you have an OAuth or service-account access token:

# Get an access token via Application Default Credentials
# (e.g. with GOOGLE_APPLICATION_CREDENTIALS pointing at the JSON key)
ACCESS_TOKEN=$(gcloud auth application-default print-access-token)

# Inspect a single URL
curl -X POST \
  "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "inspectionUrl": "https://example.com/blog/my-post",
    "siteUrl": "sc-domain:example.com",
    "languageCode": "en-US"
  }' | jq '.inspectionResult.indexStatusResult'

# Compare manual tool vs API for the same URL
echo "Open: https://search.google.com/search-console/inspect?resource_id=sc-domain:example.com&id=https://example.com/blog/my-post"

Choose the manual tool when you want to inspect rendered HTML, view the screenshot, or click Request Indexing. Choose the API when you want to monitor more than ~10 URLs at a time, build dashboards, or detect indexing regressions before they show up in GSC reports.

Common Diagnostics by Coverage State

Once you know the coverageState string, the fix path is usually well-defined:

Discovered – currently not indexed. Google found the URL (in your sitemap or via a link) but has not crawled it yet. Cause is almost always crawl-budget pressure: too many low-value URLs competing for crawls. Reduce noise (remove parameter variants from sitemap, fix orphaned URLs, kill dead categories), then resubmit the sitemap.

Crawled – currently not indexed. Google did crawl it and chose not to index. Cause is content quality: thin content, near-duplicate of another page, or low value relative to other site pages. Fix the content, do not just resubmit.

Alternate page with proper canonical tag. Not an error — this is Google correctly recognising the page as a duplicate and using the canonical you specified. No action needed unless you actually want this URL indexed (in which case fix the canonical).

Duplicate, Google chose different canonical than user. You declared canonical A; Google chose canonical B. Investigation: check the rendered HTML on both URLs, compare internal link counts, and confirm your sitemap is consistent with your canonicals. See our alternate page with proper canonical tag guide for the full diagnostic flow.

Soft 404. The URL returns 200 OK but the page content looks like an error or empty state to Google (think "No products found" on a category page). Fix: either return a proper 404 status when content is empty, or populate the page with real content.

Page with redirect. The URL is redirecting somewhere else. The redirect target is what gets indexed. Confirm the target is correct and remove the redirected URL from your sitemap if it is still there.
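
For batch annotation, these fix paths reduce to a lookup table. A sketch, with action strings paraphrasing this section; exact coverageState strings can differ slightly between the UI and the API, so match defensively:

FIX_PATHS = {
    "Discovered - currently not indexed": "reduce crawl waste, resubmit sitemap",
    "Crawled - currently not indexed": "improve content quality",
    "Alternate page with proper canonical tag": "usually no action",
    "Duplicate, Google chose different canonical than user": "investigate canonical conflict",
    "Soft 404": "return a real 404 or add real content",
    "Page with redirect": "verify target, drop URL from sitemap",
}

def fix_path(coverage_state: str) -> str:
    return FIX_PATHS.get(coverage_state, "inspect manually")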

For a deep dive on why pages stay in "Discovered" or "Crawled" states for weeks, see our why pages aren't indexed guide.

Putting It All Together — A Daily Monitoring Pattern

The most effective use of URL Inspection in production:

Run a daily batch inspection of your top 200–500 URLs (priority pages plus anything modified in the last 30 days). Flag any URL whose coverageState changed from yesterday — especially transitions like Submitted and indexed → Crawled - currently not indexed, which signal a regression. Also flag any URL whose googleCanonical differs from userCanonical, which signals a canonical conflict.
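
A sketch of that diff step, assuming yesterday's and today's CSVs come from the batch script above (file names are placeholders):

import csv

def load(path):
    with open(path) as f:
        return {row["url"]: row for row in csv.DictReader(f)}

yesterday = load("results-yesterday.csv")
today = load("results-today.csv")

for url, row in today.items():
    prev = yesterday.get(url)
    if prev and prev["coverage"] != row["coverage"]:
        print(f"coverage change: {url}: {prev['coverage']} -> {row['coverage']}")
    if row["google_canonical"] and row["google_canonical"] != row["user_canonical"]:
        print(f"canonical conflict: {url}")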

Reserve manual URL Inspection for two cases: deep-diving an alert flagged by the batch job, and verifying a fix immediately after deployment using the Live Test tab.

That cadence catches indexing regressions days before they show up in the GSC Pages report aggregates, which lag by 2–7 days. For a complete walkthrough of GSC reports beyond URL Inspection, see our Search Console tutorial; for everything the API can do, see the GSC API guide.
