By SitemapFixer Team
Updated April 2026

How to Remove a URL From Google Search Console


There are two questions hiding inside "how do I remove a URL from Google Search Console." The first is fast: how do I make a URL stop appearing in Google Search results right now? The second is slow: how do I remove the URL from Google's index permanently? The Removals tool answers the first question. The second one requires a server-side change — noindex, X-Robots-Tag, 410 Gone, or password protection — combined with patience while Google recrawls. This guide walks through both, in order, with the exact directives you need.

Step 1: Use the GSC Removals Tool for Fastest Temporary Removal

The fastest way to make a URL disappear from Google Search results is the Removals tool inside Google Search Console. It is not a permanent removal — Google describes it as a temporary block that lasts approximately 6 months — but it is the only mechanism that takes effect within roughly 24 hours. Every other approach (noindex, 410, password protection) requires Google to recrawl the URL before it acts, which can take days to weeks.

To submit a removal request, open Search Console for the property that owns the URL, click Indexing → Removals in the left navigation, then click the red New Request button. You will see two tabs: "Temporarily remove URL" and "Outdated content." Stay on the first tab. Paste the exact URL you want hidden, choose whether to remove just that URL or all URLs starting with that prefix, and submit. The request status will appear in the table — usually Processing for a few hours, then Approved once it takes effect.

Two things to remember about the Removals tool: first, it only suppresses the URL from Google Search — the page still exists on your server, still gets crawled, and still accumulates link signals. Second, after the 6-month window expires, the URL will reappear in search results unless you have also implemented a permanent block (covered in steps 3 through 6). Treat the Removals tool as a stopgap, never as the final fix.

The Removals Screen: What You Will See

The Removals report has three tabs along the top:

Temporary Removals — your active and historical removal requests, with status (Processing, Approved, Denied, Expired, Cancelled). Each row shows the URL, request type (URL only or with prefix), submission date, and an option to cancel.

Outdated Content — public-facing reports submitted by anyone (not just property owners) for URLs that show stale or removed content in Google's cache. Different tool, different purpose: covered in step 2.

SafeSearch Filtering — adult-content reports submitted by users that flag a URL for SafeSearch suppression. You will rarely use this tab unless your site has been incorrectly flagged.

If your removal request is denied, the most common reasons are: the URL does not exist in Google's index in the first place (so there is nothing to remove), the property does not own the URL (subdomains and protocols matter — https://www.example.com/page is a different property than https://example.com/page), or the URL syntax is malformed (extra spaces, missing protocol).

Step 2: The Outdated Content Tool Is Different — Use It for Cached Snippets

The Outdated Content tool is frequently confused with the Removals tool, but it solves a narrower problem: when a page on your site (or someone else's site) has been updated or deleted, but Google's cached version still shows the old content. The Outdated Content tool requests that Google refresh the cached snippet, not that it remove the URL.

Use Outdated Content when: the page still exists but you removed sensitive text from it (and Google's snippet still shows that text), the page has been deleted and now returns 404 but still appears in results with the old description, or the page has been updated and the search snippet shows pre-update content. Critically, the Outdated Content tool works on URLs you do not own — anyone with a Google account can submit a request for any public URL, which is the legal mechanism for getting third-party caches refreshed when a site has removed content but Google has not picked up the change.

Submit Outdated Content requests at search.google.com/search-console/remove-outdated-content. Google verifies that the live page actually differs from the cached version before approving — if the live page still shows the content, the request is denied.

Step 3: Permanent Removal Method 1 — The noindex Meta Tag

The noindex meta tag is the standard way to permanently remove a page from Google's index. You add it to the <head> of the HTML, Google recrawls the page, sees the directive, and drops the URL from the index.

<!-- Add inside <head> on any page you want de-indexed -->
<meta name="robots" content="noindex, follow">

<!-- Or, to block only Googlebot specifically -->
<meta name="googlebot" content="noindex, follow">

<!-- "follow" tells Google to still crawl outbound links on this page.
     Use "noindex, nofollow" only if you also want to block link-following. -->

The noindex meta tag is the right choice when: the page should remain accessible to humans (e.g., a thank-you page, a login screen, a duplicate variant), the page must continue to pass link signals through outbound links, or you cannot modify HTTP response headers but can edit the HTML head.

Critical pitfall: noindex only works if Google can crawl the page. If you also have a Disallow: rule for the same URL in robots.txt, Googlebot will never fetch the page, will never see the noindex, and the page will stay indexed indefinitely. Always remove the robots.txt block before relying on noindex.
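
Before waiting on a recrawl, it is worth verifying both conditions at once: that the URL is crawlable (not blocked in robots.txt) and that it actually serves a noindex. The sketch below is a minimal check using only Python's standard library; the site and URL are placeholders, and the meta-tag check is a crude string match rather than a full HTML parse.

# Minimal sketch: is the URL crawlable by Googlebot AND serving a noindex?
# SITE and URL are placeholders -- substitute your own.
import urllib.request
import urllib.robotparser

SITE = "https://example.com"
URL = SITE + "/private-page/"

# 1. robots.txt check: if Googlebot is disallowed, it will never see the noindex.
rp = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
rp.read()
crawlable = rp.can_fetch("Googlebot", URL)

# 2. Directive check: look for noindex in the header or (crudely) in the HTML.
req = urllib.request.Request(URL, headers={"User-Agent": "noindex-check"})
with urllib.request.urlopen(req) as resp:
    header_noindex = "noindex" in (resp.headers.get("X-Robots-Tag") or "").lower()
    meta_noindex = "noindex" in resp.read(200_000).decode("utf-8", "replace").lower()

print("crawlable by Googlebot:", crawlable)
print("noindex served:", header_noindex or meta_noindex)
if not crawlable:
    print("WARNING: robots.txt blocks this URL, so Google cannot see the noindex.")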

Step 4: Permanent Removal Method 2 — X-Robots-Tag HTTP Header

The X-Robots-Tag header is the noindex equivalent for non-HTML resources — PDFs, images, videos, JSON files, anything where you cannot embed a meta tag in a <head>. It is also the right choice for HTML pages when you want to manage indexing rules at the server level rather than in templates.

# Apache (.htaccess) — noindex all PDFs in a directory
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>

# nginx — noindex a specific URL pattern
location ~* /private/ {
  add_header X-Robots-Tag "noindex, nofollow" always;
}

# Express / Node.js
res.setHeader('X-Robots-Tag', 'noindex, nofollow');

# Sample raw response header
HTTP/1.1 200 OK
X-Robots-Tag: noindex, nofollow
Content-Type: text/html

X-Robots-Tag is the cleanest permanent-removal mechanism in most production environments. You can apply it to entire URL patterns from the server config without touching application code, you can target specific bots (e.g., X-Robots-Tag: googlebot: noindex), and you can combine multiple directives (noindex, noarchive, nosnippet) in one header.
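
Whichever layer you set it at, confirm the header is actually reaching clients before waiting on Google. A minimal check using Python's standard library; the PDF URL is a placeholder.

# Minimal sketch: confirm an asset serves an X-Robots-Tag header.
# The URL is a placeholder -- point it at the PDF/image/page you are de-indexing.
import urllib.request

url = "https://example.com/reports/example.pdf"
req = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(req) as resp:
    tag = resp.headers.get("X-Robots-Tag")

if tag and "noindex" in tag.lower():
    print(f"OK: serving X-Robots-Tag: {tag}")
else:
    print(f"Missing or incomplete X-Robots-Tag ({tag!r}) -- check the server config")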

Step 5: Permanent Removal Method 3 — 410 Gone HTTP Response

If the page is genuinely deleted and is never coming back, the right HTTP status is 410 Gone, not 404 Not Found. Both eventually cause de-indexing, but 410 explicitly tells Google "this URL is permanently gone, do not try again." Google de-indexes 410 URLs faster than 404 URLs — typically within 1–2 weeks of recrawl, versus 4+ weeks for 404 (Google retries 404s assuming they may be temporary).

# Sample 410 Gone response
HTTP/1.1 410 Gone
Content-Type: text/html
Cache-Control: no-cache

<!DOCTYPE html>
<html><head><title>Page Removed</title></head>
<body><h1>This page has been permanently removed.</h1></body>
</html>

# nginx — return 410 for a specific URL
location = /old-product { return 410; }

# nginx — return 410 for an entire deleted directory
location ^~ /retired-section/ { return 410; }

# Apache (.htaccess)
RewriteEngine On
RewriteRule ^old-product/?$ - [G]   # G flag = 410 Gone

# Express / Node.js
app.get('/old-product', (req, res) => res.status(410).send('Gone'));

Use 410 when: the page is deleted forever, the URL was a mistake, an account was closed, or a product was discontinued with no replacement. Do not use 410 if you plan to redirect — use 301 to the replacement instead. Do not use 410 if you intend to bring the page back later — Google may take longer to re-discover it.
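
Deleted sections have a way of ending up half 404 and half 410, or, worse, serving a soft-200 error page, which slows de-indexing. A short loop over the deleted URLs catches this early; the sketch below uses placeholder URLs and Python's standard library.

# Minimal sketch: verify deleted URLs return 410 rather than 404 or a soft 200.
# URLs are placeholders.
import urllib.error
import urllib.request

deleted_urls = [
    "https://example.com/old-product",
    "https://example.com/retired-section/widget-a",
]

for url in deleted_urls:
    try:
        with urllib.request.urlopen(url) as resp:
            # A 2xx means the "deleted" page is still being served (soft error).
            print(f"{resp.status}  {url}  <-- still serving content")
    except urllib.error.HTTPError as e:
        note = "OK" if e.code == 410 else "expected 410"
        print(f"{e.code}  {url}  ({note})")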

Step 6: Permanent Removal Method 4 — Password Protection

Password protection (HTTP Basic Auth, application-level login walls, or any 401 Unauthorized response to unauthenticated requests) is the strongest possible signal that a URL should not be in the public index. Googlebot cannot bypass authentication, so it cannot crawl the content and the URL is dropped from the index quickly.

Password protection is appropriate when: the page contains genuinely private content that must remain accessible to authorized users, you cannot delete the page but must remove all public access, or you are migrating a staging site that was accidentally indexed (this happens often). It is not appropriate for content you want public but de-indexed — that is what noindex is for.
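
Because Googlebot crawls anonymously, the practical test is simply that an unauthenticated request is refused. A minimal check with a placeholder staging URL, using Python's standard library:

# Minimal sketch: confirm an unauthenticated request (what Googlebot sends) is refused.
# The URL is a placeholder for your protected or staging page.
import urllib.error
import urllib.request

url = "https://staging.example.com/"

try:
    with urllib.request.urlopen(url) as resp:
        print(f"{resp.status}: publicly readable -- Google can still crawl and index this")
except urllib.error.HTTPError as e:
    if e.code in (401, 403):
        print(f"{e.code}: refused without credentials -- Google will drop the URL")
    else:
        print(f"{e.code}: unexpected status -- verify the auth wall is in place")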

Choosing between the methods: for a deleted page → 410. For a private-but-existing page → password protection. For a public page that should not be in search → noindex meta tag (HTML) or X-Robots-Tag (assets). For multi-URL bulk control from server config → X-Robots-Tag.

What NOT to Do: robots.txt Disallow Alone Will Not Remove a URL

This is the single most common mistake with URL removal. Site owners add a Disallow: rule in robots.txt expecting it to remove the URL. It does not. A robots.txt Disallow rule prevents Googlebot from crawling the URL — it does nothing about the URL's presence in the index. Pages already in the index stay indexed, often with the snippet replaced by "No information is available for this page."

# robots.txt — DO NOT rely on this alone for de-indexing
User-agent: *
Disallow: /private-page/

# What this actually does:
# 1. Prevents future crawls of /private-page/
# 2. BLOCKS Google from seeing any noindex tag on the page
# 3. Leaves the URL indexed indefinitely if it was already in the index
# 4. Strips the snippet but keeps the title and URL in search results

# CORRECT: allow crawling AND serve a noindex directive
User-agent: *
Allow: /private-page/
# Then on the page itself:
# <meta name="robots" content="noindex">

The exception is image/PDF/asset URLs that have never been crawled — for those, a robots.txt Disallow added before Google ever finds the URL prevents indexing. But for any URL already in the index, robots.txt Disallow without a noindex is actively harmful.

Removing Images From Google Images (Different Rules Apply)

Removing images from Google Images is a separate problem from removing pages. Critically, do not use a noindex meta tag on the page hosting the image — that will de-index the entire page, not just the image. The correct approaches are noimageindex or a robots.txt rule targeting the image asset.

# Method 1: noimageindex on the page (page stays indexed,
# but its images are dropped from Google Images)
<meta name="robots" content="noimageindex">

# Method 2: robots.txt Disallow targeting the image file
User-agent: Googlebot-Image
Disallow: /uploads/sensitive-photo.jpg

# Method 3: X-Robots-Tag on the image asset itself
# (Apache .htaccess)
<FilesMatch "\.(jpg|jpeg|png|gif|webp)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>

# Method 4: GSC Removals tool — submit the direct image URL
# (e.g., https://example.com/uploads/photo.jpg)
# Acts as a temporary 6-month block while permanent rule propagates

For images on third-party sites that you do not control, you cannot add directives. Your options are: ask the site owner to remove or block the image, file a copyright (DMCA) notice with Google if the image violates your rights, or file a legal removal request through Google's reporting tools if the image qualifies under explicit-content, doxxing, or non-consensual imagery policies.

Removing Personal Information (PII) and "Right to Be Forgotten"

Google operates two parallel pipelines outside Search Console for removing personal information — these work even when you do not own the URL:

Personal information removal request (global, no jurisdiction required): submit at support.google.com/websearch/troubleshooter/9685456. Eligible categories include doxxing content (home address, phone, email exposed against your wishes), government ID numbers, bank account or credit card numbers, login credentials, medical records, signatures, and explicit non-consensual imagery. Google reviews each request manually; approved requests remove the URL from search results globally.

EU "Right to Be Forgotten" request (EU/EEA/UK residents only, under GDPR Article 17): submit at support.google.com/legal/contact/lr_eudpa. Google evaluates each request against a public-interest test — public figures, journalistic content, and matters of legitimate public concern are typically rejected; outdated, irrelevant, or excessive personal information about private individuals is typically approved. Approved removals only suppress the URL from European Google domains; the URL remains visible on google.com.

Both pipelines remove URLs from search results without modifying the source page. If you also own the page, combine the legal request with a noindex or 410 — that way the URL is removed regardless of which pipeline takes effect first.

Removals Tool Limitations and Bulk Removal via the API

The GSC Removals tool only works for properties you have verified ownership of. You cannot use it to remove URLs from sites you do not control — that is what the Outdated Content and Personal Information tools are for. The prefix removal option ("Remove all URLs with this prefix") lets you wildcard-remove an entire directory, but it counts as a single request and the prefix must be exact (no wildcards inside the prefix).

For bulk removal across hundreds or thousands of URLs, the Search Console API exposes a URL Inspection endpoint and a Sitemaps endpoint, but does not currently expose a Removals endpoint — bulk programmatic submission of removal requests is not officially supported. Workarounds:

Bulk de-index via sitemap manipulation: create a temporary sitemap listing all the URLs you want de-indexed, ensure each URL serves either a noindex or a 410 response, and submit the sitemap. Google will recrawl the listed URLs sooner than it would through natural discovery, see the directive, and de-index them. Once GSC reports the URLs as removed, delete the temporary sitemap. A small generator sketch follows after these workarounds.

Browser automation for the Removals UI: tools like Puppeteer or Playwright can iterate over a list of URLs and submit each one through the Removals form. This is fragile (UI changes break it, Google may rate-limit aggressive submission), but it is the only mechanism to bulk-submit Removals requests today. Limit submissions to a few hundred per day to avoid triggering anti-abuse responses.
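
A temporary sitemap for this purpose is just a flat list of <url> entries. The generator sketch below uses only Python's standard library; the file name and URLs are placeholders.

# Minimal sketch: build a temporary sitemap of URLs you want Google to recrawl.
# Each listed URL must already serve noindex or 410. Names are placeholders.
from xml.sax.saxutils import escape

urls_to_deindex = [
    "https://example.com/old-product",
    "https://example.com/retired-section/widget-a",
]

entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls_to_deindex)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

with open("deindex-sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
# Upload the file to the site root, submit it under GSC -> Sitemaps, and delete it
# once the URLs have dropped out of the index.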

For most sites, the right bulk strategy is to fix the root cause server-side (add noindex headers via X-Robots-Tag pattern matching, return 410 for deleted directories) and let natural recrawl propagate the change. Use the manual Removals tool only for the highest-priority URLs that need to disappear within 24 hours.

Timeline: How Long Until the URL Is Actually Gone

GSC Removals tool (temporary): URL hidden from search results within 24 hours of approval. Lasts approximately 6 months, then automatically expires.

410 Gone: URL de-indexed within 1–2 weeks for high-traffic pages, 3–6 weeks for low-priority URLs. Faster than 404 because Google does not retry.

noindex meta tag or X-Robots-Tag: URL de-indexed on next recrawl. For high-priority pages this can be 1–3 days; for low-priority deep pages, 4–8 weeks. You can speed this up by submitting the URL via URL Inspection → Request Indexing — counterintuitive, but it triggers Google to recrawl, see the noindex, and drop the URL faster.

Password protection (401): URL de-indexed within 1–2 weeks of next crawl attempt.

Personal information removal: Google typically responds within a few days to a few weeks; approved removals propagate globally within 24–72 hours of approval.

The combined recipe for fastest permanent removal: submit a Removals tool request (24-hour suppression) + apply server-side noindex or 410 + submit URL Inspection "Request Indexing" to trigger recrawl. By the time the 6-month Removals window expires, the permanent block has long since taken effect.
