Google Crawl Rate Limiter: When and How to Slow Down Googlebot
What Is the Crawl Rate Limiter?
The crawl rate limiter is a setting in Google Search Console that lets you ask Google to slow down how frequently Googlebot fetches pages from your site. It is a request you send to Google — not a technical enforcement mechanism — but Google generally honors it.
By default, Google's algorithms automatically determine the appropriate crawl rate for your site based on your server's response times and the value of your content. If your server responds quickly, Google will crawl faster. If it responds slowly or returns errors, Google backs off automatically. The crawl rate limiter is an override that lets you manually set a lower ceiling on this rate when you need to protect your server.
The setting is available in Google Search Console under Settings > Crawl rate. There are two modes: "Let Google optimize for my site" (the default and recommended setting for most sites) and "Limit Google's maximum crawl rate" (the manual override). When you set a limit, it applies for 90 days and then reverts to automatic mode.
When to Use the Crawl Rate Limiter
The crawl rate limiter is appropriate in a narrow set of circumstances. Most sites should never need to use it because Google's automatic rate management works well in normal conditions.
Use the crawl rate limiter when:
- Googlebot is causing server overload: If you can see in your server logs that Googlebot crawling spikes coincide with high CPU usage, memory pressure, or increased response times for real users, throttling Googlebot can protect your user experience (see the log-analysis sketch after this list). This is most common on smaller servers with limited capacity.
- During a planned infrastructure migration: If you are moving servers, databases, or CDNs, temporarily reducing Googlebot crawl rate can reduce load on the migrating infrastructure during the transition window.
- When you have large amounts of content being rebuilt: If you are regenerating thousands of pages (such as rebuilding a static site from scratch or migrating a CMS), a heavy Googlebot crawl hitting pages mid-generation can result in Google indexing incomplete content. Slowing Googlebot temporarily while the build completes reduces this risk.
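If you are not sure whether Googlebot is behind the load spikes, your access logs are the place to check. The following is a minimal sketch, assuming a combined-format access log at a hypothetical path access.log; it counts Googlebot requests per minute so the spikes can be lined up against your CPU or response-time graphs. It matches on the user-agent string only, which is spoofable, so a rigorous check would also verify the requesting IPs (for example with a reverse DNS lookup).

```python
import re
from collections import Counter

# Minimal sketch: count Googlebot hits per minute from a combined-format
# access log. "access.log" and the timestamp format are assumptions --
# adjust both to match your server's configuration.
LOG_PATH = "access.log"
# e.g. 66.249.66.1 - - [10/May/2024:13:55:36 +0000] "GET /page HTTP/1.1" 200 ...
TIMESTAMP_RE = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2})")

hits_per_minute = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Googlebot" not in line:   # user-agent match only; can be spoofed
            continue
        match = TIMESTAMP_RE.search(line)
        if match:
            hits_per_minute[match.group(1)] += 1   # key: day/mon/year:HH:MM

# Print the busiest minutes so they can be compared with server metrics.
for minute, count in hits_per_minute.most_common(10):
    print(f"{minute}  {count} Googlebot requests")
```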
Do not use the crawl rate limiter to try to speed up indexing of new content: it can only reduce the crawl rate, not increase it. And do not use it as a substitute for fixing underlying server performance issues.
How to Set the Crawl Rate in Google Search Console
To access the crawl rate limiter in Google Search Console:
- Open Google Search Console and select the property you want to configure
- Click Settings in the left sidebar
- Scroll to the "Crawl rate" section
- Click the pencil/edit icon to change the setting
- Select "Limit Google's maximum crawl rate"
- Use the slider to set the maximum number of requests per second you want to allow
- Click Save
The slider lets you set the maximum number of requests per second you want Googlebot to make; the exact range offered varies by property. For most shared hosting environments, a limit of 1-2 requests per second during high-load periods is a reasonable starting point. After 90 days, the setting automatically reverts to "Let Google optimize for my site."
Note: the crawl rate setting is per property and only affects the host you configure. If your site spans multiple subdomains, each verified as a separate property, you need to set the crawl rate individually for each one in Search Console.
Crawl Rate Limiter vs Crawl Budget: Key Difference
These two concepts are frequently confused. The crawl rate limiter controls how fast Googlebot makes requests to your server: the number of simultaneous connections and requests per second. Crawl budget is how many URLs Googlebot can and wants to crawl across your site in a given time period.
Reducing the crawl rate does not increase your crawl budget — it actually reduces the effective number of pages Googlebot can crawl in a given time window. If your site has 500,000 pages and Googlebot can only request 1 page per second (due to your rate limit), it would take over 5 days to crawl your entire site once. For sites with large content footprints, aggressively limiting crawl rate can mean some pages are not recrawled frequently enough to reflect content updates.
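The arithmetic behind that figure is straightforward: pages divided by requests per second gives the minimum time for one full crawl pass. A small illustrative calculation, using example page counts and rates rather than real measurements:

```python
# Rough illustration: how long one full crawl pass takes at a given
# maximum crawl rate. The numbers are example values, not measurements.
PAGES = 500_000

for requests_per_second in (1, 2, 5, 10):
    seconds = PAGES / requests_per_second
    days = seconds / 86_400  # seconds per day
    print(f"{requests_per_second:>3} req/s -> {days:.1f} days per full crawl")
```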
Crawl budget is a function of your site's overall authority, server responsiveness, and content quality. The crawl rate limiter only affects the speed at which Googlebot consumes that budget — it does not change the total budget itself.
Crawl-Delay in robots.txt: Does Google Respect It?
The Crawl-delay directive is a robots.txt extension supported by many crawlers (Bing, Yandex, and others). It looks like this:
```
User-agent: *
Crawl-delay: 10

User-agent: bingbot
Crawl-delay: 5
```
Google does not respect the Crawl-delay directive. Google has explicitly stated this in their documentation — Googlebot ignores Crawl-delay entirely. If you want to control Googlebot's crawl speed, you must use the Google Search Console crawl rate setting described above. The Crawl-delay directive will still work for other crawlers that support it (Bing, Yandex, etc.).
This distinction matters in practice: if you add Crawl-delay to robots.txt expecting it to reduce Googlebot server load, it will have no effect. You need to use the Search Console setting for Google specifically.
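The broader point is that Crawl-delay is only a convention that a crawler must choose to read and honor. As an illustration, Python's standard-library robots.txt parser exposes the directive to any crawler that asks for it, which is a quick way to see what value polite bots would observe on your site; the URL below is a placeholder.

```python
from urllib.robotparser import RobotFileParser

# Sketch: read a site's robots.txt and report the Crawl-delay a
# well-behaved crawler would see. "https://example.com" is a placeholder.
parser = RobotFileParser("https://example.com/robots.txt")
parser.read()

for agent in ("*", "bingbot", "Googlebot"):
    delay = parser.crawl_delay(agent)   # None if no Crawl-delay applies
    print(f"{agent}: Crawl-delay = {delay}")

# Whatever value is reported for Googlebot here, Googlebot itself ignores
# the directive; only crawlers that implement it actually slow down.
```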
How Googlebot Actually Determines Crawl Rate
In automatic mode (when you have not set a manual limit), Googlebot determines its crawl rate using a continuous feedback loop based on your server's responses (a simplified model is sketched after this list):
- Response time: If your server responds quickly and consistently, Googlebot gradually increases its crawl rate. If responses slow down, it backs off.
- Error rate: If Googlebot starts seeing 5xx errors (server errors), it immediately reduces crawl rate and may pause crawling to let your server recover.
- Connection success rate: If connections are timing out or being refused, Googlebot interprets this as a capacity signal and reduces rate accordingly.
- Historical patterns: Google learns your server's typical capacity over time and uses this to calibrate its baseline crawl rate for your domain.
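The loop behaves roughly like a controller that probes upward while responses stay fast and healthy, and backs off sharply on errors. The sketch below is only an illustrative model of that behavior, not Google's actual algorithm; every threshold and multiplier in it is an invented value.

```python
# Illustrative model of an adaptive crawl-rate controller, NOT Google's
# actual algorithm. Thresholds and multipliers are invented for the sketch.
def adjust_crawl_rate(rate, response_ms, status_code, max_rate=10.0, min_rate=0.1):
    """Return a new requests-per-second rate based on the last response."""
    if status_code >= 500 or response_ms is None:
        # Server errors or timeouts: back off hard and let the server recover.
        return max(min_rate, rate * 0.5)
    if response_ms > 1000:
        # Slow responses: ease off gradually.
        return max(min_rate, rate * 0.8)
    # Fast, healthy responses: probe upward slowly toward the ceiling.
    return min(max_rate, rate * 1.1)


# Example: a run of fast 200s raises the rate; a slow response and a 503 cut it.
rate = 1.0
for ms, status in [(150, 200), (140, 200), (160, 200), (2500, 200), (300, 503)]:
    rate = adjust_crawl_rate(rate, ms, status)
    print(f"last response: {ms}ms {status} -> next rate {rate:.2f} req/s")
```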
This means that the most effective long-term strategy for getting Googlebot to crawl your site efficiently is to have a fast, reliable server. Improving your server response times (TTFB) and eliminating 5xx errors leads Google to naturally increase its crawl rate, which means new content gets discovered and indexed faster without you needing to configure anything.
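If you want a quick read on how your server looks from a crawler's point of view, measuring time to first byte and watching for 5xx responses is a reasonable start. Here is a minimal standard-library sketch; the URL is a placeholder, and a single sample is only indicative, so real monitoring should take repeated measurements.

```python
import time
from urllib.request import urlopen, Request

# Rough TTFB check using only the standard library. "https://example.com/"
# is a placeholder; a single measurement is indicative at best.
url = "https://example.com/"
request = Request(url, headers={"User-Agent": "ttfb-check/0.1"})

start = time.perf_counter()
with urlopen(request, timeout=10) as response:
    response.read(1)                         # first byte of the body received
    ttfb_ms = (time.perf_counter() - start) * 1000
    print(f"{url} -> HTTP {response.status}, TTFB ~ {ttfb_ms:.0f} ms")
```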
What the Crawl Rate Limiter Cannot Do
Understanding the limitations of the crawl rate limiter helps you use it appropriately and avoid false expectations:
- Cannot speed up crawling: The crawl rate limiter can only reduce Googlebot's crawl rate, not increase it. If you want more pages crawled faster, you need to improve site speed, fix indexability issues, and build authority.
- Cannot control which pages get crawled: Limiting the crawl rate does not let you specify which pages get priority. For page priority control, use robots.txt, canonical tags, and sitemap optimization.
- Does not affect other Google crawlers: The crawl rate setting in Search Console only affects the main Googlebot. Googlebot-Image, Googlebot-News, AdsBot-Google, and other Google crawlers are not affected.
- Cannot guarantee exact rates: The setting is a request to Google, not a hard technical limit. Google may still exceed it briefly in certain situations, particularly for newly discovered URLs from sitemaps. If you need a hard cap, it has to be enforced on your own infrastructure (see the sketch after this list).
- Does not resolve indexing problems: If your pages are not being indexed despite being crawled, the crawl rate limiter will not help. Indexing issues are separate from crawl rate issues — use URL Inspection to diagnose indexing failures.
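If you genuinely need a hard ceiling rather than a request, it has to be enforced in your own stack, for example with web-server rate limiting or a small application-level check that returns 429 (Too Many Requests) above a threshold; Google's guidance notes that 429/503 responses cause Googlebot to slow down, though serving them for extended periods can also reduce how much of your site gets crawled. The following is a minimal, illustrative WSGI middleware, not production code: the threshold, the user-agent match, and the single-process counter are all simplifying assumptions.

```python
import time
from wsgiref.simple_server import make_server

# Minimal, illustrative WSGI middleware that rate-limits requests whose
# User-Agent mentions "Googlebot". Threshold and matching are assumptions;
# this is a sketch, not production code (no locking, single process only).
MAX_REQUESTS_PER_SECOND = 2

class BotRateLimiter:
    def __init__(self, app, limit=MAX_REQUESTS_PER_SECOND):
        self.app = app
        self.limit = limit
        self.window_start = 0.0
        self.count = 0

    def __call__(self, environ, start_response):
        agent = environ.get("HTTP_USER_AGENT", "")
        if "Googlebot" in agent:
            now = time.monotonic()
            if now - self.window_start >= 1.0:   # start a new one-second window
                self.window_start, self.count = now, 0
            self.count += 1
            if self.count > self.limit:
                start_response("429 Too Many Requests", [("Retry-After", "1")])
                return [b"rate limited"]
        return self.app(environ, start_response)

def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

if __name__ == "__main__":
    # Serve the demo app behind the limiter on a local port.
    make_server("127.0.0.1", 8000, BotRateLimiter(hello_app)).serve_forever()
```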