By SitemapFixer Team
Updated April 2026

How to Add Your Sitemap to Robots.txt


The Sitemap: directive in robots.txt is one of the simplest and most effective ways to help search engine crawlers find your XML sitemap. Unlike submitting through Google Search Console, this method works for every crawler that reads robots.txt — including Bing, Yandex, and others — all at once. Adding it takes under a minute and is considered a best practice for any site that has a sitemap.

What Is the Sitemap Directive in Robots.txt?

The Sitemap: directive is a non-standard extension to the robots.txt protocol, originally introduced by Google, Yahoo, and Microsoft as part of the sitemaps.org initiative. It tells any compliant crawler the full URL of your sitemap file. Unlike User-agent or Disallow rules, the Sitemap: directive applies globally to all bots, so there is no need to place it under a specific user-agent block. Most major crawlers, including Googlebot, Bingbot, and many SEO tools, honor this directive automatically when they fetch your robots.txt.

The Exact Syntax to Use

The directive is straightforward: the keyword Sitemap: followed by a single space and the absolute URL of your sitemap. Always use the full URL — never a relative path. Here is the correct format:

User-agent: *
Disallow:

Sitemap: https://example.com/sitemap.xml

If you have a sitemap index file, point to that instead of individual sitemaps. The crawler will follow the index and discover all child sitemaps automatically.
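
For illustration, the robots.txt line below points to a sitemap index, and a minimal index file in the standard sitemaps.org format then lists the child sitemaps (the file names and URLs here are placeholders):

Sitemap: https://example.com/sitemap_index.xml

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>https://example.com/sitemap-posts.xml</loc></sitemap>
  <sitemap><loc>https://example.com/sitemap-pages.xml</loc></sitemap>
</sitemapindex>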

Where to Place the Sitemap Line

You can place the Sitemap: directive anywhere in your robots.txt file — top, bottom, or between rule blocks. Most SEOs put it at the bottom for readability, but placement has no effect on how it is processed. What matters is that it appears at least once and uses the absolute URL. If you have multiple sitemaps (such as separate image or video sitemaps not covered by a sitemap index), you can list multiple Sitemap: lines, one per URL, and all will be recognized.
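
For example, the following layout with the directive at the very top is processed exactly the same as one with it at the bottom (the domain and the Disallow path are placeholders):

Sitemap: https://example.com/sitemap.xml

User-agent: *
Disallow: /admin/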

Multiple Sitemap Entries

You can declare multiple sitemaps in one robots.txt file. This is useful when you manage separate sitemaps for different content types and are not using a sitemap index file:

Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap-images.xml
Sitemap: https://example.com/sitemap-news.xml

Each line is treated independently. Crawlers will fetch and process each sitemap separately, so there is no risk of conflicts or overriding between entries.

Common Mistakes to Avoid

The most frequent errors with the robots.txt sitemap directive are using a relative URL (e.g., /sitemap.xml instead of the full HTTPS URL), a typo in the keyword (for example, Sitemaps: with a trailing s will not be recognized), or pointing to a URL that returns a non-200 status code. Another common mistake is declaring a sitemap that does not match the canonical domain, such as using HTTP when your site is HTTPS, or omitting the www subdomain when your canonical URL includes it. Always verify with a sitemap checker after updating robots.txt.
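
A quick contrast makes these mistakes easier to spot. The first three lines below show common errors and the last line shows the correct form, assuming the canonical host is https://www.example.com (a placeholder):

# Wrong: relative path
Sitemap: /sitemap.xml

# Wrong: misspelled keyword (extra "s")
Sitemaps: https://www.example.com/sitemap.xml

# Wrong: protocol does not match the canonical HTTPS site
Sitemap: http://www.example.com/sitemap.xml

# Correct: absolute URL on the canonical host
Sitemap: https://www.example.com/sitemap.xml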

Robots.txt Sitemap vs. GSC Submission

These two methods are complementary, not alternatives. The robots.txt directive covers all crawlers simultaneously and requires no account access, making it the lowest-effort and broadest-reach option. Google Search Console submission gives you additional benefits: you can see crawl coverage reports, submission status, and indexing data specific to Googlebot. The recommended approach is to do both — add the directive to robots.txt for universal autodiscovery and submit via GSC for diagnostic visibility. Neither method guarantees indexing; they simply ensure the crawler can find the sitemap.

How Crawlers Process the Directive

When a crawler fetches your robots.txt, it parses all Sitemap: lines and queues those URLs for fetching. Googlebot, for example, will download the sitemap and use it to discover URLs it may not have found through regular link crawling. This is especially valuable for large sites, paginated content, or pages that are not well-linked internally. The crawler does not index every URL in the sitemap immediately — it uses the sitemap as a discovery hint alongside its normal crawl queue prioritization logic.
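
As a rough illustration of the discovery step, the TypeScript sketch below (not any crawler's actual code; extractSitemaps and the sample text are invented for this example) pulls Sitemap: URLs out of a robots.txt body the way most parsers do: line by line, matching the field name case-insensitively and ignoring which user-agent block the line sits in.

// Sketch: extract sitemap URLs from a robots.txt body.
// The field name is matched case-insensitively and placement relative
// to User-agent blocks is ignored, mirroring how major crawlers treat
// the Sitemap: line.
function extractSitemaps(robotsTxt: string): string[] {
  const sitemaps: string[] = [];
  for (const rawLine of robotsTxt.split(/\r?\n/)) {
    const line = rawLine.split("#")[0].trim(); // strip trailing comments
    const match = line.match(/^sitemap\s*:\s*(.+)$/i);
    if (match) sitemaps.push(match[1].trim());
  }
  return sitemaps;
}

// Example: both entries are discovered regardless of where they appear.
const example = `User-agent: *
Disallow:
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap-news.xml`;
console.log(extractSitemaps(example));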

Verifying the Setup Works

After adding the directive, visit https://yourdomain.com/robots.txt in a browser to confirm the line appears correctly. Then use Google Search Console's robots.txt report or a dedicated sitemap checker to verify the sitemap URL resolves to a valid XML file returning a 200 status. If you have also submitted the sitemap in GSC, the Sitemaps report shows whether Google has fetched and processed it successfully. Changes to robots.txt are typically picked up within the next crawl cycle, usually within a few days for active sites.
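
If you want to script the check, a short sketch along these lines can confirm the setup (it assumes Node 18+ for the built-in fetch; the domain is a placeholder and checkSitemapSetup is just an illustrative name). It fetches robots.txt, pulls out every Sitemap: URL, and reports the HTTP status and content type of each:

// Sketch: verify that robots.txt declares a sitemap and that each
// declared sitemap URL responds with a 200 and an XML content type.
const site = "https://example.com"; // replace with your own domain

async function checkSitemapSetup(): Promise<void> {
  const robotsRes = await fetch(`${site}/robots.txt`);
  const robotsTxt = await robotsRes.text();

  // Collect every Sitemap: line (field name is case-insensitive).
  const sitemapUrls = [...robotsTxt.matchAll(/^sitemap\s*:\s*(\S+)/gim)]
    .map((m) => m[1]);

  if (sitemapUrls.length === 0) {
    console.log("No Sitemap: directive found in robots.txt");
    return;
  }

  for (const url of sitemapUrls) {
    const res = await fetch(url);
    const type = res.headers.get("content-type") ?? "unknown";
    console.log(`${url} -> ${res.status} (${type})`);
  }
}

checkSitemapSetup().catch(console.error);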

Platform-Specific Notes

Most CMS platforms generate or allow you to edit robots.txt directly. WordPress (with Yoast SEO or Rank Math) automatically adds the sitemap directive. Shopify includes it for the default sitemap. On custom Next.js or other framework sites, you can manage robots.txt via a static file in the public/ folder or through a dynamic route. Wherever your robots.txt is generated, the directive format is the same — the absolute URL following Sitemap:.
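
As one example, on a Next.js site using the App Router, a file such as app/robots.ts can generate robots.txt with the directive included. This is a minimal sketch with a placeholder domain; your sitemap path and rules may differ:

// app/robots.ts - Next.js App Router serves /robots.txt from this file.
import type { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: "*",
      allow: "/",
    },
    // Always an absolute URL on the canonical domain.
    sitemap: "https://example.com/sitemap.xml",
  };
}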
