Sitemap for Single Page Apps (React, Vue, Angular)

Single page applications deliver fast, app-like experiences — but they create a crawlability gap that standard sitemaps alone can't close. Without the right rendering strategy, Googlebot may never see most of your content. Here's how to build a sitemap that works with your SPA framework.

Why SPAs Have Crawlability Problems

A traditional website returns fully formed HTML on every page request. A single page application loads a minimal HTML shell and then renders the rest of the page using JavaScript. If a crawler doesn't execute JavaScript, it sees only the empty shell: no content, no internal links, no metadata. Even Googlebot, which does execute JavaScript, processes it in a second crawl wave that can lag the initial crawl by days or even weeks. A sitemap is essential because it tells Google which URLs exist, but it doesn't fix the rendering problem on its own.

How Googlebot Handles JavaScript Rendering

Google's crawl pipeline has two stages. In the first stage, Googlebot fetches the raw HTML and discovers links. In the second stage — Web Rendering Service (WRS) — Google runs the JavaScript and processes the rendered output. The gap between these stages can range from hours to weeks depending on crawl budget and server speed. URLs found only in JavaScript-rendered content may wait in the rendering queue before being indexed. A complete sitemap helps Google prioritize which URLs to render, but it cannot guarantee prompt rendering of JavaScript-heavy pages.

SSR vs SSG: Sitemap Implications

Server-side rendering (SSR) generates HTML on each request — every page a crawler hits is pre-rendered. Static site generation (SSG) generates HTML at build time. Both approaches are crawler-friendly: Googlebot gets real HTML immediately, without waiting for a render queue. For sitemaps, SSG is particularly convenient because your sitemap can be generated at build time alongside your pages. SSR sites can generate sitemaps dynamically on the server. Either way, combining a proper rendering strategy with a complete sitemap puts you in the strongest possible crawlability position.

Next.js Sitemap Generation

Next.js supports sitemap generation natively via the App Router's sitemap.ts file convention. Exporting a default function from app/sitemap.ts causes Next.js to serve the sitemap at /sitemap.xml automatically. For dynamic routes — like blog posts stored in a database — you fetch the data inside the sitemap function and return an array of URL objects. Next.js can also generate multiple sitemaps by exporting a generateSitemaps function, which is useful for large sites with thousands of pages that need splitting across multiple files.
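A minimal sketch of the App Router convention, with the sitemap entry types inlined so the example is self-contained (in a real app you would import `MetadataRoute.Sitemap` from `next`); `fetchPostSlugs` is a hypothetical stand-in for your database or CMS query:

```typescript
// app/sitemap.ts — Next.js serves the default export at /sitemap.xml.
// Entry type inlined for a self-contained sketch; use MetadataRoute.Sitemap
// from 'next' in a real project. fetchPostSlugs is a hypothetical helper.
type SitemapEntry = {
  url: string;
  lastModified?: Date;
  changeFrequency?: 'daily' | 'weekly' | 'monthly';
  priority?: number;
};

const BASE_URL = 'https://example.com';

// Hypothetical stand-in for a database or CMS query.
async function fetchPostSlugs(): Promise<{ slug: string; updatedAt: Date }[]> {
  return [
    { slug: 'spa-seo', updatedAt: new Date('2024-01-15') },
    { slug: 'sitemap-basics', updatedAt: new Date('2024-02-02') },
  ];
}

export default async function sitemap(): Promise<SitemapEntry[]> {
  // Static routes known ahead of time.
  const staticEntries: SitemapEntry[] = [
    { url: BASE_URL, changeFrequency: 'weekly', priority: 1 },
    { url: `${BASE_URL}/about`, changeFrequency: 'monthly', priority: 0.5 },
  ];

  // Dynamic routes: enumerate every concrete post URL.
  const posts = await fetchPostSlugs();
  const postEntries: SitemapEntry[] = posts.map((p) => ({
    url: `${BASE_URL}/blog/${p.slug}`,
    lastModified: p.updatedAt,
  }));

  return [...staticEntries, ...postEntries];
}
```

Because the function is async, fetching from a live database or headless CMS drops in without changing the shape of the return value.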

Nuxt.js Sitemap Generation

Nuxt.js uses the @nuxtjs/sitemap module for sitemap generation. After installing the module, you configure it in nuxt.config.ts under the sitemap key. For dynamic routes, you provide a urls function that fetches your content and returns an array of path strings. The module supports multi-sitemap output, image sitemaps, and automatic lastmod population. Because Nuxt supports both SSR and SSG modes, the sitemap module works in both contexts — but SSG users benefit from the sitemap being generated once at build time rather than on each server request.
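A configuration sketch along those lines — option names vary between module versions, so treat this as illustrative and check the module docs; `/api/posts` is a hypothetical endpoint:

```typescript
// nuxt.config.ts — sketch assuming a recent @nuxtjs/sitemap release;
// exact option names may differ across versions.
export default defineNuxtConfig({
  modules: ['@nuxtjs/sitemap'],
  site: { url: 'https://example.com' },
  sitemap: {
    // Dynamic routes: fetch content and return concrete paths.
    urls: async () => {
      // Hypothetical endpoint returning [{ slug: string }, ...].
      const posts: { slug: string }[] = await $fetch('/api/posts');
      return posts.map((p) => `/blog/${p.slug}`);
    },
  },
});
```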

Angular Universal and Sitemaps

Angular is a client-side framework by default, but Angular Universal adds server-side rendering. Without Universal, Angular apps require prerendering or a service like Rendertron to be crawlable. With Angular Universal, the server renders each route on request, making it crawler-friendly. For sitemaps, Angular apps typically generate a static sitemap.xml at build time using a custom Node.js script that reads the routing configuration and outputs all known paths. Dynamic routes backed by an API require the build script to query the API to enumerate all content URLs.
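A build-script sketch of that pattern: the route list mirrors a hypothetical Angular routing configuration, and `fetchProductSlugs` stands in for the API query that enumerates dynamic content.

```typescript
// scripts/generate-sitemap.ts — run after `ng build` (e.g. via ts-node).
// Routes, output path, and the API stand-in are illustrative.
import { writeFileSync } from 'node:fs';

const BASE_URL = 'https://example.com';

// Static routes, mirroring the Angular routing configuration.
const staticRoutes = ['/', '/about', '/contact'];

// Hypothetical stand-in for querying your API for all product slugs.
async function fetchProductSlugs(): Promise<string[]> {
  return ['widget-a', 'widget-b'];
}

// Build the XML document from a list of absolute URLs.
export function buildSitemapXml(urls: string[]): string {
  const entries = urls
    .map((url) => `  <url><loc>${url}</loc></url>`)
    .join('\n');
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    `${entries}\n</urlset>\n`
  );
}

// Invoke from your build pipeline after the Angular build completes.
export async function main(): Promise<void> {
  const slugs = await fetchProductSlugs();
  const urls = [
    ...staticRoutes.map((r) => (r === '/' ? BASE_URL : `${BASE_URL}${r}`)),
    ...slugs.map((s) => `${BASE_URL}/products/${s}`),
  ];
  writeFileSync('dist/browser/sitemap.xml', buildSitemapXml(urls));
}
```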

Dynamic Routes in SPA Sitemaps

The hardest part of SPA sitemap generation is handling dynamic routes — URLs like /products/[slug] or /blog/[year]/[month]/[slug]. These routes are defined in code as patterns, not as fixed paths. To populate the sitemap, you need to enumerate every concrete URL that exists — which means querying your database, CMS, or API at build time. For very large datasets, paginate the queries and split the output across multiple sitemap files linked from a sitemap index. Never include route patterns (/products/:slug) in a sitemap — only resolved, real URLs.
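The chunk-and-index step can be sketched as two small helpers — the 50,000-entry cap comes from the sitemap protocol, while the file naming scheme here is an assumption:

```typescript
// Split a large URL set into sitemap files of at most 50,000 entries
// (the sitemaps.org protocol limit) and build an index pointing at them.
// The sitemap-N.xml naming scheme is illustrative.
const MAX_URLS_PER_SITEMAP = 50_000;

export function chunkUrls(
  urls: string[],
  size: number = MAX_URLS_PER_SITEMAP
): string[][] {
  const chunks: string[][] = [];
  for (let i = 0; i < urls.length; i += size) {
    chunks.push(urls.slice(i, i + size));
  }
  return chunks;
}

export function buildSitemapIndex(baseUrl: string, chunkCount: number): string {
  const entries = Array.from(
    { length: chunkCount },
    (_, i) => `  <sitemap><loc>${baseUrl}/sitemap-${i + 1}.xml</loc></sitemap>`
  ).join('\n');
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    `${entries}\n</sitemapindex>\n`
  );
}
```

Each chunk is written to its own file, and the index file is what you submit to Search Console.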

Testing If Google Can Crawl Your SPA

Google Search Console's URL Inspection tool shows you how Googlebot sees any given URL after JavaScript rendering. Submit a representative page URL, click "Test Live URL," and review the rendered HTML tab. If content is missing or links are absent, your JavaScript rendering is not working correctly for Googlebot. Lighthouse's SEO audit also checks for indexability issues. For a broader test, fetch several pages with a JavaScript-disabled browser — if the content disappears entirely, you have a crawlability problem that a sitemap alone cannot solve.
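The JavaScript-disabled check can be automated with a rough heuristic: fetch each page's raw HTML (a plain HTTP GET, no rendering) and flag responses that look like an empty shell. The thresholds below are arbitrary illustrative values, not anything Google publishes.

```typescript
// Rough heuristic: given the raw (un-rendered) HTML of a page, guess
// whether a non-JS crawler would see real content. Thresholds are
// illustrative, not official.
export function looksLikeEmptyShell(html: string): boolean {
  // Strip scripts and styles, then measure visible text and link count.
  const withoutCode = html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '');
  const text = withoutCode.replace(/<[^>]+>/g, ' ').replace(/\s+/g, ' ').trim();
  const linkCount = (withoutCode.match(/<a\s/gi) || []).length;
  return text.length < 200 && linkCount < 3;
}
```

Run it over every URL in your sitemap (e.g. pass each `fetch` response body through it) to find pages that depend entirely on client-side rendering.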

Sitemap + Prerendering as Fallback

If migrating a pure SPA to SSR is not feasible, prerendering is a practical fallback. Services like Prerender.io or a self-hosted Puppeteer setup can generate static HTML snapshots of each URL and serve them specifically to crawlers. Pair prerendering with a complete sitemap: the sitemap tells crawlers which URLs to visit, and prerendering ensures those URLs return crawler-readable HTML. This approach adds infrastructure complexity but allows teams to keep their existing SPA architecture while improving crawlability in the short term.
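The crawler-detection half of that setup can be sketched as an Express-style middleware — the user-agent list is partial and illustrative, and `prerenderHost` is a hypothetical prerender service endpoint:

```typescript
// Crawler detection for a prerendering fallback: requests from known bot
// user-agents are proxied to a prerender service; everyone else gets the
// normal SPA. The UA list is partial and illustrative.
const BOT_PATTERNS = [
  /googlebot/i,
  /bingbot/i,
  /duckduckbot/i,
  /baiduspider/i,
  /yandex/i,
  /facebookexternalhit/i,
  /twitterbot/i,
];

export function isCrawler(userAgent: string): boolean {
  return BOT_PATTERNS.some((re) => re.test(userAgent));
}

// Express-style middleware sketch (req/res typed loosely to stay
// self-contained). prerenderHost is a hypothetical service URL.
export function prerenderMiddleware(prerenderHost: string) {
  return async (req: any, res: any, next: () => void) => {
    const ua = req.headers['user-agent'] || '';
    if (!isCrawler(ua)) return next(); // Regular users get the SPA.
    // Proxy to the prerender service and return the static snapshot.
    const snapshot = await fetch(
      `${prerenderHost}/render?url=${encodeURIComponent(req.url)}`
    );
    res.status(snapshot.status).send(await snapshot.text());
  };
}
```

Note that the snapshot is proxied, not redirected: crawlers must receive the HTML at the original URL, or the wrong address gets indexed.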

Validate Your SPA Sitemap

SitemapFixer checks every URL in your sitemap for correct status codes, canonical tags, and crawlability — whether your site is SSR, SSG, or client-rendered.
