If you've built your marketing site on Lovable, there's a good chance you have an SEO problem. It's not a bug. It's a structural limitation of how React single-page apps work.
What Crawlers See When They Visit Your Site
Lovable generates React applications. React is a JavaScript library that builds your entire site inside the visitor's browser. Every page, every piece of content, every component is assembled on the fly by JavaScript running on their device. It's fast, flexible, and it's why Lovable can generate complex, interactive products so quickly.
When your Lovable site gets deployed, what actually lives on the server looks like this:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>My App</title>
<meta name="description" content="Welcome to My App" />
</head>
<body>
<div id="root"></div>
<script src="/assets/index-abc123.js"></script>
</body>
</html>
One HTML file with a mostly empty <div> and a reference to a JavaScript bundle.
Every single page on your site — your homepage, your pricing page, your blog posts, your demo page — is served from this same file. The JavaScript bundle takes that empty <div> and builds the actual page content in the browser after it loads.
For a user, this happens in a fraction of a second and they never notice.
But when Google, ChatGPT, Perplexity, Slack, or LinkedIn's link previewer visits one of your pages, it makes an HTTP request and reads whatever HTML comes back. What it gets when it requests yoursite.com/pricing is that exact same file. The actual pricing page content doesn't exist in the HTML. It only appears after JavaScript executes, and most crawlers don't wait for that.
So from their perspective, every page on your site looks identical. Your pricing page and your homepage are indistinguishable. Your blog post about a specific problem your customers care about has no title, no description, and no indexable content.
Googlebot does eventually execute JavaScript, but in a deferred crawl pass that can lag days or weeks behind the initial crawl. For pages Google considers lower priority, which includes most pages on newer sites, that rendering pass may not happen at all. For LLM crawlers like the ones feeding ChatGPT and Perplexity, JavaScript execution isn't part of the process.
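You can see this for yourself: a non-rendering crawler is essentially a plain HTTP fetch plus string parsing. The sketch below (helper names are my own, not any crawler's real internals) pulls the title and meta description out of raw HTML the way a crawler would, with no JavaScript execution, and runs it against the SPA shell shown above:

```typescript
// Read a page the way a non-rendering crawler does: regex over raw HTML.
// No browser, no JavaScript execution, just whatever the server sent.
const extractTitle = (html: string): string | null => {
  const match = html.match(/<title>(.*?)<\/title>/i);
  return match ? match[1] : null;
};

const extractDescription = (html: string): string | null => {
  const match = html.match(/<meta\s+name="description"\s+content="(.*?)"/i);
  return match ? match[1] : null;
};

// The SPA shell from earlier. Every route on the site returns this same file.
const spaShell = `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>My App</title>
<meta name="description" content="Welcome to My App" />
</head>
<body><div id="root"></div></body>
</html>`;

console.log(extractTitle(spaShell));       // "My App" for /, /pricing, /demo alike
console.log(extractDescription(spaShell)); // "Welcome to My App" everywhere
```

Whether the crawler requests the homepage or a deep blog post, the extracted title and description are identical, which is exactly the problem.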
Why This Matters If You're Publishing Content
If you're investing in content as a long-term acquisition channel, the foundation underneath that content really matters. You're publishing pages and articles you want people to find.
But if a post is indexed with your homepage metadata instead of its own title and description, it's essentially invisible to anyone searching for the problem it solves.
The same goes for conversion pages. If your /demo or /pricing page is being indexed as your homepage, you're missing every search query where someone is specifically looking for a demo or a pricing comparison in your category.
The Fix: Build-Time Pre-Rendering
One answer is build-time pre-rendering — generating a static HTML file for every public route at build time, with the correct metadata already baked in.
Users still get the full SPA experience. Crawlers get static HTML with the right tags on the first request.
Here's what the implementation looks like in practice.
A custom Vite plugin does the heavy lifting. After every build, it takes your dist/index.html as a template and generates a copy for each public marketing route. Each copy gets the correct <title>, <meta name="description">, canonical URL, and Open Graph tags injected before any JavaScript runs. When a crawler requests yoursite.com/pricing, it gets an HTML file that actually says "Pricing" in the title tag.
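To make the mechanics concrete, here's a minimal sketch of what such a plugin could look like. The `injectMeta` helper, the `SeoRoute` shape, and the choice of the `closeBundle` hook are my assumptions for illustration, not Lovable's generated code:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Illustrative route shape; mirrors the src/seo/routes.ts entries described below.
interface SeoRoute {
  path: string;        // e.g. "/pricing"
  title: string;
  description: string;
  ogImage?: string;
}

// Pure helper: swap the template's <title> and description, and append
// canonical + Open Graph tags just before </head>.
export function injectMeta(template: string, route: SeoRoute, domain: string): string {
  const url = `${domain}${route.path}`;
  const extra = [
    `<link rel="canonical" href="${url}" />`,
    `<meta property="og:title" content="${route.title}" />`,
    `<meta property="og:description" content="${route.description}" />`,
    route.ogImage ? `<meta property="og:image" content="${route.ogImage}" />` : "",
  ].filter(Boolean).join("\n");
  return template
    .replace(/<title>.*?<\/title>/, `<title>${route.title}</title>`)
    .replace(/<meta name="description" content=".*?" \/>/, `<meta name="description" content="${route.description}" />`)
    .replace("</head>", `${extra}\n</head>`);
}

// The plugin itself. In a real project this would be typed as Vite's `Plugin`;
// it's left untyped here so the sketch stands alone.
export function seoPrerender(routes: SeoRoute[], domain: string) {
  return {
    name: "seo-prerender",
    apply: "build" as const,
    // closeBundle runs after the bundle is written to dist/.
    closeBundle() {
      const template = fs.readFileSync("dist/index.html", "utf-8");
      for (const route of routes) {
        const dir = path.join("dist", route.path);
        fs.mkdirSync(dir, { recursive: true });
        fs.writeFileSync(path.join(dir, "index.html"), injectMeta(template, route, domain));
      }
    },
  };
}
```

Because `injectMeta` is a pure function, it can be unit-tested without running a build, which is worth doing before trusting it with your production metadata.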
A central SEO config file (src/seo/routes.ts) lists every public route alongside its metadata. When you publish a new page or article, you add one entry and pre-rendering handles the rest. Without this file, metadata management gets scattered across components and breaks down as your content library grows.
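The config file itself can be very simple. The shape below is illustrative (field names and example routes are placeholders, not Lovable's output), but it captures the idea: one entry per public page, with an optional per-page OG image falling back to a site-wide default:

```typescript
// src/seo/routes.ts -- one entry per public marketing route.
export interface SeoRoute {
  path: string;
  title: string;
  description: string;
  ogImage?: string; // falls back to DEFAULT_OG_IMAGE when omitted
}

export const DEFAULT_OG_IMAGE = "https://yoursite.com/og-default.png";

export const seoRoutes: SeoRoute[] = [
  {
    path: "/",
    title: "My App | Short value proposition",
    description: "One or two sentences a searcher would actually click on.",
  },
  {
    path: "/pricing",
    title: "Pricing | My App",
    description: "Plans and pricing for My App.",
  },
  {
    path: "/demo",
    title: "Book a Demo | My App",
    description: "See My App in action with a guided walkthrough.",
  },
];
```

Publishing a new page then means adding one object to this array; the pre-rendering plugin, sitemap, and OG tags all read from the same list.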
Sitemap, robots.txt, and a quick cleanup finish the job. Your sitemap needs to include all your public marketing pages so Google discovers them on a reasonable schedule. Your robots.txt should steer crawlers away from authenticated routes — dashboards, settings, account pages — so crawl budget goes toward content that actually matters for SEO. And any duplicate meta tags in your base index.html that conflict with react-helmet-async should come out, since conflicting tags cause unpredictable behavior across different crawlers.
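One way to keep these artifacts from drifting out of sync is to generate both from the same routes list at build time. The sketch below is an assumption about how you might wire that up (function names are mine), not part of the fix as described:

```typescript
// Derive sitemap.xml and robots.txt from the same route list the
// pre-rendering plugin uses, so all three can never disagree.
interface SitemapRoute {
  path: string; // e.g. "/pricing"
}

export function buildSitemap(routes: SitemapRoute[], domain: string, lastmod: string): string {
  const urls = routes
    .map((r) => `  <url><loc>${domain}${r.path}</loc><lastmod>${lastmod}</lastmod></url>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>`;
}

export function buildRobots(domain: string, authPaths: string[]): string {
  // Steer crawlers away from authenticated routes; point them at the sitemap.
  const disallows = authPaths.map((p) => `Disallow: ${p}`).join("\n");
  return `User-agent: *\n${disallows}\n\nSitemap: ${domain}/sitemap.xml`;
}
```

Writing the output of these into public/sitemap.xml and public/robots.txt as a build step means a forgotten manual update is one less failure mode.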
Going forward, make adding a routes.ts entry part of your publishing checklist for every new page or article. The metadata should be written at the same time as the content — not retrofitted after you've already hit publish.
A Lovable Prompt You Can Use Right Now
Here's the prompt we use to apply this fix. Fill in the four bracketed fields with your site's details before pasting it into Lovable.
This site is a single-page React app (SPA) with an SEO problem. When crawlers from Google, ChatGPT, Perplexity, Slack, or Facebook request any page, they all receive the same generic index.html with homepage meta tags. The correct titles, descriptions, and Open Graph data only appear after JavaScript executes, but many crawlers don't execute JavaScript. Fix this with build-time pre-rendering. Implement the following three things:

1. Custom Vite pre-rendering plugin. Create plugins/vite-seo-prerender.ts. This plugin should run after every build and do the following: take the built dist/index.html as a template; for each public marketing route defined in the SEO config, create a copy at the correct path (e.g., dist/about/index.html); inject the correct <title>, <meta name="description">, <link rel="canonical">, and Open Graph tags into each copy; remove any duplicate meta tags from the base index.html that would conflict with react-helmet-async. Register this plugin in vite.config.ts.

2. Central SEO config file. Create src/seo/routes.ts. Export an array of route objects, each containing: path, title, description, and ogImage (which can default to a site-wide fallback). Populate this file with all current public marketing routes. Authenticated routes should not be included.

3. Sitemap, robots.txt, and cleanup. Update public/sitemap.xml to include all public marketing routes with today's date as <lastmod>. Update public/robots.txt to disallow authenticated routes and include the sitemap URL. Remove duplicate or conflicting <meta> tags from index.html.

Site details:
Site domain: [YOUR DOMAIN]
Default OG image: [YOUR DEFAULT OG IMAGE URL]
Public routes to include: [LIST YOUR ROUTES]
Authenticated routes to block: [LIST YOUR AUTH ROUTES]

Expected outcome: every marketing page serves crawler-ready HTML with correct metadata on the first request, with no JavaScript execution required. Real users continue to get the normal SPA experience.
Is This the Right Fix for Your Site?
A WordPress developer would likely look at this solution and have some legitimate things to say about it, so it's worth being honest about the tradeoffs.
Server-side rendering is WordPress's default. Every page is generated on the server and served as complete HTML — no custom Vite configuration, no routes file to maintain. It just works for every page, including dynamic ones, out of the box. From a pure content-publishing standpoint, that's a genuine advantage.
The sharpest critique of build-time pre-rendering is that it only works for pages you know about at build time. If your blog posts are pulled from a CMS or your URLs are dynamically generated, you either have to rebuild the entire site every time you publish something new, or your new content sits without pre-rendered HTML until the next build. There's also a manual dependency: if someone forgets to add a new page to the routes config, that page is invisible to crawlers, and nobody is likely to notice until they go looking for it.
This fix works well when your site is primarily a web application — a SaaS product, an interactive tool, something with an authenticated core — and your marketing layer is a relatively stable set of pages. A homepage, a pricing page, a demo page, a handful of articles. In that context, maintaining a routes file is a trivial cost and rebuilding your entire stack around content publishing makes no sense.
It starts to break down when your primary use case is publishing content at scale. If you're running a large blog, pulling content from a headless CMS, or building URL structures that change frequently, you're working against the grain of how Lovable-built sites are structured. At that point the conversation shifts — and it's worth talking through whether your architecture is actually matched to what you're trying to do.
Need help with your Lovable site's SEO?
We've applied this fix across multiple client sites. If you'd rather have someone handle the implementation — or if you're not sure whether your architecture needs something more — we're happy to take a look.
Get a Ballpark Estimate
