lovablehtml - turn your SPA into a crawler-friendly website

SEO Prompts for Lovable, Base44, Replit & AI Website Builders

Ready-to-use SEO prompts for AI website builders. Fix meta tags, robots.txt, sitemaps, and social previews in minutes. Copy, paste, and ship.

Copy and paste these prompts one by one into your AI website builder (Lovable, Base44, Replit, Bolt, etc.) to improve on-page SEO and page previews. Completing all of them takes about 15 minutes.

Improve social preview content

Audit and set proper Open Graph and Twitter meta tags on every page so links shared on social media render rich previews with the correct title, description, and image.

Improve social preview content
I want you to add meta tags on all pages in this codebase. To do that, complete the following tasks; after finishing them all, review all of your changes one more time and provide a summary of what you changed.

Tasks:
1. Create an SEOHead component using react-helmet-async that takes title, description, canonical, ogImage, and noIndex as props. Use that component on every page, before the main tag, with page-specific values. Derive each page's values from frontmatter or page content (i.e., the h1 as title, the first paragraph as description, the first image as og:image, and the page's URL as the canonical link).
2. Scan all pages for directly set meta tags that override the ones set in the SEOHead component. If found, remove them and set the values from the SEOHead component.
3. Check the index.html <head> section to make sure it does not contain any tags that are set through the SEOHead component. If found, remove them. index.html must not contain title, description, robots, rel="canonical", or any og: or twitter: tags.
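For reference, the component the builder generates will typically look something like this sketch (the prop names match the prompt; the exact output depends on your builder, and it assumes react-helmet-async is installed with a HelmetProvider wrapping the app):

```tsx
// Illustrative SEOHead sketch, not the exact code your builder will produce.
import { Helmet } from "react-helmet-async";

type SEOHeadProps = {
  title: string;
  description: string;
  canonical: string;
  ogImage?: string;
  noIndex?: boolean;
};

export function SEOHead({ title, description, canonical, ogImage, noIndex }: SEOHeadProps) {
  return (
    <Helmet>
      <title>{title}</title>
      <meta name="description" content={description} />
      <link rel="canonical" href={canonical} />
      {noIndex && <meta name="robots" content="noindex, nofollow" />}
      {/* Open Graph tags for rich link previews */}
      <meta property="og:title" content={title} />
      <meta property="og:description" content={description} />
      <meta property="og:url" content={canonical} />
      <meta property="og:type" content="website" />
      {ogImage && <meta property="og:image" content={ogImage} />}
      {/* Twitter card tags */}
      <meta name="twitter:card" content={ogImage ? "summary_large_image" : "summary"} />
      <meta name="twitter:title" content={title} />
      <meta name="twitter:description" content={description} />
      {ogImage && <meta name="twitter:image" content={ogImage} />}
    </Helmet>
  );
}
```

Each page then renders it once, before its main content, with that page's own title, description, canonical URL, and image.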

Improve robots.txt

Create or update your robots.txt with proper crawl rules and a sitemap reference so search engines know which pages to index and where to find your sitemap.

Improve robots.txt
Create a robots.txt file in the public folder that:
1. Allows all user agents to crawl the site
2. Points to my sitemap at https://YOURDOMAIN.com/sitemap.xml (replace YOURDOMAIN with my actual domain)
3. Blocks any sensitive paths like /api/, /dashboard/, /settings/ from being crawled

If a robots.txt already exists, update it to include these rules without removing any existing valid rules.
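The resulting file should look roughly like this (the Disallow entries depend on which sensitive paths your app actually has):

```txt
User-agent: *
Allow: /
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/

Sitemap: https://YOURDOMAIN.com/sitemap.xml
```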

Generate sitemap

Create a sitemap.xml from your app's routes so Google Search Console and other crawlers can discover all your public pages.

Generate sitemap
I need you to generate a sitemap.xml file and put it inside the public folder.

To do this:
1. Find the place where all my routes are defined (usually in App.tsx, router.tsx, or index.tsx)
2. Go through all the public routes and add them to the sitemap.xml file
3. Exclude any private/dashboard routes from the sitemap
4. Make sure it is valid XML that can be parsed by Google Search Console
5. Use https://YOURDOMAIN.com as the base URL (replace YOURDOMAIN with my actual domain)
6. Also check that all canonical links in the codebase use this same domain

Keep the task scope tight and do not change anything else. Only add the routes to the sitemap.xml file and fix any canonical links if needed.
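A valid result looks like the following fragment (URLs and dates are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://YOURDOMAIN.com/</loc>
    <lastmod>2025-01-01</lastmod>
  </url>
  <url>
    <loc>https://YOURDOMAIN.com/pricing</loc>
    <lastmod>2025-01-01</lastmod>
  </url>
</urlset>
```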

Sitemap Generator (advanced)

One prompt to set up automatic sitemap.xml generation on every build. It analyzes your codebase first — finds your routes, detects your domain, discovers any CMS or database fetching patterns for dynamic content — and only asks you questions when something is genuinely ambiguous. The exact code to create is included in the prompt.

Paste it into Lovable and let it do the rest.

Sitemap Generator (build-time, with CMS support)
I need you to set up automatic sitemap generation that runs on every build. Follow these steps exactly.

Step 0: Investigate my codebase silently before doing anything.
- Find my routes file (check App.tsx, router.tsx, routes.tsx, main.tsx)
- Detect my production domain from canonical links, meta tags, env vars, CNAME, or config files. Determine if it uses www or non-www.
- Search for any existing pattern that fetches dynamic content by slug from a CMS, database, or API (Supabase, Firebase, Sanity, Strapi, Contentful, Convex, Prisma, or raw fetch calls). Trace the client, endpoint, auth method, and response shape from the existing code.
- Identify private/protected routes (behind auth guards, inside dashboard or admin layouts)
- Only ask me if you cannot determine the domain or if a CMS/API key is not already in env vars.

Step 1: Run this command:
bun add -d @babel/parser @babel/traverse @types/babel__traverse

Step 2: Create the file src/lib/generate-sitemap.ts with exactly this code:

```ts
// Sitemap generation script
import fs from "fs";
import path from "path";
import { fileURLToPath } from "url";
import * as parser from "@babel/parser";
import traverse, { NodePath } from "@babel/traverse";
import { JSXAttribute, JSXIdentifier, JSXOpeningElement } from "@babel/types";

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// ———————————————
// CONFIGURATION
// ———————————————

const BASE_URL = "https://YOURDOMAIN.com";
const ROUTER_FILE_PATH = path.resolve(__dirname, "../App.tsx");
const OUTPUT_DIR = path.resolve(__dirname, "../../public");
const IGNORE_PATHS: string[] = ["/dashboard/*"];

// ———————————————
// SITEMAP SCRIPT
// ———————————————

const SITEMAP_PATH = path.join(OUTPUT_DIR, "sitemap.xml");

function getAttributeValue(
  astPath: NodePath<JSXOpeningElement>,
  attributeName: string
): string | null {
  const attribute = astPath.node.attributes.find(
    (attr): attr is JSXAttribute =>
      attr.type === "JSXAttribute" && attr.name.name === attributeName
  );
  if (!attribute) return null;
  const value = attribute.value;
  if (value?.type === "StringLiteral") return value.value;
  return null;
}

function joinPaths(paths: string[]): string {
  if (paths.length === 0) return "/";
  const joined = paths.join("/");
  const cleaned = ("/" + joined).replace(/\/+/g, "/");
  if (cleaned.length > 1 && cleaned.endsWith("/")) return cleaned.slice(0, -1);
  return cleaned;
}

function shouldIgnoreRoute(route: string): boolean {
  for (const ignorePattern of IGNORE_PATHS) {
    if (ignorePattern === route) return true;
    if (ignorePattern.endsWith("/*")) {
      const prefix = ignorePattern.slice(0, -2);
      if (route.startsWith(prefix + "/") || route === prefix) return true;
    }
  }
  return false;
}

function createSitemapXml(routes: string[]): string {
  const today = new Date().toISOString().split("T")[0];
  const urls = routes
    .map((route) => {
      const fullUrl = new URL(route, BASE_URL).href;
      return `
    <url>
      <loc>${fullUrl}</loc>
      <lastmod>${today}</lastmod>
      <changefreq>weekly</changefreq>
      <priority>0.8</priority>
    </url>`;
    })
    .join("");
  return `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">${urls}
</urlset>
`;
}

async function generateSitemap() {
  console.log("Generating sitemap...");
  if (!BASE_URL.startsWith("http")) {
    console.error('Error: BASE_URL must be a full URL (e.g., "https://example.com")');
    process.exit(1);
  }
  const content = fs.readFileSync(ROUTER_FILE_PATH, "utf-8");
  const ast = parser.parse(content, {
    sourceType: "module",
    plugins: ["jsx", "typescript"],
  });
  const pathStack: string[] = [];
  const foundRoutes: string[] = [];
  // Visit JSXElement (rather than JSXOpeningElement) so that nested child
  // <Route> elements are traversed while the parent's path segment is still
  // on the stack; a JSXOpeningElement visitor would pop the path in exit()
  // before the element's children are ever visited, breaking nested routes.
  traverse(ast, {
    JSXElement: {
      enter(astPath) {
        const opening = astPath.node.openingElement;
        if (opening.name.type !== "JSXIdentifier" || opening.name.name !== "Route") return;
        const pathProp = getAttributeValue(astPath.get("openingElement"), "path");
        const hasElement = opening.attributes.some(
          (attr) => attr.type === "JSXAttribute" && attr.name.name === "element"
        );
        if (pathProp) pathStack.push(pathProp);
        if (hasElement && pathProp) {
          foundRoutes.push(joinPaths(pathStack));
        }
      },
      exit(astPath) {
        const opening = astPath.node.openingElement;
        if (opening.name.type !== "JSXIdentifier" || opening.name.name !== "Route") return;
        if (getAttributeValue(astPath.get("openingElement"), "path")) pathStack.pop();
      },
    },
  });
  const staticRoutes = foundRoutes.filter(
    (route) => !route.includes(":") && !route.includes("*")
  );
  const filteredRoutes = staticRoutes.filter(
    (route) => !shouldIgnoreRoute(route)
  );
  console.log(`Found ${foundRoutes.length} total routes.`);
  console.log(`Filtered ${staticRoutes.length - filteredRoutes.length} ignored routes.`);
  console.log(`Final ${filteredRoutes.length} routes in sitemap.`);
  if (filteredRoutes.length > 0) console.log("Routes:", filteredRoutes.join(", "));
  const sitemapXml = createSitemapXml(filteredRoutes);
  if (!fs.existsSync(OUTPUT_DIR)) fs.mkdirSync(OUTPUT_DIR, { recursive: true });
  fs.writeFileSync(SITEMAP_PATH, sitemapXml);
  console.log(`Sitemap successfully generated at ${SITEMAP_PATH}`);
}

generateSitemap().catch(console.error);
```

Now adapt ONLY the CONFIGURATION section at the top of this file:
- Set BASE_URL to the production domain you detected (with www or non-www based on what the codebase already uses). If you could not determine it, ask me.
- Set ROUTER_FILE_PATH to point to the routes file you found (e.g. "../router.tsx" or "../App.tsx").
- Set IGNORE_PATHS to include all private/protected routes you identified (e.g. "/dashboard/*", "/admin/*", "/settings/*", "/auth/*").
- If you found a CMS, database, or API integration that fetches dynamic content by slug, add a function called fetchDynamicRoutes() that reuses the same client/endpoint/auth pattern from the codebase to fetch all published slugs at build time. Call it in generateSitemap() between the "Filter out ignored paths" step and the "Generate the XML" step, prepend the correct URL prefix to each slug, and concat them into filteredRoutes.
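As an illustration only, a fetchDynamicRoutes() might look like the sketch below. The endpoint URL, response shape, and /blog prefix are all assumptions for the example; in practice the builder reuses whatever client, endpoint, and auth pattern already exist in your codebase:

```typescript
// Hypothetical example: fetch published post slugs from a CMS at build time.
// The endpoint and response shape are assumptions, not a real API.
async function fetchDynamicRoutes(): Promise<string[]> {
  const res = await fetch("https://cms.example.com/api/posts?status=published");
  const posts: { slug: string }[] = await res.json();
  return posts.map((post) => toRoute("/blog", post.slug));
}

// Pure helper: prefix a slug with its route segment and normalize slashes.
function toRoute(prefix: string, slug: string): string {
  return `${prefix}/${slug}`.replace(/\/+/g, "/");
}
```

The returned routes are then concatenated into filteredRoutes before the XML is generated.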

Step 3: Add the following Vite plugin to vite.config.ts. Add the import at the top and sitemapPlugin() to the plugins array. Do not remove or change any existing plugins or config:

```ts
import { execSync } from "child_process";

function sitemapPlugin() {
  return {
    name: "sitemap-generator",
    buildEnd: () => {
      console.log("Running sitemap generator...");
      try {
        execSync("bun run src/lib/generate-sitemap.ts", { stdio: "inherit" });
      } catch (error) {
        console.error("Failed to generate sitemap:", error);
      }
    },
  };
}
```

Add sitemapPlugin() to the plugins array in defineConfig.
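After the edit, a typical vite.config.ts would look roughly like this (the react() plugin is just an example of an existing plugin, which must stay untouched):

```ts
// vite.config.ts (illustrative: keep whatever plugins and options you already have)
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import { execSync } from "child_process";

function sitemapPlugin() {
  return {
    name: "sitemap-generator",
    buildEnd: () => {
      console.log("Running sitemap generator...");
      try {
        execSync("bun run src/lib/generate-sitemap.ts", { stdio: "inherit" });
      } catch (error) {
        console.error("Failed to generate sitemap:", error);
      }
    },
  };
}

export default defineConfig({
  plugins: [react(), sitemapPlugin()],
});
```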

Step 4: Run bun run build and confirm the sitemap was generated at public/sitemap.xml. Show me its contents.
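If everything worked, the build log should include lines like these from the generator (the route counts and the output path shown here are illustrative):

```txt
Running sitemap generator...
Generating sitemap...
Found 12 total routes.
Filtered 3 ignored routes.
Final 7 routes in sitemap.
Routes: /, /pricing, /blog
Sitemap successfully generated at /path/to/project/public/sitemap.xml
```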

What this prompt does

  1. Analyzes your codebase to detect your domain, routes, private paths, and any existing CMS/database fetching patterns — only asks you if something can't be determined automatically
  2. Creates the sitemap generator script with the exact code provided, then adapts only the config section based on what it found
  3. If your app fetches dynamic content (blog posts, products, etc.) from a CMS or database, it reuses your existing client/query pattern to fetch all slugs at build time
  4. Adds a Vite plugin with the exact code provided so the sitemap regenerates on every build
  5. Verifies the output by running the build and showing you the generated sitemap

For a detailed walkthrough of how the script works under the hood, see the full blog post.

Next steps: Google Search Console

Once your meta tags, robots.txt, and sitemap are in place, verify your domain in Google Search Console, submit your sitemap URL, and request indexing for your most important pages.
