Free Tool
Robots.txt Analyzer
Check which crawlers are allowed or blocked by your robots.txt file. See rules for Googlebot, GPTBot, ClaudeBot, and 20+ other bots.
Robots.txt Examples
Copy these templates to control how crawlers access your site.
Allow All Crawlers
Best for most websites that want maximum visibility
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
Block AI Training Only
Allow search engines but block AI training crawlers
User-agent: *
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

Sitemap: https://example.com/sitemap.xml
Allow Search + AI Citations
Be indexed by search engines and cited by AI assistants
User-agent: *
Allow: /

# Allow AI assistants to cite your content
User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://example.com/sitemap.xml
Block Specific Paths
Allow most content but protect admin/private areas
User-agent: *
Allow: /
Disallow: /admin/
Disallow: /api/
Disallow: /private/
Disallow: /*.json$

Sitemap: https://example.com/sitemap.xml
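If you want to sanity-check a template before deploying it, Python's standard-library `urllib.robotparser` can evaluate rules the same way a well-behaved crawler would. The sketch below parses the "Block AI Training Only" template (sitemap line omitted, since it doesn't affect crawl rules). One caveat: `urllib.robotparser` does not understand wildcard patterns like `/*.json$`, so test literal paths only; Google and some other crawlers do support those patterns.

```python
# Check the "Block AI Training Only" template with Python's stdlib parser.
from urllib import robotparser

TEMPLATE = """\
User-agent: *
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(TEMPLATE.splitlines())

# Search crawlers have no dedicated group, so they fall through to "*"
# and remain allowed everywhere.
print(rp.can_fetch("Googlebot", "https://example.com/page"))  # True

# AI training crawlers match their own group and are blocked site-wide.
print(rp.can_fetch("GPTBot", "https://example.com/page"))     # False
```

The same `parse`-then-`can_fetch` pattern works for any of the templates above: swap in the template text and the user-agent strings you care about.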
For AI-built JavaScript Sites
Allowing crawlers is just step one.
If your site is built with React, Vue, or Angular, crawlers may see an empty page even when your robots.txt is configured correctly. LovableHTML pre-renders your JavaScript into static HTML that every crawler can read.
- Get indexed by Google in hours, not weeks
- Be cited by ChatGPT, Claude & Perplexity
- No code changes required