robots.txt Generator
This free robots.txt generator lets you configure crawl rules, set per-bot overrides for search engines and AI crawlers, add disallowed paths or URL patterns, specify your sitemap, and download a ready-to-upload file.
General
One per line or comma-separated for multiple sitemaps.
Use /directory/ for folders or /*?utm_ for URL parameter patterns. Use Allow to override a Disallow for a specific path.
“Same as default” follows the global rule above. Override individual bots to allow or block them regardless.
Configure your robots.txt below. Output updates in real time.
robots.txt output
Upload this file to the root of your domain as /robots.txt.
What is a robots.txt file?
A robots.txt file is a plain-text file placed at the root of your website (yourdomain.com/robots.txt) that tells web crawlers which pages or sections they should or shouldn't access.
It uses a simple directive format:
User-agent: Googlebot
Disallow: /admin/
Allow: /blog/
Sitemap: https://yourdomain.com/sitemap.xml
User-agent specifies which bot the rules apply to. * applies to all bots. Disallow tells crawlers not to access a path. Allow creates an exception. Sitemap points crawlers to your XML sitemap.
Robots.txt is not a security mechanism — it's a courtesy protocol. Most reputable crawlers respect it, but some do not. For truly private content, use authentication or noindex meta tags instead.
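To see how a compliant crawler interprets these directives, here is a short sketch using Python's standard-library urllib.robotparser, parsing the example rules above (the domain and paths are placeholders, not rules this tool requires):

```python
from urllib.robotparser import RobotFileParser

# The example directives from above, as a compliant crawler would read them.
rules = """\
User-agent: Googlebot
Disallow: /admin/
Allow: /blog/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A well-behaved bot asks before every fetch.
print(parser.can_fetch("Googlebot", "https://yourdomain.com/admin/settings"))  # blocked
print(parser.can_fetch("Googlebot", "https://yourdomain.com/blog/post"))       # allowed
```

Note that this check happens entirely on the crawler's side, which is exactly why robots.txt is a courtesy rather than an enforcement mechanism.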
How to use this robots.txt generator
Using this online robots.txt generator takes less than a minute:
- Set your global rule. Choose whether all crawlers are allowed or blocked by default. This becomes the User-agent: * block at the top of your file.
- Add path rules. Use Disallow to block directories or URL patterns (e.g. /admin/ or /*?utm_), and Allow to carve out exceptions within a disallowed section.
- Configure search engine crawlers. Override defaults for Googlebot, Bingbot, Baiduspider, YandexBot, and more — useful when you want different crawl behaviour across markets.
- Control AI crawlers. Decide whether bots like GPTBot, ClaudeBot, Google-Extended, and Perplexity can access your content. Each bot is listed with its purpose, so you know exactly what you're allowing or blocking.
- Add your sitemap URL. Declaring your sitemap in robots.txt helps crawlers discover your content faster.
- Download or copy your file. The output updates in real time. Once you're happy, hit Download and upload the file to your domain root at yourdomain.com/robots.txt.
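As an illustration of what the steps above can produce (the paths, blocked bot, and sitemap URL are placeholders, not defaults of this tool), a generated file might look like this:

```
User-agent: *
Disallow: /admin/
Disallow: /*?utm_
Allow: /admin/public/

User-agent: GPTBot
Disallow: /

Sitemap: https://yourdomain.com/sitemap.xml
```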
What makes this robots.txt generator different?
Comprehensive bot coverage
Most robots.txt generators only cover the major search engines. This one also includes the full set of AI crawlers — ClaudeBot, GPTBot, Google-Extended, Meta-ExternalAgent, Applebot-Extended, PerplexityBot, and more — giving you complete control over which bots can access your site in 2025 and beyond.
Per-bot custom rules
Don't want to block an entire crawler — just certain paths? Set custom rules per bot. Allow Googlebot everywhere while restricting a specific AI crawler to only your blog, for example.
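For example, a per-bot configuration along those lines might look like this (ClaudeBot is just one possible choice of AI crawler here):

```
# Googlebot may crawl everything.
User-agent: Googlebot
Disallow:

# ClaudeBot may only crawl the blog.
User-agent: ClaudeBot
Disallow: /
Allow: /blog/
```

Because the more specific Allow rule takes precedence over the broader Disallow, ClaudeBot can reach /blog/ and nothing else.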
Real-time output
Your robots.txt file builds as you configure. No form submission, no waiting. See the exact output before you download it.
Free and instant — no account needed
This is a completely free robots.txt generator. No signup, no email, no API key. Generate your robots.txt online and download it immediately.
Frequently Asked Questions
Where do I upload my robots.txt file?
Your robots.txt file must be placed at the root of your domain — for example, https://www.example.com/robots.txt. It will not be recognised at any other path.
Should I block AI crawlers in my robots.txt?
That depends on your goals. If you don't want your content used for AI model training, you can block bots like GPTBot, ClaudeBot, Google-Extended, and CCBot. Note that some AI crawlers (like ChatGPT-User) are documented as not respecting robots.txt, so blocking them via robots.txt may have limited effect. This tool labels each bot clearly so you can make an informed decision.
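To illustrate, opting out of AI training crawlers typically means adding a block per bot, using the user-agent tokens the vendors publish:

```
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```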
Is a robots.txt file required for SEO?
Not strictly required, but strongly recommended. Without one, crawlers assume every page is open to crawling. A well-configured robots.txt helps you avoid wasting crawl budget, keep crawlers out of admin areas, and signal your sitemap location — all of which support good technical SEO.
How do I check if my robots.txt is working?
You can view your live file by navigating directly to yourdomain.com/robots.txt in your browser, or use an online robots.txt tester to check whether specific URLs are blocked. You can also use Google Search Console's robots.txt report to confirm that Googlebot can read your file.
Related Tools
robots.txt Tester & Validator
Test if any URL is allowed or blocked by your robots.txt. Check rules for Googlebot, AI crawlers, and custom user-agents instantly. Free online robots.txt checker.
Free XML Sitemap Extractor
The XML sitemap extractor is a free, browser-based tool that lets you instantly extract all the URLs from an XML sitemap.