robots.txt Tester & Validator
This free robots.txt tester lets you validate your rules before they cause problems: paste your robots.txt or fetch the live file directly from your site, enter the URLs you want to check, choose a user-agent, and see instantly whether each path is allowed or blocked.
How to use this robots.txt tester
Testing your robots.txt file takes three steps:
- Load your robots.txt. Either paste the contents directly into the editor, or use the fetch field to pull the live file from your website automatically.
- Select a user-agent. Choose from common bots — Googlebot, Bingbot, ClaudeBot, GPTBot, and many more — or enter a custom user-agent string to test against any crawler.
- Enter the URLs you want to test. Add one or more paths (e.g. /admin/, /blog/article-slug/) and run the test. Each URL will show as allowed or blocked based on your current robots.txt rules.
The results update immediately, so you can tweak your robots.txt content in the editor and re-test without reloading the page.
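The allowed/blocked check the steps above describe can be sketched with Python's standard-library `urllib.robotparser`. This is a simplified model, not this tool's implementation — the robots.txt contents and paths below are illustrative, and note that Python's parser applies rules in file order, which can differ from Google's longest-match precedence:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content (not from a real site).
robots_txt = """\
User-agent: *
Disallow: /admin/
Allow: /blog/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check a few paths for a given user-agent.
for path in ("/admin/", "/blog/article-slug/", "/contact"):
    verdict = "allowed" if rp.can_fetch("Googlebot", path) else "blocked"
    print(f"{path}: {verdict}")
```

Here `/admin/` comes back blocked, while `/blog/article-slug/` and `/contact` come back allowed — the same kind of per-path verdict this tester shows.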
Why you need a robots.txt checker
Robots.txt syntax is deceptively simple, but small mistakes have large consequences. A missing trailing slash, a wildcard pattern in the wrong position, or a Disallow rule that's slightly too broad can block pages you never intended to hide from Google.
Common issues a robots.txt validator catches:
- Pages accidentally blocked by robots.txt. If a key landing page or product page is disallowed, it won't be indexed — and you won't rank. This tool makes it easy to test any URL against your live rules.
- Conflicting Allow and Disallow directives. When both apply to the same path, the more specific rule wins. Testing each path individually removes the guesswork.
- AI crawler rules not working as expected. With new bots like GPTBot, ClaudeBot, and Google-Extended now active, it's worth checking that your AI crawler rules are behaving correctly — not just your search engine rules.
- Wildcard patterns matching unintended paths. Patterns using * and $ can match more than you expect. Use this robots.txt test tool to verify exact behaviour before deploying changes.
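The precedence and wildcard behaviour described above can be sketched in a few lines of Python. This is a simplified, hypothetical matcher (the rules and paths are illustrative, not any crawler's actual implementation), following the commonly documented convention that the longest matching pattern wins and Allow beats Disallow on a tie:

```python
import re

def rule_matches(pattern: str, path: str) -> bool:
    """Match a robots.txt pattern against a URL path.

    '*' matches any run of characters; a trailing '$' anchors the
    end of the path. Everything else is matched literally.
    """
    regex = "".join(
        ".*" if ch == "*" else "$" if ch == "$" else re.escape(ch)
        for ch in pattern
    )
    return re.match(regex, path) is not None

def is_allowed(rules, path):
    """Longest matching pattern wins; on a tie, Allow beats Disallow."""
    best = None  # (pattern length, allowed?)
    for directive, pattern in rules:
        if rule_matches(pattern, path):
            candidate = (len(pattern), directive.lower() == "allow")
            if best is None or candidate > best:
                best = candidate
    return best is None or best[1]

# Illustrative rules: a broad wildcard Disallow plus a narrower Allow.
rules = [("Disallow", "/private*"), ("Allow", "/private/public/")]

print(is_allowed(rules, "/private/public/page"))  # True: Allow is longer
print(is_allowed(rules, "/private/docs"))         # False: only Disallow matches
print(is_allowed(rules, "/blog/"))                # True: no rule matches
```

The `/private*` wildcard covers every path under `/private`, yet the longer `Allow: /private/public/` still wins for that subtree — exactly the kind of interaction worth testing per path rather than guessing.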
If you're using Google Search Console, it includes a robots.txt report (which replaced the legacy robots.txt tester) — but it only covers Googlebot, and it's tied to your verified property. This tool lets you check any website's robots.txt against any bot, without authentication.
Frequently Asked Questions
How do I check my robots.txt file?
There are two main ways. You can view your live robots.txt by navigating to yourdomain.com/robots.txt in a browser. To actually test whether specific URLs are allowed or blocked for a particular bot, use a dedicated robots.txt checker like this one — paste your file, choose a user-agent, enter a path, and get an instant result.
How do I know if a page is blocked by robots.txt?
The most direct method is to test it here: paste your robots.txt, enter the page URL, select the relevant user-agent, and run the check. You can also look in Google Search Console's page indexing report, where affected pages appear with the "Blocked by robots.txt" status. In the URL Inspection tool, GSC will explicitly flag if a URL is blocked.
What does "blocked by robots.txt" mean in Google Search Console?
It means Googlebot attempted to crawl a URL and found it disallowed in your robots.txt. The page won't be indexed. If this is intentional (e.g. for admin pages or duplicate content), no action is needed. If it's affecting pages you want indexed, check your robots.txt rules with this tester to identify the conflicting directive.
Why does my robots.txt block everything even though I only disallowed one path?
Usually this means a wildcard pattern is matching more than intended, or there's a Disallow: / rule somewhere in the file. Paste your robots.txt into this tester, select * as the user-agent, and test a few different paths to isolate which rule is triggering.
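To see this failure mode concretely, here is a minimal sketch using Python's standard-library parser (the robots.txt content is hypothetical): a single stray Disallow: / line blocks every path, regardless of the narrower rule above it.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: the author meant to block only /admin,
# but a stray "Disallow: /" blocks the whole site.
robots_txt = """\
User-agent: *
Disallow: /admin
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

for path in ("/", "/blog/", "/about"):
    verdict = "allowed" if rp.can_fetch("*", path) else "blocked"
    print(f"{path}: {verdict}")  # every path prints "blocked"
```

Testing several unrelated paths like this is the quickest way to tell whether one rule or the whole file is at fault.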
Related Tools
robots.txt Generator
Generate a robots.txt file in seconds. Configure crawl rules for search engines and AI bots, block specific paths, and download your file instantly. Free, no signup.
Free XML Sitemap Extractor
The XML sitemap extractor is a free, browser-based tool that lets you instantly extract all the URLs from an XML sitemap.