StoreConnect automatically provides a robots.txt file for each store, and the file is regenerated daily.
robots.txt is available at all times at your-store.com/robots.txt. If you are running multiple stores within StoreConnect, each store has its own, unique robots.txt file.
What is the robots.txt file?
This is a file StoreConnect generates automatically, accessible at your-store.com/robots.txt. When downloading it, use your browser's Private Mode to ensure you get the most current version; otherwise you could be downloading an older version from your browser's cache.
The generated robots.txt file looks like this:
```
# Example Store robots.txt file per http://www.robotstxt.org/
User-agent: *
Allow: /
Sitemap: https://store.example.com/sitemap.xml
```
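To see how crawlers interpret these directives, you can parse the example file above with Python's standard `urllib.robotparser` module (this is a quick sanity check, not part of StoreConnect itself; the domain and path are just illustrative):

```python
from urllib.robotparser import RobotFileParser

# The example robots.txt content shown above.
ROBOTS_TXT = """\
# Example Store robots.txt file per http://www.robotstxt.org/
User-agent: *
Allow: /
Sitemap: https://store.example.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# "Allow: /" for "User-agent: *" means every crawler may fetch any path.
print(parser.can_fetch("*", "https://store.example.com/products"))  # True
```

Because the default rules allow everything, `can_fetch` returns `True` for any URL on the store; a crawler is only restricted if `Disallow` rules are added.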
robots.txt: A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of a search engine. To keep a web page out of a search engine, block indexing with noindex or password-protect the page.