System > Configuration > General > Design > Search Engine Robots
This section lets you customize the robots.txt file to control which pages of your site are indexed by search engines. Search engine robots, or “bots,” are automated programs that crawl and index web content, including your store. The robots.txt file is a plain-text file that bots check whenever they visit your website, and it is the de facto standard for specifying which parts of a site may be indexed. When a search engine robot visits your website, it first looks for a robots.txt file; if one is found, the robot follows the instructions in it.
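The lookup behavior described above can be sketched with Python's standard `urllib.robotparser` module, which interprets robots.txt rules the same way a compliant bot does. The rules and URLs below are hypothetical examples, not taken from any particular store:

```python
from urllib.robotparser import RobotFileParser

# Parse a sample set of robots.txt rules (hypothetical paths, for illustration)
rp = RobotFileParser()
rp.parse([
    "User-agent: *",          # the rules below apply to all robots
    "Disallow: /checkout/",   # robots should not visit anything under /checkout/
])

# A compliant bot checks the rules before fetching a page
print(rp.can_fetch("*", "https://example.com/checkout/cart"))   # disallowed
print(rp.can_fetch("*", "https://example.com/catalog/shirts"))  # allowed
```

Pages not matched by any `Disallow` rule remain fetchable, which is why only explicitly disallowed paths are excluded from indexing.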
|Default Robots||Select a default action for search engine robots from the list. Note: Applicable pages are those that are not excluded using the “Disallow” feature of the robots.txt file.|
|Edit custom instruction of robots.txt File||Select instructions to add to your customized file from the list. The first instruction, “User-agent: *”, tells robots that the rules listed below it apply to all robots. A “Disallow:” instruction followed by a path tells robots not to visit the specified page or directory. These instructions are written to your site's robots.txt file.|
|Reset to Default||Set to Yes or No. Selecting Yes deletes your custom instructions and resets the robots.txt file to default settings.|
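Putting the instructions above together, a minimal custom robots.txt file might look like the following. The disallowed paths are hypothetical examples, not recommendations for any specific store:

```text
User-agent: *          # the rules below apply to all robots
Disallow: /checkout/   # hypothetical: keep checkout pages out of search results
Disallow: /customer/   # hypothetical: keep customer account pages out of search results
```

Setting Reset to Default to Yes would discard custom rules like these and restore the file's default contents.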