A file that tells a search engine's web crawler "do not crawl me!" Placing a file named robots.txt in the root directory of a website lets the webmaster ask all web crawlers, or only specific ones, not to access the site's pages, and thereby keep those pages out of search engine results. Compliance is voluntary: well-behaved crawlers honor the file, but it is a request, not an enforcement mechanism.
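A minimal robots.txt sketch is shown below; the crawler name `BadBot` and the path `/private/` are illustrative examples, not references to any real crawler or site.

```text
# Ask all crawlers to stay out of one directory
User-agent: *
Disallow: /private/

# Ask one specific (hypothetical) crawler to avoid the whole site
User-agent: BadBot
Disallow: /
```

Each `User-agent` line names which crawler the following rules apply to, and each `Disallow` line lists a path prefix that crawler is asked not to fetch.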