The robots.txt file is then parsed, and it instructs the robot as to which web pages should not be crawled. Because a search engine crawler may keep a cached copy of this file, it might occasionally crawl pages a webmaster does not wish to be crawled. https://edgarvlzod.tinyblogging.com/detailed-notes-on-seo-78418167
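As a minimal sketch of how a crawler might consult robots.txt before fetching a page, Python's standard-library urllib.robotparser can be used; the domain and user-agent name below are hypothetical:

    from urllib import robotparser

    # Fetch and parse the site's robots.txt (hypothetical example domain).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # Downloads and parses the file; a crawler may cache this result.

    # Before crawling a page, ask whether the rules allow it for our user agent.
    page = "https://example.com/private/page.html"
    if rp.can_fetch("ExampleBot", page):
        print("Allowed to crawl:", page)
    else:
        print("Disallowed by robots.txt:", page)

Because crawlers typically cache the parsed rules rather than re-fetching robots.txt on every request, a stale cached copy is one reason a recently disallowed page may still be crawled for a while.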