Question
Task 4: Robots Exclusion Standard

The robots exclusion standard is a convention used by websites to communicate with web crawlers, mainly to declare which pages the site's author does not wish to have crawled. For an example, see Wikipedia's robots.txt. The standard is explained at: http://www.robotstxt.org/robotstxt.html

Write a robots.txt file that allows full access to all web crawlers, but that disallows crawling by the crawler of Microsoft's Bing (user agent: "Bingbot") entirely, and that allows Google's main crawler to access every page except the paths "/secret.html" and "/keep/out/". Note: use only the standard directives "User-agent" and "Disallow".

Explanation / Answer
User-agent: Bingbot
Disallow: /

User-agent: Googlebot
Disallow: /secret.html
Disallow: /keep/out/

User-agent: *
Disallow:

Records for different user agents are separated by blank lines. The final group with an empty Disallow explicitly grants every other crawler full access; Bingbot and Googlebot match their own, more specific groups first, so the wildcard rules do not apply to them.
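As a sanity check, the rules can be exercised with Python's standard-library robots.txt parser, `urllib.robotparser`, which implements the same exclusion standard. The sketch below embeds the answer file as a string (including an explicit wildcard group allowing all other crawlers) and queries it for a few representative user-agent/path pairs:

```python
# Sanity-check the robots.txt answer with Python's built-in parser.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: Bingbot
Disallow: /

User-agent: Googlebot
Disallow: /secret.html
Disallow: /keep/out/

User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Bingbot is shut out entirely.
print(rp.can_fetch("Bingbot", "/"))                     # False

# Googlebot is blocked only from the two listed paths...
print(rp.can_fetch("Googlebot", "/secret.html"))        # False
print(rp.can_fetch("Googlebot", "/keep/out/page.html")) # False

# ...but may crawl everything else.
print(rp.can_fetch("Googlebot", "/index.html"))         # True

# Any other crawler matches the wildcard group and has full access.
print(rp.can_fetch("SomeOtherBot", "/secret.html"))     # True
```

Note that `Disallow:` with an empty value means "disallow nothing", i.e. allow everything, which is how the wildcard group grants full access.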