
What does crawl-delay: 10 mean in robots.txt?

The crawl-delay directive is an unofficial directive that asks crawlers to slow down their crawling so they don't overload the web server.

Some search engines don't support the crawl-delay directive at all, and those that do interpret it in different ways.
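For illustration, this is what the directive looks like in context of a robots.txt file (a minimal sketch; the disallowed path is just a placeholder):

    User-agent: *
    Disallow: /admin/
    Crawl-delay: 10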

How does Google interpret crawl-delay: 10?

Google doesn't support the crawl-delay directive, so its crawlers will simply ignore it.

If you want to ask Google to crawl slower, you need to set the Crawl rate in Google Search Console:

Setting crawl rate in GSC

  1. Log in to the old Google Search Console.
  2. Choose the website you want to define the crawl rate for.
  3. There's only one setting you can tweak: Crawl rate, with a slider to set your preferred rate. By default it's set to "Let Google optimize for my site (recommended)".

Bing and Yahoo and crawl-delay: 10

Bing and Yahoo support the crawl-delay directive. With crawl-delay: 10, they divide the day into 10-second windows and crawl at most one page within each window.
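To put that in perspective: a day has 86,400 seconds, so crawl-delay: 10 caps their crawling at roughly 8,640 pages per day for your site.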

Yandex and crawl-delay: 10

Yandex supports the crawl-delay directive. With crawl-delay: 10, it waits at least 10 seconds before requesting another URL.

While Yandex supports this directive, they recommend defining crawl speed settings in Yandex.Webmaster, their equivalent of Google Search Console.
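Because support differs per search engine, one option is to scope the directive to specific user agents so it only applies to crawlers that honor it. A minimal sketch (the delay values are arbitrary examples):

    User-agent: bingbot
    Crawl-delay: 10

    User-agent: Yandex
    Crawl-delay: 5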

Baidu and crawl-delay: 10

Baidu does not support the crawl-delay directive and, like Google, will ignore it. You can define your preferred crawl frequency through Baidu Webmaster Tools.

Read the full Academy article to learn everything about Robots.txt for SEO: The Ultimate Guide