For Google SEO, we normally want search engine spiders to crawl and index a website's content every day. However, if your server has limited resources, Googlebot crawling too frequently can exhaust them or slow the site down. In such cases, you can consider moderately reducing the crawl rate so the website stays accessible and isn't overwhelmed by spider traffic.
Setting Googlebot's crawl speed
Google uses advanced algorithms to determine the optimal crawl speed for a website. Each time the Google search spider visits your site, it crawls as many pages as possible without overloading your server's bandwidth. If Google sends too many requests per second and slows your server down, you can limit the speed at which Google crawls your site. You can limit the crawl speed for root-level websites (e.g., www.example.com and http://subdomain.example.com), but not for non-root-level paths (e.g., www.example.com/folder).

The crawl speed you set is an upper limit for Googlebot; Googlebot will not necessarily reach it. Unless you notice server load problems and have determined they are caused by Googlebot accessing your server too frequently, we recommend not limiting the crawl speed.
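Before limiting anything, it helps to confirm in your server logs that Googlebot really is the source of the load. A minimal sketch, assuming Apache-style "combined" log lines (the sample entries, IPs, and paths below are made up for illustration; adapt the regex to your own log format):

```python
import re
from datetime import datetime

# Hypothetical sample lines in Apache combined log format.
SAMPLE = [
    '66.249.66.1 - - [10/Mar/2024:12:00:00 +0000] "GET /a HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [10/Mar/2024:12:00:30 +0000] "GET /b HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.9 - - [10/Mar/2024:12:00:40 +0000] "GET /c HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]

TIMESTAMP = re.compile(r'\[([^\]]+)\]')

def googlebot_hits_per_minute(lines):
    """Count requests whose user-agent mentions Googlebot, bucketed by minute."""
    buckets = {}
    for line in lines:
        if "Googlebot" not in line:
            continue
        m = TIMESTAMP.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%d/%b/%Y:%H:%M:%S %z")
        key = ts.strftime("%Y-%m-%d %H:%M")
        buckets[key] = buckets.get(key, 0) + 1
    return buckets

print(googlebot_hits_per_minute(SAMPLE))  # {'2024-03-10 12:00': 2}
```

In practice you would stream a real access log instead of a hard-coded list. Also note that user-agent strings can be spoofed, so Google recommends verifying suspected Googlebot traffic via a reverse DNS lookup before acting on it.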
Specific steps: open the resource's "Crawl Speed Settings" page.
- If the crawl speed shows "Calculated optimal speed", the only way to reduce it is to submit a special request. You cannot increase the crawl speed.
- Otherwise, select the corresponding option and limit the crawl speed as needed. The new crawl speed is valid for 90 days.
Setting the crawl speed for all search engine spiders
In addition to per-engine settings, you can use the Crawl-delay directive in your robots.txt file to throttle crawling. Many search engines honor this parameter, which specifies how many seconds a crawler should wait between consecutive requests to the same server (note that Googlebot itself ignores Crawl-delay, though Bing and some other crawlers respect it):
```
User-agent: *
Crawl-delay: 10
```
Just add the lines above to your website's robots.txt file and wait for search engine spiders to pick them up. Although it is rare for spiders to crash a website by crawling it, it does happen. An ordinary foreign-trade company website doesn't have much content to begin with, so there is no need for spiders to crawl it around the clock. After all, a slower website hurts both SEO results and user experience. So when you find your site being crawled aggressively, this is a measure worth considering.
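As a quick sanity check, Python's standard-library urllib.robotparser can parse the directive back out, so you can verify your robots.txt says what you intended (a minimal sketch):

```python
import urllib.robotparser

# Parse the same robots.txt rules shown above and read the delay back.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Crawl-delay: 10",
])

# crawl_delay() returns the Crawl-delay value for a given user agent,
# or None if the directive is absent.
print(rp.crawl_delay("*"))  # 10
```

Well-behaved crawlers that honor Crawl-delay do essentially this same parsing before scheduling their requests.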