A crawler is a program used by search engines to traverse the internet, collecting and indexing data as it goes.
A crawler reaches a site via a hyperlink. It reads the site’s content and embedded links, then follows those links away from the site. The crawler repeats this process until it has visited and indexed every page it can reach through links. It essentially crawls the web, hence the name.
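The traversal described above can be sketched as a breadth-first search over a link graph. This is a minimal illustration, not a real crawler: the page names and the in-memory `LINKS` dictionary are hypothetical stand-ins for fetching pages over HTTP and parsing the links out of their HTML.

```python
from collections import deque

# Hypothetical in-memory "web": each page maps to the pages it links to.
# A real crawler would fetch each page over HTTP and parse its HTML for links.
LINKS = {
    "example.com/": ["example.com/about", "example.com/blog"],
    "example.com/about": ["example.com/"],
    "example.com/blog": ["example.com/blog/post-1"],
    "example.com/blog/post-1": ["example.com/"],
}

def crawl(start):
    """Breadth-first traversal: visit a page, record it, queue its links."""
    visited = set()
    queue = deque([start])
    while queue:
        page = queue.popleft()
        if page in visited:
            continue
        visited.add(page)  # stands in for "indexing" the page
        for link in LINKS.get(page, []):
            if link not in visited:
                queue.append(link)
    return visited
```

Starting from the homepage, the crawl reaches every page that is connected by links; a page with no inbound links would never be discovered, which is why link structure matters.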
Why are crawlers important to SEO?
Search engine optimisation is influenced by crawlers in several ways.
Firstly, websites that are easier to crawl will be favoured over those that aren’t. If a site is easy to visit and navigate, with the most important pages as few clicks from your homepage as possible, it will be more readable not only for crawlers but for users too. Crawlers also use sitemaps to help them identify the important content on a site.
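A sitemap is simply an XML file listing the URLs you want crawlers to find. A minimal example, following the standard sitemap protocol (the URLs and dates here are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/important-page</loc>
    <lastmod>2024-01-10</lastmod>
  </url>
</urlset>
```

The file is typically placed at the site root (e.g. `/sitemap.xml`) and referenced from `robots.txt`, so crawlers can discover important pages even if the internal link structure misses them.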
Secondly, crawlers follow the links to and from a site, as well as the internal links on a site. A crawlable internal link structure is necessary for a crawler to index all of the pages on a website, and good quality links to and from your site (external links) are important as they show that your site is reputable and has high-quality content.
Thirdly, crawlers index the content on a site that hasn’t been tagged nofollow. They also look for keywords, so the search engine knows which words and queries each page should rank for, and they check for duplicate content, copied from other sites or from other pages on the same site, to ensure the website’s content is unique.
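The nofollow hint can be applied at two levels, and both forms below follow the standard HTML conventions (the link URL is a placeholder):

```html
<!-- Link-level: ask crawlers not to follow or pass credit through this link -->
<a href="https://example.com/untrusted-page" rel="nofollow">Untrusted link</a>

<!-- Page-level: a robots meta tag telling crawlers to follow none of this page's links -->
<meta name="robots" content="nofollow">
```

The link-level attribute is the more common of the two, typically used on user-generated or paid links.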
Is there a difference between a crawler and a spider?
At Go Up, clients often ask us what the difference is between a crawler and a spider. There isn’t a difference. Spider is just another moniker deriving from the fact that the programme crawls the web. A crawler can also be referred to as a robot or a bot.