A web crawler is a software program that follows all the links on a page, discovers new pages through them, and repeats that process until there are no new links or pages left to crawl. Web crawlers go by many names: robots, spiders, search engine bots, or just “bots” for short. They are called robots because they have an assigned job: travel from link to link and capture each page’s information. If you envisioned an actual robot with metal plates and arms, though, that’s not what these bots look like. Google’s web crawler, for example, is named Googlebot.
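The link-following loop described above can be sketched as a breadth-first traversal: keep a queue of pages to visit and a set of pages already seen, and stop when the queue runs dry. This is a minimal illustration, not a real crawler; the toy `site` dictionary stands in for fetching and parsing live pages.

```python
from collections import deque

def crawl(start_url, get_links):
    """Visit every page reachable from start_url, breadth-first.
    get_links(url) returns the links found on that page (here a
    stand-in for actually fetching and parsing the page)."""
    seen = {start_url}
    queue = deque([start_url])
    order = []                       # pages in the order they were crawled
    while queue:
        url = queue.popleft()
        order.append(url)            # "capture" the page
        for link in get_links(url):
            if link not in seen:     # only follow links to new pages
                seen.add(link)
                queue.append(link)
    return order

# Hypothetical toy site: each page maps to the links it contains.
site = {
    "/":            ["/about", "/blog"],
    "/about":       ["/"],
    "/blog":        ["/blog/post-1", "/about"],
    "/blog/post-1": [],
}

print(crawl("/", lambda url: site.get(url, [])))
# → ['/', '/about', '/blog', '/blog/post-1']
```

The `seen` set is what lets the crawler terminate even when pages link back to each other, as `/about` and `/` do here.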