Spider (Robot)


Search engines find web content using automated programs called spiders or robots.

Search engine spiders generally follow links to discover new pages. As a spider reads each page, it looks for links to other pages and sites; these links are added to a master list of pages to spider later.
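To make that crawl loop concrete, here is a minimal sketch in Python. It assumes the third-party requests and BeautifulSoup libraries and a hypothetical starting URL; real spiders are far more elaborate, with politeness delays, robots.txt handling, and so on.

    import urllib.parse
    import requests                  # third-party; assumed available
    from bs4 import BeautifulSoup    # third-party; assumed available

    def crawl(start_url, max_pages=10):
        # The "master list": URLs discovered but not yet spidered.
        queue = [start_url]
        seen = {start_url}
        fetched = 0
        while queue and fetched < max_pages:
            url = queue.pop(0)
            fetched += 1
            try:
                page = requests.get(url, timeout=5)
            except requests.RequestException:
                continue  # skip pages that fail to load
            # Read the page, looking for links to other pages and sites.
            soup = BeautifulSoup(page.text, "html.parser")
            for anchor in soup.find_all("a", href=True):
                link = urllib.parse.urljoin(url, anchor["href"])
                if link not in seen:
                    seen.add(link)
                    queue.append(link)  # add to the master list for later
        return seen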

When a search engine spiders a page, the page may not be added to the engine's public index immediately; it may need to be analyzed first.

One principle of search engine optimization is to make websites easily spiderable. One technique is to create a comprehensive link structure that ensures every page on the site is well linked. For spiderability, links are usually written as plain HTML links rather than complex scripted or dynamic links, which spiders may have trouble following; where sophisticated dynamic menus are used, SEOs often add a parallel set of simple HTML links. Site maps are another tool often used to make sites easy to spider.
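As one illustration of the site-map technique, the sketch below generates a simple XML sitemap in the sitemaps.org format from a list of page URLs. The function name and example URLs are hypothetical, and a site map can equally well be a plain HTML page of links.

    import xml.etree.ElementTree as ET

    def build_sitemap(urls):
        # Root element in the standard sitemaps.org namespace.
        urlset = ET.Element(
            "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
        for page in urls:
            entry = ET.SubElement(urlset, "url")
            ET.SubElement(entry, "loc").text = page  # one <loc> per page
        return ET.tostring(urlset, encoding="unicode")

    # Hypothetical example pages:
    print(build_sitemap([
        "http://www.example.com/",
        "http://www.example.com/products.html",
    ]))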


©Copyright 2000 - 2005, Association of Internet Marketing Professionals, Inc. All rights reserved.
