Put simply, sitemaps let webmasters and website owners help direct Google (and other search engines) to index their web content optimally.
In its simplest form, a sitemap is an XML file listing a set of URLs for a given domain, usually with some extra metadata about each URL. That metadata can include: when the page was last updated, how often it is updated, and its importance relative to the rest of the website.
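As a sketch of what that looks like in practice, here is a minimal single-URL sitemap (the domain and values are placeholders, not from a real site):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- The page's full address (required) -->
    <loc>http://www.example.com/</loc>
    <!-- When the page last changed, in W3C datetime format (optional) -->
    <lastmod>2011-01-15</lastmod>
    <!-- How often the page is expected to change (optional) -->
    <changefreq>weekly</changefreq>
    <!-- Importance relative to other pages on this site, 0.0 to 1.0 (optional) -->
    <priority>0.8</priority>
  </url>
</urlset>
```

Only `<loc>` is required; the other three tags are the optional metadata mentioned above.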

Search engine crawlers (spiders) crawl your website by following links from around the internet and, of course, from within your own website. It is always important to remember that a sitemap is a supplement to that indexation, not a replacement for it. A sitemap should be seen as a helping hand to a search engine, ensuring that all your pages are found and indexed, and that the correct priority is given to each page. For example, you may indicate that your homepage is the most important page and your contact page the least important.
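To illustrate that homepage-versus-contact-page example, the relative importance is expressed with the optional `<priority>` tag (again, the domain and values here are placeholders):

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <priority>1.0</priority> <!-- homepage: most important -->
  </url>
  <url>
    <loc>http://www.example.com/contact</loc>
    <priority>0.1</priority> <!-- contact page: least important -->
  </url>
</urlset>
```

Note that priority is only relative to other pages on your own site; it tells crawlers nothing about how your site compares to anyone else's.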

Sitemap protocol 0.9, published at sitemaps.org, is the current default standard for most sitemaps, though specialised alternatives such as video sitemaps also exist. The 0.9 protocol allows for more than the metadata I mention above, but given it's all a tad boring, here is a link if you want to know more about the sitemap schemas. The protocol is widely adopted by search engines, including Google, Bing and Yahoo (now powered in part by Bing).

At this point in our SiteMaps series I feel it is important to make clear that most sources, from Wikipedia to SEOmoz, don't think sitemaps can have a huge effect on search engines. This is one point I am challenging in my SiteMap Experiment: I believe a sitemap has some effect on indexation rates and, to a milder extent, on ranking. Now, on day 7 of the experiment, it is already becoming clear that the number of indexed pages has increased a fair amount; see the numbers from the sitemap experiment.

Next: SiteMaps – What’s inside?