
Google Guideline - How spiders view your site

In its "Guidelines for Webmasters" document Google notes that "search engine spiders see your site much as Lynx would".

A web spider is a program that crawls through the internet looking for content (see here for more definitions).

Lynx is a web browser that was used in the good old days of the internet, before we had fancy things like mice, graphics, or sliced bread. Put very simply, Lynx is a bare-bones web browser that supports a minimal set of features. You can download a free copy from this website. Lynx has uses beyond SEO (such as pinging a webpage from a crontab), but for SEO it is mainly used for usability and visibility testing.
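If you want to script this sort of check, here is a minimal sketch of grabbing Lynx's text-only rendering of a page from Python. It assumes the lynx binary is installed and on your PATH, and the URL is just a placeholder:

    # text_view.py - a minimal sketch of capturing Lynx's text-only view
    # of a page. Assumes the "lynx" binary is installed and on your PATH;
    # the example URL is a placeholder.
    import subprocess

    def lynx_dump(url: str) -> str:
        """Return the text-only rendering of a page, roughly as a spider sees it."""
        result = subprocess.run(
            ["lynx", "-dump", url],  # -dump prints the rendered page to stdout
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(lynx_dump("https://example.com/"))

If your headline text, navigation, or body copy doesn't show up in that dump, a text-only spider probably isn't seeing it either.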

If you don't feel like installing new software, there are a number of online spider emulators that will try to show you how a spider views your website. One that I found is available here.

Now that we have the means to see how Google's spiders view our website, we can look at what implications the guideline has for our site.

Firstly, we need to realize that search spiders "crawl" through your site by following links. Obviously, if a spider is unable to read a link, it won't find the page the link points to.

Certain technologies can make links invisible to spiders. Google can now index text from Flash files and supports common JavaScript methods. It doesn't currently support Microsoft Silverlight, so you should avoid using it (it's probably a good idea to steer away from Microsoft proprietary formats anyway, no matter how much the crazy monkey man screams "developers!" and sweats in his blue shirt).
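To see why a script-driven "link" can be invisible, here is a rough illustration of what a simple crawler actually extracts. The sample HTML is made up for the demo, and only Python's standard library is used:

    # link_audit.py - a rough illustration of which links a simple spider
    # can follow. The sample HTML below is invented for the demo.
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collect href values from <a> tags, the way a basic spider would."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    sample_html = """
    <a href="/products.html">Products</a>
    <span onclick="window.location='/hidden.html'">Hidden page</span>
    """

    parser = LinkExtractor()
    parser.feed(sample_html)
    print(parser.links)  # ['/products.html'] - the onclick "link" is never found

The onclick navigation works fine for a human with a JavaScript-capable browser, but a parser looking for href attributes never discovers /hidden.html.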

Google maintains an easy-to-read list of technologies that it supports. You can find it online here.

View your site in a spider emulator or Lynx and make sure that you can navigate through the links. If you can't, then there's a good chance that Google can't either.

One way to nudge spiders along is to provide a sitemap, which also helps your human readers. Remember that Google does not like you to have more than 100 links on a page, so if you have a large site, try to identify key pages rather than providing an exhaustive list.
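As a quick sanity check against that 100-link guideline, here is a sketch that counts the links on a page. The URL is a placeholder for your own sitemap page, and only standard library modules are used:

    # link_count.py - a quick sketch for counting links on a page, to check
    # a sitemap against the 100-link guideline. The URL is a placeholder.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCounter(HTMLParser):
        """Count <a> tags that carry an href attribute."""
        def __init__(self):
            super().__init__()
            self.count = 0

        def handle_starttag(self, tag, attrs):
            if tag == "a" and any(name == "href" for name, _ in attrs):
                self.count += 1

    url = "https://example.com/sitemap.html"  # placeholder - use your own page
    html = urlopen(url).read().decode("utf-8", errors="replace")
    counter = LinkCounter()
    counter.feed(html)
    print(f"{url} contains {counter.count} links")

If the count comes back in the hundreds, that's a hint to split the sitemap up or trim it down to key pages.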

Some people argue that if you need a sitemap then your navigation system is flawed. Think about it - if your users can't get to content quickly through your navigation system, then how good is your site at providing meaningful content? Personally, I like to balance this out and provide sitemaps as an additional "bonus" while still ensuring that all my content is within two clicks of the landing page.
