A few days back, I noticed that my PC’s IP address was reported by a website as being 126.96.36.199. That address isn’t in my IP stack, and isn’t my router’s ISP-provided IP address either.
After looking the address up at DNSstuff, I found that it is registered to Google, and then I remembered that I’d installed the Google web accelerator. As far as I can see, this acts as a big proxy server, caching and prefetching my web requests. It claims so far to have speeded up my downloads by 25%, but there are some negative points too:
- Some people have reported being logged in to websites as someone else – it seems that Google caches personal details, so unsecured (HTTP) logins can be served from the cache. HTTPS sessions are not stored, though.
- Some websites have reported problems with Google’s prefetching producing unexpected results on dynamic pages – for example, a prefetched “delete” link can be followed without the user ever clicking on it.
- Some webmasters have decided that the prefetching steals too many of their CPU cycles and have blocked access for anyone with the Google web accelerator enabled. Privacy advocates have concerns of their own about all that traffic passing through Google’s proxies.
- Some sites use geolocation to determine whether or not the viewer is local to the content. I tried to stream some content from the BBC tonight and was met with a message informing me that the content was restricted to UK users (Google’s address is flagged as American – my ISP-provided address is British).
It is possible to stop the web accelerator from caching certain sites, and to switch it on and off without re-installing – details of this, along with how it all works, can be found on the Google web accelerator support page. To be honest, though, that’s a pain in the backside – I already have to switch my proxy settings when I jump from my corporate VPN to my home network, and I don’t want to have to think about another set of proxies. On that basis, I think the web accelerator will be off my PC soon.
As a web site administrator, I’ll also be giving serious thought to implementing a Google web accelerator blocking method (and the update). Rather than blocking IP ranges, I’m more likely to reject requests carrying the X-moz: prefetch header and, instead of sending back a custom HTTP error page, I’ll probably refer visitors to no web accelerator (unnecessary proxying considered harmful).
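For anyone wanting to try this, here is a sketch of how it might look under Apache with mod_rewrite enabled – the web accelerator marks its prefetches with an “X-moz: prefetch” request header, so a rule along these lines should turn them away (test before relying on it; the 403 response is just one choice, and a redirect to an explanatory page would work equally well):

```apache
# Reject prefetch requests (sent with an "X-moz: prefetch" header)
RewriteEngine On
RewriteCond %{HTTP:X-moz} =prefetch
RewriteRule .* - [F,L]
```

Normal user-initiated requests don’t carry that header, so they pass through untouched.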
To Google’s credit, they have published web accelerator information for webmasters. What’s not clear to me, though, is whether or not blocking/ignoring prefetch requests will also prevent Google from crawling my site. I’d rather lose a few bytes to a prefetch than see my page ranking start to slide.
Whilst writing this post, I found that some versions of Firefox also prefetch by default (I’m using Firefox 1.5, and that certainly does). Most websites don’t seem to care about this, as they are looking for Google’s web accelerator IP addresses, but any form of prefetching will load unnecessary content over slow links, or hit web servers with unnecessary requests. For details, read more about prefetching; to turn it off in Firefox, open about:config and set network.prefetch.next to false.
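Firefox’s prefetching is driven by hints in the page itself: when a page includes a link element with rel="prefetch", the browser fetches the named resource during idle time on the assumption that it will be needed next (the file name below is just an illustration):

```html
<!-- Hint to prefetch-capable browsers, such as Firefox,
     that this resource is likely to be needed soon -->
<link rel="prefetch" href="/images/big-banner.png">
```

Google’s search results pages use hints of this sort, which is why Firefox users may find the top result already loading before they click it.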