In April 2010, Google announced that a website’s speed was now a factor in where Google would rank that website in the search results. It’s only one of over two hundred factors Google uses to decide where a site should rank, and how much influence site speed carries on its own has yet to be fully determined. What is known, however, is that Google increasingly relies on a set of “user experience” factors when determining a site’s ranking. Since how fast a site loads is an element of the user experience, one can reasonably assume that the importance of the site speed factor will only grow as Google continues to turn up that dial.
A recent experiment by Zoompf seems to indicate that neither the size of a page nor the total number of images on it has as much impact on ranking as the page’s Time To First Byte (TTFB) does. TTFB is just a more complicated way of saying “how fast your server begins responding in the first place, no matter what or how much data it will ultimately send to the person requesting the page.”
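To make the definition concrete, here is a minimal sketch of measuring TTFB yourself in Python: it times the gap between sending a request and receiving the very first byte of the response. The host name is a placeholder, and this measures only the time after the TCP connection is made (external tools like Zoompf’s may also count DNS lookup and connection setup).

```python
import socket
import time

def measure_ttfb(host, path="/", port=80, timeout=10):
    """Return the seconds between sending an HTTP request and
    receiving the first byte of the server's response."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n"
        )
        start = time.perf_counter()
        sock.sendall(request.encode("ascii"))
        sock.recv(1)  # blocks until the first response byte arrives
        return time.perf_counter() - start

# Hypothetical usage:
# print(f"TTFB: {measure_ttfb('example.com') * 1000:.0f} ms")
```

Run against your own site, a result well above 200ms is the kind of slow server response discussed below.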
Zoompf’s experiment seems to confirm what Google already put forward back in 2010. Just after the official announcement, Matt Cutts (a Google employee who works with the Search Quality team on search engine optimization issues, and who serves as the unofficial Google spokesperson) spoke directly to “site speed” in a blog post:
“I would love if SEOs dive into improving website speed, because (unlike a few facets of SEO) decreasing the latency of a website is something that is easily measurable and controllable.”
The important word there is latency, a term associated with servers sending data across a network. In this case the network is the internet, and latency is the amount of time it takes for the host’s server to receive and process a request for one of the pages of your website. The amount of latency a user experiences depends on how far away the user is from the server physically, but also on how taxed the server itself is for resources (memory, processor), and on how well optimized the back-end software is (especially true for dynamically generated websites). Google’s measure of this latency appears to be TTFB.
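For the back-end-software part of that latency, you first need to see where the time goes. Assuming a Python WSGI back end (purely an illustrative choice, not something the article prescribes), a small middleware can log how long each request takes to process server-side:

```python
import time

def timing_middleware(app):
    """Wrap a WSGI app and report how long the back end spends
    building each response -- the server-side share of TTFB."""
    def wrapped(environ, start_response):
        start = time.perf_counter()

        def timed_start_response(status, headers, exc_info=None):
            # Called when the app is ready to send its response,
            # so elapsed time here approximates processing latency.
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{environ.get('PATH_INFO', '/')} took {elapsed_ms:.1f} ms")
            return start_response(status, headers, exc_info)

        return app(environ, timed_start_response)
    return wrapped
```

Requests that consistently log high numbers point at slow database queries or unoptimized code rather than network distance or server load.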
More evidence that TTFB is being used by Google can be seen in the results of one of the tools Google provides for testing website speed, PageSpeed Insights. Running a known slow website through the tool produces the following message:
“Reduce server response time to under 200ms – In our test, your server responded in 1.8 seconds. There are many factors that can slow down your server response time. Please read our recommendations to learn how you can monitor and measure where your server is spending the most time.”
Even if the page requested above fully loaded for you in three seconds, almost two thirds of that time (1.8 of those 3 seconds) would have nothing to do with what was on the actual page and everything to do with the server the page is hosted on. It seems Google is suggesting that site owners who value the user experience they provide should avoid making users wait for no good reason.
Further, Google does not excuse small businesses just because they do not have money for fast servers. In the same blog post, Matt Cutts had this to say about smaller sites:
“I want to pre-debunk another misconception, which is that this change will somehow help “big sites” who can afford to pay more for hosting. In my experience, small sites can often react and respond faster than large companies to changes on the web.”
As an SEO, when I read between the lines there (as one often needs to do when Google speaks), what I see is: “big sites can’t just modify their websites overnight or switch hosts on a dime, but you are small, you can, so fix it.” So if you’re a small business whose website regularly suffers from a server response time higher than 200ms, fix it. It’ll make for a better user experience on your site, and we already know that Google likes to reward sites that do that.