Google Speed Experiment Reference

A common data point I see brought up in web performance discussions is a test Google ran to measure the impact of slower search results on the number of searches a user performs. I found the original blog post, Speed Matters:

Our experiments demonstrate that slowing down the search results page by 100 to 400 milliseconds has a measurable impact on the number of searches per user of -0.2% to -0.6% (averaged over four or six weeks depending on the experiment).

The impact extends beyond the initial test period:

Users exposed to the 400 ms delay for six weeks did 0.21% fewer searches on average during the five week period after we stopped injecting the delay.

If you are going to reference this test and the corresponding data, please link back to the original Google blog post. Hopefully that will save others the time of hunting down the original information.

The 1% of United States Internet Users

The Verizon / AOL deal has brought up an interesting data point: AOL still has 2.1 million dial-up customers (page 4 in the PDF).

Combine this with the number of Internet users in the United States (~279 million) and you get: ~1% of United States Internet users are on AOL dial-up.
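A quick back-of-the-envelope check of that figure, as a minimal sketch using the numbers quoted above:

```typescript
// Rough check of the "~1%" claim using the figures from the post.
const aolDialUpCustomers = 2.1e6;   // 2.1 million AOL dial-up customers
const usInternetUsers = 279e6;      // ~279 million US Internet users

const share = (aolDialUpCustomers / usInternetUsers) * 100;
console.log(`${share.toFixed(2)}%`); // "0.75%", which rounds to roughly 1%
```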

I ran www.google.com through WebPageTest, comparing Cable (5 Mbps down, 1 Mbps up, 28 ms RTT) to 56k dial-up (49 Kbps down, 30 Kbps up, 120 ms RTT). The result:

And google.com is fast compared to most sites; nearly every modern web site is going to be horribly painful on a dial-up connection.
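If you want to reproduce the comparison, WebPageTest's runtest.php API accepts custom connectivity settings. Here is a minimal sketch, assuming an API key and the Dulles Chrome location; the parameter names follow the documented API, but treat the exact location string and response shape as assumptions to verify:

```typescript
// Sketch: kick off two WebPageTest runs with custom connectivity profiles
// matching the Cable and 56k numbers above. Requires Node 18+ (global fetch).
const API_KEY = "YOUR_WPT_API_KEY"; // placeholder

interface Profile {
  label: string;
  bwDown: number;  // kbps
  bwUp: number;    // kbps
  latency: number; // ms RTT
}

const profiles: Profile[] = [
  { label: "cable", bwDown: 5000, bwUp: 1000, latency: 28 },
  { label: "56k-dialup", bwDown: 49, bwUp: 30, latency: 120 },
];

async function runTest(p: Profile): Promise<void> {
  const params = new URLSearchParams({
    url: "https://www.google.com",
    k: API_KEY,
    f: "json",
    location: "Dulles:Chrome.custom", // assumed location; ".custom" enables the bw/latency params
    bwDown: String(p.bwDown),
    bwUp: String(p.bwUp),
    latency: String(p.latency),
    label: p.label,
  });
  const res = await fetch(`https://www.webpagetest.org/runtest.php?${params}`);
  const body: any = await res.json();
  console.log(p.label, body?.data?.userUrl ?? body); // results URL, if the run was accepted
}

for (const p of profiles) {
  void runTest(p);
}
```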

Scripts At The Bottom Is Not Enough

You may remember one of the original webperf rules was to put scripts at the bottom.

Over the last few years browsers have been adjusting; it is now common for JavaScript to get a higher loading priority via the browser's preloader. This can result in scripts at the bottom effectively loading the same as scripts at the top.

Patrick Meenan on how it is supposed to work (emphasis mine):

– The preload scanner scans the whole document as it comes in and issues fetch requests.
– The main parser sends a signal when it reaches the start of the body and sends fetch requests for any resources it discovers that are not already queued.
– The layout engine increases the priority of in-viewport visible images when it does layout.
– The resource loader treats all script and CSS as critical, regardless of where it is discovered (in the head or at the end of the body). See crbug.com/317785.
– The resource loader loads in 2 phases. The first (critical phase) delays non-critical resources except for 1 image at a time. Once the main parser reaches the body tag it removes the constraint and fetches everything.

At this point you’ll likely need to use async/defer to really push JavaScript loading out of the critical category. Of course that isn’t a slam dunk either. Paul Irish notes this limitation with IE:

TL;DR: don’t use defer for external scripts that can depend on each other if you need IE <= 9 support
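One way to sidestep both the preloader's eagerness and the IE defer-ordering quirk is to keep non-critical scripts out of the markup entirely and inject them after the load event. A minimal sketch; the file name is hypothetical:

```typescript
// Load a non-critical script only after the load event, so it never competes
// with critical resources. Dynamically inserted scripts do not block parsing.
function loadNonCritical(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true; // explicit, though injected scripts are async by default
  document.head.appendChild(script);
}

window.addEventListener("load", () => {
  loadNonCritical("/js/widgets.js"); // hypothetical non-critical bundle
});
```

If several injected scripts depend on each other, setting script.async = false on each preserves execution order without relying on defer.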

Even after 20 years of development there are days when browser environments still feel like the wild west.

The Chrome 41 Bump

Patrick Meenan mentioned first paint time improvements in Chrome 41. I noticed a ~25% improvement in the first view SpeedIndex times for one of our tests. It was easy to spot when the auto update from Chrome 40 to 41 happened:

[Chart: chrome-41-bump]

I compared the individual tests before and after the update and this really is all about first paint times. The total time for the page to be visually complete was roughly the same.

WordPress.com #3 in Worldwide DNS Performance

Two weeks ago I mentioned dnsperf.com. After that I reached out to @jimaek about adding WordPress.com to the list of measured providers.

It has been super exciting to see WordPress.com DNS performance rank #3 worldwide:

[Chart: dnsperf-wpcom-1]

We are behind second-place EdgeCast by just 0.66 ms.

Serious kudos to our systems and network operations teams for including DNS as part of our Anycast network, which made this level of performance possible.