What A Difference 300ms Can Make

Mobile touch devices have significantly changed how we code interactions on the web. It isn’t just about hover; it is also about timing:

By default, if you tap on a touchscreen it takes about 300ms before a click event fires. It’s possible to remove this delay, but it’s complicated.

– via Suppressing the 300ms click delay – QuirksBlog.

Some browsers allow pages to turn off this delay when the viewport meta tag sets width=device-width ( i.e. <meta name="viewport" content="width=device-width"> ). Unfortunately, Mobile Safari isn’t one of them.

There are JavaScript approaches like fastclick that can help. If you are using a UI framework, make sure to test the two together; you don’t want both of them trying to fire click events at the same time.
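
If you are curious whether a page already declares a device-width viewport ( the prerequisite for the browsers that do drop the delay ), a quick check with cURL works; example.com is a placeholder:

    # Look for a device-width viewport declaration in the page source
    curl -s https://example.com/ | grep -o 'width=device-width'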

Tried Out SPDY

Zack Tollman suggested I try out SPDY with my updated Nginx install. While I’m sad at the idea of giving up a plain text HTTP API, I was curious to see what SPDY looked like on this site.
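
Turning SPDY on was mostly a question of whether the Nginx build included it at compile time. A rough sketch of the check, for the 1.5.x series where SPDY was an optional module:

    # Confirm the nginx binary was built with the SPDY module
    nginx -V 2>&1 | grep -o http_spdy_module

    # If it was, SPDY is enabled per listener in the server block:
    #   listen 443 ssl spdy;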

I was disappointed with the results. The fastest page load time out of 5 runs without SPDY was 1.039 s; with SPDY the fastest result was 1.273 s. I then did several more runs of the same test with SPDY enabled to see if any of them could get close to the 1.0 s baseline. None of them did; most came in close to 2 seconds. I had honestly expected SPDY to perform better. That said, this type of testing is not particularly rigorous, so take these numbers with a sufficiently large grain of salt.

Given SPDY’s poor initial showing in these tests, I’m going to leave it turned off for now.

Retail Sites Are Slowing Down

A cross-section of web performance over the last two years:

The median top 500 ecommerce home page takes 10 seconds to load. In spring 2012, the median page loaded in 6.8 seconds. This represents a 47% slowdown in just two years.

According to “Retail sites that use a CDN are slower than sites that do not*” on Web Performance Today.

I downloaded the PDF of the report to find out how these measurements were done:

Radware tested the home page of every site in the Alexa Retail 500 nine consecutive times. The system automatically clears the cache between tests. The median test result for each home page was recorded and used in our calculations.

The tests were conducted on March 24, 2014, via the WebPagetest.org server in Dulles, VA, using Chrome 33 on a DSL connection.

In the comments section I asked about the settings that were used for the 2012 tests.

Update Nginx For Better HTTPS Performance

I decided to try out this suggestion from Optimizing NGINX TLS Time To First Byte (TTFB) ( which I mentioned at the end of 2013 ):

After digging through the nginx source code, one stumbles onto this gem. Turns out, any nginx version prior to 1.5.6 has this issue: certificates over 4KB in size incur an extra roundtrip, turning a two roundtrip handshake into a three roundtrip affair – yikes. Worse, in this particular case we trigger another unfortunate edge case in Windows TCP stack: the client ACKs the first few packets from the server, but then waits ~200ms before it triggers a delayed ACK for the last segment. In total, that results in extra 580ms of latency that we did not expect.
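
A rough way to check whether a server’s certificate chain crosses that 4KB line ( example.com is a placeholder ):

    # Print the size in bytes of the PEM certificate chain the server sends;
    # chains over ~4KB cost an extra round trip on Nginx versions before 1.5.6
    openssl s_client -connect example.com:443 -showcerts </dev/null 2>/dev/null \
      | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' | wc -c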

I’ve been using Nginx 1.4.x from the Ubuntu package collection on this site. A few webpagetest.org runs showed HTTPS negotiation taking more than 300ms on the initial request. After updating to Nginx 1.5.13, more tests showed HTTPS negotiation down to around 250ms.

The 50ms savings isn’t nearly as dramatic as the worst-case scenario described in the quote above, but I’ll take it.
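
For a quick command line cross-check of the negotiation time, cURL can report it directly; the TLS handshake cost is roughly time_appconnect minus time_connect ( again, example.com is a placeholder ):

    # TLS negotiation time is roughly time_appconnect - time_connect
    curl -w "connect:    %{time_connect}\nappconnect: %{time_appconnect}\n" \
      -o /dev/null -s https://example.com/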

Twitter Backing Away From Hashbang URLs

The big news from Twitter’s Improving performance on twitter.com post was the first step in backing away from hashbang (#!) URLs:

The first thing that you might notice is that permalink URLs are now simpler: they no longer use the hashbang (#!). While hashbang-style URLs have a handful of limitations, our primary reason for this change is to improve initial page-load performance.

When you come to twitter.com, we want you to see content as soon as possible. With hashbang URLs, the browser needs to download an HTML page, download and execute some JavaScript, recognize the hashbang path (which is only visible to the browser), then fetch and render the content for that URL. By removing the need to handle routing on the client, we remove many of these steps and reduce the time it takes for you to find out what’s happening on twitter.com.

I’m not surprised that they found doing one thing faster than doing that one thing plus four more.
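
The “only visible to the browser” part is easy to demonstrate: the fragment is stripped before a request ever goes out, so the server has no chance to route on it:

    # The #! fragment never leaves the client; the server just sees "GET /"
    curl -v 'https://twitter.com/#!/jack/status/20' 2>&1 | grep '^> GET'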

I, like many others, am happy to see this go. Rafe Colburn put it this way:

It feels good to see terrible ideas die, even when it takes awhile.

Dion Almaer concludes:

It’s about the experience stupid.

Providing a good user experience with “traditional” methods is better than providing a poorer user experience using the hottest new trends.

The sad part about Twitter’s trip down the hashbang road is that they are now left with two unappealing options: either continue to include a piece of backwards-compatibility JavaScript on every page load, or break all of the previous hashbang URLs. So far it appears they are going with the first option, shipping a piece of JavaScript on each page that looks for a hashbang URL.

Google’s plusone.js Doesn’t Support HTTP Compression

I was surprised to see that Google’s plusone.js doesn’t support HTTP compression. Here is a quick test using cURL:
curl -v --compressed https://apis.google.com/js/plusone.js > /dev/null

Request Headers:

> GET /js/plusone.js HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: apis.google.com
> Accept: */*
> Accept-Encoding: deflate, gzip

Response Headers:

< HTTP/1.1 200 OK
< Content-Type: text/javascript; charset=utf-8
< Expires: Fri, 18 Nov 2011 02:35:20 GMT
< Date: Fri, 18 Nov 2011 02:35:20 GMT
< Cache-Control: private, max-age=3600
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< Server: GSE
< Transfer-Encoding: chunked

You'll notice there is no Content-Encoding: gzip header in the response.
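
To put a rough number on what that omission costs, compare the size of the response as served against what a gzip encoded version would have been ( the exact byte counts will vary with the current build of the script ):

    # Bytes transferred without compression
    curl -s https://apis.google.com/js/plusone.js | wc -c

    # Bytes a gzip encoded response would have taken
    curl -s https://apis.google.com/js/plusone.js | gzip -c | wc -c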

We'll have to get Steve Souders to pester them about that.

Timing Details With cURL

Jon’s recent Find the Time to First Byte Using Curl post reminded me about the additional timing details that cURL can provide.

cURL supports formatted output for the details of the request ( see the cURL manpage for details, under “-w, --write-out <format>” ). For our purposes we’ll focus just on the timing details that are provided.

Step one: create a new file, curl-format.txt, and paste in:

            time_namelookup:  %{time_namelookup}\n
               time_connect:  %{time_connect}\n
            time_appconnect:  %{time_appconnect}\n
           time_pretransfer:  %{time_pretransfer}\n
              time_redirect:  %{time_redirect}\n
         time_starttransfer:  %{time_starttransfer}\n
                 time_total:  %{time_total}\n

Step two: make a request:

curl -w "@curl-format.txt" -o /dev/null -s http://wordpress.com/

What this does:

  • -w "@curl-format.txt" tells cURL to use our format file
  • -o /dev/null redirects the output of the request to /dev/null
  • -s tells cURL not to show a progress meter
  • http://wordpress.com/ is the URL we are requesting

And here is what you get back:

            time_namelookup:  0.001
               time_connect:  0.037
            time_appconnect:  0.000
           time_pretransfer:  0.037
              time_redirect:  0.000
         time_starttransfer:  0.092
                 time_total:  0.164

Jon was looking specifically at time to first byte, which is the time_starttransfer line. The other timing details include DNS lookup, TCP connect, pre-transfer negotiations, redirects (in this case there were none), and of course the total time.

The format file for this output provides a reasonable level of flexibility; for instance, you could make it CSV formatted for easy parsing. You might want to do that if you were running this as a cron job to track the timing details of a specific URL, as sketched below.
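
For example, you could put this single line in a file called curl-format.csv ( the file names here are arbitrary ):

    %{time_namelookup},%{time_connect},%{time_starttransfer},%{time_total}\n

Then have each run append one row to a log:

curl -w "@curl-format.csv" -o /dev/null -s http://wordpress.com/ >> timings.csv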

For details on the other information that cURL can provide using -w check out the cURL manpage.

When MySQL EXPLAIN Estimates Go Wrong

An odd-looking bug with EXPLAIN in MySQL 5.1 is mentioned in When EXPLAIN estimates can go wrong!

the row estimates on 5.1 is very off, its as much as 57 times less than the number of rows, which is not acceptable at all. While 5.5 returns an acceptable estimate.

This test proves that there is a bug in MySQL 5.1 and how it calculates the row estimates. This bug was tracked down to http://bugs.mysql.com/bug.php?id=53761 and was indeed fixed in 5.5 as my tests show.

This leads MySQL 5.1 to make some odd index choices, based on wildly inaccurate row estimates. Good to know this has already been fixed in MySQL 5.5.
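
A quick way to spot this class of problem is to compare the rows estimate from EXPLAIN against an actual count for the same condition. A rough sketch, using a hypothetical orders table:

    # The optimizer's guess is in the "rows" column of the EXPLAIN output
    mysql -e 'EXPLAIN SELECT * FROM orders WHERE customer_id = 42' mydb

    # The real number of matching rows
    mysql -e 'SELECT COUNT(*) FROM orders WHERE customer_id = 42' mydb

If the two differ by an order of magnitude or more, the optimizer may well be picking the wrong index.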