I decided to try out this suggestion from Optimizing NGINX TLS Time To First Byte (TTTFB) (which I mentioned at the end of 2013):
After digging through the nginx source code, one stumbles onto this gem. Turns out, any nginx version prior to 1.5.6 has this issue: certificates over 4KB in size incur an extra roundtrip, turning a two roundtrip handshake into a three roundtrip affair – yikes. Worse, in this particular case we trigger another unfortunate edge case in Windows TCP stack: the client ACKs the first few packets from the server, but then waits ~200ms before it triggers a delayed ACK for the last segment. In total, that results in extra 580ms of latency that we did not expect.
I’ve been using Nginx 1.4.x from the Ubuntu package collection on this site. A few webpagetest.org runs showed that HTTPS negotiation was taking more than 300ms on the initial request. After updating to Nginx 1.5.13 more tests showed HTTPS negotiation was down around 250ms.
The 50ms savings isn’t nearly as dramatic as the worst case scenario described in the quote above, but I’ll take it.
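Since the bug only bit when the certificate data exceeded roughly 4 KB, it's easy to check how much your own chain weighs. A rough sketch — it generates a throwaway self-signed certificate so the example is self-contained; in practice you'd point wc at the chain file nginx actually serves:

```shell
# Throwaway self-signed cert, purely for illustration; substitute the
# certificate (plus intermediates) file that nginx serves.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=example.test' \
  -keyout key.pem -out chain.pem 2>/dev/null

# Anything well over 4096 bytes risked the extra roundtrip on nginx < 1.5.6.
wc -c chain.pem
```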
Performance engineering is its own discipline. The problem is, not many people have realized that yet.
From Steve Souders’ post on web performance for the future.
The big news from Twitter’s Improving performance on twitter.com was the first step in backing away from hashbang (#!) URLs:
… our primary reason for this change is to improve initial page-load performance.
The first thing that you might notice is that permalink URLs are now simpler: they no longer use the hashbang (#!). While hashbang-style URLs have a handful of limitations, our primary reason for this change is to improve initial page-load performance.
I’m not surprised that they found doing one thing faster than doing that one thing plus four more.
I, like many others, am happy to see this go. Rafe Colburn put it this way:
It feels good to see terrible ideas die, even when it takes awhile.
Dion Almaer concludes:
It’s about the experience stupid.
Providing a good user experience with “traditional” methods is better than providing a poorer user experience using the hottest new trends.
I was surprised to see that Google’s plusone.js doesn’t support HTTP compression. Here is a quick test:
curl -v --compressed https://apis.google.com/js/plusone.js > /dev/null
> GET /js/plusone.js HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: apis.google.com
> Accept: */*
> Accept-Encoding: deflate, gzip
< HTTP/1.1 200 OK
< Expires: Fri, 18 Nov 2011 02:35:20 GMT
< Date: Fri, 18 Nov 2011 02:35:20 GMT
< Cache-Control: private, max-age=3600
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< Server: GSE
< Transfer-Encoding: chunked
You'll notice there is no Content-Encoding: gzip header in the response.
We'll have to get Steve Souders to pester them about that.
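To get a feel for what's being left on the table, here's a quick offline sketch: fabricate some repetitive JavaScript-ish text (the file name and contents are made up; fetching plusone.js itself would need a network round trip) and compare raw vs. gzipped size:

```shell
# Fabricate a repetitive JS-like file; minified library code compresses similarly well.
printf 'function f(){return 42;}\n%.0s' $(seq 1 200) > sample.js

# Compress a copy and compare byte counts.
gzip -c sample.js > sample.js.gz
wc -c sample.js sample.js.gz
```

The gzipped copy comes out a small fraction of the original size, which is the kind of saving an uncompressed script download forgoes on every cold load.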
Jon’s recent Find the Time to First Byte Using Curl post reminded me about the additional timing details that cURL can provide.
cURL supports formatted output for the details of the request (see the cURL manpage for details, under “-w, --write-out <format>”). For our purposes we’ll focus just on the timing details that are provided.
Step one: create a new file, curl-format.txt, and paste in:
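The post doesn't reproduce the file contents here, but a typical version built from cURL's documented --write-out variables looks like the following — the variable names are real, while the exact layout is my guess at the original:

```shell
# A plausible curl-format.txt assembled from curl's documented
# --write-out variables; the precise formatting of the original is assumed.
cat > curl-format.txt <<'EOF'
    time_namelookup:  %{time_namelookup}\n
       time_connect:  %{time_connect}\n
   time_pretransfer:  %{time_pretransfer}\n
      time_redirect:  %{time_redirect}\n
 time_starttransfer:  %{time_starttransfer}\n
                    ----------\n
         time_total:  %{time_total}\n
EOF
```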
Step two: make a request:
curl -w "@curl-format.txt" -o /dev/null -s http://wordpress.com/
What this does:
-w "@curl-format.txt" tells cURL to use our format file
-o /dev/null redirects the output of the request to /dev/null
-s tells cURL not to show a progress meter
http://wordpress.com/ is the URL we are requesting
And here is what you get back:
Jon was looking specifically at time to first byte, which is the time_starttransfer line. The other timing details include DNS lookup, TCP connect, pre-transfer negotiations, redirects (in this case there were none), and of course the total time.
The format file for this output provides a reasonable level of flexibility, for instance you could make it CSV formatted for easy parsing. You might want to do that if you were running this as a cron job to track timing details of a specific URL.
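As a sketch of that idea (the file name, URL, and schedule are arbitrary): a one-line CSV format file, plus a crontab entry that appends a row every five minutes:

```shell
# A CSV variant of the format file, easy to parse or graph later.
cat > curl-format-csv.txt <<'EOF'
%{time_namelookup},%{time_connect},%{time_starttransfer},%{time_total}\n
EOF

# Illustrative crontab entry: sample the URL every 5 minutes, append to a log.
# */5 * * * * curl -w "@curl-format-csv.txt" -o /dev/null -s http://wordpress.com/ >> timings.csv
```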
For details on the other information that cURL can provide using -w, check out the cURL manpage.
An odd-looking bug with EXPLAIN in MySQL 5.1 is mentioned in When EXPLAIN estimates can go wrong!
the row estimates on 5.1 is very off, its as much as 57 times less than the number of rows, which is not acceptable at all. While 5.5 returns an acceptable estimate.
This test proves that there is a bug in MySQL 5.1 and how it calculates the row estimates. This bug was tracked down to http://bugs.mysql.com/bug.php?id=53761 and was indeed fixed in 5.5 as my tests show.
This leads MySQL 5.1 to make some odd index choices based on wildly inaccurate row estimates. Good to know this has already been fixed in MySQL 5.5.
Here are the slides from my “Site Performance, From Pinto to Ferrari” talk that I gave at WordCamp SLC 2011 and WordCamp Albuquerque 2011.
Steve Souders posted an update on the HTTP performance trends for top sites, based on data gathered via http://httparchive.org/. Here are the bottom line numbers:
Here’s a recap of the performance indicators from Nov 15 2010 to Aug 15 2011 for the top ~13K websites:
- total transfer size grew from 640 kB to 735 kB
- requests per page increased from 69 to 76
- sites with redirects went up from 58% to 64%
- sites with errors is up from 14% to 25%
- the use of Google Libraries API increased from 10% to 14%
- Flash usage dropped from 47% to 45%
- resources that are cached grew from 39% to 42%
I was surprised by the total transfer size increase. If you followed that trend on a weekly basis, every Friday for the last 9 months you added another 2.6 kB to the total transfer size of your site. Not much for any given week, but it adds up fast.
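A quick sanity check on that weekly figure — the 2.6 kB works out if you call a month four weeks; with calendar weeks (9 months ≈ 39 weeks) it's closer to 2.4 kB:

```shell
# 95 kB of growth (735 - 640) spread over 9 months, taken as 39 calendar weeks.
awk 'BEGIN { printf "%.1f kB/week\n", (735 - 640) / (9 * 52 / 12) }'
# prints: 2.4 kB/week
```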
The key to making programs fast is to make them do practically nothing. ;-)
via Mike Haertel on why GNU grep is fast, where he describes some of the tricks he used to minimize the amount of work grep needed to do in order to find strings. Mike wrote the original version of GNU grep.