Tag: performance

Update Nginx For Better HTTPS Performance

I decided to try out this suggestion from Optimizing NGINX TLS Time To First Byte (TTTFB) (which I mentioned at the end of 2013):

After digging through the nginx source code, one stumbles onto this gem. Turns out, any nginx version prior to 1.5.6 has this issue: certificates over 4KB in size incur an extra roundtrip, turning a two roundtrip handshake into a three roundtrip affair – yikes. Worse, in this particular case we trigger another unfortunate edge case in Windows TCP stack: the client ACKs the first few packets from the server, but then waits ~200ms before it triggers a delayed ACK for the last segment. In total, that results in extra 580ms of latency that we did not expect.

I’ve been using Nginx 1.4.x from the Ubuntu package collection on this site. A few webpagetest.org runs showed that HTTPS negotiation was taking more than 300ms on the initial request. After updating to Nginx 1.5.13, more tests showed HTTPS negotiation down to around 250ms.

The 50ms savings isn’t nearly as dramatic as the worst case scenario described in the quote above, but I’ll take it.
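If you want a quick way to watch this number without a full webpagetest.org run, curl and openssl can give a rough picture. This is only a sketch, and example.com is a placeholder for the HTTPS site being tested:

curl -w 'time_appconnect: %{time_appconnect}\n' -o /dev/null -s https://example.com/

echo | openssl s_client -connect example.com:443 -showcerts 2> /dev/null | wc -c

The time_appconnect value includes the SSL/TLS handshake, so comparing it before and after an upgrade shows roughly how much negotiation time changed. The second command prints the certificate chain the server sends (plus some session text), so the byte count is only a rough indicator of whether you are anywhere near the 4KB threshold mentioned in the quote above.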

Performance engineering

Performance engineering is its own discipline. The problem is, not many people have realized that yet.

From Steve Souders’ post on web performance for the future.

Twitter Backing Away From Hashbang URLs

The big news from Twitter’s Improving performance on twitter.com post was the first step in backing away from hashbang (#!) URLs:

The first thing that you might notice is that permalink URLs are now simpler: they no longer use the hashbang (#!). While hashbang-style URLs have a handful of limitations, our primary reason for this change is to improve initial page-load performance.

When you come to twitter.com, we want you to see content as soon as possible. With hashbang URLs, the browser needs to download an HTML page, download and execute some JavaScript, recognize the hashbang path (which is only visible to the browser), then fetch and render the content for that URL. By removing the need to handle routing on the client, we remove many of these steps and reduce the time it takes for you to find out what’s happening on twitter.com.

I’m not surprised that they found doing one thing to be faster than doing that one thing plus four more.

I, like many others, am happy to see this go. Rafe Colburn put it this way:

It feels good to see terrible ideas die, even when it takes awhile.

Dion Almaer concludes:

It’s about the experience stupid.

Providing a good user experience with “traditional” methods is better than providing a poorer user experience using the hottest new trends.

The sad part about Twitter’s path down the hashbang road is that they are now left with two unappealing options: either continue to include a backwards-compatibility piece of JavaScript on every page load, or break all of the previous hashbang URLs. So far it appears that they are going with the first option, including a piece of JavaScript on each page that looks for a hashbang URL.

Slides From UTOSC 2012 – Improving Front End Performance

Google’s plusone.js Doesn’t Support HTTP Compression

I was surprised to see that Google’s plusone.js doesn’t support HTTP compression. Here is a quick test:
curl -v --compressed https://apis.google.com/js/plusone.js > /dev/null

Request Headers:

> GET /js/plusone.js HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: apis.google.com
> Accept: */*
> Accept-Encoding: deflate, gzip

Response Headers:

< HTTP/1.1 200 OK
< Content-Type: text/javascript; charset=utf-8
< Expires: Fri, 18 Nov 2011 02:35:20 GMT
< Date: Fri, 18 Nov 2011 02:35:20 GMT
< Cache-Control: private, max-age=3600
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< Server: GSE
< Transfer-Encoding: chunked

You'll notice there is no Content-Encoding: gzip header in the response.
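For a rough idea of what compression would save, you can gzip the file locally and compare byte counts. A quick sketch, assuming gzip is installed:

curl -s https://apis.google.com/js/plusone.js -o plusone.js

wc -c < plusone.js

gzip -c plusone.js | wc -c

The first count is what goes over the wire today, the second is approximately what a gzip compressed response would be.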

We'll have to get Steve Souders to pester them about that.

Timing Details With cURL

Jon’s recent Find the Time to First Byte Using Curl post reminded me about the additional timing details that cURL can provide.

cURL supports formatted output for the details of the request (see the cURL manpage, under “-w, --write-out <format>”). For our purposes we’ll focus just on the timing details that are provided.

Step one: create a new file, curl-format.txt, and paste in:

\n
            time_namelookup:  %{time_namelookup}\n
               time_connect:  %{time_connect}\n
            time_appconnect:  %{time_appconnect}\n
           time_pretransfer:  %{time_pretransfer}\n
              time_redirect:  %{time_redirect}\n
         time_starttransfer:  %{time_starttransfer}\n
                            ----------\n
                 time_total:  %{time_total}\n
\n

Step two: make a request:

curl -w "@curl-format.txt" -o /dev/null -s http://wordpress.com/

What this does:

  • -w "@curl-format.txt" tells cURL to use our format file
  • -o /dev/null redirects the output of the request to /dev/null
  • -s tells cURL not to show a progress meter
  • http://wordpress.com/ is the URL we are requesting

And here is what you get back:

            time_namelookup:  0.001
               time_connect:  0.037
            time_appconnect:  0.000
           time_pretransfer:  0.037
              time_redirect:  0.000
         time_starttransfer:  0.092
                            ----------
                 time_total:  0.164

Jon was looking specifically at time to first byte, which is the time_starttransfer line. The other timing details include DNS lookup, TCP connect, pre-transfer negotiations, redirects (in this case there were none), and of course the total time.

The format file for this output provides a reasonable level of flexibility; for instance, you could make it CSV formatted for easy parsing. You might want to do that if you were running this as a cron job to track timing details of a specific URL.
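As a sketch of that idea, a second format file (the name curl-format-csv.txt is just an example) could put all of the values on a single comma separated line:

%{time_namelookup},%{time_connect},%{time_appconnect},%{time_pretransfer},%{time_redirect},%{time_starttransfer},%{time_total}\n

Then appending each run to a log, say from cron, is a one liner:

curl -w "@curl-format-csv.txt" -o /dev/null -s http://wordpress.com/ >> timing-log.csv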

For details on the other information that cURL can provide using -w check out the cURL manpage.

When MySQL EXPLAIN Estimates Go Wrong

An odd-looking bug with EXPLAIN in MySQL 5.1 is mentioned in When EXPLAIN estimates can go wrong!

the row estimates on 5.1 is very off, its as much as 57 times less than the number of rows, which is not acceptable at all. While 5.5 returns an acceptable estimate.

This test proves that there is a bug in MySQL 5.1 and how it calculates the row estimates. This bug was tracked down to http://bugs.mysql.com/bug.php?id=53761 and was indeed fixed in 5.5 as my tests show.

This leads MySQL 5.1 to make some odd index choices based on wildly inaccurate row estimates. Good to know this has already been fixed in MySQL 5.5.
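If you want to check whether a server you run is affected, one rough approach is to compare the rows column from EXPLAIN against an actual count for the same WHERE clause. The database, table, and column names below are only placeholders:

mysql -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42" mydb

mysql -e "SELECT COUNT(*) FROM orders WHERE customer_id = 42" mydb

A large mismatch between the estimate and the real count is worth investigating, and on 5.1 it may well be this bug.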

Slides: Site Performance, From Pinto to Ferrari

Here are the slides from my “Site Performance, From Pinto to Ferrari” talk that I gave at WordCamp SLC 2011 and WordCamp Albuquerque 2011.

Performance Trends For Top Sites On The Web

Steve Souders posted an update on the HTTP performance trends for top sites, based on data gathered via http://httparchive.org/. Here are the bottom line numbers:

Here’s a recap of the performance indicators from Nov 15 2010 to Aug 15 2011 for the top ~13K websites:

  • total transfer size grew from 640 kB to 735 kB
  • requests per page increased from 69 to 76
  • sites with redirects went up from 58% to 64%
  • sites with errors is up from 14% to 25%
  • the use of Google Libraries API increased from 10% to 14%
  • Flash usage dropped from 47% to 45%
  • resources that are cached grew from 39% to 42%

I was surprised by the total transfer size increase. That is 95 kB over roughly 39 weeks, so if you followed that trend on a weekly basis, every Friday for those 9 months you added another ~2.4 kB to the total transfer size of your site. Not much for any given week, but it adds up fast.

The Key to Making Programs Fast

The key to making programs fast is to make them do practically nothing. ;-)

via Mike Haertel on why GNU grep is fast, where he describes some of the tricks he used to minimize the amount of work grep needed to do in order to find strings. Mike wrote the original version of GNU grep.
