How To Check A Site For SPDY Support With OpenSSL

A Stack Overflow thread with an example of how to check a server for SPDY support with OpenSSL:

openssl s_client -connect google.com:443 -nextprotoneg ''

The result I got from “OpenSSL 1.0.1f 6 Jan 2014” looked like this (emphasis mine):

CONNECTED(00000003)
Protocols advertised by server: spdy/5a1, h2-14, spdy/3.1, spdy/3, http/1.1
139790806673056:error:140920E3:SSL routines:SSL3_GET_SERVER_HELLO:parse tlsext:s3_clnt.c:1061:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 110 bytes and written 7 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
Next protocol: (2) 
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : 0000
    Session-ID: 
    Session-ID-ctx: 
    Master-Key: 
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1417496091
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---

The “Protocols advertised by server:” line is the one you need.
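If you only want that one line, you can filter for it; a quick sketch (stderr is redirected into the pipe since OpenSSL splits its output across both streams):

openssl s_client -connect google.com:443 -nextprotoneg '' 2>&1 | grep 'Protocols advertised'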

HTTP Plugin for MySQL

There is an HTTP plugin for MySQL available at labs.mysql.com. Ulf Wendel covers it in more detail.

The development preview brings three APIs: key-document for nested JSON documents, CRUD for JSON-mapped SQL tables, and plain SQL with JSON replies. What’s more, MySQL 5.7.4 has SQL functions for modifying JSON, for searching documents, and new indexing methods!
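Just to make the idea concrete, here is roughly what talking to a database over HTTP looks like. The port and endpoint path below are illustrative guesses on my part, not the plugin’s documented interface (Ulf Wendel’s post has the real details):

curl --user user:password "http://127.0.0.1:8080/sql/mydb/SELECT+1"

A GET with the SQL in the URL, and a JSON document back in the response body.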

At this point I still have more questions than answers, but I’m definitely intrigued.

Tried Out SPDY

Zack Tollman suggested I try out SPDY with my updated Nginx install. While I’m sad at the idea of giving up a plain-text HTTP API, I was curious to see what SPDY looked like on this site.
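Turning SPDY on is a small change. A minimal sketch of the relevant Nginx config, assuming a build that includes the SPDY module (--with-http_spdy_module) and with placeholder paths:

server {
    # SPDY rides on top of SSL, so both go on the listen directive
    listen 443 ssl spdy;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}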

I was disappointed with the results. The fastest page load time out of 5 runs without SPDY was 1.039 s. With SPDY the fastest result was 1.273 s. I then did several more runs of the same test with SPDY enabled to see if any of them could get close to the 1.0 s baseline. None of them did; most came in close to 2 seconds. I had honestly expected SPDY to perform better. That said, this type of testing is not particularly rigorous, so take these numbers with a sufficiently large grain of salt.
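My numbers came from browser-based page load tests. If you just want a crude command line timing of a single fetch, curl can report it (note this measures one HTTP transaction, not a full page render, and curl won’t negotiate SPDY):

curl -o /dev/null -s -w 'total: %{time_total}s\n' https://example.com/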

Given the initial poor showing of SPDY in these tests I’m going to leave it turned off for now.

Listen for SSL and SSH on the Same Port

Many corporate firewalls will limit outgoing connections to ports 80 and 443 in a vain effort to restrict access to non-web services. You could run SSH on port 80 or 443 on a VPS or dedicated server, but if you have one of those you are probably already using it to host a small web site. Wouldn’t it be nice if your server could listen for both SSH and HTTP/S on ports 80 and 443? That is where sslh comes in:

sslh accepts connections on specified ports, and forwards them further based on tests performed on the first data packet sent by the remote client.

Probes for HTTP, SSL, SSH, OpenVPN, tinc, and XMPP are implemented, and any other protocol that can be tested using a regular expression can be recognised. A typical use case is to allow serving several services on port 443 (e.g. to connect to ssh from inside a corporate firewall, which almost never blocks port 443) while still serving HTTPS on that port.

Hence sslh acts as a protocol demultiplexer, or a switchboard. Its name comes from its original function to serve SSH and HTTPS on the same port.

Source code is available at https://github.com/yrutschle/sslh.
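As a sketch of how the pieces fit together: you move the real web server to a local port and let sslh own 443. The flag names come from the sslh docs; the port layout here is just one way to arrange it:

# sslh listens on 443, sniffs the first packet, then hands the
# connection to the local SSH daemon or the relocated web server
sslh --user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:8443

From outside the firewall you then connect with ssh -p 443 user@example.com.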

For small use cases this may come in handy. If you constantly need to SSH to port 80 or 443, I’d recommend just spending a few dollars a month on a VPS dedicated to that task.

If you are stuck in a limited corporate network another tool you may find useful is corkscrew, which tunnels SSH connections through HTTP proxies.
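Corkscrew typically gets wired in as an SSH ProxyCommand. A sketch of an ~/.ssh/config entry, with the proxy host and port as placeholders for whatever your network actually uses:

Host home
    HostName example.com
    Port 443
    # tunnel the SSH connection through the corporate HTTP proxy
    ProxyCommand corkscrew proxy.example.com 8080 %h %p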

Fewer HTTP Verbs

Brett Slatkin suggests that we reduce the number of verbs in HTTP 2.0:

Practically speaking there are only two HTTP verbs: read and write, GET and POST. The semantics of the others (put, head, options, delete, trace, connect) are most commonly expressed in headers, URL parameters, and request bodies, not request methods. The unused verbs are a clear product of bike-shedding, an activity that specification writers love.
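That claim matches a pattern already common in the wild: frameworks that accept an override header so everything can travel as a POST. A hedged example, since support for this header varies by framework:

# expresses a DELETE using only the POST verb plus a header
curl -X POST -H 'X-HTTP-Method-Override: DELETE' https://example.com/resource/123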

Interestingly, HTTP 1.0 only defined GET, POST, and HEAD back in 1996.

I could get behind the idea of just having GET, POST, and HEAD. In practice these tend to be the safest verbs to use. It would also put an end to having to talk about the semantics of PUT every six months.

Those who insist that all things must be REST or they are useless won’t like this. They could find a way to get over that.

TCP Over HTTP, A.K.A. HTTP 2.0

Skimming through the HTTP 2.0 draft RFC that was posted yesterday, I’m left with the distinct feeling of implementing TCP on top of HTTP:

[Figure: HTTP 2.0 Framing]

I’m in the camp that believes future versions of HTTP should continue to be a text-based protocol (with compression support).
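Part of what makes a text-based protocol approachable is that you can speak it by hand. For example, a complete HTTP/1.1 request typed out and sent with nc:

printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80

Binary framing takes that property away; you need tooling just to read the bytes.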

Most weeks I look at several raw HTTP requests and responses. Yes, there will still be tools like cURL (which I love) to dig into HTTP transactions, so it isn’t the end of the world. Still, I am sad to see something that is currently fairly easy to follow turn into something significantly more complex.

reddit.com HTTP Response Headers

I found an old note to myself to look at the HTTP response headers for reddit.com. So I did this:

$ curl -v -s http://www.reddit.com/ > /dev/null
* About to connect() to www.reddit.com port 80 (#0)
* Trying 69.22.154.10...
* connected
* Connected to www.reddit.com (69.22.154.10) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5
> Host: www.reddit.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=UTF-8
< Server: ‘; DROP TABLE servertypes; –
< Vary: accept-encoding
< Date: Wed, 22 May 2013 14:37:25 GMT
< Transfer-Encoding: chunked
< Connection: keep-alive
< Connection: Transfer-Encoding
<
{ [data not shown]
* Connection #0 to host www.reddit.com left intact
* Closing connection #0

Fun Server entry in there. It reminded me of Little Bobby Tables from xkcd.
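If you just want to peek at a single header without the full verbose dump, a HEAD request works for most servers (some answer HEAD differently than GET, so treat this as approximate):

curl -sI http://www.reddit.com/ | grep -i '^server:'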

I’m sure this has made the rounds in other places. Unfortunately my note didn’t indicate where I first saw this.

iOS6 Safari Caching POST Responses

With the release of iOS6, mobile Safari started caching POST responses. Mark Nottingham talks through the related RFCs to see how this lines up with the HTTP specs. Worth a read for the details; here is the conclusion:

even without the benefit of this context, they’re still clearly violating the spec; the original permission to cache in 2616 was contingent upon there being explicit freshness information (basically, Expires or Cache-Control: max-age).

So, it’s a bug. Unfortunately, it’s one that will make people trust caches even less, which is bad for the Web. Hopefully, they’ll do a quick fix before developers feel they need to work around this for the next five years.
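The workaround that circulated at the time was to send explicit cache-prevention headers on POST responses, something along these lines (the exact set of directives varies by how defensive you want to be):

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-cache, no-store, must-revalidate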

Over the years I’ve run across a handful of services and applications that claim to be able to cache HTTP POST responses. In every case that turned out to be a bad decision.