From the world of client hints comes Save-Data:
The token is a signal indicating explicit user opt-in into a reduced data usage mode on the client, and when communicated to origins allows them to deliver alternate content honoring such preference – e.g. smaller image and video resources, alternate markup, and so on.
The Google Developer site has a write-up on Delivering Fast and Light Applications with Save-Data. No surprise that their main target is a better experience for users on slow mobile connections.
If sites start respecting Save-Data = "on" requests, and they do it in a way that maintains basic functionality, then why not turn it on all the time? Even if I’m on an LTE connection, I’d rather have a faster loading site. The next step is to turn it on for desktop browsers. There is nothing that limits this to mobile clients.
Anything that targets faster page loads on mobile clients will eventually get used everywhere.
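Honoring the hint on the server side is a small amount of code. Here is a minimal sketch (the variant file names and the dict-of-headers framing are my own, not from the spec):

```python
def pick_image(headers):
    """Return an image variant based on the Save-Data client hint.

    `headers` is a plain dict of request headers; "on" is the only
    token the draft defines for Save-Data.
    """
    save_data = headers.get("Save-Data", "").strip().lower()
    if save_data == "on":
        return "hero-lowres.jpg"  # hypothetical smaller variant
    return "hero.jpg"

print(pick_image({"Save-Data": "on"}))  # hero-lowres.jpg
print(pick_image({}))                   # hero.jpg
```

A real implementation would also want to send `Vary: Save-Data` on the response so caches keep the two variants separate.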
The HPACK ( header compression for HTTP/2 ) static table:
The static table was created from the most frequent header fields used by popular web sites, with the addition of HTTP/2-specific pseudo-header fields (see Section 8.1.2.1 of [HTTP2]). For header fields with a few frequent values, an entry was added for each of these frequent values. For other header fields, an entry was added with an empty value.
Here is a direct link to the list. I like number 16,
accept-encoding: gzip, deflate, even though that means this won’t be usable with Brotli.
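To see why that entry matters, here is a toy lookup against a few static table entries (indices from RFC 7541 Appendix A). A real HPACK encoder layers a dynamic table and Huffman coding on top of this, so treat it as a sketch of the static-table step only:

```python
# A few entries from the HPACK static table (RFC 7541, Appendix A).
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":path", "/"): 4,
    ("accept-encoding", "gzip, deflate"): 16,
}

def encode_header(name, value):
    """Toy encoder: an exact name+value hit in the static table
    compresses the whole header to a single index; anything else
    has to carry the value as a literal."""
    index = STATIC_TABLE.get((name.lower(), value))
    if index is not None:
        return f"indexed, static table entry {index}"
    return "literal value required"

print(encode_header("accept-encoding", "gzip, deflate"))      # entry 16
print(encode_header("accept-encoding", "gzip, deflate, br"))  # literal
```

Adding `br` to the value means the exact-match entry no longer applies, which is the Brotli problem mentioned above.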
Back in June I mentioned HTTP client hints, like DPR. It has gone on to Intent to Ship for Chrome. That spawned additional discussion on privacy and performance issues. Yoav Weiss has written up a doc on implementation considerations.
The Chrome Feature List has it with a status of “In development”.
From the world of “retina images are still kind of a pain”: clients could include DPR ( device pixel ratio ) details in image requests:
DPR hint automates device-pixel-ratio-based selection and enables delivery of optimal image variant without any changes in markup.
A request from the client for an image would look like:
GET /img.jpg HTTP/1.1
User-Agent: Awesome Browser
Accept: image/webp, image/jpg
DPR: 2.0
This would be wonderfully helpful to servers that can manipulate image results to best fit the capabilities of clients.
There is also discussion about an RW ( resource width ) header.
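Server-side, honoring the DPR hint could look something like this sketch (the variant naming and the "smallest scale that covers the client's ratio" rule are my assumptions, not from the draft):

```python
def pick_variant(headers, available=(1.0, 1.5, 2.0)):
    """Pick the smallest available image scale >= the client's DPR."""
    try:
        dpr = float(headers.get("DPR", "1.0"))
    except ValueError:
        dpr = 1.0
    for scale in sorted(available):
        if scale >= dpr:
            return f"img-{scale}x.jpg"
    return f"img-{max(available)}x.jpg"  # client DPR exceeds what we have

print(pick_variant({"DPR": "2.0"}))  # img-2.0x.jpg
print(pick_variant({}))              # img-1.0x.jpg
```

The draft also has the server confirm the ratio it actually used via a Content-DPR response header, so the client can scale the image correctly.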
A Stack Overflow thread with an example on how to check a server for SPDY support with OpenSSL:
openssl s_client -connect google.com:443 -nextprotoneg ''
The result I got from “OpenSSL 1.0.1f 6 Jan 2014” looked like this ( emphasis mine ):
Protocols advertised by server: spdy/5a1, h2-14, spdy/3.1, spdy/3, http/1.1
139790806673056:error:140920E3:SSL routines:SSL3_GET_SERVER_HELLO:parse tlsext:s3_clnt.c:1061:
no peer certificate available
No client certificate CA names sent
SSL handshake has read 110 bytes and written 7 bytes
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS supported
Next protocol: (2)
Protocol : TLSv1.2
Cipher : 0000
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1417496091
Timeout : 300 (sec)
Verify return code: 0 (ok)
The “Protocols advertised by server:” line is the one you need.
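If you want to script the check, pulling that line out of the captured output is enough. A small sketch (the sample string is abbreviated from the output above):

```python
output = """Protocols advertised by server: spdy/5a1, h2-14, spdy/3.1, spdy/3, http/1.1
no peer certificate available"""

def advertised_protocols(s_client_output):
    """Extract the protocol list from `openssl s_client -nextprotoneg` output."""
    prefix = "Protocols advertised by server:"
    for line in s_client_output.splitlines():
        if line.startswith(prefix):
            return [p.strip() for p in line[len(prefix):].split(",")]
    return []

print(advertised_protocols(output))
```

Feed it the stderr+stdout of the openssl command and check for the protocol you care about, e.g. `"spdy/3.1" in advertised_protocols(output)`.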
PostgreSQL has also been considering an HTTP API.
There is an HTTP plugin for MySQL available at labs.mysql.com. Ulf Wendel covers it in more detail.
The development preview brings three APIs: key-document for nested JSON documents, CRUD for JSON mapped SQL tables and plain SQL with JSON replies. More so: MySQL 5.7.4 has SQL functions for modifying JSON, for searching documents and new indexing methods!
At this point I still have more questions than answers, but I’m definitely intrigued.
Zack Tollman suggested I try out SPDY with my updated Nginx install. While I’m sad at the idea of giving up a plain text HTTP API, I was curious to see what SPDY looked like on this site.
I was disappointed with the results. The fastest page load time out of 5 runs without SPDY was 1.039 s. With SPDY the fastest result was 1.273 s. I then did several more runs of the same test with SPDY enabled to see if any of them could get close to the 1.0 s baseline. None of them did; most came in close to 2 seconds. I had honestly expected SPDY to perform better. That said, this type of testing is not particularly rigorous, so take these numbers with a sufficiently large grain of salt.
Given the initial poor showing of SPDY in these tests I’m going to leave it turned off for now.
Mark Nottingham on Nine Things to Expect from HTTP/2.
If you have any interest in the future of HTTP then mnot’s blog is well worth reading.
Many corporate firewalls will limit outgoing connections to ports 80 and 443 in a vain effort to restrict access to non-web services. You could run SSH on port 80 or 443 on a VPS or dedicated server, but if you have one of those you are probably already using it to host a small web site. Wouldn’t it be nice if your server could listen for both SSH and HTTP/S on ports 80 and 443? That is where sslh comes in:
sslh accepts connections on specified ports, and forwards them further based on tests performed on the first data packet sent by the remote client.
Probes for HTTP, SSL, SSH, OpenVPN, tinc, XMPP are implemented, and any other protocol that can be tested using a regular expression, can be recognised. A typical use case is to allow serving several services on port 443 (e.g. to connect to ssh from inside a corporate firewall, which almost never block port 443) while still serving HTTPS on that port.
Hence sslh acts as a protocol demultiplexer, or a switchboard. Its name comes from its original function to serve SSH and HTTPS on the same port.
Source code is available at https://github.com/yrutschle/sslh.
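As a sketch, a typical invocation looks something like this (addresses and ports are placeholders, and older releases spell the HTTPS probe --ssl rather than --tls, so check man sslh for your version's flags):

sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:4443

The idea is that your real web server moves to a local port ( 4443 here ) and sslh takes over 443, peeking at the first bytes of each connection to decide where to forward it.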
For small use cases this may come in handy. If you constantly need to SSH to port 80 or 443, I’d recommend just spending a few dollars a month on a VPS dedicated to that task.
If you are stuck in a limited corporate network another tool you may find useful is corkscrew, which tunnels SSH connections through HTTP proxies.
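For corkscrew the setup is a one-line ProxyCommand in ~/.ssh/config ( the host names and proxy port here are placeholders ):

Host home
    HostName example.com
    Port 443
    ProxyCommand corkscrew proxy.corp.internal 3128 %h %p

corkscrew speaks HTTP CONNECT to the proxy and then pipes the SSH connection through it, so ssh home works as if you were connecting directly.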