Being part of the second-class citizen group that is Google Apps for Domains is becoming a real letdown.
Google has announced that several of their APIs will require SSL when making requests. This is a good thing.
If you aren’t planning on it already, now is a good time to expect new APIs to require SSL from the start. This is likely going to make TLS Server Name Indication an even bigger deal, as demand for SSL services increases and IP addresses become more expensive.
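If you want a quick way to see whether a server picks its certificate based on SNI, openssl s_client can send the server name for you. A rough sketch (example.com is just a placeholder, swap in whatever host you care about):

# Grab the certificate subject while sending the SNI host name
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2> /dev/null | openssl x509 -noout -subject

# Same request without -servername; if the subject changes, the server is
# using SNI to decide which certificate to hand back
openssl s_client -connect example.com:443 < /dev/null 2> /dev/null | openssl x509 -noout -subject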
The term fanboy is often used as a negative way to describe someone who sticks to one product or vendor just because. The term is intended to be negative because it implies that the person gave no real thought as to how good the product is or how well it fits their needs.
In a discussion about this vendor vs. that vendor, playing the fanboy card is a lazy way to try and strip any legitimacy from the other side's point of view. After all, how can they possibly bring anything useful to a discussion if they are just a fanboy?
Sadly, playing the fanboy card is more often than not a way of ignoring a different viewpoint instead of addressing it. This is particularly true of Google Android phones vs. the iPhone. Aaron Toponce recently employed this method:
People jumped onto the iPhone bandwagon when it was announced on AT&T for two reasons: Apple fanboys and superior hardware. People getting an iPhone on the Verizon network will be: Apple fanboys.
The only reason offered for why people would get an iPhone on the Verizon network: they are Apple fanboys. By playing the fanboy card you simply get to ignore any possible counterpoint. I used Aaron's post because it illustrates the point so well; folks on both sides play this game, it isn't limited to one side or the other.
In the original post there was actually one detail provided, that the HTC Evo 4G is “head and shoulders over the iPhone 4. It’s no contest, and it’s already outdated hardware”. I’ve never used an HTC Evo 4G before and only played with an iPhone 4 for a few minutes. This made me curious to find out if there is any possible reason why someone would go with an iPhone 4 over an HTC Evo 4G. Fortunately others have already done the numbers comparison for me – Engadget lists numbers for the iPhone 4 vs. the HTC Evo 4G – with a chart for easy comparison.
Since the claim was “no contest” I only looked for things listed on the chart that someone could reasonably point to as providing some contest.
There are several other factors that could come into play, but I chose to limit the list to just specific numbers. Now it is entirely possible that none of the iPhone 4 advantages listed above make any difference given individual circumstances. The flip side is also true: some of these factors may be very important for some individuals. That constitutes a contest between the two.
The next time you get the urge to simply wave off counterpoints by calling the other side a fanboy, stop and think about what you are doing. Otherwise you may end up being the actual fanboy in the discussion :-)
mod_pagespeed is an open-source Apache module that automatically optimizes web pages and resources on them. It does this by rewriting the resources using filters that implement web performance best practices. Webmasters and web developers can use mod_pagespeed to improve the performance of their web pages when serving content with the Apache HTTP Server.
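If memory serves, a stock mod_pagespeed install announces itself with an X-Mod-Pagespeed response header, which makes it easy to spot from the command line. A quick check might look like this (example.com is a stand-in for a server you control):

# Dump the response headers and look for X-Mod-Pagespeed
curl -s -D - -o /dev/null http://example.com/ | grep -i x-mod-pagespeed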
curl -O -v --compressed http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js
This downloads a minified version of jQuery 1.4.3, with --compressed, which means I’d like the response to be compressed. The HTTP request looked like:
> GET /ajax/libs/jquery/1.4.3/jquery.min.js HTTP/1.1
> User-Agent: curl/7.16.4 (i386-apple-darwin9.0) libcurl/7.16.4 OpenSSL/0.9.7l zlib/1.2.3
> Host: ajax.googleapis.com
> Accept: */*
> Accept-Encoding: deflate, gzip
>
The response from Google was:
I was surprised that there was no Content-Encoding: gzip header in the response, meaning the response was NOT compressed. I wasn’t quite sure what to make of this at first. No way would Google forget to turn on HTTP compression; I must have missed something. I stared at the HTTP response for some time, trying to figure out what I was missing. Nothing came to mind, so I ran another test.
This time I made a request for http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js in Firefox 3.6.12 on Mac OS X and used Firebug to inspect the HTTP transaction. The request:
GET /ajax/libs/jquery/1.4.3/jquery.min.js HTTP/1.1
Host: ajax.googleapis.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
and the response:
This time the content was compressed. There were several differences in the request headers between curl and Firefox; I decided to start with just one, the User-Agent. I modified my initial curl request to include the User-Agent string from Firefox:
curl -O -v --compressed --user-agent "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12" http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js
> GET /ajax/libs/jquery/1.4.3/jquery.min.js HTTP/1.1
> User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12
> Host: ajax.googleapis.com
> Accept: */*
> Accept-Encoding: deflate, gzip
>
and the response:
Sure enough, I got back a compressed response. Google was sniffing the User-Agent string to determine if a compressed response should be sent. It didn’t matter if the client asked for a compressed response (Accept-Encoding: deflate, gzip) or not. What still wasn’t clear was whether this was a black list approach (singling out curl) or a white list approach (Firefox is okay). So I tried a few other requests with various User-Agent strings. First up, no User-Agent set at all:
curl -O -v --compressed --user-agent "" http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js
Not compressed. Next, a made-up string:
curl -O -v --compressed --user-agent "JosephScott/1.0 test/2.0" http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js
Not compressed. At this point I think Google is using a white list approach: if you aren’t on the list of approved User-Agent strings for getting a compressed response then you won’t get one, no matter how nicely you ask.
I collected a few more browser samples as well, just to be sure:
One more time, curl using the IE 8 User-Agent string:
curl -O -v --compressed --user-agent "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET4.0C;" http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js
Since I can manipulate the response based on the User-Agent value I’m left to conclude that the Google Library CDN sniffs the User-Agent string to determine if it will respond with a compressed result. From what I’ve seen so far Google Library contains a white list of approved User-Agent patterns that it checks against to determine if it will honor the compression request.
If you are on a current version of one of the popular browsers you will get a compressed response. For those using anything else, you’ll have to test to confirm whether Google Library will honor your request for compressed content. Opera users are just plain out of luck; even the most recent version gets an uncompressed response.
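If you want to run the same experiment yourself, a small loop over a handful of User-Agent strings makes quick work of it. This is just a sketch; the strings below are examples, substitute whatever clients you care about:

# For each User-Agent, request jquery.min.js and report any Content-Encoding header
for ua in "curl/7.16.4" \
    "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12" \
    "JosephScott/1.0 test/2.0"; do
    echo "== $ua"
    curl -s -D - -o /dev/null --compressed --user-agent "$ua" \
        http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js | grep -i content-encoding
    echo
done

No Content-Encoding line in the output means no compression for that User-Agent.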
Many people have already complained about Google Chrome leaving off the http:// in the URL field (there are certain cases where it does display, but that is now the exception, not the rule). Here is my take on why this move was not only wrong, but worse than what we had before.
Initially the complaint was that without the http:// in front, copy-paste would be a problem, because other systems use it to detect strings that look like URLs. So Google got clever (this should be the first clue of something bad happening, picking a clever solution over a simple one): when you copy the URL it magically inserts the http:// at the beginning so that it shows up when you paste.
Problem solved right? No, it actually made things worse.
It is not unusual for me to copy just the host name portion of the URL from my browser (Chrome is usually my default browser), but since Chrome silently adds the http:// in the background it is now impossible to copy just the host name. Using this site as an example, copying josephscott.org from the URL results in http://josephscott.org/ when I paste. Not only does it prefix http://, it also adds the trailing slash.
This ends up being super annoying. I’ve looked for options to disable this feature of Chrome, so that it always shows http:// in the URL field and doesn’t mangle copy-paste. So far I haven’t found a way to do this. My work around for now is to copy all but the first character of the host name, type that first character in manually, and then paste the rest of the host name.
Was mangling the copy-paste buffer in the background a clever hack? Yes. Is it better than the simple solution of just showing http:// in the URL field? No, not by a long shot.
Wednesday’s New York Times article – Google and Verizon Near Deal on Web Pay Tiers – claiming that Google is in talks with Verizon for preferred treatment of network traffic destined for Google servers – made folks very upset. In a nutshell: anti-net neutrality, which is a big deal given the size of the two companies involved. This caused the halt of private FCC meetings about regulation, and both Google (denial) and Verizon (denial) have denied that they are in talks for such a deal. They did confirm that they are talking, and that they have been talking for some time. This has gathered plenty of attention.
Eric Schmidt’s comment to CNBC may clear this up a bit:
Schmidt clarified that the net neutrality he advocates is not a neutrality between different types of content, but between the same type of content. He wants to make sure that there’s no discrimination between one video download over another.
If this is correct, the neutrality level would only be at the content type level, not at the lower network layer. I suppose this could be considered slightly better than anti-net neutrality, but not by much. Large network providers are notoriously slow at adapting to change. Have you tried getting an IPv6 address for your home connection? Or even for a co-located server, for that matter? Yeah, not as fun as it sounds.
While advocating that all video content (for example) be treated the same may sound nice, what happens when someone comes up with a hot new video delivery method? Are all the big networks going to update all of their filter rules right away to detect this new type of video packet? I think it is more likely to see rain on the moon than for all of the big network providers to respond quickly to such a change. As a result, any new development around video would limit itself to matching the existing packet patterns for video, to make sure it gets the same treatment by Internet routers. Could this have a stifling impact on innovation? You can count on it.
There is something else that Google and Verizon could be talking about in this area as well: large-scale network peering (more info on Internet peering here and here). This could provide a similar benefit (increased speed/performance for traffic to Google from Verizon) by reducing the number of routers between Verizon customers and Google servers. For instance, between my home DSL connection from Qwest and www.google.com there are roughly 14 routers (according to traceroute). Of those, 5 are operated by Qwest, 4 by Level3, and 5 by Google (confirmed by whois lookup for each IP address in the traceroute results). A peering arrangement between Qwest and Google would likely eliminate all 4 of the Level3 run routers and perhaps one or two more between Qwest and Google. Removing roughly 30% of the routers and a third party entirely would likely result in better network performance between my Qwest DSL connection and Google servers. Seems reasonable that Google and Verizon would be in high level talks to discuss peering arrangements.
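If you want to see the same breakdown for your own connection, traceroute plus a whois lookup per hop gets you most of the way there. A rough sketch (the whois field names vary by registry, so the grep pattern may need adjusting):

# List each hop on the way to www.google.com and ask whois who operates it
traceroute -n www.google.com 2> /dev/null | tail -n +2 | awk '{print $2}' | while read ip; do
    [ "$ip" = "*" ] && continue
    org=$(whois "$ip" | grep -i -m 1 -E 'orgname|org-name|netname')
    echo "$ip  $org"
done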
There is little doubt that Google already has many peering arrangements with other network carriers; given the massive popularity of Google services, such peering arrangements would be mutually beneficial. Perhaps Verizon has been holding out? Maybe they want Google to pay them for the privilege? That would go against the traditional no-pay arrangement for peering, but I don’t think that would stop Verizon from asking anyway.
In the end all of this is highly speculative, including the original New York Times article. It could range from nearly 100% spot on to nowhere near the mark. Until there is some sort of definitive declaration from Google and Verizon we won’t really know what is going on. That may be the biggest point out of all of this: if Google and Verizon are talking net neutrality issues then they need to be out in the open about it; the impact of such a discussion goes way beyond just these two companies.
For now Google says they are “committed to an open internet”. Given Eric Schmidt’s comments on content level neutrality I don’t think this is good enough; definitions of open come and go.
I was looking for something in my Gmail account recently and couldn’t find it via search. I came back a few hours later thinking that maybe I had made a typo, or threw off the search in some other way, but I couldn’t remember the exact terms that I’d used last time.
It would be really handy if Gmail kept a list of my last dozen or so search terms.
ReadWriteWeb has an article about Google potentially using PuSH to get updates into the search index. I knew this idea sounded familiar so I went hunting through some of my old posts and found – Apache Module Idea mod_ping from 2004.
Back then I had thought a lot about searching blog feeds. There were some services that offered blog feed searching but they were all pretty bad. I wrote a review of the situation at the time in Why Hasn’t Anyone Figured Out How To Do Feed Searches?. Keep in mind that Google’s blog search feature wouldn’t be announced for another year.
It seemed odd to me that blog feed search would be so bad given how strongly the blogging software community had embraced the idea of pinging updates. This led me to the idea of some sort of mod_ping for Apache that would do similar pings for any type of website update; it didn’t have to be limited to just blogs. Obviously I never took this idea anywhere (besides writing that post) and to be honest I hadn’t really revisited the idea much since. Search engines took a different route for update frequency, with features like sitemaps in 2006.
Fast forward to 2010 and we’ve got discussion of Google potentially using PubSubHubbub (PuSH) to subscribe to updates from every single web site out there. This brings up an interesting question though. Since PuSH focuses on feed formats (RSS & Atom) for pings, what format will pings from sites that don’t have feeds look like? Will the ping just contain the entire HTML output of the updated page? What about a diff (unified format of course!) between the new HTML and the previous HTML for a given page?
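For the feed case at least, the publisher side of PuSH is already simple: you tell a hub that a feed URL has been updated and the hub takes care of fetching the feed and fanning out the change. Something along these lines, with placeholder URLs (the open question above is what the equivalent would look like for a page with no feed at all):

# Tell a PuSH hub that a feed has new content
curl -d 'hub.mode=publish' \
    -d 'hub.url=http://example.com/feed/' \
    http://hub.example.com/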
Folks like Brett Slatkin have been thinking about this sort of thing on a deeper level than I ever did, so I’m curious to see where this goes.
Wandered across the 133t Google translation today. Wait for the rest of the page to fade in for the full effect.