The day has finally arrived, Google Reader is no more. You will be missed.
That leaves PageSpeed Online as the only option for running a PageSpeed analysis on a site.
I really liked using the Chrome plugin (back when I could still get it to work), but I’m happy to see there is at least one place where you can still run a PageSpeed analysis.
Last week I mentioned the concerns over the $1 sale price of the iProvo fiber network to Google. Well, the Provo Google Fiber project continues to get even stranger than that. From the Daily Herald article on the Provo city council vote:
Curtis also introduced new information and obligations that had not been discussed during the initial excitement of last week. There will be a need to spend some money. For one, the map on where the fiber conduits are actually laid is not available and it may take some guessing at a few locations as to what side of the street the fiber backbone is under. There is also an agreement the city will have control of the fiber to the schools and the city operations. Money has already been set aside from the telecom fund to take care of those needs. An insurance policy will also be needed to protect the city from the unknown. The total cost for city outlay will be approximately $1.7 million.
Emphasis at the end is mine.
A more detailed breakdown was reported by The Salt Lake Tribune:
- $722,000 “for equipment in order to continue using the gigabit service for government operations already using the network, such as the operation of traffic lights and police and fire services.”
- $500,000 “to a civil engineering firm to determine exactly where the fiber optic cables are buried, a requirement by Google”
- $500,000 “for an insurance policy to help mitigate any possible legal damages should Provo’s network not be presented to Google as promised”
Of course Google is paying Provo $1 for the network, so the real cost to Provo for selling their existing fiber network to Google is only $1,721,999. Still a fair bit of money to pay someone to take an asset off your hands.
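Purely as a sanity check on the reported figures, here is the net cost worked out from the Tribune's line items (the dictionary labels are my own shorthand for the three items quoted above):

```python
# Itemized costs from The Salt Lake Tribune's breakdown
costs = {
    "equipment for government operations": 722_000,
    "civil engineering survey of fiber locations": 500_000,
    "insurance policy": 500_000,
}

total_outlay = sum(costs.values())  # 1,722,000 -- roughly the $1.7M figure
net_cost = total_outlay - 1         # minus the $1 Google pays for the network
print(f"${net_cost:,}")             # → $1,721,999
```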
Then there is the issue of not even knowing where all of the fiber in the ground actually is. Didn’t they have to file permits with the city when they installed it in the first place? If they moved it later on, wouldn’t that require getting permits from the city as well? For something that they paid $39M for I would have thought they would keep a close eye on it.
The Salt Lake Tribune also reported numbers on how much is still due on the original $39M in bonds:
With interest, taxpayers still have to pay $3.3 million in bond payments per year for the next 12 years.
For a total of $39.6M that Provo will have paid out over the next 12 years.
This story may still have a happy ending. If Google Fiber in Provo blossoms into everything it could be, then all of this may have been worth it.
The money thing doesn’t really freak me out though. What really freaks me out is that the city of Provo has connected “operation of traffic lights and police and fire services” to the same fiber network that connects to the Internet. That strikes me as a really, really, really bad idea.
Last week I mentioned that Google Fiber was coming to Provo. This process would be accelerated because they were purchasing the existing fiber network, iProvo, instead of having to build out a completely new one.
Turns out the sale price of iProvo fiber network to Google is $1. It should come as no surprise that some people are less than thrilled about this price.
The original cost to build out the iProvo fiber network is reported at $39 million. To fund this the city of Provo issued bonds. Those bonds have not yet been paid in full, and the $1 in revenue from the sale isn’t going to pay them off. In an effort to help pay off the bonds, in 2011 a surcharge was added to the utility bills of every household in Provo.
While most people seem happy to have Google coming in to upgrade and run the fiber network in Provo, others are concerned that the sale of the network does nothing to address the remaining millions of dollars in bond obligations.
Both sides are benefiting from this, but the $1 sale does feel a bit odd. I’d assume the existing fiber network has some value, less than the original $39 million and more than $1. Perhaps something on the order of $5 to $10 million would have been reasonable. Google would still have been given a great deal on existing infrastructure, and the city of Provo would have some money to pay down the bonds faster.
In the negotiations for this I’m sure Google held all the cards. The city of Provo was likely happy to bend over backwards and into a pretzel to make sure Google didn’t walk away from the deal.
The big announcement in Utah yesterday was that Google Fiber is coming to Provo. Instead of building out completely new infrastructure, Google will be purchasing the existing fiber network in Provo. From there they will be upgrading it to be in line with other Google Fiber installs.
I live about a 30 minute drive north of Provo, so this won’t make it to my home. Though Comcast recently upped my connection to 50Mbps down, so my current connection isn’t horrible. I wonder if Comcast knew this announcement from Google was coming and wanted to preempt some of the complaints from users.
The other thing that I wonder about is the NSA data center a short hop north of Provo. Was this a motivating factor on either side? No idea. It would be nice to know if Google Fiber in Provo ends up peering with the NSA data center.
Speaking of peering, there are a few others in the area that would probably love to have a network peering arrangement with Google Fiber in Provo. C7 has a data center in Bluffdale (north of Provo and not far from the NSA data center) that apparently has a hefty number of Twitter servers. Bluehost has a data center in Provo. Those are the two I could think of off the top of my head, but there are other data centers in the area that would be interested as well.
This is just the beginning of Google Fiber in Provo, and even though I won’t have access to it, I’ll be watching closely to see how it develops.
“Is It More Important Than Reader?”
That is the question that immediately came to mind when I saw the announcement for Google Keep. If the concern was focusing more resources on fewer products, this seems to go in the opposite direction.
Then there is the loss of trust issue with keeping a service around. Om Malik described his thoughts on Google Keep this way:
It might actually be good, or even better than Evernote. But I still won’t use Keep. You know why? Google Reader.
For each new Google product announcement I’m going to be wondering, is it more important than Google Reader?
When Google Reader came along in 2005 it wiped out most of the feed reader services that were around at the time. Prior to Google Reader I used Bloglines, back when Bloglines was still a feed reader. I had resisted moving from Bloglines to Google Reader, because for a while I liked the experience in Bloglines better. Ultimately though, Google Reader soaked up most of the oxygen in the feed reader service space and I moved entirely from Bloglines to Google Reader.
I suggest exporting your Google Reader data via Google Takeout soon. It only took a few minutes to walk through the steps to export my data. Other readers will likely support importing OPML, so best to export now instead of waiting until the last minute.
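If you want to poke at the exported subscriptions yourself, they come out as OPML, which is just XML and easy to parse. A minimal sketch (the sample feed entry below is a made-up placeholder, not from an actual Takeout export):

```python
import xml.etree.ElementTree as ET

# A minimal OPML snippet in the general shape of a feed reader export;
# the feed entry here is a made-up placeholder.
sample_opml = """<opml version="1.0">
  <body>
    <outline title="Blogs">
      <outline type="rss" text="Example Feed"
               xmlUrl="http://example.com/feed"
               htmlUrl="http://example.com"/>
    </outline>
  </body>
</opml>"""

def feed_urls(opml_text):
    """Return every feed URL (xmlUrl attribute) found in an OPML document."""
    root = ET.fromstring(opml_text)
    return [node.get("xmlUrl") for node in root.iter("outline") if node.get("xmlUrl")]

print(feed_urls(sample_opml))  # → ['http://example.com/feed']
```

Most replacement readers that accept OPML are really just looking for those `xmlUrl` attributes.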
Despite how ugly this is going to be in the short term I think this painful process may eventually yield better things ahead. Marco Arment put it this way:
Now, we’ll be forced to fill the hole that Reader will leave behind, and there’s no immediately obvious alternative. We’re finally likely to see substantial innovation and competition in RSS desktop apps and sync platforms for the first time in almost a decade.
Technology, like nature, views a vacuum as an opportunity to fill a space.
So what now? There are a few alternatives available today, but none of them seem to be a clear winner. I imagine over the next twelve months there will be a number of new services that show up, trying to be _the_ replacement for Google Reader. It will be ugly for a while, but I’m hopeful in the end we’ll see some really strong replacements come out of this.
In the meantime, leave your Google Reader replacement suggestions in the comments below.
Zopfli Compression Algorithm is a new zlib (gzip, deflate) compatible compressor. This compressor takes more time (~100x slower), but compresses around 5% better than zlib and better than any other zlib-compatible compressor we have found.
Being gzip compatible means that existing clients can decompress files that have been compressed with Zopfli. Specifically, web browsers will be able to understand this.
A number of people have asked what the big deal is over such a relatively small size reduction at the cost of a much slower compression process. Let’s take the latest jQuery file from the Google CDN as an example – http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js.
First I cloned the Zopfli source repo and compiled it on my MacBook Air. It took about a minute or so to clone and build; the build process only required running `make`.

Here are the results of `gzip -9` and `zopfli --i1000` on jquery.min.js:
| Command | Compressed size | Time |
| --- | --- | --- |
| `gzip -9` | 32,660 bytes | 0.009s |
| `zopfli --i1000` | 31,686 bytes | 16.376s |
In this test Zopfli saved an additional 974 bytes and took over 16 seconds longer. In our Google CDN example the time it takes to do the compression doesn’t make any difference, since that is something you’ll only be doing once per file. To see what the potential savings is from those extra 974 bytes we’d need to know how often the jQuery file is downloaded. I don’t know what the actual numbers are, so let’s make some up. Let’s say it is 1 million times per month.
So 974 bytes * 1,000,000 downloads gives us 974,000,000 bytes per month in savings. No doubt this is a drop in the bucket when compared to total bandwidth usage at Google, but it is an improvement. The improvement wouldn’t just be for Google either; everyone who views web sites that use the Zopfli-compressed jQuery would get a slightly better experience as well. A smaller file means it gets downloaded faster over an existing connection. This would be extra good for mobile users.
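The back-of-envelope math above, with the made-up download count spelled out:

```python
gzip_size = 32_660    # bytes, from the gzip -9 result above
zopfli_size = 31_686  # bytes, from the zopfli --i1000 result above
savings_per_download = gzip_size - zopfli_size  # 974 bytes

downloads_per_month = 1_000_000  # made-up number, as noted above
monthly_savings = savings_per_download * downloads_per_month
print(monthly_savings)                   # → 974000000 bytes
print(round(monthly_savings / 1024**2))  # → 929 (i.e. roughly 929 MB per month)
```

Not huge on its own, but at CDN scale these per-file wins are effectively free once the file is compressed.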
I think there are plenty of cases where using Zopfli to compress your files will be a nice little improvement.
Google is setting some high expectations at the new Google Glass site – http://www.google.com/glass/start/ – if you haven’t seen it yet go watch the How it feels video. Setting the expectations bar so high is either going to create a ton of demand for Glass, or is going to kill it.
Beyond that though, the opportunities and challenges for Glass are fun to look at. For instance, will this style of wearable technology fall into the same “lame look” bucket as Bluetooth earpieces? Beyond style, how will people find ways to use it inconspicuously? I mean, can you imagine having to put this on during a meeting to look up contact information or an email thread? Would Glass support whispering, or hand gestures, so that you could do things without having to talk out loud? Can you imagine how annoying it would get if there were 20 people in a room all trying to use their own Glass headset at the same time?
This leads me to wondering, would Glass be positioned as a phone replacement or a phone accessory? The ability to function as an amazing phone accessory will be very compelling. But like some people with tablets vs. laptops, there will no doubt be a segment of the market that will see Glass as a potential phone replacement. My guess is that will still be the minority for at least the first few years.
There are plenty of situations where having something voice activated that is hands free is very desirable. With a few extra features this could do amazing things for firefighters, policemen, paramedics, etc. Tons of recreational and sports activities aren’t conducive to handling a phone, but something like Glass would go well. Imagine a composite video simulation of the Olympic opening ceremonies based on video feeds from every single person in the stands, the performers, and stadium crew, all wearing a Glass headset.
Which would be an astronomical amount of data to process. It wouldn’t even take thousands of people in one stadium to generate a large amount of data. Just one person constantly taking pictures, videos, sending messages, can generate a demand for lots of storage and a reasonable way to manage it all. That is both a huge opportunity and a huge challenge that Google will have to deal with, and could be the key to making the whole thing a viable source of revenue. What would you be willing to pay for a Google Glass account that allowed you to store, access, and share everything you ever did using your Glass headset?
The news that Google is building another data center in Iowa got me wondering if there was a list of all the Google data centers.
Turns out Google has a page with all of the data center locations, which is just one part of their ‘Data Center’ details area. There are 6 Google data centers in the United States, 2 in Europe, and 3 in Asia. Each data center has a page with more details.