
Why the Acer A500 ICS update's distribution strategy is a failure

Acer released the Android Ice Cream Sandwich (ICS, a.k.a. 4.0) update on April 27th, 2012 for its A50x line of tablets. Owners were ecstatic about the news. The deployment process for Acer, however, and the installation for end users, was nothing short of a disastrous saga.

For a couple of days, users ran into one of four failure modes:
  1. Downloads that did not start at all, while the update app claimed "Network coverage is currently poor: Please move to a better location with better network coverage to continue." (appalling for users sitting right next to their routers).
  2. Downloads that started but never finished (the progress bar waited, seemingly for eternity, until the download window finally disappeared).
  3. Downloads that reached 100% but never kicked off the update installer (this and the next are probably the most heart-breaking).
  4. The update installer failing with "Invalid update file".

A lucky few got through the whole process without it breaking. That was not the case for the majority of users.

The failure of this distribution strategy can be summarised in one major point: the release payload was centrally distributed and made available only through the update app. This meant the ~400MB download (plus the repeat requests from failed attempts) had to be served to every user by Acer's own servers. Add to that a "primitive" update application handling the download process. Whatever the reason, this does not sound rational. Even if there were geographically load-balanced CDNs behind it, the fact that the download process is interwoven with the update program itself is beckoning Murphy's law.
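The download side alone illustrates the point: resumable HTTP downloads via `Range` requests are a well-understood technique that the update app apparently lacked, forcing users to re-fetch the full ~400MB after every failure. A minimal sketch of the idea in Python (the function names and chunk size are mine, not anything from Acer's tooling):

```python
import os
import urllib.request

def range_header(offset):
    """Ask the server to send only the bytes we are still missing."""
    return {"Range": "bytes=%d-" % offset} if offset else {}

def resume_download(url, dest, chunk_size=64 * 1024):
    """Append to a partial file on disk instead of restarting from byte zero."""
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers=range_header(offset))
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
```

With something like this, a dropped connection costs only the bytes not yet written, rather than another full round trip through Acer's overloaded servers.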

Presumably, user reactions ranged from disappointment to widespread disdain. As with any update release, Acer should have planned better. Torrent distribution and plain HTTP downloads are far more standard. They could have made an encrypted file available for download, sans the controlled delivery mechanism. The update tool could then decrypt the user-downloaded file and install it. Maybe in future, Acer will not repeat this mistake. Well, at least it released the update pronto, unlike some other industry leaders (read: Samsung).

Update: I went through all four cases above and finally got ICS on my tablet. Thank you, Acer! And take my advice: distribute your updates the "open" way in future.
