
Hmm

I've been thinking I should come back to blogging. It's been a while, and a lot has happened in my life. I lost my beloved father to cancer. I wish he had been with me longer, but at least he got to see my wedding. Yes, I got married. To the love of my life. Well, I dropped everything for these two people I care about the most.

Okay, not completely. In this period, I started learning Grails and Groovy. Started loving App Engine for Java. I upgraded my Lenovo to Jaunty. I lost track of the blogosphere. I bought a Dell 1545 for my wife and forcibly turned her into an Ubuntu fan :). Started working on iBATIS again :(. Moved to Rockville, MD. Totally ignored Hollywood (have to catch up on a lot of movies). And much more.

Coming back, let me start with a @#$@#$ about what is going on with all these news aggregation sites. For a while now, sites have stopped creating content themselves, making a living purely out of aggregation. But it is getting worse: they are linking among themselves before pointing to the original. Look at this: http://www.tuxwire.com/2009/08/06/mind-mapping-with-xmind-on-ubuntu-2/. The original story can only be reached by following a crazy chain of links two steps deep. Is this turning into another "Forward this mail" tradition, only with the immediate incentive of ad revenue?
