Using Commons Pooling for Objects in JEE

A little more than three weeks ago, I was asked to rewrite a multi-threaded component that *pooled* connections to a Mainframe HTTP Gateway. There was an existing implementation built around a synchronized array of connections, initialized with a configurable size. The problem was that this implementation was home-grown way back in history, and the confidence level in that code was very low, especially because it was going into a JEE container and code-managed threads do not scale in tune with the container.

My first instinct was to use Apache Commons pooling. For one, it has been through far more QA than anything I would have implemented myself. The primary logic would stay roughly the same: create-pool > borrow-object > return-object. Everything could be done with Commons pooling, using the GenericObjectPool.
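To give an idea of the shape of it, here is a rough sketch of the factory side with Commons Pool 1.x. MainframeConnection (and its isAlive()/close() methods) is a made-up stand-in for whatever class wraps the gateway connection:

import org.apache.commons.pool.BasePoolableObjectFactory;

// MainframeConnection is a hypothetical wrapper around the gateway connection
public class MainframeConnectionFactory extends BasePoolableObjectFactory {

    // called by the pool whenever it needs a fresh connection
    public Object makeObject() throws Exception {
        return new MainframeConnection();
    }

    // called when an object is dropped from the pool (invalidate, evict, clear)
    public void destroyObject(Object obj) throws Exception {
        ((MainframeConnection) obj).close();
    }

    // the pool uses this to decide whether a connection is still usable
    public boolean validateObject(Object obj) {
        return ((MainframeConnection) obj).isAlive();
    }
}

The pool itself would then be created with something like new GenericObjectPool(new MainframeConnectionFactory(), poolSize), with poolSize coming from the same configuration the old synchronized array used.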

On a finer level, I needed to manage the life of the objects themselves: make sure the encapsulated connection is not stale, and invalidate it when it is.
A sample example (as in the JavaDoc):
Object obj = null;
try {
    // the only guarantee is that the same object will not
    // be given to two different threads at the same time
    obj = pool.borrowObject();
    // loop here until you get a non-stale connection
    // ...use the object...
} catch (Exception e) {
    // if anything goes wrong, invalidate the object
    pool.invalidateObject(obj);
    // do not return the object to the pool twice
    obj = null;
} finally {
    // make sure the object is returned to the pool
    if (null != obj) {
        pool.returnObject(obj);
    }
}
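Rather than looping inside the try block until a non-stale connection turns up, the stale check could be pushed into the factory's validateObject() and the pool asked to run it on every borrow. A sketch, assuming the hypothetical factory above; these are plain GenericObjectPool setters:

// run validateObject() on every borrow; stale connections get destroyed
// and replaced behind the scenes, so borrowObject() only hands out good ones
pool.setTestOnBorrow(true);

// optionally, let the idle-eviction thread weed out stale connections in the background
pool.setTimeBetweenEvictionRunsMillis(30 * 1000L);
pool.setTestWhileIdle(true);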

The default implementation blocks the borrowObject() call indefinitely when the pool is exhausted. To make this more scalable, I would setMaxWait() to a positive number of milliseconds and handle the resulting NoSuchElementException.
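A sketch of how that could look, wrapped in a small class so the checked exceptions from the pool API have somewhere to go; the 5 second wait is just an illustrative value:

import java.util.NoSuchElementException;
import org.apache.commons.pool.impl.GenericObjectPool;

public class GatewayClient {

    private final GenericObjectPool pool;

    public GatewayClient(GenericObjectPool pool) {
        this.pool = pool;
        // cap the wait instead of blocking the container thread forever
        pool.setMaxWait(5 * 1000L);
    }

    public void callGateway() throws Exception {
        Object obj = null;
        try {
            obj = pool.borrowObject();
            // ...use the connection...
        } catch (NoSuchElementException e) {
            // maxWait elapsed with no free connection: fail fast, retry, or degrade
        } catch (Exception e) {
            // the connection itself failed: drop it so the pool replaces it
            if (null != obj) {
                pool.invalidateObject(obj);
                obj = null;
            }
            throw e;
        } finally {
            if (null != obj) {
                pool.returnObject(obj);
            }
        }
    }
}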

PS. I never completed this implementation due to lack of time at work and other production emergencies hogging my time. I would not have written this post otherwise, but I am going to take up a new position at a new client soon, so the eventual implementation will be done by someone else at my current client. This post is dedicated to my own procrastination, and to suggesting my idea to him/her.
