
Resource Contention in JEE for a layman

In discussions of scalability, a frequent term is "Resource Contention". Most commonly this means there is a shared object, file handle, database connection, or similar resource for which threads are competing for ownership. Until a thread obtains this ownership it either waits or exits via a timeout mechanism. In the normal flow, it simply waits - this state is called the blocked state. In an asynchronous mode (generally meaning a spawned child thread), a timeout can be defined to control the maximum length of time the thread may take to complete a task.
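The two outcomes above - wait forever, or give up after a timeout - can be seen in a minimal sketch using `java.util.concurrent.locks.ReentrantLock`. The class and resource names are hypothetical; here the main thread holds the lock, so the worker's timed attempt fails:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimeoutDemo {
    // Hypothetical stand-in for a contended resource (e.g. a db connection).
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        lock.lock(); // main thread owns the resource for the whole demo
        Thread worker = new Thread(() -> {
            try {
                // Instead of blocking forever, give up after 100 ms.
                if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                    try {
                        System.out.println("worker got the resource");
                    } finally {
                        lock.unlock();
                    }
                } else {
                    System.out.println("worker timed out");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        worker.join();
        lock.unlock();
    }
}
```

A plain `lock.lock()` in the worker would instead be the "normal flow": the worker would simply sit in the blocked state until main released the lock.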

In a blocked state, "Resource Contention" in one thread may prevent other threads from proceeding. In a JEE environment, this resource contention is most commonly caused by transactions (and also by synchronized blocks and locks). Since user-managed threads are discouraged in JEE, synchronized blocks and locks are less often encountered as issues.
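To make "one thread prevents others from proceeding" concrete, here is a minimal sketch of a synchronized block (the names are illustrative): while one thread holds the monitor, the other is blocked, so the two threads' log entries can never interleave.

```java
public class ContentionDemo {
    private static final Object shared = new Object();
    private static final StringBuilder log = new StringBuilder();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            synchronized (shared) {           // only one thread at a time in here
                log.append(Thread.currentThread().getName()).append(" in; ");
                try { Thread.sleep(50); } catch (InterruptedException ignored) {}
                log.append(Thread.currentThread().getName()).append(" out; ");
            }
        };
        Thread a = new Thread(task, "A");
        Thread b = new Thread(task, "B");
        a.start(); b.start();
        a.join(); b.join();
        // Whichever thread enters first, its "in"/"out" pair is contiguous:
        // the other thread spent that time blocked on the monitor.
        System.out.println(log);
    }
}
```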

That's the theory. If you didn't understand much, don't despair - I come from the very same shoes. Imagining a multi-threaded environment and its concepts purely in theory is VERY difficult for a person coming from a basic programming background. A small example should help.

Imagine a Tomcat web app. When a request comes in from the browser, it becomes a THREAD. This thread starts at the socket listener, goes through the Catalina servlet engine, and reaches the servlet mapping in your application. Assuming you are using MVC, the application starts with a central controller, which hands off control to the service layer, which calls the DAO layer; the stack then unwinds in the reverse order, ending with a result in the web browser. In the simple case, ALL of this is *ONE THREAD*.
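The single-thread round trip can be sketched as plain nested method calls; the layer names below are illustrative, not a real framework. In a servlet container, the request thread would enter at the controller and unwind back out the same way:

```java
// One request = one thread walking down the whole stack and back up.
public class OneThreadFlow {
    static String dao()        { return "rows"; }                  // DAO: talks to the database
    static String service()    { return dao() + " -> dto"; }       // service layer: business logic
    static String controller() { return service() + " -> view"; }  // central controller: picks the view

    public static void main(String[] args) {
        // A servlet container would invoke this on the request-handling thread.
        System.out.println(Thread.currentThread().getName() + ": " + controller());
    }
}
```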

But the DAO has a data source, and the data source provides connections (resources). These are *limited* resources, and here is the main issue. Assume your application gets 3 requests in the first second (depicted from here on as 1'), so 3 such *ONE THREAD* requests start and take, say, 10 seconds each to complete. In the simplest case, 3 connections are created at the DAO call and destroyed 10 seconds later. But what if 30 more requests arrive at 2'? At 2' there are 33 connections and 33 threads.
And with 40 more requests at 3', there are 73 connections at 3'.

So it goes like this:
1' - 3
2' - 33
3' - 73
...
...
11' - 70
12' - 40
13' - 0
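The numbers above come from a simple sliding window: a request arriving at second s still holds its connection at second t if t is within 10 seconds of s. A minimal sketch that reproduces the table (arrival counts hard-coded from the example):

```java
public class ConnectionLoad {
    public static void main(String[] args) {
        int[] arrivals = new int[14];     // index = second, value = requests arriving then
        arrivals[1] = 3;
        arrivals[2] = 30;
        arrivals[3] = 40;
        int duration = 10;                // each request holds a connection for 10 seconds

        for (int t = 1; t <= 13; t++) {
            int open = 0;
            // A request arriving at s is still open at t if s <= t < s + duration.
            for (int s = Math.max(1, t - duration + 1); s <= t; s++) {
                open += arrivals[s];
            }
            System.out.println(t + "' - " + open);
        }
    }
}
```

At 11' the first batch of 3 finishes (3 + 10 seconds), leaving 70; at 12' the batch of 30 finishes, leaving 40; at 13' everything is done.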

The problem with this is: if 1000 requests are being served at a time, there will be 1000 open connections. Threads are light - you can have as many as you want - but connections are costly resources. So we have to share them.

Think of a connection as a bead, and the thread of execution as an actual thread with a needle at the front and an open end. Take a bead (the resource) and push the thread through it. Move the bead along the thread until it reaches the end, where the bead falls off. This is exactly what happens to the request: it starts, picks up a connection, uses it, and releases it. But this is the one-thread, one-connection scenario.

Now look at how it is in JEE. There are multiple connections (pooled resources); consider them to be beads in a bowl. Requests come in and create threads. The connection pool manager makes sure only ONE thread gets one specific connection at a time. The constructs I mentioned before (locks, synchronized blocks) are used internally by the pool manager, so the user doesn't need to use them. After the thread finishes, the connection is given back to the pool (the bead falls back into the bowl).
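The bowl of beads can be sketched as a tiny pool built on a `BlockingQueue`. This is an illustrative toy, not a real JEE pool manager - real pools (and the container's `DataSource`) handle validation, timeouts, and much more - but `take()` blocking when the bowl is empty is exactly the waiting described above:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BeadPool {
    // The "bowl": a fixed set of beads (connections) any thread may borrow.
    private final BlockingQueue<String> bowl;

    public BeadPool(int size) {
        bowl = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) bowl.add("conn-" + i);
    }

    public String borrow() throws InterruptedException {
        return bowl.take();   // blocks here if every bead is on some other thread
    }

    public void giveBack(String conn) {
        bowl.offer(conn);     // the bead falls back into the bowl
    }

    public static void main(String[] args) throws Exception {
        BeadPool pool = new BeadPool(2);  // 2 connections, 4 competing request threads
        Runnable request = () -> {
            try {
                String conn = pool.borrow();
                try {
                    Thread.sleep(50);     // "use" the connection
                } finally {
                    pool.giveBack(conn);  // always return the bead
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread[] threads = new Thread[4];
        for (int i = 0; i < 4; i++) (threads[i] = new Thread(request)).start();
        for (Thread t : threads) t.join();
        System.out.println("served 4 requests; " + pool.bowl.size()
                + " connections back in the bowl");
    }
}
```

Two of the four threads get a bead immediately; the other two block in `take()` until a bead is returned. That waiting is the contention this whole post is about.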

Now you get the picture: when there are more threads than connections, the extra threads must wait for a connection to free up. That is the bottleneck. In most scenarios, this is the only part that requires attention when the scalability of a JEE app is in question.

This is a basic overview of resource contention and how it affects scalability. I will follow up on the finer details in subsequent posts.
