In discussions of scalability, a frequent term is "resource contention". Most commonly it means there is a shared object (a file handle, a database connection, or similar) for which threads are competing for ownership. Until a thread obtains ownership, it either waits or gives up via a timeout mechanism. In the normal flow it simply waits; this is called the blocked state. In an asynchronous mode (generally meaning a spawned child thread), a timeout can be defined to bound the maximum length of time the thread may take to complete a task.
In the blocked state, resource contention around one thread can prevent other threads from proceeding. In a JEE environment, this contention is most commonly caused by transactions (and, less often, synchronized blocks and locks). Since user-managed threads are discouraged in JEE, synchronized blocks and locks are rarely the culprit.
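To make the blocked state concrete, here is a minimal sketch (class and method names are mine) in which a second thread blocks on a synchronized block until the first thread releases the lock on the shared resource:

```java
public class ContentionDemo {
    private static final Object SHARED = new Object(); // the contended resource
    private static int counter = 0;

    // Two threads increment a shared counter; the lock forces the second
    // thread into the BLOCKED state until the first one releases it.
    static int runContended() throws InterruptedException {
        counter = 0;
        Runnable task = () -> {
            synchronized (SHARED) {                  // only one owner at a time
                int local = counter;
                try { Thread.sleep(50); } catch (InterruptedException e) { }
                counter = local + 1;                 // safe: no lost update
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);                // blocks on SHARED while t1 holds it
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runContended());          // always 2; without the lock, sometimes 1
    }
}
```

Remove the synchronized block and the two threads can interleave their read and write, losing an update; with it, the second thread simply waits.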
That's the theory. If you didn't understand much, don't despair; I have walked in the very same shoes. Imagining a multi-threaded environment and its concepts purely in theory is VERY difficult for someone coming from a basic programming background. A small example should help.
Imagine a Tomcat web app. When a request comes in from the browser, it becomes a THREAD. This thread starts at the socket listener, passes through the Catalina container, and reaches the servlet mapping in your application. Assuming you are using MVC, the application starts with a central controller, which hands control to the service layer, which calls the DAO layer; the call stack then unwinds in reverse order, ending with a result in the web browser. In the simple case, ALL of this is *ONE THREAD*.
But the DAO has a data source, and the data source provides connections (resources). These are *limited* resources, and here lies the main issue. Assume your application gets 3 requests in the first second (depicted from here on as 1'), so 3 of these *ONE THREAD* executions start and, let's say, take 10 seconds each to complete. In the simplest case, 3 connections are created at the DAO call and destroyed 10 seconds later. But what if there are 30 requests at 2'? Then at 2' there are 33 open connections and 33 threads.
And 40 requests at 3'? There are 73 connections at 3'.
So it goes like this:
1' - 3
2' - 33
3' - 73
...
...
11' - 70
12' - 40
13' - 0
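Under the stated assumption that every request holds its connection for exactly 10 seconds, the timeline above can be reproduced with a small sketch (the class and method names are mine):

```java
public class ConnectionLoad {
    // arrivals[t] = requests arriving at second t+1;
    // each request holds a connection for holdSeconds seconds
    static int[] concurrent(int[] arrivals, int holdSeconds) {
        int horizon = arrivals.length + holdSeconds;
        int[] open = new int[horizon];
        for (int t = 0; t < arrivals.length; t++) {
            for (int d = 0; d < holdSeconds; d++) {
                open[t + d] += arrivals[t]; // a request started at t is still open at t+d
            }
        }
        return open;
    }

    public static void main(String[] args) {
        int[] open = concurrent(new int[]{3, 30, 40}, 10);
        System.out.println(open[0]);  // 1'  -> 3
        System.out.println(open[1]);  // 2'  -> 33
        System.out.println(open[2]);  // 3'  -> 73
        System.out.println(open[10]); // 11' -> 70 (the 3 from 1' have finished)
        System.out.println(open[11]); // 12' -> 40
        System.out.println(open[12]); // 13' -> 0
    }
}
```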
The problem with this is: if 1000 requests are being served at a time, there will be 1000 connections open. Threads are light; you can have nearly as many as you want. But connections are costly resources, so we have to share them.
Think of connections as beads, and the thread of execution as an actual thread with a needle at the front and an open end. Take a bead (a resource) and push the thread through it. Move the bead along the thread until it reaches the open end, where the bead falls off. This is exactly what happens to a request: it starts, picks up a connection, uses it, and releases it. But this is the one-thread, one-connection scenario.
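This acquire-use-release lifecycle can be sketched as follows. A toy Connection class stands in for a real JDBC connection here (in a real app you would get one from a javax.sql.DataSource); the point is that try-with-resources guarantees the bead falls off the thread:

```java
public class BeadDemo {
    // Toy stand-in for a JDBC connection
    static class Connection implements AutoCloseable {
        boolean closed = false;
        String query() { return "result"; }
        @Override public void close() { closed = true; } // bead falls off the thread
    }

    // The acquire-use-release lifecycle of a single request
    static Connection run() {
        Connection conn = new Connection();   // pick up the bead
        try (conn) {                          // try-with-resources guarantees release
            conn.query();                     // move the bead along the thread
        }
        return conn;
    }

    public static void main(String[] args) {
        System.out.println(run().closed);     // true: released even if query() had thrown
    }
}
```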
Now look at how it works in JEE. There are multiple connections (pooled resources); consider them beads in a bowl. Requests come in and create threads. The connection pool manager makes sure only one thread gets one specific connection at a time. The constructs I mentioned before (locks, synchronized blocks) are used internally by the pool manager, so the user doesn't have to use them. After the thread finishes, the connection is given back to the pool (the bead falls back into the bowl).
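A minimal sketch of what the pool manager does internally, assuming a toy pool backed by a BlockingQueue (real containers use managed DataSource pools such as DBCP or HikariCP, with validation, timeouts, and much more):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MiniPool {
    // The "bowl" of beads; take() hands each connection to exactly one thread
    private final BlockingQueue<String> beads;

    MiniPool(int size) {
        beads = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) beads.add("conn-" + i);
    }

    String borrow() throws InterruptedException { return beads.take(); } // blocks if bowl is empty
    void giveBack(String conn) { beads.add(conn); }                      // bead falls back in the bowl
    int available() { return beads.size(); }

    public static void main(String[] args) throws Exception {
        MiniPool pool = new MiniPool(2);
        String a = pool.borrow();
        String b = pool.borrow();
        // A third borrow() here would block until a connection is returned
        pool.giveBack(a);
        String c = pool.borrow();              // succeeds immediately now
        pool.giveBack(b);
        pool.giveBack(c);
        System.out.println(pool.available()); // 2: all beads back in the bowl
    }
}
```

The blocking take() is exactly the "wait" of the blocked state described earlier: when the bowl is empty, borrowing threads queue up until a bead is returned.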
Now you get the picture: when there are more threads than connections, the threads have to wait for connections. That is the bottleneck. In most scenarios, this is the only part that requires attention when the scalability of a JEE app is in question.
This is a basic overview of resource contention and how it affects scalability. I will follow up on the finer details in subsequent posts.