
Load Balancing With Tomcat


Apache Tomcat clustering, load balancing, failover, session replication, and optimisation

You are all probably familiar with high-availability clusters.

To cut a long story short, the idea of high-availability clusters is to create failure-resistant, reliable, and blisteringly fast systems.

HA clusters most commonly use the following techniques (or should I say "buzzwords"?):

• clustering
• load balancing
• failover
• session replication

There is a lot of documentation out there (forums, blog posts), but much of it is full of errors
(including Apache's official mod_proxy documentation).

Here I show you how to cluster, load balance, fail over, and replicate sessions using Apache
HTTP Server 2.2.11 and Apache Tomcat 6.0.20.

The infrastructure

I have three machines:

• public - Apache HTTP 2.2.11, load balancer
• backend1 - Apache Tomcat 6.0.20, web container, IP: 172.16.253.88
• backend2 - Apache Tomcat 6.0.20, web container, IP: 172.16.253.7

public is the front-end Apache HTTP server; it balances traffic and distributes requests across
the two backend Tomcats.

Optimisation

I did some additional things to make my HA cluster even swifter; my tips are:

• use the AJP communication protocol - AJP is a binary protocol and is far more efficient
than the verbose, text-based HTTP protocol (see the connector sketch after this list)

• use the APR-based Apache Tomcat Native library - it gives optimal performance in
production; when enabled, the AJP connector uses a socket poller for keepalive, which
increases the scalability of the server and significantly reduces the number of
processing threads Tomcat needs

• unload all unnecessary modules in Apache HTTP Server - simply don't waste your
CPU and memory on things that won't be used at all

• optimise JVM parameters - last year I wrote an article about JVM performance
tuning for Java EE; you can read it here: Tuning JVM for Java EE development
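
For reference, here is the AJP connector as it appears in Tomcat 6's stock conf/server.xml; once the Tomcat Native (tcnative) library is on java.library.path, Tomcat picks the APR implementation of this connector automatically (port 8009 is just the default):

<!-- conf/server.xml: the AJP/1.3 connector the balancer talks to;
     with tcnative on java.library.path the APR implementation is used -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />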

Simple clustered application


I wrote a very simple web application. I called it test-app.

It basically consists of META-INF/context.xml, WEB-INF/web.xml and index.jsp files.

META-INF/context.xml is used to inform Tomcat that our application is to be clustered.

In order to do so, inside META-INF/context.xml, I added the distributable="true"
attribute to the <Context /> element:
<?xml version="1.0" encoding="UTF-8"?>
<Context distributable="true" />
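By the way, the same thing can be declared the standard Servlet way, with a <distributable /> element in WEB-INF/web.xml, so either location works:

<!-- WEB-INF/web.xml: marks the web application as distributable -->
<distributable />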
My index.jsp file looked something like this:
<html>
<head><title>Cluster test app</title></head>
<body>
<h1>Backend Tomcat: <%= java.net.InetAddress.getLocalHost().getHostAddress() %></h1>
<h1>Session ID: <%= request.getSession().getId() %></h1>
<%
    Long counter = (Long) request.getSession().getAttribute("counter");
    if (counter == null) {
        counter = 0L;
    }
    counter++;
    request.getSession().setAttribute("counter", counter);
%>
<h2>current counter value is ${counter}</h2>
</body>
</html>
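One caveat worth remembering: the cluster can only replicate session attributes that implement java.io.Serializable. The Long counter above already does; as a minimal sketch, a custom session attribute would have to look something like this (VisitCounter is a made-up example class):

// hypothetical session attribute; Serializable, so the cluster can replicate it
public class VisitCounter implements java.io.Serializable {
    private long value;
    public void increment() { value++; }
    public long getValue() { return value; }
}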
Apache HTTP Server as a proxy and a load balancer
I installed a brand-new copy of Apache HTTP 2.2.11.

I read the documentation about mod_proxy, mod_proxy_ajp, and mod_proxy_balancer:

• http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
• http://httpd.apache.org/docs/2.2/mod/mod_proxy_ajp.html
• http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html
I opened the conf/httpd.conf file and uncommented these modules:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
Then, at the end of the includes list, I added:
# Proxy
Include conf/extra/httpd-proxy.conf
Finally, I created the conf/extra/httpd-proxy.conf file and wrote:
<Proxy balancer://wwwcluster>
    BalancerMember ajp://172.16.253.88:8009 route=www1
    BalancerMember ajp://172.16.253.7:8009 route=www2
</Proxy>

ProxyPass /test-app balancer://wwwcluster/test-app stickysession=JSESSIONID

<Location /balancer-manager>
    SetHandler balancer-manager
    Order Deny,Allow
    Deny from all
    Allow from localhost
</Location>
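Note that the route values must match the jvmRoute names that will be configured on the Tomcat side below. If you want to make the balancing policy explicit, here is a sketch with the extra parameters mod_proxy_balancer 2.2 accepts; the values shown are the defaults, so this is equivalent to the config above:

<Proxy balancer://wwwcluster>
    BalancerMember ajp://172.16.253.88:8009 route=www1 retry=60
    BalancerMember ajp://172.16.253.7:8009 route=www2 retry=60
    ProxySet lbmethod=byrequests
</Proxy>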
Apache Tomcat clustering and routing
On both Tomcats, I opened conf/server.xml and, as the first child of the default <Host />
element, pasted the following <Cluster /> definition (channelSendOptions="8" makes the
session replication asynchronous):
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false" notifyListenersOnReplication="true" />
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4" port="45564" frequency="500" dropTime="3000" />
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto" port="4000" autoBind="100" selectorTimeout="5000"
              maxThreads="6" />
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" />
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector" />
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor" />
    <!-- prints stats on message traffic -->
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor" />
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter="" />
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve" />
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener" />
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener" />
</Cluster>
Also, I added a jvmRoute attribute to the <Engine /> element, on each Tomcat respectively:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="www1">
<Engine name="Catalina" defaultHost="localhost" jvmRoute="www2">
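The jvmRoute value gets appended to every session ID that Tomcat generates, and this suffix is what the stickysession=JSESSIONID parameter on the balancer matches against the route of each BalancerMember, for example:

JSESSIONID=8D5B2E961C57B0BB1A9BA93B8A41F61C.www1   ->   routed to the www1 member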
Load balancing and sticky sessions
OK, now that I have shown you how I created my test application and how I configured my
servers, you can follow my steps to see how clustering, load balancing, failover, and session
replication work.

Start Apache HTTP Server.

Go to:
http://localhost/test-app
There will be an error:
503 Service Temporarily Unavailable

The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
When you access this URL:
http://localhost/balancer-manager
you will see that both Tomcats have their status set to Err. They are simply stopped and cannot
be connected to, hence the Err status.

One thing was weird: the routes were not set as defined in conf/extra/httpd-proxy.conf; they
were blank.

Thankfully, they can be changed from within the balancer-manager: just click on a cluster
member and set the proper routes, www1 and www2, exactly as in the proxy config file.

First, start the 172.16.253.88 Tomcat:
INFO: Cluster is about to start
2009-07-02 10:36:34 org.apache.catalina.tribes.transport.ReceiverBase bind
INFO: Receiver Server Socket bound to:/172.16.253.88:4000
2009-07-02 10:36:34 org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
INFO: Setting cluster mcast soTimeout to 500
2009-07-02 10:36:34 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:4
2009-07-02 10:36:35 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Done sleeping, membership established, start level:4
2009-07-02 10:36:35 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:8
2009-07-02 10:36:36 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Done sleeping, membership established, start level:8
2009-07-02 10:36:36 org.apache.catalina.ha.session.JvmRouteBinderValve start
INFO: JvmRouteBinderValve started
2009-07-02 10:36:36 org.apache.catalina.ha.session.DeltaManager start
INFO: Register manager /test-app to cluster element Host with name localhost
2009-07-02 10:36:36 org.apache.catalina.ha.session.DeltaManager start
INFO: Starting clustering manager at /test-app
2009-07-02 10:36:36 org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
INFO: Manager [/test-app]: skipping state transfer. No members active in cluster group.
2009-07-02 10:36:36 org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
2009-07-02 10:36:36 org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
2009-07-02 10:36:36 org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/16 config=null
2009-07-02 10:36:36 org.apache.catalina.startup.Catalina start
INFO: Server startup in 2670 ms
Then, start the 172.16.253.7 Tomcat:
INFO: Cluster is about to start
2009-01-02 10:36:59 org.apache.catalina.tribes.transport.ReceiverBase bind
INFO: Receiver Server Socket bound to:/172.16.253.7:4000
2009-01-02 10:36:59 org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
INFO: Setting cluster mcast soTimeout to 500
2009-01-02 10:36:59 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:4
2009-01-02 10:37:00 org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.tribes.membership.MemberImpl[tcp://{-84, 16, -3, 88}:4000,{-84, 16, -3, 88},4000, alive=27016,id={-67 -32 98 38 58 -88 73 -102 -98 -112 29 -21 -20 103 -28 -103 }, payload={}, command={}, domain={}, ]
2009-01-02 10:37:00 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Done sleeping, membership established, start level:4
2009-01-02 10:37:01 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:8
2009-01-02 10:37:02 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Done sleeping, membership established, start level:8
2009-01-02 10:37:02 org.apache.catalina.ha.session.JvmRouteBinderValve start
INFO: JvmRouteBinderValve started
2009-01-02 10:37:02 org.apache.catalina.ha.session.DeltaManager start
INFO: Register manager /test-app to cluster element Host with name localhost
2009-01-02 10:37:02 org.apache.catalina.ha.session.DeltaManager start
INFO: Starting clustering manager at /test-app
2009-01-02 10:37:02 org.apache.catalina.tribes.io.BufferPool getBufferPool
INFO: Created a buffer pool with max size:104857600 bytes of type:org.apache.catalina.tribes.io.BufferPool15Impl
2009-01-02 10:37:02 org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
WARNING: Manager [/test-app], requesting session state from org.apache.catalina.tribes.membership.MemberImpl[tcp://{-84, 16, -3, 88}:4000,{-84, 16, -3, 88},4000, alive=28516,id={-67 -32 98 38 58 -88 73 -102 -98 -112 29 -21 -20 103 -28 -103 }, payload={}, command={}, domain={}, ]. This operation will timeout if no session state has been received within 60 seconds.
2009-01-02 10:37:02 org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor report
INFO: ThroughputInterceptor Report[
Tx Msg:1 messages
Sent:0,00 MB (total)
Sent:0,00 MB (application)
Time:0,02 seconds
Tx Speed:0,03 MB/sec (total)
TxSpeed:0,03 MB/sec (application)
Error Msg:0
Rx Msg:0 messages
Rx Speed:0,00 MB/sec (since 1st msg)
Received:0,00 MB]

2009-01-02 10:37:02 org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
INFO: Manager [/test-app]; session state send at 02.01.09 10:37 received in 110 ms.
2009-01-02 10:37:02 org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
2009-01-02 10:37:02 org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
2009-01-02 10:37:02 org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/15 config=null
2009-01-02 10:37:02 org.apache.catalina.startup.Catalina start
INFO: Server startup in 2583 ms
After the second Tomcat starts, you should see the following in the first (172.16.253.88)
Tomcat's console:
2009-07-02 10:37:01 org.apache.catalina.tribes.io.BufferPool getBufferPool
INFO: Created a buffer pool with max size:104857600 bytes of type:org.apache.catalina.tribes.io.BufferPool15Impl
2009-07-02 10:37:03 org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor report
INFO: ThroughputInterceptor Report[
Tx Msg:2 messages
Sent:0,00 MB (total)
Sent:0,00 MB (application)
Time:0,02 seconds
Tx Speed:0,07 MB/sec (total)
TxSpeed:0,07 MB/sec (application)
Error Msg:0
Rx Msg:2 messages
Rx Speed:0,00 MB/sec (since 1st msg)
Received:0,00 MB]
Open and refresh a few times:
http://localhost/test-app
you will see:
Backend Tomcat: 172.16.253.88
Session ID: 8D5B2E961C57B0BB1A9BA93B8A41F61C.www1
current counter value is 1

Backend Tomcat: 172.16.253.88
Session ID: 8D5B2E961C57B0BB1A9BA93B8A41F61C.www1
current counter value is 2

Backend Tomcat: 172.16.253.88
Session ID: 8D5B2E961C57B0BB1A9BA93B8A41F61C.www1
current counter value is 3
Load balancing and sticky sessions work.
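If you prefer the command line, the same check can be scripted; a minimal sketch with curl (the cookie jar keeps the JSESSIONID between calls, so the balancer should keep hitting the same backend):

# store and resend cookies so the JSESSIONID sticks between requests
curl -s -c /tmp/cookies.txt -b /tmp/cookies.txt http://localhost/test-app
curl -s -c /tmp/cookies.txt -b /tmp/cookies.txt http://localhost/test-app
# the counter should increase while "Backend Tomcat" stays the same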

You can now refresh:

http://localhost/balancer-manager

to see how many requests and kB were sent to each Tomcat.

Failover and session replication


Let's see how failover behaves.

Kill the 172.16.253.88 Tomcat; in the 172.16.253.7 Tomcat console you will see:


2009-01-02 10:39:32 org.apache.catalina.tribes.group.interceptors.TcpFailureDetector memberDisappeared
INFO: Verification complete. Member disappeared[org.apache.catalina.tribes.membership.MemberImpl[tcp://{-84, 16, -3, 88}:4000,{-84, 16, -3, 88},4000, alive=179535,id={-67 -32 98 38 58 -88 73 -102 -98 -112 29 -21 -20 103 -28 -103 }, payload={}, command={66 65 66 89 45 65 76 69 88 ...(9)}, domain={}, ]]
2009-01-02 10:39:32 org.apache.catalina.ha.tcp.SimpleTcpCluster memberDisappeared
INFO: Received member disappeared:org.apache.catalina.tribes.membership.MemberImpl[tcp://{-84, 16, -3, 88}:4000,{-84, 16, -3, 88},4000, alive=179535,id={-67 -32 98 38 58 -88 73 -102 -98 -112 29 -21 -20 103 -28 -103 }, payload={}, command={66 65 66 89 45 65 76 69 88 ...(9)}, domain={}, ]
Refresh:
http://localhost/test-app
you will see:
Backend Tomcat: 172.16.253.7
Session ID: 8D5B2E961C57B0BB1A9BA93B8A41F61C.www2
current counter value is 4

Backend Tomcat: 172.16.253.7
Session ID: 8D5B2E961C57B0BB1A9BA93B8A41F61C.www2
current counter value is 5
All requests are now distributed to the 172.16.253.7 Tomcat.

The session ID is the same as before, but this time www2 is appended at the end; that is the
JvmRouteBinderValve at work, rewriting the route suffix so that subsequent requests stick to
the failover node. The counter's value was replicated.

When you access:

http://localhost/balancer-manager

you will see that the 172.16.253.88 Tomcat's status is Err.

When you start the 172.16.253.88 Tomcat again, in the 172.16.253.7 Tomcat's console you will see:
2009-01-02 10:40:40 org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.tribes.membership.MemberImpl[tcp://{-84, 16, -3, 88}:4000,{-84, 16, -3, 88},4000, alive=1016,id={-23 -9 50 91 -8 -56 75 11 -86 -107 -46 -16 75 99 -21 8 }, payload={}, command={}, domain={}, ]
If you open another browser, the load balancer will probably direct you to the just-recovered
172.16.253.88 Tomcat.

Summary
I know it looks simple (and it really is simple!), but there is so much rubbish and misleading,
outdated information out there to dig through... It always takes some time to find something
useful.

If you have any questions, and I think you will, just shoot, but don't kill :)

Cheers,
Łukasz
