This topic provides instructions on how to cluster WSO2 Enterprise Service Bus (ESB) using WSO2 Elastic Load Balancer (ELB), but you can use a third-party load balancer in its place (for configuration details, see your load balancer's documentation).
For details on further configuration required for the WSO2 product you are clustering, see the links in the table of contents.
Installing the products
Before you begin, download and extract WSO2 ESB and WSO2 ELB to a local directory on each server. For this example, we extracted one copy of the ELB and two copies of the ESB on the server with IP xxx.xxx.xxx.206 (the x's represent your actual IP prefix), and two copies of the ESB on the server with IP xxx.xxx.xxx.132:
Server xxx.xxx.xxx.206:
- 1 ELB instance (Well Known Member)
- 1 ESB instance (worker node)
- 1 ESB instance (Dep-sync management / manager node)
Server xxx.xxx.xxx.132:
- 2 ESB instances (worker nodes)
Configuring the load balancer
In this scenario, we are using WSO2 ELB as the load balancer (you can also use a third-party load balancer; for configuration details, see your load balancer's documentation). You configure the ELB with the overall definition of the cluster and how it should distribute the load. You do this by adding a few lines to a configuration file called loadbalancer.conf. You specify the detailed clustering configuration in the axis2.xml file. This section describes how to perform these steps.
The system should have at least two Well-Known Address (WKA) members in order to work correctly and to recover if a single WKA member fails. A WKA member can be another ELB, a manager node, or a worker node.
Refer to Configuring the Load Balancer for instructions on how to set up the WSO2 ELB in your cluster.
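For orientation, the cluster definition used throughout this guide corresponds to a service block in the ELB's loadbalancer.conf file along the following lines. This is a minimal sketch based on the domain, sub-domains, host names, and group_mgt_port used later in this guide; follow Configuring the Load Balancer for the full, authoritative configuration.
esb {
    domains {
        wso2.esb.domain {
            tenant_range *;
            group_mgt_port 4500;
            mgt {
                hosts mgt.esb.wso2.com;
            }
            worker {
                hosts esb.wso2.com;
            }
        }
    }
}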
Setting up the database
Each Carbon-based product uses a database to store information such as user management details and registry data. Set up a central database that the nodes in your cluster will share.
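As an illustration only, a central MySQL database for the cluster could be created as follows. The database name, user, and password shown here are placeholders; use values appropriate for your environment.
mysql -u root -p -e "CREATE DATABASE carbondb;
  GRANT ALL PRIVILEGES ON carbondb.* TO 'carbonuser'@'%' IDENTIFIED BY 'carbonpassword';
  FLUSH PRIVILEGES;"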
The next step is to configure the manager node.
Configuring the manager node
In this section, we will configure data sources to allow the manager node to point to the central database, enable the manager node for clustering, change the port offset, and map the host names to IPs.
Configuring the data source
You configure the data sources to allow the manager node to point to the central database. Make sure that you copy the database driver JAR to the manager node and follow the steps described in Setting up the database.
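For example, a data source entry in <ESB_MANAGER_HOME>/repository/conf/datasources/master-datasources.xml that points to a central MySQL database might look like the following sketch. The database name, credentials, and pool settings are placeholders; adjust them to match your environment and the database you created above.
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>Shared registry and user management database</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb</url>
            <username>carbonuser</username>
            <password>carbonpassword</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>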
Setting up cluster configurations for the manager node
Configuring clustering for the manager node is similar to the way you configured it for the ELB node, but the localMemberPort is 4001 instead of 4000, and you define the ELB node instead of the ESB manager node as the well-known member.
- Open the <ESB_MANAGER_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
- Enable clustering for this node:
  <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later):
  <parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join (this is the domain defined in the loadbalancer.conf file on the ELB):
<parameter name="domain">wso2.esb.domain</parameter>
- Specify the host used to communicate cluster messages:
<parameter name="localMemberHost">xxx.xxx.xxx.206</parameter>
- Specify the port used to communicate cluster messages:
<parameter name="localMemberPort">4001</parameter>
Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
The receiver's http/https port values are specified without the portOffset addition; they are auto-incremented by portOffset. The WSDLEPRPrefix parameter should point to the worker nodes' host name (esb.wso2.com) and the ELB's http (8280)/https (8243) transport ports.
- Ensure that you set the value of the subDomain property to mgt to specify that this is the manager node, which ensures that traffic for the manager node is routed to this member:
  <property name="subDomain" value="mgt"/>
- Edit the <members> element so that it looks as follows:
  <members>
      <member>
          <hostName>xxx.xxx.xxx.206</hostName>
          <port>4500</port>
      </member>
  </members>
  The IP address mentioned in hostName represents the IP of the ELB.
- Locate the port mapping section and configure the properties as follows:
  <property name="port.mapping.80" value="9764"/>
  <property name="port.mapping.443" value="9444"/>
  This configuration will change as follows if you did not configure the ELB to listen on the default ports:
  <property name="port.mapping.8280" value="9764"/>
  <property name="port.mapping.8243" value="9444"/>
These values should be incremented based on the port offset. In this example, they are incremented by 1, because the port offset of the manager node is 1.
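To see where these values come from: each port.mapping target is the Carbon servlet transport port (9763 for HTTP and 9443 for HTTPS by default, defined in catalina-server.xml) plus the node's port offset:
9763 + 1 (offset) = 9764  ->  <property name="port.mapping.80" value="9764"/>
9443 + 1 (offset) = 9444  ->  <property name="port.mapping.443" value="9444"/>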
In a dynamically clustered setup where you front a WSO2 Carbon instance with WSO2 ELB, it is the Carbon server's responsibility to send its information to the ELB. You can think of this as a member object being passed from the Carbon server instance to the ELB. In the Carbon server's clustering section, under properties, you can define any member property. This lets the ELB know information beyond the basics, which typically include the host name, HTTP port, HTTPS port, and so on.
WSO2 ESB, WSO2 API Manager, and similar products are special with regard to ports because they usually have two HTTP ports (compared to one HTTP port in products like WSO2 AS). We therefore have to pass this additional information to the ELB, and the easiest way to do so is by setting a member property; here we use the port.mapping property. To front these servers, the ELB also needs two HTTP ports exposed to the outside. This raises a deployment decision: which HTTP port of the ELB should map to which HTTP port of the server (the servlet HTTP port or the NHTTP HTTP port)?

With that in mind, consider only the HTTP scenario. Say your ESB instance uses 8280 as the NHTTP transport port (axis2.xml) and 9763 as the servlet transport port (catalina-server.xml), and the ELB has two HTTP ports, 8280 and 8290. Imagine a member object; its HTTP port would be 8280 (usually the port defined in axis2.xml ends up here). But since the ELB has two ports, there is no way to map the ports correctly by specifying only the member's HTTP port. This is where the port.mapping property comes in. You have to think of this property from the perspective of the ELB.
Let's assume we define the port.mapping.8290 property shown above with the value 9764. This means that if a request comes to the ELB on its 8290 port (again, thinking from the ELB's perspective), the ELB forwards that request to port 9764 of the member. Having only this property is enough; we do not need the following property:
<property name="port.mapping.8280" value="8280"></property>
This is because port.mapping properties take precedence over the default ports. When a request comes to the ELB, the ELB first checks whether the port on which it received the request is specified as a port.mapping property. If it is, the ELB takes the target port from that property; if not, it sends the request to the member's default HTTP port. Hence, if a request is received on the ELB's 8280 port, it will automatically be redirected to the member's 8280 port (since that is the member's HTTP port).
Similarly, we should define a mapping for the HTTPS case (for example, mapping the ELB's HTTPS transport port 8243 to the member's servlet HTTPS port).
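Putting both cases together, a minimal sketch of the member properties for this scenario (assuming ELB HTTP port 8290, ELB HTTPS port 8243, and a manager node with port offset 1, so servlet ports 9764/9444) would be:
<property name="port.mapping.8290" value="9764"/>
<property name="port.mapping.8243" value="9444"/>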
Configuring the port offset and host name
Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts. Additionally, we will add the cluster host name so that any requests sent to the manager host are redirected to the cluster, where the ELB will pick them up and manage them.
- Open <ESB_MANAGER_HOME>/repository/conf/carbon.xml.
- Locate the <Ports> tag and change the value of its <Offset> sub-tag to:
  <Offset>1</Offset>
- Locate the <HostName> tag and add the cluster host name:
  <HostName>esb.wso2.com</HostName>
- Locate the <MgtHostName> tag and uncomment it. Make sure that the management host name is defined as follows:
  <MgtHostName>mgt.esb.wso2.com</MgtHostName>
In the next section, we will map the host names we specified to real IPs.
Mapping host names to IPs
In the manager node, we have specified two host names: carbondb.mysql-wso2.com for the MySQL server and esb.wso2.com for the cluster. We will now map them to the actual IPs. Note that if you created the database on the same server as the manager node, you will have already added the first line, and if you created it on the same server as the ELB, you will have already added the second line.
Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP> and <ELB-IP> are the actual IP addresses (in this example, both are xxx.xxx.xxx.206):
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP> esb.wso2.com
In this example, it would look like this:
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.206 esb.wso2.com
We have now finished configuring the manager node and are ready to start the ESB server.
Starting the ESB server
Start the ESB server by typing the following command in the terminal:
sh <ESB_MANAGER_HOME>/bin/wso2server.sh -Dsetup
The additional -Dsetup argument will clean the configuration, recreate the central database, and create the required tables in the database.
The ESB should print logs to the server console indicating that the cluster initialization is complete.
Next, we will configure the ESB worker nodes.
Configuring the worker nodes
You configure worker nodes in very much the same way as you configured the manager node. Be sure to follow these steps for each worker node in the cluster.
Configuring the data source
You configure the data source to connect to the central database. If there are multiple data sources, configure them to reference the central database as well. Since the data source changes are the same for all worker nodes, you can configure this file on one node and then copy it to the other worker nodes. Make sure you copy the database driver JAR to each worker node and follow the steps described in Setting up the database.
After you have finished configuring the data source, be sure to copy this configuration to the other worker nodes in the cluster.
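For example, assuming the worker hosts are reachable over SSH, you could copy the configured data source file and the MySQL driver JAR to each worker node along these lines. The user name, driver file name, and paths are illustrative; substitute your own values.
scp <ESB_WORKER_HOME>/repository/conf/datasources/master-datasources.xml user@xxx.xxx.xxx.132:<ESB_WORKER_HOME>/repository/conf/datasources/
scp mysql-connector-java-x.x.xx.jar user@xxx.xxx.xxx.132:<ESB_WORKER_HOME>/repository/components/lib/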
Setting up cluster configurations for the worker nodes
Configuring clustering for the worker nodes is similar to the way you configured it for the manager node, but the localMemberPort will vary for each worker node, you add the subDomain property, and you add the ELB and ESB manager node to the well-known members, as described in the following steps.
- Open the <ESB_WORKER_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
- Enable clustering for this node:
  <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later):
  <parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join:
  <parameter name="domain">wso2.esb.domain</parameter>
- Specify the host used to communicate cluster messages:
  <parameter name="localMemberHost">xxx.xxx.xxx.206</parameter>
- Specify the port used to communicate cluster messages. If this node is on the same server as the ELB, the manager node, or another worker node, be sure to set this to a unique value, such as 4002 for worker node 1 (which is on the same server as the ELB and manager node) and 4000 and 4001 for worker nodes 2 and 3 (which share the xxx.xxx.xxx.132 server):
  <parameter name="localMemberPort">4002</parameter>
  Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
- Add a new property "subDomain" and set it to "worker" to denote that this node belongs to the worker subdomain of the cluster, as defined in loadbalancer.conf:
  <parameter name="properties">
      <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
      <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
      <property name="subDomain" value="worker"/>
      <property name="port.mapping.8290" value="9763"/>
  </parameter>
- Define the ELB and manager nodes as well-known members of the cluster by providing their host name and localMemberPort values. The manager node is defined here because it is required for the Deployment Synchronizer to function.
  <members>
      <member>
          <hostName>xxx.xxx.xxx.206</hostName>
          <port>4500</port>
      </member>
      <member>
          <hostName>xxx.xxx.xxx.206</hostName>
          <port>4001</port>
      </member>
  </members>
  The member on port 4500 is the ELB, and the member on port 4001 is the manager node. 4500 is the value of the group_mgt_port we specify in the loadbalancer.conf file of the ELB, and 4001 is the localMemberPort specified in the manager node configuration.
Adjusting the port offset
Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts.
- Open <ESB_WORKER_HOME>/repository/conf/carbon.xml on each worker node.
- Locate the <Ports> tag and change the value of its <Offset> sub-tag as follows on each worker node:
- Worker1: <Offset>2</Offset> - Set the offset to 2, because there are already two more Carbon products (the ELB and the ESB manager node) running on this (xxx.xxx.xxx.206) server.
- Worker2: <Offset>0</Offset> - No changes needed, because this will be the first node on this (xxx.xxx.xxx.132) server.
- Worker3: <Offset>1</Offset> - Set the offset to 1, because Worker2 occupies the default ports on this (xxx.xxx.xxx.132) server.
In the next section, we will map the host names we specified to real IPs.
Mapping host names to IPs
In the worker nodes, we have used three host names: carbondb.mysql-wso2.com for the MySQL server, elb.wso2.com for the ELB, and mgt.esb.wso2.com for the ESB manager node. We will now map them to the actual IPs.
Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP>, <ELB-IP>, and <ESB-Manager-IP> are the actual IP addresses (in this example, all are xxx.xxx.xxx.206):
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP> elb.wso2.com
<ESB-Manager-IP> mgt.esb.wso2.com
In this example, it would look like this:
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.206 elb.wso2.com
xxx.xxx.xxx.206 mgt.esb.wso2.com
We have now finished configuring the worker nodes and are ready to start them.
If you want to remove all UI components from the worker nodes, run the ant createWorker task before you start the worker nodes. Note that this removes the management console capability from the worker nodes.
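For example, assuming Apache Ant is installed on the server, the task is typically run from the worker node's bin directory:
cd <ESB_WORKER_HOME>/bin
ant createWorker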
Starting the ESB server
Start the ESB server by typing the following command in the terminal:
sh <ESB_WORKER_HOME>/bin/wso2server.sh -DworkerNode=true
The additional -DworkerNode=true argument indicates that this is a worker node.
When you start a worker node, its console should display logs indicating that the cluster initialization is complete. For example, when the worker node with localMemberPort 4000 on server xxx.xxx.xxx.132 joins, the ELB console should have these new messages:
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined group wso2.esb.domain
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.132, Port: 4000, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:worker, Active:true joined application cluster
The manager node console should have these new messages:
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - RpcInitializationRequestHandler Received GetConfigurationCommand initialization request message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
If you have similar messages in your consoles, you have finished configuring the worker nodes, and the cluster is running. When you terminate one node, all other nodes identify that the node has left the cluster; the same applies when a new node joins the cluster. If you want to add another worker node, you can simply copy Worker1 without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to run the new node on a server where another WSO2 product is already running, use a copy of Worker1 and change the port offset accordingly in the carbon.xml file. You may also have to change the localMemberPort in axis2.xml if that product has clustering enabled. Be sure to map all host names to the relevant IP addresses when creating a new node.