A WSO2 ESB cluster should contain two or more ESB instances that are configured to run within the same domain. To make an instance a member of the cluster, you must configure it to use one of the available membership schemes:
- Well Known Address (WKA) membership scheme
- Multicast membership scheme
In this example, we will use the WKA membership scheme, and the ELB will act as the well-known member of the cluster. It will accept all service requests on behalf of the ESBs and divide the load among the worker nodes in the ESB cluster.
This page describes how to create an ESB cluster with an ELB front end.
Installing the products
Before you begin, download and extract WSO2 ESB and WSO2 ELB to a local directory on each server. For this example, we extracted one copy of the ELB and two copies of the ESB on the server with IP xxx.xxx.xxx.206 (the x's represent your actual IP prefix), and two copies of the ESB on the server with IP xxx.xxx.xxx.132:
Server xxx.xxx.xxx.206:
- 1 ELB instance (Well Known Member)
- 1 ESB instance (worker node)
- 1 ESB instance (dep-sync management / manager node)
Server xxx.xxx.xxx.132:
- 2 ESB instances (worker nodes)
Configuring the load balancer
You configure the ELB with the overall definition of the cluster and how it should distribute the load. You can achieve this by adding a few lines to a configuration file called loadbalancer.conf. You specify the detailed clustering configurations in the axis2.xml file. This section describes how to perform these steps.
Setting up load-balancing configurations
- Open the <ELB_HOME>/repository/conf/loadbalancer.conf file.
- Locate the ESB configuration and edit it as follows:
esb {
    domains {
        wso2.esb.domain {
            hosts           esb.cloud-test.wso2.com;
            sub_domain      worker;
            tenant_range    *;
        }
    }
}
In this file, we specified the domain name (wso2.esb.domain), which is used to identify the cluster. On startup, a node with this domain name will look for a cluster with the same domain name.
The ELB will divide the load among the sub-domains. With this sub-domain concept, we can virtually separate the cluster according to the task that each collection of nodes is intended to perform. We defined a sub-domain called worker.
In the previous diagram, you can see that all service requests are routed to the worker nodes through the ELB, which is the front end to the entire cluster. We used the hosts attribute to configure the publicly accessible host name (esb.cloud-test.wso2.com), which clients can use to send their requests to the cluster. We will map this host name to the ELB server's IP address later.
Finally, the tenant_range attribute is used to handle tenant-aware load balancing, another very powerful feature of the ELB. This attribute allows us to partition tenants into several clusters, so that when there is a large number of tenants to work with, we can instruct each cluster to work only with a particular tenant or a few selected tenants. This approach is also useful if you need a particular cluster to be dedicated to a single special tenant ("Private Jet Mode"). In this example, we are not enabling tenant partitioning, so we have used an asterisk (*) as the value of the tenant_range attribute to represent all possible tenants.
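For illustration only, a partitioned setup could dedicate each cluster domain to a given tenant range. The following loadbalancer.conf sketch is hypothetical (the domain names, host names, and ranges are invented for this example) and is not part of this guide's configuration:
# Hypothetical tenant partitioning (not used in this guide):
esb {
    domains {
        wso2.esb.domain.a {
            hosts           esb-a.cloud-test.wso2.com;
            sub_domain      worker;
            tenant_range    1-100;     # this cluster serves tenants 1-100 only
        }
        wso2.esb.domain.b {
            hosts           esb-b.cloud-test.wso2.com;
            sub_domain      worker;
            tenant_range    101-200;   # a second cluster serves tenants 101-200
        }
    }
}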
In summary, we have configured the load balancer to handle requests sent to esb.cloud-test.wso2.com and to distribute the load among the worker nodes in the worker sub-domain of the wso2.esb.domain cluster. We are now ready to set up the cluster configurations.
Setting up cluster configurations on the ELB
In the previous section, we referenced several properties of the cluster, such as the domain name and sub-domain, but we did not define them there. We now define these properties as we build the cluster.
- Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the Clustering section and configure the properties as follows:
- Enable clustering for this node:
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to the WKA members that we will define later):
<parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join:
<parameter name="domain">wso2.esb.domain</parameter>
- Specify the port used to communicate cluster messages:
<parameter name="localMemberPort">4000</parameter>
Note: This port number is not affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment it. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
- Define the ESB manager node as a well-known member of the cluster by providing its host name and its localMemberPort (you will configure these on the manager node later):
<members>
    <member>
        <hostName>mgr.esb.wso2.com</hostName>
        <port>4001</port>
    </member>
</members>
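Putting these fragments together, the clustering section of the ELB's axis2.xml should now look similar to the following condensed sketch (all other parameters in the section are left at their defaults and are elided here):
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.esb.domain</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <!-- ... other default parameters elided ... -->
    <members>
        <member>
            <hostName>mgr.esb.wso2.com</hostName>
            <port>4001</port>
        </member>
    </members>
</clustering>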
We have now completed the clustering-related configuration for the ELB. In the next section, we will make one last change to the ELB that will increase usability.
Configuring the ELB to listen on default ports
We will now change the ELB configuration so that it listens on the default HTTP and HTTPS ports.
- Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the Transport Receiver section and configure the properties as follows:
- In the <transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener"> transport, enable service requests to be sent to the ELB's default HTTP port instead of having to specify port 8280:
<parameter name="port">80</parameter>
- In the <transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener"> transport, enable service requests to be sent to the ELB's default HTTPS port instead of having to specify port 8243:
<parameter name="port">443</parameter>
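After both edits, the two transport receivers should look similar to this condensed excerpt (the other nested parameters stay as shipped):
<transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
    <parameter name="port">80</parameter>
    <!-- ... other parameters unchanged ... -->
</transportReceiver>
<transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
    <parameter name="port">443</parameter>
    <!-- ... other parameters unchanged ... -->
</transportReceiver>
Note that binding to ports below 1024 requires root privileges, which is why the ELB is later started with sudo.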
In the next section, we will map the host names we specified to real IPs.
Mapping host names to IPs
In the ELB, we have specified two host names: esb.cloud-test.wso2.com for the worker hosts and mgr.esb.wso2.com for the manager node. We will now map them to the actual IPs.
Open the server's /etc/hosts file and add the following lines, where <ELB-IP> and <ESB-Manager-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):
<ELB-IP>         esb.cloud-test.wso2.com
<ESB-Manager-IP> mgr.esb.wso2.com
We have now finished configuring the ELB and are ready to start the ELB server.
Starting the ELB server
Start the ELB server by typing the following command in the terminal:
sudo -E sh <ELB_HOME>/bin/wso2server.sh
If you skipped the step of configuring the ELB to listen on the default ports, you do not need to use the sudo command and can start the ELB with the following command:
sh <ELB_HOME>/bin/wso2server.sh
The ELB should print logs to the server console similar to the following:
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesUtil No members in current cluster
INFO - TribesClusteringAgent Cluster initialization completed.
Now that the ELB is configured and running, the next step is to create a central database for all the nodes to use.
Setting up the central database
Each Carbon-based product uses a database to store information such as user management details and registry data. All nodes in the cluster must use one central database.
- Download and install MySQL server.
- Download the MySQL JDBC driver.
- Define the host name used when configuring permissions for the new database by opening the /etc/hosts file and adding the following line:
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
- Open a terminal/command window and log in to MySQL with the following command:
mysql -u username -p
- When prompted, specify the password, and then create the database with the following command:
mysql> create database carbondb;
- Grant permission to access the created database with the following command:
mysql> grant all on carbondb.* to 'username'@'carbondb.mysql-wso2.com' identified by 'password';
- Unzip the downloaded MySQL driver archive and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) to the <ESB_HOME>/repository/components/lib directory for each worker and manager node.
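Before configuring the nodes, you can verify that the grant works by logging in to the new database from one of the ESB servers. This assumes the mysql client is installed there and that the carbondb.mysql-wso2.com line has been added to that server's /etc/hosts file:
# Run from an ESB node; a successful login opens a prompt on the carbondb database
mysql -h carbondb.mysql-wso2.com -u username -p carbondb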
We have now created a central database called carbondb with host carbondb.mysql-wso2.com, and granted access to user username with password password. The next step is to configure the manager node.
Configuring the manager node
In this section, we will configure data sources to allow the manager node to point to the central database, enable the manager node for clustering, change the port offset, and map the host names to IPs.
Configuring the data source
- Make sure that you have copied the MySQL JDBC driver JAR to the manager node as described in Setting up the central database.
- Open the master-datasources.xml file located in the <ESB_MANAGER_HOME>/repository/conf/datasources/ directory.
- Locate the WSO2_CARBON_DB data source configurations and change them as follows:
- Define the location of the central database:
<url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb</url>
- Give user username access to the database:
<username>username</username>
<password>password</password>
- Specify the driver to use for connecting to the central database (the driver we copied in the previous section):
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
When you are finished, the data source configuration should look like this:
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb</url>
            <username>username</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
In most WSO2 products, only one data source is used. If there is more than one data source, make sure each of them references the central database accordingly. For example, the API Manager deployment setup requires more specific data source configurations, so it is described separately.
Setting up cluster configurations for the manager node
Configuring clustering for the manager node is very similar to the way you configured it for the ELB node, but the localMemberPort is 4001 instead of 4000, and you define the ELB node instead of the ESB manager node as the well-known member.
- Open the <ESB_MANAGER_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the Clustering section and configure the properties as follows:
- Enable clustering for this node:
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to the WKA members that we will define later):
<parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join:
<parameter name="domain">wso2.esb.domain</parameter>
- Specify the port used to communicate cluster messages:
<parameter name="localMemberPort">4001</parameter>
Note: This port number is not affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment it. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
- Define the ELB node as a well-known member of the cluster by providing its host name and its localMemberPort:
<members>
    <member>
        <hostName>elb.wso2.com</hostName>
        <port>4000</port>
    </member>
</members>
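The resulting clustering section on the manager node should therefore resemble this condensed sketch (defaults elided, as before):
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.esb.domain</parameter>
    <parameter name="localMemberPort">4001</parameter>
    <!-- ... other default parameters elided ... -->
    <members>
        <member>
            <hostName>elb.wso2.com</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>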
Adjusting the port offset
Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts.
- Open <ESB_MANAGER_HOME>/repository/conf/carbon.xml.
- Locate the <Ports> tag and change the value of its <Offset> sub-tag to:
<Offset>1</Offset>
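In context, the relevant part of carbon.xml looks similar to the following excerpt; only the <Offset> value changes, and the remaining entries are left as shipped:
<Ports>
    <!-- The offset is added to the default port values used by this node -->
    <Offset>1</Offset>
    ...
</Ports>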
In the next section, we will map the host names we specified to real IPs.
Mapping host names to IPs
In the manager node, we have specified two host names: carbondb.mysql-wso2.com for the MySQL server and elb.wso2.com for the ELB. We will now map them to the actual IPs.
Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP> and <ELB-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP>             elb.wso2.com
Note that if you created the database on the same server as the manager node, you may have already added the first line.
We have now finished configuring the manager node and are ready to start the ESB server.
Starting the ESB server
Start the ESB server by typing the following command in the terminal:
sh <ESB_MANAGER_HOME>/bin/wso2server.sh -Dsetup
The additional -Dsetup argument cleans the configurations, recreates the central database, and creates the required tables in the database.
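If you want to confirm that -Dsetup populated the central database, you can list the created tables from the MySQL prompt (the exact table names vary by product version, so treat this as a quick sanity check):
mysql> use carbondb;
mysql> show tables;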
The ESB should print logs to the server console similar to the following:
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4001
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4001
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - TribesUtil Members of current cluster
INFO - TribesUtil Member1 xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Sending JOIN message to WKA members...
INFO - RpcMembershipRequestHandler Received MEMBER_LIST message from xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Trying to send initialization request to xxx.xxx.xxx.206:4000(wso2.esb.domain)
Additionally, the ELB console should have these new messages to indicate that the manager node joined the cluster:
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.206:4001(wso2.esb.domain) joined group wso2.esb.domain
INFO - TribesMembershipListener New member xxx.xxx.xxx.206:4001(wso2.esb.domain) joined cluster.
INFO - RpcInitializationRequestHandler Received GetConfigurationCommand initialization request message from xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.206, Port: 4001, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:null, Active:true joined application cluster
We have now finished configuring the manager node. Next, we will configure the ESB worker nodes.
Configuring the worker nodes
You configure worker nodes in very much the same way as you configured the manager node. Be sure to follow these steps for each worker node in the cluster.
Configuring the data source
You configure the data source to connect to the central database. Because the data source changes are the same for all worker nodes, you can configure this file on one node and then copy it to the other worker nodes.
- Make sure that you have copied the MySQL JDBC driver JAR to each worker node as described in Setting up the central database.
- Open the master-datasources.xml file located in the <ESB_WORKER_HOME>/repository/conf/datasources/ directory.
- Locate the WSO2_CARBON_DB data source configurations and change them as follows:
- Define the location of the central database:
<url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb</url>
- Give user username access to the database:
<username>username</username>
<password>password</password>
- Specify the driver to use for connecting to the central database:
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
When you are finished, the data source configuration on each worker node should look like this:
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb</url>
            <username>username</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
As mentioned previously, if there is more than one data source, configure them to reference the central database as well.
After you have finished configuring the data source, be sure to copy this configuration to the other worker nodes in the cluster.
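For example, assuming SSH access between the servers, you could push the edited file from Worker1 to another worker node with a command along the following lines (the user name is illustrative):
# Illustrative only: copy the edited data source configuration to another worker node
scp <ESB_WORKER_HOME>/repository/conf/datasources/master-datasources.xml \
    user@xxx.xxx.xxx.206:<ESB_WORKER_HOME>/repository/conf/datasources/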
Setting up cluster configurations for the worker nodes
Configuring clustering for the worker nodes is similar to the way you configured it for the manager node, but the localMemberPort will vary for each worker node, you add the subDomain property, and you add both the ELB and the ESB manager node as well-known members, as described in the following steps.
- Open the <ESB_WORKER_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the Clustering section and configure the properties as follows:
- Enable clustering for this node:
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to the WKA members that we will define later):
<parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join:
<parameter name="domain">wso2.esb.domain</parameter>
- Specify the port used to communicate cluster messages (if this node is on the same server as the ELB, manager node, or another worker node, be sure to set this to a unique value, such as 4000 for Worker1 and 4001 for Worker3, which share the xxx.xxx.xxx.132 server, and 4002 for Worker2, which is on the same server as the ELB and manager node):
<parameter name="localMemberPort">4000</parameter>
Note: This port number is not affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment it. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
- Define the sub-domain as worker by adding the following property under the <parameter name="properties"> element:
<property name="subDomain" value="worker"/>
- Define the ELB and manager nodes as well-known members of the cluster by providing their host names and localMemberPort values:
<members>
    <member>
        <hostName>elb.wso2.com</hostName>
        <port>4000</port>
    </member>
    <member>
        <hostName>mgr.esb.wso2.com</hostName>
        <port>4001</port>
    </member>
</members>
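For Worker1, the assembled clustering section should resemble the following condensed sketch; Worker2 and Worker3 differ only in their localMemberPort values (4002 and 4001, respectively):
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.esb.domain</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <parameter name="properties">
        <property name="subDomain" value="worker"/>
        <!-- ... other properties unchanged ... -->
    </parameter>
    <members>
        <member>
            <hostName>elb.wso2.com</hostName>
            <port>4000</port>
        </member>
        <member>
            <hostName>mgr.esb.wso2.com</hostName>
            <port>4001</port>
        </member>
    </members>
</clustering>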
Adjusting the port offset
Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts.
- Open <ESB_WORKER_HOME>/repository/conf/carbon.xml on each worker node.
- Locate the <Ports> tag and change the value of its <Offset> sub-tag as follows on each worker node:
Worker1: <Offset>0</Offset> - No changes needed, because this will be the first node on this (xxx.xxx.xxx.132) server.
Worker2: <Offset>2</Offset> - Set the offset to 2, because two other Carbon products (the ELB and the ESB manager node) are already running on this (xxx.xxx.xxx.206) server.
Worker3: <Offset>1</Offset> - Set the offset to 1, because Worker1 occupies the default ports on this (xxx.xxx.xxx.132) server.
In the next section, we will map the host names we specified to real IPs.
Mapping host names to IPs
In the worker nodes, we have used three host names: carbondb.mysql-wso2.com for the MySQL server, elb.wso2.com for the ELB, and mgr.esb.wso2.com for the ESB manager node. We will now map them to the actual IPs.
Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP>, <ELB-IP>, and <ESB-Manager-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP>             elb.wso2.com
<ESB-Manager-IP>     mgr.esb.wso2.com
We have now finished configuring the worker nodes and are ready to start them.
Starting the ESB server
Start each ESB worker node by typing the following command in the terminal:
sh <ESB_WORKER_HOME>/bin/wso2server.sh -DworkerNode=true
The additional -DworkerNode=true argument indicates that this is a worker node.
When starting Worker1, it should display logs similar to the following in the console:
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.132:4000
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.132:4000
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - TribesUtil Members of current cluster
INFO - TribesUtil Member1 xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesUtil Member2 xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Sending JOIN message to WKA members...
INFO - RpcMembershipRequestHandler Received MEMBER_LIST message from xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Trying to send initialization request to xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - GetConfigurationResponseCommand Received configuration initialization message
INFO - TribesClusteringAgent Cluster initialization completed.
The ELB console should have these new messages:
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined group wso2.esb.domain
INFO - TribesMembershipListener New member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined cluster.
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.132, Port: 4000, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:worker, Active:true joined application cluster
We have now finished configuring the worker nodes, and the cluster is running! When you terminate one node, all nodes identify that the node has left the cluster. The same applies when a new node joins the cluster. If you want to add another worker node, you can simply copy Worker1 without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to run the new node on a server where another WSO2 product is already running, use a copy of Worker1 and change the port offset accordingly in the carbon.xml file. You may also have to change the localMemberPort in axis2.xml if that product has clustering enabled. Be sure to map all host names to the relevant IP addresses when creating a new node.
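As a sketch, cloning Worker1 onto a fresh server (here the hypothetical xxx.xxx.xxx.184, with illustrative paths and user name) would look something like this:
# Illustrative only: clone Worker1 to a new server and start it as a worker
scp -r wso2esb-worker1/ user@xxx.xxx.xxx.184:/opt/esb-worker4/
ssh user@xxx.xxx.xxx.184
# On the new server, map carbondb.mysql-wso2.com, elb.wso2.com, and
# mgr.esb.wso2.com to their IPs in /etc/hosts, then start the node:
sh /opt/esb-worker4/bin/wso2server.sh -DworkerNode=true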