

A WSO2 ESB cluster should contain two or more ESB instances configured to run within the same domain. To make an instance a member of the cluster, you must configure it to use one of the available membership schemes:

  • Well Known Address (WKA) membership scheme
  • Multicast membership scheme
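With the multicast scheme, nodes discover each other over a multicast group, so no well-known members need to be listed. For reference, a minimal sketch of that alternative in axis2.xml looks like the following; the group address and port shown here are the defaults in the stock file, so treat them as an assumption and verify them against your own copy:

    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
        <parameter name="membershipScheme">multicast</parameter>
        <parameter name="domain">wso2.esb.domain</parameter>
        <!-- Multicast group that cluster messages are sent to -->
        <parameter name="mcastAddress">228.0.0.4</parameter>
        <parameter name="mcastPort">45564</parameter>
    </clustering>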

In this example, we will be using the WKA membership scheme, and the ELB will act as the Well Known Member in the cluster. It will accept all the service requests on behalf of the ESBs and divide the load among worker nodes in the ESB cluster.

Installing the products

Before you begin, download and extract WSO2 ESB and WSO2 ELB to a local directory on each server. For this example, we extracted one copy of the ELB and two copies of the ESB on the server with IP xxx.xxx.xxx.206, and two copies of the ESB on the server with IP xxx.xxx.xxx.132:

Server xxx.xxx.xxx.206:

  • 1 ELB instance (Well Known Member)
  • 1 ESB instance (worker node)
  • 1 ESB instance (Dep-sync management / manager node)

Server xxx.xxx.xxx.132:

  • 2 ESB instances (worker nodes)

Configuring the load balancer

You configure the ELB with the overall definition of the cluster and how it should distribute the load. You can achieve this by adding a few lines to a configuration file called loadbalancer.conf. You specify the detailed clustering configurations in the axis2.xml file. This section describes how to perform these steps.

Setting up load-balancing configurations

  1. Open the <ELB_HOME>/repository/conf/loadbalancer.conf file.
  2. Locate the ESB configuration and edit it as follows:

    esb {
            domains   {
                wso2.esb.domain {
                    hosts esb.cloud-test.wso2.com;
                    sub_domain worker;
                    tenant_range    *;
                }
            }
        }

 

In this file, we specified the domain name (wso2.esb.domain), which is used to identify the cluster. On startup, a node with this domain name will look for a cluster with this same domain name.

The ELB will divide the load among the sub-domains. With this sub-domain concept, we can virtually separate the cluster according to the task that each collection of nodes is intended to perform. Here we defined a sub-domain called worker.

In the previous diagram, you can see that all the service requests are routed to the worker nodes through the ELB, which is the front end to the entire cluster. We used the hosts attribute to configure the publicly accessible host name (esb.cloud-test.wso2.com), which clients use to send their requests to the cluster. This host name must be mapped to the ELB server's IP address via a host entry in the /etc/hosts file.

Finally, the tenant_range attribute handles tenant-aware load balancing, another very powerful feature of the ELB. It allows us to partition tenants across several clusters, so that when there is a large number of tenants, each cluster can be instructed to work only with a particular tenant or a few selected tenants. This approach is also useful if you need a cluster dedicated to a single special tenant ("Private Jet Mode"). In this example, we are not partitioning tenants, so we used an asterisk (*) as the value of the tenant_range attribute to represent all possible tenants.
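Had we wanted tenant partitioning, loadbalancer.conf could instead define two domains, each serving its own tenant range. The following is a hypothetical sketch; the second domain and both host names are invented purely for illustration:

    esb {
            domains   {
                wso2.esb.domain1 {
                    hosts esb1.cloud-test.wso2.com;
                    sub_domain worker;
                    tenant_range 1-100;
                }
                wso2.esb.domain2 {
                    hosts esb2.cloud-test.wso2.com;
                    sub_domain worker;
                    tenant_range 101-200;
                }
            }
        }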

In summary, we have configured the load balancer to handle requests sent to esb.cloud-test.wso2.com and to distribute the load among the worker nodes in the worker sub-domain of the wso2.esb.domain cluster. We are now ready to set up the cluster configurations.

Setting up cluster configurations on the ELB

In loadbalancer.conf, we referred to several properties of the cluster, such as the domain name and sub-domain, but we did not actually define them there. We now define these properties as we build the cluster.

  1. Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and configure the properties as follows:
    • Enable clustering for this node: <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    • Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): <parameter name="membershipScheme">wka</parameter>
    • Specify the name of the cluster this node will join: <parameter name="domain">wso2.esb.domain</parameter>
    • Specify the port used to communicate cluster messages: <parameter name="localMemberPort">4000</parameter>
      Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
    • Define the ESB manager node as a well-known member of the cluster by providing its host name and localMemberPort:

      <members>
          <member>
              <hostName>mgr.esb.wso2.com</hostName>
              <port>4001</port>
          </member>
      </members>
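Putting these pieces together, the clustering section of the ELB's axis2.xml ends up looking roughly like this (a sketch showing only the values we changed; all other parameters in the shipped file stay as they are):

    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
        <parameter name="membershipScheme">wka</parameter>
        <parameter name="domain">wso2.esb.domain</parameter>
        <parameter name="localMemberPort">4000</parameter>
        <members>
            <member>
                <hostName>mgr.esb.wso2.com</hostName>
                <port>4001</port>
            </member>
        </members>
    </clustering>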

 

We have now completed the clustering-related configuration for the ELB. In the next section, we will make one last change to the ELB that will increase usability.

 

Setting up transport configurations on the ELB

We will now change the ELB configuration to listen to the default HTTP and HTTPS ports.

  1. Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Transport Receiver section and configure the properties as follows:
    • In the <transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener"> transport, enable service requests to be sent to the ELB's default HTTP port instead of port 8280: <parameter name="port">80</parameter>
    • In the <transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener"> transport, enable service requests to be sent to the ELB's default HTTPS port instead of port 8243: <parameter name="port">443</parameter>

Note: We have used host names above (esb.cloud-test.wso2.com and mgr.esb.wso2.com). If there is no DNS to resolve them, we have to map them to IP addresses manually. The next section shows how this mapping is done.


Mapping host names to IPs on the ELB

In the ELB, we used two host names: esb.cloud-test.wso2.com, which identifies the worker hosts in loadbalancer.conf, and mgr.esb.wso2.com, which identifies the manager node in axis2.xml. Both have to be mapped to real IP addresses.

Open the /etc/hosts file and add the following lines:

    <ELB-IP> esb.cloud-test.wso2.com
    <ESB-Manager-IP> mgr.esb.wso2.com

In this example, the entries are:

    xxx.xxx.xxx.206 esb.cloud-test.wso2.com
    xxx.xxx.xxx.206 mgr.esb.wso2.com


 

The ELB configuration is now complete. Start the ELB server by typing the following command in a terminal:

    sudo -E sh <ELB_HOME>/bin/wso2server.sh

If you have not configured the ELB to listen on the default HTTP and HTTPS ports (see Setting up transport configurations on the ELB above), there is no need for sudo, and you can start the ELB with:

    sh <ELB_HOME>/bin/wso2server.sh


 

The server console should then print logs similar to the following:

INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesUtil No members in current cluster
INFO - TribesClusteringAgent Cluster initialization completed.


 

Now let's move on to configuring the ESB nodes themselves. We need to enable the clustering capabilities of the manager and worker nodes and point them at the well-known member of the cluster (the ELB, in this case). Before we do that, however, we have to set up a central database.

Setting up the central database

Each Carbon-based product uses a database to store user management details, registry data, and so on. All nodes in the cluster must share one central database.


 

  1. Download and install MySQL Server.
  2. Download the MySQL JDBC driver.
  3. We need a host name when configuring permissions for the new database, so open the /etc/hosts file and add the following line:

    <MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com

  4. Create the database. Open a terminal and log in to MySQL with the following command:

    mysql -u username -p

When prompted, specify the password. Then create a database with the following command:

    mysql> create database carbondb;

Grant permission to access the created database with:

    mysql> grant all on carbondb.* TO username@carbondb.mysql-wso2.com identified by "password";
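Before moving on, you can confirm that the grant works by logging in to the new database from one of the ESB hosts (a quick sanity check, assuming the MySQL client is installed there):

    mysql -h carbondb.mysql-wso2.com -u username -p carbondb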


 

  5. Unzip the downloaded MySQL driver archive and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <CARBON_HOME>/repository/components/lib directory of every worker and manager node.
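For example, keeping the x.x.xx placeholder for whatever driver version you downloaded (the archive name may differ slightly depending on the download):

    unzip mysql-connector-java-x.x.xx.zip
    cp mysql-connector-java-x.x.xx/mysql-connector-java-x.x.xx-bin.jar <CARBON_HOME>/repository/components/lib/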


 

In summary, we now have a central carbondb database on the host carbondb.mysql-wso2.com, with access granted to the user username with the password password.

Configuring the manager node

Configuring data sources on the manager node

We have to point the manager node to the central database we created in the previous section.


 

  1. Copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <ESB_MANAGER_HOME>/repository/components/lib directory, as mentioned in step 5 of the previous section.
  2. Open the master-datasources.xml file in the <ESB_MANAGER_HOME>/repository/conf/datasources/ directory, locate the WSO2_CARBON_DB data source configuration, and change it as follows:
    • Set the location of the central database: <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
    • Set the user name used to access the database: <username>username</username>
    • Set the password for the above user: <password>password</password>
    • Set the driver used to connect to the central database (the MySQL JDBC driver we copied earlier): <driverClassName>com.mysql.jdbc.Driver</driverClassName>


 

No other configuration changes are needed, so the final outcome looks like this:

    <datasource>
        <name>WSO2_CARBON_DB</name>
        <description>The datasource used for registry and user manager</description>
        <jndiConfig>
            <name>jdbc/WSO2CarbonDB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
                <username>username</username>
                <password>password</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>


 

Note that most of our products use only one data source. If a product uses more than one, the additional data sources should also point to the central database accordingly. For example, an API Manager deployment requires a few more specific data source configurations, so it is described in a separate section below.

The data source configuration for the ESB manager node is now complete.


 

Enabling clustering for the manager node

We already saw how clustering is enabled when we configured the ELB, so let's apply the same steps to the manager node.

  1. Open the <ESB_MANAGER_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and configure the properties as follows:
    • Enable clustering for this node: <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    • Set the membership scheme to wka (this node will send cluster initiation messages to the WKA member defined below): <parameter name="membershipScheme">wka</parameter>
    • Specify the name of the cluster this node will join: <parameter name="domain">wso2.esb.domain</parameter>
    • Specify the port used to communicate cluster messages: <parameter name="localMemberPort">4001</parameter>
      Note: This port number is not affected by the port offset in carbon.xml. We set it to 4001 because 4000 is already used by the ELB on this machine (xxx.xxx.xxx.206).
    • Define the ELB as a well-known member of the cluster by providing its host name and localMemberPort:

      <members>
          <member>
              <hostName>elb.wso2.com</hostName>
              <port>4000</port>
          </member>
      </members>
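Assembled, the clustering section of the manager's axis2.xml looks roughly like this (a sketch showing only the values we touched; everything else in the shipped file stays as it is):

    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
        <parameter name="membershipScheme">wka</parameter>
        <parameter name="domain">wso2.esb.domain</parameter>
        <parameter name="localMemberPort">4001</parameter>
        <members>
            <member>
                <hostName>elb.wso2.com</hostName>
                <port>4000</port>
            </member>
        </members>
    </clustering>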


 
 

Changing carbon.xml on the manager node

Since we are running several Carbon-based products on the same machine, we have to change the port offset to avoid conflicts among the ports they use.

  1. Open the carbon.xml file in the <ESB_MANAGER_HOME>/repository/conf/ directory.
  2. Locate the <Ports> tag and change the value of its <Offset> sub-tag to <Offset>1</Offset>.
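The relevant fragment of carbon.xml then looks like this (the other child elements under <Ports> in the shipped file are left at their defaults). The offset shifts every default port up by one, so, for example, the default management HTTPS port 9443 becomes 9444:

    <Ports>
        <!-- All default server ports are shifted up by this offset -->
        <Offset>1</Offset>
    </Ports>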


 

The clustering configuration on the manager node is now complete. As before, to finish off, we have to map any host names we used to IP addresses.


 

Mapping host names to IPs on the manager node

On the manager node, we used two host names: one specifies the MySQL server in master-datasources.xml, and the other specifies the ELB when defining the WKA members in axis2.xml.

Open /etc/hosts and add the following lines. Note that if you created the database on this same machine, the first line may already be there.

    <MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
    <ELB-IP> elb.wso2.com

In this example, the entries are:

    xxx.xxx.xxx.206 carbondb.mysql-wso2.com
    xxx.xxx.xxx.206 elb.wso2.com


 
 

The ESB manager node configuration is now complete. Start the ESB with:

    sh <ESB_MANAGER_HOME>/bin/wso2server.sh -Dsetup

The additional -Dsetup argument cleans the configuration, recreates the database tables, and re-populates the configuration. We need it here because our central database is empty and the required tables have to be created in it.
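On subsequent restarts the tables already exist, so the flag can be dropped and the server started normally:

    sh <ESB_MANAGER_HOME>/bin/wso2server.sh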


 

When the server starts, the manager console should display messages like the following:

INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4001
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4001
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - TribesUtil Members of current cluster
INFO - TribesUtil Member1 xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Sending JOIN message to WKA members...
INFO - RpcMembershipRequestHandler Received MEMBER_LIST message from xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Trying to send initialization request to xxx.xxx.xxx.206:4000(wso2.esb.domain)


 

The ELB console should have these new messages, indicating that the manager node joined the cluster:

INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.206:4001(wso2.esb.domain) joined group wso2.esb.domain
INFO - TribesMembershipListener New member xxx.xxx.xxx.206:4001(wso2.esb.domain) joined cluster.
INFO - RpcInitializationRequestHandler Received GetConfigurationCommand initialization request message from xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.206, Port: 4001, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:null, Active:true joined application cluster


 

We have now completed configuring the manager node. Let's move on to configuring the ESB worker nodes of our setup.

Configuring the worker nodes

Here we configure the final set of settings needed for clustering. These settings must be applied to all worker nodes in the cluster. As with the manager node, we start by configuring the data sources.

Configuring data sources on the worker nodes

We have to point the worker nodes to the central database we created earlier. The data source changes are identical for all three worker nodes, so you can configure one node and copy the result to the other two.

  1. Copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <ESB_WORKER_HOME>/repository/components/lib directory, as mentioned in step 5 of the central database section. Since our setup has three worker nodes, the driver has to be copied to all three.
  2. Open the master-datasources.xml file in the <ESB_WORKER_HOME>/repository/conf/datasources/ directory of each worker node, locate the WSO2_CARBON_DB data source configuration, and change it as follows:


 

    • Set the location of the central database: <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
    • Set the user name used to access the database: <username>username</username>
    • Set the password for the above user: <password>password</password>
    • Set the driver used to connect to the central database (the MySQL JDBC driver we copied earlier): <driverClassName>com.mysql.jdbc.Driver</driverClassName>


 

No other configuration changes are needed, so the final outcome looks like this:

    <datasource>
        <name>WSO2_CARBON_DB</name>
        <description>The datasource used for registry and user manager</description>
        <jndiConfig>
            <name>jdbc/WSO2CarbonDB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
                <username>username</username>
                <password>password</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>


 

This is the same as the data source configuration of the manager node.

As noted before, most of our products use only one data source, but if there are more, they should also refer to the central database accordingly. For example, an API Manager deployment requires a few more specific data source configurations, so it is described in a separate section below.

The data source configuration for the worker nodes is now complete.


 

Enabling clustering for the worker nodes

As mentioned earlier, all three worker nodes should be configured as follows. Note that the configuration is almost identical across the worker nodes; only localMemberPort differs.

  1. Open the <ESB_WORKER_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and configure the properties as follows:
    • Enable clustering for this node: <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    • Set the membership scheme to wka (this node will send cluster initiation messages to the WKA members defined below): <parameter name="membershipScheme">wka</parameter>
    • Specify the name of the cluster this node will join: <parameter name="domain">wso2.esb.domain</parameter>
    • Specify the port used to communicate cluster messages: <parameter name="localMemberPort">4000</parameter>
      This value changes from node to node. Worker1 and Worker3 use 4000 and 4001 respectively, as they run on a separate machine (xxx.xxx.xxx.132) where those ports are free. Worker2 runs on the same machine (xxx.xxx.xxx.206) as the ELB and the ESB manager node, so it needs port 4002. As before, this port number is not affected by the port offset in carbon.xml.
    • Define the sub-domain of this node as worker by adding the following property under <parameter name="properties">: <property name="subDomain" value="worker"/>
    • Define the ELB and the ESB manager node as well-known members of the cluster by providing their host names and localMemberPorts (see the assembled sketch after this list):

      <members>
          <member>
              <hostName>elb.wso2.com</hostName>
              <port>4000</port>
          </member>
          <member>
              <hostName>mgr.esb.wso2.com</hostName>
              <port>4001</port>
          </member>
      </members>
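Assembled, the clustering section for Worker1 looks roughly like this (a sketch showing only the values discussed above; any other properties under the properties parameter in the shipped file are left as they are):

    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
        <parameter name="membershipScheme">wka</parameter>
        <parameter name="domain">wso2.esb.domain</parameter>
        <!-- 4002 for Worker2, 4001 for Worker3 -->
        <parameter name="localMemberPort">4000</parameter>
        <parameter name="properties">
            <property name="subDomain" value="worker"/>
        </parameter>
        <members>
            <member>
                <hostName>elb.wso2.com</hostName>
                <port>4000</port>
            </member>
            <member>
                <hostName>mgr.esb.wso2.com</hostName>
                <port>4001</port>
            </member>
        </members>
    </clustering>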


 

Changing carbon.xml on the worker nodes

Since some worker nodes share a machine with other Carbon-based products, we have to change their port offsets to avoid conflicts among the ports they use.

  1. Open the carbon.xml file in the <ESB_WORKER_HOME>/repository/conf/ directory.
  2. Locate the <Ports> tag and change the value of its <Offset> sub-tag as follows:
    • Worker1: <Offset>0</Offset>. No change is needed, as this is the first Carbon product on its machine (xxx.xxx.xxx.132).
    • Worker2: <Offset>2</Offset>. An offset of 2 is needed because two other Carbon products (the ELB and the ESB manager node) are already running on this machine (xxx.xxx.xxx.206).
    • Worker3: <Offset>1</Offset>. An offset of 1 is needed because Worker1 occupies the default ports on this machine (xxx.xxx.xxx.132).

The clustering configuration on the worker nodes is now complete. As before, to finish off, we have to map host names to IP addresses.


 
 

Mapping host names to IPs on the worker nodes

On the worker nodes, we used three host names: one specifies the MySQL server in master-datasources.xml, and the other two specify the ELB and the manager node when defining the WKA members in axis2.xml.

Open /etc/hosts and add the following lines if they are not already there:

    <MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
    <ELB-IP> elb.wso2.com
    <ESB-Manager-IP> mgr.esb.wso2.com

In this example, the entries are:

    xxx.xxx.xxx.206 carbondb.mysql-wso2.com
    xxx.xxx.xxx.206 elb.wso2.com
    xxx.xxx.xxx.206 mgr.esb.wso2.com


 
 

The configuration of the ESB worker nodes is now complete as well. Start the worker nodes with:

    sh <ESB_WORKER_HOME>/bin/wso2server.sh -DworkerNode=true

When Worker1 starts, it should display logs similar to the following in its console:

INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.132:4000
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.132:4000
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - TribesUtil Members of current cluster
INFO - TribesUtil Member1 xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesUtil Member2 xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Sending JOIN message to WKA members...
INFO - RpcMembershipRequestHandler Received MEMBER_LIST message from xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Trying to send initialization request to xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - GetConfigurationResponseCommand Received configuration initialization message
INFO - TribesClusteringAgent Cluster initialization completed.


 

The ELB console should show these new messages:

INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined group wso2.esb.domain
INFO - TribesMembershipListener New member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined cluster.
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.132, Port: 4000, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:worker, Active:true joined application cluster


 

You now have the complete cluster up and running. When you terminate one node, all other nodes identify that it has left the cluster; the same applies when a new node joins.

If you want to add another worker node, copy Worker1. If the copy runs on a new machine (say, xxx.xxx.xxx.184), you can use it without any changes.

If you intend to run it on a machine where another WSO2 product is already running, change the port offset accordingly in its carbon.xml file. You may also have to change localMemberPort in axis2.xml if the other product has clustering enabled.

In either case, make sure you have mapped all the host names to the relevant IP addresses in the /etc/hosts file when creating a new node.

 
