...

In this example, we will be using the WKA membership scheme, and the ELB will act as the well-known member in the cluster. It will accept all the service requests on behalf of the ESBs and divide the load among worker nodes in the ESB cluster.

This page describes how to create an ESB cluster with an ELB front end in the following sections:

Table of Contents

Installing the products

Before you begin, download and extract WSO2 ESB and WSO2 ELB to a local directory on the server. For this example, we have extracted one copy of the ELB and two copies of the ESB on the server with IP xxx.xxx.xxx.206 (the x's represent your actual IP prefix), and we extracted two copies of the ESB on the server with the IP xxx.xxx.xxx.132:

...

    • Enable clustering for this node: <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    • Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): <parameter name="membershipScheme">wka</parameter>
    • Specify the name of the cluster this node will join: <parameter name="domain">wso2.esb.domain</parameter>
    • Specify the port used to communicate cluster messages: <parameter name="localMemberPort">4000</parameter>
      Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
    • Define the ESB manager node as a well-known member of the cluster by providing its host name and its localMemberPort (you will configure these on the manager node later):

      Code Block
      languagehtml/xml
      <members>
          <member>
              <hostName>mgr.esb.wso2.com</hostName>
              <port>4001</port>
          </member>
      </members>

We have now completed the clustering-related configuration for the ELB. In the next section, we will make one last change to the ELB that will increase usability.

...

Configuring the ELB to listen on default ports

We will now change the ELB configuration to listen to the default HTTP and HTTPS ports.

...

In the ELB we have specified two host names: esb.cloud-test.wso2.com for worker hosts and mgr.esb.wso2.com for the manager node. We will now map them to the actual IPs.

 

Open the server's /etc/hosts file and add the following lines, where <ELB-IP> and <ESB-Manager-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):
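The edit can be staged safely first. A minimal sketch (the IPs shown are the example values from this guide; staging in a temporary file is just a precaution so you can review the entries before touching the real /etc/hosts):

```shell
# Stage the /etc/hosts entries in a temporary file for review,
# then append them to the real /etc/hosts with sudo.
HOSTS_STAGING=$(mktemp)
cat >> "$HOSTS_STAGING" <<'EOF'
xxx.xxx.xxx.206 esb.cloud-test.wso2.com
xxx.xxx.xxx.206 mgr.esb.wso2.com
EOF
grep -c "wso2.com" "$HOSTS_STAGING"
# afterwards: sudo sh -c "cat $HOSTS_STAGING >> /etc/hosts"
```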

...

We have now finished configuring the ELB and are ready to start the ELB server.

 

Starting the ELB server

Start the ELB server by typing the following command in the terminal: 

sudo -E sh <ELB_HOME>/bin/wso2server.sh 

...

Info

If you skipped the step of configuring the ELB to listen on the default ports, you do not need to use the sudo command and can just start the ELB with the following command: sh <ELB_HOME>/bin/wso2server.sh

...

Code Block
languagenone
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesUtil No members in current cluster
INFO - TribesClusteringAgent Cluster initialization completed.

You are ready to configure the ESB manager node, enable clustering on the ESB worker nodes, and configure them to recognize the well-known member (the ELB) in the cluster.


Now that the ELB is configured and running, you create a central database for all the nodes to use.

Setting up the central database

Each Carbon-based product uses a database to store information such as user management details and registry data. All nodes in the cluster must use one central database.

  1. Download and install MySQL server.
  2. Download the MySQL JDBC driver.
  3. Define the host name for configuring permissions for the new database by opening the /etc/hosts file and adding the following line:
     <MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
  4. Open a terminal/command window and log in to MySQL with the following command:
     mysql -u username -p
  5. When prompted, specify the password, and then create the database with the following command:
     mysql> create database carbondb;
  6. Grant permission to access the created database with the following command:
     mysql> grant all on carbondb.* TO username@carbondb.mysql-wso2.com identified by "password";
  7. Unzip the downloaded MySQL driver archive and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) to the <ESB_HOME>/repository/component/lib directory for each worker and manager node.
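If you prefer to script the database-creation steps, the SQL can be staged in a file first. A minimal sketch (username, password, and host are the example values used throughout this guide; setup.sql is a hypothetical file name):

```shell
# Write the database-creation SQL from the steps above to setup.sql;
# run it later with: mysql -u username -p < setup.sql
cat > setup.sql <<'EOF'
create database carbondb;
grant all on carbondb.* TO username@carbondb.mysql-wso2.com identified by "password";
EOF
wc -l < setup.sql
```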

We have now created a central database called carbondb with host carbondb.mysql-wso2.com, and granted permissions to the user username with the password password.

The next step is to configure the manager node.

Configuring the manager node 

In this section, we will configure data sources to allow the manager node to point to the central database, enable the manager node for clustering, change the port offset, and map the host names to IPs.

Configuring the data sources

  1. Make sure that you have copied the MySQL JDBC driver JAR to the manager node as described in Creating an ESB Cluster.
  2. Open the master-datasources.xml file located in the <ESB_MANAGER_HOME>/repository/conf/datasources/ directory.
  3. Locate the WSO2_CARBON_DB data source configurations and change them as follows:
    • Define the location of the central database: <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>

...

    • Give user username access to the database: <username>username</username>
    • Specify the password for this user: <password>password</password>
    • Specify the driver to use for connecting to the central database (the driver we copied in the previous section): <driverClassName>com.mysql.jdbc.Driver</driverClassName>

...

When you are finished, the data source configuration should look like this:

Code Block
languagehtml/xml
<datasource>
   <name>WSO2_CARBON_DB</name>
   <description>The datasource used for registry and user manager</description>
   <jndiConfig>
       <name>jdbc/WSO2CarbonDB</name>
   </jndiConfig>
   <definition type="RDBMS">
       <configuration>
           <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
           <username>username</username>
           <password>password</password>
           <driverClassName>com.mysql.jdbc.Driver</driverClassName>
           <maxActive>50</maxActive>
           <maxWait>60000</maxWait>
           <testOnBorrow>true</testOnBorrow>
           <validationQuery>SELECT 1</validationQuery>
           <validationInterval>30000</validationInterval>
       </configuration>
   </definition>
</datasource>
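As a sanity check, the pieces of the <url> value can be pulled apart with plain shell string handling to confirm that the host matches the /etc/hosts entry and the database matches the one created earlier (a sketch; the URL is copied from the configuration above):

```shell
# Decompose the JDBC URL into host:port and database name.
URL="jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000"
HOSTPORT=${URL#jdbc:mysql://}   # strip the scheme prefix
HOSTPORT=${HOSTPORT%%/*}        # keep everything before the first /
DB=${URL##*/}                   # everything after the last /
DB=${DB%%\?*}                   # drop the query parameters
echo "host:port = $HOSTPORT"
echo "database  = $DB"
```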
Info

In most WSO2 products, only one data source is used. If there is more than one data source, make sure they reference the central databases accordingly. For example, the API Manager deployment setup requires more specific data source configurations, so it is described in a different section below.

...

We have now finished configuring the data sources for the ESB manager node.

...


...

Setting up cluster configurations for the manager node

Configuring clustering for the manager node is very similar to the way you configured it for the ELB node, but the localMemberPort is 4001 instead of 4000, and you define the ELB node instead of the ESB manager node as the well-known member.

  1. Open the <ESB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and configure the properties as follows:
    • Enable clustering for this node: <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    • Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): <parameter name="membershipScheme">wka</parameter>
    • Specify the name of the cluster this node will join: <parameter name="domain">wso2.esb.domain</parameter>
    • Specify the port used to communicate cluster messages: <parameter name="localMemberPort">4001</parameter>
      Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server. Here we set the port to 4001, since 4000 is already used by the ELB on this server (xxx.xxx.xxx.206).
    • Define the ELB node as a well-known member of the cluster by providing its host name and its localMemberPort:

      Code Block
      languagehtml/xml
      <members>
          <member>
              <hostName>elb.wso2.com</hostName>
              <port>4000</port>
          </member>
      </members>

...


...

Adjusting the port offset

Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts.

  1. Open <ESB_MANAGER_HOME>/repository/conf/carbon.xml.
  2. Locate the <Ports> tag and change the value of its sub-tag to: <Offset>1</Offset>
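Every default Carbon port is shifted by this offset. A quick sketch of the effect for the manager node (9443/9763 are the standard Carbon management ports and 8243/8280 the ESB transport ports; treat the exact port list as an assumption to verify against your release):

```shell
# effective port = default port + <Offset> from carbon.xml
OFFSET=1
for p in 9443 9763 8243 8280; do
  echo "$p -> $((p + OFFSET))"
done
```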

...


...

In the next section, we will map the host names we specified to real IPs.

Mapping host names to IPs

In the manager node, we have specified two host names: carbondb.mysql-wso2.com for the MySQL server and elb.wso2.com for the ELB. We will now map them to the actual IPs.

Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP> and <ELB-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):

Code Block
languagehtml/xml
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP> elb.wso2.com

Note that if you created the database on the same server as the manager node, you may have already added the first line.

We have now finished configuring the manager node and are ready to start the ESB server.

Starting the ESB server

Start the ESB server by typing the following command in the terminal:

sh <ESB_MANAGER_HOME>/bin/wso2server.sh -Dsetup

The additional -Dsetup argument will clean the configurations, recreate the central database, and create the required tables in the database.

The ESB should print logs to the server console similar to the following:

Code Block
languagenone
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4001
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4001
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - TribesUtil Members of current cluster
INFO - TribesUtil Member1 xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Sending JOIN message to WKA members...
INFO - RpcMembershipRequestHandler Received MEMBER_LIST message from xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Trying to send initialization request to xxx.xxx.xxx.206:4000(wso2.esb.domain)

...

Additionally, the ELB console should have these new messages to indicate that the manager node joined the cluster:

Code Block
languagenone
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.206:4001(wso2.esb.domain) joined group wso2.esb.domain
INFO - TribesMembershipListener New member xxx.xxx.xxx.206:4001(wso2.esb.domain) joined cluster.
INFO - RpcInitializationRequestHandler Received GetConfigurationCommand initialization request message from xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.206, Port: 4001, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:null, Active:true joined application cluster

We have now finished configuring the manager node. Next, we will configure the ESB worker nodes.


...

Configuring the worker nodes

You configure worker nodes in very much the same way as you configured the manager node. Be sure to follow these steps for each worker node in the cluster. 

Configuring the data source

You configure the data source to connect to the central database. Because the data source changes are the same for all worker nodes, you can configure this file on one node and then copy it to the other worker nodes.

  1. Make sure that you have copied the MySQL JDBC driver JAR to each worker node as described in Creating an ESB Cluster.
  2. Open the master-datasources.xml file located in the <ESB_WORKER_HOME>/repository/conf/datasources/ directory.
  3. Locate the WSO2_CARBON_DB data source configurations and change them as follows:
    • Define the location of the central database: <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
    • Give user username access to the database: <username>username</username>
    • Specify the password for this user: <password>password</password>
    • Specify the driver to use for connecting to the central database: <driverClassName>com.mysql.jdbc.Driver</driverClassName>

...

When you are finished, the data source configuration on each worker node should look like this:

Code Block
languagehtml/xml
<datasource>
   <name>WSO2_CARBON_DB</name>
   <description>The datasource used for registry and user manager</description>
   <jndiConfig>
       <name>jdbc/WSO2CarbonDB</name>
   </jndiConfig>
   <definition type="RDBMS">
       <configuration>
           <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
           <username>username</username>
           <password>password</password>
           <driverClassName>com.mysql.jdbc.Driver</driverClassName>
           <maxActive>50</maxActive>
           <maxWait>60000</maxWait>
           <testOnBorrow>true</testOnBorrow>
           <validationQuery>SELECT 1</validationQuery>
           <validationInterval>30000</validationInterval>
       </configuration>
   </definition>
</datasource>

As mentioned previously, if there is more than one data source, configure them to reference the central database as well.

After you have finished configuring the data source, be sure to copy this configuration to the other worker nodes in the cluster.

Setting up cluster configurations for the worker nodes

Configuring clustering for the worker nodes is similar to the way you configured it for the manager node, but the localMemberPort will vary for each worker node, you add the subDomain property, and you add the ELB and ESB manager node to the well-known members, as described in the following steps.

  1. Open the <ESB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and configure the properties as follows:
    • Enable clustering for this node: <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    • Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): <parameter name="membershipScheme">wka</parameter>
    • Specify the name of the cluster this node will join: <parameter name="domain">wso2.esb.domain</parameter>
    • Specify the port used to communicate cluster messages (if this node is on the same server as the ELB, manager node, or another worker node, be sure to set this to a unique value, such as 4000 and 4001 for worker nodes 1 and 3 and 4002 for worker node 2, which is on the same server as the ELB and manager node): <parameter name="localMemberPort">4000</parameter>
      Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
    • Define the sub-domain as worker by adding the following property under the <parameter name="properties"> element: <property name="subDomain" value="worker"/>
    • Define the ELB and manager nodes as well-known members of the cluster by providing their host name and localMemberPort values:

      Code Block
      languagehtml/xml
      <members>
          <member>
              <hostName>elb.wso2.com</hostName>
              <port>4000</port>
          </member>
          <member>
              <hostName>mgr.esb.wso2.com</hostName>
              <port>4001</port>
          </member>
      </members>
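Because the ELB (4000), the manager node (4001), and Worker2 (4002) all run on the xxx.xxx.xxx.206 server, it is worth double-checking that no localMemberPort is reused there. A minimal sketch of that check:

```shell
# localMemberPort values configured on the shared server in this guide
PORTS="4000 4001 4002"
TOTAL=$(echo "$PORTS" | tr ' ' '\n' | wc -l)
UNIQUE=$(echo "$PORTS" | tr ' ' '\n' | sort -u | wc -l)
if [ "$TOTAL" -eq "$UNIQUE" ]; then
  echo "no port conflicts"
else
  echo "duplicate localMemberPort values!"
fi
```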

Adjusting the port offset

Because we are running multiple Carbon-based products on the same server, we must change the port offset to avoid port conflicts.

  1. Open <ESB_WORKER_HOME>/repository/conf/carbon.xml on each worker node.
  2. Locate the <Ports> tag and change the value of its sub-tag as follows on each worker node:

    • Worker1: <Offset>0</Offset> - No changes needed, because this will be the first node on this (xxx.xxx.xxx.132) server.
    • Worker2: <Offset>2</Offset> - Set the offset to 2, because there are already two more Carbon products (ELB and ESB manager node) running on this (xxx.xxx.xxx.206) server.
    • Worker3: <Offset>1</Offset> - Set the offset to 1, because Worker1 occupies the default ports on this (xxx.xxx.xxx.132) server.
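The offsets above determine the HTTP/HTTPS transport ports each worker actually serves (8280 and 8243 are the ESB defaults, as seen in the ELB logs). A small sketch of the resulting ports:

```shell
# effective transport ports = default + per-worker offset
for entry in "Worker1 0" "Worker2 2" "Worker3 1"; do
  name=${entry%% *}
  off=${entry##* }
  echo "$name: HTTP=$((8280 + off)) HTTPS=$((8243 + off))"
done
```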

In the next section, we will map the host names we specified to real IPs.

Mapping host names to IPs

In the worker nodes, we have used three host names: carbondb.mysql-wso2.com for the MySQL server, elb.wso2.com for the ELB, and mgr.esb.wso2.com for the ESB manager node. We will now map them to the actual IPs.

Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP>, <ELB-IP>, and <ESB-Manager-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):

Code Block
languagehtml/xml
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP> elb.wso2.com
<ESB-Manager-IP> mgr.esb.wso2.com

We have now finished configuring the worker nodes and are ready to start them.

Starting the ESB server

Start the ESB server by typing the following command in the terminal:

sh <ESB_WORKER_HOME>/bin/wso2server.sh -DworkerNode=true

The additional -DworkerNode=true argument indicates that this is a worker node.

When starting Worker1, it should display logs similar to the following in the console:

Code Block
languagenone
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.132:4000
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.132:4000
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - TribesUtil Members of current cluster
INFO - TribesUtil Member1 xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesUtil Member2 xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Sending JOIN message to WKA members...
INFO - RpcMembershipRequestHandler Received MEMBER_LIST message from xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Trying to send initialization request to xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - GetConfigurationResponseCommand Received configuration initialization message
INFO - TribesClusteringAgent Cluster initialization completed.

The ELB console should have these new messages:

Code Block
languagenone
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined group wso2.esb.domain
INFO - TribesMembershipListener New member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined cluster.
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.132, Port: 4000, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:worker, Active:true joined application cluster

We have now finished configuring the worker nodes, and the cluster is running! When you terminate one node, all nodes identify that the node has left the cluster. The same applies when a new node joins the cluster. If you want to add another worker node, you can simply copy worker1 without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to use the new node on a server where another WSO2 product is running, you can use a copy of worker1 and change the port offset accordingly in the carbon.xml file. You may also have to change localMemberPort in axis2.xml if that product has clustering enabled. In either case, be sure to map all host names to the relevant IP addresses in /etc/hosts when creating a new node.