This topic provides instructions on how to cluster WSO2 Enterprise Service Bus (ESB) using WSO2 Elastic Load Balancer (ELB). The exact steps depend on the worker/manager clustering pattern you choose. You can also use a third-party load balancer in place of the ELB; for configuration details, see your load balancer's documentation.

...

In this scenario, we are using WSO2 ELB for the load balancer (you can also use a third-party load balancer; for details on configuration, see your load balancer's documentation). You configure the ELB with the overall definition of the cluster and how it should distribute the load. You can achieve this by adding a few lines to a configuration file called loadbalancer.conf. You specify the detailed clustering configurations in the axis2.xml file. This section describes how to perform these steps.

Info

The system should have at least two Well-known Address (WKA) members in order to work correctly and to recover if a single WKA member fails. A WKA member can be another ELB, a manager node, or a worker node.

Setting up

...

  1. Open the <ELB_HOME>/repository/conf/loadbalancer.conf file.
  2. Locate the ESB configuration and edit it as follows:

    Code Block
    languagehtml/xml
    esb {
      domains{
         wso2.esb.domain {
            tenant_range *;
            group_mgt_port 5000;
            worker {
                   hosts esb.cloud-test.wso2.com;
            }
         }
       }
    }

In this file, we specified the domain name (wso2.esb.domain), which is used to identify the cluster. On startup, nodes that are configured with this domain name can join this cluster.

All the service requests need to be routed to the worker nodes through the ELB, which is the front end to the entire cluster. Therefore, we specify the worker sub-domain and use the hosts attribute to configure the publicly accessible host name (esb.cloud-test.wso2.com) that clients can use to send their requests to the cluster. We will map the host name to the ELB server IP address later.

For service-based products (such as WSO2 ESB, Data Services Server, and Application Server), the ELB creates one group management agent per cluster to manage the service groups. Because we are configuring an ESB cluster, we used the group_mgt_port attribute to specify the port for this cluster's group management agent. The port should be a unique value between 4000 and 5000. In scenarios where you are not using the ELB in front of the cluster, you configure the group management agent in the node's axis2.xml file instead as described in Group Management.

Finally, the tenant_range attribute is used to handle tenant-aware load-balancing, which is another very powerful feature of the ELB. This attribute allows us to partition tenants into several clusters, so that when there is a large number of tenants to work with, we can instruct each cluster to work only with a particular tenant or a few selected tenants. This approach is also useful if you need a particular cluster to be dedicated to work for a single special tenant ("Private Jet Mode"). Following are examples of the values you can specify for tenant_range:

  • 1,6,3,4: Tenant IDs 1, 6, 3 and 4
  • 1-3: Tenant IDs 1 to 3 (inclusive of both 1 and 3)
  • 43: Tenant ID 43
  • *: All tenants

In this example, we are not enabling tenant partitioning, so we have used an asterisk ( * ) as the value of tenant_range to represent all possible tenants.
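For illustration only, the following sketch (with hypothetical domain names, tenant ranges, ports, and host names) shows how two clusters could each be assigned their own slice of tenants:

Code Block
languagehtml/xml
esb {
  domains{
     wso2.esb.tenants1.domain {
        tenant_range 1-100;
        group_mgt_port 4500;
        worker {
               hosts esb1.cloud-test.wso2.com;
        }
     }
     wso2.esb.tenants2.domain {
        tenant_range 101-200;
        group_mgt_port 4501;
        worker {
               hosts esb2.cloud-test.wso2.com;
        }
     }
   }
}

With a configuration like this, nodes that start up with the first domain name serve only tenants 1 to 100, and nodes in the second domain serve only tenants 101 to 200.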

When there is more than one load balancer fronting the cluster, these configurations change as follows:

Code Block
esb {
   domains{
      wso2.esb.domain {
         tenant_range *;
         group_mgt_port 5000;
         members 127.0.0.1:6000;
         mgt {
            hosts mgt.esb.wso2.com;
         }
         worker {
            hosts esb.wso2.com;
         }
      }
   }
}
Info

Note that 5000 is the group_mgt_port value used for ESB in this example. This value needs to be different for different products.

 

In summary, we have configured the load balancer to handle requests sent to esb.cloud-test.wso2.com and to distribute the load among the worker nodes in the worker sub-domain of the wso2.esb.domain cluster. We are now ready to set up the cluster configurations.

Setting up cluster configurations on the ELB

In the loadbalancer.conf file, we referred to several properties of the cluster, such as the domain name and sub-domains, but we did not actually define them there. We now define these properties as we build the cluster.

  1. Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
    • Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    • Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    • Specify a domain name for the ELB node (note that this domain is for potentially creating a cluster of ELB nodes and is not the cluster of ESB nodes that the ELB will load balance): 
      <parameter name="domain">wso2.carbon.lb.domain</parameter>
    • Specify the port used to communicate with this ELB node: 
      <parameter name="localMemberPort">4000</parameter>
       
      Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
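Taken together, the clustering parameters above would appear in the ELB's axis2.xml roughly as in the following sketch (only the parameters discussed in this section are shown; keep the rest of the default clustering configuration as it is):

Code Block
languagehtml/xml
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.carbon.lb.domain</parameter>
    <parameter name="localMemberPort">4000</parameter>
</clustering>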

Info

In the case of multiple load balancers fronting the cluster, you need to add members to the axis2.xml file.

Code Block
languagehtml/xml
<members> 
      <member> 
            <hostName>127.0.0.1</hostName> 
           <port>4200</port> 
      </member> 
</members>

Here, the second ELB runs on localhost, and port 4200 is the localMemberPort (found in the axis2.xml file) of the second ELB. Note that 4200 is the port value used for ESB in this example; the port value needs to be different for different products.

Apply this configuration to both ELBs. It is done primarily to make both ELBs aware of each other, which is especially helpful when one ELB restarts and needs to retrieve the existing cluster information from the other ELB before it can serve requests again.

We have now completed the clustering-related configuration for the ELB. In the next section, we will make one last change to the ELB that will increase usability.

Configuring the ELB to listen on default ports

We will now change the ELB configuration to listen to the default HTTP and HTTPS ports.

  1. Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Transport Receiver section and configure the properties as follows:
    • In the <transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener"> transport, enable service requests to be sent to the ELB's default HTTP port instead of having to specify port 8280: 
      <parameter name="port">80</parameter>
    • In the <transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener"> transport, enable service requests to be sent to the ELB's default HTTPS port instead of having to specify port 8243: 
      <parameter name="port">443</parameter>

In the next section, we will map the host names we specified to real IPs.

Mapping the host name to the IP

In the ELB, we configured a host name in loadbalancer.conf to front the worker service requests. We must now map this host name (esb.cloud-test.wso2.com) to the actual IP address. Open the server's /etc/hosts file and add the following line, where <ELB-IP> is the actual IP address:

Code Block
languagenone
<ELB-IP> esb.cloud-test.wso2.com 

In this example, it would look like this:

Code Block
languagenone
xxx.xxx.xxx.206 esb.cloud-test.wso2.com 

We have now finished configuring the ELB and are ready to start the ELB server.

Starting the ELB server

Start the ELB server by typing the following command in the terminal:

Code Block
sudo -E sh <ELB_HOME>/bin/wso2server.sh 
Info

If you skipped the step of configuring the ELB to listen on the default ports, you do not need to use the sudo command and can just start the ELB with the following command: sh <ELB_HOME>/bin/wso2server.sh

The ELB should print logs to the server console indicating that the cluster initialization is complete.

Now that the ELB is configured and running, you create a central database for all the nodes to use.

Setting up the database

Each Carbon-based product uses a database to store information such as user management details and registry data. All nodes in the cluster must use one central database.

  1. Download and install MySQL server.
  2. Download the MySQL JDBC driver.
  3. Define the host name for configuring permissions for the new database by opening the /etc/hosts file and adding the following line:

    Code Block
    <MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
  4. Open a terminal/command window and log in to MySQL with the following command:

    Code Block
    mysql -u username -p
  5. When prompted, specify the password, and then create the database with the following command:

    Code Block
    mysql> create database wso2conum_db;
  6. Create a database schema and populate tables as follows. Make sure to replace <CARBON_HOME> with the absolute path of the WSO2 ELB directory.

    Code Block
    mysql> use wso2conum_db;
    mysql> source CARBON_HOME/dbscripts/mysql.sql;
  7. Create another database which will be used as the shared governance and configuration registry database.

    Code Block
    mysql> create database wso2conreg_db;
  8. Create the registry database schema and populate the tables as follows. Make sure to replace CARBON_HOME with the absolute path of the wso2elb-2.1.0 directory.

    Code Block
    mysql> use wso2conreg_db;
    mysql> source CARBON_HOME/dbscripts/mysql.sql;
  9. Grant permission to access the created database with the following command: 

    Code Block
    mysql> grant all on carbondb.* TO username@carbondb.mysql-wso2.com identified by "password";
  10. Unzip the downloaded MySQL driver archive and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) to the <ESB_HOME>/repository/components/lib directory on each worker and manager node.

We have now created a central database called carbondb with host carbondb.mysql-wso2.com, and with permission for user username with password password.
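If you also want this user (rather than root) to access the registry and user management databases created above, grant it access to those databases in the same way. The following sketch assumes the user name, password, and host used in the earlier grant command:

Code Block
mysql> grant all on wso2conum_db.* TO username@carbondb.mysql-wso2.com identified by "password";
mysql> grant all on wso2conreg_db.* TO username@carbondb.mysql-wso2.com identified by "password";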

To configure the user management database and shared registry database, edit <ESB_MGR_HOME>/repository/conf/datasources/master-datasources.xml as shown below:

Code Block
languagehtml/xml
<datasource>
	<name>WSO2_SHARED_REG_DB</name>
	<description>The datasource used for shared config and governance registry</description>
	<jndiConfig>
		<name>jdbc/WSO2SharedDB</name>
	</jndiConfig>
	<definition type="RDBMS">
		<configuration>
			<url>jdbc:mysql://localhost:3306/wso2conreg_db</url>
			<username>root</username>
			<password>root</password>
			<driverClassName>com.mysql.jdbc.Driver</driverClassName>
			<maxActive>50</maxActive>
			<maxWait>60000</maxWait>
			<testOnBorrow>true</testOnBorrow>
			<validationQuery>SELECT 1</validationQuery>
			<validationInterval>30000</validationInterval>
		</configuration>
	</definition>
</datasource>
<datasource>
	<name>WSO2_UM_DB</name>
	<description>The datasource used for registry and user manager</description>
	<jndiConfig>
		<name>jdbc/WSO2UmDB</name>
	</jndiConfig>
	<definition type="RDBMS">
		<configuration>
			<url>jdbc:mysql://localhost:3306/wso2conum_db</url>
			<username>root</username>
			<password>root</password>
			<driverClassName>com.mysql.jdbc.Driver</driverClassName>
			<maxActive>50</maxActive>
			<maxWait>60000</maxWait>
			<testOnBorrow>true</testOnBorrow>
			<validationQuery>SELECT 1</validationQuery>
			<validationInterval>30000</validationInterval>
		</configuration>
	</definition>
</datasource>
Info

Make sure to replace username and password with your MySQL database username and password.

To configure the datasource, update the dataSource property found in <ESB_MGR_HOME>/repository/conf/user-mgt.xml as shown below:

Code Block
languagehtml/xml
<Property name="dataSource">jdbc/WSO2UmDB</Property>

Configure the shared registry database and mounting details in <ESB_MGR_HOME>/repository/conf/registry.xml as follows:

Code Block
languagehtml/xml
<dbConfig name="sharedregistry">
	<dataSource>jdbc/WSO2SharedDB</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
	<id>instanceid</id>
	<dbConfig>sharedregistry</dbConfig>
	<readOnly>false</readOnly>
	<enableCache>true</enableCache>
	<registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
	<instanceId>instanceid</instanceId>
	<targetPath>/_system/esbnodes</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
	<instanceId>instanceid</instanceId>
	<targetPath>/_system/governance</targetPath>
</mount>

Now your database is set up. The next step is to configure the manager node.

Configuring the manager node 

In this section, we will configure data sources to allow the manager node to point to the central database, enable the manager node for clustering, change the port offset, and map the host names to IPs.

Configuring the data source

  1. Make sure that you have copied the MySQL JDBC driver JAR to the manager node as described in Setting up the central database.
  2. Open the master-datasources.xml file located in the <ESB_MANAGER_HOME>/repository/conf/datasources/ directory.
  3. Locate the WSO2_CARBON_DB data source configurations and change them as follows:
    • Point the URL to the central database: 
      <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb</url>
    • Give user username access to the database: 
      <username>username</username>
      <password>password</password>
    • Specify the driver to use for connecting to the central database: 
      <driverClassName>com.mysql.jdbc.Driver</driverClassName>

When you are finished, the data source configuration should look like this:

Code Block
languagehtml/xml
<datasource>
   <name>WSO2_CARBON_DB</name>
   <description>The datasource used for registry and user manager</description>
   <jndiConfig>
       <name>jdbc/WSO2CarbonDB</name>
   </jndiConfig>
   <definition type="RDBMS">
       <configuration>
           <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
           <username>username</username>
           <password>password</password>
           <driverClassName>com.mysql.jdbc.Driver</driverClassName>
           <maxActive>50</maxActive>
           <maxWait>60000</maxWait>
           <testOnBorrow>true</testOnBorrow>
           <validationQuery>SELECT 1</validationQuery>
           <validationInterval>30000</validationInterval>
       </configuration>
   </definition>
</datasource>
Info

In most WSO2 products, only one data source is used. If there is more than one data source, make sure they reference the central databases accordingly. For example, the API Manager deployment setup requires more specific data source configurations.

Setting up cluster configurations for the manager node

Configuring clustering for the manager node is similar to the way you configured it for the ELB node, but the localMemberPort is 4001 instead of 4000, and you define the ELB node instead of the ESB manager node as the well-known member.

  1. Open the <ESB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
    • Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    • Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    • Specify the name of the cluster this node will join:
      <parameter name="domain">wso2.esb.domain</parameter>
    • Specify the port used to communicate cluster messages: 
      <parameter name="localMemberPort">4001</parameter> 
      Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
    • Add a new property "subDomain" and set it to "mgt" to denote that this node belongs to the mgt sub-domain of the cluster, as defined in loadbalancer.conf.

      Code Block
      languagehtml/xml
      <parameter name="properties">
                  <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
                  <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
                  <property name="subDomain" value="mgt"/>
                  <property name="port.mapping.8290" value="9763"/>
      </parameter>
    • The receiver's http/https port values are specified without the portOffset addition; they get auto-incremented by the portOffset. The 'WSDLEPRPrefix' parameter should point to the worker node's host name (esb.cloud-test.wso2.com) and the ELB's http (8280)/https (8243) transport ports.

    • Change the members listed in the <members> element so that it is applicable for your worker/manager clustering deployment pattern:

      Localtabgroup
      Localtab
      activetrue
      titleWorker/Manager Clustering Pattern 1

      Clear the members from the <members> element so that it is now empty. This is done as members and load balancers are not necessary in this pattern.

      Code Block
      languagehtml/xml
      <members>
         
      </members>
      Localtab
      titleWorker/Manager Clustering Pattern 2 and 3

      All other worker/manager clustering deployment patterns require members. Use the following as the configurations for the <members> element.

      Code Block
      languagehtml/xml
      <members>
      	<member>
      		<hostName>127.0.0.1</hostName>
      		<port>5000</port>
      	</member>
      </members>

Configuring the port offset and host name

Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts. Additionally, we will add the cluster host name so that any requests sent to the manager host are redirected to the cluster, where the ELB will pick them up and manage them.

  1. Open <ESB_MANAGER_HOME>/repository/conf/carbon.xml.
  2. Locate the <Ports> tag and change the value of its sub-tag to: 
    <Offset>1</Offset>
  3. Locate the <HostName> tag and add the cluster host name: 
    <HostName>esb.cloud-test.wso2.com</HostName>
  4. Locate the <MgtHostName> tag and uncomment it. Make sure that the management host name is defined as follows:
    <MgtHostName>mgt.wso2.org</MgtHostName> 

In the next section, we will map the host names we specified to real IPs.

Mapping host names to IPs

In the manager node we have specified two host names: carbondb.mysql-wso2.com for the MySQL server and esb.cloud-test.wso2.com for the cluster. We will now map them to the actual IPs. Note that if you created the database on the same server as the manager node, you will have already added the first line, and if you created it on the same server as the ELB, you will have already added the second line.

Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP> and <ELB-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):

Code Block
languagenone
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP> esb.cloud-test.wso2.com

In this example, it would look like this:

Code Block
languagenone
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.206 esb.cloud-test.wso2.com

We have now finished configuring the manager node and are ready to start the ESB server.

Starting the ESB server

Start the ESB server by typing the following command in the terminal:

Code Block
sh <ESB_MANAGER_HOME>/bin/wso2server.sh -Dsetup 

The additional -Dsetup argument will clean the configurations, recreate the central database, and create the required tables in the database.

The ESB should print logs to the server console indicating that the cluster initialization is complete.

We have now finished configuring the manager node. Next, we will configure the ESB worker nodes.

Configuring the worker nodes

You configure worker nodes in very much the same way as you configured the manager node. Be sure to follow these steps for each worker node in the cluster. 

Configuring the data source

You configure the data source to connect to the central database. Because the data source changes are the same for all worker nodes, you can configure this file on one node and then copy it to the other worker nodes.

  1. Make sure that you have copied the MySQL JDBC driver JAR to each worker node as described in Setting up the central database.
  2. Open the master-datasources.xml file located in the <ESB_WORKER_HOME>/repository/conf/datasources/ directory.
  3. Locate the WSO2_CARBON_DB data source configurations and change them as follows:
    • Give user username access to the database: 
      <username>username</username>
      <password>password</password>
    • Specify the driver to use for connecting to the central database: 
      <driverClassName>com.mysql.jdbc.Driver</driverClassName>

When you are finished, the data source configuration on each worker node should look like this:

Code Block
languagehtml/xml
<datasource>
   <name>WSO2_CARBON_DB</name>
   <description>The datasource used for registry and user manager</description>
   <jndiConfig>
       <name>jdbc/WSO2CarbonDB</name>
   </jndiConfig>
   <definition type="RDBMS">
       <configuration>
           <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
           <username>username</username>
           <password>password</password>
           <driverClassName>com.mysql.jdbc.Driver</driverClassName>
           <maxActive>50</maxActive>
           <maxWait>60000</maxWait>
           <testOnBorrow>true</testOnBorrow>
           <validationQuery>SELECT 1</validationQuery>
           <validationInterval>30000</validationInterval>
       </configuration>
   </definition>
</datasource>

As mentioned previously, if there is more than one data source, configure them to reference the central database as well.

After you have finished configuring the data source, be sure to copy this configuration to the other worker nodes in the cluster.

Setting up cluster configurations for the worker nodes

Configuring clustering for the worker nodes is similar to the way you configured it for the manager node, but the localMemberPort will vary for each worker node, you add the subDomain property, and you add the ELB and ESB manager node to the well-known members, as described in the following steps.

  1. Open the <ESB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
    • Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    • Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    • Specify the name of the cluster this node will join: 
      <parameter name="domain">wso2.esb.domain</parameter>
    • Specify the port used to communicate cluster messages (if this node is on the same server as the ELB, manager node, or another worker node, be sure to set this to a unique value, such as 4000 and 4001 for worker nodes 1 and 3, and 4002 for worker node 2, which is on the same server as the ELB and manager node): 
      <parameter name="localMemberPort">4000</parameter>
      Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
    • Add a new property "subDomain" and set it to "worker" to denote that this node belongs to the worker sub-domain of the cluster, as defined in loadbalancer.conf.

      Code Block
      languagehtml/xml
      <parameter name="properties">
                  <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
                  <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
                  <property name="subDomain" value="mgt"/>
                  <property name="port.mapping.8290" value="9763"/>
      </parameter>
    • Define the ELB and manager nodes as well-known members of the cluster by providing their host name and localMemberPort values. The manager node is defined here because it is required for the Deployment Synchronizer to function.

      Code Block
      languagehtml/xml
      <members>
          <member>
              <hostName>elb.wso2.com</hostName>
              <port>5000</port>
          </member>
          <member>
              <hostName>mgt.esb.wso2.com</hostName>
              <port>4001</port>
          </member>
      </members>

Adjusting the port offset

Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts.

  1. Open <ESB_WORKER_HOME>/repository/conf/carbon.xml on each worker node.
  2. Locate the <Ports> tag and change the value of its sub-tag as follows on each worker node:
    • Worker1: <Offset>0</Offset> - No changes needed, because this will be the first node on this (xxx.xxx.xxx.132) server.

    • Worker2: <Offset>2</Offset> - Set the offset to 2, because there are already two more Carbon products (ELB and ESB manager node) running on this (xxx.xxx.xxx.206) server.

    • Worker3: <Offset>1</Offset> - Set the offset to 1, because Worker1 occupies the default ports on this (xxx.xxx.xxx.132) server.

In the next section, we will map the host names we specified to real IPs.

Mapping host names to IPs

In the worker nodes, we have used three host names: carbondb.mysql-wso2.com for the MySQL server, elb.wso2.com for the ELB, and mgt.esb.wso2.com for the ESB manager node. We will now map them to the actual IPs.

Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP>, <ELB-IP>, and <ESB-Manager-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):

Code Block
languagenone
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP> elb.wso2.com
<ESB-Manager-IP> mgt.esb.wso2.com 

In this example, it would look like this:

Code Block
languagenone
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.206 elb.wso2.com
xxx.xxx.xxx.206 mgt.esb.wso2.com 

We have now finished configuring the worker nodes and are ready to start them.

Info

If you want to remove all UI components from the worker nodes, you need to run the ant createWorker task before you start the worker nodes. Note that this will remove the management console capability from the worker nodes.
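For example, assuming the standard Carbon layout where the Ant build file is located in the bin directory, you would run the task as follows:

Code Block
languagenone
cd <ESB_WORKER_HOME>/bin
ant createWorker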

Starting the ESB server

Start the ESB server by typing the following command in the terminal:

Code Block
sh <ESB_WORKER_HOME>/bin/wso2server.sh -DworkerNode=true 

The additional -DworkerNode=true argument indicates that this is a worker node.

When Worker1 starts, it should display logs in the console indicating that the cluster initialization is complete.

The ELB console should have these new messages:

Code Block
languagenone
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined group wso2.esb.domain
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.132, Port: 4000, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:worker, Active:true joined application cluster

The manager node console should have these new messages:

Code Block
languagenone
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - RpcInitializationRequestHandler Received GetConfigurationCommand initialization request message from xxx.xxx.xxx.132:4000(wso2.esb.domain)

If you have similar messages in your consoles, you have finished configuring the worker nodes and the cluster is running. When you terminate one node, all nodes identify that the node has left the cluster. The same applies when a new node joins the cluster. If you want to add another new worker node, you can simply copy worker1 without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to use the new node on a server where another WSO2 product is running, you can use a copy of worker1 and change the port offset accordingly in the carbon.xml file. You may also have to change localMemberPort in axis2.xml if that product has clustering enabled. Be sure to map all host names to the relevant IP addresses when creating a new node.

Testing the cluster

To test the cluster, open the ESB management console on the manager node (use the management console URL displayed in the terminal when you started the node), add a sample proxy service with a log mediator in the inSequence so that logs are displayed in the worker terminals, and then observe the cluster messages sent from the manager node to the workers.
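The following is a minimal sketch of such a proxy service (the proxy name and the use of a drop mediator after logging are illustrative choices, not part of the original setup):

Code Block
languagehtml/xml
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="ClusterTestProxy"
       transports="http,https"
       startOnLoad="true">
   <target>
      <inSequence>
         <!-- Log the full message so the worker node that handles the request prints it to its console -->
         <log level="full"/>
         <!-- Stop further mediation; the client receives an HTTP 202 Accepted -->
         <drop/>
      </inSequence>
   </target>
</proxy>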

The load balancer manages the active and passive states of the worker nodes, activating nodes as needed and leaving the rest in passive mode. To test this, send a request to the endpoint through the load balancer and verify that the proxy service is invoked only on the active worker node(s) while the remaining worker nodes remain passive. For example, you would send the request to the following URL:

http://{Load_Balancer_Mapped_URL_for_worker}/services/{Sample_Proxy_Name}
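For example, using the worker host name configured earlier and the hypothetical proxy name from the sketch above, you could send a request with curl:

Code Block
languagenone
curl -v http://esb.cloud-test.wso2.com/services/ClusterTestProxy

If you did not configure the ELB to listen on the default ports, send the request to port 8280 instead (http://esb.cloud-test.wso2.com:8280/services/ClusterTestProxy).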

Additional configuration

...

If you need to provide access to the management node from outside your network so external clients can upload applications and perform other management tasks, you configure the mgt sub-domain in loadbalancer.conf and map the host to the IP address of the ELB. For example, you would add the mgt sub-domain to loadbalancer.conf as follows:

Code Block
languagehtml/xml
titleloadbalancer.conf
esb {
  domains{
     wso2.esb.domain {
        tenant_range *;
        group_mgt_port 5000;
        mgt {
                hosts management.esb.cloud-test.wso2.com;
        }
        worker {
               hosts esb.cloud-test.wso2.com;
        }
     }
   }
}

You would then add the management.esb.cloud-test.wso2.com host entry to the /etc/hosts file as follows:

...

languagenone
title/etc/hosts file

...

the database

Each Carbon-based product uses a database to store information such as user management details and registry data. Set up the databases that will store this information and be shared by all the nodes in your cluster.

The next step is to configure the manager node.

Configuring the manager node 

In this section, we will configure data sources to allow the manager node to point to the central database, enable the manager node for clustering, change the port offset, and map the host names to IPs.

Configuring the data source

You configure datasources to allow the manager node to point to the central database. Make sure that you copy the database driver JAR to the manager node and follow the steps described in Setting up the Database.

Setting up cluster configurations for the manager node

Configuring clustering for the manager node is similar to the way you configured it for the ELB node, but the localMemberPort is 4001 instead of 4000, and you define the ELB node instead of the ESB manager node as the well-known member.

  1. Open the <ESB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
    1. Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    3. Specify the name of the cluster this node will join (this is the domain defined in the loadbalancer.conf file on the ELB):
      <parameter name="domain">wso2.esb.domain</parameter>
    4. Specify the host used to communicate cluster messages:
      <parameter name="localMemberHost">xxx.xxx.xxx.206</parameter>
    5. Specify the port used to communicate cluster messages: 
      <parameter name="localMemberPort">4001</parameter>

      Note

      Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.


    6. The receiver's http/https port values are specified without the portOffset addition; they get auto-incremented by the portOffset. The 'WSDLEPRPrefix' parameter should point to the worker node's host name (esb.cloud-test.wso2.com) and the ELB's http (8280)/https (8243) transport ports.

    7. Set the value of subDomain to mgt to specify that this is the manager node; this ensures that traffic for the manager node is routed to this member.
      <property name="subDomain" value="mgt"/>

    8. Edit the <members> element so that it looks as follows:

      Code Block
      languagehtml/xml
      <members>
      	<member>
              <hostName>xxx.xxx.xxx.206</hostName>
              <port>4500</port>
          </member>
      </members>
      Info

      The IP address mentioned in the hostName represents the IP of the ELB.


  3. Locate the port mapping section and configure the properties as follows:

    <property name="port.mapping.80" value="9764"/>
    <property name="port.mapping.443" value="9444"/>

    Note

    This configuration will change as follows if you did not configure the ELB to listen on default ports:

    Code Block
    languagehtml/xml
    <property name="port.mapping.8280" value="9764"/>
    <property name="port.mapping.8243" value="9444"/>
    Info

    These values should be incremented based on the port offset. In this example, they are incremented by 1 because the port offset for the manager node is 1.

    In a dynamically clustered setup where a WSO2 Carbon instance is fronted by a WSO2 ELB, it is the Carbon server's responsibility to send its information to the ELB; you can visualize this as a member object being passed from the Carbon server instance to the ELB. In the Carbon server's clustering section, under properties, you can define any member property, which lets the ELB know more than the basic information (host name, HTTP port, HTTPS port, and so on).

    Products such as WSO2 ESB and WSO2 API Manager are somewhat special with regard to ports because they usually have two HTTP ports (compared to one HTTP port for products like WSO2 AS), so this additional information has to be sent to the ELB. The easiest way to do so is by setting a member property; here, we use the port.mapping property. To front these servers, the ELB also needs two HTTP ports that are exposed to the outside, and you must decide which HTTP port of the ELB maps to which HTTP port of the server (the servlet HTTP port or the NHTTP HTTP port). With that in mind, consider only the HTTP scenario. Say your ESB instance uses 8280 as the NHTTP transport port (axis2.xml) and 9763 as the servlet transport port (catalina-server.xml), and the ELB has two HTTP ports, 8280 and 8290. The member's HTTP port would be 8280 (the port defined in axis2.xml is usually the one sent), but because the ELB has two ports, it cannot map the ports correctly if only the member's HTTP port is specified. This is where the port.mapping property becomes important; think of it from the ELB's perspective.

    With the property defined above, a request that arrives on the ELB's 8290 port is forwarded to port 9764 of the member. Having only this property is enough; we do not need the following property:

    Code Block
    languagehtml/xml
    <property name="port.mapping.8280" value="8280"></property>

    This works because port.mapping properties take precedence over the default ports. When a request comes in, the ELB first checks whether the port on which it received the request is specified in a port.mapping property. If it is, the ELB takes the target port from that property; if not, it sends the request to the member's default HTTP port. Hence, a request received on the ELB's 8280 port is automatically redirected to port 8280 of the member (since that is the member's HTTP port).

    Similarly, we should define a mapping for the HTTPS servlet port (8243). A consolidated sketch of these member properties appears after this list.

  4. Allow access to the management console only through the load balancer. Configure the HTTP/HTTPS proxy ports to communicate through the load balancer by editing the <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file as follows.

    Code Block
    languagexml
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
    	port="9763"
    	proxyPort="80"
    	...
    	/>
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
    	port="9443"
    	proxyPort="443"
    	...
    	/>
    Expand
    titleClick here for more information on this configuration.

    The Connector protocol tag sets the protocol to handle incoming traffic. The default value is HTTP/1.1, which uses an auto-switching mechanism to select either a blocking Java-based connector or an APR/native connector. If the PATH (Windows) or LD_LIBRARY_PATH (on most UNIX systems) environment variables contain the Tomcat native library, the APR/native connector will be used. If the native library cannot be found, the blocking Java-based connector will be used. Note that the APR/native connector has different settings from the Java connectors for HTTPS.

    The non-blocking Java connector used is an explicit protocol that does not rely on the auto-switching mechanism described above. The following is the value used:
    org.apache.coyote.http11.Http11NioProtocol

    The TCP port number is the value that this Connector will use to create a server socket and await incoming connections. Your operating system will allow only one server application to listen to a particular port number on a particular IP address. If the special value of 0 (zero) is used, Tomcat will select a free port at random to use for this connector. This is typically only useful in embedded and testing applications.

Configuring the port offset and host name

Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts. Additionally, we will add the cluster host name so that any requests sent to the manager host are redirected to the cluster, where the ELB will pick them up and manage them.

  1. Open <ESB_MANAGER_HOME>/repository/conf/carbon.xml.
  2. Locate the <Ports> tag and change the value of its sub-tag to: 
    <Offset>1</Offset>
  3. Locate the <HostName> tag and add the cluster host name: 
    <HostName>esb.wso2.com</HostName>
  4. Locate the <MgtHostName> tag and uncomment it. Make sure that the management host name is defined as follows:
    <MgtHostName>mgt.esb.wso2.com</MgtHostName> 

In the next section, we will map the host names we specified to real IPs.

Mapping host names to IPs

In the manager node we have specified two host names: carbondb.mysql-wso2.com for the MySQL server and esb.cloud-test.wso2.com for the cluster. We will now map them to the actual IPs. Note that if you created the database on the same server as the manager node, you will have already added the first line, and if you created it on the same server as the ELB, you will have already added the second line.

Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP> and <ELB-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):

Code Block
languagenone
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP> esb.wso2.com

In this example, it would look like this:

Code Block
languagenone
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.206 esb.wso2.com

We have now finished configuring the manager node and are ready to start the ESB server.

Starting the ESB server

Start the ESB server by typing the following command in the terminal:

Code Block
sh <ESB_MANAGER_HOME>/bin/wso2server.sh -Dsetup 

The additional -Dsetup argument will clean the configurations, recreate the central database, and create the required tables in the database.

The ESB should print logs to the server console indicating that the cluster initialization is complete.

We have now finished configuring the manager node. Next, we will configure the ESB worker nodes.

Configuring the worker nodes

You configure worker nodes in very much the same way as you configured the manager node. Be sure to follow these steps for each worker node in the cluster. 

Configuring the data source

You configure the data source to connect to the central database. If there are multiple data sources, configure them to reference the central database as well. Since the data source changes are the same for all worker nodes, you can configure this file on one node and then copy it to the other worker nodes. Make sure you copy the database driver JAR to each worker node and follow the steps described in Setting up the central database.

After you have finished configuring the data source, be sure to copy this configuration to the other worker nodes in the cluster.

Setting up cluster configurations for the worker nodes

Configuring clustering for the worker nodes is similar to the way you configured it for the manager node, but the localMemberPort will vary for each worker node, you add the subDomain property, and you add the ELB and ESB manager node to the well-known members, as described in the following steps.

  1. Open the <ESB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
    • Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    • Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    • Specify the name of the cluster this node will join: 
      <parameter name="domain">wso2.esb.domain</parameter>
    • Specify the host used to communicate cluster messages:
      <parameter name="localMemberHost">xxx.xxx.xxx.206</parameter>
    • Specify the port used to communicate cluster messages (if this node is on the same server as the ELB, manager node, or another worker node, be sure to set this to a unique value, such as 4000 and 4001 for worker nodes 1 and 3, and 4002 for worker node 2, which is on the same server as the ELB and manager node): 
      <parameter name="localMemberPort">4002</parameter>
      Note: This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
    • The receiver's http/https port values are specified without the portOffset addition; they get auto-incremented by the portOffset. The 'WSDLEPRPrefix' parameter should point to the worker node's host name (esb.cloud-test.wso2.com) and the ELB's http (8280)/https (8243) transport ports.

    • Add a new property "subDomain" and set it to "worker" to denote that this node belongs to the worker sub-domain of the cluster, as defined in loadbalancer.conf.

      Code Block
      languagehtml/xml
      <parameter name="properties">
                  <property name="subDomain" value="worker"/>
      </parameter>
    • Define the ELB and manager nodes as well-known members of the cluster by providing their host name and localMemberPort values. The manager node is defined here because it is required for the Deployment Synchronizer to function.

      Code Block
      languagehtml/xml
      <members>
          <member>
              <hostName>xxx.xxx.xxx.206</hostName>
              <port>4500</port>
          </member>
          <member>
              <hostName>xxx.xxx.xxx.206</hostName>
              <port>4001</port>
          </member>
      </members>
      Info

      The member on port 4500 is the ELB, and the member on port 4001 is the manager node. 4500 is the value of the group_mgt_port specified in the ELB's loadbalancer.conf file, and 4001 is the localMemberPort specified in the manager node configuration.

Configuring the port offset and host name

Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts.

  1. Open <ESB_WORKER_HOME>/repository/conf/carbon.xml on each worker node.
  2. Locate the <HostName> tag and add the cluster host name: 
    <HostName>esb.wso2.com</HostName>
  3. Locate the <Ports> tag and change the value of its sub-tag as follows on each worker node:
    • Worker1: <Offset>2</Offset> - Set the offset to 2, because there are already two more Carbon products (ELB and ESB manager node) running on this (xxx.xxx.xxx.206) server.

    • Worker2: <Offset>0</Offset> - No changes needed, because this will be the first node on this (xxx.xxx.xxx.132) server.

    • Worker3: <Offset>1</Offset> - Set the offset to 1, because Worker2 occupies the default ports on this (xxx.xxx.xxx.132) server.

In the next section, we will map the host names we specified to real IPs.

Mapping host names to IPs

In the worker nodes, we have used three host names: carbondb.mysql-wso2.com for the MySQL server, elb.wso2.com for the ELB, and mgt.esb.wso2.com for the ESB manager node. We will now map them to the actual IPs.

Open the server's /etc/hosts file and add the following lines, where the actual IP addresses are used for each host name (in this example, all three map to xxx.xxx.xxx.206):

Code Block
languagenone
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.206 esb.wso2.com
xxx.xxx.xxx.206 mgt.esb.wso2.com 

We have now finished configuring the worker nodes and are ready to start them.

Info

If you want to remove all UI components from the worker nodes, you need to run the ant createWorker task before you start the worker nodes. Note that this will remove the management console capability from the worker nodes.

Starting the ESB server

Tip

Tip: It is recommended to delete the <PRODUCT_HOME>/repository/deployment/server directory and create an empty server directory in the worker node. This is done to avoid any SVN conflicts that may arise. Note that when you do this, you may have to restart the worker node after you start it in order to avoid an error.
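For example, on Linux you could recreate the empty directory as follows, where <ESB_WORKER_HOME> is the worker node's product directory:

Code Block
languagenone
rm -rf <ESB_WORKER_HOME>/repository/deployment/server
mkdir <ESB_WORKER_HOME>/repository/deployment/server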

Start the ESB server by typing the following command in the terminal:

Code Block
sh <ESB_WORKER_HOME>/bin/wso2server.sh -DworkerNode=true 

The additional -DworkerNode=true argument indicates that this is a worker node.

When Worker1 starts, it should display logs in the console indicating that the cluster initialization is complete.

The ELB console should have these new messages:

Code Block
languagenone
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined group wso2.esb.domain
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.132, Port: 4000, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:worker, Active:true joined application cluster

The manager node console should have these new messages:

Code Block
languagenone
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - RpcInitializationRequestHandler Received GetConfigurationCommand initialization request message from xxx.xxx.xxx.132:4000(wso2.esb.domain)

If you have similar messages in your consoles, you have finished configuring the worker nodes and the cluster is running. When you terminate one node, all nodes identify that the node has left the cluster. The same applies when a new node joins the cluster. If you want to add another new worker node, you can simply copy worker1 without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to use the new node on a server where another WSO2 product is running, you can use a copy of worker1 and change the port offset accordingly in the carbon.xml file. You may also have to change localMemberPort in axis2.xml if that product has clustering enabled. Be sure to map all host names to the relevant IP addresses when creating a new node.

Access the management console through the LB using the following URL: https://{manager_node_IP}:443/carbon