A WSO2 ESB cluster should contain two or more ESB instances that are configured to run within the same domain. To make an instance a member of the cluster, you must configure it to use one of the available membership schemes:
- Well Known Address (WKA) membership scheme
- Multicast membership scheme
In this example, we will be using the WKA membership scheme, and the ELB will act as the Well Known Member in the cluster. It will accept all the service requests on behalf of the ESBs and divide the load among worker nodes in the ESB cluster.
Installing the products
Before you begin, download and extract WSO2 ESB and WSO2 ELB to a local directory on the server. For this example, we have extracted one copy of the ELB and two copies of the ESB on the server with IP xxx.xxx.xxx.206 (the x's represent your actual IP prefix), and we extracted two copies of the ESB on the server with the IP xxx.xxx.xxx.132:
Server xxx.xxx.xxx.206:
- 1 ELB instance (Well Known Member)
- 1 ESB instance (worker node)
- 1 ESB instance (Dep-sync management / manager node )
Server xxx.xxx.xxx.132:
- 2 ESB instances (worker nodes)
Configuring the load balancer
You configure the ELB with the overall definition of the cluster and how it should distribute the load. You can achieve this by adding a few lines to a configuration file called loadbalancer.conf
. You specify the detailed clustering configurations in the axis2.xml
file. This section describes how to perform these steps.
Setting up load-balancing configurations
- Open the <ELB_HOME>/repository/conf/loadbalancer.conf file, locate the ESB configuration, and edit it as follows:
Code Block language html/xml
esb {
    domains {
        wso2.esb.domain {
            hosts esb.cloud-test.wso2.com;
            sub_domain worker;
            tenant_range *;
        }
    }
}
In this file, we specified the domain name (wso2.esb.domain
), which is used to identify the cluster. On startup, a node with this domain name will look for a cluster with this same domain name.
The ELB will divide the load among the sub-domains. With this sub-domain concept, we can virtually separate the cluster, according to the task that each collection of nodes intends to perform. We defined a sub-domain called worker
.
In the previous diagram, you can see that all the service requests need to be routed to the worker nodes through the ELB, which is the front end to the entire cluster. We used the hosts
attribute to configure the publicly accessible host name (esb.cloud-test.wso2.com
), which clients can use to send their requests to the cluster. We will map the host name to the ELB server IP address later.
Finally, the tenant_range
attribute is used to handle tenant-aware load-balancing, which is another very powerful feature of the ELB. This attribute allows us to partition tenants into several clusters, so that when there is a large number of tenants to work with, we can instruct each cluster to work only with a particular tenant or a few selected tenants. This approach is also useful if you need a particular cluster to be dedicated to work for a single special tenant ("Private Jet Mode"). In this example, we are not enabling tenant partitioning, so we have used an asterisk ( * ) in front of the tenant_range
attribute to represent all possible tenants.
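For contrast, if we did want tenant partitioning, loadbalancer.conf could carry one cluster entry per tenant range. The following is only an illustrative sketch; the domain names, host names, and ranges here are invented for the example and are not part of this setup:

Code Block language html/xml
esb {
    domains {
        wso2.esb.domain1 {
            hosts esb1.cloud-test.wso2.com;
            sub_domain worker;
            tenant_range 1-100;
        }
        wso2.esb.domain2 {
            hosts esb2.cloud-test.wso2.com;
            sub_domain worker;
            tenant_range 101-200;
        }
    }
}

With a layout like this, requests for tenants 1-100 would be routed to the first cluster and tenants 101-200 to the second.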
In summary, we have configured the load balancer to handle requests sent to esb.cloud-test.wso2.com
and to distribute the load among the worker nodes in the worker
sub-domain of the wso2.esb.domain
cluster. We are now ready to set up the cluster configurations.
Setting up cluster configurations on the ELB
Previously, we referenced several properties of the cluster, such as the domain name and sub-domain, but we did not define the cluster itself there. We now define these properties as we build the cluster configuration in axis2.xml.
- Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the Clustering section and configure the properties as follows:
- Enable clustering for this node:
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later):
<parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join:
<parameter name="domain">wso2.esb.domain</parameter>
- Specify the port used to communicate cluster messages:
<parameter name="localMemberPort">4000</parameter>
Note: This port number is not affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework automatically increments it. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
- Define the ESB manager node as a well-known member of the cluster by providing its host name and its localMemberPort (4001, which we will configure on the manager node later):
Code Block language html/xml
<members>
    <member>
        <hostName>mgr.esb.wso2.com</hostName>
        <port>4001</port>
    </member>
</members>
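Taken together, the clustering section of the ELB's axis2.xml now contains the following (only the values discussed above are shown; the file's other clustering parameters keep their defaults):

Code Block language html/xml
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.esb.domain</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <members>
        <member>
            <hostName>mgr.esb.wso2.com</hostName>
            <port>4001</port>
        </member>
    </members>
</clustering>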
We have now completed the clustering-related configuration for the ELB. In the next section, we will make one last change to the ELB that will increase usability.
Configuring the ELB to listen on default ports
We will now change the ELB configuration to listen to the default HTTP and HTTPS ports.
- Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the Transport Receiver section and configure the properties as follows:
- In the <transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener"> transport, enable service requests to be sent to the ELB's default HTTP port instead of having to specify port 8280:
<parameter name="port">80</parameter>
- In the <transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener"> transport, enable service requests to be sent to the ELB's default HTTPS port instead of having to specify port 8243:
<parameter name="port">443</parameter>
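In context, each of these port parameters sits inside its transportReceiver element. A trimmed sketch of the HTTP receiver (the element's other parameters are unchanged and omitted here):

Code Block language html/xml
<transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
    <parameter name="port">80</parameter>
    <!-- other parameters unchanged -->
</transportReceiver>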
In the next section, we will map the host names we specified to real IPs.
Mapping host names to IPs
In the ELB, we have specified two host names: esb.cloud-test.wso2.com for the worker hosts and mgr.esb.wso2.com for the manager node. We will now map them to IPs in case there is no DNS to map them.
Open the server's /etc/hosts file and add the following lines, where <ELB-IP> and <ESB-Manager-IP> are the actual IP addresses (in this example, both are xxx.xxx.xxx.206):
Code Block
<ELB-IP> esb.cloud-test.wso2.com
<ESB-Manager-IP> mgr.esb.wso2.com
We have now finished configuring the ELB and are ready to start the ELB server.
Starting the ELB server
Start the ELB server by typing the following command in the terminal:
sudo -E sh <ELB_HOME>/bin/wso2server.sh
Info: If you skipped the step of configuring the ELB to listen on the default ports, you do not need to use the sudo command when starting the server, because the higher default ports do not require root privileges.
The ELB should print logs to the server console similar to the following:
Code Block
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesUtil No members in current cluster
INFO - TribesClusteringAgent Cluster initialization completed.
You are ready to configure the ESB manager node, enable clustering on the ESB worker nodes, and configure them to recognize the Well Known Member (the ELB) in the cluster.
3.2. Set up Central Database
...
Before we go on to configure the ESB nodes, we have to set up a central database. Each Carbon-based product uses a database to store user management details, registry data, etc. All nodes in the cluster need to use one central database.
...
Step 1: Download and install MySQL server.
...
Step 2: Download the MySQL JDBC driver.
...
Step 3: We need a host name when configuring permissions for the new database, so open the /etc/hosts file and add the following line:
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
...
Step 4: Now we need to create a new database. Open a terminal/command window and log in to MySQL with the following command:
mysql -u username -p
...
When prompted, specify the password. Then create a database with the following command:
mysql> create database carbondb;
...
Grant permission to access the created database with:
mysql> grant all on carbondb.* TO username@carbondb.mysql-wso2.com identified by "password";
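To confirm that the grant took effect, you can list the user's privileges (standard MySQL syntax; the user and host are the example values above):
mysql> show grants for username@carbondb.mysql-wso2.com;
The output should include the ALL PRIVILEGES grant on carbondb.*.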
...
Step 5: Unzip the downloaded MySQL driver archive and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <CARBON_HOME>/repository/components/lib directory on all worker and manager nodes.
...
In summary, we now have a central carbondb database on the carbondb.mysql-wso2.com host, accessible to the user username with the password password.
...
3.3 Configure the Manager Node
...
3.3.1 Configure Data Sources
...
We have to point the manager node to the central database we created in the previous section.
...
Step 1: Copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <ESB_MANAGER_HOME>/repository/components/lib directory, as mentioned in Step 5 of the previous section.
...
Step 2: Open the master-datasources.xml file located in the <ESB_MANAGER_HOME>/repository/conf/datasources/ directory, locate the “WSO2_CARBON_DB” data source configuration, and change it as follows.
...
<url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?autoReconnect=true</url>
Defines the location of our central database.
<username>username</username>
The username used to access the database.
<password>password</password>
The password for the above user.
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
The driver used to connect to the central database. Since we already copied the MySQL JDBC driver into components/lib, we can use it here.
...
The other configurations do not need any changes, so the final outcome looks like this:
...
<datasource>
<name>WSO2_CARBON_DB</name>
<description>The datasource used for registry and user manager</description>
<jndiConfig>
<name>jdbc/WSO2CarbonDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb?autoReconnect=true</url>
<username>username</username>
<password>password</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
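Before starting the manager node, it is worth confirming that the database is reachable with the same credentials the data source uses. A quick check from the manager host (this assumes the mysql client is installed there; host, database, and username are the example values from this guide):
mysql -h carbondb.mysql-wso2.com -u username -p carbondb -e "select 1;"
If this prints a result row instead of an access error, the data source settings should work.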
...
Please note that most of our products use only one data source. If there is more than one data source, they should also refer to the central databases accordingly. For example, an API Manager deployment has more specific data source configurations, so it is described in a separate section below.
...
We have now finished configuring the data sources for the ESB manager node.
...
3.3.2 Enable Clustering for the Manager Node
...
We already have an idea of how clustering is enabled, so let's configure it directly.
...
Step 1: Open the axis2.xml file in the <ESB_MANAGER_HOME>/repository/conf/axis2/ directory.
...
Step 2: Locate the “Clustering” section and set the clustering configuration as follows.
...
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
...
<parameter name="membershipScheme">wka</parameter>
This node will send cluster initiation messages to the well-known (WKA) members, which we define below.
...
<parameter name="domain">wso2.esb.domain</parameter>
Defines the name of the cluster that this node is going to join.
...
<parameter name="localMemberPort">4001</parameter>
This port is used to communicate cluster messages. Note that this port number is not affected by the port offset in carbon.xml. Here we set the port to 4001, since we used 4000 for the ELB on this machine (xxx.xxx.xxx.206).
...
Define the well-known members for the cluster as follows:
<members>
<member>
<hostName>elb.wso2.com</hostName>
<port>4000</port>
</member>
</members>
Here we define the ELB as the well-known member for the manager node by giving the ELB's hostName and localMemberPort.
...
3.3.3 Change carbon.xml
...
Since we are running two Carbon-based products on the same machine, we have to change the port offset to avoid conflicts between the ports they use.
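The offset is a single number added to every default listener port. A small illustration (the port list here is a set of common Carbon defaults; the offset itself is what you set in carbon.xml):

```shell
# With a port offset of 1, each default Carbon port shifts up by one.
offset=1
for port in 9443 9763 8280 8243; do
  echo "${port} -> $((port + offset))"
done
```

So with offset 1, the manager's HTTPS port moves from 9443 to 9444 and no longer collides with another Carbon instance that is still using the defaults.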
...
Step 1: Open carbon.xml, which is in the <ESB_MANAGER_HOME>/repository/conf/ directory.
Step 2: Locate the <Offset> element under <Ports> and increase its value (for example, to 1) so that this node's ports do not conflict with those of the ELB running on the same machine.

This page describes how to create a cluster by walking through the example in the Overview. Although this example uses WSO2 ESB, these steps apply to other WSO2 products as well. Note that this page describes using WSO2 Elastic Load Balancer (ELB), but you can use a third-party load balancer in its place (for configuration details, see your load balancer's documentation).
Info: These instructions apply to Elastic Load Balancer 2.0.3. The following are the WSO2 products to which this guide is applicable:
For details on further configuration required for the WSO2 product you are clustering, see the links in the table of contents.
Table of Contents
Installing the products
Before you begin, download and extract WSO2 ESB 4.7.0 and WSO2 ELB 2.0.3 to a local directory on the server. For this example, we have extracted one copy of the ELB and two copies of the ESB on the server with IP xxx.xxx.xxx.206 (the x's represent your actual IP prefix), and we extracted two copies of the ESB on the server with the IP xxx.xxx.xxx.132:
Server xxx.xxx.xxx.206:
- 1 ELB instance (Well Known Member)
- 1 ESB instance (worker node)
- 1 ESB instance (Dep-sync management / manager node )
Server xxx.xxx.xxx.132:
- 2 ESB instances (worker nodes)
Configuring the load balancer
You configure the ELB with the overall definition of the cluster and how it should distribute the load. You can achieve this by adding a few lines to a configuration file called loadbalancer.conf
. You specify the detailed clustering configurations in the axis2.xml
file. This section describes how to perform these steps.
Setting up load-balancing configurations
- Open the <ELB_HOME>/repository/conf/loadbalancer.conf file, locate the ESB configuration, and edit it as follows:
Code Block language html/xml
esb {
    domains {
        wso2.esb.domain {
            hosts mgt.esb.wso2.com;
            sub_domain mgt;
            tenant_range *;
        }
        wso2.esb.domain {
            hosts worker.esb.wso2.com;
            sub_domain worker;
            tenant_range *;
        }
    }
}
In this file, we specified the domain name (wso2.esb.domain
), which is used to identify the cluster. On startup, a node with this domain name will look for a cluster with this same domain name.
The ELB will divide the load among the sub-domains. With this sub-domain concept, we can virtually separate the cluster according to the task that each collection of nodes intends to perform. We defined two sub-domains: mgt for the manager node and worker for the worker nodes.
All the service requests need to be routed to the worker nodes through the ELB, which is the front end to the entire cluster. We used the hosts
attribute to configure the publicly accessible host name (worker.esb.wso2.com
), which clients can use to send their requests to the cluster. We will map the host name to the ELB server IP address later.
Finally, the tenant_range
attribute is used to handle tenant-aware load-balancing, which is another very powerful feature of the ELB. This attribute allows us to partition tenants into several clusters, so that when there is a large number of tenants to work with, we can instruct each cluster to work only with a particular tenant or a few selected tenants. This approach is also useful if you need a particular cluster to be dedicated to work for a single special tenant ("Private Jet Mode"). In this example, we are not enabling tenant partitioning, so we have used an asterisk ( * ) in front of the tenant_range
attribute to represent all possible tenants.
In summary, we have configured the load balancer to handle requests sent to worker.esb.wso2.com
and to distribute the load among the worker nodes in the worker
sub-domain of the wso2.esb.domain
cluster. We are now ready to set up the cluster configurations.
Setting up cluster configurations on the ELB
Previously, we referenced several properties of the cluster, such as the domain name and sub-domains, but we did not define the cluster itself there. We now define these properties as we build the cluster configuration in axis2.xml.
- Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the Clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
- Enable clustering for this node:
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later):
<parameter name="membershipScheme">wka</parameter>
- Specify a domain name for the ELB node (note that this domain is for potentially creating a cluster of ELB nodes; it is not the cluster of ESB nodes that the ELB will load balance):
<parameter name="domain">wso2.carbon.lb.domain</parameter>
- Specify the port used to communicate with this ELB node:
<parameter name="localMemberPort">4000</parameter>
Note: This port number is not affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework automatically increments it. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
We have now completed the clustering-related configuration for the ELB. In the next section, we will make one last change to the ELB that will increase usability.
Configuring the ELB to listen on default ports
We will now change the ELB configuration to listen to the default HTTP and HTTPS ports.
- Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the Transport Receiver section and configure the properties as follows:
- In the <transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener"> transport, enable service requests to be sent to the ELB's default HTTP port instead of having to specify port 8280:
<parameter name="port">80</parameter>
- In the <transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener"> transport, enable service requests to be sent to the ELB's default HTTPS port instead of having to specify port 8243:
<parameter name="port">443</parameter>
Leave these port values as 8280 and 8243 if you are a Linux user without root privileges, because binding to ports below 1024 requires root privileges.
In the next section, we will map the host names we specified to real IPs.
Mapping the host name to the IP
In the ELB, we configured host names in loadbalancer.conf to front the manager and worker service requests. We must now map these host names to the actual IP address. Open the server's /etc/hosts file and add the following lines, where <ELB-IP> is the actual IP address:
Code Block
<ELB-IP> worker.esb.wso2.com
<ELB-IP> mgt.esb.wso2.com
In this example, it would look like this:
Code Block
xxx.xxx.xxx.206 worker.esb.wso2.com
xxx.xxx.xxx.206 mgt.esb.wso2.com
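Because both names must point at the same ELB address, a quick self-check helps catch typos. This sketch stages the two lines in a scratch file rather than touching the real /etc/hosts; 203.0.113.206 is a stand-in for your actual ELB IP:

```shell
# Stage the mappings in a temp file and verify both names use one IP.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
203.0.113.206 worker.esb.wso2.com
203.0.113.206 mgt.esb.wso2.com
EOF
# Count the distinct IPs used for the ESB host names; it should be 1.
grep 'esb\.wso2\.com' "$tmp" | awk '{print $1}' | sort -u | wc -l
rm -f "$tmp"
```

Once the real /etc/hosts is edited, the same grep/awk pipeline can be pointed at it to confirm the entries.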
We have now finished configuring the ELB and are ready to start the ELB server.
Starting the ELB server
Start the ELB server by typing the following command in the terminal:
sudo -E sh <ELB_HOME>/bin/wso2server.sh
Info: If you skipped the step of configuring the ELB to listen on the default ports, you do not need to use the sudo command when starting the server, because the higher default ports do not require root privileges.
The ELB should print logs to the server console similar to the following:
Code Block
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.carbon.lb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4000
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.206:4000(wso2.carbon.lb.domain)
INFO - TribesUtil No members in current cluster
INFO - TribesClusteringAgent Cluster initialization completed.
Now that the ELB is configured and running, you create a central database for all the nodes to use.
Setting up the central database
Each Carbon-based product uses a database to store information such as user management details and registry data. All nodes in the cluster must use one central database for config and governance registry mounts. These instructions assume you are installing MySQL as your relational database management system (RDBMS), but you can install another supported RDBMS as needed. You can create the following databases and associated data sources:
Database Name | Description |
---|---|
WSO2_USER_DB | JDBC user store and authorization manager |
REGISTRY_DB | Shared database for config and governance registry mounts in the product's nodes |
REGISTRY_LOCAL1 | Local registry space in the manager node |
REGISTRY_LOCAL2 | Local registry space in the worker node |
The following diagram illustrates how these databases are connected to the manager and worker nodes.
- Download and install MySQL Server.
- Download the MySQL JDBC driver.
- Unzip the downloaded MySQL driver archive, and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <PRODUCT_HOME>/repository/components/lib directory of both the manager and worker nodes.
- Define the host name for configuring permissions for the new database by opening the /etc/hosts file and adding the following line:
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
Info: Do this step only if your database is on a separate server rather than on your local machine.
- Enter the following command in a terminal/command window, where username is the username you want to use to access the databases:
mysql -u username -p
- When prompted, specify the password that will be used to access the databases with the username you specified.
- Create the databases using the following commands, where <PRODUCT_HOME> is the path to any of the product instances you installed (the grants here use the example user regadmin with password regadmin):
Code Block language none
mysql> create database WSO2_USER_DB;
mysql> use WSO2_USER_DB;
mysql> source <PRODUCT_HOME>/dbscripts/mysql.sql;
mysql> grant all on WSO2_USER_DB.* TO regadmin@"carbondb.mysql-wso2.com" identified by "regadmin";
mysql> create database REGISTRY_DB;
mysql> use REGISTRY_DB;
mysql> source <PRODUCT_HOME>/dbscripts/mysql.sql;
mysql> grant all on REGISTRY_DB.* TO regadmin@"carbondb.mysql-wso2.com" identified by "regadmin";
mysql> create database REGISTRY_LOCAL1;
mysql> use REGISTRY_LOCAL1;
mysql> source <PRODUCT_HOME>/dbscripts/mysql.sql;
mysql> grant all on REGISTRY_LOCAL1.* TO regadmin@"carbondb.mysql-wso2.com" identified by "regadmin";
mysql> create database REGISTRY_LOCAL2;
mysql> use REGISTRY_LOCAL2;
mysql> source <PRODUCT_HOME>/dbscripts/mysql.sql;
mysql> grant all on REGISTRY_LOCAL2.* TO regadmin@"carbondb.mysql-wso2.com" identified by "regadmin";
Configuring the database on the manager node
On the manager node, open <PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml and configure the data sources to point to the REGISTRY_LOCAL1, REGISTRY_DB, and WSO2_USER_DB databases as follows (change the username, password, and database URL as needed for your environment):
Code Block language html/xml
<datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">
    <providers>
        <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
    </providers>
    <datasources>
        <datasource>
            <name>REGISTRY_LOCAL1</name>
            <description>The datasource used for registry - local</description>
            <jndiConfig>
                <name>jdbc/WSO2CarbonDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_LOCAL1?autoReconnect=true</url>
                    <username>regadmin</username>
                    <password>regadmin</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>REGISTRY_DB</name>
            <description>The datasource used for registry - config/governance</description>
            <jndiConfig>
                <name>jdbc/WSO2RegistryDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true</url>
                    <username>regadmin</username>
                    <password>regadmin</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_USER_DB</name>
            <description>The datasource used for registry and user manager</description>
            <jndiConfig>
                <name>jdbc/WSO2UMDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/WSO2_USER_DB</url>
                    <username>regadmin</username>
                    <password>regadmin</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>
    </datasources>
</datasources-configuration>
Info: Make sure to replace the username and password (regadmin/regadmin in this example) with your MySQL database username and password.
To configure the datasource, update the dataSource property found in <PRODUCT_HOME>/repository/conf/user-mgt.xml of the manager node as shown below:
Code Block language html/xml
<Property name="dataSource">jdbc/WSO2UMDB</Property>
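For orientation, that property lives inside the realm configuration of user-mgt.xml. A trimmed sketch of the surrounding structure (element names as in a typical Carbon 4.x user-mgt.xml; verify against your own file, and leave the rest of the file as shipped):

Code Block language html/xml
<UserManager>
    <Realm>
        <Configuration>
            ...
            <Property name="dataSource">jdbc/WSO2UMDB</Property>
        </Configuration>
        ...
    </Realm>
</UserManager>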
Configuring the database on the worker node
On the worker node, open <PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml and configure the data sources to point to the REGISTRY_LOCAL2, REGISTRY_DB, and WSO2_USER_DB databases as follows (change the username, password, and database URL as needed for your environment):
Code Block language html/xml
<datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">
    <providers>
        <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
    </providers>
    <datasources>
        <datasource>
            <name>REGISTRY_LOCAL2</name>
            <description>The datasource used for registry - local</description>
            <jndiConfig>
                <name>jdbc/WSO2CarbonDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_LOCAL2?autoReconnect=true</url>
                    <username>regadmin</username>
                    <password>regadmin</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>REGISTRY_DB</name>
            <description>The datasource used for registry - config/governance</description>
            <jndiConfig>
                <name>jdbc/WSO2RegistryDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true</url>
                    <username>regadmin</username>
                    <password>regadmin</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_USER_DB</name>
            <description>The datasource used for registry and user manager</description>
            <jndiConfig>
                <name>jdbc/WSO2UMDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/WSO2_USER_DB</url>
                    <username>regadmin</username>
                    <password>regadmin</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>
    </datasources>
</datasources-configuration>
Info: Make sure to replace the username and password (regadmin/regadmin in this example) with your MySQL database username and password.
Mounting the registry
Configure the shared registry database and mounting details in <PRODUCT_HOME>/repository/conf/registry.xml of the manager node as shown below:
Code Block
<dbConfig name="sharedregistry">
<dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
<id>instanceid</id>
<dbConfig>sharedregistry</dbConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
<cacheId>REGISTRY_DB@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_LOCAL1?autoReconnect=true</cacheId>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
<instanceId>instanceid</instanceId>
<targetPath>/_system/asNodes</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
<instanceId>instanceid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
Configure the shared registry database and mounting details in <PRODUCT_HOME>/repository/conf/registry.xml of the worker node as shown below:
Code Block
<dbConfig name="sharedregistry">
<dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
<id>instanceid</id>
<dbConfig>sharedregistry</dbConfig>
<readOnly>true</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
<cacheId>REGISTRY_DB@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_LOCAL2?autoReconnect=true</cacheId>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
<instanceId>instanceid</instanceId>
<targetPath>/_system/asNodes</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
<instanceId>instanceid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
The following are some key points to note when adding these configurations:
- The registry mount path is used to identify the type of registry. For example, /_system/config refers to the configuration registry, and /_system/governance refers to the governance registry.
- The dbConfig entry enables you to identify the datasource you configured in the master-datasources.xml file. We use the unique name sharedregistry to refer to that datasource entry.
- The remoteInstance section refers to an external registry mount. We can specify the read-only/read-write nature of this instance, as well as caching configurations and the registry root location. For a worker node, the readOnly property should be true; for a manager node, it should be false.
- You must define a unique name (id) for each remote instance, which is then referred to from the mount configurations. In the above example, the unique ID for the remote instance is instanceid.
- In each of the mount configurations, we specify the actual mount path and the target mount path. The targetPath can be any meaningful name. In this instance, it is /_system/asNodes.
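The mount configuration above behaves like a prefix lookup: a registry path under a mounted path is served by the remote instance, rebased onto the target path. The sketch below is a hypothetical illustration of that idea (not the Carbon registry implementation); the instance ID and paths are taken from the example configuration:

```python
# Simplified model of registry mounts: each mount rebases a local registry
# path onto a target path served by a remote instance.
mounts = [
    {"path": "/_system/config", "instance": "instanceid", "target": "/_system/asNodes"},
    {"path": "/_system/governance", "instance": "instanceid", "target": "/_system/governance"},
]

def resolve(path):
    """Return (remote instance id, rebased path) for a mounted path, or None if local."""
    for m in mounts:
        if path == m["path"] or path.startswith(m["path"] + "/"):
            # Rebase the request onto the mount's target path.
            return m["instance"], m["target"] + path[len(m["path"]):]
    return None  # not under any mount: served from the local registry
```

For example, a read of /_system/config/foo would go to the shared registry as /_system/asNodes/foo, while paths outside the mounts stay local.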
Now your database is set up. The next step is to configure the manager and worker nodes.
Configuring the manager node
In this section, we will configure data sources to allow the manager node to point to the central database, enable the manager node for clustering, change the port offset, and map the host names to IPs.
Configuring the data source
You configure datasources to allow the manager node to point to the central database. Make sure that you copy the database driver JAR to the manager node and follow the steps described in Setting up the central database.
Info: In most WSO2 products, only one data source is used. If there is more than one data source, make sure they all reference the central databases accordingly. For example, the API Manager deployment setup requires more specific data source configurations.
Setting up cluster configurations for the manager node
Configuring clustering for the manager node is similar to the way you configured it for the ELB node, but the localMemberPort
is 4001 instead of 4000, and you define the ELB node instead of the ESB manager node as the well-known member.
- Open the
<ESB_HOME>/repository/conf/axis2/axis2.xml
file. - Locate the Clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
- Enable clustering for this node:
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
- Set the membership scheme to
wka
to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later):<parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join (this is the domain defined in the loadbalancer.conf file on the ELB):
<parameter name="domain">wso2.esb.domain</parameter>
- Specify the port used to communicate cluster messages:
<parameter name="localMemberPort">4001</parameter>
Note: This port number is not affected by the port offset in carbon.xml. If this port is already assigned to another server, the clustering framework automatically increments it. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
- The receiver's http/https port values are specified without the portOffset addition; they are auto-incremented by portOffset. The WSDLEPRPrefix parameter should point to the worker node's host name (worker.esb.wso2.com) and the ELB's http (8280)/https (8243) transport ports.
- Set the value of subDomain to mgt to specify that this is the manager node, which ensures that traffic for the manager node is routed to this member:
<property name="subDomain" value="mgt"/>
- Edit the <members> element so that it looks as follows:
Code Block language html/xml
<members>
    <member>
        <hostName>elb.wso2.com</hostName>
        <port>4000</port>
    </member>
</members>
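The subDomain property set above is what lets the ELB tell management traffic apart from service traffic. The following is an assumed routing model, not the ELB's actual code; the worker host names are hypothetical placeholders:

```python
# Each member advertises a subDomain property; the ELB only considers
# members whose sub-domain matches the kind of traffic being routed.
members = [
    {"host": "mgr.esb.wso2.com", "sub_domain": "mgt"},         # manager node
    {"host": "worker1.esb.wso2.com", "sub_domain": "worker"},  # hypothetical worker hosts
    {"host": "worker2.esb.wso2.com", "sub_domain": "worker"},
]

def candidates(sub_domain):
    """Members eligible to receive traffic for the given sub-domain."""
    return [m["host"] for m in members if m["sub_domain"] == sub_domain]
```

With subDomain set to mgt, only the manager node is a candidate for management-console requests, while service requests are load-balanced across the worker sub-domain.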
Locate the port mapping section and configure the properties as follows:
<property name="port.mapping.80" value="9764"/>
<property name="port.mapping.443" value="9444"/>
Note: This configuration will change as follows if you did not configure the ELB to listen on the default ports:
Code Block language html/xml
<property name="port.mapping.8280" value="9764"/>
<property name="port.mapping.8243" value="9444"/>
Info: These values should be incremented based on the port offset value. In this example, they are incremented by 1, since the port offset for the manager node is 1.
In a dynamically clustered setup where a WSO2 Carbon instance is fronted by a WSO2 ELB, it is the responsibility of the Carbon server to send its information to the ELB. You can visualize this as a member object being passed from the Carbon server instance to the ELB. In the Carbon server's clustering section, under properties, you can define any member property, which lets the ELB know information beyond the basics. Typically, this basic information includes the host name, HTTP port, HTTPS port, and so on.
WSO2 ESB, WSO2 API Manager, and similar products are somewhat special with regard to ports, as they usually have two HTTP ports (compared to one HTTP port for products like WSO2 AS). Hence, we have to send this additional information to the ELB, and the easiest way to do so is by setting a member property; here, we use the port.mapping property. To front these servers, the ELB also needs two externally exposed HTTP ports, and there is a deployment decision to be made: which HTTP port of the ELB should map to which HTTP port of the server (the servlet HTTP port or the NHTTP HTTP port). With that in mind, let's consider only the HTTP scenario. Say your ESB instance uses 8280 as the NHTTP transport port (axis2.xml) and 9763 as the servlet transport port (catalina-server.xml), and the ELB has two HTTP ports, 8280 and 8290. The member's HTTP port would be 8280 (the port defined in axis2.xml usually ends up here), but since the ELB has two ports, there is no way to map ports correctly by specifying only the member's HTTP port. This is where the port.mapping property comes in; you have to think of this property from the perspective of the ELB.
With the above property defined, a request that arrives at the ELB on its 8290 port is forwarded to the 9764 port of the member. Having only this property is enough; we do not need the following property:
Code Block language html/xml
<property name="port.mapping.8280" value="8280"></property>
This is because port.mapping properties take precedence over the default ports. When a request comes in, the ELB first checks whether the port it received the request on is specified as a port.mapping property. If it is, the ELB takes the target port from that property; if not, it sends the request to the member's default http port. Hence, a request received on the ELB's 8280 port is automatically redirected to the member's 8280 port (its default HTTP port).
Similarly, we should define a mapping for the https servlet port (8243).
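The precedence rule described above can be sketched as follows. The function and variable names are illustrative, not ELB APIs, and the mapping values assume the manager node's port offset of 1 (9763 becomes 9764, 9443 becomes 9444):

```python
# Simplified model of the ELB's target-port selection: explicit port.mapping
# entries win over the member's default HTTP port.
def resolve_target_port(incoming_port, port_mappings, default_http_port):
    """Port on the member that the ELB forwards to, from the ELB's perspective."""
    # An explicit port.mapping entry for the incoming ELB port takes precedence...
    if incoming_port in port_mappings:
        return port_mappings[incoming_port]
    # ...otherwise the request goes to the member's default HTTP port.
    return default_http_port

# Manager node with port offset 1: servlet ports 9763/9443 become 9764/9444.
offset = 1
port_mappings = {80: 9763 + offset, 443: 9443 + offset}
```

So a request on the ELB's port 80 is forwarded to the member's 9764, while a request on 8280 (no mapping) falls through to the member's default HTTP port, 8280.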
Configuring the port offset and host name
Because we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts. Additionally, we will add the cluster host name so that any requests sent to the manager host are redirected to the cluster, where the ELB will pick them up and manage them.
- Open <ESB_MANAGER_HOME>/repository/conf/carbon.xml.
- Locate the <Ports> tag and change the value of its <Offset> sub-tag to:
<Offset>1</Offset>
- Locate the <HostName> tag and add the cluster host name:
<HostName>worker.esb.wso2.com</HostName>
In the next section, we will map the host names we specified to real IPs.
Mapping host names to IPs
In the manager node, we have used the following host names: carbondb.mysql-wso2.com for the MySQL server, worker.esb.wso2.com for the cluster, and elb.wso2.com for the ELB. We will now map them to the actual IPs. Note that if you created the database on the same server as the manager node, you will have already added the first line, and if the ELB runs on the same server, you will have already added the remaining lines.
Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP> and <ELB-IP> are the actual IP addresses (in this example, xxx.xxx.xxx.206):
Code Block
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP> worker.esb.wso2.com
<ELB-IP> elb.wso2.com
In this example, it would look like this:
Code Block
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.206 worker.esb.wso2.com
xxx.xxx.xxx.206 elb.wso2.com
We have now finished configuring the manager node and are ready to start the ESB server.
Starting the ESB server
Start the ESB server by typing the following command in the terminal:
sh <ESB_MANAGER_HOME>/bin/wso2server.sh -Dsetup
The additional -Dsetup argument cleans the configuration, recreates the database, and creates the required tables in it. We use it here because our central database is empty.
When the server starts, the ESB should print logs to the server console similar to the following:
Code Block
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4001
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.206:4001
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - TribesUtil Members of current cluster
INFO - RpcMembershipRequestHandler Received MEMBER_LIST message from xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesClusteringAgent Trying to send initialization request to xxx.xxx.xxx.206:4000(wso2.esb.domain)
The ELB console should show new messages indicating that the manager node joined the cluster:
Code Block
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.206:4001(wso2.esb.domain) joined group wso2.esb.domain
INFO - TribesMembershipListener New member xxx.xxx.xxx.206:4001(wso2.esb.domain) joined cluster.
INFO - RpcInitializationRequestHandler Received GetConfigurationCommand initialization request message from xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.206, Port: 4001, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:null, Active:true joined application cluster
We have now finished configuring the manager node. Next, we will configure the ESB worker nodes.
Configuring the worker nodes
You configure worker nodes in very much the same way as you configured the manager node. Be sure to follow these steps for each worker node in the cluster.
Configuring the data source
You configure the data source to connect to the central database. If there are multiple data sources, configure them to reference the central database as well. Since the data source changes are the same for all worker nodes, you can configure this file on one node and then copy it to the other worker nodes. Make sure you copy the database driver JAR to each worker node and follow the steps described in Setting up the central database.
After you have finished configuring the data source, be sure to copy this configuration to the other worker nodes in the cluster.
Setting up cluster configurations for the worker nodes
Configuring clustering for the worker nodes is similar to the way you configured it for the manager node, but the localMemberPort
will vary for each worker node, you add the subDomain property, and you add the ELB and ESB manager node to the well-known members, as described in the following steps.
- Open the
<ESB_HOME>/repository/conf/axis2/axis2.xml
file. - Locate the Clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
- Enable clustering for this node:
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
- Set the membership scheme to
wka
to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later):<parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join:
<parameter name="domain">wso2.esb.domain</parameter>
Specify the port used to communicate cluster messages (if this node is on the same server as the ELB, manager node, or another worker node, be sure to set this to a unique value, such as 4000 and 4001 for worker nodes 1 and 3 and 4002 for worker node 2, which is on the same server as the ELB and manager node):
<parameter name="localMemberPort">4000</parameter>
Note: This port number will not be affected by the port offset in
carbon.xml
. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.- Define the sub-domain as worker by adding the following property under the
<parameter name="properties">
element:<property name="subDomain" value="worker"/>
Define the ELB and manager nodes as well-known members of the cluster by providing their host name and
localMemberPort
values. The manager node is defined here because it is required for the Deployment Synchronizer to function.Code Block language html/xml <members> <member> <hostName>elb.wso2.com</hostName> <port>4000</port> </member> <member> <hostName>mgt.esb.wso2.com</hostName> <port>4001</port> </member> </members>
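The value of the WKA scheme shown above is that a joining node only needs to reach the well-known members, which share the member list they already hold. The following is a simplified model of that bootstrapping (assumed behavior, not the Tribes implementation), using the host names and ports from this example:

```python
# Well-known members from the <members> configuration above.
well_known_members = [("elb.wso2.com", 4000), ("mgt.esb.wso2.com", 4001)]

def discover(new_member, wka_member_lists):
    """Members the new node learns of after contacting the WKA members.

    wka_member_lists maps each WKA member to the members it already knows."""
    discovered = set()
    for wka in well_known_members:
        # A JOIN message goes to the WKA member; a MEMBER_LIST comes back.
        discovered |= wka_member_lists.get(wka, set())
        discovered.add(wka)
    return discovered | {new_member}

# Worker1 joins knowing only the WKA members, yet it still discovers the
# manager node that the ELB already knows about.
known = discover(
    ("xxx.xxx.xxx.132", 4000),
    {("elb.wso2.com", 4000): {("xxx.xxx.xxx.206", 4001)}},
)
```

This is why every node lists the ELB (and here, the manager) as well-known members: no node needs a full member list up front.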
Adjusting the port offset
Because we are running multiple Carbon-based products on the same server, we must change the port offset on some nodes to avoid port conflicts. Setting the port offset increments all of the node's port values by the offset, which is how conflicts are avoided.
- Open
<ESB_WORKER_HOME>/repository/conf/carbon.xml
on each worker node. - Locate the <Ports> tag and change the value of its sub-tag as follows on each worker node:
Worker1:
<Offset>0</Offset>
- No changes needed, because this will be the first node on this (xxx.xxx.xxx.132) server.Worker2:
<Offset>2</Offset>
- Set the offset to 2, because there are already two more Carbon products (ELB and ESB manager node) running on this (xxx.xxx.xxx.206) server.Worker3:
<Offset>1</Offset>
- Set the offset to 1, because Worker1 occupies the default ports on this (xxx.xxx.xxx.132) server.
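The offset arithmetic behind these choices can be sketched as follows. The base ports shown are the usual Carbon servlet ports and are illustrative only, not a complete inventory of the ports that get shifted:

```python
# Every default Carbon port is shifted by the <Offset> value in carbon.xml.
DEFAULT_PORTS = {"servlet_https": 9443, "servlet_http": 9763}

def effective_ports(offset):
    """Ports a node actually binds for a given carbon.xml offset."""
    return {name: port + offset for name, port in DEFAULT_PORTS.items()}

# Worker1 (offset 0) keeps the defaults on its server.
# Worker2 (offset 2) shifts every port by 2 to avoid the ELB and manager node.
```

Two nodes on the same machine therefore never collide, as long as each uses a distinct offset.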
In the next section, we will map the host names we specified to real IPs.
Mapping host names to IPs
In the worker nodes, we have used three host names: carbondb.mysql-wso2.com
for the MySQL server, elb.wso2.com
for the ELB, and mgt.esb.wso2.com
for the ESB manager node. We will now map them to the actual IPs.
Open the server's /etc/hosts
file and add the following lines, where <MYSQL-DB-SERVER-IP>
, <ELB-IP>
, and <ESB-Manager-IP>
are the actual IP addresses (in this example, xxx.xxx.xxx.206):
Code Block
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<ELB-IP> elb.wso2.com
<ESB-Manager-IP> mgt.esb.wso2.com
In this example, it would look like this:
Code Block
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.206 elb.wso2.com
xxx.xxx.xxx.206 mgt.esb.wso2.com
We have now finished configuring the worker nodes and are ready to start them.
Starting the ESB server
Start the ESB server by typing the following command in the terminal:
sh <ESB_WORKER_HOME>/bin/wso2server.sh -DworkerNode=true
The additional -DworkerNode=true argument indicates that this is a worker node. This parameter effectively makes the server read-only: a node started with it cannot make changes such as writing to or modifying the deployment repository.
Info: If you wish to start the worker in daemon mode, edit the
When you configure the axis2.xml file (under the clustering section), the cluster sub-domain must indicate that this node belongs to the "worker" sub-domain in the cluster.
When starting the Worker1, it should display logs similar to the following in the console:
Code Block
INFO - TribesClusteringAgent Initializing cluster...
INFO - TribesClusteringAgent Cluster domain: wso2.esb.domain
INFO - TribesClusteringAgent Using wka based membership management scheme
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.132:4000
INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/xxx.xxx.xxx.132:4000
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Added static member xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - TribesClusteringAgent Local Member xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - TribesUtil Members of current cluster
INFO - TribesUtil Member1 xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - TribesUtil Member2 xxx.xxx.xxx.206:4001(wso2.esb.domain)
INFO - WkaBasedMembershipScheme Sending JOIN message to WKA members...
INFO - TribesClusteringAgent Trying to send initialization request to xxx.xxx.xxx.206:4000(wso2.esb.domain)
INFO - GetConfigurationResponseCommand Received configuration initialization message
INFO - TribesClusteringAgent Cluster initialization completed.
The ELB console should have these new messages:
Code Block
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined group wso2.esb.domain
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.132, Port: 4000, HTTP:8280, HTTPS:8243, Domain: wso2.esb.domain, Sub-domain:worker, Active:true joined application cluster
The manager node console should have these new messages:
Code Block
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.esb.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined group wso2.esb.domain
INFO - TribesMembershipListener New member xxx.xxx.xxx.132:4000(wso2.esb.domain) joined cluster.
If you see similar messages in your consoles, you have finished configuring the worker nodes and the cluster is running. When you terminate one node, all nodes identify that the node has left the cluster. The same applies when a new node joins the cluster.
If you want to add another worker node, you can simply copy worker1 and use it without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to run the new node on a server where another WSO2 product is running, use a copy of worker1 and change the port offset accordingly in the carbon.xml file. You may also have to change localMemberPort in axis2.xml if that product has clustering enabled. In either case, be sure to map all host names to the relevant IP addresses in the /etc/hosts file when creating a new node.
Testing the cluster
To test the cluster, open the ESB management console on the manager node (use the management console URL displayed in the terminal when you started the node), add a sample proxy service with the log mediator in the inSequence so that logs will be displayed in the worker terminals, and then observe the cluster messages sent from the manager node to the workers.
The load balancer manages the active and passive states of the worker nodes, activating nodes as needed and leaving the rest in passive mode. To test this, send a request to the endpoint through the load balancer to verify that the proxy service is activated only on the active worker node(s) while the remaining worker nodes stay passive. For example, you would send the request to the following URL:
http://{Load_Balancer_Mapped_URL_for_worker}/services/{Sample_Proxy_Name}