The WSO2 ELB is now retired. Please set up your cluster with an alternative load balancer, preferably Nginx Plus. See Setting up a Cluster for more information.
You configure worker nodes in very much the same way as you configured the manager node. Be sure to follow these steps for each worker node in the cluster.
Configuring the data source
You configure the data source to connect to the central database. If there are multiple data sources, configure them to reference the central database as well. Since the data source changes are the same for all worker nodes, you can configure this file on one node and then copy it to the other worker nodes. Make sure you copy the database driver JAR to each worker node and follow the steps described in Setting up the central database.
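For reference, a worker node's data source entry pointing to the central database (typically in <AS_HOME>/repository/conf/datasources/master-datasources.xml) might look like the following minimal sketch. The database name, credentials, and pool settings shown here are placeholders to replace with your own values; carbondb.mysql-wso2.com is the MySQL host name used in this guide.
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>Datasource pointing to the central database</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Placeholder database name and credentials; replace with your own -->
            <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/carbondb</url>
            <username>db_user</username>
            <password>db_password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>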
Setting up cluster configurations for the worker nodes
Configuring clustering for the worker nodes is similar to the way you configured it for the manager node, with three differences: the localMemberPort value varies for each worker node, you add the subDomain property, and you add the ELB and AS manager node to the well-known members, as described in the following steps.
- Open the <AS_HOME>/repository/conf/axis2/axis2.xml file.
- Locate the clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
- Enable clustering for this node:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later):
<parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join:
<parameter name="domain">wso2.as.domain</parameter>
- Specify the host used to communicate cluster messages.
You may run into issues when using host names in products based on WSO2 Carbon 4.2.0, so it is recommended that you use the IP address directly here.
<parameter name="localMemberHost">xxx.xxx.xxx.xx4</parameter>
- Specify the port used to communicate cluster messages (if this node is on the same server as the ELB, manager node, or another worker node, be sure to set this to a unique value, such as 4000 and 4001 for worker nodes 1 and 2).
<parameter name="localMemberPort">4000</parameter>
This port number is not affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework automatically increments it. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
- Define the sub-domain as worker by adding the following property under the <parameter name="properties"> element:
<property name="subDomain" value="worker"/>
- Define the ELB and manager nodes as well-known members of the cluster by providing their host names and localMemberPort values, as shown in the example below. The manager node is defined here because it is required for the Deployment Synchronizer to function efficiently; the deployment synchronizer uses this configuration to identify the manager and synchronize deployment artifacts across the nodes of the cluster. The port you use here for the load balancer should ideally be the same as the group_mgt_port value specified in the loadbalancer.conf file. In our example, this is 4500 (or 4600 in the case of clustering deployment pattern 3).
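For example, the members section of axis2.xml might look like the following sketch. Here elb.wso2.com and mgt.as.wso2.com are the host names used in this guide, 4500 is the ELB group management port from loadbalancer.conf, and 4100 is an assumed value for the manager node's localMemberPort; use whatever port you actually set when configuring the manager node.
<members>
    <!-- The ELB: port should match group_mgt_port in loadbalancer.conf -->
    <member>
        <hostName>elb.wso2.com</hostName>
        <port>4500</port>
    </member>
    <!-- The AS manager node: port should match its localMemberPort (4100 is an assumption) -->
    <member>
        <hostName>mgt.as.wso2.com</hostName>
        <port>4100</port>
    </member>
</members>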
Adjusting the port offset and host name
If you are running two or more Carbon-based products on the same server, you must change the port offset to avoid port conflicts. To do this:
- Open <AS_HOME>/repository/conf/carbon.xml on each worker node.
- Locate the <Ports> configuration and change the value of its <Offset> sub-tag as follows on each worker node:
In a scenario where your server hosts only worker instances, worker1 would have an offset of 0, worker2 an offset of 1, and worker3 an offset of 2, so that each has a unique offset value. Note that this is only relevant if you are running multiple instances of WSO2 products on the same server.
Situation | Value | Description
Only one WSO2 product on the server | <Offset>0</Offset> | No changes needed, because this will be the first node on this (xxx.xxx.xxx.1) server.
Two WSO2 products on the server | <Offset>1</Offset> | Set the offset to 1, because another product occupies the default ports on this (xxx.xxx.xxx.1) server.
Three WSO2 products on the same server | <Offset>2</Offset> | Set the offset to 2, because there are already two other Carbon products (possibly the ELB and the AS manager node) running on this (xxx.xxx.xxx.1) server.
- While making changes to the carbon.xml file, specify the host name as follows:
<HostName>as.wso2.com</HostName>
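Putting these two changes together, the relevant parts of a worker node's carbon.xml might look like the following minimal sketch (shown here with an offset of 1 as an example; the rest of the file is omitted):
<HostName>as.wso2.com</HostName>
<!-- Example: offset of 1 for a worker sharing a server with one other WSO2 product -->
<Ports>
    <Offset>1</Offset>
</Ports>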
You can also configure the deployment synchronizer, which is likewise done in the carbon.xml file.
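On a worker node the deployment synchronizer is commonly configured to check out artifacts but not commit them, since only the manager node writes to the deployment repository. The following is a sketch of such a configuration; the SVN URL and credentials are placeholders for your own repository details.
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <!-- Workers only pull artifacts; the manager node commits them -->
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/as-artifacts/</SvnUrl>
    <SvnUser>svn_user</SvnUser>
    <SvnPassword>svn_password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>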
Configuring the catalina-server.xml file
Make the following configuration changes in the catalina-server.xml file, which is found in the <AS_HOME>/repository/conf/tomcat/ directory.
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9763" proxyPort="80" -------- />
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9443" proxyPort="443" -------- />
In the next section, we will map the host names we specified to real IPs.
Mapping host names to IPs
On the worker nodes, we have used three host names: carbondb.mysql-wso2.com
for the MySQL server, elb.wso2.com
for the ELB, and mgt.as.wso2.com
for the AS manager node. We will now map them to the actual IPs.
Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP> and <AS-Manager-IP> are the actual IP addresses of the database server and the manager node (in this example, xxx.xxx.xxx.206 and xxx.xxx.xxx.xx3 respectively):
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<AS-Manager-IP> mgt.as.wso2.com
In this example, it would look like this:
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.xx3 mgt.as.wso2.com
We have now finished configuring the worker nodes and are ready to start them.
Starting the AS server
Tip: It is recommended that you delete the <AS_HOME>/repository/deployment/server directory and create an empty server directory on the worker node. This avoids any SVN conflicts that may arise.
Start the AS server by typing the following command in the terminal:
sh <AS_HOME>/bin/wso2server.sh -DworkerNode=true
The additional -DworkerNode=true argument indicates that this is a worker node. It effectively makes the server read-only: a node started with this parameter cannot make changes such as writing to or modifying the deployment repository. It also enables the worker profile, in which the UI bundles are not activated and only the back-end bundles are activated when the server starts up.
This corresponds to what you configured in the axis2.xml file (under the clustering section): the cluster sub-domain must indicate that this node belongs to the "worker" sub-domain of the cluster.
When worker1 starts, it should display logs in the console indicating that the cluster initialization is complete.
The ELB console should have these new messages:
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.as.domain)
INFO - MembershipManager Application member xxx.xxx.xxx.132:4000(wso2.as.domain) joined group wso2.as.domain
INFO - DefaultGroupManagementAgent Application member Host:xxx.xxx.xxx.132, Port: 4000, HTTP:8280, HTTPS:8243, Domain: wso2.as.domain, Sub-domain:worker, Active:true joined application cluster
The manager node console should have these new messages:
INFO - RpcMembershipRequestHandler Received JOIN message from xxx.xxx.xxx.132:4000(wso2.as.domain)
INFO - RpcInitializationRequestHandler Received GetConfigurationCommand initialization request message from xxx.xxx.xxx.132:4000(wso2.as.domain)
If you have similar messages in your consoles, you have finished configuring the worker nodes and the cluster is running. When you terminate one node, all nodes identify that the node has left the cluster. The same applies when a new node joins the cluster. If you want to add another worker node, you can simply copy worker1 without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to use the new node on a server where another WSO2 product is running, you can use a copy of worker1 and change the port offset accordingly in the carbon.xml
file. You may also have to change localMemberPort
in axis2.xml
if that product has clustering enabled. Be sure to map all host names to the relevant IP addresses when creating a new node.
The next step is to test the cluster.