Setting up a Cluster
This section describes how to set up a worker/manager separated cluster of a WSO2 product and how to front that cluster with different third-party load balancers.
Important: When configuring WSO2 products for clustering, always use specific IP addresses in your configurations, not localhost or host names. Keep this in mind when hosting WSO2 products in a production environment.
See Setting up a Cluster in AWS Mode for information on clustering WSO2 products that are deployed on Amazon EC2 instances. The instructions in that topic cover only the changes made to the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file, so the changes to the other configuration files described here must be done in addition to the steps in that topic.
See the following links for more information.
- See the Overview for general information on clustering.
- See Configuring Clustering for Specific Products for more detailed configurations on specific WSO2 products.
- See Worker/Manager Separated Clustering Patterns for information on why you would want to separate the worker and manager nodes in your cluster, and for a wider variety of options if you prefer to use a different clustering deployment pattern.
Worker/manager separated clustering deployment pattern
In this pattern, there are three WSO2 Application Server nodes: one node acts as the manager node and two nodes act as worker nodes, providing high availability and serving service requests. Although you can use any standard WSO2 product, WSO2 AS is used here for the purposes of this example. In this pattern, access to the admin console is allowed through an external load balancer, and service requests are directed to the worker nodes through the same load balancer. The following image depicts the sample pattern this clustering deployment scenario follows.
Here, we use two nodes as well-known members: the manager node and one of the worker nodes. It is always recommended to use at least two well-known members, so that the entire cluster does not have to be restarted if a single well-known member shuts down.
See Worker/Manager separated clustering patterns for a wider variety of options if you prefer to use a different clustering deployment pattern.
Configuring the load balancer
The load balancer automatically distributes incoming traffic across the WSO2 product instances. It increases the fault tolerance of your cluster and spreads the request load evenly across its nodes.
About clustering without a load balancer
The configurations in this subsection are not required if your clustering setup does not have a load balancer. If you follow the rest of the configurations in this topic while excluding this section, you will be able to set up your cluster without the load balancer.
Things to keep in mind
The configuration steps in this document are written assuming that the third-party load balancer exposes the default ports 80 (HTTP) and 443 (HTTPS) for this AS cluster. If other ports are used instead of the defaults, replace 80 and 443 with the corresponding values in the relevant places.
With the above in mind, note the following:
- Load balancer ports are HTTP 80 and HTTPS 443, as indicated in the deployment pattern above.
- Direct HTTP requests to the worker nodes using http://xxx.xxx.xxx.xx3/<service> via HTTP port 80.
- Direct HTTPS requests to the worker nodes using https://xxx.xxx.xxx.xx3/<service> via HTTPS port 443.
- Access the management console as https://xxx.xxx.xxx.xx2/carbon via HTTPS port 443.
- In a WSO2 AS cluster, the worker nodes serve service requests on HTTP port 9764 and HTTPS port 9443, and the management console is accessible over HTTPS port 9443.
The following are some of the load balancers that you can configure and their respective configurations.
Tip: We recommend that you use NGINX Plus as your load balancer of choice.
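For example, the following is a minimal illustrative sketch of an NGINX configuration for this pattern. The second worker IP (xxx.xxx.xxx.xx4), the upstream name, and the certificate paths are hypothetical; it assumes the default Tomcat ports 9763/9443 shown later in this guide (adjust for any port offset). This is a sketch, not a definitive configuration.
upstream as_workers {
    server xxx.xxx.xxx.xx3:9763;
    server xxx.xxx.xxx.xx4:9763;  # hypothetical second worker IP
}
server {
    listen 80;
    server_name as.wso2.com;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://as_workers;  # HTTP service requests round-robin to the workers
    }
}
server {
    listen 443 ssl;
    server_name mgt.as.wso2.com;
    ssl_certificate /etc/nginx/ssl/wso2.crt;      # placeholder certificate path
    ssl_certificate_key /etc/nginx/ssl/wso2.key;  # placeholder key path
    location / {
        proxy_set_header Host $host;
        proxy_pass https://xxx.xxx.xxx.xx2:9443/;  # management console on the manager node
    }
}
An analogous server block listening on 443 for as.wso2.com would forward HTTPS service requests to the workers' 9443 ports.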
Setting up the databases
See Setting up the Database for information on how to set up the databases for a cluster. The datasource configurations must be done in the <PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml file for both the manager and worker nodes. You must also configure the shared registry database and mounting details in the <PRODUCT_HOME>/repository/conf/registry.xml file.
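For illustration, a shared-database entry in the master-datasources.xml file typically looks something like the following sketch for MySQL. The datasource name, JNDI name, database name, and credentials are placeholders; it reuses the carbondb.mysql-wso2.com host mapping configured later in this guide.
<datasource>
    <name>WSO2_REGISTRY_DB</name>
    <description>Shared registry database</description>
    <jndiConfig>
        <name>jdbc/WSO2RegistryDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/registrydb</url>
            <username>regadmin</username>
            <password>xxxxxx</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>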
Configuring the manager node
- Download and unzip the WSO2 AS binary distribution. Consider the extracted directory as <PRODUCT_HOME>.
- Set up the cluster configurations by editing the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file as follows (the consolidated sketch at the end of this section shows how these parameters fit together).
- Enable clustering for this node:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the well-known address (WKA) registration method (this node sends cluster initiation messages to the WKA members that we define later):
<parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join.
<parameter name="domain">wso2.as.domain</parameter>
- Specify the host used to communicate cluster messages.
<parameter name="localMemberHost">xxx.xxx.xxx.xx2</parameter>
- Specify the port used to communicate cluster messages. This port number is not affected by the port offset value specified in the <PRODUCT_HOME>/repository/conf/carbon.xml file. If this port number is already assigned to another server, the clustering framework automatically increments it. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
<parameter name="localMemberPort">4100</parameter>
- Specify the well-known members. In this example, the well-known member is a worker node. The port value for the WKA worker node must be the same as its localMemberPort (in this case, 4200).
<members>
    <member>
        <hostName>xxx.xxx.xxx.xx3</hostName>
        <port>4200</port>
    </member>
</members>
Although this example indicates only one well-known member, it is recommended to add at least two well-known members here to ensure high availability for the cluster.
You can also use IP address ranges for the hostName, for example, 192.168.1.2-10. This helps ensure that the cluster eventually recovers after failures. One shortcoming is that a range can be defined only for the last portion of the IP address. Also keep in mind that the smaller the range, the faster members are discovered, since each node has to scan fewer potential members.
- Change the following clustering properties:
<parameter name="properties"> <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/> <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/> </parameter>
- Configure the HostName. To do this, edit the <PRODUCT_HOME>/repository/conf/carbon.xml file as follows:
<HostName>as.wso2.com</HostName>
<MgtHostName>mgt.as.wso2.com</MgtHostName>
- Enable SVN-based deployment synchronization with the AutoCommit property set to true. To do this, edit the <PRODUCT_HOME>/repository/conf/carbon.xml file as follows. See Configuring Deployment Synchronizer for more information on this.
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.wso2.org/repos/as</SvnUrl>
    <SvnUser>svnuser</SvnUser>
    <SvnPassword>xxxxxx</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
- In the <PRODUCT_HOME>/repository/conf/carbon.xml file, you can also specify the port offset value. This is ONLY applicable if you have multiple WSO2 products hosted on the same server.
<Ports>
    ...
    <Offset>0</Offset>
    ...
</Ports>
- Map the host names to IP addresses. Add the following host entries to your DNS, or to the /etc/hosts file (on Linux) on all the nodes of the cluster. In this example, MySQL is used as the database server, so <IP-of-MYSQL-DB-SERVER> is the actual IP address of the database server. (A fuller sample mapping is sketched at the end of the worker node configuration below.)
<IP-of-MYSQL-DB-SERVER> carbondb.mysql-wso2.com
- Allow access to the management console only through the load balancer. Configure the HTTP/HTTPS proxy ports to communicate through the load balancer by editing the <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file as follows:
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9763" proxyPort="80" ... />
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9443" proxyPort="443" ... />
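For reference, the individual clustering parameters configured above combine in the manager node's axis2.xml file along the following lines. This sketch is abbreviated: the stock file contains additional clustering parameters, which can be left at their defaults.
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.as.domain</parameter>
    <parameter name="localMemberHost">xxx.xxx.xxx.xx2</parameter>
    <parameter name="localMemberPort">4100</parameter>
    <parameter name="properties">
        <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
        <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
    </parameter>
    <members>
        <member>
            <hostName>xxx.xxx.xxx.xx3</hostName>
            <port>4200</port>
        </member>
    </members>
</clustering>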
Configuring the worker node
- Download and unzip the WSO2 AS binary distribution. Consider the extracted directory as <PRODUCT_HOME>.
- Set up the cluster configurations by editing the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file as follows.
- Enable clustering for this node:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
- Set the membership scheme to wka to enable the well-known address (WKA) registration method (this node will send cluster initiation messages to the WKA members that we define later):
<parameter name="membershipScheme">wka</parameter>
- Specify the name of the cluster this node will join.
<parameter name="domain">wso2.as.domain</parameter>
- Specify the host used to communicate cluster messages.
<parameter name="localMemberHost">xxx.xxx.xxx.xx3</parameter>
- Specify the port used to communicate cluster messages. If this node is on the same server as the manager node or another worker node, set this to a unique value, such as 4000 and 4001 for worker nodes 1 and 2. This port number is not affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework automatically increments it.
<parameter name="localMemberPort">4200</parameter>
- Specify the well-known member by providing its host name and localMemberPort value. Here, the well-known member is the manager node. Defining the manager node is useful because it is required for the Deployment Synchronizer to function efficiently; the Deployment Synchronizer uses this configuration to identify the manager and to synchronize deployment artifacts across the nodes of the cluster.
<members>
    <member>
        <hostName>xxx.xxx.xxx.xx2</hostName>
        <port>4100</port>
    </member>
</members>
Although this example indicates only one well-known member, it is recommended to add at least two well-known members here to ensure high availability for the cluster.
You can also use IP address ranges for the hostName, for example, 192.168.1.2-10. This helps ensure that the cluster eventually recovers after failures. One shortcoming is that a range can be defined only for the last portion of the IP address. Also keep in mind that the smaller the range, the faster members are discovered, since each node has to scan fewer potential members.
- Configure the HostName. To do this, edit the <PRODUCT_HOME>/repository/conf/carbon.xml file as follows:
<HostName>as.wso2.com</HostName>
- Enable SVN-based deployment synchronization with the AutoCommit property set to false. To do this, edit the <PRODUCT_HOME>/repository/conf/carbon.xml file as follows:
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.wso2.org/repos/as</SvnUrl>
    <SvnUser>svnuser</SvnUser>
    <SvnPassword>xxxxxx</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
- In the <PRODUCT_HOME>/repository/conf/carbon.xml file, you can also specify the port offset value. This is ONLY applicable if you have multiple WSO2 products hosted on the same server.
<Ports>
    ...
    <Offset>0</Offset>
    ...
</Ports>
- Map the host names to IP addresses. Add the following host entries to your DNS, or to the /etc/hosts file (on Linux) on all the nodes of the cluster. In this example, MySQL is used as the database server, so <IP-of-MYSQL-DB-SERVER> is the actual IP address of the database server (see the sample mapping sketched after this list).
<IP-of-MYSQL-DB-SERVER> carbondb.mysql-wso2.com
- Configure the HTTP/HTTPS proxy ports to communicate through the load balancer by editing the <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file as follows:
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9763" proxyPort="80" ... />
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9443" proxyPort="443" ... />
- Create the second worker node by taking a copy of the WSO2 product you just configured as a worker node and changing the following in the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file. This copy of the WSO2 product can be moved to a server of its own.
<parameter name="localMemberPort">4300</parameter>
Testing the cluster
- Restart the configured load balancer.
- Start the manager node. The additional -Dsetup argument creates the required tables in the database:
sh <PRODUCT_HOME>/bin/wso2server.sh -Dsetup
- Start the two worker nodes. The additional -DworkerNode=true argument indicates that this is a worker node. This parameter essentially makes the server read-only: a node started with it cannot make changes such as writing to or modifying the deployment repository. It also enables the worker profile, in which the UI bundles are not activated and only the back-end bundles are activated when the server starts up. When you configure the axis2.xml file, the cluster sub-domain must indicate that this node belongs to the "worker" sub-domain in the cluster.
To avoid any SVN conflicts that may arise, it is recommended to delete the <PRODUCT_HOME>/repository/deployment/server directory and recreate the following directory structure:
<PRODUCT_HOME>/repository/deployment/server
<PRODUCT_HOME>/repository/deployment/server/synapse-configs/
<PRODUCT_HOME>/repository/deployment/server/synapse-configs/default
That is, create an empty directory named server in the <PRODUCT_HOME>/repository/deployment directory, then create a subdirectory named synapse-configs under the server directory, and another subdirectory named default under the synapse-configs directory.
sh <PRODUCT_HOME>/bin/wso2server.sh -DworkerNode=true
- Check for 'member joined' log messages in all consoles.
Additional information on logs and new nodes
When you terminate one node, all nodes identify that the node has left the cluster. The same applies when a new node joins the cluster.
If you want to add another worker node, you can simply copy worker1 without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to run the new node on a server where another WSO2 product is running, use a copy of worker1 and change the port offset accordingly in the carbon.xml file. You also have to change localMemberPort in axis2.xml if that product has clustering enabled. Be sure to map all host names to the relevant IP addresses when creating a new node. The log messages indicate whether the new node has joined the cluster.
- Access the management console through the load balancer using the following URL: https://xxx.xxx.xxx.xx1:443/carbon
- Test load distribution via http://xxx.xxx.xxx.xx1:80/.
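As a final smoke test, you can exercise both paths from the command line. The following sketch assumes the default load balancer ports 80/443 and a self-signed certificate (hence the -k flag); <service> stands in for the path of an actual deployed service.
curl -k https://xxx.xxx.xxx.xx1/carbon/admin/login.jsp   # management console via the LB
curl http://xxx.xxx.xxx.xx1/<service>                    # repeat several times to exercise load distribution across the workers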