...

Warning

The WSO2 ELB is now retired. Please set up your cluster with an alternative load balancer, preferably Nginx Plus. See Setting up a Cluster for more information.

You configure worker nodes in very much the same way as you configured the manager node. Be sure to follow these steps for each worker node in the cluster. 

...

  1. Open the <AS_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default; a consolidated example of the finished clustering section appears after these steps):
    1. Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Set the membership scheme to wka to enable the well-known address (WKA) registration method (this node will send cluster initiation messages to the WKA members that we will define later):
      <parameter name="membershipScheme">wka</parameter>
    3. Specify the name of the cluster this node will join: 
      <parameter name="domain">wso2.as.domain</parameter>
    4. Specify the host used to communicate cluster messages.

      Note

      You may run into issues when using host names in products based on WSO2 Carbon 4.2.0, so it is recommended that you use the IP address directly here.

      <parameter name="localMemberHost">xxx.xxx.xxx.xx4</parameter> 

    5. Specify the port used to communicate cluster messages (if this node is on the same server as the ELB, manager node, or another worker node, be sure to set this to a unique value, such as 4000 and 4001 for worker nodes 1 and 2). 
      <parameter name="localMemberPort">4000</parameter> 

      Info

      This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.

    6. Define the sub-domain as worker by adding the following property under the <parameter name="properties"> element:
      <property name="subDomain" value="worker"/>
    7. Define the ELB and manager nodes as well-known members of the cluster by providing their host name and localMemberPort values. The manager node is defined here because it is required for the Deployment Synchronizer to function efficiently. The Deployment Synchronizer uses this configuration to identify the manager and synchronize deployment artifacts across the nodes of a cluster.

      Clustering deployment pattern 1

      This configuration is for clustering deployment pattern 1. The port value of the ELB must be the same as the group_mgt_port you specified in the loadbalancer.conf file. The port value for the WKA manager node must be the same as its localMemberPort (in this case 4100).

      Code Block (html/xml)
      <members>
          <member>
              <hostName>xxx.xxx.xxx.xx1</hostName>
              <port>4500</port>
          </member>
          <member>
              <hostName>xxx.xxx.xxx.xx3</hostName>
              <port>4100</port>
          </member>
      </members>

      We configure the ELB and the manager as the well-known members of the cluster.

      Note

      You can also use IP address ranges for the hostName, for example, 192.168.1.2-10. This helps ensure that the cluster eventually recovers after failures, and it also resolves the common issue where the ELB logs show the "No members available" message. One shortcoming of this approach is that you can define a range only for the last portion of the IP address. Keep in mind that the smaller the range, the faster members are discovered, since each node has to scan fewer potential members.
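
      For example, a well-known member entry that scans a range for the last octet of the IP address might look like the following. This is a hypothetical illustration; the range and the port are example values only.

      Code Block (html/xml)
      <members>
          <member>
              <!-- The clustering framework scans 192.168.1.2 through 192.168.1.10 for this member -->
              <hostName>192.168.1.2-10</hostName>
              <port>4100</port>
          </member>
      </members>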

      Clustering deployment pattern 2

      This configuration is for clustering deployment pattern 2. The port value of the ELB must be the same as the group_mgt_port you specified in the loadbalancer.conf file. The port value for the WKA manager node must be the same as its localMemberPort (in this case 4100).

      Code Block (html/xml)
      <members>
          <member>
              <hostName>xxx.xxx.xxx.xx1</hostName>
              <port>4500</port>
          </member>
          <member>
              <hostName>xxx.xxx.xxx.xx3</hostName>
              <port>4100</port>
          </member>
      </members>

      Here we configure the ELB and the manager node as the well-known members.

      Clustering deployment pattern 3

      This configuration is for clustering deployment pattern 3. The port value of the ELB must be the same as the group_mgt_port you specified in the loadbalancer.conf file. Use the following configuration for the <members> element.

      Code Block (html/xml)
      <members>
          <member>
              <hostName>xxx.xxx.xxx.xx1</hostName>
              <port>4500</port>
          </member> 
          <member>
              <hostName>xxx.xxx.xxx.xx2</hostName>
              <port>4600</port>
          </member>
      </members>

      Here we configure both ELBs as well-known members, even though ELB2 serves requests for the worker sub-domain. We do this because both ELBs are in the same cluster, and it is a best practice to make each ELB a well-known member.

      Info

      The port you use here for the load balancer should ideally be the same as the group_mgt_port value specified in the loadbalancer.conf file. In our example, this is 4500 (and 4600 in the case of clustering deployment pattern 3).
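
For reference, here is what the complete clustering section of a worker node's axis2.xml looks like when you put the steps above together, using the deployment pattern 1 example values. This is a condensed sketch; the default file contains additional parameters and properties that you should leave in place.

Code Block (html/xml)
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.as.domain</parameter>
    <parameter name="localMemberHost">xxx.xxx.xxx.xx4</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <parameter name="properties">
        <!-- Keep the other default properties; only subDomain changes for a worker node -->
        <property name="subDomain" value="worker"/>
    </parameter>
    <members>
        <!-- ELB: port matches group_mgt_port in loadbalancer.conf -->
        <member>
            <hostName>xxx.xxx.xxx.xx1</hostName>
            <port>4500</port>
        </member>
        <!-- Manager node: port matches its localMemberPort -->
        <member>
            <hostName>xxx.xxx.xxx.xx3</hostName>
            <port>4100</port>
        </member>
    </members>
</clustering>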

...

Code Block (text)
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<AS-Manager-IP> mgt.as.wso2.com

In this example, it would look like this:

Code Block
languagenone
xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.xx3 mgt.as.wso2.com

We have now finished configuring the worker nodes and are ready to start them.

...