
Configuring the Elastic Load Balancer

The WSO2 ELB is now retired. Please set up your cluster with an alternative load balancer, preferably Nginx Plus. See Setting up a Cluster for more information.

The WSO2 Elastic Load Balancer automatically distributes incoming traffic across multiple WSO2 product instances, enabling greater fault tolerance in your cluster and balancing the load across its nodes.

In this scenario, we use the WSO2 ELB as the load balancer. Alternatively, you can use a third-party load balancer. See the following topics for details on configurations for some common load balancers. If you are using a load balancer that is not listed, see your load balancer's documentation.

You configure the ELB with the overall definition of the cluster in mind and define how it should distribute the load. You can achieve this by adding a few lines to a configuration file called loadbalancer.conf. Additionally, you must specify the detailed clustering configurations in the axis2.xml file. This topic describes how to perform these steps. Note that the configurations you use here will depend on the clustering deployment pattern you are using.

See here for more information on well-known addresses and the best ways to use them.

Setting up load-balancing configurations

  1. Open the <ELB_HOME>/repository/conf/loadbalancer.conf file.
  2. Locate the configuration pertaining to your WSO2 product and edit it as follows:

    For some products, you may have to enter this configuration yourself, as there is no sample configuration for it in the loadbalancer.conf file.
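    The following is a minimal sketch of the kind of appserver entry this guide assumes. The domain, host names, and tenant range match the notes below; the group management port is an example value that simply needs to be unique within the 4000-5000 range:

    appserver {
      domains{
         wso2.as.domain {
            tenant_range *;
            group_mgt_port 4600;
            mgt {
                    hosts mgt.as.wso2.com;
            }
            worker {
                    hosts as.wso2.com;
            }
         }
       }
    }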

Note the following when editing the loadbalancer.conf file.

  • In this file, we specified the domain name (wso2.as.domain), which is used to identify the cluster. On startup, nodes that are configured with this domain name can join this cluster.
  • All the service requests must be routed to the worker nodes through the ELB, which is the front end to the entire cluster. Therefore, we specify the worker sub-domain and use the hosts attribute to configure the publicly accessible host name (as.wso2.com) that clients use to send their requests to the cluster. We will map the host name to the ELB server IP address later.
  • If you need to provide access to the management node from outside your network so that external clients can upload applications and perform other management tasks, you configure the mgt sub-domain in loadbalancer.conf and map the host to the IP address of the ELB. See here for details on mapping the host. Notice that the manager host is mgt.as.wso2.com while the worker host is as.wso2.com; this naming convention reflects that the management node is exposed as a sub-domain of the worker host.
  • For service-based products (such as WSO2 ESB, Data Services Server, and Application Server), the ELB creates one group management agent per cluster to manage the service groups. Because we are configuring an AS cluster, we used the group_mgt_port attribute to specify the port for this cluster's group management agent. The port should be a unique value between 4000 and 5000. In scenarios where you are not using the ELB in front of the cluster, you configure the group management agent in the node's axis2.xml file instead as described in Group Management.
  • This group_mgt_port value must be different for different products when you have more than one product cluster fronted by the same WKA member, i.e., the ELB. For example, if you define an AS cluster as well as an ESB cluster in loadbalancer.conf as shown below, the group management ports that you define should be unique for each product configuration. 

    esb { 
      domains{ 
         wso2.esb.domain { 
            tenant_range *; 
            group_mgt_port 4500; 
            worker { 
                    hosts esb.wso2.com; 
            } 
         } 
       } 
    } 
    
    appserver { 
      domains{ 
         wso2.as.domain { 
            tenant_range *; 
            group_mgt_port 4600; 
            worker { 
                    hosts as.wso2.com; 
            } 
         } 
       } 
    } 
  • In the case of multiple load balancers fronting the cluster (as is the case in clustering deployment pattern 3), you specify the well-known addresses using the members attribute.
  • The tenant_range attribute is used for tenant-aware load balancing, another powerful feature of the ELB. It allows you to partition tenants across several clusters so that, when there are many tenants, each cluster serves only a particular tenant or a selected set of tenants (see the partitioning sketch after this list). This approach is also useful if you need a cluster dedicated to a single special tenant ("Private Jet Mode"). The following are examples of the values you can specify for tenant_range:
    • 1,6,3,4: Tenant IDs 1, 6, 3, and 4
    • 1-3: Tenant IDs 1 to 3 (inclusive of both 1 and 3)
    • 43: Tenant ID 43
    • *: All tenants
  • In this example, we are not enabling tenant partitioning, so we have used an asterisk (*) as the value of tenant_range to represent all tenants.
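
For example, the following is a sketch of how tenant partitioning could look across two Application Server clusters. The domain names, worker host names, and group management ports shown here are illustrative assumptions and are not used elsewhere in this guide:

    appserver {
      domains{
         wso2.as.partition1.domain {
            tenant_range 1-100;
            group_mgt_port 4600;
            worker {
                    hosts as1.wso2.com;
            }
         }
         wso2.as.partition2.domain {
            tenant_range 101-200;
            group_mgt_port 4601;
            worker {
                    hosts as2.wso2.com;
            }
         }
       }
    }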

In summary, we have configured the load balancer to handle requests sent to as.wso2.com and to distribute the load among the worker nodes in the worker sub-domain of the wso2.as.domain cluster. We are now ready to set up the cluster configurations.

Setting up cluster configurations on the ELB

Previously, we referred to several cluster properties, such as the domain name and sub-domains, in loadbalancer.conf, but we did not define them there. We now define these properties as we build the cluster.

  1. Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
    1. Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Set the membership scheme to wka to enable the Well Known Address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>

    3. Specify a domain name for the ELB node (note that this domain is for potentially creating a cluster of ELB nodes and is not the cluster of AS nodes that the ELB will load balance): 
      <parameter name="domain">wso2.carbon.lb.domain</parameter>

    4. Specify the port used to communicate with this ELB node. This port number will not be affected by the port offset in the carbon.xml file. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.
      <parameter name="localMemberPort">4000</parameter> 

      The localMemberPort should NOT be the same value as the group_mgt_port value specified in the loadbalancer.conf file.

    5. Specify the host name or IP address used to communicate with the ELB node. This is the host name or IP address that the ELB uses to communicate with the members of the cluster; in other words, the ELB advertises itself to the outside world using the value given in localMemberHost. If you specify a host name as the value of this parameter, you must provide a name resolution method such as an /etc/hosts mapping or DNS. If you use an IP address, you must specify that same address as the WKA member. 
      <parameter name="localMemberHost">xxx.xxx.xxx.xxx</parameter>

      You may run into issues when using host names in products based on WSO2 Carbon 4.2.0, so it is recommended that you use the IP address directly here. You can also use IP address ranges here. For example, 192.168.1.2-10.

In the case of multiple load balancers fronting the cluster, you must add members to the axis2.xml file as shown below.
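A sketch of the kind of members element used in the clustering section of axis2.xml, assuming the second ELB runs on 127.0.0.1 with a localMemberPort of 4200 (the scenario described below):

    <members>
        <member>
            <hostName>127.0.0.1</hostName>
            <port>4200</port>
        </member>
    </members>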

Here, the second ELB runs on localhost, and port 4200 is the localMemberPort (found in the axis2.xml file) of the second ELB. You must specify a different localMemberPort value for each ELB in the cluster, and you must add this configuration to both ELBs. Its purpose is to make the two ELBs aware of each other: if both of your well-known members are ELBs, each ELB has to know about the other in order to form the ELB cluster, so each ELB's cluster configuration must mention the other ELB. This is especially helpful when one ELB restarts, because it can retrieve the existing cluster information from the other ELB and resume serving requests.

We have now completed the clustering-related configuration for the ELB. In the next section, we will make a change to the ELB that will increase usability.

Configuring the ELB to listen on default ports

We will now change the ELB configuration to listen to the default HTTP and HTTPS ports.

  1. Open the <ELB_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the Transport Receiver section and configure the properties as follows:

    <transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
          <parameter name="port">80</parameter>
    </transportReceiver>
    
    
    <transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
            <parameter name="port" locked="false">443</parameter>
    </transportReceiver>

    By default, the ELB exposes ports 8280 (HTTP) and 8243 (HTTPS). The main reason for changing these ports is to allow you to access the URLs without having to specify a port: 80 and 443 are the default browser ports for HTTP and HTTPS and are ideally suited for this purpose. Note that if you use these ports, you must start the server as the root user, because ports below 1024 (including 80 and 443) are privileged ports on most operating systems.

In the next section, we will map the host names we specified to real IPs.

Mapping the host name to the IP

In the ELB, we configured a host name in loadbalancer.conf to front the worker service requests. We must now map this host name (as.wso2.com) to the actual IP address of the server that the ELB is running on. Open the server's /etc/hosts file and add the following line, where <ELB-IP> is the actual IP address:
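
<ELB-IP> as.wso2.com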

In this example, it would look like this:

xxx.xxx.xxx.xxx as.wso2.com 

If you specified a host name rather than an IP address for localMemberHost, you must also map that host name here.

Autoscaling the load balancer

Scalability is the ability of a system to continue to operate correctly even when it is scaled to a larger or smaller size. Autoscaling requires the cluster to scale out when load increases and scale in when load decreases. The cluster should always use the optimum amount of resources. See here for more information on autoscaling and how to configure it in WSO2 ELB.

We have now finished configuring the ELB and are ready to start the ELB server.

Starting the ELB server

Start the ELB server by typing the following command in the terminal:
sudo sh <ELB_HOME>/bin/wso2server.sh 

If you skipped the step of configuring the ELB to listen on the default ports, you do not need to use the sudo command and can just start the ELB with the following command:
sh <ELB_HOME>/bin/wso2server.sh

The ELB should print logs to the server console indicating that the cluster initialization is complete.

Now that the ELB is configured and running, create a central database for all the nodes to use.