
Distributed Setup with Separate Worker/Manager Nodes

This page explains the minimum configuration required to set up WSO2 ESB in a distributed deployment, with separate management and worker nodes.

Shown below is the deployment diagram of this setup. The cluster consists of two sub-domains, management and worker, under a single cluster domain, and is fronted by a single load balancer. Altogether, we will be configuring three service instances.

Configuration Steps:

Using similar instructions, this minimum configuration can be extended to include any number of worker/manager nodes in the cluster.

Note

Download the esb-worker-mgt-deployment-pattern-1.zip file for the sample configurations discussed here.

Setting up WSO2 Elastic Load Balancer

1. Download and extract the WSO2 ELB distribution. The extracted folder will be referred to as <elb-home>.
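For example, on Linux (the archive name below is illustrative; adjust it to match the ELB version you downloaded):

unzip wso2elb-2.0.0.zip -d /opt/wso2
cd /opt/wso2/wso2elb-2.0.0   # this directory is <elb-home>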

2. Open <elb-home>/repository/conf/loadbalancer.conf and add the esb service entries shown below (the esb { domains { ... } } block at the end of the file). The rest of the entries carry default values.

loadbalancer.conf file configuration

loadbalancer {
   # minimum number of load balancer instances
   instances 1;

   # whether autoscaling should be enabled or not.
   enable_autoscaler false;

   # autoscaling decision making task
   #autoscaler_task org.wso2.carbon.mediator.autoscale.lbautoscale.task.ServiceRequestsInFlightAutoscaler;

   # whether to use embedded autoscaler or not. By default, we use embedded autoscaler.
   #use_embedded_autoscaler true;

   #please use this whenever url-mapping is used through LB.
   #size_of_cache 100;

   # End point reference of the Autoscaler Service. This should be present, if you disabled embedded autoscaling.
   #autoscaler_service_epr https://host_address:https_port/services/AutoscalerService/;

   # interval between two task executions in milliseconds
   autoscaler_task_interval 60000;

   # after an instance boots up, the task will wait at most this long for the server to start up
   server_startup_delay 180000; #default will be 60000ms

   # session time out
   session_timeout 90000;

   # enable fail over
   fail_over true;
}

# services' details which are fronted by this WSO2 Elastic Load Balancer
services {
   # default parameter values to be used in all services
   defaults {
      # minimum number of service instances required. WSO2 ELB will make sure that this many instances
      # are maintained in the system at all times, but only when autoscaling is enabled.
      min_app_instances 1;

      # maximum number of service instances that will be load balanced by this ELB.
      max_app_instances 5;

      # you need to calibrate the autoscaling parameters before use. Please go through the following blog post
      # http://nirmalfdo.blogspot.com/2013/01/scale-up-early-scale-downslowly.html
      max_requests_per_second 5;
      alarming_upper_rate 0.7;
      alarming_lower_rate 0.2;
      scale_down_factor 0.25;
      rounds_to_average 2;
      message_expiry_time 60000;
   }
   esb {
      domains {
         wso2.esb.domain {
            hosts mgt.esb.cloud-test.wso2.com;
            sub_domain mgt;
            tenant_range *;
         }

         wso2.esb.domain {
            hosts esb.cloud-test.wso2.com;
            sub_domain worker;
            tenant_range *;
         }
      }
   }
}

The above configuration defines two sub-domains under a single cluster domain named "wso2.esb.domain". For the sake of simplicity, we will configure one ESB management node and one ESB worker node. The setup is summarized in the table below.

 

Node                  Service Cluster Domain   Service Cluster Sub-Domain   Host Name
ESB Management Node   wso2.esb.domain          mgt                          mgt.esb.cloud-test.wso2.com
ESB Worker Node       wso2.esb.domain          worker                       esb.cloud-test.wso2.com

3. Open the <elb-home>/repository/conf/passthru-http.properties file and set the http.socket.timeout property to the time in milliseconds that the socket should stay open after the last data packet has been transferred. The default is 60000 (60 seconds); setting it to 0 prevents the socket from timing out at all. To ensure that there is enough time for the data to be returned before the session closes, set the socket timeout value to less than the session timeout value configured in the previous step.
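For example, to keep sockets open for 80 seconds (an illustrative value that stays below the 90000 ms session_timeout set in loadbalancer.conf above), the entry in passthru-http.properties would look like this:

# Socket inactivity timeout in milliseconds (0 disables the timeout).
# Keep this below the session_timeout (90000 ms) configured in loadbalancer.conf.
http.socket.timeout=80000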

4. Open the <elb-home>/repository/conf/axis2/axis2.xml file and, in the 'Transport Receivers' section, specify the ports that you would like to expose to clients. Note the http (8280) and https (8243) port parameters; the rest of the entries carry default values.

Transport receivers configuration in axis2.xml file

<!-- ================================================= -->
<!-- Transport Ins (Listeners) -->
<!-- ================================================= -->
<!--Default transport will be passthrough; if you need to change it, please add it here -->

<transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
   <parameter name="port">8280</parameter>
   <parameter name="non-blocking"> true</parameter>
   <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.PassThroughNHttpGetProcessor</parameter>
</transportReceiver>

<transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
   <parameter name="port" locked="false">8243</parameter>
   <parameter name="non-blocking" locked="false">true</parameter>
   <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.PassThroughNHttpGetProcessor</parameter>
   <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
   <!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
   <parameter name="keystore" locked="false">
      <KeyStore>
         <Location>repository/resources/security/wso2carbon.jks</Location>
         <Type>JKS</Type>
         <Password>wso2carbon</Password>
         <KeyPassword>wso2carbon</KeyPassword>
      </KeyStore>
   </parameter>
   <parameter name="truststore" locked="false">
      <TrustStore>
         <Location>repository/resources/security/clienttruststore.jks</Location>
         <Type>JKS</Type>
         <Password>wso2carbon</Password>
      </TrustStore>
   </parameter>
   <!--<parameter name="SSLVerifyClient">require</parameter> supports optional|require or defaults to none -->
</transportReceiver>

Note

The transport senders should also point to the 'Pass-Through' transport. (They are pass-through by default.)
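For reference, below is a minimal sketch of what the Pass-Through sender entries in <elb-home>/repository/conf/axis2/axis2.xml typically look like; verify the class names against your own distribution before relying on them.

<transportSender name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpSender">
   <parameter name="non-blocking" locked="false">true</parameter>
</transportSender>

<transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
   <parameter name="non-blocking" locked="false">true</parameter>
   <!-- keystore/truststore parameters mirror those of the https receiver above -->
</transportSender>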

5. Open the <elb-home>/repository/conf/axis2/axis2.xml file and specify the following configuration in the 'clustering' section. Note the clustering 'enable' attribute, the 'membershipScheme' parameter (wka), and the 'localMemberPort' parameter (4000). This is the default configuration and nothing needs to be changed.

Clustering configuration in axis2.xml file

<!-- ================================================= -->
    <!--                Clustering                         -->
    <!-- ================================================= -->
    <!--
     To enable clustering for this node, set the value of "enable" attribute of the "clustering"
     element to "true". The initialization of a node in the cluster is handled by the class
     corresponding to the "class" attribute of the "clustering" element. It is also responsible for
     getting this node to join the cluster.
     -->
    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
        <!--
           This parameter indicates whether the cluster has to be automatically initialized
           when the AxisConfiguration is built. If set to "true" the initialization will not be
           done at that stage, and some other party will have to explicitly initialize the cluster.
        -->
        <parameter name="AvoidInitiation">true</parameter>
        <!--
           The membership scheme used in this setup. The only values supported at the moment are
           "multicast" and "wka"
           1. multicast - membership is automatically discovered using multicasting
           2. wka - Well-Known Address based multicasting. Membership is discovered with the help
                    of one or more nodes running at a Well-Known Address. New members joining a
                    cluster will first connect to a well-known node, register with the well-known node
                    and get the membership list from it. When new members join, one of the well-known
                    nodes will notify the others in the group. When a member leaves the cluster or
                    is deemed to have left the cluster, it will be detected by the Group Membership
                    Service (GMS) using a TCP ping mechanism.
        -->
        <parameter name="membershipScheme">wka</parameter>
        <!--
         The clustering domain/group. Nodes in the same group will belong to the same multicast
         domain. There will not be interference between nodes in different groups.
        -->
        <parameter name="domain">wso2.carbon.lb.domain</parameter>
        <!--
           When a Web service request is received, and processed, before the response is sent to the
           client, should we update the states of all members in the cluster? If the value of
           this parameter is set to "true", the response to the client will be sent only after
           all the members have been updated. Obviously, this can be time consuming. In some cases,
            such an overhead may not be acceptable, in which case the value of this parameter
           should be set to "false"
        -->
        <parameter name="synchronizeAll">false</parameter>
        <!--
          The maximum number of times we need to retry to send a message to a particular node
          before giving up and considering that node to be faulty
        -->
        <parameter name="maxRetries">10</parameter>
        <!-- The multicast address to be used -->
        <parameter name="mcastAddress">228.0.0.4</parameter>
        <!-- The multicast port to be used -->
        <parameter name="mcastPort">45564</parameter>
        <!-- The frequency of sending membership multicast messages (in ms) -->
        <parameter name="mcastFrequency">500</parameter>
        <!-- The time interval within which if a member does not respond, the member will be
         deemed to have left the group (in ms)
         -->
        <parameter name="memberDropTime">3000</parameter>
        <!--
           The IP address of the network interface to which the multicasting has to be bound to.
           Multicasting would be done using this interface.
        -->
        <parameter name="mcastBindAddress">127.0.0.1</parameter>
        <!-- The host name or IP address of this member -->
        
        <!--parameter name="localMemberHost">127.0.0.1</parameter-->
        
        <!--
        The TCP port used by this member. This is the port through which other nodes will
        contact this member
         -->
        <parameter name="localMemberPort">4000</parameter>
        <!--
        Preserve message ordering. This will be done according to sender order.
        -->
        <parameter name="preserveMessageOrder">false</parameter>
        <!--
        Maintain atmost-once message processing semantics
        -->
        <parameter name="atmostOnceMessageSemantics">false</parameter>
         
        <!--
           This interface is responsible for handling state replication. The property changes in
           the Axis2 context hierarchy in this node, are propagated to all other nodes in the cluster.
           The "excludes" patterns can be used to specify the prefixes (e.g. local_*) or
           suffixes (e.g. *_local) of the properties to be excluded from replication. The pattern
           "*" indicates that all properties in a particular context should not be replicated.
            The "enable" attribute indicates whether context replication has been enabled
        -->
        <stateManager class="org.apache.axis2.clustering.state.DefaultStateManager"
                      enable="false">
            <replication>
                <defaults>
                    <exclude name="local_*"/>
                    <exclude name="LOCAL_*"/>
                </defaults>
                <context class="org.apache.axis2.context.ConfigurationContext">
                    <exclude name="local_*"/>
                    <exclude name="UseAsyncOperations"/>
                    <exclude name="SequencePropertyBeanMap"/>
                </context>
                <context class="org.apache.axis2.context.ServiceGroupContext">
                    <exclude name="local_*"/>
                    <exclude name="my.sandesha.*"/>
                </context>
                <context class="org.apache.axis2.context.ServiceContext">
                    <exclude name="local_*"/>
                    <exclude name="my.sandesha.*"/>
                </context>
            </replication>
        </stateManager>
    </clustering>

6. Start the ELB server.
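For example, on Linux (use wso2server.bat on Windows):

cd <elb-home>
./bin/wso2server.sh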

Management node configuration (portOffset=1)

1. Download and extract the WSO2 ESB distribution (will be referred to as <manager-home>).

2. Open <manager-home>/repository/conf/axis2/axis2.xml and replace the default transport receivers and senders with the following entries.

Transport receivers configuration in axis2.xml file

<transportReceiver name="http"
           	class="org.wso2.carbon.core.transports.http.HttpTransportListener">
	<!--
    	Uncomment the following if you are deploying this within an application server. You
        need to specify the HTTP port of the application server
  	-->
    <parameter name="port">9763</parameter>
    <parameter name="WSDLEPRPrefix" locked="false">http://esb.cloud-test.wso2.com:8280</parameter>

    <!--
    	Uncomment the following to enable Apache2 mod_proxy. The port on the Apache server is 80
       	in this case.
  	-->
    
</transportReceiver>

<transportReceiver name="https"
       		class="org.wso2.carbon.core.transports.http.HttpsTransportListener">
	<!--
    	Uncomment the following if you are deploying this within an application server. You
        need to specify the HTTPS port of the application server
  	-->
    <parameter name="port">9443</parameter>
    <parameter name="WSDLEPRPrefix" locked="false">https://esb.cloud-test.wso2.com:8243</parameter>

    <!--
    	Uncomment the following to enable Apache2 mod_proxy. The port on the Apache server is 443
       	in this case.
  	-->
    
</transportReceiver>

Transport senders configuration in axis2.xml file

<transportSender name="http"
				class="org.apache.axis2.transport.http.CommonsHTTPTransportSender">
	<parameter name="PROTOCOL">HTTP/1.1</parameter>
    <parameter name="Transfer-Encoding">chunked</parameter>
    <!-- This parameter has been added to overcome problems encountered in the SOAP action parameter -->
    <parameter name="OmitSOAP12Action">true</parameter>
</transportSender>
<transportSender name="https"
				class="org.apache.axis2.transport.http.CommonsHTTPTransportSender">
 	<parameter name="PROTOCOL">HTTP/1.1</parameter>
    <parameter name="Transfer-Encoding">chunked</parameter>
    <!-- This parameter has been added to overcome problems encountered in the SOAP action parameter -->
    <parameter name="OmitSOAP12Action">true</parameter>
</transportSender>


Note

The http/https receiver port values above are specified without the portOffset; they are auto-incremented by portOffset at runtime. The 'WSDLEPRPrefix' parameter should point to the worker node's host name (esb.cloud-test.wso2.com) and the ELB's http (8280) / https (8243) transport ports.
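As a worked example of the portOffset arithmetic (these runtime values are confirmed in the note under step 4 below):

# Effective servlet transport ports on the management node (portOffset=1):
#   http : 9763 + 1 = 9764
#   https: 9443 + 1 = 9444
# The WSDLEPRPrefix values stay fixed at the ELB's 8280/8243 ports; they are not offset.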

3. Open <manager-home>/repository/conf/axis2/axis2.xml file and change the entries in the clustering configuration section as follows.

Clustering configuration in axis2.xml file

<!-- ================================================= -->
    <!--                Clustering                         -->
    <!-- ================================================= -->
    <!--
     To enable clustering for this node, set the value of "enable" attribute of the "clustering"
     element to "true". The initialization of a node in the cluster is handled by the class
     corresponding to the "class" attribute of the "clustering" element. It is also responsible for
     getting this node to join the cluster.
     -->
    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
        <!--
           This parameter indicates whether the cluster has to be automatically initialized
           when the AxisConfiguration is built. If set to "true" the initialization will not be
           done at that stage, and some other party will have to explicitly initialize the cluster.
        -->
        <parameter name="AvoidInitiation">true</parameter>
        <!--
           The membership scheme used in this setup. The only values supported at the moment are
           "multicast" and "wka"
           1. multicast - membership is automatically discovered using multicasting
           2. wka - Well-Known Address based multicasting. Membership is discovered with the help
                    of one or more nodes running at a Well-Known Address. New members joining a
                    cluster will first connect to a well-known node, register with the well-known node
                    and get the membership list from it. When new members join, one of the well-known
                    nodes will notify the others in the group. When a member leaves the cluster or
                    is deemed to have left the cluster, it will be detected by the Group Membership
                    Service (GMS) using a TCP ping mechanism.
        -->
        <parameter name="membershipScheme">wka</parameter>
        <!--
         The clustering domain/group. Nodes in the same group will belong to the same multicast
         domain. There will not be interference between nodes in different groups.
        -->
        <parameter name="domain">wso2.esb.domain</parameter>
        <!--
        When a Web service request is received, and processed, before the
        response is sent to the client, should we update the states of all members in the cluster? If
        the value of this parameter is set to "true", the response to the client will be
        sent only after all the members have been updated. Obviously, this can be time consuming. In some cases,
        such an overhead may not be acceptable, in which case the value of
        this parameter should be set to "false"
        -->
        <parameter name="synchronizeAll">true</parameter>
        <!--
        The maximum number of times we need to retry to send a message to a
        particular node before giving up and considering that node to be faulty
        -->
        <parameter name="maxRetries">10</parameter>
        <!-- The multicast address to be used -->
        parameter name="mcastAddress">228.0.0.4</parameter>
        <!-- The multicast port to be used -->
        <parameter name="mcastPort">45564</parameter>
        <!-- The frequency of sending membership multicast messages (in ms) -->
        <parameter name="mcastFrequency">500</parameter>
        <!-- The time interval within which if a member does not respond, the
        member will be deemed to have left the group (in ms)
        -->
        <parameter name="memberDropTime">3000</parameter>
        <!--
        The IP address of the network interface to which the multicasting has to be bound to.
        Multicasting would be done using this interface.
        -->
        <!--parameter name="mcastBindAddress">127.0.0.1</parameter-->
        <!-- The host name or IP address of this member -->
        <!--parameter name="localMemberHost">127.0.0.1</parameter-->
        <!--
        The TCP port used by this member. This is the port through which other nodes will contact this member
        -->
        <parameter name="localMemberPort">4001</parameter>
        <!--
        Preserve message ordering. This will be done according to sender order.
        -->
        <parameter name="preserveMessageOrder">true</parameter>
        <!--
        Maintain atmost-once message processing semantics
        -->
        <parameter name="atmostOnceMessageSemantics">false</parameter>
        <!--
        Properties specific to this member
        -->
        <parameter name="properties">
           <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
           <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
           <property name="subDomain" value="mgt"/>
        </parameter>
        <!--
        The list of static or well-known members. These entries will only be valid if the "membershipScheme" above is set to "wka"
        -->
        <members>
           <member>
              <hostName>elb.wso2.com</hostName>
              <port>4000</port>
           </member>
        </members>
        <!--
        Enable the groupManagement entry if you need to run this node as a cluster manager.
        Multiple application domains with different GroupManagementAgent implementations
        can be defined in this section.
        -->
        <groupManagement enable="false">
           <applicationDomain name="apache.axis2.application.domain" 
             description="Axis2 group" 
             agent="org.apache.axis2.clustering.management.DefaultGroupManagementAgent"/>
        </groupManagement>
        <!--
        This interface is responsible for handling management of a specific
        node in the cluster The "enable" attribute indicates whether Node management has been enabled
        -->
        <nodeManager class="org.apache.axis2.clustering.management.DefaultNodeManager" enable="true"/>
        <!--
        This interface is responsible for handling state replication. The property changes in
        the Axis2 context hierarchy in this node, are propagated to all other nodes in the cluster.
        The "excludes" patterns can be used to specify the prefixes (e.g.local_*) or
        suffixes (e.g. *_local) of the properties to be excluded fromreplication. The pattern
        "*" indicates that all properties in a particular context should not be replicated.
        The "enable" attribute indicates whether context replication has beenenabled
        -->
        <stateManager class="org.apache.axis2.clustering.state.DefaultStateManager" enable="false">
           <replication>
              <defaults>
                <exclude name="local_*"/>
                <exclude name="LOCAL_*"/>
              </defaults>
              <context class="org.apache.axis2.context.ConfigurationContext">
                <exclude name="local_*"/>
                <exclude name="UseAsyncOperations"/>
                <exclude name="SequencePropertyBeanMap"/>
              </context>
              <context class="org.apache.axis2.context.ServiceGroupContext">
                <exclude name="local_*"/>
                <exclude name="my.sandesha.*"/>
              </context>
              <context class="org.apache.axis2.context.ServiceContext">
                <exclude name="local_*"/>
                <exclude name="my.sandesha.*"/>
              </context>
           </replication>
        </stateManager>
</clustering>

Note

  • clustering 'enable' attribute and 'membershipScheme' parameter: clustering is enabled at the Axis2 level so that the management node can communicate with the load balancer and the worker nodes.
  • <parameter name="domain">: should be the service cluster domain of the management node, as specified in the loadbalancer.conf file.
  • <parameter name="localMemberPort">: a port other than 4000, which is the ELB's localMemberPort. This matters only if all the servers run on the same machine.
  • <members> element:
    • hostName: This is where we define the ELB's (WKA member) host name. This can be the host name or simply the IP address of the ELB server. If you use a host name, map it to the ELB's IP in the /etc/hosts file of the machine.
    • port: Should be the 'localMemberPort' defined in the ELB's axis2.xml file.

Since the WSO2 ESB management node is fronted by the WSO2 Load Balancer, the proxy ports associated with HTTP and HTTPS connectors should be configured. These proxy ports are the corresponding transport receiver ports opened by WSO2 ELB (configured in transport listeners section in axis2.xml).

4. Open <manager-home>/repository/conf/tomcat/catalina-server.xml and add the proxyPort attribute to both the HTTP connector (proxyPort="8280") and the HTTPS connector (proxyPort="8243"), as in the configuration below.

catalina-server.xml file configuration

<Server port="8005" shutdown="SHUTDOWN">
   <Service className="org.wso2.carbon.tomcat.ext.service.ExtendedStandardService" name="Catalina">
      <!-- optional attributes:proxyPort="80"-->
      <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
         port="9763"
         proxyPort="8280"
         bindOnInit="false"
         maxHttpHeaderSize="8192"
         acceptorThreadCount="2"
         maxThreads="250"
         minSpareThreads="50"
         disableUploadTimeout="false"
         connectionUploadTimeout="120000"
         maxKeepAliveRequests="200"
         acceptCount="200"
         server="WSO2 Carbon Server"
         compression="on"
         compressionMinSize="2048"
         noCompressionUserAgents="gozilla, traviata"
         compressableMimeType="text/html,text/javascript,application/xjavascript,
         application/javascript,application/xml,text/css,application/xslt+xml,te
         xt/xsl,image/gif,image/jpg,image/jpeg"
      URIEncoding="UTF-8"/>
      <!--optional attributes:proxyPort="443"-->
      <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
         port="9443"
         proxyPort="8243"
         bindOnInit="false"
         sslProtocol="TLS"
         maxHttpHeaderSize="8192"
         acceptorThreadCount="2"
         maxThreads="250"
         minSpareThreads="50"
         disableUploadTimeout="false"
         enableLookups="false"
         connectionUploadTimeout="120000"
         maxKeepAliveRequests="200"
         acceptCount="200"
         server="WSO2 Carbon Server"
         clientAuth="false"
         compression="on"
         scheme="https"
         secure="true"
         SSLEnabled="true"
         compressionMinSize="2048"
         noCompressionUserAgents="gozilla, traviata"
         compressableMimeType="text/html,text/javascript,application/x-javascript,application/javascript,application/xml,text/css,application/xslt+xml,text/xsl,image/gif,image/jpg,image/jpeg"
         keystoreFile="${carbon.home}/repository/resources/security/wso2carbon.jks"
         keystorePass="wso2carbon"
      URIEncoding="UTF-8"/>
      <Engine name="Catalina" defaultHost="localhost">
         <!--Realm className="org.apache.catalina.realm.MemoryRealm" pathname="${carbon.home}/repository/conf/tomcat/tomcat-users.xml"/-->
         <Realm className="org.wso2.carbon.tomcat.ext.realms.CarbonTomcatRealm"/>
         <Host name="localhost" unpackWARs="true" deployOnStartup="false" autoDeploy="false" appBase="${carbon.home}/repository/deployment/server/webapps/">
            <Valve className="org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve"/>
            <Valve className="org.apache.catalina.valves.AccessLogValve"
               directory="${carbon.home}/repository/logs"
               prefix="http_access_" suffix=".log"
               pattern="combined" />
            <Valve className="org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve"
               threshold="600"/>
            <Valve className="org.wso2.carbon.tomcat.ext.valves.CompositeValve"/>
         </Host>
      </Engine>
   </Service>
</Server>

Note

  • Port 9763 gets incremented by 1 due to the port offset, so the value at runtime will be 9764.
  • The proxy port value 8280 must match the http port defined in the <elb-home>/repository/conf/axis2/axis2.xml file. Proxy ports are not incremented by portOffset.
  • Port 9443 gets incremented by 1 due to the port offset, so the value at runtime will be 9444.
  • The proxy port value 8243 must match the https port defined in the <elb-home>/repository/conf/axis2/axis2.xml file. Proxy ports are not incremented by portOffset.

5. If you have multiple WSO2 Carbon-based products running on the same server, change the port offset in <manager-home>/repository/conf/carbon.xml as follows to avoid possible port conflicts.

carbon.xml file configuration

<!-- Ports offset. This entry will set the value of the ports defined below to the defined value + Offset. e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445 -->

<Offset>1</Offset>

6. Update the MgtHostName and HostName elements in carbon.xml as shown below.

<Server xmlns="http://wso2.org/projects/carbon/carbon.xml">
   <!--Product Name-->
   <Name>WSO2 Enterprise Service Bus</Name>

   <!--Product Version-->
   <Version>4.5.0</Version>

   <!--
   Host name or IP address of the machine hosting this server
   e.g. www.wso2.org, 192.168.1.10
   This will become part of the End Point Reference of the
   services deployed on this server instance.
   -->
   <HostName>esb.cloud-test.wso2.com</HostName>

   <!--Host name to be used for the Carbon management console-->
   <MgtHostName>mgt.esb.cloud-test.wso2.com</MgtHostName>

   <!--
   The URL of the back end server. This is where the admin services are
   hosted and will be used by the clients in the front end server.
   This is required only for the Front-end server. This is used when
   separating the BE server from the FE server
   -->
   <!--ServerURL>local://services/</ServerURL-->
   <ServerURL>https://mgt.esb.cloud-test.wso2.com:8243${carbon.context}/services/</ServerURL>

Note

  • The HostName parameter should point to the worker node (i.e., use the worker node's host name).
  • The MgtHostName parameter should point to the management node (i.e., use the management node's host name).
  • ServerURL should have 'mgt.esb.cloud-test.wso2.com' as the host name and 8243 (the ELB's https transport receiver port) as the port.

7. Add the following entries to the /etc/hosts file. <ELB-IP> is the IP address of the ELB server.

<ELB-IP> elb.wso2.com
<ELB-IP> esb.cloud-test.wso2.com
<ELB-IP> mgt.esb.cloud-test.wso2.com

8. Start the ESB management node.
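For example, on Linux (use wso2server.bat on Windows):

cd <manager-home>
./bin/wso2server.sh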

Note

Once the worker/manager clustering configurations are in place and a port offset has been specified, the management console should be accessed through the ELB using the URL: https://mgt.esb.cloud-test.wso2.com:8243/carbon

9. Check the logs to ensure that the product instance has successfully joined the cluster and is ready to receive requests through the load balancer.

INFO - DefaultGroupManagementAgent Application member Host:<IP-of-mgt-node>, Port: 4001, HTTP:9764, HTTPS:9444, Domain: wso2.esb.domain, Sub-domain:mgt, Active:true joined application cluster

Worker node configuration (portOffset=2)

1. Download and extract the WSO2 ESB distribution (will be referred to as <worker-home>).

2. Open <worker-home>/repository/conf/axis2/axis2.xml and update the following entries in the 'Transport Receivers' section.

Transport receivers configuration in axis2.xml file

<!-- ================================================= -->
<!-- Transport Ins (Listeners) -->
<!-- ================================================= -->

<transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
	<parameter name="port" locked="false">8280</parameter>
    <parameter name="non-blocking" locked="false">true</parameter>
    <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
    <parameter name="WSDLEPRPrefix" locked="false">http://esb.cloud-test.wso2.com:8280</parameter>
    <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.PassThroughNHttpGetProcessor</parameter>
   	<!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>-->
</transportReceiver>

<transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
	<parameter name="port" locked="false">8243</parameter>
    <parameter name="non-blocking" locked="false">true</parameter>
 	<!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
    <parameter name="WSDLEPRPrefix" locked="false">https://esb.cloud-test.wso2.com:8243</parameter>
    <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.PassThroughNHttpGetProcessor</parameter>
    <parameter name="keystore" locked="false">
    	<KeyStore>
        	<Location>repository/resources/security/wso2carbon.jks</Location>
            <Type>JKS</Type>
            <Password>wso2carbon</Password>
            <KeyPassword>wso2carbon</KeyPassword>
      	</KeyStore>
 	</parameter>
    <parameter name="truststore" locked="false">
    	<TrustStore>
        	<Location>repository/resources/security/client-truststore.jks</Location>
            <Type>JKS</Type>
            <Password>wso2carbon</Password>
      	</TrustStore>
 	</parameter>
    <!--<parameter name="SSLVerifyClient">require</parameter>
            supports optional|require or defaults to none -->
</transportReceiver>

3. Open <worker-home>/repository/conf/axis2/axis2.xml file and change the entries in the clustering configuration section as follows.

Clustering configuration in axis2.xml file

    <!-- ================================================= -->
    <!--                Clustering                         -->
    <!-- ================================================= -->
    <!--
     To enable clustering for this node, set the value of "enable" attribute of the "clustering"
     element to "true". The initialization of a node in the cluster is handled by the class
     corresponding to the "class" attribute of the "clustering" element. It is also responsible for
     getting this node to join the cluster.
     -->
    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
        <!--
           This parameter indicates whether the cluster has to be automatically initialized
           when the AxisConfiguration is built. If set to "true" the initialization will not be
           done at that stage, and some other party will have to explicitly initialize the cluster.
        -->
        <parameter name="AvoidInitiation">true</parameter>
        <!--
           The membership scheme used in this setup. The only values supported at the moment are
           "multicast" and "wka"
           1. multicast - membership is automatically discovered using multicasting
           2. wka - Well-Known Address based multicasting. Membership is discovered with the help
                    of one or more nodes running at a Well-Known Address. New members joining a
                    cluster will first connect to a well-known node, register with the well-known node
                    and get the membership list from it. When new members join, one of the well-known
                    nodes will notify the others in the group. When a member leaves the cluster or
                    is deemed to have left the cluster, it will be detected by the Group Membership
                    Service (GMS) using a TCP ping mechanism.
        -->
        <parameter name="membershipScheme">wka</parameter>
        <!--
         The clustering domain/group. Nodes in the same group will belong to the same multicast
         domain. There will not be interference between nodes in different groups.
        -->
        <parameter name="domain">wso2.esb.domain</parameter>
        <!--
        When a Web service request is received, and processed, before the response is sent to the
        client, should we update the states of all members in the cluster? If the value of
        this parameter is set to "true", the response to the client will be sent only after
        all the members have been updated. Obviously, this can be time consuming. In some cases,
        such an overhead may not be acceptable, in which case the value of this parameter
        should be set to "false"
        -->
        <parameter name="synchronizeAll">true</parameter>
        <!--
        The maximum number of times we need to retry to send a message to a particular node
        before giving up and considering that node to be faulty
        -->
        <parameter name="maxRetries">10</parameter>
        <!-- The multicast address to be used -->
        <parameter name="mcastAddress">228.0.0.4</parameter>
        <!-- The multicast port to be used -->
        <parameter name="mcastPort">45564</parameter>
        <!-- The frequency of sending membership multicast messages (in ms) -->
        <parameter name="mcastFrequency">500</parameter>
        <!-- The time interval within which if a member does not respond, the
        member will be deemed to have left the group (in ms)
        -->
        <parameter name="memberDropTime">3000</parameter>
        <!--
        The IP address of the network interface to which the multicasting has
        to be bound to. Multicasting would be done using this interface.
        -->
        <!--parameter name="mcastBindAddress">127.0.0.1</parameter-->
        <!-- The host name or IP address of this member -->
        <!--parameter name="localMemberHost">127.0.0.1</parameter-->
        <!--
        The TCP port used by this member. This is the port through which other
        nodes will contact this member
        -->
        <parameter name="localMemberPort">4002</parameter>
        <!--
        Preserve message ordering. This will be done according to sender order.
        -->
        <parameter name="preserveMessageOrder">true</parameter>
        <!--
        Maintain atmost-once message processing semantics
        -->
        <parameter name="atmostOnceMessageSemantics">false</parameter>
        <!--
        Properties specific to this member
        -->
        <parameter name="properties">
                <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
                <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
                <property name="subDomain" value="worker"/>
        </parameter>
        <!--
        The list of static or well-known members. These entries will only be valid if the "membershipScheme" above is set to "wka"
        -->
        <members>
                <member>
                        <hostName>elb.wso2.com</hostName>
                        <port>4000</port>
                </member>
        </members>
        <!--
        Enable the groupManagement entry if you need to run this node as a cluster
        manager. Multiple application domains with different GroupManagementAgent
        implementations can be defined in this section.
        -->
        <groupManagement enable="false">
                <applicationDomain name="apache.axis2.application.domain" 
                    description="Axis2 group"
                    agent="org.apache.axis2.clustering.management.DefaultGroupManagementAgent"/>
        </groupManagement>
        <!--
        This interface is responsible for handling management of a specific
        node in the cluster The "enable" attribute indicates whether Node management has been
        enabled
        -->
        <nodeManager class="org.apache.axis2.clustering.management.DefaultNodeManager" enable="true"/>
        <!--
        This interface is responsible for handling state replication. The
        property changes in the Axis2 context hierarchy in this node, are propagated to all other
        nodes in the cluster. The "excludes" patterns can be used to specify the prefixes (e.g.
        local_*) or suffixes (e.g. *_local) of the properties to be excluded from replication. The pattern
        "*" indicates that all properties in a particular context should not be replicated.
        The "enable" attribute indicates whether context replication has been enabled
        -->
        <stateManager class="org.apache.axis2.clustering.state.DefaultStateManager" enable="false">
                <replication>
                        <defaults>
                                <exclude name="local_*"/>
                                <exclude name="LOCAL_*"/>
                        </defaults>
                        <context class="org.apache.axis2.context.ConfigurationContext">
                                <exclude name="local_*"/>
                                <exclude name="UseAsyncOperations"/>
                                <exclude name="SequencePropertyBeanMap"/>
                        </context>
                        <context class="org.apache.axis2.context.ServiceGroupContext">
                                <exclude name="local_*"/>
                                <exclude name="my.sandesha.*"/>
                        </context>
                        <context class="org.apache.axis2.context.ServiceContext">
                                <exclude name="local_*"/>
                                <exclude name="my.sandesha.*"/>
                        </context>
                </replication>
        </stateManager>
</clustering>

4. Open <worker-home>/repository/conf/tomcat/catalina-server.xml and add the configuration below.

catalina-server.xml file configuration

<Server port="8005" shutdown="SHUTDOWN">
     <Service className="org.wso2.carbon.tomcat.ext.service.ExtendedStandardService" name="Catalina">
        <!--
        optional attributes:
        proxyPort="80"
        -->
        <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763"
           proxyPort="8280"
           bindOnInit="false"
           maxHttpHeaderSize="8192"
           acceptorThreadCount="2"
           maxThreads="250"
           minSpareThreads="50"
           disableUploadTimeout="false"
           connectionUploadTimeout="120000"
           maxKeepAliveRequests="200"
           acceptCount="200"
           server="WSO2 Carbon Server"
           compression="on"
           compressionMinSize="2048"
           noCompressionUserAgents="gozilla, traviata"
           compressableMimeType="text/html,text/javascript,application/x-javascript,application/javascript,application/xml,text/css,application/xslt+xml,text/xsl,image/gif,image/jpg,image/jpeg"
           URIEncoding="UTF-8"/>
        <!--
        optional attributes:
        proxyPort="443"
        -->
        <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="8243"
           bindOnInit="false"
           sslProtocol="TLS"
           maxHttpHeaderSize="8192"
           acceptorThreadCount="2"
           maxThreads="250"
           minSpareThreads="50"
           disableUploadTimeout="false"
           enableLookups="false"
           connectionUploadTimeout="120000"
           maxKeepAliveRequests="200"
           acceptCount="200"
           server="WSO2 Carbon Server"
           clientAuth="false"
           compression="on"
           scheme="https"
           secure="true"
           SSLEnabled="true"
           compressionMinSize="2048"
           noCompressionUserAgents="gozilla, traviata"
           compressableMimeType="text/html,text/javascript,application/x-javascript,application/javascript,application/xml,text/css,application/xslt+xml,text/xsl,image/gif,image/jpg,image/jpeg"
           keystoreFile="${carbon.home}/repository/resources/security/wso2carbon.jks"
           keystorePass="wso2carbon"
           URIEncoding="UTF-8"/>
           <Engine name="Catalina" defaultHost="localhost">
              <!--Realm className="org.apache.catalina.realm.MemoryRealm" pathname="${carbon.home}/repository/conf/tomcat/tomcat-users.xml"/-->
              <Realm className="org.wso2.carbon.tomcat.ext.realms.CarbonTomcatRealm"/>
              <Host name="localhost" unpackWARs="true" deployOnStartup="false" autoDeploy="false" appBase="${carbon.home}/repository/deployment/server/webapps/">
                 <Valve className="org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve"/>
                 <Valve className="org.apache.catalina.valves.AccessLogValve" 
                        directory="${carbon.home}/repository/logs"
                        prefix="http_access_" suffix=".log"
                        pattern="combined" />
                 <Valve className="org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve" threshold="600"/>
                 <Valve className="org.wso2.carbon.tomcat.ext.valves.CompositeValve"/>
               </Host>
            </Engine>
     </Service>
</Server>

5. If multiple WSO2 Carbon-based products run on the same server, change the port offset in <worker-home>/repository/conf/carbon.xml as follows to avoid possible port conflicts.

carbon.xml file configuration

<!-- Ports offset. This entry will set the value of the ports defined below to the defined value + Offset. e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445 -->

<Offset>2</Offset>

6. Update the HostName element in carbon.xml as shown below. On the worker node, the MgtHostName element remains commented out and ServerURL keeps its default value, local://services/.

<Server xmlns="http://wso2.org/projects/carbon/carbon.xml">
   <!--
   Product Name
   -->
   <Name>WSO2 Enterprise Service Bus</Name>

   <!--
   Product Version
   -->
   <Version>4.5.0</Version>

   <!--
   Host name or IP address of the machine hosting this server
   e.g. www.wso2.org, 192.168.1.10
   This will become part of the End Point Reference of the
   services deployed on this server instance.
   -->
   <HostName>esb.cloud-test.wso2.com</HostName>

   <!--
   Host name to be used for the Carbon management console
   -->
   <!--MgtHostName>esb.cloud-test.wso2.com</MgtHostName-->

   <!--
   The URL of the back end server. This is where the admin services are
   hosted and will be used by the clients in the front end server.
   This is required only for the Front-end server. This is used when
   separating the BE server from the FE server
   -->
   <ServerURL>local://services/</ServerURL>

7. Add the following entries to the /etc/hosts file. <ELB-IP> is the IP address of the ELB server.

<ELB-IP> elb.wso2.com
<ELB-IP> esb.cloud-test.wso2.com

8. Start the worker node using the following command:

$ cd <worker-home>
$ ./bin/wso2server.sh -DworkerNode=true

9. Check the logs to ensure that the product instance has successfully joined the cluster and is ready to receive requests through the load balancer.

INFO - DefaultGroupManagementAgent Application member Host:<IP-of-worker-node>, Port: 4002, HTTP:8282, HTTPS:8245, Domain: wso2.esb.domain, Sub-domain:worker, Active:true joined application cluster
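The ports in this log line are consistent with the worker's portOffset of 2:

# Passthrough transport ports on the worker node (portOffset=2):
#   http : 8280 + 2 = 8282
#   https: 8243 + 2 = 8245
# localMemberPort 4002 is set explicitly in axis2.xml and is not offset.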

Accessing the ESB servers through the ELB

1. Map the host names of the two ESB nodes to the ELB's IP address by adding the relevant mappings to the /etc/hosts file of the client machine. <ELB-IP> is the IP address of the ELB server.

<ELB-IP> mgt.esb.cloud-test.wso2.com
<ELB-IP> esb.cloud-test.wso2.com

2. Make sure that all three servers (ELB, worker, and manager) have been started. The ELB should be started first, followed by the management node and then the worker node.

3. You can now access the ESB worker/management setup by navigating to the URL: https://mgt.esb.cloud-test.wso2.com:8243
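As a quick sanity check from the client machine, you can verify that the ELB routes both host names with curl (a sketch; -k skips certificate validation because the default wso2carbon certificate is self-signed, and the exact HTTP status depends on what is deployed):

# Management console, routed to the mgt sub-domain:
curl -k -I https://mgt.esb.cloud-test.wso2.com:8243/carbon/
# Service traffic, routed to the worker sub-domain:
curl -k -I https://esb.cloud-test.wso2.com:8243/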

Adding a new ESB worker node

To add a new ESB worker node to the setup, follow the steps below.

1. Make a copy of the ESB worker node you configured.

2. Decide on a portOffset value and update the Offset element in the <worker-home>/repository/conf/carbon.xml file of the new node.

3. Change the localMemberPort value in <worker-home>/repository/conf/axis2/axis2.xml to a port that is not already in use.

4. Start the worker node; you should see the newly added worker node successfully join the cluster through the ELB, as in the sketch below.
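For example, for a second worker node on the same machine, the changed entries might look like this (the <worker2-home> label and the values are illustrative; any unused offset and port will do):

<!-- <worker2-home>/repository/conf/carbon.xml -->
<Offset>3</Offset>

<!-- <worker2-home>/repository/conf/axis2/axis2.xml, clustering section -->
<parameter name="localMemberPort">4003</parameter>

Then start the new node with ./bin/wso2server.sh -DworkerNode=true, as in step 8 of the worker node configuration.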