This documentation is for WSO2 Application Server version 5.1.0.


WSO2 Carbon version 4.0.0 supports an improved deployment model in which the architecture is separated into 'worker' nodes and 'management' nodes. The management nodes are used to deploy and configure artifacts (web applications, services, proxy services, etc.), whereas the worker nodes serve the requests received from clients.
 
This worker/manager deployment setup provides a proper separation of concerns between a Carbon-based product's UI components, management console and related functionality, and the internal framework that serves requests to deployment artifacts. Typically, the management nodes run in read-write mode and are authorized to add new artifacts or make configuration changes, whereas the worker nodes run in read-only mode, authorized only to deploy artifacts and read the configuration. This deployment model also improves security, since the management nodes can be placed behind an internal firewall and exposed only to internal clients, while only the worker nodes are exposed externally. In addition, since the UI-related OSGi bundles are not loaded on 'worker' nodes, the deployment model utilizes memory more efficiently.
 
A worker/manager separated cluster can typically be implemented in the following ways:

Separate Sub Clusters with One Load Balancer

This model consists of two sub-cluster domains: a worker domain and a management domain. These sub-domains take up load according to a defined load-balancing algorithm and auto-scale according to the load on their nodes.

 

Single Cluster with Two Load Balancers

This model consists of a single cluster, where a selected node acts as both a worker and a manager. This node requires two load balancers and is configured in read-write mode, while the other worker nodes are set up in read-only mode. The management node should also be a well-known member of the non-management worker nodes so that state replication and cluster messaging work.

 

Shown below are the minimum configuration instructions for clustering two WSO2 Application Server instances. The cluster consists of two sub-cluster domains (worker and management) and is fronted by a single load balancer. Altogether, we will be configuring three instances: a WSO2 Elastic Load Balancer instance, a WSO2 Application Server management node and a WSO2 Application Server worker node.

Using similar instructions, this minimum configuration can be extended to include additional worker/manager nodes in the cluster.

Setting up WSO2 Elastic Load Balancer

1. Download and extract the WSO2 Elastic Load Balancer (ELB) distribution. The extracted directory will be referred to as <elb-home>.
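
For example, on Linux the archive can be extracted with unzip. The file name below is only a placeholder; use the actual name of the distribution you downloaded.

unzip wso2elb-x.x.x.zip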

2. Open <elb-home>/repository/conf/loadbalancer.conf and add the following entries under the services section.

appserver {
    domains {
        wso2.as.domain {
            hosts mgt.as.cloud-test.wso2.com;
            sub_domain mgt;
            tenant_range *;
        }
        wso2.as.domain {
            hosts as.cloud-test.wso2.com;
            sub_domain worker;
            tenant_range *;
        }
    }
}

The above configuration defines two sub-domains under a single cluster domain named "wso2.as.domain". For the sake of simplicity, each sub-domain consists of a single product instance: the management sub-domain (mgt) includes one management node and the worker sub-domain includes one worker node.

3. These two product instances can be set up on separate physical servers, on two VM instances, or on a single machine. If they are set up on one machine, update the hosts file accordingly. For example:

127.0.0.1 mgt.as.cloud-test.wso2.com
127.0.0.1 as.cloud-test.wso2.com

4. Uncomment the <localMemberHost> element in <elb-home>/repository/conf/axis2/axis2.xml and specify the IP address (or host name) to be exposed to the members of the cluster. For example:

<parameter name="localMemberHost">127.0.0.1</parameter>

5. Start the WSO2 Elastic Load Balancer instance.
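
On Linux, the load balancer can typically be started with the standard Carbon startup script in its bin directory, for example:

cd <elb-home>/bin
sh wso2server.sh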

Management Node Configuration

1. Download and extract the WSO2 Application Server distribution. The extracted directory will be referred to as <manager-home>.

axis2.xml configuration

2. First, clustering should be enabled at the Axis2 level in order for the management node to communicate with the load balancer and the worker nodes. Open <manager-home>/repository/conf/axis2/axis2.xml and update the clustering configuration as follows:

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
<parameter name="membershipScheme">wka</parameter>

3. Specify the cluster domain as defined in loadbalancer.conf, along with the local member host and port of this node.

<parameter name="domain">wso2.as.domain</parameter>
<parameter name="localMemberHost">mgt.as.cloud-test.wso2.com</parameter>
<parameter name="localMemberPort">4250</parameter>

4. Add a new property, "subDomain", and set it to "mgt" to denote that this node belongs to the mgt sub-domain of the cluster defined in loadbalancer.conf.

<parameter name="properties">
   <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
   <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
   <property name="subDomain" value="mgt"/>
</parameter>

5. Add the load balancer as a well-known member by specifying its IP address or host name (127.0.0.1 in this example) and the local member port (4000), as defined earlier in the axis2.xml of WSO2 ELB.

<members>
   <member>
      <hostName>127.0.0.1</hostName>
      <port>4000</port>
   </member>
</members>

catalina-server.xml configuration

Since the WSO2 AS management node is fronted by the WSO2 Elastic Load Balancer, the proxy ports associated with the HTTP and HTTPS connectors should be configured. These proxy ports are the corresponding transport receiver ports opened by WSO2 ELB (configured in the transport listeners section of its axis2.xml).

6. Open <manager-home>/repository/conf/tomcat/catalina-server.xml and add the proxyPort attribute for both HTTP and HTTPS connectors as shown below.

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763"
           proxyPort="8280">

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="8243">

carbon.xml configuration

7. Since multiple WSO2 Carbon-based products run on the same host, the port offset in <manager-home>/repository/conf/carbon.xml should be changed as follows to avoid possible port conflicts.

<!-- Ports offset. This entry will set the value of the ports defined below to the defined value + Offset. e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445 -->

<Offset>1</Offset>

8. Update the HostName and MgtHostName elements in carbon.xml as shown below.

<HostName>as.cloud-test.wso2.com</HostName>
<MgtHostName>mgt.as.cloud-test.wso2.com</MgtHostName>

9. The AS management node is used for deploying artifacts. These artifacts should be synchronized automatically to the worker nodes in the cluster. This is handled by the deployment synchronization mechanism in WSO2 Carbon-based products. The default SVN-based deployment synchronizer can be used to auto-commit the deployment artifacts to a pre-configured SVN repository; the worker nodes can then be configured to automatically check out the artifacts from the same SVN location.
 
Include the following in the <manager-home>/repository/conf/carbon.xml file to configure the SVN-based deployment synchronizer.

<DeploymentSynchronizer>
     <Enabled>true</Enabled>
     <AutoCommit>true</AutoCommit>
     <AutoCheckout>true</AutoCheckout>
     <RepositoryType>svn</RepositoryType>
     <SvnUrl>http://10.100.3.115/svn/repos/as</SvnUrl>
     <SvnUser>wso2</SvnUser>
     <SvnPassword>wso2123</SvnPassword>
     <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

Make sure to replace the SvnUrl, SvnUser and SvnPassword values to match your own SVN repository.

10. Start the AS instance on the management node.
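
For example, on Linux the management node can be started with the standard Carbon startup script:

cd <manager-home>/bin
sh wso2server.sh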

Note

Once the worker/manager clustering configurations are added, the management console should be accessed using the URL https://mgt.as.cloud-test.wso2.com:9444/carbon (the default HTTPS port 9443 plus the port offset of 1).

11. Refer to the logs to ensure that the product instance has successfully joined the cluster and is ready to receive requests through the load balancer.
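
For example, on Linux you can follow the Carbon log (written to repository/logs/wso2carbon.log by default) and watch for the cluster membership messages:

tail -f <manager-home>/repository/logs/wso2carbon.log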

12. Also, try deploying an artifact through the manager node. You should receive an error, since the server looks for a worker node and no worker node has joined the cluster yet. Although the request is issued through the management node, all requests are served by the worker node(s).

Worker Node Configuration 

1. Download and extract the WSO2 Application Server distribution. The extracted directory will be referred to as <worker-home>.

axis2.xml configuration 

Apply the same changes made to the axis2.xml of the management node, with the following modifications.

2. If the two cluster nodes run on the same machine, update the localMemberPort element in <worker-home>/repository/conf/axis2/axis2.xml so that it does not conflict with the management node's port, as follows:

<parameter name="domain">wso2.as.domain</parameter>
<parameter name="localMemberHost">as.cloud-test.wso2.com</parameter>
<parameter name="localMemberPort">4251</parameter>

3. The worker node belongs to the "worker" sub-domain of the cluster domain configured in the loadbalancer.conf of the WSO2 Elastic Load Balancer. Add a new "subDomain" property to <worker-home>/repository/conf/axis2/axis2.xml to represent this.

<parameter name="properties">
    <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
    <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
    <property name="subDomain" value="worker"/>
</parameter>

4. Add the load balancer as a well-known member by specifying its IP address or host name (127.0.0.1 in this example) and the local member port (4000), as defined earlier in the axis2.xml of WSO2 ELB.

<members>
   <member>
      <hostName>127.0.0.1</hostName>
      <port>4000</port>
   </member>
</members>

carbon.xml configuration

5. Since multiple WSO2 Carbon-based products run on the same host, the port offset in <worker-home>/repository/conf/carbon.xml should be changed as follows to avoid possible port conflicts.

<!-- Ports offset. This entry will set the value of the ports defined below to the defined value + Offset. e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445 -->

<Offset>2</Offset>

6. Update the HostName element as shown below. The MgtHostName element is not needed, since this node is designated as a worker node.

<HostName>as.cloud-test.wso2.com</HostName>

7. Next, configure the SVN-based deployment synchronizer to automatically check out deployment artifacts from a common SVN repository. The worker nodes of a cluster SHOULD NOT commit (write) artifacts. Therefore, disable the AutoCommit property in the deployment synchronizer configuration as follows:

<DeploymentSynchronizer>
       <Enabled>true</Enabled>
       <AutoCommit>false</AutoCommit>
       <AutoCheckout>true</AutoCheckout>
       <RepositoryType>svn</RepositoryType>
       <SvnUrl>http://10.100.3.115/svn/repos/as</SvnUrl>
       <SvnUser>wso2</SvnUser>
       <SvnPassword>wso2123</SvnPassword>
       <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

8. Start the worker product instance. The workerNode system property must be set to true when starting the worker nodes in a cluster. For example:

sh wso2server.sh -DworkerNode=true

9. Refer to the logs to ensure that the product instance has successfully joined the cluster. 
