
Overview



Introduction to clustering

You can install multiple instances of WSO2 products in a cluster. A cluster consists of multiple instances of a product that act as if they are a single instance and divide up the work. This approach improves performance, because requests are distributed among several servers instead of just one, and it ensures reliability, because if one instance becomes unavailable or is experiencing high traffic, another instance will seamlessly handle the requests. Clustering also provides the following benefits:

  • Continuous availability

  • Simplified administration

  • Increased scalability

  • Failover and switchover capabilities

  • Low cost

These characteristics are essential for enterprise applications deployed in a production environment. If you are still in development mode, you do not need a cluster, but once you are ready to start testing and to go into production, where performance and reliability are critical, you should create a cluster.


WSO2 clustering methodology

Because all WSO2 products are built on the cluster-enabled Carbon platform, you can cluster all WSO2 products in a similar way. The WSO2 Elastic Load Balancer (ELB) can be used to manage the load of WSO2 product clusters in the worker-manager clustering method. From WSO2 Carbon version 4.0.0 onwards, an improved deployment model is supported in which clustered nodes are separated into 'worker' nodes and 'management' nodes. The management node(s) are used to deploy and configure artifacts (web applications, services, proxy services, and so on), whereas the worker nodes serve the requests received from clients.
 
This worker/manager deployment setup provides a proper separation of concerns between a Carbon-based product's UI components, management console, and related functionality on one side, and the internal framework that serves requests to deployment artifacts on the other. Typically, the management nodes run in read-write mode and are authorized to add new artifacts or make configuration changes, whereas the worker nodes run in read-only mode and are authorized only to deploy artifacts and read the configuration. This deployment model also improves security, because the management nodes can be set up behind an internal firewall and exposed only to internal clients, while only the worker nodes are exposed externally. In addition, because the UI-related OSGi bundles are not loaded on 'worker' nodes, the deployment model makes more efficient use of memory.
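
For example, the sub-domain that a node belongs to is set in the clustering section of that node's axis2.xml file. The following is a minimal sketch of the kind of fragment involved; the clustering class and full parameter set vary between Carbon versions, and the domain name shown here is only an illustrative value:

    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent"
                enable="true">
        <!-- All nodes in the cluster share the same domain name -->
        <parameter name="domain">wso2.esb.domain</parameter>
        <!-- Set to 'worker' on worker nodes and 'mgt' on management nodes -->
        <parameter name="properties">
            <property name="subDomain" value="worker"/>
        </parameter>
    </clustering>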
 
A worker/manager separated cluster can typically be implemented in the following ways:

  • Separate manager and worker nodes
  • Manager and worker on same node (dual mode)

Separate manager and worker node setup

This model consists of two cluster sub-domains: worker and management. The load will be distributed to these sub-domains according to the defined load-balancing algorithm. Also, the load on the nodes in these sub-domains will be taken into consideration when auto-scaling.

Dual-mode setup

This model consists of a single cluster in which a selected node acts as both a worker and a manager. This worker node requires two load balancers and is configured in read-write mode, while the other worker nodes are set up in read-only mode. The manager node must also be a well-known member on the non-management worker nodes so that state replication and cluster messaging work.

Clustering example

Following is an example of a clustered deployment. This example uses WSO2 Enterprise Service Bus (ESB), but you can use this same approach for any Carbon-based product. The example system setup will include:

  • Two servers (in our example, we use two servers with IP addresses xxx.xxx.xxx.132 and xxx.xxx.xxx.206)

  • One WSO2 Elastic Load Balancer (ELB) instance

  • Three WSO2 ESB instances as workers to handle service requests

  • One WSO2 ESB instance for managing configuration across the clustered nodes via Deployment Synchronizer

  • A Subversion (SVN) repository

This system behaves as a single, high-performing ESB available at IP address xxx.xxx.xxx.206 via the default HTTP or HTTPS ports. The ELB performs load balancing by handling the incoming requests and routing them to the worker nodes. The worker nodes process the requests routed to them and send the responses back to the client through the ELB.

All admin requests are sent to the manager node via HTTPS on port 9444. The manager node synchronizes the same configuration across the clustered worker nodes.
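
Port 9444 in this example comes from the manager node's port offset: the default Carbon HTTPS management port is 9443, and a port offset of 1 in carbon.xml shifts it to 9444. The following is a minimal sketch of the relevant carbon.xml entries on the manager node; the host names are illustrative placeholders:

    <Ports>
        <!-- An offset of 1 shifts the default 9443 management HTTPS port to 9444 -->
        <Offset>1</Offset>
    </Ports>
    <!-- Cluster host name, so that requests to the manager node are redirected to the cluster -->
    <HostName>wso2.esb.domain</HostName>
    <MgtHostName>mgt.esb.wso2.com</MgtHostName>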

About membership schemes

A cluster should contain two or more instances of a product that are configured to run within the same domain. To make an instance a member of the cluster, you must configure it to use one of the following membership schemes:

  • Well Known Address (WKA) membership scheme
  • Multicast membership scheme

In this example, we use the WKA membership scheme, and the ELB acts as the well-known member in the cluster. It will accept all the service requests on behalf of the ESB instances and divide the load among worker nodes in the ESB cluster.
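
With the WKA scheme, each ESB node lists the ELB as the well-known member in the clustering section of its axis2.xml file, using the group management port that the ELB exposes for this cluster domain. A minimal sketch is shown below; the host name and port are illustrative values, not settings taken from this example:

    <parameter name="membershipScheme">wka</parameter>
    <!-- The ELB is the well-known member through which nodes join the cluster -->
    <members>
        <member>
            <hostName>elb.wso2.com</hostName>
            <!-- Group management port defined for this cluster domain on the ELB -->
            <port>4000</port>
        </member>
    </members>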

High-level steps for creating a cluster

At a high level, you create a cluster as outlined in the following steps. The next section, Creating a Cluster, walks you through these steps in detail by describing how to configure the clustering example described above. Although those steps are for clustering WSO2 ESB, they apply to all WSO2 products. For details on additional configuration required for a specific WSO2 product, see the links in the table of contents.

To create a cluster:
  1. Install the load balancer and instances of the product you are clustering.
  2. Configure the load balancer:
    1. Define the cluster domain in loadbalancer.conf (see the sample fragment after this list)
    2. Configure clustering and HTTP/S ports in axis2.xml
    3. Map the cluster host name to the IP address in the /etc/hosts file
    4. Start the load balancer
  3. Set up the central database.
  4. Configure the manager node:
    1. Define the data source(s) for the central database in master-datasources.xml (see the sample datasource definition after this list)
    2. Configure clustering in axis2.xml
    3. Configure the port offset and cluster host name (so that requests to the manager node are redirected to the cluster) in carbon.xml
    4. Map the database and cluster host name to the IP addresses in the /etc/hosts file 
    5. Start the manager node
  5. Configure the worker nodes:
    1. Define the data source(s) for the central database in master-datasources.xml
    2. Configure clustering in axis2.xml
    3. Configure the port offset in carbon.xml
    4. Map the database and cluster host name to the IP addresses in the /etc/hosts file
    5. Start the worker nodes
  6. Test the cluster.
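
For step 2, the cluster domain and its worker and management sub-domains are defined in the ELB's loadbalancer.conf file. Property names and layout differ between ELB versions, and the host names and group management port below are illustrative placeholders rather than values taken from this example, so treat this only as a sketch:

    # Illustrative fragment of <ELB_HOME>/repository/conf/loadbalancer.conf
    esb {
        domains {
            # Cluster domain shared by all ESB nodes
            wso2.esb.domain {
                tenant_range    *;
                # Port through which ESB nodes join the ELB's group management agent
                group_mgt_port  4000;
                worker {
                    hosts       esb.wso2.com;
                }
                mgt {
                    hosts       mgt.esb.wso2.com;
                }
            }
        }
    }

For steps 4 and 5, both the manager node and the worker nodes point to the same central database through a datasource defined in master-datasources.xml. The following minimal sketch assumes a MySQL database; the URL, credentials, and database name are placeholders for your own environment:

    <datasource>
        <name>WSO2_CARBON_DB</name>
        <description>Central database shared by the clustered nodes</description>
        <jndiConfig>
            <name>jdbc/WSO2CarbonDB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <!-- Placeholder host and database name; point these at your central database -->
                <url>jdbc:mysql://carbondb.example.com:3306/carbondb</url>
                <username>db_user</username>
                <password>db_password</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
            </configuration>
        </definition>
    </datasource>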
