
Clustering the ESB Profile

The following sections provide information and instructions on how to cluster the ESB profile of WSO2 Enterprise Integrator (WSO2 EI) with a third-party load balancer. 

The deployment pattern

This deployment scenario uses a two-node ESB cluster. That is, two ESB nodes are configured to serve requests with high availability and scalability. As depicted by the following diagram, the product nodes in the cluster are fronted by an external third-party load balancer, which routes requests to the two nodes on a round-robin basis.

Note that the two ESB nodes are configured as well-known members. It is always recommended to have all nodes of the cluster as well-known members.

When configuring your WSO2 products for clustering using Hazelcast, you need to use a specific IP address in your configurations and not localhost.

  • In this guide, the IP of the first ESB node is referred to as xxx.xxx.xxx.xx1 and the IP of the second ESB node is referred to as xxx.xxx.xxx.xx2.
  • If you want to test out clustering in a development environment using the same server, you need to port offset one of your ESB nodes. Follow the step given below:
    Specify the port offset value in the <EI_HOME>/conf/carbon.xml file.

    This is not recommended for production environments. If you set a port offset for a node, change all ports used in your configurations based on the offset value.

    For more information on the default ports of WSO2 Enterprise Integrator, see the WSO2 administration guide.


    When you run multiple products/clusters or multiple instances of the same product on the same server or virtual machines (VMs), change their default ports with an offset value to avoid port conflicts. An offset defines the number by which all ports in the runtime (e.g., HTTP(S) ports) are increased. For example, if the default HTTP port is 9763 and the offset is 1, the effective HTTP port will change to 9764. For each additional product instance, set the port offset to a unique value. The offset of the default ports is zero.

    The port value will automatically increase as shown in the Port Value column in the following table, allowing all five WSO2 product instances or servers to run on the same machine.

    WSO2 product instance    Port Offset    Port Value
    WSO2 server 1            0              9443
    WSO2 server 2            1              9444
    WSO2 server 3            2              9445
    WSO2 server 4            3              9446
    WSO2 server 5            4              9447

    Example:

    <Ports>
    	...
    	<Offset>5</Offset>
    	...
    </Ports>
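The offset arithmetic is plain addition: the effective port is the default port plus the offset. As a quick sketch, an offset of 5 as in the example above shifts the default HTTPS servlet port 9443:

```shell
# Effective port = default port + offset.
# An offset of 5 (as in the <Offset>5</Offset> example) shifts
# the default HTTPS servlet port 9443 to 9448.
default_port=9443
offset=5
echo $((default_port + offset))   # prints 9448
```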

Configuring the load balancer

The load balancer automatically distributes incoming traffic across the WSO2 product instances, providing greater fault tolerance and an even distribution of load across the cluster.

Note the following facts when configuring the load balancer:

  • These configurations are not required if your clustering setup does not have a load balancer.

  • The load balancer ports of the deployment pattern shown above are HTTP 80 and HTTPS 443. If your system uses any other ports, be sure to replace the 80 and 443 values with the corresponding ports when you follow the configuration steps in this section.

  • The load balancer directs requests to the server on a round robin basis. For example, the load balancer will direct requests to node 1 (xxx.xxx.xxx.xx1) of the ESB cluster as follows:
    • HTTP requests will be directed to node 1 using the http://xxx.xxx.xxx.xx1/<service> URL via HTTP 80 port.

    • HTTPS requests will be directed to node 1 using the https://xxx.xxx.xxx.xx1/<service> URL via HTTPS 443 port.

    • The management console of node 1 will be accessed using the https://xxx.xxx.xxx.xx1/carbon/ URL via HTTPS 443 port.

It is recommended to use NGINX Plus as your load balancer.

Follow the steps below to configure NGINX Plus version 1.7.11 or NGINX community version 1.9.2 as the load balancer.

  1. Install NGINX Plus or the NGINX community version on your cluster network.

  2. Create a VHost file named ei.http.conf in the /etc/nginx/conf.d directory and add the following configurations.
    This configures NGINX Plus to direct the HTTP requests to the two ESB nodes (xxx.xxx.xxx.xx1 and xxx.xxx.xxx.xx2) via the HTTP 80 port using the http://ei.wso2.com/ URL. 

    If you are setting up NGINX on a Mac OS, you will not have the conf.d directory. Follow the steps given below to add the VHost files mentioned in this step and the steps that follow:

    1. Create a directory named conf in the nginx directory, and create the ei.http.conf file in it.

    2. Open the nginx/nginx.conf file and add the following entry before the final }.
      This includes all the files in the conf directory into the NGINX server.

      include conf/*.conf;
    Nginx Community Version and NGINX Plus
    upstream wso2.ei.com {
            server xxx.xxx.xxx.xx1:8280;
            server xxx.xxx.xxx.xx2:8280;
    }

    server {
            listen 80;
            server_name ei.wso2.com;
            location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;
                    proxy_pass http://wso2.ei.com;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";
            }
    }
  3. Create a VHost file (ei.https.conf) in the nginx/conf.d directory or in the nginx/conf directory if you are on a Mac OS and add the following configurations. This configures NGINX Plus to direct the HTTPS requests to the two ESB nodes (xxx.xxx.xxx.xx1 and xxx.xxx.xxx.xx2) via the HTTPS 443 port using the https://ei.wso2.com/ URL.

    Make sure that the SSL files you create in step 6 are in the /etc/nginx/ssl/ directory. If they are not, update the path accordingly.
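    The HTTPS VHost configuration block is not included in this export. A minimal sketch, modeled on the HTTP configuration above and the management console configuration in the next step (the upstream name and the 8243 backend port are assumptions; adjust to your environment), might look like the following:

```nginx
upstream ssl.wso2.ei.com {
        server xxx.xxx.xxx.xx1:8243;
        server xxx.xxx.xxx.xx2:8243;
}

server {
        listen 443;
        server_name ei.wso2.com;
        ssl on;
        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;
        location / {
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_read_timeout 5m;
                proxy_send_timeout 5m;
                proxy_pass https://ssl.wso2.ei.com;
        }
}
```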

  4. Configure NGINX to access the management console as https://ui.ei.wso2.com/carbon via the HTTPS 443 port. To do this, create a VHost file (ui.ei.https.conf) in the nginx/conf.d directory (or in the nginx/conf directory if you are on a Mac OS), and add the following configurations to it.

    Make sure that the SSL files you create in step 6 are in the /etc/nginx/ssl/ directory. If they are not, update the paths given below.

    Nginx Community Version and NGINX Plus
    server {
            listen 443;
            server_name ui.ei.wso2.com;
            ssl on;
            ssl_certificate /etc/nginx/ssl/server.crt;
            ssl_certificate_key /etc/nginx/ssl/server.key;

            location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;
                    proxy_pass https://xxx.xxx.xxx.xx1:9443/;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";
            }

            error_log /var/log/nginx/ui-error.log;
            access_log /var/log/nginx/ui-access.log;
    }
  5. Create a directory named ssl inside the nginx directory.

  6. Follow the instructions below to create SSL certificates for both ESB nodes.

    Enter ei.wso2.com as the common name when creating the keys.

    1. Execute the following command to create the Server Key: 

      $sudo openssl genrsa -des3 -out server.key 1024
    2. Execute the following command to request to sign the certificate:

      $sudo openssl req -new -key server.key -out server.csr
    3. Execute the following commands to remove the passwords:

      $sudo cp server.key server.key.org  
      $sudo openssl rsa -in server.key.org -out server.key
    4. Execute the following command to sign your SSL Certificate:

      $sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
    5. Execute the following command to add the certificate to the <EI_HOME>/repository/resources/security/client-truststore.jks file:

      keytool -import -trustcacerts -alias server -file server.crt -keystore client-truststore.jks

      When prompted, give the default password wso2carbon.

  7. Execute the following command to restart the NGINX Plus server:
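    The restart command itself is missing from this export. On a typical Linux installation where NGINX runs as a system service, it would be something like:

```shell
sudo service nginx restart
```

    Alternatively, sudo nginx -s reload reloads the configuration without fully restarting the server.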

Creating the databases

All profiles of WSO2 EI use a database to store information such as user management details and registry data. All nodes in the cluster must use one central database for the config and governance registry mounts. Create the following databases and associated datasources.

Database Name      Description
WSO2_USER_DB       JDBC user store and authorization manager
REGISTRY_DB        Shared database for config and governance registry mounts in the product's nodes
REGISTRY_LOCAL1    Local registry space in Node 1
REGISTRY_LOCAL2    Local registry space in Node 2

It is recommended to use an industry-standard RDBMS such as Oracle, PostgreSQL, MySQL, or MS SQL for most enterprise testing and production environments. However, you can use the embedded H2 database for the REGISTRY_LOCAL1 and REGISTRY_LOCAL2 databases only.

Follow the steps below to create the necessary databases.

These instructions assume you are installing MySQL as your relational database management system (RDBMS), but you can install another supported RDBMS as needed.

  1. Download and install MySQL Server.

  2. Download the MySQL JDBC driver.

  3. Download and unzip the WSO2 EI binary distribution. 

    Throughout this guide, <EI_HOME> refers to the extracted directory of the WSO2 EI product distribution.

  4. Unzip the downloaded MySQL driver, and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <EI_HOME>/lib/ directory of both ESB nodes.

  5. Execute the following command in a terminal/command window, where username is the username you want to use to access the databases: mysql -u username -p
  6. When prompted, specify the password to access the databases with the username you specified.

  7. Create the databases using the following commands:

    If you are using MySQL 5.7 or a later version, you need to use the mysql5.7.sql script instead of the mysql.sql script. This script has been tested on MySQL 5.7 and MySQL 8.

    mysql> create database WSO2_USER_DB;
    mysql> use WSO2_USER_DB;
    mysql> source <EI_HOME>/dbscripts/mysql.sql;
    mysql> grant all on WSO2_USER_DB.* TO regadmin@"carbondb.mysql-wso2.com" identified by "regadmin";
    
    mysql> create database REGISTRY_DB;
    mysql> use REGISTRY_DB;
    mysql> source <EI_HOME>/dbscripts/mysql.sql;
    mysql> grant all on REGISTRY_DB.* TO regadmin@"carbondb.mysql-wso2.com" identified by "regadmin";
    
    mysql> create database REGISTRY_LOCAL1;
    mysql> use REGISTRY_LOCAL1;
    mysql> source <EI_HOME>/dbscripts/mysql.sql;
    mysql> grant all on REGISTRY_LOCAL1.* TO regadmin@"carbondb.mysql-wso2.com" identified by "regadmin";
     
    mysql> create database REGISTRY_LOCAL2;
    mysql> use REGISTRY_LOCAL2;
    mysql> source <EI_HOME>/dbscripts/mysql.sql;
    mysql> grant all on REGISTRY_LOCAL2.* TO regadmin@"carbondb.mysql-wso2.com" identified by "regadmin";

    About using MySQL in different operating systems

    For users of Microsoft Windows, when creating the database in MySQL, it is important to specify the character set as latin1. Failure to do this may result in an error (error code: 1709) when starting your cluster. This error occurs in certain versions of MySQL (5.6.x) and is related to the UTF-8 encoding. MySQL originally used the latin1 character set by default, which stored characters in a 2-byte sequence. However, in recent versions, MySQL defaults to UTF-8 to be friendlier to international users. Hence, you must use latin1 as the character set as indicated below in the database creation commands to avoid this problem. Note that this may result in issues with non-latin characters (like Hebrew, Japanese, etc.). The following is how your database creation command should look.

    mysql> create database <DATABASE_NAME> character set latin1;

    For users of other operating systems, the standard database creation commands will suffice. For these operating systems, the following is how your database creation command should look.

    mysql> create database <DATABASE_NAME>;

Configuring the ESB profile node

Do the following configurations for all nodes of your cluster.

Mounting the registry

Add the following configurations to the <EI_HOME>/conf/registry.xml file of each ESB node to configure the shared registry database and mounting details. This ensures that the shared registry for governance and configurations (i.e., the REGISTRY_DB database) is mounted on both ESB nodes.

Note the following when adding these configurations:

  • The existing dbConfig called wso2registry must not be removed.
  • The datasource you specify in the <dbConfig name="sharedregistry"> tag must match the JNDI Config name you specify in the <EI_HOME>/conf/datasources/master-datasources.xml file.
  • The registry mount path denotes the type of registry. For example, "/_system/config" refers to the configuration registry, and "/_system/governance" refers to the governance registry.

  • The <dbConfig> entry enables you to identify the datasource you configured in the <EI_HOME>/conf/datasources/master-datasources.xml file. The unique name "sharedregistry" refers to that datasource entry.

  • The <remoteInstance> section refers to an external registry mount. Specify the read-only/read-write nature of this instance, caching configurations and the registry root location in this section.
  • Also, specify the cache ID in the <remoteInstance> section. This enables caching to function properly in the clustered environment.

    The cache ID is the JDBC connection URL of the registry database, in the format $database_username@$database_url, where $database_username is the username of the remote instance database and $database_url is the remote instance database URL. In this case, it points to REGISTRY_DB, the database shared across all the nodes, via the same datasource that is used in the mount configurations.

  • Define a unique instance ID for each remote instance (using the <id> tag). Be sure to refer to the same instance ID from the corresponding mount configurations (using the <instanceId> tag). In this example, the unique ID for the remote instance is "instanceid". The same ID is used as the instance ID for the config mount as well as the governance mount.

    Note that registry mounting will not be successful if the registry mount configuration (specified using the <mount> section) does not have a corresponding remote registry instance (specified using the <remoteInstance> section) with the same instance ID. If you have used mismatching instance IDs under the <remoteInstance> and <mount> configurations by mistake, you need to follow the steps given below to rectify the error:

    1. Delete the existing local registry.
      (If you are using a database other than the embedded H2, you need to perform the additional step of setting up a new database.)

    2. Apply the configurations in the registry.xml file.

    3. Restart the server.
  • Specify the actual mount path and target mount path in each of the mounting configurations. The target path can be any meaningful name. In this instance, it is "/_system/eiconfig".

<dbConfig name="sharedregistry">
    <dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>sharedregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true</cacheId>
</remoteInstance>
 
<mount path="/_system/config" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/eiconfig</targetPath>
</mount>
 
<mount path="/_system/governance" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>

Connecting to the databases

Update the datasources by following the steps given below.

  1. Open the <EI_HOME>/conf/datasources/master-datasources.xml file, and configure the datasources to point to the relevant databases for each ESB node. 

    • Replace the username, password, and database URL of your MySQL environment accordingly.
    • If you have not enabled SSL, append the useSSL=false property to the value of the <url> property.
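    The datasource entries themselves are not shown in this export. As a sketch, an entry for the shared registry database might look like the following (the URL, credentials, and pool settings follow the examples used elsewhere in this guide; adjust them to your environment):

```xml
<datasource>
    <name>WSO2_REGISTRY_DB</name>
    <description>Shared registry database</description>
    <jndiConfig>
        <name>jdbc/WSO2RegistryDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true&amp;useSSL=false</url>
            <username>regadmin</username>
            <password>regadmin</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>80</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>
```

    Similar entries are needed for the user database (JNDI name jdbc/WSO2UMDB) and for the local registry database of the node (REGISTRY_LOCAL1 or REGISTRY_LOCAL2).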
  2. Configure the user store by updating the dataSource property in the <EI_HOME>/conf/user-mgt.xml file of all nodes as shown below:

    <Property name="dataSource">jdbc/WSO2UMDB</Property>

Configuring cluster settings

WSO2 products use Hazelcast as their default clustering engine. Given below are the steps for connecting a product node to the cluster.

  1. Open the <EI_HOME>/conf/axis2/axis2.xml file for each of the two ESB nodes, and apply the following cluster configurations:
    1. Enable clustering for each node as follows:

      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Confirm that the membership scheme is set to wka. This enables the well-known address registration method as shown below. Each node sends cluster initiation messages to the WKA members. 

      <parameter name="membershipScheme">wka</parameter>
    3. Specify the name of the cluster to which the node joins:

      <parameter name="domain">wso2.ei.domain</parameter>
    4. Specify the host to communicate cluster messages. For example, if the IP addresses of the two ESB nodes are xxx.xxx.xxx.xx1 and xxx.xxx.xxx.xx2, they should be specified in the configuration as shown below.
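      The host configuration itself did not survive in this export. In Carbon-based products this is the localMemberHost parameter in the clustering section; for example, on node 1:

```xml
<parameter name="localMemberHost">xxx.xxx.xxx.xx1</parameter>
```

      On node 2, use xxx.xxx.xxx.xx2 instead.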

    5. Specify the port to communicate cluster messages as follows:

      <parameter name="localMemberPort">4100</parameter>

      This port number is not affected by the port offset value specified in the <EI_HOME>/conf/carbon.xml file. If this port number is already assigned to another server, the clustering framework automatically increments this port number.

      However, if there are two servers running on the same machine, ensure that a unique port is set for each server. For example, you can have port 4100 for node 1 and port 4200 for node 2.

    6. Specify the well-known members in the cluster in both nodes as shown below. For example, when you configure one ESB node, you need to specify the other nodes in the cluster as well-known members. The port value for each WKA node must be the same as its localMemberPort (in this case, 4100).

      You can also use IP address ranges for the hostname (e.g., 192.168.1.2-10). However, you can define a range only for the last portion of the IP address. The smaller the range, the faster members are discovered, since each node has to scan fewer potential members. The best practice is to add all the members (including the node itself) in all the nodes to avoid any configuration conflicts.

      <members>
          <member>
      		<hostName>xxx.xxx.xxx.xx1</hostName>
      		<port>4100</port> 
      	</member>
      	<member>
      		<hostName>xxx.xxx.xxx.xx2</hostName>
      		<port>4100</port> 
      	</member>
      </members>
    7. Uncomment and edit the WSDLEPRPrefix element under org.apache.synapse.transport.passthru.PassThroughHttpListener in the transportReceiver section.

      <parameter name="WSDLEPRPrefix" locked="false">http://ei.wso2.com:80</parameter> 
    8. Uncomment and edit the WSDLEPRPrefix element under org.apache.synapse.transport.passthru.PassThroughHttpSSLListener in the transportReceiver section.

      <parameter name="WSDLEPRPrefix" locked="false">https://ei.wso2.com:443</parameter>
  2. Edit the <EI_HOME>/conf/carbon.xml file as follows to configure the hostname:

    <HostName>ei.wso2.com</HostName>

Optional: Configuring Hazelcast properties

You can configure the Hazelcast properties for the product nodes by following the steps given below.

  1. Create the hazelcast.properties file with the following property configurations, and copy the file to the <EI_HOME>/conf/ directory. 

    #Disabling the hazelcast shutdown hook
    hazelcast.shutdownhook.enabled=false
    #Setting the hazelcast logging type to log4j
    hazelcast.logging.type=log4j

    The above configurations are explained below.

    • Hazelcast shutdown hook: This configuration disables the shutdown hook in Hazelcast. If the Hazelcast shutdown hook is enabled (the default behavior of a product), the Hazelcast instance shuts down too early during product shutdown, and you will see errors such as "Hazelcast instance is not active!" when shutting down the product node. Disabling the hook ensures that the Hazelcast instance shuts down gracefully whenever the product node shuts down.
    • Hazelcast logging type: This configuration sets the Hazelcast logging type to log4j, which allows Hazelcast logs to be written to the wso2carbon.log file.
  2. If you have enabled log4j for hazelcast logging as shown above, be sure to enter the configuration shown below in the log4j.properties file (stored in the  <EI_HOME>/conf/  directory). This can be used to configure the log level for hazelcast logging. For a clustered production environment, it is recommended to use INFO as the log level as shown below.

    log4j.logger.com.hazelcast=INFO

General configurations

  1. Add the host entries to your DNS, or to the /etc/hosts file (in Linux), in all the nodes of the cluster to map the hostnames to the IP addresses. Map the following hostnames to your IP addresses in the /etc/hosts file.

    • ei.wso2.com

    • ui.ei.wso2.com
    • carbondb.mysql-wso2.com 

    Example:

    xxx.xxx.xxx.xxx	ei.wso2.com
    xxx.xxx.xxx.xxx ui.ei.wso2.com
    <IP-of-the-DB-SERVER> carbondb.mysql-wso2.com
  2. Edit the <EI_HOME>/conf/tomcat/catalina-server.xml file as follows:
    • Add proxyPort="80" to the Connector that uses the org.apache.coyote.http11.Http11NioProtocol protocol with port 9763:

      <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
      	port="9763"
      	proxyPort="80"
      	...
      	/>
    • Add proxyPort="443" to the Connector that uses the org.apache.coyote.http11.Http11NioProtocol protocol with port 9443:

      <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
      	port="9443"
      	proxyPort="443"
      	...
      	/>

    The Connector protocol tag sets the protocol to handle incoming traffic. The default value is HTTP/1.1, which uses an auto-switching mechanism to select either a blocking Java-based connector or an APR/native connector. If the PATH (Windows) or LD_LIBRARY_PATH (on most UNIX systems) environment variables contain the Tomcat native library, the APR/native connector will be used. If the native library cannot be found, the blocking Java-based connector will be used. Note that the APR/native connector has different settings from the Java connectors for HTTPS.

    Here, the non-blocking Java connector is specified explicitly, rather than relying on the auto-switching mechanism described above. The following is the value used:
    org.apache.coyote.http11.Http11NioProtocol

    The TCP port number is the value that this Connector will use to create a server socket and await incoming connections. Your operating system will allow only one server application to listen to a particular port number on a particular IP address. If the special value of 0 (zero) is used, Tomcat will select a free port at random to use for this connector. This is typically only useful in embedded and testing applications.

Deploying artifacts across the nodes

Use the following deployment synchronization recommendations based on the rate of change of artifacts that happen in your cluster:

  • For a high rate of changes (i.e., if changes happen very frequently):
    - Network File Share (NFS)
  • For a medium rate of changes:
    - Remote Synchronization (Rsync)
  • For a low rate of changes (i.e., if changes happen once a week):
    - use the configuration management system to handle artifacts
    - other deployment options (e.g., Puppet, Chef etc.)

Make sure to choose the deployment synchronization method that suits your production environment.

Using Network File Share (NFS)

You can use a common shared file system such as Network File System (NFS) or any other shared file system as the content synchronization mechanism. You need to mount the <EI_HOME>/repository/deployment/server directory of the two nodes to the shared file system to share all the artifacts between both nodes. 
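As a sketch, assuming the shared file system is an NFS export (the server name and export path below are assumptions), each node could mount it over its deployment directory as follows:

```shell
# Mount the shared NFS export over the node's server deployment directory.
# nfs.wso2.com and /exports/ei-artifacts are placeholder names.
sudo mount -t nfs nfs.wso2.com:/exports/ei-artifacts \
    <EI_HOME>/repository/deployment/server
```

Add an equivalent entry to /etc/fstab on each node so that the mount survives reboots.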

Using Remote Synchronization (Rsync)

If you are unable to maintain a shared file system, you can synchronize content using Rsync. Rsync, a file copying tool, is another common approach for synchronizing artifacts across all cluster nodes. You can first deploy artifacts to one node of the cluster and then use Rsync to copy those artifacts to the other nodes as described below.

  1. Create a file called nodes-list.txt, which lists all the nodes in the deployment. The following is a sample of the file for two nodes.

    Different nodes are separated into individual lines.

    nodes-list.txt
    ubuntu@192.168.1.1:~/setup/192.168.1.1/ei_node/repository/deployment/server
    ubuntu@192.168.1.2:~/setup/192.168.1.2/ei_node/repository/deployment/server
  2. Create a script to synchronize the <PRODUCT_HOME>/repository/deployment/server/ directory between the nodes.

    You must create your own SSH key and define it as the pem_file. Alternatively, you can use an existing SSH key. Specify the ei_server_dir depending on the location in your local machine. Change the logs.txt file path and the lock location based on where they are located in your machine.

    Configure syncing the <EI_HOME>/repository/tenant/ directory to share the tenant artifacts across the cluster.

    rsync-for-ei-depsync.sh
    #!/bin/sh 
    ei_server_dir=~/wso2ei-6.4.0/repository/deployment/server/
    pem_file=~/.ssh/carbon-440-test.pem
     
     
    #delete the lock on exit
    trap 'rm -rf /var/lock/depsync-lock' EXIT
     
    mkdir -p /tmp/carbon-rsync-logs/
     
    #keep a lock to stop parallel runs
    if mkdir /var/lock/depsync-lock; then
      echo "Locking succeeded" >&2
    else
      echo "Lock failed - exit" >&2
      exit 1
    fi 
     
    #get the nodes-list.txt
    pushd `dirname $0` > /dev/null
    SCRIPTPATH=`pwd`
    popd > /dev/null
    echo $SCRIPTPATH
     
    for x in `cat ${SCRIPTPATH}/nodes-list.txt`
    do
    echo ================================================== >> /tmp/carbon-rsync-logs/logs.txt;
    echo Syncing $x;
    rsync --delete -arve "ssh -i  $pem_file -o StrictHostKeyChecking=no" $ei_server_dir $x >> /tmp/carbon-rsync-logs/logs.txt
    echo ================================================== >> /tmp/carbon-rsync-logs/logs.txt;
    done
  3. Execute the following command in your CLI to create a Cron job that executes the above file every minute for deployment synchronization.    

    *   *  *   *   *     /home/ubuntu/setup/rsync-for-depsync/rsync-for-ei-depsync.sh

Testing the cluster

Follow the steps below to test the cluster.

  1. Deploy artifacts to each product deployment location. 

    Use a deployment synchronization mechanism to synchronize the artifacts in the <EI_HOME>/repository/deployment/ directory. Always deploy artifacts first to the ESB server profile node with the registry configured as read/write. Next, deploy the artifacts to the other nodes.

  2. Restart the configured load balancer.

  3. Execute the following command for both ESB nodes to start the servers: sh <EI_HOME>/bin/integrator.sh
  4. Check for ‘member joined’ log messages in all consoles.

    Additional information on logs and new nodes

    When you terminate one node, all nodes identify that the node has left the cluster. The same applies when a new node joins the cluster. If you want to add another new node, copy an existing node without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to use the new node on a server where another WSO2 product is running, use a copy of an existing node and change the port offset accordingly in the <EI_HOME>/conf/carbon.xml file. You also have to change the localMemberPort in the <EI_HOME>/conf/axis2/axis2.xml file if that product has clustering enabled. Also, map all hostnames to the relevant IP addresses when creating a new node. The log messages indicate whether the new node has joined the cluster.

  5. Access the management console through the LB using the following URL: https://ei.wso2.com/carbon
  6. Test load distribution via the following URLs: http://ei.wso2.com:80/ or https://ei.wso2.com:443/
  7. Add a sample proxy service with the log mediator in the inSequence so that it will display logs in the terminals, and then observe the cluster messages sent.
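    A minimal proxy service of the kind described above might look like the following Synapse configuration (the service name and backend endpoint are placeholders):

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="ClusterTestProxy"
       transports="http https"
       startOnLoad="true">
    <target>
        <inSequence>
            <!-- Log the full message so the terminal of the handling node shows it -->
            <log level="full"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </target>
</proxy>
```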

  8. Send a request to the endpoint through the load balancer to verify that the proxy service is activated on the active node(s) while the other nodes remain passive. This tests that the load balancer manages the active and passive states of the nodes, activating nodes as needed and leaving the rest in passive mode. For example, you would send the request to the following URL: http://{Load_Balancer_Mapped_URL_for_worker}/services/{Sample_Proxy_Name}

Tuning performance of the cluster

Follow the steps below to tune performance of the cluster:

The example parameter values given below might not be the optimal values for the specific hardware configurations in your environment. Therefore, it is recommended to carry out load tests on your environment to tune the load balancer and other configurations accordingly.

  1. Change the following default memory allocation settings for the server node and the JVM tuning parameters in the server startup scripts (i.e., the <EI_HOME>/bin/integrator.sh or <EI_HOME>/bin/integrator.bat file) according to the expected server load: -Xms256m -Xmx1024m -XX:MaxPermSize=256m
  2. Modify important system files, which affect all programs running on the server. It is recommended to familiarize yourself with these files using Unix/Linux documentation before editing them.

