
Clustering Business Process Server 3.5.0, 3.5.1 and 3.6.0

The instructions in this topic walk you through the steps for creating a cluster of WSO2 Business Process Server (BPS) instances. A BPS cluster is set up primarily to ensure high availability and scalability. The following sections explain the clustering concepts and architecture, and provide instructions for setting up a cluster.

BPS clustering concepts

High Availability

High availability means there is enough redundancy in the system that the service remains available to the outside world despite individual component failures. For example, in a two-node cluster, if one node fails, the other node must continue to serve requests until the failed node is restored.

Scalability

Scalability means increasing the processing capacity by adding more server nodes.

Load Balancer

Load balancing is the method of distributing the workload across multiple server nodes. A properly functioning BPS cluster requires a load balancer. The load balancer monitors the availability of the server nodes in the cluster and routes requests fairly among the available nodes. It is the external-facing interface of the cluster: it receives all incoming requests and distributes them to the available nodes. If a node fails, the load balancer stops routing requests to it until it is back online.

BPS clustering deployment architecture

In order to build a WSO2 Business Process Server cluster, you require the following:

  • Load balancer
  • Hardware/VM nodes for the BPS nodes
  • Database server

The following diagram depicts the deployment of a two-node WSO2 BPS cluster without the load balancer.

BPS nodes can be configured as a manager node or worker nodes. A BPS cluster can have one manager node and multiple worker nodes; this distinction applies to the deployment of artifacts. The node that handles artifact deployment first is the manager, and the other nodes are workers. The load balancer receives all requests and distributes the load (requests) to the two BPS nodes.

A Manager Node (Node 1 in the above diagram) is where the workflow artifacts (business processes/human tasks) are first deployed. The Worker Nodes (Node 2 in the above diagram) look at the configuration generated by the manager node for a given deployment artifact and then deploy that artifact in their own runtime. BPS requires this method of deployment because it automatically versions the deployed BPEL/human task artifacts. To keep the version number of a given artifact consistent across all nodes, the versioning must be done on a single node (the manager node). A BPS instance decides whether it is a manager node or a worker node by looking at its configuration registry mount.

BPS and the registry

In the simplest terms, the registry is an abstraction over a database schema. It provides an API through which you can store data in and retrieve data from a database. WSO2 BPS embeds the registry component and therefore has a built-in registry. The registry is divided into three spaces:

  • Local Registry: Stores information that is local to a server node.
  • Configuration Registry: Stores information that needs to be shared across server nodes of the same type. For example, the configuration registry is shared across the BPS server nodes, but it is not shared with server nodes of a different type.
  • Governance Registry: Stores information that can be shared across clusters of different server types. For example, the governance registry can be shared across a BPS cluster and an ESB cluster. In the above diagram, these registry configurations are depicted as individual databases.

Note: The BPS manager node refers to the configuration registry using a Read/Write link, while the BPS worker nodes refer to the configuration registry using a Read-only link.

BPS user store and authorization

The BPS management console requires users to log in before they can perform management activities. Additionally, various permission levels can be configured for access management. For human tasks, the operations a user can perform on a task depend on the logged-in user.

These access control, authentication, and authorization functions are inherited by BPS from the Carbon kernel. You can also configure an external LDAP/Active Directory to grant users access to the server. All user and permission information is kept in the user store database, referred to as UM DB in the above diagram. This database is also shared across all the cluster nodes.

BPMN (Activiti) database

BPS 3.5.0 introduces BPMN support by embedding the popular Activiti BPMN engine. BPS uses this database to persist BPMN packages and process instance information. Since two process engines are embedded, Apache ODE for BPEL and Activiti for BPMN, the Activiti database is kept separate.

BPS (WS-BPEL/WS-Human Tasks) persistence database

BPS handles long-running processes and human tasks. This means the runtime state of process instances and human task instances has to be persisted to a database. The BPS persistence database is where this process/task configuration data and process/task instance state are stored.

Installing BPS

  1. Download the latest version of BPS.
  2. Unzip the BPS zipped archive, and make a copy for each additional BPS node you want to create.
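
    For example, on Linux (a minimal sketch; the archive name and directory names are illustrative, so use the actual file you downloaded):

    unzip wso2bps-3.6.0.zip
    cp -r wso2bps-3.6.0 bps-node2   # copy for the second cluster node; the original extract serves as node 1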

Setting up the databases

These instructions assume you are installing MySQL as your relational database management system (RDBMS), but you can install another supported RDBMS as needed. You must create the following databases and configure the associated datasources.

Database Name      Description
WSO2_USER_DB       JDBC user store and authorization manager
REGISTRY_DB        Shared database for the config and governance registry mounts in the BPS nodes, and the user permissions database
REGISTRY_LOCAL1    Local registry space of BPS node 1
REGISTRY_LOCAL2    Local registry space of BPS node 2
BPS_DB             Process/task models and instance data of the BPEL/WS-Human Tasks engines
BPMN_DB            Process and instance data for BPMN
  1. See the instructions on setting up databases for installing MySQL and creating the WSO2_USER_DB, REGISTRY_DB, REGISTRY_LOCAL1, and REGISTRY_LOCAL2 databases. In addition, you need to create the BPS_DB and BPMN_DB databases. A sketch of the registry database creation is shown below.
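
    The following is a minimal sketch of creating the shared registry database, following the same pattern as the commands in the next step; the script path and the username/password are assumptions and should match your environment:

    mysql> create database REGISTRY_DB;
    mysql> use REGISTRY_DB;
    mysql> source <BPS_HOME>/dbscripts/mysql.sql;
    mysql> grant all on REGISTRY_DB.* TO username@localhost identified by "password";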

  2. Create the BPS_DB and BPMN_DB databases using the following commands, where <BPS_HOME> is the path to any of the BPS instances you installed, and username and password are the same as those you specified in the previous steps:

    mysql> create database BPS_DB;
    mysql> use BPS_DB;
    mysql> source <BPS_HOME>/dbscripts/bps/bpel/create/mysql.sql;
    mysql> grant all on BPS_DB.* TO username@localhost identified by "password";

    mysql> create database BPMN_DB;
    mysql> use BPMN_DB;
    mysql> source <BPS_HOME>/dbscripts/bps/bpmn/create/activiti.mysql.create.engine.sql;
    mysql> source <BPS_HOME>/dbscripts/bps/bpmn/create/activiti.mysql.create.history.sql;
    mysql> source <BPS_HOME>/dbscripts/bps/bpmn/create/activiti.mysql.create.identity.sql;
    mysql> grant all on BPMN_DB.* TO username@localhost identified by "password";
  3. On the first BPS node, open <BPS_HOME>/repository/conf/datasources/master-datasources.xml and configure the data sources to point to the WSO2_USER_DB, REGISTRY_DB, and REGISTRY_LOCAL1 databases (change the username, password, and database URL as needed for your environment). Repeat this configuration on the second BPS node, this time configuring the local registry to point to REGISTRY_LOCAL2. For details on how to do this, see the Setting up the Database topic. A sketch of one such datasource entry is shown below.
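
    The following is a minimal sketch of a single datasource entry in master-datasources.xml, assuming the MySQL host and credentials used elsewhere in this guide; the datasource and JNDI names are illustrative. Repeat the pattern for WSO2_USER_DB and the shared registry database, changing the name, JNDI name, and URL accordingly.

     <datasource>
         <name>REGISTRY_LOCAL1</name>
         <description>Local registry datasource for BPS node 1</description>
         <jndiConfig>
             <name>jdbc/WSO2CarbonDB</name>
         </jndiConfig>
         <definition type="RDBMS">
             <configuration>
                 <url>jdbc:mysql://localhost:3306/REGISTRY_LOCAL1</url>
                 <username>root</username>
                 <password>root</password>
                 <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                 <testOnBorrow>true</testOnBorrow>
                 <validationQuery>SELECT 1</validationQuery>
                 <validationInterval>30000</validationInterval>
                 <maxActive>50</maxActive>
                 <maxWait>60000</maxWait>
             </configuration>
         </definition>
     </datasource>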

  4. On each BPS node, open <BPS_HOME>/repository/conf/datasources/bps-datasources.xml, and configure the connection to the BPS database as follows (change the driver class, database URL, username, and password as needed for your environment):

     <datasource>
         <name>BPS_DS</name>
         <description></description>
         <jndiConfig>
             <name>bpsds</name>
         </jndiConfig>
         <definition type="RDBMS">
             <configuration>
                 <url>jdbc:mysql://localhost:3306/BPS_DB</url>
                 <username>root</username>
                 <password>root</password>
                 <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                 <testOnBorrow>true</testOnBorrow>
                 <validationQuery>SELECT 1</validationQuery>
                 <validationInterval>30000</validationInterval>
                 <useDataSourceFactory>false</useDataSourceFactory>
                 <defaultAutoCommit>true</defaultAutoCommit>
                 <maxActive>100</maxActive>
                 <maxIdle>20</maxIdle>
                 <maxWait>10000</maxWait>
             </configuration>
         </definition>
     </datasource>

    Note: The <defaultAutoCommit> entry must be set to true. This is an important setting for the BPEL engine, and you must set it on each node in the cluster.

  5. On each BPS node, open the <BPS_HOME>/repository/conf/datasources/activiti-datasources.xml file and configure the connection to the BPMN database as follows (change the database URL, username, and password as needed for your environment):

    <datasource>
                <name>ACTIVITI_DB</name>
                <description>The datasource used for activiti engine</description>
                <jndiConfig>
                    <name>jdbc/ActivitiDB</name>
                </jndiConfig>
                <definition type="RDBMS">
                    <configuration>
                        <url>jdbc:mysql://localhost:3306/BPMN_DB</url>
                        <username>root</username>
                        <password>root</password>
                        <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                        <maxActive>50</maxActive>
                        <maxWait>60000</maxWait>
                        <testOnBorrow>true</testOnBorrow>
                        <validationQuery>SELECT 1</validationQuery>
                        <validationInterval>30000</validationInterval>
                    </configuration>
                </definition>
    </datasource>


  6. On each BPS node, open <BPS_HOME>/repository/conf/registry.xml and configure the registry mounts. The registry mount path is used to identify the type of registry. For example, "/_system/config" refers to the configuration registry and "/_system/governance" refers to the governance registry. The following is an example configuration for BPS registry mounting. Leave the configuration for the local registry as it is and add the following new entries.

    Registry configuration for BPS manager node
    <dbConfig name="wso2bpsregistry">
      <dataSource>jdbc/WSO2RegistryDB</dataSource>
    </dbConfig>
    
    <remoteInstance url="https://localhost:9443/registry">
      <id>instanceid</id>
      <dbConfig>wso2bpsregistry</dbConfig>
      <readOnly>false</readOnly>
      <enableCache>true</enableCache>
      <registryRoot>/</registryRoot>
      <cacheId>root@jdbc:mysql://localhost:3306/REGISTRY_DB</cacheId>
    </remoteInstance>
    
    <mount path="/_system/config" overwrite="true">
      <instanceId>instanceid</instanceId>
      <targetPath>/_system/config</targetPath>
    </mount>
    
    <mount path="/_system/governance" overwrite="true">
      <instanceId>instanceid</instanceId>
      <targetPath>/_system/governance</targetPath>
    </mount>

    Note the following about this configuration:

    • The dbConfig entry identifies the data source you configured in the master-datasources.xml file, and gives that datasource entry a unique name to refer to it by, which here is "wso2bpsregistry".
    • The remoteInstance section refers to an external registry mount. You can specify the read-only/read-write nature of this instance as well as the caching configuration and the registry root location. Additionally, you must specify the cacheId for caching to function properly in the clustered environment; the cacheId is formed from the database username and the JDBC connection URL of the registry database (for example, root@jdbc:mysql://localhost:3306/REGISTRY_DB). You define a unique "id" for each remote instance, which is then referenced from the mount configurations.
    • In the above example, the unique id of the remote instance is instanceid.
    • In each of the mounting configurations, specify the actual mount path and target mount path.
    Registry configuration for BPS worker node
    <dbConfig name="wso2bpsregistry">
      <dataSource>jdbc/WSO2RegistryDB</dataSource>
    </dbConfig>
    
    <remoteInstance url="https://localhost:9443/registry">
      <id>instanceid</id>
      <dbConfig>wso2bpsregistry</dbConfig>
      <readOnly>true</readOnly>
      <enableCache>true</enableCache>
      <registryRoot>/</registryRoot>
      <cacheId>root@jdbc:mysql://localhost:3306/REGISTRY_DB</cacheId>
    </remoteInstance>
    
    <mount path="/_system/config" overwrite="true">
      <instanceId>instanceid</instanceId>
      <targetPath>/_system/config</targetPath>
    </mount>
    
    <mount path="/_system/governance" overwrite="true">
      <instanceId>instanceid</instanceId>
      <targetPath>/_system/governance</targetPath>
    </mount>

    Note the following about this configuration:

    • This configuration is the same as the manager node configuration above, except that the readOnly property of the remote instance configuration is set to true.
  7. On each BPS node, open <BPS_HOME>/repository/conf/user-mgt.xml and configure the user stores. In the user-mgt.xml file, enter the datasource information for the user store that you configured previously in the master-datasources.xml file. You can also change the admin username and password, but you must do so before starting the server.

    <Configuration>
      <AddAdmin>true</AddAdmin>
      <AdminRole>admin</AdminRole>
      <AdminUser>
        <UserName>admin</UserName>
        <Password>admin</Password>
      </AdminUser>
      <EveryOneRoleName>everyone</EveryOneRoleName>
      <Property name="dataSource">jdbc/WSO2UMDB</Property>
    </Configuration>


  8. On each BPS node, open <BPS_HOME>/repository/conf/axis2/axis2.xml and configure the clustering section. The axis2.xml file is used to enable clustering. The well-known address (WKA) based membership scheme is normally used for BPS clustering. In WKA-based clustering, a subset of cluster members (the well-known members) must be configured in every member of the cluster, and at least one well-known member has to be operational at all times.

    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"  enable="true">
      <parameter name="membershipScheme">wka</parameter>
      <parameter name="localMemberHost">127.0.0.1</parameter>
      <parameter name="localMemberPort">4000</parameter>
      <members>
        <member>
          <hostName>127.0.0.1</hostName>
          <port>4000</port>
        </member>
        <member>
          <hostName>127.0.0.1</hostName>
          <port>4010</port>
        </member>
      </members>
    </clustering>

    Make note of the following for more details on configuring this.

    • Change the enable parameter to true.

    • Find the membershipScheme parameter and set it to wka.

    • Configure the localMemberHost and localMemberPort entries. If the nodes run on the same server, use different values on each node to prevent conflicts (see the sketch after this list).

    • Under the members section, add the hostName and port for each WKA member. As there are only two nodes in this sample cluster configuration, both nodes are configured as WKA members.
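
    For example, if both nodes run on the same machine, the second node's clustering section can differ only in its local member port. A minimal sketch, assuming node 2 uses port 4010 as listed in the members section above:

    <parameter name="membershipScheme">wka</parameter>
    <parameter name="localMemberHost">127.0.0.1</parameter>
    <parameter name="localMemberPort">4010</parameter>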

  9. On each BPS node, open <BPS_HOME>/repository/conf/etc/tasks-config.xml and change the taskServerMode configuration. BPS also ships with the task server component. By default, when clustering is enabled, this component waits for two task server nodes, so you need to change this entry to STANDALONE in order to start the BPS server.
    <taskServerMode>STANDALONE</taskServerMode>

    About using AUTO

    Note that the Task Server configuration does not have an impact on the BPS server runtime functionality. Hence, using AUTO or STANDALONE here will not affect how the BPEL processes are executed during runtime.

    However, the default setting <taskServerCount>2</taskServerCount> in the <BPS_HOME>/repository/conf/etc/tasks-config.xml file does have an impact if you use AUTO. When AUTO is used and clustering is enabled, the server waits until it picks up another node so that two task server instances are up and running. Hence you will need to start both nodes simultaneously.

    So if you want to use AUTO, change the taskServerCount to 1 so that you can start the management node first.
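
    The following is a minimal sketch of the relevant tasks-config.xml entries if you choose AUTO and want to start the management node first (both elements are described above):

    <taskServerMode>AUTO</taskServerMode>
    <taskServerCount>1</taskServerCount>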

  10. On each BPS node, open <BPS_HOME>/repository/conf/bps.xml and configure the following:
    • Enable the distributed lock - This entry enables the Hazelcast-based synchronization mechanism, which prevents concurrent modification of instance state by cluster members.
      <tns:UseDistributedLock>true</tns:UseDistributedLock>
    • Configure the scheduler thread pool size - The thread pool size must always be smaller than the maxActive database connection count configured in the <BPS_HOME>/repository/conf/datasources/master-datasources.xml file. When configuring the thread pool size, allocate 10-15 threads per core depending on your setup, and leave some database connections spare since BPS uses database connections for the management API as well.
      <tns:ODESchedulerThreadPoolSize>0</tns:ODESchedulerThreadPoolSize>

      Example settings for a two-node cluster (see the sketch after this list):

      - Oracle server configured database connection size - 250

      - maxActive entry in the master-datasources.xml file of each node - 100

      - Scheduler thread pool size for each node - 50

    • Node ID - Optionally, a unique ID can be assigned to each node. Uncomment the following element and give each node a unique ID, for example:
      <tns:NodeId>node1</tns:NodeId>
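
    The following sketch shows how the example settings above map onto the two files; the values are illustrative and should be tuned for your environment:

      <!-- bps.xml on each node -->
      <tns:ODESchedulerThreadPoolSize>50</tns:ODESchedulerThreadPoolSize>

      <!-- master-datasources.xml on each node -->
      <maxActive>100</maxActive>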

Running the products in a cluster

  1. Deploy artifacts to each product's deployment location (<BPS_HOME>/repository/deployment/....) by either manually copying the artifacts or using the Deployment Synchronizer. If you use the Deployment Synchronizer, install SVNKit (svnClientBundle-1.0.0.jar) from http://dist.wso2.org/tools/svnClientBundle-1.0.0.jar to the <BPS_HOME>/repository/components/dropins folder of each BPS node, and open <BPS_HOME>/repository/conf/carbon.xml on each node to configure it.

    If you want automatic deployment of artifacts across the cluster nodes, you can enable the deployment synchronizer feature in the carbon.xml file.

    <DeploymentSynchronizer>
      <Enabled>true</Enabled>
      <AutoCommit>true</AutoCommit>
      <AutoCheckout>true</AutoCheckout>
      <RepositoryType>svn</RepositoryType>
      <SvnUrl>http://10.100.3.115/svn/repos/as</SvnUrl>
      <SvnUser>wso2</SvnUser>
      <SvnPassword>wso2123</SvnPassword>
      <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>

    The Deployment Synchronizer works by committing the artifacts to the configured SVN location from one node (the node with the AutoCommit option set to true) and sending cluster messages to all other nodes about the addition/change of the artifact. When the cluster message is received, all other nodes perform an SVN update, obtaining the changes into the relevant deployment directories, and the server then deploys these artifacts automatically. For the master node, keep the AutoCommit and AutoCheckout entries as true. For all other nodes, change the AutoCommit entry to false.

  2. Start the BPS nodes. Use the following if your nodes are worker nodes:

    sh <BPS_HOME>/bin/wso2server.sh

Performance tuning in the BPS cluster

In the server startup script, you can configure the memory allocation for the server node as well as the JVM tuning parameters. Open the wso2server.sh or wso2server.bat file located in the <BPS_HOME>/bin directory and go to the bottom of the file to find those parameters. Change them according to the expected server load. The following is the default memory allocation for a WSO2 server:
-Xms256m -Xmx1024m -XX:MaxPermSize=256m

  • Performance tuning requires you to modify important system files, which affect all programs running on the server. We recommend that you familiarize yourself with these files using Unix/Linux documentation before editing them.
  • The parameter values discussed below are just examples. They might not be the optimal values for the specific hardware configurations in your environment. We recommend that you carry out load tests on your environment and tune the load balancer accordingly.

Load balancing

If needed, you can install a hardware load balancer or an HTTP load balancer such as NGINX Plus as the front end to the BPS nodes. See /wiki/spaces/CLUSTER44x/pages/9732195 for details on configuring Nginx as the load balancer for a WSO2 product cluster. A minimal example configuration is sketched below.
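
The following is a minimal sketch of an Nginx front end for the two-node cluster, assuming the BPS nodes are reachable at the hypothetical hosts bps-node1 and bps-node2 on the default HTTPS servlet port 9443; the server name and certificate paths are placeholders, so adjust everything to your environment:

# /etc/nginx/conf.d/bps.conf (illustrative)
upstream bps_cluster {
    # the two BPS nodes; hostnames and ports are assumptions
    server bps-node1:9443;
    server bps-node2:9443;
}

server {
    listen 443 ssl;
    server_name bps.example.com;                  # external-facing host name (assumption)
    ssl_certificate     /etc/nginx/ssl/bps.crt;   # placeholder certificate paths
    ssl_certificate_key /etc/nginx/ssl/bps.key;

    location / {
        proxy_pass https://bps_cluster;           # forward requests to the BPS nodes over HTTPS
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}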