This site contains the documentation that is relevant to older WSO2 product versions and offerings.
For the latest WSO2 documentation, visit https://wso2.com/documentation/.

Clustering Business Process Server

Creating a cluster of WSO2 Business Process Server (BPS) instances is similar to clustering other WSO2 products. The following instructions walk you through the steps:

BPS clustering deployment diagram

To build a WSO2 Business Process Server cluster, you require the following:

  • Load balancer 
  • Hardware/VM nodes for the BPS instances
  • Database Server

The following diagram depicts the deployment of a two-node WSO2 BPS cluster, without the load balancer.

The master node (Node1 in the above diagram) is where the workflow artifacts (business processes/human tasks) are first deployed. The slave nodes (Node2 in the above diagram) look at the configuration generated by the master node for a given deployment artifact and then deploy those artifacts in their own runtime. BPS requires this method of deployment because it automatically versions the deployed BPEL/human task artifacts. Hence, to get the same version number for a given deployment artifact across all the nodes, the versioning must be done on one node (the master node). A BPS server decides whether it is a master node or a slave node by looking at its registry mounting configuration.

Installing BPS

  1. Download the latest version of BPS.
  2. Unzip the BPS zipped archive, and make a copy for each additional BPS node you want to create.

Installing and creating the databases

These instructions assume you are installing MySQL as your relational database management system (RDBMS), but you can install another supported RDBMS as needed. You will create the following databases and associated data sources:

Database Name    Description
WSO2_USER_DB     JDBC user store and authorization manager
REGISTRY_DB      Shared database for the config and governance registry mounts in the BPS nodes, and the user permissions database
REGISTRY_LOCAL1  Local registry space of BPS node 1
REGISTRY_LOCAL2  Local registry space of BPS node 2
BPS_DB           Instance data of the process engine
  1. See the instructions on setting up databases to install and create the WSO2_USER_DB, REGISTRY_DB, REGISTRY_LOCAL1, and REGISTRY_LOCAL2 databases. In addition, you need to create the BPS_DB database.

  2. Create the BPS_DB database using the following commands, where <BPS_HOME> is the path to any of the BPS instances you installed, and username and password are the same as those you specified in the previous steps:

    mysql> create database BPS_DB;
    mysql> use BPS_DB;
    mysql> source <BPS_HOME>/dbscripts/bps/mysql.sql;
    mysql> grant all on BPS_DB.* TO username@localhost identified by "password";
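
    As an optional sanity check (the exact table list depends on your BPS version, so this is illustrative rather than expected output), you can confirm that the schema and the grant took effect:

    mysql> show tables in BPS_DB;
    mysql> show grants for username@localhost;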
  3. On the first BPS node, open <BPS_HOME>/repository/conf/datasources/master-datasources.xml and configure the data sources to point to the WSO2_USER_DB, REGISTRY_DB, and REGISTRY_LOCAL1 databases (change the username, password, and database URL as needed for your environment). Repeat this configuration on the second BPS node, this time configuring the local registry to point to REGISTRY_LOCAL2. For details on how to do this, see the Setting up the Database topic. The following configuration connects to BPS_DB; add it on both nodes:

    <datasource>
        <name>BPS_DB</name>
        <description>The datasource used for BPS</description>
        <jndiConfig>
            <name>jdbc/BPSDB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://localhost:3306/BPS_DB?autoReconnect=true</url>
                <username>root</username>
                <password>root</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>
  4. On each BPS node, open <BPS_HOME>/repository/conf/datasources.properties, and configure the connection to the BPS database as follows (change the driver class, database URL, username, and password as needed for your environment):

    synapse.datasources=bpsds
    synapse.datasources.icFactory=com.sun.jndi.rmi.registry.RegistryContextFactory
    synapse.datasources.providerPort=2199
    
    synapse.datasources.bpsds.registry=JNDI
    synapse.datasources.bpsds.type=BasicDataSource
    synapse.datasources.bpsds.driverClassName=com.mysql.jdbc.Driver
    synapse.datasources.bpsds.url=jdbc:mysql://localhost:3306/BPS_DB?autoReconnect=true
    synapse.datasources.bpsds.username=root
    synapse.datasources.bpsds.password=root
    synapse.datasources.bpsds.validationQuery=SELECT 1
    synapse.datasources.bpsds.dsName=bpsds
    synapse.datasources.bpsds.maxActive=100
    synapse.datasources.bpsds.maxIdle=20
    synapse.datasources.bpsds.maxWait=10000
  5. On each BPS node, open <BPS_HOME>/repository/conf/registry.xml and configure the registry mounts. The mount configuration differs between the master node and the slave nodes (see the explanation of master and slave nodes above).

    The registry mount path is used to identify the type of registry. For example, "/_system/config" refers to the configuration registry and "/_system/governance" refers to the governance registry.

    Registry configuration for BPS master node
    <dbConfig name="wso2bpsregistry">
      <dataSource>jdbc/WSO2RegistryDB</dataSource>
    </dbConfig>
    
    <remoteInstance url="https://localhost:9443/registry">
      <id>instanceid</id>
      <dbConfig>wso2bpsregistry</dbConfig>
      <readOnly>false</readOnly>
      <enableCache>true</enableCache>
      <registryRoot>/</registryRoot>
      <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB</cacheId>
    </remoteInstance>
    
    <mount path="/_system/config" overwrite="virtual">
      <instanceId>instanceid</instanceId>
      <targetPath>/_system/bpsConfig</targetPath>
    </mount>
    
    <mount path="/_system/governance" overwrite="virtual">
      <instanceId>instanceid</instanceId>
      <targetPath>/_system/governance</targetPath>
    </mount>

    Note the following details about this configuration:

    • The dbConfig entry identifies the data source you configured in the master-datasources.xml file; we give that entry a unique name to refer to it by, which is "wso2bpsregistry".
    • The remoteInstance section refers to an external registry mount. We can specify the read-only/read-write nature of this instance, the caching configuration, and the registry root location. Additionally, we need to specify the cacheId for caching to function properly in the clustered environment. Note that the cacheId is the same as the JDBC connection URL to our registry database. You define a unique id for each remote instance, which is then referred to from the mount configurations.
    • In the above example, the unique id of the remote instance is instanceid.
    • In each mount configuration, we specify the actual mount path and the target mount path.
    Registry configuration for BPS slave node
    <dbConfig name="wso2bpsregistry">
      <dataSource>jdbc/WSO2RegistryDB</dataSource>
    </dbConfig>
    
    <remoteInstance url="https://localhost:9443/registry">
      <id>instanceid</id>
      <dbConfig>wso2bpsregistry</dbConfig>
      <readOnly>true</readOnly>
      <enableCache>true</enableCache>
      <registryRoot>/</registryRoot>
      <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB</cacheId>
    </remoteInstance>
    
    <mount path="/_system/config" overwrite="virtual">
      <instanceId>instanceid</instanceId>
      <targetPath>/_system/bpsConfig</targetPath>
    </mount>
    
    <mount path="/_system/governance" overwrite="virtual">
      <instanceId>instanceid</instanceId>
      <targetPath>/_system/governance</targetPath>
    </mount>

    Note the following details about this configuration:

    • This configuration is the same as that of the master node, except that the readOnly property of the remote instance is set to true.
  6. On each BPS node, open  <BPS_HOME>/repository/conf/user-mgt.xml  and configure the user stores. 

    In the user-mgt.xml file, enter the data source information for the user store that you configured previously in the master-datasources.xml file. You can also change the admin username and password; however, you must do this before starting the server.

    <Configuration>
      <AddAdmin>true</AddAdmin>
      <AdminRole>admin</AdminRole>
      <AdminUser>
        <UserName>admin</UserName>
        <Password>admin</Password>
      </AdminUser>
      <EveryOneRoleName>everyone</EveryOneRoleName>
      <Property name="dataSource">jdbc/WSO2UMDB</Property>
    </Configuration>


  7. On each BPS node, open <BPS_HOME>/repository/conf/axis2/axis2.xml and configure the clustering section.

    We use the axis2.xml file to enable clustering, using the well-known address (WKA) based membership scheme. In WKA-based clustering, a subset of the cluster members must be configured in every member of the cluster, and at least one well-known member has to be operational at all times.

    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"  enable="true">
      <parameter name="membershipScheme">wka</parameter>
      <parameter name="localMemberHost">127.0.0.1</parameter>
      <parameter name="localMemberPort">4000</parameter>
      <members>
        <member>
          <hostName>127.0.0.1</hostName>
          <port>4000</port>
        </member>
        <member>
          <hostName>127.0.0.1</hostName>
          <port>4010</port>
        </member>
      </members>
    </clustering>

    Note the following details about this configuration:

    • Change the enable attribute of the clustering element to true.

    • Find the membershipScheme parameter and set it to wka.

    • Configure the localMemberHost and localMemberPort entries. These must have different values on the master and slave nodes if they run on the same server, to prevent conflicts.

    • Under the members section, add the hostName and port of each WKA member. As we have only two nodes in our sample cluster configuration, we configure both nodes as WKA nodes.
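
    Note that when both nodes run on the same machine, the rest of the Carbon server ports (such as the 9443 management port) must not clash either. A common way to handle this, sketched here under the assumption of a default carbon.xml layout, is to set a port offset on the second node so that all its Carbon ports are shifted (e.g. 9443 becomes 9444):

    <!-- <BPS_HOME>/repository/conf/carbon.xml on the second node -->
    <Ports>
        <Offset>1</Offset>
    </Ports>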

  8. On each BPS node, open <BPS_HOME>/repository/conf/etc/tasks-config.xml and change the taskServerMode configuration. BPS is shipped with the task server component as well. By default, when we enable clustering, this component waits for two task server nodes. Hence we need to change this entry to STANDALONE in order to start the BPS server.
    <taskServerMode>STANDALONE</taskServerMode>

    About using AUTO

    Note that the Task Server configuration does not have an impact on the BPS server runtime functionality. Hence, using AUTO or STANDALONE here will not affect how the BPEL processes are executed during runtime.

    However, note that the default setting <taskServerCount>2</taskServerCount> in the <BPS_HOME>/repository/conf/etc/tasks-config.xml file has an impact here if you use AUTO. When the AUTO setting is enabled, and clustering is enabled, the server will wait till it picks up another node so that there are two Task Server instances up and running. Hence you will need to start both nodes simultaneously. 

    So if you want to use AUTO, change the taskServerCount to 1 so that you can start the management node first.

  9. On each BPS node, open <BPS_HOME>/repository/conf/bps.xml and configure the following:
    1. Enable distributed lock - This entry enables the Hazelcast-based synchronizations mechanism to prevent concurrent modification of the instance state by cluster members. 
      <tns:UseDistributedLock>true</tns:UseDistributedLock>
    2. Configure the scheduler thread pool size - The thread pool size should always be smaller than the maxActive database connection count configured in the datasources.properties file. When configuring the thread pool size, allocate 10-15 threads per core, depending on your setup, and leave some database connections spare, since BPS uses database connections for the management API as well.
      <tns:ODESchedulerThreadPoolSize>0</tns:ODESchedulerThreadPoolSize>

      Example settings for a two-node cluster:

      Oracle server configured database connection size - 250
      maxActive entry in the datasources.properties file of each node - 100
      Scheduler thread pool size of each node - 50
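
The sizing guideline above can be sketched as a small calculation. This is purely illustrative: the function name, the 20% headroom below the database connection limit, and the 50/50 split between scheduler threads and management-API connections are assumptions for the sketch, not BPS settings.

```python
def plan_bps_pools(db_max_connections, node_count, headroom=0.2, mgmt_share=0.5):
    """Suggest per-node maxActive and scheduler thread pool sizes."""
    # Keep some headroom below the database server's connection limit,
    # then split the remaining budget evenly across the cluster nodes.
    per_node_max_active = int(db_max_connections * (1 - headroom)) // node_count
    # Reserve roughly half of each node's connections for the management API;
    # this keeps the scheduler pool below the node's maxActive setting.
    scheduler_pool = int(per_node_max_active * mgmt_share)
    return per_node_max_active, scheduler_pool

# Matches the example above: 250 Oracle connections, two nodes.
print(plan_bps_pools(250, 2))  # -> (100, 50)
```

Whatever formula you use, the invariant to preserve is that the scheduler pool stays strictly below maxActive on each node.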

Running the products in a cluster

  1. Deploy artifacts to each product's deployment location (<BPS_HOME>/repository/deployment/....) by either manually copying the artifacts or by using the Deployment Synchronizer. To use the Deployment Synchronizer, download SVNKit (svnClientBundle-1.0.0.jar) from http://dist.wso2.org/tools/svnClientBundle-1.0.0.jar and copy it to the <BPS_HOME>/repository/components/dropins folder on each BPS node.

    If you want automatic deployment of artifacts across the cluster nodes, you can enable the deployment synchronizer feature in the carbon.xml file. 

    <DeploymentSynchronizer>
      <Enabled>true</Enabled>
      <AutoCommit>true</AutoCommit>
      <AutoCheckout>true</AutoCheckout>
      <RepositoryType>svn</RepositoryType>
      <SvnUrl>http://10.100.3.115/svn/repos/as</SvnUrl>
      <SvnUser>wso2</SvnUser>
      <SvnPassword>wso2123</SvnPassword>
      <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>

    The Deployment Synchronizer works by committing the artifacts to the configured SVN location from one node (the node with the AutoCommit option set to true) and sending cluster messages to all other nodes about the addition or change of the artifact. When the cluster message is received, all other nodes do an SVN update, obtaining the changes in the relevant deployment directories, and the server then deploys these artifacts automatically. For the master node, keep the AutoCommit and AutoCheckout entries set to true. For all other nodes, set the AutoCommit entry to false.

  2. Start the BPS nodes. Use the following if your nodes are worker nodes:

    sh <BPS_HOME>/bin/wso2server.sh -DworkerNode=true
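
    On Windows, assuming the standard .bat startup script that ships with BPS, the equivalent command is:

    <BPS_HOME>\bin\wso2server.bat -DworkerNode=true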

Performance tuning in the BPS cluster

In the server startup script, you can configure the memory allocation for the server node as well as the JVM tuning parameters. Open the wso2server.sh or wso2server.bat file in the <BPS_HOME>/bin directory and go to the bottom of the file to find these parameters. Change them according to the expected server load. The following is the default memory allocation for a WSO2 server.
-Xms256m -Xmx1024m -XX:MaxPermSize=256m

  • Performance tuning requires you to modify important system files, which affect all programs running on the server. We recommend you to familiarize yourself with these files using Unix/Linux documentation before editing them.
  • The parameter values we discuss below are just examples. They might not be the optimal values for the specific hardware configurations in your environment. We recommend that you carry out load tests on your environment and tune these parameters accordingly.
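
As an illustration only (the right values depend entirely on your load tests and available memory), a node expecting heavier load might raise the defaults along these lines:

-Xms512m -Xmx2048m -XX:MaxPermSize=512m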

Load balancing

If needed, you can install a hardware load balancer or an HTTP load balancer such as NGINX Plus as the front end to the BPS nodes. See Configuring NGINX Plus for details on configuring Nginx as the load balancer for a WSO2 product cluster.