...
The Activiti BPMN engine uses the Activiti datasource. As with the BPS datasource, you can allocate as many database connections for the BPMN engine as necessary.
Execution of each BPMN process instance makes multiple database calls. Therefore, when multiple process instances are executed by concurrent threads (i.e., users), multiple database connections are used. Accordingly, the database connection pool has to be configured to provide the required number of connections based on the expected maximum number of concurrent process executions. Configure this by setting the maxActive parameter in the <BPS_HOME>/repository/conf/datasources/activiti-datasources.xml file.
...
...
To avoid failures that may occur due to congestion for database connections, maxActive should be equal to the expected number of concurrent process executions. However, a smaller number of connections may be sufficient depending on the properties of the executed process models (i.e., the number and type of tasks) and the behavior of the processes (i.e., the presence of timer events and the reaction time of process participants). If the database connection pool size (i.e., maxActive) has to be reduced, do so based on load tests with actual process models and expected process behaviors.
The maximum number of connections allowed for the database connection pool (i.e., maxActive) should not exceed the maximum number of connections (i.e., DB sessions) allowed for the database server. In addition, if the database server is shared with the BPEL runtime or another server, make sure a sufficient number of sessions is available for all shared servers. For example, if the BPMN connection pool needs 100 connections and the BPEL connection pool needs 50 connections, and peak BPMN and BPEL loads are expected at the same time, the number of database sessions should be at least 150.
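As a sanity check, the sizing rule above can be expressed as simple arithmetic. The following sketch (the function name is illustrative, not part of BPS) estimates the sessions a shared database server must provide:

```python
def required_db_sessions(pool_sizes, peak_overlap=True):
    """Estimate the DB sessions needed when several connection pools share
    one database server. If peak loads can overlap, the server must
    accommodate the sum of all pool maxActive values; otherwise the largest
    single pool is the binding constraint."""
    return sum(pool_sizes) if peak_overlap else max(pool_sizes)

# Example from the text: a BPMN pool of 100 and a BPEL pool of 50 peaking together.
sessions = required_db_sessions([100, 50])  # at least 150 DB sessions
```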
Configure the Activiti datasource by editing the <BPS_HOME>/repository/conf/datasources/activiti-datasources.xml file and changing the following.
Code Block
<datasources>
<datasource>
<name>ACTIVITI_DB</name>
<description>The datasource used for activiti engine</description>
<jndiConfig>
<name>jdbc/ActivitiDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/activity</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
</datasources>
...
Code Block
<tns:odeschedulerthreadpoolsize>50</tns:odeschedulerthreadpoolsize>
Multi-threaded HTTP connection manager
...
Code Block
<tns:WSO2BPS xmlns:tns="http://wso2.org/bps/config">
    ...
    <tns:MultithreadedHttpConnectionManagerConfig>
        <tns:maxConnectionsPerHost value="100"/>
        <tns:maxTotalConnections value="200"/>
    </tns:MultithreadedHttpConnectionManagerConfig>
    ...
</tns:WSO2BPS>
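maxConnectionsPerHost effectively caps how many invocations of a single partner endpoint can be in flight at once, with maxTotalConnections as a global ceiling. The gating behavior can be sketched with per-host semaphores; this is an illustration of the idea, not the actual CommonsHTTPTransportSender internals:

```python
import threading
from collections import defaultdict

class PerHostConnectionLimiter:
    """Illustrative sketch: cap concurrent connections per host plus a
    global cap, mirroring maxConnectionsPerHost / maxTotalConnections."""
    def __init__(self, max_per_host, max_total):
        self.total = threading.Semaphore(max_total)
        self.per_host = defaultdict(lambda: threading.Semaphore(max_per_host))

    def acquire(self, host):
        # A caller blocks here when either the global or per-host cap is hit.
        self.total.acquire()
        self.per_host[host].acquire()

    def release(self, host):
        self.per_host[host].release()
        self.total.release()
```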
Timeouts
This configuration is relevant when partner services take a long time to respond. When partner services are slow, the invoke activity of the calling BPEL process fails due to a message exchange timeout. Increasing the timeout interval avoids these kinds of failures. Also note that slow partner services slow down the entire BPEL process, which can cause the client application to time out, so the timeout interval for the client application must be increased as well. To do this, configure the <BPS_HOME>/repository/conf/bps.xml file and the <BPS_HOME>/repository/conf/axis2/axis2.xml file as shown below.
...
Code Block
<transportSender name="http" class="org.apache.axis2.transport.http.CommonsHTTPTransportSender">
    <parameter name="PROTOCOL">HTTP/1.1</parameter>
    <parameter name="Transfer-Encoding">chunked</parameter>
    <!-- This parameter has been added to overcome problems encountered in SOAP action parameter -->
    <parameter name="OmitSOAP12Action">true</parameter>
    <parameter name="SO_TIMEOUT">600000</parameter>
    <parameter name="CONNECTION_TIMEOUT">600000</parameter>
</transportSender>
<transportSender name="https" class="org.apache.axis2.transport.http.CommonsHTTPTransportSender">
    <parameter name="PROTOCOL">HTTP/1.1</parameter>
    <parameter name="Transfer-Encoding">chunked</parameter>
    <!-- This parameter has been added to overcome problems encountered in SOAP action parameter -->
    <parameter name="OmitSOAP12Action">true</parameter>
    <parameter name="SO_TIMEOUT">600000</parameter>
    <parameter name="CONNECTION_TIMEOUT">600000</parameter>
</transportSender>
Here you must increase the default values for the message exchange timeout and the external service invocation timeout. Also set the SO_TIMEOUT and CONNECTION_TIMEOUT parameters in the HTTP transport sender, increasing the timeout value from the default to 10 minutes (600000 milliseconds).
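On the client side, the same 10-minute budget should be mirrored. As a hedged sketch, a Python client invoking a slow BPEL process could set a matching socket timeout (the endpoint URL and payload here are hypothetical):

```python
import urllib.request

# Mirrors the server-side SO_TIMEOUT/CONNECTION_TIMEOUT of 600000 ms.
TIMEOUT_SECONDS = 600000 / 1000  # 10 minutes

def invoke_process(url, soap_payload):
    """Send a SOAP request, allowing the partner chain up to 10 minutes."""
    req = urllib.request.Request(
        url,
        data=soap_payload.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req, timeout=TIMEOUT_SECONDS) as resp:
        return resp.read()
```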
HumanTask caching
HumanTask caching is important when you have to deal with a large user store. HumanTasks are tightly coupled with users and user roles/groups. Because of this, BPS does a lot of user store lookups for HumanTask operations. These user store calls can take a considerable amount of time if the user store is large or located remotely, degrading the performance of the entire HumanTask engine. Caching user and role lookup data on the BPS side reduces these remote user store calls and improves the overall performance of the HumanTask engine.
Enable HumanTask caching in the <BPS_HOME>/repository/conf/humantask.xml file.
Code Block
<cacheconfiguration>
    <enablecaching>true</enablecaching>
</cacheconfiguration>
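The benefit of the cache comes from short-circuiting repeated user and role lookups. The idea can be sketched as a simple TTL cache; this is an illustration, not BPS's actual cache implementation:

```python
import time

class TTLCache:
    """Sketch of caching user-store lookups: the remote call is made only
    when the entry is missing or older than ttl_seconds."""
    def __init__(self, ttl_seconds, lookup_fn):
        self.ttl = ttl_seconds
        self.lookup_fn = lookup_fn   # e.g. a remote user-store query
        self.entries = {}            # key -> (value, fetched_at)
        self.remote_calls = 0        # counts actual user-store round trips

    def get(self, key):
        entry = self.entries.get(key)
        if entry is not None and time.time() - entry[1] < self.ttl:
            return entry[0]          # served from cache, no remote call
        self.remote_calls += 1
        value = self.lookup_fn(key)
        self.entries[key] = (value, time.time())
        return value
```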
Number of HumanTask scheduler threads
This is relevant when you are not using HumanTask deadlines or escalations. Deadlines and escalations are scheduled tasks executed by the HumanTask scheduler. By default, 50 threads are allocated for the HumanTask scheduler. If you are not using deadlines or escalations, you can lower this value to, for example, 5, which frees idle threads in the BPS server. Note that you cannot set this to 0, because the HumanTask engine has several internal scheduled tasks to run.
Configure this value in the <BPS_HOME>/repository/conf/humantask.xml file.
Code Block
<schedulerconfig>
    <maxthreadpoolsize>5</maxthreadpoolsize>
</schedulerconfig>
BPEL process persistence
Configuring BPEL process persistence is recommended. If a process is implemented in the request-response interaction model, consider using in-memory processes instead of persistent processes. This decision mainly depends on the specific business use case.
Process-to-process communication
Use process-to-process communication. This reduces the overhead introduced by additional network calls when one BPEL process calls another deployed in the same BPS instance.
Event filtering
Configure event-filtering at process and scope level. A lot of database resources can be saved by reducing the number of events generated.
Virtualized environments
Take precautions when deploying WSO2 BPS in virtualized environments. Random increases in network latency and performance degradation have been observed when running BPS on VMs.
Process hydration and dehydration
One technique to reduce the memory utilization of the BPS engine is process hydration and dehydration. You can configure the hydration/dehydration policy in the <BPS_HOME>/repository/conf/bps.xml file or define a custom hydration/dehydration policy.
The following example enables the dehydration policy, sets the maximum number of deployed processes that can exist in memory at a particular time to 100, and sets the maximum age of a process before it is dehydrated to 5 minutes (300000 milliseconds).
Code Block
<tns:ProcessDehydration maxCount="100" value="true">
    <tns:MaxAge value="300000"/>
</tns:ProcessDehydration>
MaxAge value: Sets the maximum age of a process before it is dehydrated.
maxCount: The maximum number of deployed processes that can exist in memory at a particular time.
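The two knobs interact as an eviction policy over in-memory process definitions. An illustrative sketch of that policy follows; it is a simplification, not ODE's actual dehydration code:

```python
import time

class DehydrationPolicy:
    """Sketch: dehydrate (evict) a deployed process from memory when it
    exceeds max_age_ms, or when more than max_count processes are loaded."""
    def __init__(self, max_count=100, max_age_ms=300000):
        self.max_count = max_count
        self.max_age_ms = max_age_ms
        self.loaded = {}  # process name -> load timestamp (ms)

    def hydrate(self, name, now_ms=None):
        now_ms = now_ms if now_ms is not None else time.time() * 1000
        self.loaded[name] = now_ms
        self._evict(now_ms)

    def _evict(self, now_ms):
        # Age-based dehydration first (MaxAge)...
        for name, ts in list(self.loaded.items()):
            if now_ms - ts > self.max_age_ms:
                del self.loaded[name]
        # ...then count-based (maxCount): drop the oldest until within bounds.
        while len(self.loaded) > self.max_count:
            oldest = min(self.loaded, key=self.loaded.get)
            del self.loaded[oldest]
```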
In-memory execution
For performance purposes, a process can be defined as executing only in-memory. This greatly reduces the number of generated queries and puts far less load on the database. Both persistent and non-persistent processes can coexist in WSO2 BPS.
Shown below is an example of declaring a process as in-memory by adding an in-memory element in the deploy.xml file.
Code Block
<process name="pns:HelloWorld2">
    <in-memory>true</in-memory>
    <provide partnerLink="helloPartnerLink">
        <service name="wns:HelloService" port="HelloPort"/>
    </provide>
</process>
Info
In-memory execution places restrictions on the process: process instances cannot be queried using the BPS Management API, and the process definition can only include a single receive activity (the one that triggers instance creation).
Info
Configuration details for these optimizations vary in older BPS versions. These optimizations are also supported by Apache ODE, but the configuration differs from WSO2 BPS.
BPMN performance tuning
The BPMN runtime frequently accesses the database to persist and retrieve process instance states. Therefore, the performance of BPMN processes depends heavily on the database server. To get the best performance, a high-speed network connection between the BPS instances and the database server is recommended.
The BPMN runtime uses a database-based ID generator for allocating IDs for all persisted entities. In a highly loaded clustered scenario (i.e., multiple BPS instances with a shared database), database transaction failures may occur if two BPS instances try to allocate IDs at the same time. This can be mitigated by increasing the number of IDs allocated in a single transaction via the idBlockSize property. The default ID block size is 2500. It can be increased by adding the following property to the processEngineConfiguration bean in the <BPS_HOME>/repository/conf/activiti.xml file.
Code Block
<property name="idBlockSize" value="5000" />
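The effect of a larger idBlockSize can be illustrated with a sketch of the block-allocation idea (this is not Activiti's internal code): each database transaction reserves a block of IDs that are then handed out locally, so a bigger block means fewer contended transactions.

```python
class BlockIdGenerator:
    """Sketch: reserve id_block_size IDs per 'database transaction' and
    hand them out locally; db_transactions counts contended round trips."""
    def __init__(self, id_block_size=2500):
        self.id_block_size = id_block_size
        self.next_id = 0
        self.block_end = 0          # exclusive upper bound of reserved block
        self.db_transactions = 0

    def get_next_id(self):
        if self.next_id >= self.block_end:
            # Simulates the transaction that claims the next block in the DB.
            self.db_transactions += 1
            self.block_end = self.next_id + self.id_block_size
        allocated = self.next_id
        self.next_id += 1
        return allocated
```

With id_block_size=5000, allocating 10000 IDs costs two such transactions instead of four at the 2500 default, halving the opportunities for two nodes to collide.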
Another option is to configure the StrongUuidGenerator instead of the database-based ID generator by adding the following property to the processEngineConfiguration bean in the <BPS_HOME>/repository/conf/activiti.xml file.
...