
Performance Tuning

This section describes some recommended performance tuning configurations to optimize the ESB. It assumes that you have set up the ESB on a server running Unix/Linux, which is recommended for a production deployment. To read more about performance in ESB 4.8.1 and how it compares to competitors, see http://wso2.com/library/articles/2014/02/esb-performance-round-7.5/.

Important

  • Performance tuning requires you to modify important system files, which affect all programs running on the server. We recommend that you familiarize yourself with these files by referring to the Unix/Linux documentation before editing them.
  • The parameter values we discuss below are just examples. They might not be the optimal values for the specific hardware configurations in your environment. We recommend that you carry out load tests in your environment to tune the ESB accordingly.

OS-level settings

  1. To optimize network and OS performance, configure the following settings in the /etc/sysctl.conf file of Linux. These settings specify a larger port range, a shorter TCP FIN timeout, and a number of other important parameters at the OS level. (A sketch showing how to apply and verify these settings follows this list.)

    Do not use net.ipv4.tcp_tw_recycle = 1 when working with network address translation (NAT), such as when deploying products in EC2 or any other environment configured with NAT.

    net.ipv4.tcp_fin_timeout = 30
    fs.file-max = 2097152
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
    net.core.rmem_default = 524288
    net.core.wmem_default = 524288
    net.core.rmem_max = 67108864
    net.core.wmem_max = 67108864
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.ip_local_port_range = 1024 65535      
  2. To alter the number of allowed open files for system users, configure the following settings in the /etc/security/limits.conf file of Linux (be sure to include the leading * character).

    * soft nofile 4096
    * hard nofile 65535

    Optimal values for these parameters depend on the environment.

  3. To alter the maximum number of processes your user is allowed to run at a given time, configure the following settings in the /etc/security/limits.conf file of Linux (be sure to include the leading * character). Each Carbon server instance you run requires up to 1024 threads (with the default thread pool configuration). Therefore, increase the nproc value by 1024 for each Carbon server (both soft and hard).

    * soft nproc 20000
    * hard nproc 20000
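
The following is a minimal sketch of how you could apply and verify the settings above. sysctl and ulimit are standard Linux tools; note that limits.conf changes only take effect for new login sessions.

    # Reload /etc/sysctl.conf so the kernel picks up the new values
    sudo sysctl -p

    # Spot-check one of the values
    sysctl net.ipv4.ip_local_port_range

    # In a new session, verify the limits for the current user
    ulimit -Sn    # soft limit on open files
    ulimit -Hn    # hard limit on open files
    ulimit -u     # maximum number of user processes (nproc)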

JVM-level settings

When an XML element has a large number of sub-elements and the system tries to process all of them, the system can become unstable due to memory exhaustion. This is a security risk, because an attacker can trigger it deliberately with maliciously crafted XML (an entity expansion attack).

To avoid this issue, you can define the maximum number of entity substitutions that the XML parser allows in the system. You do this by setting the entity expansion limit as follows in the <ESB_HOME>/bin/wso2server.bat file (for Windows) or the <ESB_HOME>/bin/wso2server.sh file (for Linux/Solaris). The default entity expansion limit is 64000.

-DentityExpansionLimit=10000
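
For example, in the wso2server.sh script, the flag can be added alongside the other -D system properties that are passed to the JVM. The excerpt below is illustrative; the exact set of options in the script varies by product version:

    $JAVACMD \
        ...existing options... \
        -DentityExpansionLimit=10000 \
        org.wso2.carbon.bootstrap.Bootstrap $*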

In a clustered environment, the entity expansion limit has no dependency on the number of worker nodes.

WSO2 Carbon platform-level settings

In multitenant mode, the WSO2 Carbon runtime limits thread execution time. That is, if a thread is stuck or takes too long to complete, Carbon detects it, prints the current stack trace, and then interrupts and stops it. This mechanism is implemented as an Apache Tomcat valve, so it is configured in the <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file as shown below.

<Valve className="org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve" threshold="600"/>
  • The className is the Java class used for the implementation. Set it to org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.
  • The threshold gives the minimum duration in seconds after which a thread is considered stuck. The default value is 600 seconds.
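
The following sketch shows the valve in context. The surrounding <Host> element and the placement relative to other valves are illustrative and depend on your existing catalina-server.xml; add the new valve alongside the valves already present:

    <Host name="localhost" ...>
        <!-- existing valves -->
        <Valve className="org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve" threshold="600"/>
    </Host>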

ESB-level settings

  1.  You can increase the memory allocated to the ESB by modifying the <ESB_HOME>/bin/wso2server.sh file.
    • Default setting for WSO2 ESB 4.6.0 and later is: -Xms256m -Xmx512m -XX:MaxPermSize=256m
    • This can be changed for benchmarking as shown in the following example: -Xms2048m -Xmx2048m -XX:MaxPermSize=1024m
  2. To enable streaming XPath, add the following parameters to the <ESB_HOME>/repository/conf/synapse.properties file:

    synapse.streaming.xpath.enabled=true
    synapse.temp_data.chunk.size=3072 
  3. Disable the service/API invocation access logs as follows:
    • If you have not yet started the server, you can set log4j.logger.org.apache.synapse.transport.http.access to OFF in the <ESB_HOME>/repository/conf/log4j.properties file (see the sketch after this list).

    • If the server has already been started, go to Configure -> Logging in the management console, and in the Configure Log4J Loggers section, set org.apache.synapse.transport.http.access to OFF.
      For more information on when to use log4j.properties or the management console, see Setting Up Logging.
       
  4.  If you are using the Clone or Iterate mediator to handle a higher load, increase the number of threads in the synapse.properties file to balance the load. See Configuring synapse.properties for further information.

  5. Check the configurations in the <ESB_HOME>/repository/conf/passthru-http.properties file and, if required, change the default values to optimize the HTTP transport for your production environment. See Configuring passthru-http.properties for further information.
  6. If you want to use the HTTP-NIO transport, comment out the pass-through transport (PTT) and uncomment the HTTP-NIO transport in the <ESB_HOME>/repository/conf/axis2/axis2.xml file. Then create an nhttp.properties file for the ESB in the <ESB_HOME>/repository/conf directory, and configure the socket timeout values, connection timeout values, and HTTP receiver thread pool parameters. Modify the default values based on your production environment. See Configuring nhttp.properties for further information.
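
For step 3 above, disabling the access log in log4j.properties comes down to a single line; the logger name is the one given earlier, and OFF suppresses all of its output:

    log4j.logger.org.apache.synapse.transport.http.access=OFF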

Increasing maximum JMS proxies

If you create several JMS proxy services in WSO2 ESB, you might see a message like the following:

[2013-11-07 20:25:41,875]  WARN - JMSListener Polling tasks on destination : JMStoHTTPStockQuoteProxy18 of type queue for service JMStoHTTPStockQuoteProxy18 have not yet started after 3 seconds ..

This issue occurs when you do not have enough threads available to consume messages from JMS queues. The maximum number of concurrent consumers (that is, the number of JMS proxies) that can be deployed is limited by the base transport worker pool that is used by the JMS transport. You can configure the size of this worker pool using the system properties snd_t_core and snd_t_max. Note that increasing these values also increases memory consumption, because the worker pool will allocate more resources.

To adjust the values of these properties, you can modify the server startup script if you want to increase the available threads for all transports (requires more memory), or create a jms.properties file if you want to increase the available threads just for the JMS transport. Both approaches are described below.

To increase the threads for all transports:
  1. Open the wso2server.sh or wso2server.bat file in your <ESB_HOME>/bin directory for editing.
  2. Change the values of the properties as follows: 
    •  -Dsnd_t_core=200
    •  -Dsnd_t_max=250
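
As with the entity expansion limit above, these flags go alongside the other -D system properties passed to the JVM in the startup script. The excerpt below is illustrative; the exact contents of the script vary by product version:

    $JAVACMD \
        ...existing options... \
        -Dsnd_t_core=200 \
        -Dsnd_t_max=250 \
        org.wso2.carbon.bootstrap.Bootstrap $*
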
To increase the threads for just the JMS transport:
  1. Create a file named jms.properties with the following properties:
    • snd_t_core=200

    • snd_t_max=250

  2. Create a directory called conf under your <ESB_HOME> directory and save the file in this directory.
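
As a minimal shell sketch of these two steps (assuming the ESB_HOME environment variable points to your ESB installation directory):

    # Create <ESB_HOME>/conf and write the two worker pool properties into it
    mkdir -p "$ESB_HOME/conf"
    printf 'snd_t_core=200\nsnd_t_max=250\n' > "$ESB_HOME/conf/jms.properties"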

For examples that illustrate how to tune the performance of the ESB, go to Performance Tuning WSO2 ESB with a practical example and WSO2 ESB tuning performance with threads.