Performance Tuning
This section describes some recommended performance tuning configurations to optimize the ESB. It assumes that you have set up the ESB on a server running Unix/Linux, which is recommended for a production deployment.
Important
- Performance tuning requires you to modify important system files, which affect all programs running on the server. We recommend that you familiarize yourself with these files using Unix/Linux documentation before editing them.
- The parameter values discussed below are only examples. They might not be the optimal values for the specific hardware configurations in your environment. We recommend that you carry out load tests on your environment to tune the ESB accordingly.
OS-Level Settings
1. To optimize network and OS performance, configure the following settings in the /etc/sysctl.conf file of Linux. These settings specify a larger port range, a more effective TCP connection timeout value, and a number of other important parameters at the OS level.
It is not recommended to set net.ipv4.tcp_tw_recycle = 1 when working with network address translation (NAT), for example when deploying products in EC2 or any other environment configured with NAT.
net.ipv4.tcp_fin_timeout = 30
fs.file-max = 2097152
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.core.rmem_default = 524288
net.core.wmem_default = 524288
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.ip_local_port_range = 1024 65535
2. To alter the number of allowed open files for system users, configure the following settings in the /etc/security/limits.conf file of Linux.
* soft nofile 4096
* hard nofile 65535
Optimal values for these parameters depend on the environment. A sketch for applying and verifying these OS-level settings follows this list.
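The following is a minimal sketch of how these OS-level changes can be applied and checked on a typical Linux system. The commands assume root (or sudo) access, and the limits.conf values only take effect for new login sessions.

# Reload the kernel parameters from /etc/sysctl.conf without a reboot
sysctl -p

# Spot-check an individual value, for example the local port range
sysctl net.ipv4.ip_local_port_range

# After logging in again as the user that runs the ESB, verify the open-file limits
ulimit -Sn    # soft limit
ulimit -Hn    # hard limit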
ESB-Level Settings
1. You can increase the memory allocated for the ESB by modifying the $ESB_HOME/bin/wso2server.sh file (see the sketch after this list for where these options appear in the script).
- Default setting for WSO2 ESB 4.6.0 is: -Xms256m -Xmx512m -XX:MaxPermSize=256m
- This can be changed for benchmarking as shown in the following example: -Xms2048m -Xmx2048m -XX:MaxPermSize=1024m
2. Add the following parameters to the $ESB_HOME/repository/conf/synapse.properties file to enable streaming XPath. For example,
synapse.streaming.xpath.enabled=true
synapse.temp_data.chunk.size=3072
3. Disable access logs in the $ESB_HOME/repository/conf/log4j.properties file as follows.
log4j.logger.org.apache.synapse.transport.nhttp.access=WARN
log4j.logger.org.apache.synapse.transport.passthru.access=WARN
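As a rough illustration of the change in step 1, the heap settings sit among the JVM options passed to $JAVACMD in wso2server.sh. The excerpt below is illustrative only; the surrounding options vary between releases and should be left as they are.

$JAVACMD \
    ...                                   # other JVM options, unchanged
    -Xms2048m -Xmx2048m -XX:MaxPermSize=1024m \
    ...                                   # remaining options, unchanged

# After restarting the server, you can confirm the memory options of the running JVM,
# for example as follows (this assumes the ESB is the Java process of interest on this host):
ps -ef | grep java | grep -o -e '-Xms[0-9]*m' -e '-Xmx[0-9]*m' -e '-XX:MaxPermSize=[0-9]*m'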
Pass-Through Transport Configurations
4. Set the following properties in the <ESB_HOME>/repository/conf/passthru-http.properties file to optimize the pass-through transport.
http.socket.timeout=120000
worker_pool_size_core=400
worker_pool_size_max=500
worker_thread_keepalive_sec=60
worker_pool_queue_length=-1
io_threads_per_reactor=2
http.max.connection.per.host.port=32767
io_buffer_size=16384
Each parameter in the above configuration is described below.
Parameter Name | Description |
---|---|
http.socket.timeout | Maximum period of inactivity between two consecutive data packets. Given in milliseconds. Also defined as SO_TIMEOUT. |
worker_pool_size_core | Initial number of threads in the worker thread pool. |
worker_pool_size_max | Maximum number of threads in the worker thread pool. Specifying a maximum limit helps to avoid the performance degradation that can occur due to excessive context switching. Once this limit is reached and all the threads in the pool are busy, further tasks are held in the worker pool queue; if a bounded queue is used and it also fills up, additional messages cause errors and can be dropped (see worker_pool_queue_length). |
worker_thread_keepalive_sec | Defines the keep-alive time for extra threads in the worker pool. |
worker_pool_queue_length | Defines the length of the queue that is used to hold runnable tasks to be executed by the worker pool. |
io_threads_per_reactor | Defines the number of IO dispatcher threads used per reactor. |
http.max.connection.per.host.port | Defines the maximum number of connections per host port. |
io_buffer_size | Size in bytes of the buffer through which data passes. |
Recommended Values
- http.socket.timeout: 120000
- worker_pool_size_core: 400
- worker_pool_size_max: 500
- worker_thread_keepalive_sec: The default value is 60s. This should be less than the socket timeout.
- worker_pool_queue_length: Set to -1 to use an unbounded queue. If a bounded queue is used and the queue fills to capacity, any further attempts to submit jobs will fail, causing some messages to be dropped by Synapse. The thread pool starts queuing jobs only when all the existing threads are busy and the pool has reached the maximum number of threads, so the recommended queue length is -1.
- io_threads_per_reactor: Base this value on the number of processor cores in the system (Runtime.getRuntime().availableProcessors()); see the sketch after this list.
- http.max.connection.per.host.port: The default value is 32767, which works for most systems, but you can tune it based on your operating system (for example, Linux supports 65K connections).
- io_buffer_size: 16384
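Since io_threads_per_reactor should generally match the number of processor cores, a quick way to check the core count on the ESB host is shown below (a minimal sketch; either command works on most Linux distributions).

# Number of processing units available on this host
nproc

# Equivalent check using /proc
grep -c ^processor /proc/cpuinfo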
HTTP-NIO Transport Configurations
5. PTT is the default transport used by the ESB. If you want to use the HTTP-NIO transport instead, comment out the PTT entries and un-comment the HTTP-NIO transport entries in the <ESB_HOME>/repository/conf/axis2/axis2.xml file (a sketch of this change follows the parameter table below).
6. To tune the HTTP-NIO transport performance, create a nhttp.properties file for the ESB in the <ESB_HOME>/repository/conf directory, and configure the socket timeout values, connection timeout values, and HTTP receiver thread pool parameters. For example,
http.socket.timeout=120000
http.socket.buffer-size=8192
http.tcp.nodelay=1
http.connection.stalecheck=0

# HTTP Sender thread pool parameters
snd_t_core=200
snd_t_max=250
snd_alive_sec=5
snd_qlen=-1
snd_io_threads=16

# HTTP Listener thread pool parameters
lst_t_core=200
lst_t_max=250
lst_alive_sec=5
lst_qlen=-1
lst_io_threads=16
Each parameter in the above configuration is described below.
Parameter Name | Description |
---|---|
http.socket.timeout | Maximum period of inactivity between two consecutive data packets. Given in milliseconds. Also defined as SO_TIMEOUT. |
http.socket.buffer-size | Determines the size of the internal socket buffer used to retain data while receiving / transmitting HTTP messages. |
http.tcp.nodelay | Determines whether Nagle's algorithm is to be used, which improves efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. |
http.connection.stalecheck | Determines whether a stale connection check is to be used. WSO2 ESB uses the following listener/sender architectural style with non-blocking IO: Client -> ESB: Non-blocking Transport Listener -> Mediation Flow -> ESB: Non-blocking Transport Sender -> Back-End. |
lst_t_core | Transport Listener worker pool's initial thread count. |
lst_t_max | Transport Listener worker pool's maximum thread count. |
lst_io_threads | Listener-side IO workers, which is recommended to be equal to the number of CPU cores. I/O reactors usually employ a small number of dispatch threads (often as few as one) to dispatch I/O event notifications to a greater number (often as many as several thousands) of I/O sessions or connections. Generally, one dispatch thread is maintained per CPU core. |
lst_alive_sec | Listener-side keep-alive seconds. |
lst_qlen | Listener queue length, which is unbounded (-1) by default. |
snd_t_core snd_t_max snd_io_threads snd_alive_sec snd_qlen | These sender-side parameters have the same definitions as their listener-side counterparts. Generally, the sender-side parameters are set to the same values as the listener-side parameters. |
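To illustrate the transport switch described in step 5, the following is a simplified sketch of the relevant axis2.xml entries. It is not a drop-in replacement: the actual entries in your axis2.xml include additional parameters (ports, keystore settings, and so on) that should be kept as they are, and the same change applies to the https transport entries.

<!-- Comment out the pass-through (PTT) transport receiver and sender -->
<!--
<transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
    <parameter name="port" locked="false">8280</parameter>
</transportReceiver>
<transportSender name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpSender"/>
-->

<!-- Un-comment the HTTP-NIO transport receiver and sender -->
<transportReceiver name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOListener">
    <parameter name="port" locked="false">8280</parameter>
    <parameter name="non-blocking" locked="false">true</parameter>
</transportReceiver>
<transportSender name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSender">
    <parameter name="non-blocking" locked="false">true</parameter>
</transportSender>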
For examples that illustrate how to tune the performance of the ESB, go to Performance Tuning WSO2 ESB with a practical example and WSO2 ESB tuning performance with threads.