The HTTP transport of WSO2 Enterprise Integrator (WSO2 EI) can be used to handle both blocking and non-blocking calls. This section describes how you can tune this transport for better performance in WSO2 EI.
Improving the non-blocking invocation performance
WSO2 EI supports two non-blocking transports, namely the PassThrough transport and the NHTTP transport. The PassThrough transport is the default transport of WSO2 EI, but you can set the NHTTP transport as the default transport by renaming the `<EI_HOME>/conf/axis2/axis2_nhttp.xml` file to `axis2.xml`.
You can improve the non-blocking invocation performance by configuring either the properties related to the HTTP PassThrough transport or the properties related to the NHTTP transport, depending on which transport you use as the default WSO2 EI transport in your production environment.
Configuring passthru-http.properties
You can configure the following properties as required in the `<EI_HOME>/conf/passthru-http.properties` file:
| Property | Description | Default Value |
|---|---|---|
| `worker_pool_size_core` | WSO2 EI uses a thread pool executor to create threads and to handle incoming requests. This parameter controls the number of core threads used by the executor pool. If you increase this value, the number of received requests that EI can process simultaneously increases, and hence the throughput also increases. The nature of the integration scenario and the number of concurrent requests received by the ESB are the main factors that determine a suitable value for `worker_pool_size_core`. | 400 |
| `worker_pool_size_max` | This is the maximum number of threads in the worker thread pool. Specifying a maximum limit avoids the performance degradation that can occur due to context switching. If the specified value is reached, you will see the error "SYSTEM ALERT - HttpServerWorker threads were in BLOCKED state during last minute". This can occur due to an extraordinarily high number of requests being sent at a time when all the threads in the pool are busy and the maximum number of threads has already been reached. | 500 |
| `http.socket.timeout` | This is the maximum period of inactivity between two consecutive data packets, specified in milliseconds. | 120000 |
| `worker_thread_keepalive_sec` | This defines the keep-alive time for extra threads in the worker pool. The value specified here should be less than the socket timeout value. Once this time has elapsed for an extra thread, it is destroyed. The purpose of this parameter is to optimize resource usage by avoiding the wastage that results from keeping unutilized extra threads alive. | 60 |
| `worker_pool_queue_length` | This defines the length of the queue that is used to hold runnable tasks to be executed by the worker pool. The thread pool starts queuing jobs when all the existing threads are busy and the pool has reached the maximum number of threads. Set this parameter to -1 to use an unbounded queue. If a bounded queue is used and the queue gets filled to its capacity, any further attempts to submit jobs fail, causing some messages to be dropped by Synapse. | -1 |
| `io_threads_per_reactor` | This defines the number of IO dispatcher threads used per reactor. The value specified should not exceed the number of cores in the server. | Equal to the number of cores in the server |
| `io_buffer_size` | This is the size of the memory buffer allocated when reading data into memory from the underlying socket/file channels. You should leave this property set to the default value. | 16384 |
| `http.max.connection.per.host.port` | This defines the maximum number of connections allowed per host port. | 32767 |
| `http.socket.reuseaddr` | If this parameter is set to true, it is possible to open another socket on the same port as the socket currently used by the EI server to listen to connections. This is useful when recovering from a crash: if the socket was not properly closed, a new socket can be opened to listen to connections. | true |
| `http.socket.buffer-size` | This is used to configure the SessionInputBuffer size of HTTP Core. The SessionInputBuffer is used to hold data that is read from the OS socket. This parameter does not affect the OS socket buffer size. | 8192 |
| `http.block_service_list` | If this parameter is set to true, all services deployed to WSO2 EI cannot be accessed via the `http://<EI>:8240/services/` and `https://<EI>:8243/services/` URLs. | true |
| `http.user.agent.preserve` | If this parameter is set to true, the User-Agent HTTP header of messages passing through the ESB is preserved and printed in the outgoing message. | false |
| `http.headers.preserve` | This parameter allows you to specify the header field/s of messages passing through the EI that need to be preserved and printed in the outgoing message. When uploading files using this property, if you run into any header dropping issues, such as the content type (or any other headers) not being passed to the back end or the media type (charset) being missing at the Pass Through Transport level, add the Content-Type header to this property. The main difference between using this property and using the FORCE_HTTP_CONTENT_LENGTH and COPY_CONTENT_LENGTH_FROM_INCOMING properties together in an API/proxy service is that the `http.headers.preserve=Content-Length` property applies at a global (server) level, whereas you can use the other two properties to have this behaviour locally in the API/proxy service. | Content-Type |
| `http.connection.disable.keepalive` | If this parameter is set to true, the HTTP connections with the back-end service are closed soon after the request is served. It is recommended to set this property to false so that WSO2 EI does not have to create a new connection every time it sends a request to a back-end service. However, you may need to close connections after they are used if the back-end service does not provide sufficient support for keep-alive connections. | false |
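For example, a higher-concurrency deployment might scale the worker pool up as in the following sketch. The values are illustrative only, the property names are those listed in the table above, and any property you do not set keeps its default:

```properties
# Worker pool that processes incoming requests
worker_pool_size_core=800
worker_pool_size_max=1000

# Keep-alive time (seconds) for extra worker threads; keep it below the socket timeout
worker_thread_keepalive_sec=60

# -1 uses an unbounded task queue
worker_pool_queue_length=-1

# Socket timeout in milliseconds
http.socket.timeout=180000

# Keep back-end connections alive between requests
http.connection.disable.keepalive=false

# Headers to preserve in outgoing messages
http.headers.preserve=Content-Type
```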
Configuring nhttp.properties
You can configure the NHTTP properties as required in the `<EI_HOME>/conf/nhttp.properties` file:
Following are the properties used by the non-blocking HTTP transport:
| Property | Description | Default Value |
|---|---|---|
| `http.socket.timeout.receiver` | This is the maximum period of inactivity between two consecutive data packets on the transport listener side. This is the socket timeout value for the connection between the client and the WSO2 EI server. | 60000 |
| `http.socket.timeout.sender` | This is the maximum period of inactivity between two consecutive data packets on the transport sender side. This is the socket timeout value for the connection between the WSO2 EI server and the back-end server. | 60000 |
| `nhttp_buffer_size` | This is the size of the buffer through which data passes when receiving/transmitting NHTTP requests. | 8192 |
| `http.tcp.nodelay` | This determines whether Nagle's algorithm (which improves the efficiency of TCP/IP by reducing the number of packets sent over the network) is used. Use the value 0 to enable the algorithm and the value 1 to disable it. The algorithm should be enabled if you need to reduce bandwidth consumption. | 1 |
| `http.connection.stalecheck` | This determines whether the stale connection check is used. Use the value 0 to enable the stale connection check and the value 1 to disable it. When this check is enabled, connections that are no longer used are identified and disabled before each request execution. The stale connection check should be disabled when performing critical operations. | 0 |
| `http.block_service_list` | If this parameter is set to true, all services deployed to WSO2 EI cannot be viewed via the `http://<EI>:8240/services/` and `https://<EI>:8243/services/` URLs. | false |
| `http.headers.preserve` | This parameter allows you to specify the header field/s of messages passing through WSO2 EI that need to be preserved and printed in the outgoing message (e.g., `http.headers.preserve = Location, Date, Server`). Supported header fields are Date, Server, and User-Agent. | |
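For example, a minimal `nhttp.properties` sketch that raises both socket timeouts for a slow back end and keeps Nagle's algorithm disabled might look as follows; the values are illustrative only, and any property you omit keeps its default:

```properties
# Socket timeout (ms) between the client and WSO2 EI (listener side)
http.socket.timeout.receiver=120000

# Socket timeout (ms) between WSO2 EI and the back-end server (sender side)
http.socket.timeout.sender=120000

# Buffer size used when receiving/transmitting NHTTP requests
nhttp_buffer_size=8192

# 1 = disable Nagle's algorithm (the default), 0 = enable it to save bandwidth
http.tcp.nodelay=1

# 0 = check for stale connections before each request (the default)
http.connection.stalecheck=0
```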
Following are the HTTP sender thread pool properties:
| Parameter Name | Description | Default Value |
|---|---|---|
| `snd_t_core` | The transport sender worker pool's initial thread count. | 20 |
| `snd_t_max` | The transport sender worker pool's maximum thread count. Once this limit is reached and all the threads in the pool are busy, the threads will be in a BLOCKED state. In such situations, an increase in the number of messages fires the error "SYSTEM ALERT - HttpServerWorker threads were in BLOCKED state during last minute". | 100 |
| `snd_alive_sec` | Sender-side keep-alive time in seconds. | 5 |
| `snd_qlen` | The sender queue length. | -1 |
| `snd_io_threads` | The number of sender-side IO workers. | 2 |

When there is an increased load, it is recommended to increase the number of threads specified in the properties above to balance it.
Following are the HTTP listener thread pool properties:
Note
Listener-side properties generally have the same values as the sender-side properties.
| Property | Description | Default Value |
|---|---|---|
| `lst_t_core` | The transport listener worker pool's initial thread count. | 20 |
| `lst_t_max` | The transport listener worker pool's maximum thread count. Once this limit is reached and all the threads in the pool are busy, the threads will be in a BLOCKED state. In such situations, an increase in the number of messages fires the error "SYSTEM ALERT - HttpServerWorker threads were in BLOCKED state during last minute". | 100 |
| `lst_alive_sec` | Listener-side keep-alive time in seconds. | 5 |
| `lst_qlen` | The listener queue length. | -1 |
| `lst_io_threads` | The number of listener-side IO workers. | 2 |

Following is a property for AIX-based deployments:

| Property | Description | Default Value |
|---|---|---|
| `http.nio.interest-ops-queueing` | Determines whether interestOps() queueing is enabled for the I/O reactors. | true |
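The sender-side and listener-side thread pool properties, along with the AIX-specific property, are configured in the same `nhttp.properties` file. The sketch below illustrates scaling both pools up under increased load; the values are examples, not recommendations:

```properties
# Sender-side worker pool
snd_t_core=50
snd_t_max=200
snd_alive_sec=5
snd_qlen=-1
snd_io_threads=2

# Listener-side worker pool (generally mirrors the sender-side values)
lst_t_core=50
lst_t_max=200
lst_alive_sec=5
lst_qlen=-1
lst_io_threads=2

# Required only for AIX-based deployments
http.nio.interest-ops-queueing=true
```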
Improving the blocking invocation performance
The Callout mediator, as well as the Call mediator in blocking mode, uses the Axis2 `CommonsHTTPTransportSender` internally to invoke services. It uses the `MultiThreadedHttpConnectionManager` to handle connections, but by default it only allows two simultaneous connections per host. So if there are more than two requests per host, the additional requests have to wait until a connection becomes available. Therefore, if the back-end service is slow, many requests have to wait until a connection is released by the `MultiThreadedHttpConnectionManager`. This can lead to a significant degradation of the performance of WSO2 EI.
To overcome this issue, you can edit the `CommonsHTTPTransportSender` configuration in the `<EI_HOME>/conf/axis2/axis2_blocking_client.xml` file and increase the value of the `defaultMaxConnectionsPerHost` parameter.

For example, if you need to allow 100 simultaneous connections per host, set the `defaultMaxConnectionsPerHost` parameter as follows:
```xml
<transportSender class="org.apache.axis2.transport.http.CommonsHTTPTransportSender" name="http">
    <parameter name="PROTOCOL">HTTP/1.1</parameter>
    <parameter name="Transfer-Encoding">chunked</parameter>
    <parameter name="cacheHttpClient">true</parameter>
    <parameter name="defaultMaxConnectionsPerHost">100</parameter>
</transportSender>
```