
Tuning Performance

This section describes some recommended performance tuning configurations to optimize the API Manager. It assumes that you have set up the API Manager on Unix/Linux, which is recommended for a production deployment. We also recommend a distributed API Manager setup for most production systems. Of all the components in a distributed API Manager setup, the API Gateway is the most critical, because it handles all inbound calls to APIs. Therefore, we recommend running at least a 2-node cluster of API Gateways in a distributed setup.

The values discussed below are only general recommendations for the API Gateway. They generally work best when the API Gateway receives 350 to 30,000 calls per second, but they might not be optimal for the specific hardware configuration in your environment. We recommend that you carry out load tests on your environment and tune the API Manager accordingly.

The improvement areas and the corresponding performance recommendations are described below.
API Gateway nodes

Increase the memory allocated to the API Gateway by modifying the <AM_HOME>/bin/wso2server.sh file with the following setting:

  • -Xms2048m -Xmx2048m -XX:MaxPermSize=1024m
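
If you are not sure where these flags appear, the following sketch locates the existing heap settings in the startup script so that you can replace them with the values above:

# find the current JVM heap flags in the startup script
grep -n "Xms" <AM_HOME>/bin/wso2server.sh
# then edit that line so that the flags read:
#   -Xms2048m -Xmx2048m -XX:MaxPermSize=1024m
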
NHTTP transport of API Gateway

Recommended values for the <AM_HOME>/repository/conf/nhttp.properties file are given below. Note that the commented-out values in this file are the defaults that apply if you do not change anything.

Property descriptions:
snd_t_core: Transport sender worker pool's initial thread count
snd_t_max: Transport sender worker pool's maximum thread count
snd_io_threads: Sender-side I/O workers, recommended to be equal to the number of CPU cores (see the sketch after this list for checking the core count). I/O reactors usually employ a small number of dispatch threads (often as few as one) to dispatch I/O event notifications to a much larger number (often several thousand) of I/O sessions or connections. Generally, one dispatch thread is maintained per CPU core.
snd_alive_sec: Sender-side keep-alive seconds
snd_qlen: Sender queue length, which is infinite by default
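
Since the sender (and listener) I/O thread counts should match the number of CPU cores, you can check the core count on Linux before editing the file; a quick sketch:

# number of available CPU cores (Linux)
nproc
# equivalent check using /proc
grep -c ^processor /proc/cpuinfo
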
Recommended values:

# HTTP Sender thread pool parameters

  • snd_t_core=200
  • snd_t_max=250
  • snd_alive_sec=5
  • snd_qlen=-1
  • snd_io_threads=16

# HTTP Listener thread pool parameters

  • lst_t_core=200
  • lst_t_max=250
  • lst_alive_sec=5
  • lst_qlen=-1
  • lst_io_threads=16
PassThrough transport of API Gateway

Recommended values for the <AM_HOME>/repository/conf/passthru-http.properties file are given below. Note that the commented-out values in this file are the defaults that apply if you do not change anything.

Property descriptions:

worker_thread_keepalive_sec: Defines the keep-alive time for extra threads in the worker pool
worker_pool_queue_length: Defines the length of the queue used by the PassThrough transport worker pool to hold pending runnable tasks
io_threads_per_reactor: Defines the number of IO dispatcher threads used per reactor
http.max.connection.per.host.port: Defines the maximum number of connections allowed per host port
Recommended values:
  • worker_thread_keepalive_sec : Default value is 60s. This should be less than the socket timeout.

  • worker_pool_queue_length : Set to -1 to use an unbounded queue. If a bounded queue is used and it fills to capacity, any further attempts to submit jobs fail, causing Synapse to drop some messages. The thread pool starts queuing jobs only when all the existing threads are busy and the pool has reached its maximum number of threads. So, the recommended queue length is -1.

  • io_threads_per_reactor : Value is based on the number of processor cores in the system. (Runtime.getRuntime().availableProcessors())

  • http.max.connection.per.host.port : Default value is 32767, which works for most systems but you can tune it based on your operating system (for example, Linux supports 65K connections).

  • http.socket.timeout=120000
  • worker_pool_size_core=400
  • worker_pool_size_max=500
  • io_buffer_size=16384
  • snd_t_core=200 
  • snd_t_max=250 
  • snd_io_threads=16 
  • lst_t_core=200 
  • lst_t_max=250 
  • lst_io_threads=16

Make the number of IO threads (snd_io_threads, lst_io_threads, and io_threads_per_reactor) equal to the number of processor cores.
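
Expressed as key=value entries (in addition to the explicit settings already listed above), the prose recommendations translate roughly as follows. This is a sketch that assumes an 8-core host, so adjust io_threads_per_reactor to your actual core count:

worker_thread_keepalive_sec=60
worker_pool_queue_length=-1
io_threads_per_reactor=8
http.max.connection.per.host.port=32767
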
Key management nodes

Set the following in <APIM_HOME>/repository/conf/axis2/axis2_client.xml file:

<parameter name="defaultMaxConnPerHost">1000</parameter> 
<parameter name="maxTotalConnections">30000</parameter> 

Set the MySQL maximum connections:

mysql> show variables like "max_connections";
(the default value of max_connections is 151)
mysql> set global max_connections = 250;
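
Note that a value set with set global does not survive a MySQL server restart. To persist it, you can also set it in the MySQL configuration file; a minimal sketch, assuming the server reads /etc/my.cnf or /etc/mysql/my.cnf:

[mysqld]
max_connections = 250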

Set the open files limit to 200000 by editing the /etc/sysctl.conf file (for example, by adding the entry fs.file-max = 200000) and then applying the change:

sudo sysctl -p
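
The per-user limit on open files for the account that runs the API Manager usually needs to be raised as well; a sketch of the corresponding /etc/security/limits.conf entries, where apiuser is a placeholder for the actual user name:

apiuser soft nofile 200000
apiuser hard nofile 200000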

Set the following Tomcat connector attributes in the <APIM_HOME>/repository/conf/tomcat/catalina-server.xml file:

maxThreads="750" 
minSpareThreads="150" 
disableUploadTimeout="false" 
enableLookups="false" 
connectionUploadTimeout="120000" 
maxKeepAliveRequests="600" 
acceptCount="600" 

Set the following connection pool elements in <APIM_HOME>/repository/conf/datasources/master-datasources.xml file:

<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>

Note that you set the <testOnBorrow> element to true and provide a validation query (e.g., SELECT 1 FROM DUAL in Oracle), which is run to refresh any stale connections in the connection pool. Set a suitable value for the <validationInterval> element, which defaults to 30000 milliseconds; it determines how long the pool waits before running the validation query again on a particular connection, which avoids excess validations and ensures better performance.
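
For context, these elements go inside the <configuration> section of the relevant datasource definition in master-datasources.xml. The following sketch shows a complete datasource entry; the datasource name, URL, credentials, and driver are placeholders for illustration only:

<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/apimgtdb</url>
            <username>apimuser</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>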
