
Network and OS Level Performance Tuning

When it comes to performance, the operating system that the server runs on plays an important role. This page describes the parameters that you can configure to optimize network and OS performance.

If you are running macOS Sierra and experience long startup times for WSO2 products, try mapping your Mac hostname to 127.0.0.1 and ::1 in the /etc/hosts file as described in this blog post.

Following are the files and parameters you can configure to optimize performance:

  • Configure the following parameters in the /etc/sysctl.conf file of Linux for maximum concurrency. These parameters can be set to specify a larger port range, a more effective TCP connection timeout value, and a number of other important settings at the OS level, based on your requirements.
     

    Note

    Since all these settings apply at the OS level, changing settings can affect other programs running on the server. The sample values specified here might not be the optimal values for your production system. You need to apply the values and run a performance test to find the best values for your production system.

    Parameter: net.ipv4.tcp_fin_timeout
    Description: The length of time (in seconds) that TCP waits to receive a final FIN before the socket is closed. Setting this is required to prevent DoS attacks.
    Recommended value: 30

    Parameter: net.ipv4.tcp_tw_recycle
    Description: Enables fast recycling of TIME_WAIT sockets.
    Note: Change this with caution and ONLY in internal networks where network connectivity speeds are fast. It is not recommended to use net.ipv4.tcp_tw_recycle = 1 when working with network address translation (NAT), such as when you deploy products in EC2 or any other environment configured with NAT. (This parameter was removed in Linux kernel 4.12.)
    Recommended value: 1

    Parameter: net.ipv4.tcp_tw_reuse
    Description: Allows reuse of sockets in the TIME_WAIT state for new connections when it is safe from the network stack's perspective.
    Recommended value: 1

    Parameter: net.core.rmem_default
    Description: Sets the default OS receive buffer size for all types of connections.
    Recommended value: 524288

    Parameter: net.core.wmem_default
    Description: Sets the default OS send buffer size for all types of connections.
    Recommended value: 524288

    Parameter: net.core.rmem_max
    Description: Sets the maximum OS receive buffer size for all types of connections.
    Recommended value: 67108864

    Parameter: net.core.wmem_max
    Description: Sets the maximum OS send buffer size for all types of connections.
    Recommended value: 67108864

    Parameter: net.ipv4.tcp_rmem
    Description: Specifies the receive buffer space for each TCP connection, using three values:
      • The first value is the minimum receive buffer space for each TCP connection; this buffer is always allocated to a TCP socket, even under high memory pressure on the system.
      • The second value is the default receive buffer space allocated for each TCP socket. This value overrides the /proc/sys/net/core/rmem_default value used by other protocols.
      • The last value is the maximum receive buffer space allocated for a TCP socket.
    Recommended value: 4096 87380 16777216

    Parameter: net.ipv4.tcp_wmem
    Description: Specifies the send buffer space for each TCP connection, using three values:
      • The first value is the minimum TCP send buffer space available for a single TCP socket.
      • The second value is the default send buffer space allowed for a single TCP socket.
      • The third value is the maximum TCP send buffer space.
    Every TCP socket can use the specified amount of buffer space before the buffer fills up, and each of the three values applies under different conditions.
    Recommended value: 4096 65536 16777216

    Parameter: net.ipv4.ip_local_port_range
    Description: Defines the local port range that TCP and UDP use to choose a local port. The first number is the first local port allowed for TCP and UDP traffic, and the second number is the last. If your Linux server opens a large number of outgoing network connections, you need to increase the default local port range: the default range of IP port numbers allowed for TCP and UDP traffic is small, and a server that exhausts it cannot open new outgoing connections.
    Recommended value: 1024 65535

    Parameter: fs.file-max
    Description: The maximum number of file handles that the kernel can allocate. The kernel has a built-in limit on the number of files a process can open. If you need to increase this limit, you can increase the fs.file-max value, although doing so consumes some system memory.
    Recommended value: 2097152
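    Putting the recommended values above together, a corresponding /etc/sysctl.conf fragment might look like the following. These are starting points rather than tuned values; load the file with `sudo sysctl -p` and verify the result with a performance test:

    ```
    # /etc/sysctl.conf — sample starting values; tune for your workload
    net.ipv4.tcp_fin_timeout = 30
    net.ipv4.tcp_tw_recycle = 1    # caution: internal networks only, no NAT; removed in kernel 4.12
    net.ipv4.tcp_tw_reuse = 1
    net.core.rmem_default = 524288
    net.core.wmem_default = 524288
    net.core.rmem_max = 67108864
    net.core.wmem_max = 67108864
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.ip_local_port_range = 1024 65535
    fs.file-max = 2097152
    ```

    You can also set a single value at runtime without editing the file, for example `sudo sysctl -w net.ipv4.tcp_fin_timeout=30`.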
  • Configure the following parameters in the /etc/security/limits.conf file of Linux if you need to alter the maximum number of open files allowed for system users. 

    * soft nofile 4096
    * hard nofile 65535

    Note

    The * character denotes that the limit is applicable to all system users in the server, and the values specified above are the default values for normal system usage.

    The hard limit is the ceiling set by the superuser and enforced by the kernel; you cannot increase it unless you have superuser privileges. The soft limit is the value actually in effect for a session, and you can increase or decrease it as necessary, up to but not beyond the hard limit.
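    You can inspect the limits currently in effect for your shell session with the `ulimit` builtin, and raise the soft limit up to the hard limit without superuser privileges. A minimal check might look like this:

    ```shell
    # Show the soft and hard open-file limits for the current session.
    soft=$(ulimit -Sn)
    hard=$(ulimit -Hn)
    echo "soft nofile limit: $soft"
    echo "hard nofile limit: $hard"

    # A non-root user may raise the soft limit, but only up to the hard limit.
    ulimit -Sn "$hard" 2>/dev/null && echo "soft limit raised to $hard"
    ```

    Note that limits.conf changes take effect only for new login sessions, so log out and back in (or restart the service) after editing the file.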


  • Configure the following settings in the /etc/security/limits.conf file of Linux if you need to alter the maximum number of processes a system user is allowed to run at a given time. Each Carbon server instance you run requires up to 1024 threads with the default thread pool configuration. Therefore, you need to increase both the hard and soft nproc values by 1024 per Carbon server.

    * soft nproc 20000
    * hard nproc 20000

    Note

    The * character denotes that the limit is applicable to all system users in the server.
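    As a quick sanity check, you can compare the current nproc limit against the threads required by the Carbon servers you plan to run on the host. The server count below is an example value; the 1024 figure is the per-server requirement under the default thread pool configuration described above:

    ```shell
    # Example: number of Carbon server instances planned on this host.
    SERVERS=4
    THREADS_PER_SERVER=1024   # default thread pool configuration per server

    required=$((SERVERS * THREADS_PER_SERVER))
    current=$(ulimit -u)      # current soft nproc limit for this user

    echo "required threads:         $required"
    echo "current nproc soft limit: $current"

    if [ "$current" != "unlimited" ] && [ "$current" -lt "$required" ]; then
      echo "WARNING: raise nproc in /etc/security/limits.conf"
    fi
    ```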
