
This topic provides instructions on how to configure multiple Gateways in WSO2 API Manager (WSO2 API-M) in a distributed deployment to facilitate high availability (HA). The instructions in this topic are based on the Distributed Deployment of API Manager, and these configurations will only work if the configurations in that topic are done correctly. For instance, all datasource configurations are already done for the Gateway in that topic and hence do not need to be repeated here.

The following sections provide specific instructions on configuring the Gateway cluster.

Table of Contents

Gateway deployment pattern

The configurations in this topic are based on the following pattern, which is a basic Gateway cluster where the worker nodes and manager nodes are separated.

[Diagram: Gateway deployment pattern]

[Diagram: a sample of the full API Manager cluster]

Step 1 - Configure the load balancer

NGINX is used for this scenario, but you can use any load balancer that you prefer. Use the following steps to configure NGINX as the load balancer for WSO2 products.

  1. Install Nginx using the following command.
    $sudo apt-get install nginx
  2. Configure Nginx Plus to direct the HTTP requests to the two worker nodes via the HTTP 80 port using http://am.wso2.com/<service>. To do this, create a VHost file (am.http.conf) in the /etc/nginx/conf.d/ directory and add the following configurations into it.

    Code Block
    upstream wso2.am.com {
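            # Session affinity via JSESSIONID; note that 'sticky' is an NGINX Plus directive and is not available in open source NGINX.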
            sticky cookie JSESSIONID;
            server xxx.xxx.xxx.xx4:9763;
            server xxx.xxx.xxx.xx5:9763;
    }
    
    server {
            listen 80;
            server_name am.wso2.com;
            location / {
                   proxy_set_header X-Forwarded-Host $host;
                   proxy_set_header X-Forwarded-Server $host;
                   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                   proxy_set_header Host $http_host;
                   proxy_read_timeout 5m;
                   proxy_send_timeout 5m;
                   proxy_pass http://wso2.am.com;
            }
    }
  3. Configure Nginx Plus to direct the HTTPS requests to the two worker nodes via the HTTPS 443 port using https://am.wso2.com/<service>. To do this, create a VHost file (am.https.conf) in the /etc/nginx/conf.d/ directory and add the following configurations into it.

    Code Block
    upstream ssl.wso2.am.com {
            sticky cookie JSESSIONID;
            server xxx.xxx.xxx.xx4:9443;
            server xxx.xxx.xxx.xx5:9443;
    }
    
    server {
            listen 443;
            server_name am.wso2.com;
            ssl on;
            ssl_certificate /etc/nginx/ssl/wrk.crt;
            ssl_certificate_key /etc/nginx/ssl/wrk.key;
            location / {
                   proxy_set_header X-Forwarded-Host $host;
                   proxy_set_header X-Forwarded-Server $host;
                   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                   proxy_set_header Host $http_host;
                   proxy_read_timeout 5m;
                   proxy_send_timeout 5m;
                   proxy_pass https://ssl.wso2.am.com;
            }
    }
  4. Configure Nginx Plus to access the Management Console as https://mgt.am.wso2.com/carbon via the HTTPS 443 port. This directs requests to the manager node. To do this, create a VHost file (mgt.am.https.conf) in the /etc/nginx/conf.d/ directory and add the following configurations into it.

    Code Block
    server {
            listen 443;
            server_name mgt.am.wso2.com;
            ssl on;
            ssl_certificate /etc/nginx/ssl/mgt.crt;
            ssl_certificate_key /etc/nginx/ssl/mgt.key;
    
            location / {
                   proxy_set_header X-Forwarded-Host $host;
                   proxy_set_header X-Forwarded-Server $host;
                   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                   proxy_set_header Host $http_host;
                   proxy_read_timeout 5m;
                   proxy_send_timeout 5m;
                   proxy_pass https://xxx.xxx.xxx.xx3:9443/;
            }
    
            error_log /var/log/nginx/mgt-error.log;
            access_log /var/log/nginx/mgt-access.log;
    }
  5. Restart the Nginx Plus server.
    $sudo service nginx restart

    Tip

    You do not need to restart the server if you are simply making a modification to the VHost file. The following command is sufficient in such cases.
    sudo service nginx reload

Step 2 - Create SSL certificates

Create SSL certificates for both the manager and worker nodes using the instructions that follow. 

Note
  • In the sample configurations given under Step 1 - Configure the load balancer, the SSL key and certificate for the manager node are mgt.key and mgt.crt, and for the worker nodes they are wrk.key and wrk.crt, respectively. Use these names for <key_name> and <certificate_name> when you are generating the keys and certificates.
  • While creating the keys, enter the host name (am.wso2.com or mgt.am.wso2.com) as the Common Name.
  1. Create the Server Key.

    Code Block
    $sudo openssl genrsa -des3 -out <key_name>.key 1024
  2. Create the Certificate Signing Request (CSR).

    Code Block
    $sudo openssl req -new -key <key_name>.key -out server.csr
  3. Remove the password.

    Code Block
    $sudo cp <key_name>.key <key_name>.key.org 
    $sudo openssl rsa -in <key_name>.key.org -out <key_name>.key
  4. Sign your SSL Certificate.

    Code Block
    $sudo openssl x509 -req -days 365 -in server.csr -signkey <key_name>.key -out <certificate_name>.crt
  5. Copy the key and certificate files generated in the above steps to the /etc/nginx/ssl/ directory.

You have now configured the load balancer to handle requests sent to am.wso2.com and to distribute the load among the worker nodes in the worker sub-domain of the wso2.am.domain cluster.

Panel

You are now ready to set up the cluster configurations. The next step is to configure the Gateway manager.

Step 3 - Configure the Gateway manager

Management nodes specialize in the management of the setup. Only management nodes are authorized to add new artifacts into the system or make configuration changes. Management nodes are usually behind an internal firewall and are exposed only to clients running within the organization. This section involves setting up the Gateway manager node and enabling it to work with the other components in the distributed setup.

Furthermore, these instructions use a shared file system as the content synchronization mechanism.

Info
titleWhy use a shared file system?

WSO2 recommends using a shared file system as the content synchronization mechanism to synchronize the artifacts among the WSO2 API-M Gateway nodes, because a shared file system does not require a specific node to act as the Gateway Manager; instead, all nodes have both worker and manager capabilities. As a result, all the APIs can be served from any of the nodes, avoiding the vulnerability of a single point of failure. For this purpose you can use a common shared file system such as Network File System (NFS) or any other shared file system.

Follow the instructions below to configure the API-M Gateway in a distributed environment: 

Note that the configurations in this topic are done based on the following pattern. 
[Diagram: Gateway deployment pattern]

Step 1 - Configure the load balancer

For more information, see Configuring the Proxy Server and the Load Balancer.

Step 2 - Configure the Gateway

When using the shared file system, all nodes have both manager and worker capabilities. Therefore, there is no need for a separate manager node. Follow the instructions below to set up the Gateway nodes and enable them to work with the other components in the distributed setup.

Step 3.1 - Configure the common configurations

Carry out the following configurations on the Gateway manager node.

Note that these configurations are common to the Gateway Manager and Gateway Worker nodes.

  • Open the <API-M_HOME>/repository/conf/api-manager.xml file in the Gateway node.
  • Modify the api-manager.xml file as follows. This configures the connection to the Key Manager component.

    Note
    titleChange admin password

    To change the admin password, go to Changing the super admin password. See the note given under step 2 for instructions to follow if your password has special characters.

    Configure key management related communication.

    Cluster fronted by a load balancer

    In a clustered setup, if the Key Manager is fronted by a load balancer, you have to use WSClient as the KeyValidatorClientType in <API-M_HOME>/repository/conf/api-manager.xml. This should be configured in all Gateway and Key Manager components.

    Disable the Thrift server to optimize performance. You need to configure this in the Gateway <API-M_HOME>/repository/conf/api-manager.xml file.
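    Both settings live under the <APIKeyValidator> element of api-manager.xml. The following is only a minimal sketch, assuming the API-M 2.x element names and a hypothetical Key Manager hostname (km.wso2.com); verify against the api-manager.xml shipped with your version.

    Code Block
    languagehtml/xml
    <APIKeyValidator>
        <!-- Key Manager endpoint; km.wso2.com is a hypothetical load balancer hostname -->
        <ServerURL>https://km.wso2.com:9443/services/</ServerURL>
        <!-- Use the WS client because the Key Manager is fronted by a load balancer -->
        <KeyValidatorClientType>WSClient</KeyValidatorClientType>
        <!-- The Gateway does not need to run its own Thrift server -->
        <EnableThriftServer>false</EnableThriftServer>
    </APIKeyValidator>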

    Cluster without a load balancer

    In a clustered setup, if the Key Manager is NOT fronted by a load balancer, you have to use ThriftClient as the KeyValidatorClientType in <API-M_HOME>/repository/conf/api-manager.xml. This should be configured in all Gateway and Key Manager components.

    Disable the Thrift server to optimize performance. You need to configure this in the Gateway <API-M_HOME>/repository/conf/api-manager.xml file.

    Specify the ThriftClientPort and ThriftServerPort values. 10397 is the default.

    Specify the hostname or IP of the Key Manager. The default is localhost. In a distributed deployment, set this parameter in both the Key Manager and Gateway nodes only if the Key Manager is running on a separate machine. The Gateway uses this parameter to connect to the key validation Thrift service.
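    As a sketch under the same assumptions about element names, the ThriftClient variant of the <APIKeyValidator> block could look like this (km.wso2.com is again a hypothetical hostname):

    Code Block
    languagehtml/xml
    <APIKeyValidator>
        <KeyValidatorClientType>ThriftClient</KeyValidatorClientType>
        <!-- The Gateway acts only as a Thrift client, so its own Thrift server is disabled -->
        <EnableThriftServer>false</EnableThriftServer>
        <!-- 10397 is the default port -->
        <ThriftClientPort>10397</ThriftClientPort>
        <ThriftServerPort>10397</ThriftServerPort>
        <!-- Hostname or IP of the Key Manager; needed only when it runs on a separate machine -->
        <ThriftServerHost>km.wso2.com</ThriftServerHost>
    </APIKeyValidator>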

    If you need to enable JSON Web Token (JWT), you have to enable it in all Gateway and Key Manager components.
    1. Configure the carbon.xml file.
      The following configurations are done in the <API-M_GATEWAY_HOME>/repository/conf/carbon.xml file.
      1. Open <API-M_GATEWAY_HOME>/repository/conf/carbon.xml.
      2. Locate the <HostName> tag and add the cluster hostname. For example, if the hostname is gw.am.wso2.com:

        Code Block
        <HostName>gw.am.wso2.com</HostName>
      3. Locate the <MgtHostName> tag and uncomment it. Make sure that the management hostname is defined as follows:

        Code Block
        <MgtHostName>gw.am.wso2.com</MgtHostName>
    2. Configure the catalina-server.xml file.
      Specify the following configurations in the catalina-server.xml file, which is located in the <API-M_GATEWAY_HOME>/repository/conf/tomcat directory.

      Code Block
      languagehtml/xml
      <Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                      port="9763"
                      proxyPort="80"
      --------
      />
      <Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                      port="9443"
                      proxyPort="443"
      --------
      />

      The TCP port number is the value that this Connector uses to create a server socket and await incoming connections. On a given IP address, only one server application can listen to a particular port number.

    3. Map the hostnames to IPs.
      Open the server's /etc/hosts file and add the following.

      Code Block
      languagenone
      <GATEWAY-IP> gw.am.wso2.com

      In this example, it would look like this:

      Code Block
      titleExample Format
      xxx.xxx.xxx.xx4 gw.am.wso2.com

    Once you replicate these configurations for all the manager nodes, your Gateway manager is configured. Next, configure the Gateway worker.

    Step 4 - Configure the Gateway worker

    Worker nodes specialize in serving requests to deployment artifacts and reading them. They can be exposed to external clients.


    Step 4.1 - Configure the common configurations

    Carry out the following configurations on the Gateway worker node.

    These are the same common configurations given under Step 3.1 - Configure the common configurations above; repeat them on the worker node.

    Step 4.2 - Configure the carbon.xml file

    1. Open <GATEWAY_WORKER_HOME>/repository/conf/carbon.xml on each worker node.
    2. Specify the host name as follows.
      <HostName>am.wso2.com</HostName>

    You can configure the Deployment Synchronizer, which is also done in the carbon.xml file.
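    In Carbon 4.x-based products the Deployment Synchronizer block in carbon.xml looks roughly like the following. This is only an illustration with placeholder values, not a recommended configuration; note that this page recommends a shared file system over SVN-based synchronization.

    Code Block
    languagehtml/xml
    <DeploymentSynchronizer>
        <Enabled>true</Enabled>
        <AutoCommit>false</AutoCommit>
        <AutoCheckout>true</AutoCheckout>
        <RepositoryType>svn</RepositoryType>
        <SvnUrl>http://svnrepo.example.com/repos/</SvnUrl>
        <SvnUser>username</SvnUser>
        <SvnPassword>password</SvnPassword>
        <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>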

    Step 4.3 - Configure the catalina-server.xml file

    Make the following configuration changes in the catalina-server.xml file which is found in the <GATEWAY_WORKER_HOME>/repository/conf/tomcat/ directory.

    Code Block
    languagehtml/xml
    <Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                    port="9763"
                    proxyPort="80"
    --------
    />
    <Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                    port="9443"
                    proxyPort="443"
    --------
    />

    In the next section, we will map the host names we specified to real IPs.

    Step 4.4 - Map the hostnames to IPs

    Open the server's /etc/hosts file and add the following.

    Code Block
    languagenone
    <GATEWAY-MANAGER-IP> mgt.am.wso2.com 

    In this example, it would look like this:

    Code Block
    languagenone
    xxx.xxx.xxx.xx3 mgt.am.wso2.com 

    Note

    Replicate the configurations in all the other Gateway nodes.

    Mount the directory required for the shared file system.
    Mount the <API-M_HOME>/repository/deployment/server directory of all the Gateway nodes to the shared file system to share all APIs between all the Gateway nodes.
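    For example, assuming an NFS export at nfs.wso2.com:/exports/am-server (a hypothetical server and path), the mount on each Gateway node could be done as follows:

    Code Block
    languagenone
    # Hypothetical NFS server and export path; adjust to your environment
    sudo mount -t nfs nfs.wso2.com:/exports/am-server <API-M_HOME>/repository/deployment/server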

    Step 5 - Optionally, configure Hazelcast

    You can seamlessly deploy WSO2 API Manager using local caching in a clustered setup without Hazelcast clustering. However, there are edge case scenarios where you need to enable Hazelcast clustering. For more information, see Working with Hazelcast Clustering to identify whether you need Hazelcast clustering and to configure it if needed.
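    Should you need it, Hazelcast clustering is typically enabled through the clustering section of the <API-M_HOME>/repository/conf/axis2/axis2.xml file. The fragment below is only a sketch of that section as found in Carbon 4.x-based products; the linked page remains the authoritative reference.

    Code Block
    languagehtml/xml
    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
                enable="true">
        <!-- Membership scheme: 'wka' (well-known address) or 'multicast' -->
        <parameter name="membershipScheme">wka</parameter>
    </clustering>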

    Step 6 - Start the Gateway Nodes

    Follow the instructions below to start the Gateway nodes:

    Step 6.1 - Start the Gateway manager

    1. Comment out the following configurations in the <API-M_HOME>/repository/conf/api-manager.xml file.

      Code Block
      <JMSEventPublisherParameters>
         <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
         <java.naming.provider.url>repository/conf/jndi.properties</java.naming.provider.url>
         <transport.jms.DestinationType>topic</transport.jms.DestinationType>
         <transport.jms.Destination>throttleData</transport.jms.Destination>
         <transport.jms.ConcurrentPublishers>allow</transport.jms.ConcurrentPublishers>
         <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
      </JMSEventPublisherParameters>
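      One way to do this is to wrap the whole block in an XML comment:

      Code Block
      <!--
      <JMSEventPublisherParameters>
         <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
         <java.naming.provider.url>repository/conf/jndi.properties</java.naming.provider.url>
         <transport.jms.DestinationType>topic</transport.jms.DestinationType>
         <transport.jms.Destination>throttleData</transport.jms.Destination>
         <transport.jms.ConcurrentPublishers>allow</transport.jms.ConcurrentPublishers>
         <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
      </JMSEventPublisherParameters>
      -->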
    2. Start the WSO2 API-M Gateway manager node by typing the following command in the terminal.

      Code Block
      sh <API-M_GATEWAY_HOME>/bin/wso2server.sh

    Step 6.2 - Start the Gateway worker

    Tip

    It is recommended to delete the <API-M_HOME>/repository/deployment/server directory and create an empty server directory in the worker node. This is done to avoid any SVN conflicts that may arise. Note that when you do this, you may have to restart the worker node after you start it in order to avoid an error.

    1. Update the <API-M_HOME>/repository/conf/api-manager.xml file by commenting out the following configurations.

    Code Block
    <JMSEventPublisherParameters>
       <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
       <java.naming.provider.url>repository/conf/jndi.properties</java.naming.provider.url>
       <transport.jms.DestinationType>topic</transport.jms.DestinationType>
       <transport.jms.Destination>throttleData</transport.jms.Destination>
       <transport.jms.ConcurrentPublishers>allow</transport.jms.ConcurrentPublishers>
       <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
    </JMSEventPublisherParameters>

    2. Start the Gateway worker by typing the following command in the terminal.

    Code Block
    sh <GATEWAY_WORKER_HOME>/bin/wso2server.sh -Dprofile=gateway-worker


    Note

    What if I am unable to use a shared file system?

    If you are unable to use a shared file system, you can use remote synchronization (rsync) instead. However, note that rsync introduces the vulnerability of a single point of failure: rsync only provides write permission to one node, and therefore needs that one node to act as the Gateway Manager. For more information, see Configuring the Gateway in a Distributed Environment with rsync.
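    As an illustration only (the linked page has the authoritative steps), an rsync push from the manager node to a worker node might look like the following, with the hostname and user being hypothetical:

    Code Block
    languagenone
    # Push deployment artifacts from the manager to a worker node (illustrative)
    rsync -avz <API-M_HOME>/repository/deployment/server/ user@worker-node:<API-M_HOME>/repository/deployment/server/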

    Why can't I use SVN based deployment synchronization (Dep Sync)?

    WSO2 has identified some inconsistencies when using Hazelcast clustering. As a result, from API-M 2.1.0 onward, WSO2 API-M is designed so that it can be deployed in a clustered setup without Hazelcast clustering, allowing users to enable Hazelcast clustering only when necessary. However, if you use deployment synchronization as the content synchronization mechanism, you are compelled to use Hazelcast clustering. Therefore, WSO2 does not recommend using SVN-based deployment synchronization.