This site contains the documentation that is relevant to older WSO2 product versions and offerings.
For the latest WSO2 documentation, visit https://wso2.com/documentation/.

Clustering the App Manager Gateway

This topic provides instructions on how to configure a standard Gateway cluster for the App Manager. The instructions are based on the Clustering App Manager topic and will only work if the configurations described there have been applied correctly. For instance, all datasource configurations for the Gateway are already covered in that topic and are not repeated here.

The following sections provide specific instructions on configuring the Gateway cluster.

Gateway deployment pattern

The configurations in this topic are based on the following pattern, which represents a basic Gateway cluster in which the worker nodes and manager nodes are separated.

Configuring the load balancer

Nginx Plus is used as the load balancer in this scenario, but you can use any load balancer that you prefer.

Use the following steps to configure Nginx as the load balancer for WSO2 products.

  1. Install Nginx using the following command.
    $sudo apt-get install nginx
  2. Configure Nginx Plus to direct the HTTP requests to the two worker nodes via HTTP port 80 using the URL http://appm.wso2.com/<service>. To do this, create a VHost file (appm.http.conf) in the /etc/nginx/conf.d/ directory and add the following configurations to it.

    upstream gatewaywkhttp {
        server xxx.xxx.xxx.xx3:8280;
        server xxx.xxx.xxx.xx4:8280;
        sticky learn create=$upstream_cookie_jsessionid
               lookup=$cookie_jsessionid
               zone=gw_http_sessions:1m;
    }
    server {
        listen 80;
        server_name appm.wso2.com;
        location / {
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_read_timeout 5m;
            proxy_send_timeout 5m;
            proxy_connect_timeout 2;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_pass http://gatewaywkhttp/;
            proxy_redirect http://gatewaywkhttp/ http://appm.wso2.com/;
        }
    }
  3. Configure Nginx Plus to direct the HTTPS requests to the two worker nodes via HTTPS port 443 using the URL https://appm.wso2.com/<service>. To do this, create a VHost file (gateway.https.conf) in the /etc/nginx/conf.d/ directory and add the following configurations to it.

    upstream gatewaywkhttps {
        server xxx.xxx.xxx.xx3:8243;
        server xxx.xxx.xxx.xx4:8243;
        sticky learn create=$upstream_cookie_jsessionid
               lookup=$cookie_jsessionid
               zone=gw_https_sessions:1m;
    }

    server {
        listen 443;
        server_name appm.wso2.com;
        ssl on;
        ssl_certificate /etc/nginx/ssl/mgt.crt;
        ssl_certificate_key /etc/nginx/ssl/mgt.key;
        location / {
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_read_timeout 5m;
            proxy_send_timeout 5m;
            proxy_connect_timeout 2;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_pass https://gatewaywkhttps/;
            proxy_redirect https://gatewaywkhttps/ https://appm.wso2.com/;
        }
    }
  4. Configure Nginx Plus to direct requests for the Management Console, https://mgt.appm.wso2.com/carbon, to the manager node via HTTPS port 443. To do this, create a VHost file (mgt.appm.https.conf) in the /etc/nginx/conf.d/ directory and add the following configurations to it.

    upstream gatewaymgthttps {
        server xxx.xxx.xxx.xx2:9443;
        sticky learn create=$upstream_cookie_jsessionid
               lookup=$cookie_jsessionid
               zone=gw_mgt_https_sessions:1m;
    }

    server {
        listen 443;
        server_name mgt.appm.wso2.com;
        ssl on;
        ssl_certificate /etc/nginx/ssl/mgt.crt;
        ssl_certificate_key /etc/nginx/ssl/mgt.key;
        location / {
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_read_timeout 5m;
            proxy_send_timeout 5m;
            proxy_connect_timeout 2;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_pass https://gatewaymgthttps/;
            proxy_redirect https://gatewaymgthttps/ https://mgt.appm.wso2.com/;
        }
    }
  5. Restart the Nginx Plus server.
    $sudo service nginx restart

    Tip: You do not need to restart the server if you are simply making a modification to the VHost file. The following command should be sufficient in such cases.

    $sudo service nginx reload 
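Note that if a VHost file contains a syntax error, a reload silently keeps the old configuration. You can validate the files before reloading; `nginx -t` checks the full configuration without affecting the running server.

```shell
# Check all configuration files (including those under /etc/nginx/conf.d/)
# for syntax errors before applying them.
sudo nginx -t
```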

Create SSL certificates

Create SSL certificates for both the manager and worker nodes using the instructions that follow.

  1. Create the Server Key.
    $sudo openssl genrsa -des3 -out server.key 1024
  2. Create the Certificate Signing Request (CSR).
    $sudo openssl req -new -key server.key -out server.csr
  3. Remove the password.
    $sudo cp server.key server.key.org
    $sudo openssl rsa -in server.key.org -out server.key
  4. Sign your SSL Certificate.
    $sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

While creating keys, enter the host name (appm.wso2.com or mgt.appm.wso2.com) as the Common Name.
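To confirm that the Common Name was entered correctly, you can read back the subject of the server.crt generated above.

```shell
# Print the subject of the certificate; the CN field should show
# appm.wso2.com (or mgt.appm.wso2.com for the management certificate).
openssl x509 -in server.crt -noout -subject
```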

You have now configured the load balancer to handle requests sent to appm.wso2.com and to distribute the load among the worker nodes in the worker sub-domain of the wso2.am.domain cluster.

You are now ready to set up the cluster configurations. The next step is to configure the Gateway manager.

Configuring the Gateway manager

Management nodes specialize in management of the setup. Only management nodes are authorized to add new artifacts into the system or make configuration changes. Management nodes are usually behind an internal firewall and are exposed to clients running within the organization only.

Configuring the axis2.xml file

The following configurations are done in the <GATEWAY_MANAGER_HOME>/repository/conf/axis2/axis2.xml file.

  1. Open the <GATEWAY_MANAGER_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default).
    1. Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Set the membership scheme to wka to enable the well-known address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    3. Specify the name of the cluster this node will join:
      <parameter name="domain">wso2.am.domain</parameter>
    4. Specify the host used to communicate cluster messages. This is the IP of the Gateway manager node.
      <parameter name="localMemberHost">xxx.xxx.xxx.xx3</parameter>
    5. Specify the port used to communicate cluster messages: 
      <parameter name="localMemberPort">4500</parameter>

      This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server. 

    6. Do the following port mapping configurations for the Gateway manager node. The App Manager has two types of transports: the servlet transport and the default PTT/NIO transport, which all incoming requests go to. Accessing the management console of the Gateway manager node sends a servlet request, so if you do not specify the port mapping parameters on the manager node, that request would hit the PTT/NIO transport and fail.

      <parameter name="properties">
      	<property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
      	<property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
      	<property name="subDomain" value="mgt"/>
      	<property name="port.mapping.80" value="9763"/>
      	<property name="port.mapping.443" value="9443"/>
      </parameter>
    7. Note that the HTTP/HTTPS port values given for the transport receivers must not include the port offset; the offset defined in carbon.xml is added automatically. The 'WSDLEPRPrefix' parameter of each transport receiver should point to the worker host name (appm.wso2.com) and the load balancer's HTTP (80)/HTTPS (443) transport ports.

    8. Change the members listed in the <members> element. This defines the WKA members.

      <members>
          <member>
              <hostName>xxx.xxx.xxx.xx3</hostName>
              <port>4500</port>
          </member>
          <member>
              <hostName>xxx.xxx.xxx.xx4</hostName>
              <port>4200</port>
          </member>
      </members>

      Here we configure the manager node and worker node as the well-known members.

      It is recommended to add at least two well-known members here to ensure high availability for the cluster.

      You can also use IP address ranges for the hostName, for example, 192.168.1.2-10. This helps ensure that the cluster eventually recovers after failures. One shortcoming of this approach is that you can define a range only for the last portion of the IP address. Also keep in mind that the smaller the range, the faster members are discovered, since each node scans fewer potential members.
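For reference, assembling the values from the steps above, the clustering section of the manager node's axis2.xml would look roughly like the following sketch. Only the parameters discussed here are shown; leave the remaining parameters in the file at their defaults.

```xml
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.am.domain</parameter>
    <parameter name="localMemberHost">xxx.xxx.xxx.xx3</parameter>
    <parameter name="localMemberPort">4500</parameter>
    <parameter name="properties">
        <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
        <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
        <property name="subDomain" value="mgt"/>
        <property name="port.mapping.80" value="9763"/>
        <property name="port.mapping.443" value="9443"/>
    </parameter>
    <members>
        <member>
            <hostName>xxx.xxx.xxx.xx3</hostName>
            <port>4500</port>
        </member>
        <member>
            <hostName>xxx.xxx.xxx.xx4</hostName>
            <port>4200</port>
        </member>
    </members>
</clustering>
```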

Configuring the carbon.xml file

The following configurations are done in the <GATEWAY_MANAGER_HOME>/repository/conf/carbon.xml file.

  1. Open <GATEWAY_MANAGER_HOME>/repository/conf/carbon.xml.
  2. Locate the <HostName> tag and add the cluster host name: 
    <HostName>appm.wso2.com</HostName>
  3. Locate the <MgtHostName> tag and uncomment it. Make sure that the management host name is defined as follows:
    <MgtHostName>mgt.appm.wso2.com</MgtHostName>

Configuring the catalina-server.xml file

Specify the following configurations in the catalina-server.xml file located in the <GATEWAY_MANAGER_HOME>/repository/conf/tomcat/ directory.

<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                port="9763"
                proxyPort="80"
--------
/>
<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                port="9443"
                proxyPort="443"
--------
/>

The TCP port number is the value that this Connector will use to create a server socket and await incoming connections. Your operating system will allow only one server application to listen to a particular port number on a particular IP address.

Mapping host names to IPs

Open the server's /etc/hosts file and add the following.

<GATEWAY-WORKER-IP> appm.wso2.com

In this example, it would look like this:

xxx.xxx.xxx.xx4 appm.wso2.com

We have now finished configuring the manager node and are ready to start the Gateway manager.

Starting the Gateway manager

Start the Gateway manager by typing the following command in the terminal.

sh <GATEWAY_MANAGER_HOME>/bin/wso2server.sh -Dsetup 

The additional -Dsetup argument creates the required tables in the database. The above command runs the App Manager with all its components, i.e., Publisher, Store, Gateway and IDP. To start this node as a manager node with only Gateway functionality, use the following command instead.

sh <GATEWAY_MANAGER_HOME>/bin/wso2server.sh -Dprofile=gateway-manager

Once you replicate these configurations for all the manager nodes, your Gateway manager is configured. Next configure the Gateway worker.

Configuring the Gateway worker

Worker nodes specialize in serving requests to deployment artifacts and reading them. They can be exposed to external clients.

Configuring the axis2.xml file

The following configurations are done in the <GATEWAY_WORKER_HOME>/repository/conf/axis2/axis2.xml file.

  1. Open the <GATEWAY_WORKER_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
    1. Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Set the membership scheme to wka to enable the well-known address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    3. Specify the name of the cluster this node will join: 
      <parameter name="domain">wso2.am.domain</parameter>
    4. Specify the host used to communicate cluster messages. This is the IP address of the Gateway worker.
      <parameter name="localMemberHost">xxx.xxx.xxx.xx4</parameter> 

    5. Specify the port used to communicate cluster messages (if this node is on the same server as the manager node, or another worker node, be sure to set this to a unique value, such as 4000 and 4001 for worker nodes 1 and 2). 
      <parameter name="localMemberPort">4200</parameter>

      This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server.

    6. Define the sub-domain as worker by adding the following property under the  <parameter name="properties">  element: 
      <property name="subDomain" value="worker"/>
    7. Define the manager and worker nodes as well-known members of the cluster by providing their host name and localMemberPort values. The manager node is defined here because it is required for the Deployment Synchronizer to function in an efficient manner. The deployment synchronizer uses this configuration to identify the manager and synchronize deployment artifacts across the nodes of a cluster.

      <members>
          <member>
              <hostName>xxx.xxx.xxx.xx3</hostName>
              <port>4500</port>
          </member>
          <member>
              <hostName>xxx.xxx.xxx.xx4</hostName>
              <port>4200</port>
          </member>
      </members>

      It is recommended to add at least two well-known members here to ensure high availability for the cluster.

      You can also use IP address ranges for the hostName, for example, 192.168.1.2-10. This helps ensure that the cluster eventually recovers after failures. One shortcoming of this approach is that you can define a range only for the last portion of the IP address. Also keep in mind that the smaller the range, the faster members are discovered, since each node scans fewer potential members.

Configuring the carbon.xml file

  1. Open <GATEWAY_WORKER_HOME>/repository/conf/carbon.xml on each worker node.
  2. Specify the host name as follows.
    <HostName>appm.wso2.com</HostName>

You can also configure the deployment synchronizer in the same carbon.xml file.
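As a sketch, a deployment synchronizer configuration in carbon.xml for an SVN-based repository looks like the following. The repository URL and credentials below are placeholders that you must replace with your own values; on a worker node, AutoCommit is typically false and AutoCheckout true, so the node only reads artifacts.

```xml
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <!-- Worker nodes only check out artifacts; they never commit changes -->
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <!-- Placeholder values: point these at your own SVN repository -->
    <SvnUrl>http://svn.example.com/repo/</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```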

Configuring the catalina-server.xml file

Make the following configuration changes in the catalina-server.xml file which is found in the <GATEWAY_WORKER_HOME>/repository/conf/tomcat/ directory.

<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                port="9763"
                proxyPort="80"
--------
/>
<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                port="9443"
                proxyPort="443"
--------
/>

In the next section, we will map the host names we specified to real IPs.

Mapping host names to IPs

Open the server's /etc/hosts file and add the following.

<GATEWAY-MANAGER-IP> mgt.appm.wso2.com 

In this example, it would look like this:

xxx.xxx.xxx.xx3 mgt.appm.wso2.com 

We have now finished configuring the worker nodes and are ready to start them.

Starting the Gateway worker

Tip: It is recommended to delete the <PRODUCT_HOME>/repository/deployment/server directory and create an empty server directory in the worker node. This avoids any SVN conflicts that may arise. Note that when you do this, you may have to restart the worker node after you start it in order to avoid an error.

Start the Gateway worker by typing the following command in the terminal:

sh <GATEWAY_WORKER_HOME>/bin/wso2server.sh -Dprofile=gateway-worker

The additional -Dprofile=gateway-worker argument indicates that this is a worker node specific to the Gateway. This parameter makes the server read-only: a node started with it cannot make changes such as writing to or modifying the deployment repository. Starting the node as a Gateway worker ensures that Store and Publisher related functionality is disabled. It also ensures that the node starts with the worker profile, in which the UI bundles are not activated and only the back-end bundles are activated at startup.
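Once the manager and workers are running, a quick smoke test through the load balancer can confirm the routing. This is a sketch: <LB-IP> is a placeholder for your load balancer's address, and the --resolve option lets you test without DNS entries for the host names.

```shell
# Management console routed to the manager node
# (the certificate is self-signed, hence -k)
curl -Ik --resolve mgt.appm.wso2.com:443:<LB-IP> https://mgt.appm.wso2.com/carbon/

# Worker nodes via the HTTP virtual host
curl -I --resolve appm.wso2.com:80:<LB-IP> http://appm.wso2.com/
```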