This topic provides instructions on how to configure multiple Gateways in WSO2 API Manager (WSO2 API-M) in a distributed deployment to facilitate high availability (HA). The instructions in this topic are based on the Distributed Deployment of API Manager topic and only work if the configurations described there have been done correctly. For instance, all datasource configurations for the Gateway are already covered in that topic and therefore do not need to be repeated here.
The following sections provide specific instructions on configuring the Gateway cluster.
Gateway deployment pattern
The configurations in this topic are based on the following pattern, which represents a basic Gateway cluster where the worker nodes and manager nodes are separated.
[Gateway deployment pattern diagram: manager and worker nodes behind the load balancer]
Step 1 - Configure the load balancer
NGINX is used for this scenario, but you can use any load balancer that you prefer. Use the following steps to configure NGINX as the load balancer for WSO2 products.
- Install Nginx using the following command.
$ sudo apt-get install nginx
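To confirm that the installation succeeded, you can check the installed version and the service status; the version shown will depend on your distribution.

```
$ nginx -v
$ sudo service nginx status
```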
Configure NGINX Plus to direct the HTTP requests to the two worker nodes via HTTP port 80 using http://am.wso2.com/<service>. To do this, create a VHost file (am.http.conf) in the /etc/nginx/conf.d/ directory and add the following configuration to it.

```
upstream wso2.am.com {
    sticky cookie JSESSIONID;
    server xxx.xxx.xxx.xx4:9763;
    server xxx.xxx.xxx.xx5:9763;
}

server {
    listen 80;
    server_name am.wso2.com;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass http://wso2.am.com;
    }
}
```
Configure NGINX Plus to direct the HTTPS requests to the two worker nodes via HTTPS port 443 using https://am.wso2.com/<service>. To do this, create a VHost file (am.https.conf) in the /etc/nginx/conf.d/ directory and add the following configuration to it.

```
upstream ssl.wso2.am.com {
    sticky cookie JSESSIONID;
    server xxx.xxx.xxx.xx4:9443;
    server xxx.xxx.xxx.xx5:9443;
}

server {
    listen 443;
    server_name am.wso2.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/wrk.crt;
    ssl_certificate_key /etc/nginx/ssl/wrk.key;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass https://ssl.wso2.am.com;
    }
}
```
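Note that the sticky cookie directive used in the two upstream blocks above is available only in NGINX Plus. If you are using the open-source NGINX installed earlier via apt-get, a minimal sketch of an alternative upstream block is shown below; it assumes that client-IP based session affinity through the ip_hash directive is acceptable in your environment, and the same substitution applies to the ssl.wso2.am.com upstream.

```
upstream wso2.am.com {
    # ip_hash provides client-IP based session affinity in open-source NGINX,
    # in place of the NGINX Plus-only "sticky cookie JSESSIONID" directive.
    ip_hash;
    server xxx.xxx.xxx.xx4:9763;
    server xxx.xxx.xxx.xx5:9763;
}
```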
Configure NGINX Plus to access the Management Console as https://mgt.am.wso2.com/carbon via HTTPS port 443. This directs requests to the manager node. To do this, create a VHost file (mgt.am.https.conf) in the /etc/nginx/conf.d/ directory and add the following configuration to it.

```
server {
    listen 443;
    server_name mgt.am.wso2.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/mgt.crt;
    ssl_certificate_key /etc/nginx/ssl/mgt.key;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass https://xxx.xxx.xxx.xx3:9443/;
    }
    error_log /var/log/nginx/mgt-error.log;
    access_log /var/log/nginx/mgt-access.log;
}
```
Restart the Nginx Plus server.
$ sudo service nginx restart
Tip: You do not need to restart the server if you are simply modifying a VHost file. The following command is sufficient in such cases.
sudo service nginx reload
Step 2 - Create SSL certificates
Create SSL certificates for both the manager and worker nodes using the instructions that follow.
Create the Server Key.
```
$ sudo openssl genrsa -des3 -out <key_name>.key 1024
```
Create the Certificate Signing Request (CSR).

```
$ sudo openssl req -new -key <key_name>.key -out server.csr
```
Remove the password.
```
$ sudo cp <key_name>.key <key_name>.key.org
$ sudo openssl rsa -in <key_name>.key.org -out <key_name>.key
```
Sign your SSL Certificate.
```
$ sudo openssl x509 -req -days 365 -in server.csr -signkey <key_name>.key -out <certificate_name>.crt
```
Copy the key and certificate files generated in step 4 above to the /etc/nginx/ssl/ directory (see the consolidated example below).
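As an end-to-end illustration of the steps above, the following uses the wrk.key/wrk.crt file names referenced by the worker VHost file (the mgt.key/mgt.crt pair for the manager VHost is created the same way); the concrete file names are assumptions carried over from the earlier NGINX configuration.

```
# Generate the key, CSR, and self-signed certificate for the worker VHost
$ sudo openssl genrsa -des3 -out wrk.key 1024
$ sudo openssl req -new -key wrk.key -out server.csr
$ sudo cp wrk.key wrk.key.org
$ sudo openssl rsa -in wrk.key.org -out wrk.key
$ sudo openssl x509 -req -days 365 -in server.csr -signkey wrk.key -out wrk.crt

# Copy the files to the location referenced by the VHost files
$ sudo mkdir -p /etc/nginx/ssl
$ sudo cp wrk.key wrk.crt /etc/nginx/ssl/

# Validate the NGINX configuration and reload it
$ sudo nginx -t
$ sudo service nginx reload
```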
You have now configured the load balancer to handle requests sent to am.wso2.com and to distribute the load among the worker nodes in the worker sub-domain of the wso2.am.domain cluster.
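As a quick smoke test, once the Gateway nodes have been configured and started (see the steps that follow), you can send requests through the load balancer from a client machine. This assumes the client resolves am.wso2.com and mgt.am.wso2.com to the load balancer's IP address (for example, through an /etc/hosts entry), and -k is used because the certificates created above are self-signed.

```
$ curl -v http://am.wso2.com/
$ curl -vk https://am.wso2.com/
$ curl -vk https://mgt.am.wso2.com/carbon
```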
You are now ready to set up the cluster configurations. The next step is to configure the Gateway manager.
Step 3 - Configure the Gateway manager
Management nodes specialize in the management of the setup. Only management nodes are authorized to add new artifacts into the system or make configuration changes. Management nodes are usually behind an internal firewall and are exposed only to clients running within the organization. This section involves setting up the Gateway manager node and enabling it to work with the other components in the distributed setup.
Furthermore, these instructions use a shared file system as the content synchronization mechanism.
Info: WSO2 recommends using a shared file system as the content synchronization mechanism to synchronize the artifacts among the WSO2 API-M Gateway nodes, because a shared file system does not require a specific node to act as a Gateway Manager; instead, all the nodes have both worker and manager capabilities. As a result, any node can serve all the APIs, thereby avoiding the vulnerability of a single point of failure. For this purpose you can use a common shared file system such as Network File System (NFS) or any other shared file system.
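As a rough sketch only, the following shows how such a shared file system could be mounted on each Gateway node using NFS. The NFS server address, the export path, and the choice of sharing the <API-M_HOME>/repository/deployment/server directory are assumptions and should be adapted to your environment.

```
# Install the NFS client and mount the shared artifact directory on each Gateway node
$ sudo apt-get install nfs-common
$ sudo mount -t nfs <NFS_SERVER_IP>:/exports/apim-artifacts \
    <API-M_HOME>/repository/deployment/server

# Optionally, add an /etc/fstab entry so the mount survives reboots:
# <NFS_SERVER_IP>:/exports/apim-artifacts  <API-M_HOME>/repository/deployment/server  nfs  defaults  0  0
```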
Follow the instructions below to configure the API-M Gateway in a distributed environment:
Note that the configurations in this topic are based on the deployment pattern shown above.
Step 1 - Configure the load balancer
For more information, see Configuring the Proxy Server and the Load Balancer.
Step 2 - Configure the Gateway
When using the shared file system, all nodes have both manager and worker capabilities. Therefore, there is no need to have a separate manager node. Follow the instructions below to set up the Gateway nodes and enable them to work with the other components in the distributed setup.
Once you replicate these configurations for all the manager nodes, your Gateway manager is configured. Next, configure the Gateway worker.
Step 4 - Configure the Gateway worker
Worker nodes specialize in serving requests to deployment artifacts and reading them. They can be exposed to external clients.
Step 4.1 - Configure the common configurations

Carry out the following configurations on the Gateway worker node.
Step 4.2 - Configure the carbon.xml file
You can configure the Deployment Synchronizer, which is also done in the carbon.xml file.

Step 4.3 - Configure the catalina-server.xml file

Make the following configuration changes in the catalina-server.xml file.
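A typical sketch of such a change, assuming the node sits behind the load balancer on ports 80 and 443, is to set a proxyPort on the HTTP and HTTPS connectors so that the node generates URLs pointing to the load balancer; the attribute values below are illustrative assumptions rather than the exact configuration required for your setup.

```
<!-- Illustrative sketch only: the relevant connector attributes when the node
     sits behind the load balancer on ports 80 and 443. Merge these attributes
     into the existing Connector elements rather than replacing them wholesale. -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763"
           proxyPort="80" />
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="443" />
```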
In the next section, we will map the host names we specified to real IPs.

Step 4.4 - Map the hostnames to IPs

Open the server's /etc/hosts file and map the host names specified above to the actual IP addresses.
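In this example, the mapping could look like the following, assuming the placeholder addresses used in the load balancer configuration above (replace them with the real IP addresses of your manager and worker nodes):

```
xxx.xxx.xxx.xx3   mgt.am.wso2.com
xxx.xxx.xxx.xx4   am.wso2.com
```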
Step 5 - Optionally, configure Hazelcast
You can seamlessly deploy WSO2 API Manager using local caching in a clustered setup without Hazelcast clustering. However, there are edge case scenarios where you need to enable Hazelcast clustering. For more information on identifying whether you need Hazelcast clustering and on configuring it if needed, see Working with Hazelcast Clustering.
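For reference, enabling Hazelcast clustering is typically done through the clustering element in the axis2.xml file; the snippet below is only an illustrative sketch of what that switch looks like, and the linked document should be treated as the authoritative source for the full set of parameters.

```
<!-- <API-M_HOME>/repository/conf/axis2/axis2.xml (illustrative sketch) -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- Membership scheme and member list parameters go here; see
         Working with Hazelcast Clustering for the details. -->
</clustering>
```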
Step 6 - Start the Gateway Nodes

Follow the instructions below to start the Gateway nodes.

Step 6.1 - Start the Gateway manager
Comment out the following configurations in the <API-M_HOME>/repository/conf/api-manager.xml file.

```
<JMSEventPublisherParameters>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <java.naming.provider.url>repository/conf/jndi.properties</java.naming.provider.url>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <transport.jms.Destination>throttleData</transport.jms.Destination>
    <transport.jms.ConcurrentPublishers>allow</transport.jms.ConcurrentPublishers>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
</JMSEventPublisherParameters>
```
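After commenting it out, the block should look similar to the following (inner elements elided for brevity):

```
<!--
<JMSEventPublisherParameters>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    ...
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
</JMSEventPublisherParameters>
-->
```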
Start the WSO2 API-M Gateway manager node by typing the following command in the terminal.

```
sh <GATEWAY_MANAGER_HOME>/bin/wso2server.sh
```
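To confirm that the node has started cleanly, you can tail the standard Carbon log on that node; a line similar to "WSO2 Carbon started in 'n' sec" indicates a successful start. The path below is the default WSO2 log location.

```
$ tail -f <GATEWAY_MANAGER_HOME>/repository/logs/wso2carbon.log
```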
Step 6.2 - Start the Gateway worker
Tip: It is recommended to delete the ...
Update the <API-M_HOME>/repository/conf/api-manager.xml file by commenting out the following configurations.

```
<JMSEventPublisherParameters>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <java.naming.provider.url>repository/conf/jndi.properties</java.naming.provider.url>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <transport.jms.Destination>throttleData</transport.jms.Destination>
    <transport.jms.ConcurrentPublishers>allow</transport.jms.ConcurrentPublishers>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
</JMSEventPublisherParameters>
```
Start the Gateway worker by typing the following command in the terminal.

```
sh <GATEWAY_WORKER_HOME>/bin/wso2server.sh -Dprofile=gateway-worker
```
Note:

What if I am unable to use a shared file system?
If you are unable to have a shared file system, you can use remote synchronization (rsync) instead. Note, however, that rsync introduces the vulnerability of a single point of failure, because it requires one node to act as the Gateway Manager, as it only provides write permission to one node. For more information, see Configuring the Gateway in a Distributed Environment with rsync.

Why can't I use SVN-based deployment synchronization (Dep Sync)?
WSO2 has identified some inconsistencies when using Hazelcast clustering. As a result, from API-M 2.1.0 onward, WSO2 API-M has been designed so that it can be deployed in a clustered setup without Hazelcast clustering, allowing users to enable Hazelcast clustering only when necessary. However, if you use deployment synchronization as the content synchronization mechanism, you are compelled to use Hazelcast clustering. Therefore, WSO2 does not recommend SVN-based deployment synchronization.