
Setting Up Open Banking Key Manager Deployment

This page guides you through setting up a high availability (HA) clustered deployment of WSO2 Open Banking Key Manager. For more information about the deployment pattern and its high-level architecture, see HA clustered deployment. 

You can install multiple instances of WSO2 products in a cluster to ensure that if one instance becomes unavailable or is experiencing high traffic, another instance will seamlessly handle the requests. For complete information on clustering concepts, see Clustering Overview in the Common Product Administration Guide.

A WSO2 Open Banking Key Manager cluster is set up as a standard two-node cluster for high availability. To ensure that the instances share governance registry artifacts, you must create a JDBC registry mount.



At a high level, use the sections below to cluster the Key Manager with a minimum of two nodes. The first section covers setting up the databases, the second covers setting up a standard two-node cluster, and the third covers configuring the Key Manager server in a clustered environment, along with the additional configurations required if you front your cluster with a load balancer.

In a standard WSO2 Open Banking 1.5.0 deployment, users can skip the steps mentioned below.

  • Configuring the user store

  • Configuring the datasources

  • Mounting the registry


Configuring the user store

WSO2 products allow you to configure multiple user stores to store your users and their roles. Your user store can be one of the following:

  • A directory service that can communicate over the LDAP protocol, such as OpenLDAP

  • Active Directory

  • A database that can communicate over JDBC

Note: The instructions in this page demonstrate configuring a JDBC user store. The user store configurations are available in the <WSO2_OB_KM_HOME>/repository/conf/user-mgt.xml file. Point all the cluster nodes to the same user store.

For more information on how to set up other types of user stores, see Configuring User Stores.



Configuring the datasources

  1. Create the databases. See Setting up the Physical Database in the WSO2 Administration Guide for database scripts and more information.
    This documentation demonstrates a deployment with a user management database (WSO2UM_DB) and an identity database (WSO2AM_DB).

    Alternatively, you can create a separate database for each type of data to separate the data logically. Note that this does NOT make a difference in performance and is not strictly necessary.

    If you wish to separate the data logically into separate databases, see Setting Up Separate Databases for Open Banking Key Manager Clustering.

  2. Configure the datasource for the databases in both nodes of your cluster in the <WSO2_OB_KM_HOME>/repository/conf/datasources/master-datasources.xml file that contains database related configurations.
    For instructions on how to configure the datasource depending on the type of database you created, see  Changing the Carbon Database in the WSO2 Product Administration Guide. Following is a sample configuration of the user management, identity, and registry databases for a MySQL database. Make sure your datasource configurations are as follows:
    <datasource>
    	<name>WSO2_CARBON_DB</name>
    	<description>The datasource used for registry and user manager</description>
    	<jndiConfig>
    		<name>jdbc/WSO2CarbonDB</name>
    	</jndiConfig>
    	<definition type="RDBMS">
    		<configuration>
    			<url>jdbc:h2:repository/database/WSO2CARBON_DB;DB_CLOSE_ON_EXIT=FALSE</url>
    			<username>wso2carbon</username>
    			<password>wso2carbon</password>
    			<driverClassName>org.h2.Driver</driverClassName>
    			<maxActive>50</maxActive>
    			<maxWait>60000</maxWait>
    			<testOnBorrow>true</testOnBorrow>
    			<validationQuery>SELECT 1</validationQuery>
    			<validationInterval>30000</validationInterval>
    		</configuration>
    	</definition>
    </datasource>
    <datasource>
    	<name>WSO2AM_DB</name>
    	<description>The datasource used for API Manager database</description>
    	<jndiConfig>
    		<name>jdbc/WSO2AM_DB</name>
    	</jndiConfig>
    	<definition type="RDBMS">
    		<configuration>
    			<url>jdbc:mysql://localhost:3306/openbank_apimgtdb?autoReconnect=true&amp;useSSL=false</url>
    			<username>root</username>
    			<password>root</password>
    			<driverClassName>com.mysql.jdbc.Driver</driverClassName>
    			<maxActive>150</maxActive>
    			<maxWait>60000</maxWait>
    			<testOnBorrow>true</testOnBorrow>
    			<validationQuery>SELECT 1</validationQuery>
    			<validationInterval>30000</validationInterval>
    			<defaultAutoCommit>false</defaultAutoCommit>
    		</configuration>
    	</definition>
    </datasource>
    <datasource>
    	<name>WSO2UM_DB</name>
    	<description>The datasource used by user manager</description>
    	<jndiConfig>
    		<name>jdbc/WSO2UM_DB</name>
    	</jndiConfig>
    	<definition type="RDBMS">
    		<configuration>
    			<url>jdbc:mysql://localhost:3306/openbank_userdb?autoReconnect=true&amp;useSSL=false</url>
    			<username>root</username>
    			<password>root</password>
    			<driverClassName>com.mysql.jdbc.Driver</driverClassName>
    			<maxActive>150</maxActive>
    			<maxWait>60000</maxWait>
    			<testOnBorrow>true</testOnBorrow>
    			<validationQuery>SELECT 1</validationQuery>
    			<validationInterval>30000</validationInterval>
    			<defaultAutoCommit>false</defaultAutoCommit>
    		</configuration>
    	</definition>
    </datasource>
    <datasource>
    	<name>WSO2CONFIG_DB</name>
    	<description>The datasource used by the registry</description>
    	<jndiConfig>
    		<name>jdbc/WSO2Config_DB</name>
    	</jndiConfig>
    	<definition type="RDBMS">
    		<configuration>
    			<url>jdbc:mysql://localhost:3306/openbank_iskm_configdb?autoReconnect=true&amp;useSSL=false</url>
    			<username>root</username>
    			<password>root</password>
    			<driverClassName>com.mysql.jdbc.Driver</driverClassName>
    			<maxActive>150</maxActive>
    			<maxWait>60000</maxWait>
    			<testOnBorrow>true</testOnBorrow>
    			<validationQuery>SELECT 1</validationQuery>
    			<validationInterval>30000</validationInterval>
    			<defaultAutoCommit>false</defaultAutoCommit>
    		</configuration>
    	</definition>
    </datasource>
    <datasource>
    	<name>WSO2REG_DB</name>
    	<description>The datasource used by the registry</description>
    	<jndiConfig>
    		<name>jdbc/WSO2REG_DB</name>
    	</jndiConfig>
    	<definition type="RDBMS">
    		<configuration>
    			<url>jdbc:mysql://localhost:3306/openbank_govdb?autoReconnect=true&amp;useSSL=false</url>
    			<username>root</username>
    			<password>root</password>
    			<driverClassName>com.mysql.jdbc.Driver</driverClassName>
    			<maxActive>150</maxActive>
    			<maxWait>60000</maxWait>
    			<testOnBorrow>true</testOnBorrow>
    			<validationQuery>SELECT 1</validationQuery>
    			<validationInterval>30000</validationInterval>
    			<defaultAutoCommit>false</defaultAutoCommit>
    		</configuration>
    	</definition>
    </datasource> 
  3. Open the <WSO2_OB_KM_HOME>/repository/conf/finance/open-banking.xml file and configure the WSO2OpenBankingDB datasource in both nodes of your cluster.
    <DataSource>
    	<!-- Include a data source name (jndiConfigName) from the set of data sources defined in master-datasources.xml -->
    	<Name>jdbc/WSO2OpenBankingDB</Name>
    </DataSource>
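
Step 1 above requires the databases referenced by the datasources to exist before the nodes are started. The following is a minimal sketch for MySQL, using the database names from the sample URLs above; the user, password, and the location of the product's database scripts are assumptions, so substitute the values and scripts described in Setting up the Physical Database for your environment.

mysql -u root -p <<'SQL'
-- Databases referenced by the sample datasource URLs above.
CREATE DATABASE openbank_apimgtdb;
CREATE DATABASE openbank_userdb;
CREATE DATABASE openbank_iskm_configdb;
CREATE DATABASE openbank_govdb;
SQL

# Run the product's MySQL scripts against each database as described in
# "Setting up the Physical Database"; the script path below is a placeholder.
# mysql -u root -p openbank_userdb < <PATH_TO_DB_SCRIPTS>/mysql.sql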

Mounting the registry

Mount the governance and configuration registry in the <WSO2_OB_KM_HOME>/repository/conf/registry.xml file to share the registry across all nodes in the cluster. For more information on mounting the registry, see Sharing Databases in a Cluster.

Make sure the WSO2Config_DB and WSO2REG_DB configurations are updated in both nodes as follows:

<dbConfig name="configRegistry">
	<dataSource>jdbc/WSO2Config_DB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
	<id>configInstance</id>
	<dbConfig>configRegistry</dbConfig>
	<readOnly>false</readOnly>
	<enableCache>true</enableCache>
	<registryRoot>/</registryRoot>
	<cacheId>jdbc:mysql://localhost:3306/openbank_iskm_configdb</cacheId>
</remoteInstance>

<mount path="/_system/config" overwrite="true">
	<instanceId>configInstance</instanceId>
	<targetPath>/_system/config</targetPath>
</mount>

<dbConfig name="governanceRegistry">
	<dataSource>jdbc/WSO2REG_DB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
	<id>governanceInstance</id>
	<dbConfig>governanceRegistry</dbConfig>
	<readOnly>false</readOnly>
	<enableCache>true</enableCache>
	<registryRoot>/</registryRoot>
	<cacheId>jdbc:mysql://localhost:3306/openbank_govdb</cacheId>
</remoteInstance>

<mount path="/_system/governance" overwrite="true">
	<instanceId>governanceInstance</instanceId>
	<targetPath>/_system/governance</targetPath>
</mount>

Note: The production recommendation is to set the <versionResourcesOnChange> property in the <WSO2_OB_KM_HOME>/repository/conf/registry.xml file to false. This is because the automatic versioning of resources can be an extremely expensive operation.

<versionResourcesOnChange>false</versionResourcesOnChange>

To verify the registry mount:

  1. Start the Key Manager server. See Starting up and verifying product nodes for more information.
  2. Log in to the Management Console.
  3. Navigate to Home > Registry > Browse and verify that the governance collection is shown with the symlink icon.



Clustering Key Manager for high availability

Follow the instructions below to cluster WSO2 Open Banking Key Manager.

  1. Install WSO2 OB KM on each node.
  2. Make the following changes to the <WSO2_OB_KM_HOME>/repository/conf/axis2/axis2.xml file on both nodes.

    1. Enable clustering on node 1 and node 2 by setting the clustering element to true:  

      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Specify the name of the cluster that the WSO2 OB KM node will join.

      <parameter name="domain">wso2.obkm.domain</parameter>
    3. Use the Well Known Address (WKA) based clustering method. In WKA-based clustering, we need to have a subset of cluster members configured in all the members of the cluster. At least one well known member has to be operational at all times.

      <parameter name="membershipScheme">wka</parameter>

      WSO2 supports the following membership schemes as well.

      • Multicast membership scheme
      • AWS membership scheme
      • Kubernetes membership scheme

      For more information, see Clustering WSO2 Products - About Membership Schemes.

    4. Configure the localMemberHost and localMemberPort entries. If both nodes run on the same server, use different port values for each node to prevent conflicts.


      <parameter name="localMemberHost">127.0.0.1</parameter>
      <parameter name="localMemberPort">4000</parameter>
    5. Under the members section, add the hostName and port for each WKA member. As we have only two nodes in our sample cluster configuration, we will configure both nodes as WKA nodes.

      <members>
          <member>
            <hostName>127.0.0.1</hostName>
            <port>4000</port>
          </member>
          <member>
            <hostName>127.0.0.2</hostName>
            <port>4010</port>
          </member>
      </members>

      Note: You can also use IP address ranges for the hostName, for example, 192.168.1.2-10. This helps the cluster recover eventually after failures. One shortcoming is that you can define a range only for the last portion of the IP address. Also keep in mind that the smaller the range, the faster members are discovered, since each node has to scan fewer potential members.

  3. Configure caching.

    Distributed caching is not recommended because of the many practical issues involved in configuring and running it properly. WSO2 Open Banking Key Manager uses Hazelcast primarily for cluster messaging, and uses distributed caching only in a simple setup.

    For information on clustering, see Clustering WSO2 Products.

    About Caching

    • Why caching

      Caching is an additional layer on top of databases. It keeps recently used data fetched from the database in local memory so that subsequent requests for the same data can be served from memory instead of the database. Caching has certain advantages and disadvantages that you need to evaluate when deciding on your caching strategy.

    • Advantages
      • The load on the underlying database or LDAP is reduced as data is served from already fetched data in memory.

      • Improved performance due to the reduced number of database calls for repetitive data fetching.

    • Disadvantages
      • Coherency problems may occur: when one node or an external system updates the database, the change is not immediately reflected in the data cached on the other nodes.

      • Data in memory can become stale yet still be served. For example, data may be served from memory while its corresponding record in the database has already been deleted.

    Historically, WSO2 Open Banking Key Manager used distributed caching to utilize the above-mentioned advantages as well as to minimize the coherence problem. However, in newer deployment patterns where the network is not tightly controlled, distributed caching fails in unexpected ways. Hence, we no longer recommend using distributed caching. Instead, it is recommended to have local caches (if required) and cache invalidation messages (if required) by considering the information given below.

    • The ForceLocalCache property

      When Hazelcast clustering is enabled, certain caches act as distributed caches. The ForceLocalCache property within the <cache> section in the <WSO2_OB_KM_HOME>/repository/conf/carbon.xml file marks that all the caches should act like local caches even in a clustered setup.

      <ForceLocalCache>true</ForceLocalCache>

      Cache invalidation uses Hazelcast messaging to distribute the invalidation message over the cluster and invalidate the caches properly.  This is used to minimize the coherence problem in a multi-node setup.

    • Typical clustered deployment cache scenarios

      Scenario 1: All caches are local with distributed cache invalidation
      Local Caching: Enabled | Distributed Caching: Not Applicable | Hazelcast Clustering: Enabled | Distributed Invalidation: Enabled

      • This is the recommended approach.

      • Hazelcast messaging invalidates the caches.

      Scenario 2: All caches are local without distributed cache invalidation
      Local Caching: Enabled | Distributed Caching: Not Applicable | Hazelcast Clustering: Disabled | Distributed Invalidation: Disabled

      • Invalidation clears only the caches in specific nodes. Other caches are cleared at cache expiration.

      • Hazelcast communication is not used.

      • As the decisions take time to propagate over nodes (default cache timeout is 15 minutes), there is a security risk in this method. To reduce the risk, reduce the default cache timeout period. To learn how to reduce the default cache timeout period, see Configuring Cache Layers - timeout.

      Scenario 3: No caching
      Local Caching: Disabled | Distributed Caching: Disabled | Hazelcast Clustering: Disabled | Distributed Invalidation: Disabled

      • The data are directly acquired from the database.

      • Eliminates the security risks caused due to not having cache invalidation.

      • This method will create a performance degradation due to the lack of caching.

      Scenario 4: Certain caches are disabled while the remaining are local
      Local Caching: Enabled for the available local caches | Distributed Caching: Not Applicable | Hazelcast Clustering: Enabled | Distributed Invalidation: Enabled

      • To reduce the security risk created in the second scenario and to improve performance in comparison with the third scenario, disable the security-related caches and sustain the performance-related caches as local caches.

      • This requires identification of these caches depending on the use case.

      Scenario 5: Distributed caching enabled
      Local Caching: Disabled (the ForceLocalCache property is set to false) | Distributed Caching: Enabled | Hazelcast Clustering: Enabled | Distributed Invalidation: Not Applicable

      • This scenario is only recommended if the network has tight tolerance where the network infrastructure is capable of handling high bandwidth with very low latency.

      • Typically this applies only when you deploy all the nodes in a single server rack having fiber-optic cables. In any other environments, this implementation will cause cache losses. Thus, this implementation is not recommended for general use.

  4. Configure the following:

    1. Make sure the jdbc/WSO2UM_DB datasource is configured in the <WSO2_OB_KM_HOME>/repository/conf/user-mgt.xml file. This refers to the user store you configured in the  Configuring the user store section above.  

      <UserManager>
      	<Realm>
      		<Configuration>
      			<Property name="dataSource">jdbc/WSO2UM_DB</Property>
      		</Configuration>
      	</Realm>
      </UserManager>
    2. Make sure the <WSO2_OB_KM_HOME>/repository/conf/identity/identity.xml file of both node1 and node2 is configured to use jdbc/WSO2AM_DB datasource. This refers to the datasource you configured in the Configuring the datasources section above.

      <JDBCPersistenceManager>
         	 <DataSource>
         		<Name>jdbc/WSO2AM_DB</Name>
         	 </DataSource>
      </JDBCPersistenceManager>
    3. Configure the <WSO2_OB_KM_HOME>/repository/conf/identity/identity.xml file in both node1 and node2 so that the endpoints point to the load balancer.

      <OAuth1RequestTokenUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth/request-token</OAuth1RequestTokenUrl>
      <OAuth1AuthorizeUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth/authorize-url</OAuth1AuthorizeUrl>
      <OAuth1AccessTokenUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth/access-token</OAuth1AccessTokenUrl>
      <OAuth2AuthzEPUrl>${carbon.protocol}://localhost:8243/authorize</OAuth2AuthzEPUrl>
      <OAuth2TokenEPUrl>${carbon.protocol}://localhost:8243/token</OAuth2TokenEPUrl>
      <OAuth2RevokeEPUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth2/revoke</OAuth2RevokeEPUrl>
      <OAuth2IntrospectEPUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth2/introspect</OAuth2IntrospectEPUrl>
      <OAuth2UserInfoEPUrl>${carbon.protocol}://localhost:8243/userinfo</OAuth2UserInfoEPUrl>
      <OIDCCheckSessionEPUrl>${carbon.protocol}://${carbon.host}:${carbon.management.port}/oidc/checksession</OIDCCheckSessionEPUrl>
      <OIDCLogoutEPUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oidc/logout</OIDCLogoutEPUrl>
      <OAuth2ConsentPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_authz.do</OAuth2ConsentPage>
      <OAuth2ErrorPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_error.do</OAuth2ErrorPage>
      <OIDCConsentPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_consent.do</OIDCConsentPage>
      <OIDCLogoutConsentPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_logout_consent.do</OIDCLogoutConsentPage>
      <OIDCLogoutPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_logout.do</OIDCLogoutPage>
      <OIDCWebFingerEPUrl>${carbon.protocol}://localhost:${carbon.management.port}/.well-known/webfinger</OIDCWebFingerEPUrl>
    4. Add the following authentication endpoint configurations to the <WSO2_OB_KM_HOME>/repository/conf/identity/application-authentication.xml file in both node1 and node2.

      <AuthenticationEndpointURL>https://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/login.do</AuthenticationEndpointURL>
      <AuthenticationEndpointRetryURL>https://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/retry.do</AuthenticationEndpointRetryURL>
      <AuthenticationEndpointMissingClaimsURL>/ob/authenticationendpoint/claims.do</AuthenticationEndpointMissingClaimsURL>
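
Before starting the nodes, you can quickly confirm that the settings configured in the steps above are consistent on both nodes (only localMemberHost and localMemberPort should differ). The following is a minimal sketch that simply greps the relevant files; the installation path is an assumption, so adjust it to your environment.

# Run on each node and compare the output.
OB_KM_HOME=/path/to/wso2-obkm   # <WSO2_OB_KM_HOME>
grep -E 'clustering class|name="domain"|membershipScheme|localMemberHost|localMemberPort' \
     "$OB_KM_HOME/repository/conf/axis2/axis2.xml"
grep 'ForceLocalCache' "$OB_KM_HOME/repository/conf/carbon.xml"
grep 'jdbc/WSO2UM_DB'  "$OB_KM_HOME/repository/conf/user-mgt.xml"
grep 'jdbc/WSO2AM_DB'  "$OB_KM_HOME/repository/conf/identity/identity.xml"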

Configuring Open Banking API Manager

Make sure to do the following changes in your Open Banking API Manager server:

  • By default, the in-sequence files for the APIs in the <WSO2_OB_APIM_HOME>/repository/resources/finance/apis directory point to Open Banking Key Manager. In order to use the load balancer, update the in-sequence files by pointing them to the load balancer instead of Open Banking Key Manager, where applicable.

  • To point the Open Banking API Manager to the Key Manager cluster, open the <WSO2_OB_APIM_HOME>/repository/conf/api-manager.xml file and configure the following:

    <APIKeyManager>
    	<Configuration>
    		<ServerURL>https://<LOAD_BALANCER_HOST>${carbon.context}services/</ServerURL>
    	</Configuration>
    </APIKeyManager>
    <APIKeyValidator>
    	<!-- Server URL of the API key manager -->
    	<!--Required in OB-->
    	<ServerURL>https://<LOAD_BALANCER_HOST>${carbon.context}services/</ServerURL>
    </APIKeyValidator>
    <AuthManager>
    	<!-- Server URL of the Authentication service -->
    	<!--openbanking_hostname Required in OB-->
    	<ServerURL>https://<LOAD_BALANCER_HOST>${carbon.context}services/</ServerURL>
    </AuthManager>
    <RevokeAPIURL>https://ssl.nginx.com:${https.nio.port}/revoke</RevokeAPIURL>



Changing hostnames and ports

Configure the Key Manager node1 using the following steps.

  1. Go to the <WSO2_OB_KM_HOME>/repository/conf/tomcat/catalina-server.xml file and configure the proxy ports as follows:

    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9443" proxyPort="443"
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9763" proxyPort="80"

    Tip: If you are using an Openshift Docker container for the deployment, do the following.

    Add the following Tomcat RemoteIPValve to the <WSO2_OB_KM_HOME>/repository/conf/tomcat/catalina-server.xml file.

    <Valve
      className="org.apache.catalina.valves.RemoteIpValve"
      internalProxies="reg_ex_for_internal_docker_IPs"
      remoteIpHeader="x-forwarded-for"
      proxiesHeader="x-forwarded-by"
      protocolHeader="x-forwarded-proto"
    />
  2. In the <WSO2_OB_KM_HOME>/repository/conf/carbon.xml file, define the hostname for your server.

    <HostName>wso2.obkm.com</HostName>
    <MgtHostName>wso2.obkm.com</MgtHostName>

    This hostname is used by the OB Key Manager cluster. It must be specified in the /etc/hosts file as:

    127.0.0.1   wso2.obkm.com
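
    For example, you can add the mapping and confirm that the hostname resolves as follows; this is a sketch that assumes the node runs on the local machine, so use the appropriate IP address if it does not.

    # Add the hostname mapping and verify resolution.
    echo "127.0.0.1   wso2.obkm.com" | sudo tee -a /etc/hosts
    getent hosts wso2.obkm.com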

Follow all the configuration steps that were done in node1 for node2 as well. 


Enabling artifact synchronization

To enable synchronization for runtime artifacts, you must have a shared file system. You can use one of the following depending on your environment.

  • Network File System (NFS): This is one of the most commonly known shared file systems and can be used in a Linux environment.
  • Server Message Block (SMB) file system: This can be used in a Windows environment.
  • Amazon EFS: This can be used in an AWS environment.
  1. Once you choose a file system, mount it on the nodes that participate in the cluster.
  2. Create two directories called Deployment and Tenants in the shared file system.
  3. Create a symlink from the <WSO2_OB_KM_HOME>/repository/deployment path to the Deployment directory of the shared file system that you created in step 2 of this section.
  4. Create a symlink from the <WSO2_OB_KM_HOME>/repository/tenants path to the Tenants directory of the shared file system that you created in step 2 of this section.

    A symlink is created instead of mounting the file system directly to the <WSO2_OB_KM_HOME>/repository/deployment and <WSO2_OB_KM_HOME>/repository/tenants paths. This avoids issues that can occur if you delete the product in order to redeploy it, in which case the file system would be mounted to a path that no longer exists. A concrete sketch of these steps is shown below.
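
The following is a minimal sketch of steps 1-4 for an NFS share on Linux; the NFS server address, export path, mount point, and installation path are assumptions, so substitute the shared file system you chose above. The step that copies the existing artifacts into the share is an extra precaution so that the server does not start with empty deployment directories.

# 1. Mount the shared file system on each node (NFS assumed).
sudo mkdir -p /mnt/obkm-shared
sudo mount -t nfs nfs.example.com:/exports/obkm /mnt/obkm-shared

# 2. Create the shared directories (run once, from any node).
mkdir -p /mnt/obkm-shared/Deployment /mnt/obkm-shared/Tenants

# One-time: seed the share with the existing artifacts from one node.
OB_KM_HOME=/path/to/wso2-obkm   # <WSO2_OB_KM_HOME>
cp -r "$OB_KM_HOME/repository/deployment/." /mnt/obkm-shared/Deployment/
cp -r "$OB_KM_HOME/repository/tenants/."    /mnt/obkm-shared/Tenants/

# 3 and 4. On each node, replace the local directories with symlinks.
mv "$OB_KM_HOME/repository/deployment" "$OB_KM_HOME/repository/deployment.bak"
mv "$OB_KM_HOME/repository/tenants"    "$OB_KM_HOME/repository/tenants.bak"
ln -s /mnt/obkm-shared/Deployment "$OB_KM_HOME/repository/deployment"
ln -s /mnt/obkm-shared/Tenants    "$OB_KM_HOME/repository/tenants"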


Fronting with a load balancer

In this section, an Nginx server is used as an example. To set up the WSO2 Open Banking Key Manager cluster with Nginx, follow the instructions given below; do this only after setting up the cluster using the instructions above. When clustering WSO2 Open Banking Key Manager with a load balancer, make sure to enable sticky sessions. Sticky sessions are required for the management console and the dashboard to work, and also when temporary session data persistence is disabled in the <WSO2_OB_KM_HOME>/repository/conf/identity/identity.xml file.

Sticky sessions for SSO

Sticky sessions are required to ensure a flawless Single Sign On (SSO) workflow when temporary session data persistence is disabled. It is recommended to use sticky sessions for SSO in order to have a higher throughput.

For more information on sticky sessions, see Sticky Sessions with Manager Nodes.

Configuring Nginx


Use the following steps to configure NGINX Plus version 1.7.11 or nginx community version 1.9.2 as the load balancer for WSO2 products. (In these steps, we refer to both versions collectively as "Nginx".)

  1. Install Nginx (NGINX Plus or nginx community) in a server configured in your cluster.
  2. Configure Nginx to direct the HTTP requests to the two worker nodes via the HTTP 80 port using http://obkm.wso2.com/<service>.

    To do this, create a VHost file (obkm.http.conf) in the /etc/nginx/conf.d directory and add the following configurations into it.

    Note: Shown below is a more specific Nginx configuration that exposes the /oauth2, /commonauth, and other endpoints, followed by a general HTTP configuration.

    Nginx configuration with exposing /oauth2, /commonauth, and other endpoints
    upstream ssl.nginx.com {
    	server z.z.z.z:9443;  
    	server x.x.x.x:9yyy;
      ip_hash; 
    }
    
    server {
    	listen 443;
    	server_name nginx.mycomp.org;   
    	ssl on;
    	ssl_certificate /home/abc/mycomp_org.crt; 
    	ssl_certificate_key /home/abc/mycomporg.key;
    
    	location /oauth2/token {
     		proxy_set_header X-Forwarded-Host $host;
    		proxy_set_header X-Forwarded-Server $host;
    		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host;
    		proxy_read_timeout 5m;
    		proxy_send_timeout 5m;
    		
    		proxy_pass  https://ssl.nginx.com/oauth2/token ;
    		proxy_redirect https://z.z.z.z:9443/oauth2/token https://nginx.mycomp.org/oauth2/token ;
    		proxy_redirect https://x.x.x.x:9yyy/oauth2/token https://nginx.mycomp.org/oauth2/token;
    	}
    
    	location /commonauth {
    		proxy_set_header X-Forwarded-Host $host;
    		proxy_set_header X-Forwarded-Server $host;
    		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host;
    		proxy_read_timeout 5m;
    		proxy_send_timeout 5m;
    		proxy_pass https://ssl.nginx.com/commonauth;
    		proxy_redirect https://z.z.z.z:9443/commonauth https://nginx.mycomp.org/commonauth ;
    		proxy_redirect https://x.x.x.x:9yyy/commonauth https://nginx.mycomp.org/commonauth;
    	}
    
    	location /oauth2/authorize {
    		proxy_set_header X-Forwarded-Host $host;
    		proxy_set_header X-Forwarded-Server $host;
    		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host;
    		proxy_read_timeout 5m;
    		proxy_send_timeout 5m;
    		proxy_pass https://ssl.nginx.com/oauth2/authorize;
    		proxy_redirect https://z.z.z.z:9443/oauth2/authorize https://nginx.mycomp.org/oauth2/authorize ;
    		proxy_redirect https://x.x.x.x:9yyy/oauth2/authorize https://nginx.mycomp.org/oauth2/authorize;
    	}
    
    	location /authenticationendpoint/ {
    		proxy_set_header X-Forwarded-Host $host;
    		proxy_set_header X-Forwarded-Server $host;
    		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host;
    		proxy_read_timeout 5m;
    		proxy_send_timeout 5m;
    		proxy_pass https://ssl.nginx.com/authenticationendpoint/;
    		proxy_redirect https://z.z.z.z:9443/authenticationendpoint/ https://nginx.mycomp.org/authenticationendpoint/ ;
    		proxy_redirect https://x.x.x.x:9yyy/authenticationendpoint/ https://nginx.mycomp.org/authenticationendpoint/;
    	}
    
    	location /oauth2/userinfo {
    		proxy_set_header X-Forwarded-Host $host;
    		proxy_set_header X-Forwarded-Server $host;
    		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host;
    		proxy_read_timeout 5m;
    		proxy_send_timeout 5m;
    		proxy_pass https://ssl.nginx.com/oauth2/userinfo;
    		proxy_redirect https://z.z.z.z:9443/oauth2/userinfo https://nginx.mycomp.org/oauth2/userinfo ;
    		proxy_redirect https://x.x.x.x:9yyy/oauth2/userinfo https://nginx.mycomp.org/oauth2/userinfo;
    	}
    }
    HTTP configurations
    upstream ssl.wso2.obkm.com {
            server xxx.xxx.xxx.xx3:9763;
            server xxx.xxx.xxx.xx4:9763;
    }
    
    server {
            listen 80;
            server_name obkm.wso2.com;
            location / {
                   proxy_set_header X-Forwarded-Host $host;
                   proxy_set_header X-Forwarded-Server $host;
                   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                   proxy_set_header Host $http_host;
                   proxy_read_timeout 5m;
                   proxy_send_timeout 5m;
                   proxy_pass http://ssl.wso2.obkm.com;
     
    			   proxy_http_version 1.1;
            	   proxy_set_header Upgrade $http_upgrade;
            	   proxy_set_header Connection "upgrade";
            }
    }
  3. Now that you've configured HTTP requests, you must also configure HTTPS requests. Configure Nginx to direct the HTTPS requests to the two worker nodes via the HTTPS 443 port using https://obkm.wso2.com/<service>. To do this, create a VHost file (obkm.https.conf) in the /etc/nginx/conf.d directory and add the HTTPS server configuration into it; a minimal sketch for the nginx community version is given after these steps.

    Note: The configurations for the nginx community version and NGINX Plus differ here because the community version does not support the sticky directive.

  4. Configure Nginx to access the Management Console as https://mgt.obkm.wso2.com/carbon via HTTPS 443 port. This is to direct requests to the manager node. To do this, create a VHost file (mgt.obkm.https.conf) in the /etc/nginx/conf.d directory and add the following configurations into it.

    Management Console configurations
    server {
    	listen 443;
    	server_name mgt.obkm.wso2.com;
    	ssl on;
    	ssl_certificate /etc/nginx/ssl/mgt.crt;
    	ssl_certificate_key /etc/nginx/ssl/mgt.key;
    
    	location / {
                   proxy_set_header X-Forwarded-Host $host;
                   proxy_set_header X-Forwarded-Server $host;
                   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                   proxy_set_header Host $http_host;
                   proxy_read_timeout 5m;
                   proxy_send_timeout 5m;
    			   proxy_pass https://xxx.xxx.xxx.xx2:9443/;
     
    			   proxy_http_version 1.1;
    			   proxy_set_header Upgrade $http_upgrade;
    			   proxy_set_header Connection "upgrade";
        	}
    	error_log  /var/log/nginx/mgt-error.log ;
               access_log  /var/log/nginx/mgt-access.log;
    }
  5. Reload the Nginx server. $sudo service nginx reload

    If you have made modifications to anything other than the VHost files, you may need to restart the Nginx server instead of reloading:  

    $sudo service nginx restart
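
The following is a minimal sketch of the HTTPS VHost mentioned in step 3, written for the nginx community version. Because the sticky directive is available only in NGINX Plus, ip_hash is used for session stickiness, as in the detailed configuration shown earlier. The upstream addresses, certificate paths, and hostnames are assumptions, so adjust them to your environment.

# Create the HTTPS VHost described in step 3 (nginx community version).
sudo tee /etc/nginx/conf.d/obkm.https.conf > /dev/null <<'EOF'
upstream ssl.obkm.wso2.com {
        server xxx.xxx.xxx.xx3:9443;
        server xxx.xxx.xxx.xx4:9443;
        ip_hash;
}

server {
        listen 443;
        server_name obkm.wso2.com;
        ssl on;
        ssl_certificate /etc/nginx/ssl/obkm.crt;
        ssl_certificate_key /etc/nginx/ssl/obkm.key;

        location / {
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_read_timeout 5m;
                proxy_send_timeout 5m;
                proxy_pass https://ssl.obkm.wso2.com;

                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
        }
}
EOF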

Create SSL certificates


Create SSL certificates for both the manager and worker nodes using the instructions that follow:

  1. Create the server key.

    $sudo openssl genrsa -des3 -out server.key 1024
  2. Create the certificate signing request.

    $sudo openssl req -new -key server.key -out server.csr
  3. Remove the password.

    $sudo cp server.key server.key.org
             
    $sudo openssl rsa -in server.key.org -out server.key
  4. Sign your SSL certificate.

    $sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
  5. Execute the following command to import the created certificate file to the client truststore:

    keytool -import -trustcacerts -alias server -file server.crt -keystore client-truststore.jks

While creating keys, enter the hostname (obkm.wso2.com or mgt.obkm.wso2.com) as the common name.
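
For example, assuming the default client truststore location and password (wso2carbon) shipped with WSO2 products, the import in step 5 can be run against each Key Manager node as follows; adjust the path and password if you have changed them.

keytool -import -trustcacerts -alias server -file server.crt \
        -keystore <WSO2_OB_KM_HOME>/repository/resources/security/client-truststore.jks \
        -storepass wso2carbon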

Configure the Proxy Port in Open Banking Key Manager Nodes


By default, WSO2 Open Banking Key Manager runs on port 9446. The following steps describe how to configure a proxy port of 443.

  1. Open the <WSO2_OB_KM_HOME>/repository/conf/tomcat/catalina-server.xml file and add proxy port 443 to the HTTPS connector as follows.

    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
    	port="9443"
    	proxyPort="443" 	

    It is not possible to configure the proxy port from the load balancer itself, because a POST request is made while authenticating to the Identity Server Dashboard. Therefore, if you are planning to use the Identity Server Dashboard, you must do this configuration. The configurations below are also needed if you are using the dashboard.

  2. Configure proxy port and host in <WSO2_OB_KM_HOME>/repository/deployment/server/jaggeryapps/dashboard/conf/site.json file as follows:

    {
      "proxy":{
        "proxyHost":"nginx.mycomp.org",
        "proxyHTTPSPort":"443",
        "proxyContextPath":"",
        "servicePath":"/services"
      }
    }
  3. Configure proxy port and host in <WSO2_OB_KM_HOME>/repository/deployment/server/jaggeryapps/portal/conf/site.json file as follows:

    {
      "proxy":{
        "proxyHost":"nginx.mycomp.org",
        "proxyHTTPSPort":"443",
        "proxyContextPath":""
      },
      "fido":{
        "appId":""
      }
    }
  4. Configure the proxy port and host in the <WSO2_OB_KM_HOME>/repository/deployment/server/webapps/shindig/WEB-INF/web.xml file as follows:

    <context-param>
    	<param-name>system.properties</param-name>
    	<param-value>
      		<![CDATA[
     	shindig.host=
     	shindig.port=443
     	aKey=/shindig/gadgets/proxy?container=default&url=
     	]]>
    	</param-value>
    </context-param>



Starting up and verifying product nodes

If both nodes are running on the same server, set the port offset to avoid port conflicts.


By default, the Open Banking Key Manager server port offset is 3. So if you are running both Key Manager nodes on the same server, the port offsets can be as follows:

OB_KM_NODE1: offset 3

OB_KM_NODE2: offset 4

Changing the offset for default ports

When you run multiple WSO2 products, multiple instances of the same product, or multiple WSO2 product clusters on the same server or virtual machines (VMs), you must change their default ports with an offset value to avoid port conflicts. The default HTTP and HTTPS ports (without offset) of a WSO2 product are 9763 and 9443 respectively. Port offset defines the number by which all ports defined in the runtime such as the HTTP/S ports will be changed. For example, if the default HTTP port is 9763 and the port offset is 1, the effective HTTP port will change to 9764. For each additional WSO2 product instance, you set the port offset to a unique value. The default port offset is 0.

There are two ways to set an offset to a port:

  • Pass the port offset to the server during startup. The following command starts the server with the default port incremented by 3:

    ./wso2server.sh -DportOffset=3
  • Set the Ports configuration in <WSO2_OB_KM_HOME>/repository/conf/carbon.xml with the desired value as follows:

    <Ports>
    	<Offset>3</Offset>
    </Ports>

Usually, when you offset the port of the server, all ports it uses are changed automatically. However, there are a few exceptions in which you have to change the ports manually according to the offset. The following table indicates the changes that occur when the offset value is modified.

WSO2 Server Instance    Port Offset    Sample Port Value
WSO2 Product 1          0              9443
WSO2 Product 2          1              9444
WSO2 Product 3          2              9445
WSO2 Product 4          3              9446
WSO2 Product 5          4              9447

  1. Start Nginx.

  2. Go to <WSO2_OB_KM_HOME>/bin and start the nodes using the following command on both nodes:

    ./wso2server.sh
  3. Access the Management console using the following URL: https://wso2.obkm.com/carbon/
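
To confirm that both nodes started correctly, a quick check such as the following can help; the ports shown assume the offsets suggested above (3 and 4) and the hostname configured in carbon.xml, so adjust them to your setup.

# Management HTTPS ports with offsets 3 and 4 (9443 + offset).
ss -ltn | grep -E ':9446|:9447'

# Clustering localMember ports configured in axis2.xml.
ss -ltn | grep -E ':4000|:4010'

# Management console reachable through the configured hostname.
curl -k -I https://wso2.obkm.com/carbon/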