Setting Up Open Banking Key Manager Deployment
This page guides you through setting up a high availability (HA) clustered deployment of WSO2 Open Banking Key Manager. For more information about the deployment pattern and its high-level architecture, see HA clustered deployment. You can install multiple instances of WSO2 products in a cluster to ensure that if one instance becomes unavailable or is experiencing high traffic, another instance will seamlessly handle the requests. For complete information on clustering concepts, see Clustering Overview in the Common Product Administration Guide. Creating a cluster of WSO2 Open Banking Key Manager instances involves a standard two-node cluster for high availability. To ensure that the instances share governance registry artifacts, you must create a JDBC mount.
At a high level, use the following sections to cluster the Key Manager with a minimum of two nodes. The first section includes instructions on setting up the databases. The second section explains how to set up a standard two-node cluster, and the third section explains how to set up the Key Manager server in a clustered environment, along with the additional configurations required if you front your cluster with a load balancer.
In a standard WSO2 Open Banking 1.5.0 deployment, you can skip the steps listed below:
Configuring the user store
Configuring the datasources
Mounting the registry
Clustering Key Manager for high availability
Follow the instructions below to cluster WSO2 Open Banking Key Manager.
1. Make the following changes to the <WSO2_OB_KM_HOME>/repository/conf/axis2/axis2.xml file on both nodes.
a. Enable clustering on node 1 and node 2 by setting the clustering element to true:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
b. Specify the name of the cluster that the WSO2 OB KM node will join:
<parameter name="domain">wso2.obkm.domain</parameter>
c. Use the Well Known Address (WKA) based clustering method. In WKA-based clustering, a subset of the cluster members is configured in all the members of the cluster, and at least one well known member has to be operational at all times. WSO2 supports other membership schemes as well; for more information, see Clustering WSO2 Products - About Membership Schemes.
<parameter name="membershipScheme">wka</parameter>
d. Configure the localMemberHost and localMemberPort entries. If the two nodes are on the same server, these must be different port values to prevent any conflicts:
<parameter name="localMemberHost">127.0.0.1</parameter>
<parameter name="localMemberPort">4000</parameter>
e. Under the members section, add the hostName and port for each WKA member. As there are only two nodes in this sample cluster configuration, both nodes are configured as WKA nodes:
<members>
<member>
<hostName>127.0.0.1</hostName>
<port>4000</port>
</member>
<member>
<hostName>127.0.0.2</hostName>
<port>4010</port>
</member>
</members>
Note: You can also use IP address ranges for the hostName, for example, 192.168.1.2-10. This should ensure that the cluster eventually recovers after failures. One shortcoming of this approach is that you can define a range only for the last portion of the IP address. Also keep in mind that the smaller the range, the faster members are discovered, since each node has to scan fewer potential members.
2. Configure caching. Using distributed caching is not recommended due to the many practical issues that are related to configuring and running it properly. WSO2 Open Banking Key Manager employs Hazelcast as the primary method of implementing cluster messages, while using distributed caching in a simple setup. For information on clustering, see Clustering WSO2 Products.
About caching
Why caching
Caching is an additional layer on top of databases. It keeps recently used data that is fetched from the database in local memory, so that subsequent requests for the same data can be served from memory instead of the database. Caching has certain advantages and disadvantages that you need to evaluate when deciding on your caching strategy:
- The load on the underlying database or LDAP is reduced, as data is served from already-fetched data in memory.
- Performance improves due to the reduced number of database calls for repetitive data fetching.
- Coherency problems may occur when a data change is not immediately reflected in cached data, for example when another node or an external system updates the database.
- Data in memory can become stale yet still be served; for example, data may be served from memory while its corresponding record in the database has already been deleted.
Historically, WSO2 Open Banking Key Manager used distributed caching to utilize the above-mentioned advantages as well as to minimize the coherency problem. However, in newer deployment patterns where the network is not tightly controlled, distributed caching fails in unexpected ways; hence, it is no longer recommended. Instead, it is recommended to have local caches (if required) and cache invalidation messages (if required), considering the information given below.
The ForceLocalCache property
When Hazelcast clustering is enabled, certain caches act as distributed caches. The ForceLocalCache property within the <cache> section in the <WSO2_OB_KM_HOME>/repository/conf/carbon.xml file marks that all caches should act as local caches even in a clustered setup:
<ForceLocalCache>true</ForceLocalCache>
Cache invalidation uses Hazelcast messaging to distribute the invalidation message over the cluster and invalidate the caches properly. This minimizes the coherency problem in a multi-node setup.
Typical clustered deployment cache scenarios
Scenario 1: All caches are local, with distributed cache invalidation
(Local caching: Enabled; Distributed caching: Not applicable; Hazelcast clustering: Enabled; Distributed invalidation: Enabled)
This is the recommended approach. Hazelcast messaging invalidates the caches.
Scenario 2: All caches are local, without distributed cache invalidation
(Local caching: Enabled; Distributed caching: Not applicable; Hazelcast clustering: Disabled; Distributed invalidation: Disabled)
Invalidation clears only the caches in specific nodes; other caches are cleared at cache expiration. Hazelcast communication is not used. As the decisions take time to propagate over the nodes (the default cache timeout is 15 minutes), there is a security risk in this method. To reduce the risk, reduce the default cache timeout period. To learn how to reduce the default cache timeout period, see Configuring Cache Layers - timeout. An illustrative configuration example follows this list.
Scenario 3: No caching
(Local caching: Disabled; Distributed caching: Disabled; Hazelcast clustering: Disabled; Distributed invalidation: Disabled)
The data is acquired directly from the database. This eliminates the security risks caused by the lack of cache invalidation, but degrades performance due to the absence of caching.
Scenario 4: Certain caches are disabled while the rest are local
(Local caching: Enabled for the available local caches; Distributed caching: Not applicable; Hazelcast clustering: Enabled; Distributed invalidation: Enabled)
To reduce the security risk created in the second scenario and to improve performance in comparison with the third scenario, disable the security-related caches and keep the performance-related caches as local caches. This requires identifying these caches depending on the use case.
Scenario 5: Distributed caching enabled
(Local caching: Disabled, with the ForceLocalCache property set to false; Distributed caching: Enabled; Hazelcast clustering: Enabled; Distributed invalidation: Not applicable)
This scenario is recommended only if the network has tight tolerance, where the network infrastructure is capable of handling high bandwidth with very low latency; typically this applies only when all nodes are deployed in a single server rack connected with fiber-optic cables. In any other environment, this implementation causes cache losses and is therefore not recommended for general use.
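For scenario 2, the cache timeout can be reduced per cache. The following is a minimal sketch, assuming the CacheConfig section of the <WSO2_OB_KM_HOME>/repository/conf/identity/identity.xml file; the cache manager name, cache name, and values shown are illustrative only.
<CacheConfig>
    <CacheManager name="IdentityApplicationManagementCacheManager">
        <!-- Reduce the timeout (in seconds) so that stale entries expire sooner. -->
        <Cache name="AppAuthFrameworkSessionContextCache" enable="true"
               timeout="300" capacity="5000" isDistributed="false"/>
    </CacheManager>
</CacheConfig>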
3. Make sure the jdbc/WSO2UM_DB datasource is configured in the <WSO2_OB_KM_HOME>/repository/conf/user-mgt.xml file. This refers to the user store you configured in the Configuring the user store section above.
<UserManager>
<Realm>
<Configuration>
<Property name="dataSource">jdbc/WSO2UM_DB</Property>
</Configuration>
</Realm>
</UserManager>
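For reference, the jdbc/WSO2UM_DB JNDI name is bound to a datasource defined in the <WSO2_OB_KM_HOME>/repository/conf/datasources/master-datasources.xml file. The following is a minimal sketch, assuming a MySQL user store database; the URL, credentials, and pool settings are illustrative only.
<datasource>
    <name>WSO2UM_DB</name>
    <description>Datasource used by the shared user store</description>
    <jndiConfig>
        <name>jdbc/WSO2UM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Illustrative connection details; replace with your own. -->
            <url>jdbc:mysql://userdb.example.com:3306/userdb?autoReconnect=true</url>
            <username>dbuser</username>
            <password>dbpassword</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>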
4. Make sure the <WSO2_OB_KM_HOME>/repository/conf/identity/identity.xml file of both node1 and node2 is configured to use the jdbc/WSO2AM_DB datasource. This refers to the datasource you configured in the Configuring the datasources section above.
<JDBCPersistenceManager>
<DataSource>
<Name>jdbc/WSO2AM_DB</Name>
</DataSource>
</JDBCPersistenceManager>
5. Configure the following endpoints in the <WSO2_OB_KM_HOME>/repository/conf/identity/identity.xml file in both node1 and node2, so that they point to the load balancer:
<OAuth1RequestTokenUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth/request-token</OAuth1RequestTokenUrl>
<OAuth1AuthorizeUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth/authorize-url</OAuth1AuthorizeUrl>
<OAuth1AccessTokenUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth/access-token</OAuth1AccessTokenUrl>
<OAuth2AuthzEPUrl>${carbon.protocol}://localhost:8243/authorize</OAuth2AuthzEPUrl>
<OAuth2TokenEPUrl>${carbon.protocol}://localhost:8243/token</OAuth2TokenEPUrl>
<OAuth2RevokeEPUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth2/revoke</OAuth2RevokeEPUrl>
<OAuth2IntrospectEPUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oauth2/introspect</OAuth2IntrospectEPUrl>
<OAuth2UserInfoEPUrl>${carbon.protocol}://localhost:8243/userinfo</OAuth2UserInfoEPUrl>
<OIDCCheckSessionEPUrl>${carbon.protocol}://${carbon.host}:${carbon.management.port}/oidc/checksession</OIDCCheckSessionEPUrl>
<OIDCLogoutEPUrl>${carbon.protocol}://<LOAD_BALANCER_HOST>/oidc/logout</OIDCLogoutEPUrl>
<OAuth2ConsentPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_authz.do</OAuth2ConsentPage>
<OAuth2ErrorPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_error.do</OAuth2ErrorPage>
<OIDCConsentPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_consent.do</OIDCConsentPage>
<OIDCLogoutConsentPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_logout_consent.do</OIDCLogoutConsentPage>
<OIDCLogoutPage>${carbon.protocol}://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/oauth2_logout.do</OIDCLogoutPage>
<OIDCWebFingerEPUrl>${carbon.protocol}://localhost:${carbon.management.port}/.well-known/webfinger</OIDCWebFingerEPUrl>
6. Add the following authentication endpoint configurations to the <WSO2_OB_KM_HOME>/repository/conf/identity/application-authentication.xml file in both node1 and node2:
<AuthenticationEndpointURL>https://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/login.do</AuthenticationEndpointURL>
<AuthenticationEndpointRetryURL>https://<LOAD_BALANCER_HOST>/ob/authenticationendpoint/retry.do</AuthenticationEndpointRetryURL>
<AuthenticationEndpointMissingClaimsURL>/ob/authenticationendpoint/claims.do</AuthenticationEndpointMissingClaimsURL>
Configuring Open Banking API Manager
Make sure to do the following changes in your Open Banking API Manager server:
1. By default, the in-sequence files for the APIs in the <WSO2_OB_APIM_HOME>/repository/resources/finance/apis directory point to Open Banking Key Manager. In order to use the load balancer, update the in-sequence files by pointing them to the load balancer instead of Open Banking Key Manager, where applicable. An illustrative sketch of this change is given after the configuration below.
2. To point the Open Banking API Manager to the Key Manager cluster, open the <WSO2_OB_APIM_HOME>/repository/conf/api-manager.xml file and configure the following:
<APIKeyManager>
<Configuration>
<ServerURL>https://<LOAD_BALANCER_HOST>${carbon.context}services/</ServerURL>
</Configuration>
</APIKeyManager>
<APIKeyValidator>
<!-- Server URL of the API key manager -->
<!--Required in OB-->
<ServerURL>https://<LOAD_BALANCER_HOST>${carbon.context}services/</ServerURL>
</APIKeyValidator>
<AuthManager>
<!-- Server URL of the Authentication service -->
<!--openbanking_hostname Required in OB-->
<ServerURL>https://<LOAD_BALANCER_HOST>${carbon.context}services/</ServerURL>
</AuthManager>
<RevokeAPIURL>https://ssl.nginx.com:${https.nio.port}/revoke</RevokeAPIURL>
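The following is a minimal sketch of the kind of edit an in-sequence file needs. The sequence name and service path are hypothetical; the point is that the endpoint address changes from the Key Manager host to the load balancer.
<!-- Hypothetical in-sequence fragment (Synapse configuration). -->
<sequence xmlns="http://ws.apache.org/ns/synapse" name="sample-api-insequence">
    <send>
        <endpoint>
            <!-- Before: pointed directly at the Key Manager, for example
                 <address uri="https://localhost:9446/consent/validate"/> -->
            <address uri="https://<LOAD_BALANCER_HOST>/consent/validate"/>
        </endpoint>
    </send>
</sequence>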
Changing hostnames and ports
Configure the Key Manager node1 using the following steps.
1. Go to the <WSO2_OB_KM_HOME>/repository/conf/tomcat/catalina-server.xml file and configure the proxy ports as follows:
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9443" proxyPort="443"
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9763" proxyPort="80"
Tip: If you are using an Openshift Docker container for the deployment, add the following Tomcat RemoteIpValve to the <WSO2_OB_KM_HOME>/repository/conf/tomcat/catalina-server.xml file:
<Valve
className="org.apache.catalina.valves.RemoteIpValve"
internalProxies="reg_ex_for_internal_docker_IPs"
remoteIpHeader="x-forwarded-for"
proxiesHeader="x-forwarded-by"
protocolHeader="x-forwarded-proto"
/>
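The internalProxies value is a regular expression that matches the IP addresses of your internal proxies. As a sketch, assuming Docker's default bridge network range of 172.17.0.0/16 (your range may differ):
internalProxies="172\.17\.\d+\.\d+"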
2. In the <WSO2_OB_KM_HOME>/repository/conf/carbon.xml file, define the hostname for your server:
<HostName>wso2.obkm.com</HostName>
<MgtHostName>wso2.obkm.com</MgtHostName>
This hostname is used by the OB Key Manager cluster. It must be specified in the /etc/hosts file as:
127.0.0.1 wso2.obkm.com
3. Follow all the configuration steps that were done in node1 for node2 as well.
Enabling artifact synchronization
1. To enable synchronization for runtime artifacts, you must have a shared file system, for example, a Network File System (NFS) or any other shared file system that is available in your environment.
2. Create two directories, Deployment and Tenants, in the shared file system.
3. Create a symlink from the <WSO2_OB_KM_HOME>/repository/deployment path to the Deployment directory of the shared file system that you created in step 2 of this section.
4. Create a symlink from the <WSO2_OB_KM_HOME>/repository/tenants path to the Tenants directory of the shared file system that you created in step 2 of this section.
Instead of mounting the file system directly to the <WSO2_OB_KM_HOME>/repository/deployment and <WSO2_OB_KM_HOME>/repository/tenants paths, a symlink is created. This avoids the issue that may occur if you delete the product to redeploy it, in which case the file system would get mounted to a non-existing path. A sketch of the symlink commands is given below.
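The following is a minimal sketch of the symlink commands, assuming the shared file system is mounted at /mnt/shared on each node; back up the existing directories first.
# Move the local directories aside, then link them to the shared file system.
mv <WSO2_OB_KM_HOME>/repository/deployment <WSO2_OB_KM_HOME>/repository/deployment.bak
ln -s /mnt/shared/Deployment <WSO2_OB_KM_HOME>/repository/deployment
mv <WSO2_OB_KM_HOME>/repository/tenants <WSO2_OB_KM_HOME>/repository/tenants.bak
ln -s /mnt/shared/Tenants <WSO2_OB_KM_HOME>/repository/tenants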
Fronting with a load balancer
In this section, an Nginx server is used as an example. If you need to set up the WSO2 Open Banking Key Manager cluster with Nginx, follow the instructions given below. You must do this after setting up the cluster following the instructions above.
When clustering WSO2 Open Banking Key Manager with a load balancer, make sure to enable sticky sessions. This is required for the Management Console and the dashboard to work, and also when temporary session data persistence is disabled in the <WSO2_OB_KM_HOME>/repository/conf/identity/identity.xml file.
Sticky sessions for SSO
Sticky sessions are required to ensure a flawless Single Sign On (SSO) workflow when temporary session data persistence is disabled. It is recommended to use sticky sessions for SSO in order to have a higher throughput. For more information on sticky sessions, see Sticky Sessions with Manager Nodes.
[Deployment diagram with the load balancer]
Setting up Nginx involves the following steps; an illustrative configuration sketch is given after this list.
1. Configuring Nginx
2. Creating SSL certificates
3. Configuring the proxy port in the Open Banking Key Manager nodes
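The following is a minimal Nginx sketch, not a complete configuration. The node hostnames and certificate paths are illustrative; ip_hash is used here because cookie-based stickiness is not available in open source Nginx without additional modules.
# Upstream group for the two Key Manager nodes.
upstream obkm_cluster {
    ip_hash;                             # sticky sessions based on the client IP
    server obkm-node1.example.com:9443;
    server obkm-node2.example.com:9443;
}

server {
    listen 443 ssl;
    server_name wso2.obkm.com;

    ssl_certificate     /etc/nginx/ssl/wso2.obkm.com.crt;
    ssl_certificate_key /etc/nginx/ssl/wso2.obkm.com.key;

    location / {
        # Forward to the cluster over HTTPS; configure backend certificate
        # verification (proxy_ssl_* directives) to suit your environment.
        proxy_pass https://obkm_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}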
Starting up and verifying product nodes
1. If both nodes are running on the same server, set the port offset to avoid port conflicts. A sketch of the offset configuration is given after these steps.
2. Start Nginx.
3. Go to <WSO2_OB_KM_HOME>/bin and start the nodes using the following command on both nodes:
./wso2server.sh
4. Access the Management Console using the following URL:
https://wso2.obkm.com/carbon/
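As a sketch, the port offset is set through the Ports section of the <WSO2_OB_KM_HOME>/repository/conf/carbon.xml file. For example, offsetting node 2 by 1 moves its HTTPS servlet port from 9443 to 9444.
<!-- <WSO2_OB_KM_HOME>/repository/conf/carbon.xml on node 2 -->
<Ports>
    <!-- Every port the server opens is shifted by this offset. -->
    <Offset>1</Offset>
</Ports>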