Configuring a Minimum High Availability Cluster
DAS supports a deployment scenario that focuses on high availability (HA) together with HA processing. To enable HA processing, you need two DAS servers in a cluster.
The following diagram indicates the minimum deployment pattern used for high availability.
For this deployment, both DAS nodes should be configured to receive all events. To achieve this, clients can either send all requests to both nodes, or send each request to only one of the two nodes (i.e., using a load balancing or failover mechanism). If clients send all requests to both nodes, you must specify that events are duplicated in the cluster (i.e., the same event arrives at every member of the cluster). Alternatively, if a client sends a request to only one node, that node internally forwards the request to the other node as well. This way, even if clients send requests to only one node, both DAS nodes receive all the requests.
In this scenario, one DAS node works in active mode and the other works in passive mode. However, both nodes process all the data.
If the active node fails, the other node becomes active and receives all the requests.
When the failed node comes back up, it fetches the internal state of the current active node by syncing with it.
The newly arrived node then becomes the passive node and starts processing all incoming messages to keep its state in sync with the active node, so that it can take over if the current active node fails.
Warning: Some of the requests may be lost during the time the passive node switches to the active mode.
Prerequisites
Before you configure a minimum high availability DAS cluster, carry out the following steps.
- Download and install WSO2 DAS from here.
Make sure that you have allocated the required memory for DAS nodes, and installed the required supporting applications as mentioned in WSO2 DAS Documentation - Installation Prerequisites.
In WSO2 DAS clustered deployments, Spark runs in a separate JVM. It is recommended to allocate 4GB of memory for the Carbon JVM and 2GB for Spark.
- Follow the steps below to set up MySQL.
Download and install MySQL Server.
Download the MySQL JDBC driver.
Unzip the downloaded MySQL driver archive, and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <DAS_HOME>/repository/components/lib directory of all the nodes in the cluster.
- Enter the following command in a terminal/command window, where username is the username you want to use to access the databases.
mysql -u username -p
- When prompted, specify the password that will be used to access the databases with the username you specified.
Create two databases named userdb and regdb.
About using MySQL in different operating systems
For users of Microsoft Windows, it is important to specify the character set as latin1 when creating the databases in MySQL. Failure to do so may result in an error (error code: 1709) when starting your cluster. This error occurs in certain versions of MySQL (5.6.x) and is related to the UTF-8 encoding. MySQL originally used the single-byte latin1 character set by default, but recent versions default to UTF-8 to be friendlier to international users. Therefore, you must use latin1 as the character set, as indicated below in the database creation commands, to avoid this problem. Note that this may result in issues with non-latin characters (such as Hebrew, Japanese, etc.). The following is how your database creation command should look.
mysql> create database <DATABASE_NAME> character set latin1;
For users of other operating systems, the standard database creation commands will suffice. For these operating systems, the following is how your database creation command should look.
mysql> create database <DATABASE_NAME>;
Execute the following script for each of the two databases you created in the previous step. (If you are using MySQL 5.7.x, use the 'mysql5.7.sql' script instead.)
mysql> source <DAS_HOME>/dbscripts/mysql.sql;
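For example, assuming the databases are named userdb and regdb as described above, the script can be run against each database as follows (a sketch; adjust the script path to match your installation).
mysql> use userdb;
mysql> source <DAS_HOME>/dbscripts/mysql.sql;
mysql> use regdb;
mysql> source <DAS_HOME>/dbscripts/mysql.sql;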
Create the following databases in MySQL.
WSO2_ANALYTICS_EVENT_STORE_DB
WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB
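These can be created with the same database creation command shown earlier. For example (on Windows, append character set latin1 as described above):
mysql> create database WSO2_ANALYTICS_EVENT_STORE_DB;
mysql> create database WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB;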
It is recommended to create the databases with the same names given above because they are the default JNDI names included in the <DAS_HOME>/repository/conf/analytics/analytics-config.xml file, as shown in the extract below. If you change the names, the analytics-config.xml file should be updated accordingly.
<analytics-record-store name="EVENT_STORE">
   <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
   <properties>
      <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
      <property name="category">read_write_optimized</property>
   </properties>
</analytics-record-store>
<analytics-record-store name="EVENT_STORE_WO">
   <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
   <properties>
      <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
      <property name="category">write_optimized</property>
   </properties>
</analytics-record-store>
<analytics-record-store name="PROCESSED_DATA_STORE">
   <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
   <properties>
      <property name="datasource">WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</property>
      <property name="category">read_write_optimized</property>
   </properties>
</analytics-record-store>
Required configurations
When configuring the minimum high availability cluster, the following configurations must be done on both nodes.
- Do the following database-related configurations.
Follow the steps below to configure the <DAS_HOME>/repository/conf/datasources/master-datasources.xml file as required.
Enable all the nodes to access the users database by configuring a datasource to be used by user manager as shown below.
<datasource>
    <name>WSO2UM_DB</name>
    <description>The datasource used by user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2UM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://[MySQL DB url]:[port]/userdb</url>
            <username>[user]</username>
            <password>[password]</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
Enable the nodes to access the registry database by configuring the WSO2REG_DB datasource as follows.
<datasource>
    <name>WSO2REG_DB</name>
    <description>The datasource used by the registry</description>
    <jndiConfig>
        <name>jdbc/WSO2REG_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://[MySQL DB url]:[port]/regdb</url>
            <username>[user]</username>
            <password>[password]</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
For detailed information about registry sharing strategies, see the library article Sharing Registry Space across Multiple Product Instances.
Point to WSO2_ANALYTICS_EVENT_STORE_DB and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB in the <DAS_HOME>/repository/conf/datasources/analytics-datasources.xml file as shown below.
<datasources-configuration>
    <providers>
        <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
    </providers>
    <datasources>
        <datasource>
            <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
            <description>The datasource used for analytics record store</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://[MySQL DB url]:[port]/WSO2_ANALYTICS_EVENT_STORE_DB</url>
                    <username>[username]</username>
                    <password>[password]</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
            <description>The datasource used for analytics record store</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://[MySQL DB url]:[port]/WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</url>
                    <username>[username]</username>
                    <password>[password]</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
    </datasources>
</datasources-configuration>
For more information, see Datasources in DAS documentation.
To share the user store among the nodes, open the <DAS_HOME>/repository/conf/user-mgt.xml file and modify the dataSource property of the <configuration> element as follows.
<configuration>
    ...
    <Property name="dataSource">jdbc/WSO2UM_DB</Property>
</configuration>
The datasource name specified in this configuration should be the same as the user manager datasource (WSO2UM_DB) that you configured in the master-datasources.xml file above.
In the <DAS_HOME>/repository/conf/registry.xml file, add or modify the dataSource attribute of the <dbConfig name="govregistry"> element as follows.
<dbConfig name="govregistry">
    <dataSource>jdbc/WSO2REG_DB</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
    <id>gov</id>
    <cacheId>user@jdbc:mysql://localhost:3306/regdb</cacheId>
    <dbConfig>govregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/governance" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
<mount path="/_system/config" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>
Do not replace the following configuration when adding the mounting configurations. The registry mounting configurations mentioned above should be added in addition to the following.
<dbConfig name="wso2registry">
    <dataSource>jdbc/WSO2CarbonDB</dataSource>
</dbConfig>
- Update the <DAS_HOME>/repository/conf/axis2/axis2.xml file as follows to enable Hazelcast clustering for both nodes.
Set the enable attribute of the clustering element (class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent") to true as shown below to enable Hazelcast clustering.
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
Enable wka mode on both nodes as shown below. For more information on wka mode, see About Membership Schemes.
<parameter name="membershipScheme">wka</parameter>
Add both the DAS nodes as well-known members of the cluster under the members tag in each node, as shown in the example below.
<members>
    <member>
        <hostName>[node1 IP]</hostName>
        <port>[node1 port]</port>
    </member>
    <member>
        <hostName>[node2 IP]</hostName>
        <port>[node2 port]</port>
    </member>
</members>
For each node, enter the respective server IP address as the value of the localMemberHost property as shown below.
<parameter name="localMemberHost">[Server_IP_Address]</parameter>
Configure the <DAS_HOME>/repository/conf/event-processor.xml file as follows to cluster DAS at the receiver level.
Enable the HA mode by setting the following property.
<mode name="HA" enable="true">
Disable the Distributed mode by setting the following property.
<mode name="Distributed" enable="false">
For each node, enter the respective server IP address under the HA mode config section as shown in the example below. Here, you need to specify the server IP address for the event synchronizing nodes, the manager nodes, and the presenter nodes.
When you enable the HA mode for WSO2 DAS, state persistence is enabled by default. If there is no real-time use case that requires any state information after starting the cluster, you should disable event persistence by setting the persistence attribute to false in the <DAS_HOME>/repository/conf/event-processor.xml file as shown below.
<persistence enable="false">
    <persistenceIntervalInMinutes>15</persistenceIntervalInMinutes>
    <persisterSchedulerPoolSize>10</persisterSchedulerPoolSize>
    <persister class="org.wso2.carbon.event.processor.core.internal.persistence.FileSystemPersistenceStore">
        <property key="persistenceLocation">cep_persistence</property>
    </persister>
</persistence>
When state persistence is enabled for WSO2 DAS, the internal state of DAS is persisted in files. These files are not automatically deleted. Therefore, if you want to save space in your DAS pack, you need to delete them manually.
These files are created in the <DAS_HOME>/cep_persistence/<tenant-id> directory. This directory has a separate sub-directory for each execution plan, and each execution plan can have multiple files. The format of each file name is <TIMESTAMP>_<EXECUTION_PLAN_NAME> (e.g., 1493101044948_MyExecutionPlan). If you want to clear the files for a specific execution plan, keep the two files with the latest timestamps and delete the rest.
When you enable the HA mode, events are synchronized between the active node and the passive node by default. This can cause a high system overhead and affect performance when the throughput per second is very high. Therefore, it is recommended to configure both nodes to receive the same events instead of allowing event synchronization to take place.
To configure both nodes to receive the same events, set the event.duplicated.in.cluster=true property for each event receiver configured for the cluster. When this property is set for a receiver, each event received by it is sent to both nodes in the cluster, and therefore, WSO2 DAS does not carry out event synchronization for that receiver.
<!-- HA Mode Config -->
<mode name="HA" enable="true">
    ...
    <eventSync>
        <hostName>[<Publicly_Exposed_IP_Address>/<Hostname_of_the_server>]</hostName>
        ...
    </eventSync>
    <management>
        <hostName>[<Publicly_Exposed_IP_Address>/<Hostname_of_the_server>]</hostName>
        ...
    </management>
    <presentation>
        <hostName>[<Publicly_Exposed_IP_Address>/<Hostname_of_the_server>]</hostName>
        ...
    </presentation>
</mode>
Make sure you avoid specifying localhost, 127.0.0.1, or 0.0.0.0 as the hostname in the above configuration. This is because the hostname of a node is used by the other node in the cluster to communicate with it.
The following node types are configured for the HA deployment mode in the <DAS_HOME>/repository/conf/event-processor.xml file.
- eventSync: Both the active and the passive nodes in this setup are event synchronizing nodes as explained in the introduction. Therefore, each node should have the host and the port on which it is operating specified under the <eventSync> element. Note that the eventSync port is not automatically updated to the port on which each node operates via port offset.
- management: In this setup, both nodes carry out the same tasks, and therefore, both nodes are considered manager nodes. Each node should have the host and the port on which it is operating specified under the <management> element. Note that the management port is not automatically updated to the port on which each node operates via port offset.
- presentation: You can optionally specify only one of the two nodes in this setup as the presenter node. The dashboards in which processed information is displayed are configured only in the presenter node. Each node should have the host and the port on which the assigned presenter node is operating specified under the <presentation> element. The host and the port, as well as the other configurations under the <presentation> element, are effective only when the presenter enable="true" property is set under the <!-- HA Mode Config --> section.
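To illustrate, a filled-in HA mode configuration on one node might look like the sketch below. The IP address 192.168.1.10 is only an example, and the eventSync and management ports shown are the defaults that also appear in the startup logs later on this page; verify all element names and values against the event-processor.xml file shipped with your DAS version, and leave any other elements at their existing values.
<!-- HA Mode Config -->
<mode name="HA" enable="true">
    ...
    <eventSync>
        <hostName>192.168.1.10</hostName>
        <port>11224</port>
        ...
    </eventSync>
    <management>
        <hostName>192.168.1.10</hostName>
        <port>10005</port>
        ...
    </management>
    <presentation>
        <hostName>192.168.1.10</hostName>
        ...
    </presentation>
</mode>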
Update the <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf file as follows to use the Spark cluster embedded within DAS.
Keep the carbon.spark.master configuration as carbon. This instructs Spark to create a Spark cluster using the Hazelcast cluster.
In earlier versions of DAS, the default carbon.spark.master configuration was local. From DAS 3.1.0, this has been deprecated and the carbon configuration should be used instead. If the local configuration is used, the following warning might be seen in the logs:
Using 'local' with Carbon clustering is deprecated. Please use 'carbon.spark.master carbon' instead!
This log warning is harmless, and only serves as a reminder that the carbon mode should be used instead.
Enter 2 as the value for the carbon.spark.master.count configuration. This specifies that there should be two masters in the Spark cluster: one master serves as the active master and the other serves as a stand-by master.
The following example shows the <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf file with the changes mentioned above.
carbon.spark.master carbon
carbon.spark.master.count 2
For more information, see Spark Configurations in DAS documentation.
Important: If the path to <DAS_HOME> is different in the two nodes, please do the following.
In order to share the C-Apps deployed among the nodes, configure the SVN-based deployment synchronizer. For detailed instructions, see Configuring SVN-Based Deployment Synchronizer.
The DAS minimum high availability deployment setup does not use a manager and a worker. For the purpose of configuring the deployment synchronizer, you can add the configurations relevant to the manager for the node of your choice, and add the configurations relevant to the worker for the other node.
If you do not configure the deployment synchronizer, you need to deploy any C-App you use in the DAS minimum high availability deployment setup to both nodes.
If the physical DAS server has multiple network interfaces with different IPs, and you want Spark to use a specific interface IP, open either the <DAS_HOME>/bin/load-spark-env-vars.sh file (for Linux) or the <DAS_HOME>/bin/load-spark-env-vars.bat file (for Windows), and add the following parameter to configure the Spark IP address.
export SPARK_LOCAL_IP=<IP_Address>
Create a file named my-node-id.dat in the <DAS_HOME>/repository/conf/analytics directory and add an ID of your preference. This ID should be unique across all the DAS nodes in the cluster; the same ID must not be used by more than one node. If this ID is not provided, a node ID is generated by the server. This node ID is also stored in the database (the primary record store, which is WSO2_ANALYTICS_EVENT_STORE_DB) for future reference.
<Any id which is unique across all other my-node-id.dat files in the cluster>
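For example, on the first node you could create the file from the command line as follows. The ID das_node_1 is only an illustrative value; use any value that is unique within the cluster.
echo -n "das_node_1" > <DAS_HOME>/repository/conf/analytics/my-node-id.dat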
If, for any reason, you want to replace the DAS servers with new packs and point the new packs to the same databases, you need to back up the my-node-id.dat files from both previous installations and restore them in the new DAS packs. If this is not done, two new node IDs are created while the older two node IDs remain in the database, so altogether there will be four node IDs in the database. This might lead to various indexing inconsistencies. If you are going to clean the whole database, you can start without restoring the older node IDs.
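As a sketch, backing up the file from an old pack and restoring it into a new pack could look like the following. The backup path and the <OLD_DAS_HOME>/<NEW_DAS_HOME> placeholders are illustrative.
cp <OLD_DAS_HOME>/repository/conf/analytics/my-node-id.dat /backup/node1-my-node-id.dat
cp /backup/node1-my-node-id.dat <NEW_DAS_HOME>/repository/conf/analytics/my-node-id.dat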
Configuring Hazelcast properties
You can configure the Hazelcast properties for the product nodes by following the steps given below.
Create the hazelcast.properties file with the following property configurations, and copy the file to the <DAS_HOME>/repository/conf/ directory.
#Disabling the hazelcast shutdown hook
hazelcast.shutdownhook.enabled=false
#Setting the hazelcast logging type to log4j
hazelcast.logging.type=log4j
The above configurations are explained below.
- Hazelcast shutdown hook: This configuration disables the shutdown hook in Hazelcast, which ensures that the Hazelcast instance shuts down gracefully whenever the product node shuts down. If the Hazelcast shutdown hook is enabled (which is the default behavior of a product), you will see errors such as "Hazelcast instance is not active!" at the time of shutting down the product node. This is because the Hazelcast instance shuts down too early when the shutdown hook is enabled.
- Hazelcast logging type: This configuration sets the Hazelcast logging type to log4j, which allows Hazelcast logs to be written to the wso2carbon.log file.
If you have enabled log4j for Hazelcast logging as shown above, be sure to enter the configuration shown below in the log4j.properties file (stored in the <DAS_HOME>/repository/conf/ directory). This can be used to configure the log level for Hazelcast logging. For a clustered production environment, it is recommended to use INFO as the log level as shown below.
log4j.logger.com.hazelcast=INFO
Starting the cluster
Once you complete the configurations mentioned above, start the two DAS nodes. If the cluster is successfully configured, the following CLI logs are generated.
The following is displayed in the CLIs of both nodes, indicating that the registry mounting was successful.
[2016-01-28 14:20:53,596] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Configured Registry in 107ms
[2016-01-28 14:20:53,631] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Connected to mount at govregistry in 7ms
[2016-01-28 14:20:53,818] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Connected to mount at govregistry in 0ms
A CLI log similar to the following is displayed for the first node you start to indicate that it has successfully started.
[2016-01-28 14:32:40,283] INFO {org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent} - Using wka based membership management scheme
[2016-01-28 14:32:40,284] INFO {org.wso2.carbon.core.clustering.hazelcast.util.MemberUtils} - Added member: Host:10.100.0.46, Remote Host:null, Port: 4000, HTTP:-1, HTTPS:-1, Domain: null, Sub-domain:null, Active:true
[2016-01-28 14:32:40,284] INFO {org.wso2.carbon.core.clustering.hazelcast.util.MemberUtils} - Added member: Host:10.100.0.46, Remote Host:null, Port: 4001, HTTP:-1, HTTPS:-1, Domain: null, Sub-domain:null, Active:true
[2016-01-28 14:32:41,665] INFO {org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent} - Hazelcast initialized in 1379ms
[2016-01-28 14:32:41,728] INFO {org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent} - Local member: [9c7619a9-8460-465d-8fd0-7eab1c464386] - Host:10.100.0.46, Remote Host:null, Port: 4000, HTTP:9763, HTTPS:9443, Domain: wso2.carbon.domain, Sub-domain:worker, Active:true
[2016-01-28 14:32:41,759] INFO {org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent} - Elected this member [9c7619a9-8460-465d-8fd0-7eab1c464386] as the Coordinator node
[2016-01-28 14:32:41,847] INFO {org.wso2.carbon.event.processor.manager.core.internal.HAManager} - CEP HA Snapshot Server started on 0.0.0.0:10005
[2016-01-28 14:32:41,850] INFO {org.wso2.carbon.event.processor.manager.core.internal.HAManager} - Became CEP HA Active Member
[2016-01-28 14:32:41,885] INFO {org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent} - Cluster initialization completed
Once you start the second node, a CLI log similar to the following will be displayed for the first node to indicate that another node has joined the cluster.
[2016-01-28 14:34:13,252] INFO {org.wso2.carbon.core.clustering.hazelcast.wka.WKABasedMembershipScheme} - Member joined [504bceff-4a08-46fe-83e6-b9561d3fff81]: /10.100.0.46:4001
[2016-01-28 14:34:15,963] INFO {org.wso2.carbon.event.processor.manager.commons.transport.client.TCPEventPublisher} - Connecting to 10.100.0.46:11224
[2016-01-28 14:34:15,972] INFO {org.wso2.carbon.event.processor.manager.core.internal.EventHandler} - CEP sync publisher initiated to Member '10.100.0.46:11224'
A CLI log similar to the following is displayed for the second node once it joins the cluster.
[2016-01-28 14:34:27,086] INFO {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} - Spark Master map size after starting masters : 2
The following are some exceptions that you may see in the startup log when you start the cluster.
When you start the passive node of the HA cluster, errors may be displayed because the artifacts are yet to be deployed in the passive node even though it has received the sync message from the active node. These errors are no longer displayed once the startup of the passive node is complete.
When the Apache Spark cluster is not properly instantiated because it cannot bind to the IP address configured in axis2.xml, errors are displayed. In this case, please go through the Required configurations section again, with emphasis on step 6.
When the Java Preferences subsystem is not able to write to the /etc/.java/.systemPrefs directory (because the /etc directory is not writable by all users), the following warning is displayed.
WARN {java.util.prefs.FileSystemPreferences} - Could not lock System prefs. Unix error code 7. {java.util.prefs.FileSystemPreferences}
To solve this, follow the approach below.
1. Create a directory in a place accessible to the user running the JVM, with the following substructure: <CREATED_DIR>/.java/.systemPrefs
2. Start the DAS server with the Java option -Djava.util.prefs.systemRoot=<CREATED_DIR>/.java, or add the above Java option to the wso2server.sh startup script.
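For example, assuming <CREATED_DIR> is a directory writable by the user running DAS, the workaround could look like this sketch (substitute your own paths).
mkdir -p <CREATED_DIR>/.java/.systemPrefs
sh <DAS_HOME>/bin/wso2server.sh -Djava.util.prefs.systemRoot=<CREATED_DIR>/.java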
Testing the HA deployment
The HA deployment you configured can be tested as follows.
- Access the Spark UIs of the active master and the stand-by master using <node ip>:8081 for each node.
- Information relating to the active master is displayed as shown in the example below.
- Information relating to the stand-by master is displayed as shown in the example below.
- Click the links under Running Applications in the Spark UI of the active master to check the Spark application UIs of those applications. A working application is displayed as shown in the following example.
- Click the Environment tab of a Spark application UI to check whether all the configuration parameters are correctly set. You can also check whether the class path variables in this tab can be accessed manually.
- Check the Spark UIs of the workers to verify that they have running executors. If a worker UI does not have running executors, or if it is continuously creating executors, it indicates an issue in the Spark cluster configuration. The following example shows a worker UI with a running executor.
- Check the symbolic parameter, and check whether you can manually access it via a cd <directory> command in the CLI.
- Log into the DAS Management Console and navigate to Main => Manage => Batch Analytics => Console to open the Interactive Analytics Console. Run a query in this console, as in the example below.