...

  1. Download and install WSO2 DAS from here.
  2. Make sure that you have allocated the required memory for DAS nodes, and installed the required supporting applications as mentioned in WSO2 DAS Documentation - Installation Prerequisites.

    Info

    In WSO2 DAS clustered deployments, Spark runs in a separate JVM. It is recommended to allocate 4GB of memory for the Carbon JVM and 2GB for Spark.
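
    For example, the Carbon JVM heap is controlled by the -Xms/-Xmx flags in <DAS_HOME>/bin/wso2server.sh. The following is a minimal sketch, assuming the stock script still carries the default -Xms256m -Xmx1024m flags; verify the current values before editing, as defaults vary between DAS releases.

    Code Block
    languagebash
    # Hedged sketch: raise the Carbon JVM heap to 4GB by rewriting the default
    # -Xms/-Xmx flags in wso2server.sh (GNU sed on Linux assumed). Check the
    # script first; the flag values differ between DAS releases.
    sed -i 's/-Xms256m -Xmx1024m/-Xms4g -Xmx4g/' "$DAS_HOME/bin/wso2server.sh"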

  3. Follow the steps below to set up MySQL.
    1. Download and install MySQL Server.

    2. Download the MySQL JDBC driver.

    3. Unzip the downloaded MySQL driver archive, and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <DAS_HOME>/repository/components/lib directory of all the nodes in the cluster.
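
      For example, pushing the driver JAR to every node could be scripted as shown below. This is a sketch only; the hostnames, SSH user, and <DAS_HOME> path are placeholders for your environment.

      Code Block
      languagebash
      # Hedged example: copy the MySQL JDBC driver into each DAS node's
      # repository/components/lib directory over SSH. Replace the host list,
      # user, and DAS_HOME path with the values for your cluster.
      DAS_HOME=/opt/wso2das
      for host in das-node1 das-node2 das-node3; do
        scp mysql-connector-java-x.x.xx-bin.jar \
            "user@${host}:${DAS_HOME}/repository/components/lib/"
      done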

    4. Enter the following command in a terminal/command window, where username is the username you want to use to access the databases.
      mysql -u username -p 
    5. When prompted, specify the password that will be used to access the databases with the username you specified.
    6. Create two databases named userdb and regdb.

      Info
      About using MySQL in different operating systems

      For users of Microsoft Windows, when creating the database in MySQL, it is important to specify the character set as latin1. Failure to do this may result in an error (error code: 1709) when starting your cluster. This error occurs in certain versions of MySQL (5.6.x) and is related to the UTF-8 encoding. MySQL originally used the latin1 character set by default, which stores each character as a single byte. However, in recent versions, MySQL defaults to UTF-8 to be friendlier to international users. Hence, you must use latin1 as the character set as indicated below in the database creation commands to avoid this problem. Note that this may result in issues with non-latin characters (such as Hebrew or Japanese). The following is how your database creation command should look.

      mysql> create database <DATABASE_NAME> character set latin1;

      For users of other operating systems, the standard database creation commands will suffice. For these operating systems, the following is how your database creation command should look.

      mysql> create database <DATABASE_NAME>;
    7. Execute the following script for each of the two databases you created in the previous step.
      mysql> source <DAS_HOME>/dbscripts/mysql.sql; 

      The complete set of commands for performing steps 6 and 7 above is as follows.
      Code Block
      mysql> create database userdb;
      mysql> use userdb;
      mysql> source <DAS_HOME>/dbscripts/mysql.sql;
      mysql> grant all on userdb.* TO username@localhost identified by "password";
       
       
      mysql> create database regdb;
      mysql> use regdb;
      mysql> source <DAS_HOME>/dbscripts/mysql.sql;
      mysql> grant all on regdb.* TO username@localhost identified by "password";
    8. Create the following databases in MySQL.

      • WSO2_ANALYTICS_EVENT_STORE_DB
      • WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB
      Info

      It is recommended to create the databases with the same names given above because they are the default JNDI names that are included in the <DAS_HOME>/repository/conf/analytics/analytics-conf.xml file as shown in the extract below. If you change the names, the analytics-conf.xml file should be updated accordingly.

      Code Block
      languagexml
      <analytics-record-store name="EVENT_STORE">
         <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
         <properties>
            <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
            <property name="category">read_write_optimized</property>
         </properties>
      </analytics-record-store>
      <analytics-record-store name="EVENT_STORE_WO">
         <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
         <properties>
            <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
            <property name="category">write_optimized</property>
         </properties>
      </analytics-record-store>
      <analytics-record-store name="PROCESSED_DATA_STORE">
         <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
         <properties>
            <property name="datasource">WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</property>
            <property name="category">read_write_optimized</property>
         </properties>
      </analytics-record-store>

...

  1. Follow the steps below to point the user stores of all the nodes to a single user store database, and to mount all governance registries to a single governance registry and all configuration registries to a single configuration registry.
    1. Follow the steps below to configure the <DAS_HOME>/repository/conf/datasources/master-datasources.xml file as required.
      1. Enable all the nodes to access the users database by configuring a datasource to be used by the user manager as shown below.

        Code Block
        languagexml
        <datasource>
            <name>WSO2UM_DB</name>
            <description>The datasource used by user manager</description>
            <jndiConfig>
                <name>jdbc/WSO2UM_DB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://[MySQL DB url]:[port]/userdb</url>
                    <username>[user]</username>
                    <password>[password]</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>
      2. Enable the nodes to access the registry database by configuring the WSO2REG_DB data source as follows.

        Code Block
        languagexml
        <datasource>
            <name>WSO2REG_DB</name>
            <description>The datasource used by the registry</description>
            <jndiConfig>
                <name>jdbc/WSO2REG_DB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://[MySQL DB url]:[port]/regdb</url>
                    <username>[user]</username>
                    <password>[password]</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>

        For detailed information about registry sharing strategies, see the library article Sharing Registry Space across Multiple Product Instances.


    2. Point to WSO2_ANALYTICS_EVENT_STORE_DB and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB in the <DAS_HOME>/repository/conf/datasources/analytics-datasources.xml file as shown below.

      Code Block
      languagexml
      <datasources-configuration>
          <providers>
              <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
          </providers>
       
          <datasources>
              <datasource>
                  <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
                  <description>The datasource used for analytics record store</description>
                  <definition type="RDBMS">
                      <configuration>
                          <url>jdbc:mysql://[MySQL DB url]:[port]/WSO2_ANALYTICS_EVENT_STORE_DB</url>
                          <username>[username]</username>
                          <password>[password]</password>
                          <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                          <maxActive>50</maxActive>
                          <maxWait>60000</maxWait>
                          <testOnBorrow>true</testOnBorrow>
                          <validationQuery>SELECT 1</validationQuery>
                          <validationInterval>30000</validationInterval>
                          <defaultAutoCommit>false</defaultAutoCommit>
                      </configuration>
                  </definition>
              </datasource>
              <datasource>
                  <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
                  <description>The datasource used for the analytics processed data store</description>
                  <definition type="RDBMS">
                      <configuration>
                          <url>jdbc:mysql://[MySQL DB url]:[port]/WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</url>
                          <username>[username]</username>
                          <password>[password]</password>
                          <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                          <maxActive>50</maxActive>
                          <maxWait>60000</maxWait>
                          <testOnBorrow>true</testOnBorrow>
                          <validationQuery>SELECT 1</validationQuery>
                          <validationInterval>30000</validationInterval>
                          <defaultAutoCommit>false</defaultAutoCommit>
                      </configuration>
                  </definition>
              </datasource>
          </datasources>
      </datasources-configuration>

      For more information, see Datasources in DAS documentation.

    3. To share the user store among the nodes, open the <DAS_HOME>/repository/conf/user-mgt.xml file and modify the dataSource property of the <configuration> element as follows.

      Code Block
      languagexml
      <configuration> ...
          <Property name="dataSource">jdbc/WSO2UM_DB</Property>
      </configuration>
      Info

      The datasource name specified in this configuration should be the same as the datasource used by the user manager that you configured earlier (WSO2UM_DB).

    4. In the <DAS_HOME>/repository/conf/registry.xml file, add or modify the <dataSource> sub-element of the <dbConfig name="govregistry"> element as follows.

      Code Block
      languagexml
      <dbConfig name="govregistry">
      	<dataSource>jdbc/WSO2REG_DB</dataSource>
      </dbConfig>
      <remoteInstance url="https://localhost:9443/registry"> 
      	<id>gov</id>
      	<cacheId>user@jdbc:mysql://localhost:3306/regdb</cacheId>
      	<dbConfig>govregistry</dbConfig>
      	<readOnly>false</readOnly>
      	<enableCache>true</enableCache>
      	<registryRoot>/</registryRoot>
      </remoteInstance>
      <mount path="/_system/governance" overwrite="true">
      	<instanceId>gov</instanceId>
      	<targetPath>/_system/governance</targetPath>
      </mount>
      <mount path="/_system/config" overwrite="true">
      	<instanceId>gov</instanceId>
      	<targetPath>/_system/config</targetPath>
      </mount>
      Note

      Do not replace the following configuration when adding the mounting configurations. The registry mounting configurations shown in the steps above should be added in addition to the following.

      Code Block
      languagexml
      <dbConfig name="wso2registry">
          <dataSource>jdbc/WSO2CarbonDB</dataSource>
      </dbConfig>
  2. Update the <DAS_HOME>/repository/conf/axis2/axis2.xml file of all the nodes as follows to enable Hazelcast clustering.


    1. To enable Hazelcast clustering, set the enable attribute of the <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"> element to true as shown below.

      Code Block
      languagexml
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. To ensure that all the nodes in the cluster identify each other, enable the wka (well-known address) membership scheme for all of them as shown below.

      Code Block
      languagexml
      <parameter name="membershipScheme">wka</parameter>
    3. Add all the nodes in the cluster as well-known members under the <members> element as shown below.

      Info

      In a fully distributed DAS setup, Apache Spark identifies the worker nodes that analyze data via Hazelcast clustering. Only the nodes that are assigned as analyzer nodes should be identified as Spark workers; therefore, they should be placed in a separate cluster.


      1. For each of the two analyzer nodes, list the members as shown below. This groups them in a separate cluster.

        Code Block
        languagexml
        <members>
        	<member>
        		<hostName>[Analyzer1 IP]</hostName>
        		<port>[Analyzer1 port]</port>
        	</member>
        	<member>
        		<hostName>[Analyzer2 IP]</hostName>
        		<port>[Analyzer2 port]</port>
        	</member>
        </members>
      2. For the other nodes that are not analyzer nodes, list the members as shown below.

        Code Block
        languagexml
        <members>
        	<member>
        		<hostName>[indexer1 IP]</hostName>
        		<port>[indexer1 port]</port>
        	</member>
        	<member>
        		<hostName>[indexer2 IP]</hostName>
        		<port>[indexer2 port]</port>
        	</member>
        	<member>
        		<hostName>[Receiver1 IP]</hostName>
        		<port>[Receiver1 port]</port>
        	</member>
        	<member>
        		<hostName>[Receiver2 IP]</hostName>
        		<port>[Receiver2 port]</port>
        	</member>
        </members>
    4. For each node, enter the respective server IP address as the value for the localMemberHost property as shown below.

      Code Block
      languagexml
      <parameter name="localMemberHost">[Server_IP_Address]</parameter>
  3. Update the <DAS_HOME>/repository/conf/event-processor.xml file of the nodes as follows.
    1. Make sure that the HA mode is enabled as follows.

      Code Block
      languagexml
      <mode name="HA" enable="true">
    2. Enable the Distributed mode as shown below.

      Code Block
      languagexml
      <mode name="Distributed" enable="true">
    3. Set the following property for the two nodes that should function as presenter nodes.

      Info

      This property should be set only for the presenter nodes.

      Code Block
      languagexml
      <presenter enable="true"/>
    4. For each receiver node, enter the respective server IP address under the HA Mode Config section as shown in the example below.

      Info

      When you enable the HA mode for WSO2 DAS, state persistence is enabled by default. If there is no real-time use case that requires any state information after starting the cluster, you should disable event persistence by setting the enable attribute of the <persistence> element to false in the <DAS_HOME>/repository/conf/event-processor.xml file as shown below.

      Code Block
      languagexml
      <persistence enable="false">
          <persistenceIntervalInMinutes>15</persistenceIntervalInMinutes>
          <persisterSchedulerPoolSize>10</persisterSchedulerPoolSize>
          <persister class="org.wso2.carbon.event.processor.core.internal.persistence.FileSystemPersistenceStore">
              <property key="persistenceLocation">cep_persistence</property>
          </persister>
      </persistence>
      Tip

      When state persistence is enabled for WSO2 DAS, the internal state of DAS is persisted in files. These files are not automatically deleted. Therefore, if you want to save space in your DAS pack, you need to delete them manually.

      These files are created in the <DAS_HOME>/cep_persistence/<tenant-id> directory. This directory has a separate sub-directory for each execution plan. Each execution plan can have multiple files. The format of each file name is <TIMESTAMP>_<EXECUTION_PLAN_NAME> (e.g., 1493101044948_MyExecutionPlan). If you want to clear files for a specific execution plan, keep the two files with the latest timestamps and delete the rest, as in the sketch below.
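
      For example, a pruning pass for a single execution plan's directory could look like the following sketch. The tenant ID and execution plan name are placeholder values; adjust them for your deployment.

      Code Block
      languagebash
      # Hedged sketch: keep only the two newest persistence files for one
      # execution plan and delete the rest. -1234 (the super tenant) and
      # MyExecutionPlan are example values; filenames sort newest-first with ls -t.
      cd "$DAS_HOME/cep_persistence/-1234/MyExecutionPlan" || exit 1
      ls -1t | tail -n +3 | xargs rm -f --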

      Code Block
      languagexml
      <!-- HA Mode Config -->
      <mode name="HA" enable="true">
          ...
          <eventSync>
              <hostName>[Server_IP_Address]</hostName>
              ...
          </eventSync>
      </mode>
  4. To define the two analyzer nodes as Spark masters, configure the <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf file as follows.

    Info

    Two Spark masters are created because this cluster is a high-availability cluster. When the active Spark master fails, the other node configured as a Spark master becomes active and continues to carry out the tasks of the Spark master.


    1. Specify the number of Spark masters as 2 by setting the following property.

      Code Block
      languagejs
      carbon.spark.master.count  2
    2. Specify a DAS symbolic link for both nodes as shown in the example below. 

      Info
      • The directory path for the Spark classpath is different for each node depending on the location of the <DAS_HOME>. The symbolic link redirects the Spark Driver Application to the relevant directory for each node when it creates the Spark classpath.
      • In a multi-node DAS cluster that runs in a RedHat Linux environment, you also need to update the <DAS_HOME>/bin/wso2server.sh file with the following entry so that the <DAS_HOME> is exported, because the symbolic link may not be resolved correctly in this operating system.

        export CARBON_HOME=<symbolic link>

      For more information about Spark related configurations, see Spark Configurations.
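
      The example below is a hedged sketch of this step, assuming the spark-defaults.conf property name carbon.das.symbolic.link (verify it against your DAS version) and placeholder paths.

      Code Block
      languagebash
      # Hedged example: create the symbolic link on each node, pointing at that
      # node's actual <DAS_HOME>. All paths here are placeholders.
      ln -s /opt/das/wso2das-3.1.0 /opt/das/das_symlink

      # Then reference the link in
      # <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf
      # (the property name is an assumption; confirm it for your release):
      #   carbon.das.symbolic.link /opt/das/das_symlink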


  5. In order to share the C-Apps deployed among the nodes, configure the SVN-based deployment synchronizer. For detailed instructions, see Configuring SVN-Based Deployment Synchronizer.

    Info

    If you do not configure the deployment synchronizer, you need to manually deploy every C-App used in the fully distributed HA setup to all the nodes.

...