...

  1. Do the following database-related configurations.
    1. Follow the steps below to configure the <DAS_HOME>/repository/conf/datasources/master-datasources.xml file as required.

      1. Enable all the nodes to access the users database by configuring a datasource to be used by the user manager as shown below. 

        Code Block
        languagexml
        <datasource>
        	<name>WSO2UM_DB</name>
        	<description>The datasource used by user manager</description>
        	<jndiConfig>
          	<name>jdbc/WSO2UM_DB</name>
        	</jndiConfig>
        	<definition type="RDBMS">
          	<configuration>
            	<url>jdbc:mysql://[MySQL DB url]:[port]/userdb</url>
            	<username>[user]</username>
            	<password>[password]</password>
            	<driverClassName>com.mysql.jdbc.Driver</driverClassName>
            	<maxActive>50</maxActive>
            	<maxWait>60000</maxWait>
            	<testOnBorrow>true</testOnBorrow>
            	<validationQuery>SELECT 1</validationQuery>
            	<validationInterval>30000</validationInterval>
          	</configuration>
        	</definition>
        </datasource>
      2. Enable the nodes to access the registry database by configuring the WSO2REG_DB data source as follows.

        Code Block
        languagexml
        <datasource>
        	<name>WSO2REG_DB</name>
        	<description>The datasource used by the registry</description>
        	<jndiConfig>
          	<name>jdbc/WSO2REG_DB</name>
        	</jndiConfig>
        	<definition type="RDBMS">
          	<configuration>
            	<url>jdbc:mysql://[MySQL DB url]:[port]/regdb</url>
            	<username>[user]</username>
            	<password>[password]</password>
            	<driverClassName>com.mysql.jdbc.Driver</driverClassName>
            	<maxActive>50</maxActive>
            	<maxWait>60000</maxWait>
            	<testOnBorrow>true</testOnBorrow>
            	<validationQuery>SELECT 1</validationQuery>
            	<validationInterval>30000</validationInterval>
          	</configuration>
        	</definition>
        </datasource>

        For detailed information about registry sharing strategies, see the library article Sharing Registry Space across Multiple Product Instances.

    2. Point to WSO2_ANALYTICS_EVENT_STORE_DB and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB in the <DAS_HOME>/repository/conf/datasources/analytics-datasources.xml file as shown below.

      Code Block
      languagexml
      <datasources-configuration>
          <providers>
              <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
          </providers>
      
          <datasources>
              <datasource>
                  <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
                  <description>The datasource used for analytics record store</description>
                  <definition type="RDBMS">
                      <configuration>
                          <url>jdbc:mysql://[MySQL DB url]:[port]/WSO2_ANALYTICS_EVENT_STORE_DB</url>
                          <username>[username]</username>
                          <password>[password]</password>
                          <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                          <maxActive>50</maxActive>
                          <maxWait>60000</maxWait>
                          <testOnBorrow>true</testOnBorrow>
                          <validationQuery>SELECT 1</validationQuery>
                          <validationInterval>30000</validationInterval>
                          <defaultAutoCommit>false</defaultAutoCommit>
                      </configuration>
                  </definition>
              </datasource>
              <datasource>
                  <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
                  <description>The datasource used for analytics record store</description>
                  <definition type="RDBMS">
                      <configuration>
                          <url>jdbc:mysql://[MySQL DB url]:[port]/WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</url>
                          <username>[username]</username>
                          <password>[password]</password>
                          <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                          <maxActive>50</maxActive>
                          <maxWait>60000</maxWait>
                          <testOnBorrow>true</testOnBorrow>
                          <validationQuery>SELECT 1</validationQuery>
                          <validationInterval>30000</validationInterval>
                          <defaultAutoCommit>false</defaultAutoCommit>
                      </configuration>
                  </definition>
              </datasource>
          </datasources>
      </datasources-configuration>

      For more information, see Datasources in DAS documentation.

    3. To share the user store among the nodes, open the <DAS_HOME>/repository/conf/user-mgt.xml file and modify the dataSource property of the <configuration> element as follows.

      Code Block
      languagexml
      <configuration> 
      ...
          <Property name="dataSource">jdbc/WSO2UM_DB</Property>
      </configuration>
      Info

      The datasource name specified in this configuration should be the same as the datasource used by the user manager, which you configured in sub-step a.i above.

    4. In the <DAS_HOME>/repository/conf/registry.xml file, add or modify the dataSource sub-element of the <dbConfig name="govregistry"> element as follows.

      Code Block
      languagexml
      <dbConfig name="govregistry">
      	<dataSource>jdbc/WSO2REG_DB</dataSource>
      </dbConfig>
      <remoteInstance url="https://localhost:9443/registry"> 
      	<id>gov</id>
      	<cacheId>user@jdbc:mysql://localhost:3306/regdb</cacheId>
      	<dbConfig>govregistry</dbConfig>
      	<readOnly>false</readOnly>
      	<enableCache>true</enableCache>
      	<registryRoot>/</registryRoot>
      </remoteInstance>
      <mount path="/_system/governance" overwrite="true">
      	<instanceId>gov</instanceId>
      	<targetPath>/_system/governance</targetPath>
      </mount>
      <mount path="/_system/config" overwrite="true">
      	<instanceId>gov</instanceId>
      	<targetPath>/_system/config</targetPath>
      </mount>
      Note

      Do not replace the following configuration when adding the mounting configurations. The registry mounting configurations mentioned in the above steps should be added in addition to the following.

      Code Block
      languagexml
      <dbConfig name="wso2registry">
          <dataSource>jdbc/WSO2CarbonDB</dataSource>
      </dbConfig>
  2. Update the <DAS_HOME>/repository/conf/axis2/axis2.xml file as follows to enable Hazelcast clustering for both nodes.
    1. To enable Hazelcast clustering, set the enable attribute of the clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" element to true as shown below.

      Code Block
      languagexml
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Enable the wka mode on both nodes as shown below. For more information on the wka mode, see About Membership Schemes.

      Code Block
      languagexml
      <parameter name="membershipScheme">wka</parameter>
    3. Add both DAS nodes as well-known members of the cluster under the members tag in each node, as shown in the example below.

      Code Block
      languagexml
      <members>
          <member>
              <hostName>[node1 IP]</hostName>
              <port>[node1 port]</port>
          </member>
          <member>
              <hostName>[node2 IP]</hostName>
              <port>[node2 port]</port>
          </member>
      </members>
    4. For each node, enter the respective server IP address as the value for the localMemberHost property as shown below.

      Code Block
      languagexml
      <parameter name="localMemberHost">[Server_IP_Address]</parameter>
    Expand
    titleClick here to view the complete clustering section of the axis2.xml file with the changes mentioned above.
    Code Block
    languagexml
        <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
                    enable="true">
    
            <!--
               This parameter indicates whether the cluster has to be automatically initalized
               when the AxisConfiguration is built. If set to "true" the initialization will not be
               done at that stage, and some other party will have to explictly initialize the cluster.
            -->
            <parameter name="AvoidInitiation">true</parameter>
    
            <!--
               The membership scheme used in this setup. The only values supported at the moment are
               "multicast" and "wka"
    
               1. multicast - membership is automatically discovered using multicasting
               2. wka - Well-Known Address based multicasting. Membership is discovered with the help
                        of one or more nodes running at a Well-Known Address. New members joining a
                        cluster will first connect to a well-known node, register with the well-known node
                        and get the membership list from it. When new members join, one of the well-known
                        nodes will notify the others in the group. When a member leaves the cluster or
                        is deemed to have left the cluster, it will be detected by the Group Membership
                        Service (GMS) using a TCP ping mechanism.
            -->
            
            <parameter name="membershipScheme">wka</parameter>
    
            <!--<parameter name="licenseKey">xxx</parameter>-->
            <!--<parameter name="mgtCenterURL">http://localhost:8081/mancenter/</parameter>-->
    
            <!--
             The clustering domain/group. Nodes in the same group will belong to the same multicast
             domain. There will not be interference between nodes in different groups.
            -->
            <parameter name="domain">wso2.carbon.domain</parameter>
    
            <!-- The multicast address to be used -->
            <!--<parameter name="mcastAddress">228.0.0.4</parameter>-->
    
            <!-- The multicast port to be used -->
            <parameter name="mcastPort">45564</parameter>
    
            <parameter name="mcastTTL">100</parameter>
    
            <parameter name="mcastTimeout">60</parameter>
    
            <!--
               The IP address of the network interface to which the multicasting has to be bound to.
               Multicasting would be done using this interface.
            -->
            <!--
                <parameter name="mcastBindAddress">10.100.5.109</parameter>
            -->
            <!-- The host name or IP address of this member -->
    
            <parameter name="localMemberHost">[node IP]</parameter>
    
            <!--
                The bind adress of this member. The difference between localMemberHost & localMemberBindAddress
                is that localMemberHost is the one that is advertised by this member, while localMemberBindAddress
                is the address to which this member is bound to.
            -->
            <!--
            <parameter name="localMemberBindAddress">[node IP]</parameter>
            -->
    
            <!--
            The TCP port used by this member. This is the port through which other nodes will
            contact this member
             -->
            <parameter name="localMemberPort">[node port]</parameter>
    
            <!--
                The bind port of this member. The difference between localMemberPort & localMemberBindPort
                is that localMemberPort is the one that is advertised by this member, while localMemberBindPort
                is the port to which this member is bound to.
            -->
            <!--
            <parameter name="localMemberBindPort">4001</parameter>
            -->
    
            <!--
            Properties specific to this member
            -->
            <parameter name="properties">
                <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
                <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
                <property name="subDomain" value="worker"/>
            </parameter>
    
            <!--
            Uncomment the following section to load custom Hazelcast data serializers.
            -->
            <!--
            <parameter name="hazelcastSerializers">
                <serializer typeClass="java.util.TreeSet">org.wso2.carbon.hazelcast.serializer.TreeSetSerializer
                </serializer>
                <serializer typeClass="java.util.Map">org.wso2.carbon.hazelcast.serializer.MapSerializer</serializer>
            </parameter>
            -->
    
            <!--
               The list of static or well-known members. These entries will only be valid if the
               "membershipScheme" above is set to "wka"
            -->
            <members>
                <member>
                    <hostName>[node1 IP]</hostName>
                    <port>[node1 port]</port>
                </member>
                <member>
                    <hostName>[node2 IP]</hostName>
                    <port>[node2 port]</port>
                </member>
            </members>
    
            <!--
            Enable the groupManagement entry if you need to run this node as a cluster manager.
            Multiple application domains with different GroupManagementAgent implementations
            can be defined in this section.
            -->
            <groupManagement enable="false">
                <applicationDomain name="wso2.as.domain"
                                   description="AS group"
                                   agent="org.wso2.carbon.core.clustering.hazelcast.HazelcastGroupManagementAgent"
                                   subDomain="worker"
                                   port="2222"/>
            </groupManagement>
        </clustering>
  3. Configure the <DAS_HOME>/repository/conf/event-processor.xml file as follows to cluster DAS at the receiver level.

    1. Enable the HA mode by setting the following property.

      Code Block
      languagexml
      <mode name="HA" enable="true">
    2. Disable the Distributed mode by setting the following property.

      Code Block
      languagexml
      <mode name="Distributed" enable="false">
    3. For each node, enter the respective server IP address under the HA mode Config section as shown in the example below. Here, you need to specify the server IP address for event synchronization, management, and presentation.

      Info

      When you enable the HA mode for WSO2 DAS, state persistence is enabled by default. If there is no real-time use case that requires any state information after starting the cluster, you should disable event persistence by setting the enable attribute of the persistence element to false in the <DAS_HOME>/repository/conf/event-processor.xml file as shown below.

      Code Block
      languagexml
      <persistence enable="false">
          <persistenceIntervalInMinutes>15</persistenceIntervalInMinutes>
          <persisterSchedulerPoolSize>10</persisterSchedulerPoolSize>
          <persister class="org.wso2.carbon.event.processor.core.internal.persistence.FileSystemPersistenceStore">
              <property key="persistenceLocation">cep_persistence</property>
          </persister>
      </persistence>
      Tip

      When state persistence is enabled for WSO2 DAS, the internal state of DAS is persisted in files. These files are not automatically deleted. Therefore, if you want to save space in your DAS pack, you need to delete them manually.

      These files are created in the <DAS_HOME>/cep_persistence/<tenant-id> directory. This directory has a separate sub-directory for each execution plan. Each execution plan can have multiple files. The format of each file name is <TIMESTAMP>_<EXECUTION_PLAN_NAME> (e.g., 1493101044948_MyExecutionPlan). If you want to clear files for a specific execution plan, you need to leave the two files with the latest timestamps and delete the rest.

      Tip

      When you enable the HA mode, events are synchronized between the active node and the passive node by default. This can cause a high system overhead and affect performance when the throughput per second is very high. Therefore, it is recommended to configure both nodes to receive the same events instead of allowing event synchronization to take place.

      To configure both nodes to receive the same events, set the event.duplicated.in.cluster=true property for each event receiver configured for the cluster. When this property is set for a receiver, each event received by it is sent to both nodes in the cluster, and therefore, WSO2 DAS does not carry out event synchronization for that receiver.
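
      To illustrate the event.duplicated.in.cluster property mentioned in the tip above, the following is a hedged sketch of an event receiver definition (such files are deployed under <DAS_HOME>/repository/deployment/server/eventreceivers/). The receiver name, adapter type, mapping, and stream shown here are placeholders for your own receiver definition; only the duplication property is the point of this example, and its name is taken from the description above.

      Code Block
      languagexml
      <!-- Illustrative receiver definition; adjust the name, adapter, mapping and stream to match your setup -->
      <eventReceiver name="SampleReceiver" statistics="disable" trace="disable"
                     xmlns="http://wso2.org/carbon/eventreceiver">
          <from eventAdapterType="wso2event">
              <!-- Indicates that both nodes already receive this event, so DAS skips event synchronization for it -->
              <property name="event.duplicated.in.cluster">true</property>
          </from>
          <mapping customMapping="disable" type="wso2event"/>
          <to streamName="org.wso2.sample.stream" version="1.0.0"/>
      </eventReceiver>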

      Code Block
      languagexml
          <!-- HA Mode Config -->
          <mode name="HA" enable="true">
             ...
              <eventSync>
                  <hostName>[<Publicly_Exposed_IP_Address>/<Hostname_of_the_server>]</hostName>
      Code Block
      languagexml
      <management>
      	<hostName>[<Publicly_Exposed_IP_Address>/<Hostname_of_the_server>]</hostName>
      Code Block
      languagexml
      <presentation>
      	<hostName>[<Publicly_Exposed_IP_Address>/<Hostname_of_the_server>]</hostName>
      Note

      Make sure you avoid specifying localhost, 127.0.0.1, or 0.0.0.0 as the hostname in the above configuration. This is because the hostname of a node is used by the other node in the cluster to communicate with it.

    Expand
    titleClick here to view the complete event-processor.xml file with the changes mentioned above.
    Code Block
    languagexml
    <eventProcessorConfiguration>
        <mode name="SingleNode" enable="false">
            <persistence enable="false">
                <persistenceIntervalInMinutes>15</persistenceIntervalInMinutes>
                <persisterSchedulerPoolSize>10</persisterSchedulerPoolSize>
                <persister class="org.wso2.carbon.event.processor.core.internal.persistence.FileSystemPersistenceStore">
                    <property key="persistenceLocation">cep_persistence</property>
                </persister>
            </persistence>
        </mode>
    
        <!-- HA Mode Config -->
        <mode name="HA" enable="true">
            <nodeType>
                <worker enable="true"/>
                <presenter enable="false"/>
            </nodeType>
            <checkMemberUpdateInterval>10000</checkMemberUpdateInterval>
            <eventSync>
                <hostName>172.18.1.217</hostName>
                <port>11224</port>
                <reconnectionInterval>20000</reconnectionInterval>
                <serverThreads>20000</serverThreads>
                <!--Size of TCP event publishing client's send buffer in bytes-->
                <publisherTcpSendBufferSize>5242880</publisherTcpSendBufferSize>
                <!--Character encoding of TCP event publishing client-->
                <publisherCharSet>UTF-8</publisherCharSet>
                <publisherBufferSize>1024</publisherBufferSize>
                <publisherConnectionStatusCheckInterval>30000</publisherConnectionStatusCheckInterval>
                <!--Number of events that could be queued at receiver before they are synced between CEP/DAS nodes-->
                <receiverQueueSize>1000000</receiverQueueSize>
                <!--Max total size of events that could be queued at receiver before they are synced between CEP/DAS nodes-->
                <receiverQueueMaxSizeMb>10</receiverQueueMaxSizeMb>
                <!--Number of events that could be queued at publisher to sync output between CEP/DAS nodes-->
                <publisherQueueSize>1000000</publisherQueueSize>
                <!--Max total size of events that could be queued at publisher to sync output between CEP/DAS nodes-->
                <publisherQueueMaxSizeMb>10</publisherQueueMaxSizeMb>
            </eventSync>
            <management>
                <hostName>172.18.1.217</hostName>
                <port>10005</port>
                <tryStateChangeInterval>15000</tryStateChangeInterval>
                <stateSyncRetryInterval>10000</stateSyncRetryInterval>
            </management>
            <presentation>
                <hostName>172.18.1.217</hostName>
                <port>11000</port>
                <!--Size of TCP event publishing client's send buffer in bytes-->
                <publisherTcpSendBufferSize>5242880</publisherTcpSendBufferSize>
                <!--Character encoding of TCP event publishing client-->
                <publisherCharSet>UTF-8</publisherCharSet>
                <publisherBufferSize>1024</publisherBufferSize>
                <publisherConnectionStatusCheckInterval>30000</publisherConnectionStatusCheckInterval>
            </presentation>
        </mode>
    
        <!-- Distributed Mode Config -->
        <mode name="Distributed" enable="false">
            <nodeType>
                <worker enable="true"/>
                <manager enable="true">
                    <hostName>0.0.0.0</hostName>
                    <port>8904</port>
                </manager>
                <presenter enable="false">
                    <hostName>0.0.0.0</hostName>
                    <port>11000</port>
                </presenter>
            </nodeType>
            <management>
                <managers>
                    <manager>
                        <hostName>localhost</hostName>
                        <port>8904</port>
                    </manager>
                    <manager>
                        <hostName>localhost</hostName>
                        <port>8905</port>
                    </manager>
                </managers>
                <!--Connection re-try interval to connect to Storm Manager service in case of a connection failure-->
                <reconnectionInterval>20000</reconnectionInterval>
                <!--Heart beat interval (in ms) for event listeners in "Storm Receivers" and "CEP Publishers" to acknowledge their
                availability for receiving events"-->
                <heartbeatInterval>5000</heartbeatInterval>
                <!--Storm topology re-submit interval in case of a topology submission failure-->
                <topologyResubmitInterval>10000</topologyResubmitInterval>
            </management>
            <transport>
                <!--Port range to be used for events listener servers in "Storm Receiver Spouts" and "CEP Publishers"-->
                <portRange>
                    <min>15000</min>
                    <max>15100</max>
                </portRange>
                <!--Connection re-try interval (in ms) for connection failures between "CEP Receiver" to "Storm Receiver" connections
                and "Storm Publisher" to "CEP Publisher" connections-->
                <reconnectionInterval>20000</reconnectionInterval>
                <!--Size of the output queue of each "CEP Receiver" which stores events to be published into "Storm Receivers" .
                This must be a power of two-->
                <cepReceiverOutputQueueSize>8192</cepReceiverOutputQueueSize>
                <!--Size of the output queue of each "Storm Publisher" which stores events to be published into "CEP Publisher" .
                This must be a power of two-->
                <stormPublisherOutputQueueSize>8192</stormPublisherOutputQueueSize>
                <!--Size of TCP event publishing client's send buffer in bytes-->
                <tcpEventPublisherSendBufferSize>5242880</tcpEventPublisherSendBufferSize>
                <!--Character encoding of TCP event publishing client-->
                <tcpEventPublisherCharSet>UTF-8</tcpEventPublisherCharSet>
                <!--Size of the event queue in each storm spout which stores events to be processed by storm bolts -->
                <stormSpoutBufferSize>10000</stormSpoutBufferSize>
                <connectionStatusCheckInterval>20000</connectionStatusCheckInterval>
            </transport>
            <presentation>
                <presentationOutputQueueSize>1024</presentationOutputQueueSize>
                <!--Size of TCP event publishing client's send buffer in bytes-->
                <tcpEventPublisherSendBufferSize>5242880</tcpEventPublisherSendBufferSize>
                <!--Character encoding of TCP event publishing client-->
                <tcpEventPublisherCharSet>UTF-8</tcpEventPublisherCharSet>
                <connectionStatusCheckInterval>20000</connectionStatusCheckInterval>
            </presentation>
            <statusMonitor>
                <lockTimeout>60000</lockTimeout>
                <updateRate>60000</updateRate>
            </statusMonitor>
            <stormJar>org.wso2.cep.storm.dependencies.jar</stormJar>
            <distributedUIUrl></distributedUIUrl>
            <memberUpdateCheckInterval>20000</memberUpdateCheckInterval>
        </mode>
    </eventProcessorConfiguration>
    Info

    The following node types are configured for the HA deployment mode in the <DAS_HOME>/repository/conf/event-processor.xml file.

    • eventSync : Both the active and the passive nodes in this setup are event synchronizing nodes as explained in the introduction. Therefore, each node should have the host and the port on which it is operating specified under the <eventSync> element.

      Info

      Note that the eventSync port is not automatically updated to the port in which each node operates via port offset.

       

    • management : In this setup, both the nodes carry out the same tasks, and therefore, both nodes are considered manager nodes. Therefore, each node should have the host and the port on which it is operating specified under the <management> element.

      Info

      Note that the management port is not automatically updated to the port in which each node operates via port offset.

    • presentation : You can optionally specify only one of the two nodes in this setup as the presenter node. The dashboards in which processed information is displayed are configured only in the presenter node. Each node should have the host and the port on which the assigned presenter node is operating specified under the <presentation> element. The host and the port as well as the other configurations under the <presentation> element are effective only when the presenter enable="true" property is set under the <!-- HA Mode Config --> section.
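
    For example, in the sample file above node 1 uses the IP address 172.18.1.217. If node 2's IP address is 172.18.1.218 (an illustrative value), node 2's event-processor.xml would carry its own address in the corresponding elements while keeping the same ports, as in the following sketch.

    Code Block
    languagexml
    <eventSync>
        <hostName>172.18.1.218</hostName>
        <port>11224</port>
        ...
    </eventSync>
    <management>
        <hostName>172.18.1.218</hostName>
        <port>10005</port>
        ...
    </management>
    <presentation>
        <hostName>172.18.1.218</hostName>
        <port>11000</port>
        ...
    </presentation>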
  4. Update the <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf file as follows to use the Spark cluster embedded within DAS.

    • Keep the carbon.spark.master configuration as carbon. This instructs Spark to create a Spark cluster using the Hazelcast cluster.

      Info

      In earlier versions of DAS, the default Spark carbon.spark.master configuration was local. From DAS 3.1.0, this has been deprecated and the carbon configuration should be used instead.
      If the local configuration is used, the following warning might be seen in the logs:

      Code Block
      Using 'local' with Carbon clustering is deprecated. Please use 'carbon.spark.master carbon' instead!

      This log warning is harmless, and only serves as a reminder that carbon mode should be used instead.

    • Enter 2 as the value for the carbon.spark.master.count configuration. This specifies that there should be two masters in the Spark cluster. One master serves as an active master and the other serves as a stand-by master. 

    The following example shows the <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf file with the changes mentioned above.

    Code Block
    carbon.spark.master carbon
    carbon.spark.master.count 2

    For more information, see Spark Configurations in DAS documentation.

    Warning

    Important: If the path to <DAS_HOME> is different in the two nodes, please do the following. 

    Localtabgroup
    Localtab
    activetrue
    titleUNIX environment

    Create a symbolic link to <DAS_HOME> on both nodes, where the paths of those symbolic links are identical. This ensures that the DAS server can be accessed through a common path when the symbolic link is used. To do this, set the following property in the <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf file.

    carbon.das.symbolic.link /home/ubuntu/das/das_symlink/

    Info

    In a multi-node cluster, DAS servers consume a lot of CPU power and sometimes run out of memory. This is often the result of too many log directories being created in the <DAS_HOME>/work directory because the symbolic link is not set properly. To address this, for a DAS cluster that runs in a RedHat or other Linux environment, you also need to update the <DAS_HOME>/bin/wso2server.sh file with the following entry so that <DAS_HOME> is exported. This is because the symbolic link may not be resolved correctly in these operating systems.

    export CARBON_HOME=<symbolic link>

    Localtab
    titleWindows environment

    In the Windows environment, there is a strict requirement to have both DAS distributions in a common path.

  5. To share the C-Apps deployed among the nodes, configure the SVN-based deployment synchronizer. For detailed instructions, see Configuring SVN-Based Deployment Synchronizer.

    Tip

    The DAS minimum high availability deployment setup does not use separate manager and worker nodes. For the purpose of configuring the deployment synchronizer, you can add the configurations relevant to the manager to a node of your choice, and the configurations relevant to the worker to the other node. An illustrative example of the manager-side configuration is sketched at the end of this step.

    Info

    If you do not configure the deployment synchronizer, you are required to deploy any C-App you use in the DAS minimum high availability deployment setup to both nodes.
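
    The deployment synchronizer itself is configured in the <DAS_HOME>/repository/conf/carbon.xml file. The following is a hedged sketch of a manager-side configuration, assuming an illustrative SVN repository URL and credentials; refer to Configuring SVN-Based Deployment Synchronizer for the exact elements and for the worker-side values (typically AutoCommit is set to false on the worker node).

    Code Block
    languagexml
    <DeploymentSynchronizer>
        <Enabled>true</Enabled>
        <AutoCommit>true</AutoCommit>
        <AutoCheckout>true</AutoCheckout>
        <RepositoryType>svn</RepositoryType>
        <SvnUrl>http://svnrepo.example.com/repos/das</SvnUrl>
        <SvnUser>svnuser</SvnUser>
        <SvnPassword>svnpassword</SvnPassword>
        <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>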

  6. If the physical DAS server has multiple network interfaces with different IPs, and if you want Spark to use a specific Interface IP, open either the <DAS_HOME>/bin/load-spark-env-vars.sh file (for Linux) or <DAS_HOME>/bin/load-spark-env-vars.bat file (for Windows), and add the following parameter to configure the Spark IP address. 

    Code Block
    export SPARK_LOCAL_IP=<IP_Address>
  7. Create a file named my-node-id.dat in the <DAS_HOME>/repository/conf/analytics folder and add an ID of your preference. This ID should be unique across all the DAS nodes in the cluster; the same ID should not be used by more than one node. If this ID is not provided, a node ID is generated by the server. The node ID is also stored in the database (the primary record store, which is WSO2_ANALYTICS_EVENT_STORE_DB) for future reference.

    Code Block
    <Any id which is unique across all other my-node-id.dat files in the cluster>
    Note

    If, for any reason, you want to replace the DAS servers with new packs and point the new packs to the same databases, you need to back up the my-node-id.dat files from both previous installations and restore them in the new DAS packs. If this is not done, two new node IDs are created while the older two node IDs are still in the database, resulting in four node IDs altogether. This can lead to various indexing inconsistencies. If you are going to clean the whole database, you can start without restoring the older node IDs.

...