You can enable or disable WSO2 DAS components depending on how you deploy the DAS servers. Disabling the components/features that a specific node does not need lets that node use its resources effectively for its intended role, and also provides high availability for the selected DAS operations.
You enable or disable components by setting one or more system properties in the server startup command, as explained below. Because the behavior is controlled at startup, the same DAS distribution can be used for all the nodes in the deployment.
Command (on Linux) | Function | Usage
---|---|---
sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableAnalyticsEngine=true | If this system property is set, the Spark server does not start up in this node. | Use this property when you want a node to act only as a receiver node, indexing node, publisher node, or real-time analytics node.
sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableAnalyticsExecution=true | If this system property is set, the node does not join the execution of the Spark script tasks scheduled in the cluster, and you cannot execute any Spark scripts from the node. | Use this property when you want a node to act only as a receiver node, indexing node, publisher node, real-time analytics node, or a Spark analyzer node that accepts jobs from a remote server.
sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableIndexing=true | If this system property is set, the node does not participate in indexing tasks. | Use this property when you want a node to act only as a receiver node, publisher node, real-time analytics node, or a Spark analyzer node that accepts jobs from a remote server.
sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableEventSink=true | If this system property is set, the node does not persist the events received for streams, even if a stream has been configured to be persisted. | Use this property when you want a node to act only as a real-time analytics node.
sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableDataPurging=true | If this system property is set, the node does not join data purging operations. This applies to both global purging tasks and tasks scheduled through the Data Explorer. | This is only useful in a clustered environment. A scheduled purging operation can run on any DAS node; use this property to prevent purging tasks from being scheduled on a particular node, such as a dashboard node.
sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableAnalyticsSparkCtx=true | If this system property is set, the node does not instantiate a Spark context, so no Spark app is created at server startup. | This property allows you to use a DAS cluster as a Spark cluster and submit Spark apps to it. For example, WSO2 Machine Learner (ML) can submit its Spark apps to DAS. For more information on connecting WSO2 ML to an external Spark cluster, see the With external Spark cluster section in the Deployment Patterns page of the WSO2 ML documentation.
sh <PRODUCT_HOME>/bin/wso2server.sh -DenableAnalyticsStats=true | If this system property is set, Spark query execution statistics are printed in the Carbon console. | Use this property when you need to view analytics statistics in the Carbon console.
sh <PRODUCT_HOME>/bin/wso2server.sh -DenableIndexingStats=true | If this system property is set, statistics of the background indexing tasks are printed in the Carbon console. | Use this property when you need to view indexing statistics in the Carbon console.
sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableIndexThrottling=true | If this system property is set, throttling of indexing operations is disabled in this node. | Use this property when you do not want indexing operations to be throttled in a particular node.
sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableMLSparkCtx=true | If this system property is set, ML Spark context creation is disabled. | This property is used in scenarios where predictive analytics is performed in DAS. When WSO2 Machine Learner features are used in DAS, an ML Spark context is created by default. This prevents you from running the batch analytics features of DAS, because running multiple Spark contexts is not allowed. Setting -DdisableMLSparkCtx=true disables the ML Spark context; after setting this property, you can continue using the predictive analytics features in DAS.
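Because these are plain JVM system properties, on Linux you can confirm which of them a running DAS server was started with by inspecting the process command line (for example via `ps -ef | grep '[w]so2server'`). The snippet below is an illustrative sketch, not part of the DAS distribution; the sample `cmdline` string stands in for real `ps` output:

```shell
# Sample command line, as it might appear in `ps` output for a DAS JVM:
cmdline='java -DdisableIndexing=true -DdisableEventSink=true org.wso2.carbon.Bootstrap'

# Extract any -D…=true flags from it:
printf '%s\n' "$cmdline" | grep -o -- '-D[A-Za-z]*=true'
# -DdisableIndexing=true
# -DdisableEventSink=true
```

Replacing the sample string with the real `ps` output for the server process lists every enable/disable property the node was started with.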
Info |
---|
You can use multiple parameter values to disable multiple DAS components simultaneously. For example, you can use the following command to simultaneously disable both the Spark server-related and Spark script execution-related components described above: sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableAnalyticsEngine=true -DdisableAnalyticsExecution=true |
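In scripted deployments it can be convenient to assemble such a combined startup command from a list of property names. The helper below (`build_das_cmd` is a hypothetical name, not part of the DAS distribution) is a minimal sketch of that idea:

```shell
# Hypothetical helper: build the wso2server.sh startup command from a
# list of component-disabling/enabling system property names.
build_das_cmd() {
  cmd='sh <PRODUCT_HOME>/bin/wso2server.sh'
  for prop in "$@"; do
    cmd="$cmd -D${prop}=true"
  done
  printf '%s\n' "$cmd"
}

build_das_cmd disableAnalyticsEngine disableAnalyticsExecution
# sh <PRODUCT_HOME>/bin/wso2server.sh -DdisableAnalyticsEngine=true -DdisableAnalyticsExecution=true
```

Substituting the actual product path for the `<PRODUCT_HOME>` placeholder yields a command you can run directly on the node.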