The following sections describe how to deploy WSO2 ML in the preferred mode.
**Standalone mode**

WSO2 ML is bundled with an inbuilt Apache Spark instance. In the standalone deployment pattern, Spark runs in local mode with one or more worker threads on the same machine. WSO2 ML runs in standalone mode by default. WSO2 ML handles the driver program of the Spark instance, which submits jobs to the Spark master. The number of worker threads with which Spark runs can be set via the spark.master property in the <ML_HOME>/repository/conf/etc/spark-config.xml file. The possible values are as follows.

| Value | Description |
|---|---|
| local | Runs Spark locally with one worker thread, so no tasks run in parallel. |
| local[k] | Runs Spark locally with k worker threads. Ideally, k is the number of cores in your machine. |
| local[*] | Runs Spark locally with a number of worker threads that equals the number of logical cores in your machine. |
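As a quick illustration, the following spark-config.xml entry runs Spark with as many worker threads as there are logical cores; the local[*] value here is just an example choice, and the property format matches the spark.master entries shown later on this page.

```xml
<!-- <ML_HOME>/repository/conf/etc/spark-config.xml -->
<!-- Example value: use all logical cores; replace with local or local[k] as needed -->
<property name="spark.master">local[*]</property>
```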
**With external Spark Cluster**
By default, WSO2 ML runs with an inbuilt Apache Spark instance. However, when working with big data, you can handle large data sets in a distributed environment through WSO2 ML: carry out the data pre-processing and model building processes on an Apache Spark cluster to share the workload among the nodes of the cluster. Using a Spark cluster optimizes performance and reduces the time taken to build and train a machine learning model on a large data set. Follow the steps below to run ML jobs by connecting WSO2 ML to an external Apache Spark cluster.

Info:
- When following the instructions below, use Apache Spark version 1.4.1 with Apache Hadoop version 2.6 or later in the Apache Spark cluster.
- The Spark deployment pattern can be Standalone, YARN, or Mesos.
- WSO2 ML is unaware of the underlying configuration of the Spark cluster; it only interacts with the Spark master to which the jobs are submitted.
- Press Ctrl+C to shut down the WSO2 ML server. For more information on shutting down the WSO2 ML server, see Running the Product.
- Create a directory named <SPARK_HOME>/ml/ and copy the following jar files into it (a sample set of commands follows this list). These jar files can be found in the <ML_HOME>/repository/components/plugins directory.
  - org.wso2.carbon.ml.core_1.0.2.jar
  - org.wso2.carbon.ml.commons_1.0.2.jar
  - org.wso2.carbon.ml.database_1.0.2.jar
  - kryo_2.24.0.wso2v1.jar
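The copy can be done with standard shell commands. This is a minimal sketch that assumes the SPARK_HOME and ML_HOME environment variables point to the respective installation directories:

```sh
# Assumes SPARK_HOME and ML_HOME are set to the Spark and WSO2 ML installation directories
mkdir -p "${SPARK_HOME}/ml"
cd "${ML_HOME}/repository/components/plugins"
cp org.wso2.carbon.ml.core_1.0.2.jar \
   org.wso2.carbon.ml.commons_1.0.2.jar \
   org.wso2.carbon.ml.database_1.0.2.jar \
   kryo_2.24.0.wso2v1.jar \
   "${SPARK_HOME}/ml/"
```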
- Create a file named spark-env.sh in the <SPARK_HOME>/conf/ directory and add the following entries, replacing {SPARK_HOME} with the absolute path to your Spark installation.

```sh
SPARK_MASTER_IP=127.0.0.1
SPARK_CLASSPATH={SPARK_HOME}/ml/org.wso2.carbon.ml.core_1.0.2.jar:{SPARK_HOME}/ml/org.wso2.carbon.ml.commons_1.0.2.jar:{SPARK_HOME}/ml/org.wso2.carbon.ml.database_1.0.2.jar:{SPARK_HOME}/ml/kryo_2.24.0.wso2v1.jar
```
- Restart the external Spark cluster using the following commands:

```sh
{SPARK_HOME}$ ./sbin/stop-all.sh
{SPARK_HOME}$ ./sbin/start-all.sh
```
- In the <ML_HOME>/repository/conf/etc/spark-config.xml file, enter the Spark master URL as the value of the spark.master property as shown in the example below.

Tip: You can find the Spark master URL in the Apache Spark Web UI. For a standalone Spark cluster, it takes the form spark://<host>:<port> (port 7077 by default).

```xml
<property name="spark.master">{SPARK_MASTER_URL}</property>
```
- Restart the WSO2 ML server. For more information on restarting the WSO2 ML server, see Running the Product.
**With DAS as the Spark Cluster**
WSO2 DAS has an embedded Spark server, which automatically creates a Spark cluster when DAS is started in clustered mode. Follow the steps below to run ML jobs by connecting WSO2 ML to a WSO2 DAS cluster that serves as the Spark cluster.

- Set up a DAS cluster using Carbon clustering and configure it to have at least one worker node. For more information on setting up a DAS cluster, see Clustering Data Analytics Server.
- Install the required ML features in each DAS node from the P2 repository of your ML version. For more information on installing features, see Installing and Managing Features.
- Stop all DAS nodes. For more information on stopping DAS nodes, see Running the Product in the DAS documentation.
- Start the DAS cluster again without initializing the Spark contexts of the CarbonAnalytics and ML features. Use the following options when starting the cluster; a sample startup command follows the table.

| Option | Purpose |
|---|---|
| -DdisableAnalyticsSparkCtx=true | Disables the CarbonAnalytics Spark context. |
| -DdisableMLSparkCtx=true | Disables the ML Spark context. |
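As a sketch, the two options can be passed to the Carbon startup script on each DAS node; the script name wso2server.sh is the usual WSO2 Carbon startup script and is an assumption here, so adjust it to your installation.

```sh
# Assumed startup script name; run on each DAS node from <DAS_HOME>
./bin/wso2server.sh -DdisableAnalyticsSparkCtx=true -DdisableMLSparkCtx=true
```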
- To configure ML to use DAS as the Spark cluster, set the following property in the <ML_HOME>/repository/conf/etc/spark-config.xml file.

```xml
<property name="spark.master">{SPARK_MASTER}</property>
```
- Add the following jars to the Spark executor extra classpath.
  - org.wso2.carbon.ml.commons_1.0.2.jar
  - org.wso2.carbon.ml.core_1.0.2.jar
  - org.wso2.carbon.ml.database_1.0.2.jar
  - spark-mllib_2.10_1.4.1.wso2v1wso2v2.jar
  - arpack_combined_0.1.0.wso2v1.jar
  - breeze_2.10_0.11.1.wso2v1.jar
  - core_1.1.2.wso2v1.jar
  - jblas_1.2.3.wso2v1.jar
  - spire_2.10_0.7.4.wso2v1.jar
These jars should also be added to the Spark driver extra classpath. Both are set as Spark configuration properties in the <ML_HOME>/repository/conf/etc/spark-config.xml file as shown below.

```xml
<property name="spark.driver.extraClassPath">{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.commons_1.0.2.jar:{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.core_1.0.2.jar:{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.database_1.0.2.jar:{ML_HOME}/repository/components/plugins/spark-mllib_2.10_1.4.1.wso2v1wso2v2.jar:{ML_HOME}/repository/components/plugins/arpack_combined_0.1.0.wso2v1.jar:{ML_HOME}/repository/components/plugins/breeze_2.10_0.11.1.wso2v1.jar:{ML_HOME}/repository/components/plugins/core_1.1.2.wso2v1.jar:{ML_HOME}/repository/components/plugins/jblas_1.2.3.wso2v1.jar:{ML_HOME}/repository/components/plugins/spire_2.10_0.7.4.wso2v1.jar</property>
```

```xml
<property name="spark.executor.extraClassPath">{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.commons_1.0.2.jar:{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.core_1.0.2.jar:{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.database_1.0.2.jar:{ML_HOME}/repository/components/plugins/spark-mllib_2.10_1.4.1.wso2v1wso2v2.jar:{ML_HOME}/repository/components/plugins/arpack_combined_0.1.0.wso2v1.jar:{ML_HOME}/repository/components/plugins/breeze_2.10_0.11.1.wso2v1.jar:{ML_HOME}/repository/components/plugins/core_1.1.2.wso2v1.jar:{ML_HOME}/repository/components/plugins/jblas_1.2.3.wso2v1.jar:{ML_HOME}/repository/components/plugins/spire_2.10_0.7.4.wso2v1.jar</property>
```
- For the following two properties in the <ML_HOME>/repository/conf/etc/spark-config.xml file, enter values that are less than or equal to the resources allocated to the Spark workers in the DAS cluster. This ensures that ML does not request resources that the DAS Spark cluster cannot satisfy. A filled-in example follows these properties.

spark.executor.memory:

```xml
<property name="spark.executor.memory">{memory_in_m/g}</property>
```

spark.executor.cores:

```xml
<property name="spark.executor.cores">{number_of_cores}</property>
```
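For instance, a minimal sketch with illustrative values, assuming each DAS Spark worker has at least 1g of memory and 2 cores to spare:

```xml
<!-- Illustrative values only; they must not exceed the resources of the DAS Spark workers -->
<property name="spark.executor.memory">1g</property>
<property name="spark.executor.cores">2</property>
```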
- Start the ML server. For more information on starting the WSO2 ML server, see Running the Product.
**External H2O cluster**

Info: This deployment method should only be used when using deep learning algorithms.
The deep learning algorithms in WSO2 ML use the H2O library. Therefore, when using those algorithms, ML needs to connect to an H2O server. This connection can be made in one of the following two modes.

| Mode | Description |
|---|---|
| Local mode | The H2O server starts along with the ML server. |
| Client mode | ML connects to an external H2O cloud as a client node. |
The following sections describe how to deploy ML with an external H2O cluster in the preferred mode.

**Local mode**

This is the default scenario when ML is deployed in standalone mode. When H2O is set to local mode, the H2O server starts automatically along with the ML server. This is done by setting the following property in the <ML_HOME>/repository/conf/etc/h2o-config.xml file.
```xml
<property name="mode">local</property>
```

This property is set by default.
**Client mode**

**Prerequisites**

To start H2O in client mode with ML, a running external H2O cluster is required. The current ML version uses H2O version 3.2.0.9 (Slater release); therefore, the external H2O cluster in this scenario should be created from the H2O 3.2.0.9 version. To download this H2O version, follow the instructions on the official H2O website.

**Starting ML with an external H2O cluster**

- Start the H2O server with the following command.
```sh
java -jar h2o.jar -md5skip
```

Info: Make sure you include the -md5skip flag in the command to prevent the H2O cluster from comparing the MD5 checksums of the two h2o.jar files, one in the H2O cluster and one in WSO2 ML. If a difference in the MD5 checksums is detected, the ML server may be refused access to the external H2O cluster.
To start a customized H2O cluster, see the H2O deployment documentation.

- The H2O server prints its configuration to the command line at startup. The IP address and the name of the H2O cloud needed to connect to the external H2O cluster can be taken from this log. In this example, the name of the cloud is maheshakya and the IP address is 10.100.7.80. Note that the H2O cloud uses ports 54321 and 54322.
- Configure WSO2 ML to start H2O in client mode in order to connect to the external cluster. This configuration is done by setting the following properties in the <ML_HOME>/repository/conf/etc/h2o-config.xml file.
| Property | Notes |
|---|---|
| <property name="mode">client</property> | Sets H2O to client mode. |
| <property name="ip">{IP}</property> | {IP} is the IP address of the H2O server. |
| <property name="port">{PORT}</property> | {PORT} should be a port that is not in use on the external node, e.g., 54345. Ports 54321 and 54322 cannot be used because they are used by the external H2O cluster. |
| <property name="name">{H2O_CLOUD_NAME}</property> | {H2O_CLOUD_NAME} is the name of the external H2O cloud. |
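Putting the values from the example above together, a sketch of the resulting h2o-config.xml entries; the cloud name maheshakya, IP 10.100.7.80, and port 54345 are the illustrative values from this page, so substitute your own.

```xml
<!-- <ML_HOME>/repository/conf/etc/h2o-config.xml -->
<!-- Illustrative values taken from the example in this section -->
<property name="mode">client</property>
<property name="ip">10.100.7.80</property>
<property name="port">54345</property>
<property name="name">maheshakya</property>
```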
- Start the ML server. The command line output indicates that ML has connected to the external H2O cloud.