Introduction
This guide demonstrates how to use a model downloaded from WSO2 ML in plain Java code. It is based on the sample Java client implementation found in the <ML_HOME>/samples/model-usage directory.
Prerequisites
Before you run this sample, do the following.
- Download WSO2 ML, and start the server. For instructions, see Getting Started.
- Generate a model using WSO2 ML. This model is used to make the predictions. For instructions on generating a model, see Generating Models. To build a model, see ML UI Workflow.
- Download the generated model.
You can stop the WSO2 ML server after generating the required model, since using the model in a Java client does not require a running ML server.
Building the sample
To build this sample, create a Java project as follows.
- Create a Maven project.
- Include the following dependencies in the pom.xml file of the project.

<dependency>
    <groupId>org.wso2.carbon.ml</groupId>
    <artifactId>org.wso2.carbon.ml.core</artifactId>
    <version>1.0.1-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.wso2.carbon.ml</groupId>
    <artifactId>org.wso2.carbon.ml.commons</artifactId>
    <version>1.0.1-SNAPSHOT</version>
</dependency>
- Include the following runtime dependencies.

<!-- Dependencies required for MLModel deserialization -->
<dependency>
    <groupId>org.wso2.orbit.org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.4.1.wso2v1</version>
</dependency>
<dependency>
    <groupId>org.wso2.orbit.org.apache.spark</groupId>
    <artifactId>spark-mllib_2.10</artifactId>
    <version>1.4.1.wso2v1</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.10.4</version>
</dependency>
<dependency>
    <groupId>org.wso2.orbit.org.scalanlp</groupId>
    <artifactId>breeze_2.10</artifactId>
    <version>0.11.1.wso2v1</version>
</dependency>
<dependency>
    <groupId>org.wso2.orbit.github.fommil.netlib</groupId>
    <artifactId>core</artifactId>
    <version>1.1.2.wso2v1</version>
</dependency>
<!-- Dependencies required for predictions -->
<dependency>
    <groupId>org.wso2.orbit.sourceforge.f2j</groupId>
    <artifactId>arpack_combined</artifactId>
    <version>0.1.wso2v1</version>
</dependency>
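Running the sample with mvn exec:java (shown later in this guide) relies on the exec-maven-plugin being configured in the pom.xml. The shipped sample already includes this; if you assemble the project yourself, the configuration can be sketched as follows (the mainClass value is an assumption and must match the package in which you place MLModelUsageSample):

```xml
<build>
    <plugins>
        <!-- Lets "mvn exec:java" run the sample client; adjust mainClass
             to the fully qualified name of your MLModelUsageSample class. -->
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>1.4.0</version>
            <configuration>
                <mainClass>MLModelUsageSample</mainClass>
            </configuration>
        </plugin>
    </plugins>
</build>
```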
- The downloaded model contains a serialized MLModel object. Deserialize the MLModel as follows. (Note the use of try-with-resources to ensure the streams are closed.)

private MLModel deserializeMLModel() throws IOException, ClassNotFoundException {
    // Path to downloaded-ml-model
    String pathToDownloadedModel = "/tmp/downloaded-ml-model";
    try (FileInputStream fileInputStream = new FileInputStream(pathToDownloadedModel);
         ObjectInputStream in = new ObjectInputStream(fileInputStream)) {
        MLModel mlModel = (MLModel) in.readObject();
        logger.info("Algorithm Type : " + mlModel.getAlgorithmClass());
        logger.info("Algorithm Name : " + mlModel.getAlgorithmName());
        logger.info("Response Variable : " + mlModel.getResponseVariable());
        logger.info("Features : " + mlModel.getFeatures());
        return mlModel;
    }
}
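Outside of the WSO2 classes, the mechanism at work here is plain Java object serialization. The following self-contained sketch mirrors it with a hypothetical Model class standing in for MLModel (the real class lives in org.wso2.carbon.ml.commons):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical stand-in for MLModel, used only to illustrate the
// serialization round trip the sample relies on.
public class SerializationSketch {
    static class Model implements Serializable {
        private static final long serialVersionUID = 1L;
        final String algorithmName;
        Model(String algorithmName) { this.algorithmName = algorithmName; }
    }

    // Write a model to a file and read it back, mirroring how the
    // downloaded-ml-model file is deserialized in the sample.
    static Model roundTrip(Model model, File file) throws IOException, ClassNotFoundException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(model);
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (Model) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("model", ".ser");
        file.deleteOnExit();
        Model copy = roundTrip(new Model("LOGISTIC_REGRESSION"), file);
        System.out.println("Algorithm Name : " + copy.algorithmName);
    }
}
```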
- Use org.wso2.carbon.ml.core.impl.Predictor to make predictions using the downloaded ML model. Predictor accepts a list of string arrays as feature values; within each array, the values must be ordered by feature index, as shown below.

MLModelUsageSample modelUsageSample = new MLModelUsageSample();

// Deserialize
MLModel mlModel = modelUsageSample.deserializeMLModel();

// Predict
String[] featureValueArray1 = new String[] { "6", "148", "72", "35", "0", "33.6", "0.627", "50" };
String[] featureValueArray2 = new String[] { "0", "101", "80", "40", "0", "26", "0.5", "33" };
ArrayList<String[]> list = new ArrayList<String[]>();
list.add(featureValueArray1);
list.add(featureValueArray2);
modelUsageSample.predict(list, mlModel);

public void predict(List<String[]> featureValueLists, MLModel mlModel) throws MLModelHandlerException {
    Predictor predictor = new Predictor(0, mlModel, featureValueLists);
    List<?> predictions = predictor.predict();
    logger.info("Predictions : " + predictions);
}
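Ordering the values by feature index is easy to get wrong when feature values come from named fields. As a sketch, the helper below places named values into an array by index; the feature names and indices used here are the ones reported for the sample model (NumPregnancies=0, PG2=1, DBP=2, TSFT=3, SI2=4, BMI=5, DPF=6, Age=7), and the helper itself is illustrative, not part of the WSO2 API:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Builds a Predictor-style row (String[] ordered by feature index)
// from feature values keyed by feature name.
public class FeatureOrdering {
    static String[] toOrderedRow(Map<String, String> valuesByName, Map<String, Integer> indexByName) {
        String[] row = new String[indexByName.size()];
        for (Map.Entry<String, Integer> e : indexByName.entrySet()) {
            // Place each value at the position given by its feature index.
            row[e.getValue()] = valuesByName.get(e.getKey());
        }
        return row;
    }

    public static void main(String[] args) {
        // Feature indices as reported by the sample model's getFeatures() output.
        Map<String, Integer> indexByName = new HashMap<>();
        indexByName.put("NumPregnancies", 0);
        indexByName.put("PG2", 1);
        indexByName.put("DBP", 2);
        indexByName.put("TSFT", 3);
        indexByName.put("SI2", 4);
        indexByName.put("BMI", 5);
        indexByName.put("DPF", 6);
        indexByName.put("Age", 7);

        // Values from featureValueArray1 in the sample.
        Map<String, String> values = new HashMap<>();
        values.put("NumPregnancies", "6");
        values.put("PG2", "148");
        values.put("DBP", "72");
        values.put("TSFT", "35");
        values.put("SI2", "0");
        values.put("BMI", "33.6");
        values.put("DPF", "0.627");
        values.put("Age", "50");

        System.out.println(Arrays.toString(toOrderedRow(values, indexByName)));
        // [6, 148, 72, 35, 0, 33.6, 0.627, 50]
    }
}
```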
The complete project created above can be found in the <ML_HOME>/samples/model-usage
directory. To build it, execute the following command from that directory.
mvn clean install
Executing the sample
To execute this sample, run the following command from the <ML_HOME>/samples/model-usage
directory.
mvn exec:java
Analysing the output
The following output is displayed in the console when you execute the sample.
INFO [MLModelUsageSample] - Algorithm Type : Classification
INFO [MLModelUsageSample] - Algorithm Name : LOGISTIC_REGRESSION
INFO [MLModelUsageSample] - Response Variable : Class
INFO [MLModelUsageSample] - Features : [Feature [name=Age, index=7, type=NUMERICAL, imputeOption=DISCARD, include=true],
    Feature [name=BMI, index=5, type=NUMERICAL, imputeOption=DISCARD, include=true],
    Feature [name=DBP, index=2, type=NUMERICAL, imputeOption=DISCARD, include=true],
    Feature [name=DPF, index=6, type=NUMERICAL, imputeOption=DISCARD, include=true],
    Feature [name=NumPregnancies, index=0, type=NUMERICAL, imputeOption=DISCARD, include=true],
    Feature [name=PG2, index=1, type=NUMERICAL, imputeOption=DISCARD, include=true],
    Feature [name=SI2, index=4, type=NUMERICAL, imputeOption=DISCARD, include=true],
    Feature [name=TSFT, index=3, type=NUMERICAL, imputeOption=DISCARD, include=true]]
Sep 03, 2015 9:35:20 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
Sep 03, 2015 9:35:20 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
INFO [MLModelUsageSample] - Predictions : [0, 0]
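The final line, "Predictions : [0, 0]", contains one predicted value of the response variable ("Class") per input row. Turning those raw values into readable labels can be sketched as below; the mapping used here (0 as negative, 1 as positive) is an assumption for illustration and is not defined by the sample itself:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative only: maps raw prediction values to hypothetical labels.
// The 0 -> "negative", 1 -> "positive" mapping is an assumption, not
// something the WSO2 ML sample defines.
public class PredictionLabels {
    static List<String> toLabels(List<?> predictions) {
        List<String> labels = new ArrayList<>();
        for (Object p : predictions) {
            labels.add("0".equals(String.valueOf(p)) ? "negative" : "positive");
        }
        return labels;
    }

    public static void main(String[] args) {
        // The sample's output was [0, 0], i.e. both rows predicted as class 0.
        System.out.println(toLabels(Arrays.asList(0, 0)));
    }
}
```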