...
The extension can be used in an execution plan with either of the following two Siddhi query syntaxes.
<double|float|long|int|string|boolean> predict(<string> pathToMLModel, <string> dataType)
- Extension Type: StreamProcessor
- Description: Returns an output event with an additional attribute, named after the response variable of the model, set to the value predicted from the feature values extracted from the input event.
- Parameter: pathToMLModel: The file path or the registry path where the ML model is located. If the model is stored in the registry, the value of this parameter should have the prefix "registry:".
- Parameter: dataType: Data type of the predicted value (double, float, long, integer/int, string, boolean/bool).
- Example: predict('registry:/_system/governance/mlmodels/indian-diabetes-model', 'double')
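As a sketch of how this syntax fits into a full Siddhi query (the stream name, attribute names, and `@Plan` annotation are illustrative, and the extension is assumed to be registered under the `ml` namespace; only the model path comes from the example above):

```sql
@Plan:name('DiabetesPredictionPlan')

/* Hypothetical input stream whose attributes match the model's features */
define stream InputStream (NumPregnancies double, TSFT double, DPF double, BMI double,
                           DBP double, PG2 double, Age double, SI2 double);

/* All attributes of InputStream are used as feature values for the prediction */
from InputStream#ml:predict('registry:/_system/governance/mlmodels/indian-diabetes-model', 'double')
select *
insert into PredictionStream;
```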
<double|float|long|int|string|boolean> predict(<string> pathToMLModel, <string> dataType, <double> input)
- Extension Type: StreamProcessor
- Description: Returns an output event with an additional attribute, named after the response variable of the model, set to the value predicted from the feature values extracted from the input event.
- Parameter: pathToMLModel: The file path or the registry path where the ML model is located. If the model is stored in the registry, the value of this parameter should have the prefix "registry:".
- Parameter: dataType: Data type of the predicted value (double, float, long, integer/int, string, boolean/bool).
- Parameter: input: An attribute of the input stream that is sent to the ML model as a feature value for prediction. The function does not accept constant values as input parameters. Multiple input parameters can be specified.
- Example: predict('registry:/_system/governance/mlmodels/indian-diabetes-model', 'double', NumPregnancies, TSFT, DPF, BMI, DBP, PG2, Age, SI2)
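The second syntax can likewise be sketched as part of an execution plan (the stream definition and `@Plan` annotation are illustrative, and the extension is assumed to be registered under the `ml` namespace; the model path and feature attributes come from the example above):

```sql
@Plan:name('DiabetesPredictionPlan')

/* Hypothetical input stream; extra attributes not listed in predict() are ignored as features */
define stream InputStream (PatientId string, NumPregnancies double, TSFT double, DPF double,
                           BMI double, DBP double, PG2 double, Age double, SI2 double);

/* Only the explicitly listed attributes are sent to the model as feature values */
from InputStream#ml:predict('registry:/_system/governance/mlmodels/indian-diabetes-model', 'double',
                            NumPregnancies, TSFT, DPF, BMI, DBP, PG2, Age, SI2)
select *
insert into PredictionStream;
```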
...
Info |
---|
When you run WSO2 CEP in distributed mode, the following dependency and `<include>` sections should be uncommented in the <CEP_HOME>/samples/utils/storm-dependencies-jar/pom.xml file as shown below.
Code Block |
---|
| <!-- Uncomment the following dependency section if you want to include the Siddhi ML extension as part of
Storm dependencies -->
<dependency>
<groupId>org.wso2.carbon.ml</groupId>
<artifactId>org.wso2.carbon.ml.siddhi.extension</artifactId>
<version>${carbon.ml.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.carbon.ml</groupId>
<artifactId>org.wso2.carbon.ml.core</artifactId>
<version>${carbon.ml.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.carbon.ml</groupId>
<artifactId>org.wso2.carbon.ml.database</artifactId>
<version>${carbon.ml.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.carbon.ml</groupId>
<artifactId>org.wso2.carbon.ml.commons</artifactId>
<version>${carbon.ml.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.carbon.metrics</groupId>
<artifactId>org.wso2.carbon.metrics.manager</artifactId>
<version>${carbon.metrics.version}</version>
</dependency>
<!-- Dependencies for Spark -->
<dependency>
<groupId>org.wso2.orbit.org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>${spark.core.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.orbit.org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.orbit.org.apache.spark</groupId>
<artifactId>spark-mllib_2.10</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.orbit.org.apache.spark</groupId>
<artifactId>spark-streaming_2.10</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.orbit.org.scalanlp</groupId>
<artifactId>breeze_2.10</artifactId>
<version>${breeze.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.orbit.jblas</groupId>
<artifactId>jblas</artifactId>
<version>${jblas.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.orbit.spire-math</groupId>
<artifactId>spire_2.10</artifactId>
<version>${spire.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.orbit.org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>${hadoop.client.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.uncommons.maths</groupId>
<artifactId>uncommons-maths</artifactId>
<version>${uncommons.maths.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.json4s</groupId>
<artifactId>json4s-jackson_2.10</artifactId>
<version>${json4s.jackson.version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.orbit.github.fommil.netlib</groupId>
<artifactId>core</artifactId>
<version>${fommil.netlib.version}</version>
</dependency>
<dependency>
<groupId>org.wso2.orbit.sourceforge.f2j</groupId>
<artifactId>arpack_combined</artifactId>
<version>${arpack.combined.version}</version>
</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-csv</artifactId>
<version>${commons.csv.version}</version>
</dependency> |
Code Block |
---|
| <!-- ML extension dependencies -->
<include>org.wso2.orbit.org.apache.spark:spark-core_2.10</include>
<include>org.wso2.orbit.org.apache.spark:spark-sql_2.10</include>
<include>org.wso2.orbit.org.apache.spark:spark-mllib_2.10</include>
<include>org.wso2.orbit.org.apache.spark:spark-streaming_2.10</include>
<include>org.wso2.orbit.org.scalanlp:breeze_2.10</include>
<include>org.wso2.orbit.jblas:jblas</include>
<include>org.wso2.orbit.spire-math:spire_2.10</include>
<include>org.wso2.orbit.org.apache.hadoop:hadoop-client</include>
<include>org.wso2.uncommons.maths:uncommons-maths</include>
<include>org.wso2.json4s:json4s-jackson_2.10</include>
<include>org.slf4j:slf4j-api</include>
<include>org.wso2.orbit.github.fommil.netlib:core</include>
<include>org.wso2.orbit.sourceforge.f2j:arpack_combined</include>
<include>org.scala-lang:scala-library</include>
<include>org.apache.commons:commons-csv</include>
<include>org.wso2.carbon.ml:org.wso2.carbon.ml.core</include>
<include>org.wso2.carbon.ml:org.wso2.carbon.ml.database</include>
<include>org.wso2.carbon.ml:org.wso2.carbon.ml.commons</include>
<include>org.wso2.carbon.ml:org.wso2.carbon.ml.siddhi.extension</include>
<include>org.wso2.carbon.metrics:org.wso2.carbon.metrics.manager</include> |
- Run the following command from the <CEP_HOME>/samples/utils/storm-dependencies-jar directory:
mvn clean install
This generates a jar in the target directory.
- Create a directory named patchxxxx in <CEP_HOME>/repository/components/patches.
Tip |
---|
Replace xxxx with a number that is greater by 1 than the number of the latest patch directory. |
- Copy the jar generated in the <CEP_HOME>/samples/utils/storm-dependencies-jar/target directory to the newly created <CEP_HOME>/repository/components/patches/patchxxxx directory.
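The build-and-patch steps above can be sketched as shell commands. The patch number 0005 and the jar wildcard are illustrative placeholders; substitute the next patch number for your installation and the actual jar name produced by the build, and replace <CEP_HOME> with your CEP installation path.

```
# Build the Storm dependencies jar
cd <CEP_HOME>/samples/utils/storm-dependencies-jar
mvn clean install

# Create the next patch directory, e.g. patch0005 if patch0004 is currently the latest
mkdir -p <CEP_HOME>/repository/components/patches/patch0005

# Copy the generated jar into the new patch directory
cp target/*.jar <CEP_HOME>/repository/components/patches/patch0005/
```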
...