Configuring WSO2 Data Analytics Server with PPaaS
Use MySQL 5.6 and the 5.1.x MySQL Connector for Java when carrying out the following configurations.
Follow the instructions below to manually set up DAS with PPaaS:
Step 1 - Configure PPaaS
- Enable thrift stats publishing with the DAS_HOSTNAME and DAS_TCP_PORT values in the thrift-client-config.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory. If needed, you can define multiple DAS nodes for a High Availability (HA) setup.

```xml
<!-- Apache thrift client configuration for publishing statistics to WSO2 CEP and WSO2 DAS -->
<thriftClientConfiguration>
    <config>
        . . .
        <das>
            <node id="node-01">
                <statsPublisherEnabled>true</statsPublisherEnabled>
                <username>admin</username>
                <password>admin</password>
                <ip>[DAS_HOSTNAME]</ip>
                <port>[DAS_TCP_PORT]</port>
            </node>
            <!--<node id="node-02">
                <statsPublisherEnabled>true</statsPublisherEnabled>
                <username>admin</username>
                <password>admin</password>
                <ip>localhost</ip>
                <port>7613</port>
            </node>-->
        </das>
    </config>
</thriftClientConfiguration>
```
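For example, in a two-node HA setup both nodes are enabled. The following sketch uses hypothetical host names (das1.example.com and das2.example.com) and assumes the default DAS thrift TCP port of 7611 with no port offset:

```xml
<!-- Hypothetical HA example: both DAS nodes publish statistics.
     The host names are placeholders; 7611 assumes the default DAS
     thrift TCP port with no port offset. -->
<das>
    <node id="node-01">
        <statsPublisherEnabled>true</statsPublisherEnabled>
        <username>admin</username>
        <password>admin</password>
        <ip>das1.example.com</ip>
        <port>7611</port>
    </node>
    <node id="node-02">
        <statsPublisherEnabled>true</statsPublisherEnabled>
        <username>admin</username>
        <password>admin</password>
        <ip>das2.example.com</ip>
        <port>7611</port>
    </node>
</das>
```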
- Configure the Private PaaS metering dashboard URL with the DAS_HOSTNAME and DAS_PORTAL_PORT values in the <PRIVATE_PAAS_HOME>/repository/conf/cartridge-config.properties file as follows:

```properties
das.metering.dashboard.url=https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/metering-dashboard
```
- Configure the PPaaS monitoring dashboard URL with the DAS_HOSTNAME and DAS_PORTAL_PORT values in the <PRIVATE_PAAS_HOME>/repository/conf/cartridge-config.properties file as follows:

```properties
das.monitoring.dashboard.url=https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/monitoring-dashboard
```
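For example, assuming a hypothetical DAS host das.example.com and the default DAS portal (HTTPS) port of 9443 with no port offset, the two entries would read:

```properties
# Hypothetical values; substitute your own DAS host name and portal port.
das.metering.dashboard.url=https://das.example.com:9443/portal/dashboards/metering-dashboard
das.monitoring.dashboard.url=https://das.example.com:9443/portal/dashboards/monitoring-dashboard
```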
Step 2 - Configure DAS
- Create the ANALYTICS_FS_DB, ANALYTICS_EVENT_STORE, and ANALYTICS_PROCESSED_DATA_STORE databases in MySQL using the following MySQL script:

```sql
CREATE DATABASE ANALYTICS_FS_DB;
CREATE DATABASE ANALYTICS_EVENT_STORE;
CREATE DATABASE ANALYTICS_PROCESSED_DATA_STORE;
```
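The datasource definitions in the next step connect as root/root. If you would rather use a dedicated database user, a minimal sketch is shown below; the dasuser/daspassword credentials are hypothetical, and if you use them you must also change the username and password elements in the datasource definitions accordingly.

```sql
-- Hypothetical dedicated user instead of root/root;
-- adjust the user name, host, and password to your environment.
CREATE USER 'dasuser'@'localhost' IDENTIFIED BY 'daspassword';
GRANT ALL PRIVILEGES ON ANALYTICS_FS_DB.* TO 'dasuser'@'localhost';
GRANT ALL PRIVILEGES ON ANALYTICS_EVENT_STORE.* TO 'dasuser'@'localhost';
GRANT ALL PRIVILEGES ON ANALYTICS_PROCESSED_DATA_STORE.* TO 'dasuser'@'localhost';
FLUSH PRIVILEGES;
```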
- Configure the DAS analytics-datasources.xml file, which is in the <DAS_HOME>/repository/conf/datasources directory, as follows to create the WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB, and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB datasources, which point to the databases created above:

```xml
<datasources-configuration>
    <providers>
        <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
    </providers>
    <datasources>
        <datasource>
            <name>WSO2_ANALYTICS_FS_DB</name>
            <description>The datasource used for analytics file system</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_FS_DB</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
            <description>The datasource used for analytics record store</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_EVENT_STORE</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
            <description>The datasource used for analytics record store</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_PROCESSED_DATA_STORE</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
    </datasources>
</datasources-configuration>
```
- Set the analytics datasources created in the above step (WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB, and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB) in the DAS analytics-config.xml file, which is in the <DAS_HOME>/repository/conf/analytics directory.

```xml
<analytics-dataservice-configuration>
    <!-- The name of the primary record store -->
    <primaryRecordStore>EVENT_STORE</primaryRecordStore>
    <!-- The name of the index staging record store -->
    <indexStagingRecordStore>INDEX_STAGING_STORE</indexStagingRecordStore>
    <!-- Analytics File System - properties related to index storage implementation -->
    <analytics-file-system>
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsFileSystem</implementation>
        <properties>
            <!-- the data source name mentioned in the data sources configuration -->
            <property name="datasource">WSO2_ANALYTICS_FS_DB</property>
            <property name="category">large_dataset_optimized</property>
        </properties>
    </analytics-file-system>
    <!-- Analytics Record Store - properties related to record storage implementation -->
    <analytics-record-store name="EVENT_STORE">
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
        <properties>
            <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
            <property name="category">large_dataset_optimized</property>
        </properties>
    </analytics-record-store>
    <analytics-record-store name="INDEX_STAGING_STORE">
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
        <properties>
            <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
            <property name="category">limited_dataset_optimized</property>
        </properties>
    </analytics-record-store>
    <analytics-record-store name="PROCESSED_DATA_STORE">
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
        <properties>
            <property name="datasource">WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</property>
            <property name="category">large_dataset_optimized</property>
        </properties>
    </analytics-record-store>
    <!-- The data indexing analyzer implementation -->
    <analytics-lucene-analyzer>
        <implementation>org.apache.lucene.analysis.standard.StandardAnalyzer</implementation>
    </analytics-lucene-analyzer>
    <!-- The maximum number of threads used for indexing per node; -1 auto-detects the
         optimum value, which is (number of CPU cores in the system - 1) -->
    <indexingThreadCount>-1</indexingThreadCount>
    <!-- The number of index shards; should be equal to or higher than the number of indexing
         nodes that will be working, the ideal count being
         'number of indexing nodes * [CPU cores used for indexing per node]' -->
    <shardCount>6</shardCount>
    <!-- Data purging related configuration -->
    <analytics-data-purging>
        <!-- Indicates whether purging is enabled. To enable data purging for a cluster,
             enable this property on all nodes -->
        <purging-enable>false</purging-enable>
        <cron-expression>0 0 0 * * ?</cron-expression>
        <!-- Tables to include in purging. Use a regular expression to specify the table names -->
        <purge-include-tables>
            <table>.*</table>
            <!--<table>.*jmx.*</table>-->
        </purge-include-tables>
        <!-- All records inserted before the specified retention period are eligible for purging -->
        <data-retention-days>365</data-retention-days>
    </analytics-data-purging>
    <!-- Receiver/Indexing flow-control configuration -->
    <analytics-receiver-indexing-flow-control enabled="true">
        <!-- The maximum number of records that can be in the index staging area before
             receiving is throttled -->
        <recordReceivingHighThreshold>10000</recordReceivingHighThreshold>
        <!-- The number of records the staging area must drop below to reduce throttling -->
        <recordReceivingLowThreshold>5000</recordReceivingLowThreshold>
    </analytics-receiver-indexing-flow-control>
</analytics-dataservice-configuration>
```
- Add the 5.1.x MySQL Connector for Java JAR file, which supports MySQL 5.6, to the <DAS_HOME>/repository/components/lib directory.
Step 2.1 - Download the DAS extension distribution
Download the DAS extension from the PPaaS product page on the WSO2 website and uncompress the file. The extracted distribution is referred to as <PPAAS_DAS_DISTRIBUTION>.
Step 2.2 - Create the PPaaS Metering Dashboard with DAS
- Add the org.apache.stratos.das.extension-4.1.5.jar file, which is in the <PPAAS_DAS_DISTRIBUTION>/lib directory, to the <DAS_HOME>/repository/components/lib directory.
- Add the following Java class path to the spark-udf-config.xml file in the <DAS_HOME>/repository/conf/analytics/spark directory:

```xml
<class-name>org.apache.stratos.das.extension.TimeUDF</class-name>
```
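For context, the class name goes inside the custom UDF class list of spark-udf-config.xml. The surrounding elements in the sketch below reflect the stock DAS 3.x file, but verify them against your own installation:

```xml
<udf-configuration>
    <custom-udf-classes>
        <!-- Keep any default UDF classes that are already listed here. -->
        <class-name>org.apache.stratos.das.extension.TimeUDF</class-name>
    </custom-udf-classes>
</udf-configuration>
```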
- Add the Jaggery files, which are in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/jaggery-files directory, to the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.
- Manually create the MySQL databases and tables using the queries in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/metering-mysqlscript.sql file:

```sql
CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;

CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_STATUS(
    Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150),
    MemberId VARCHAR(150), MemberStatus VARCHAR(50));

CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_COUNT(
    Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150),
    CreatedInstanceCount int, InitializedInstanceCount int,
    ActiveInstanceCount int, TerminatedInstanceCount int);

CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_INFORMATION(
    MemberId VARCHAR(150), InstanceType VARCHAR(150), ImageId VARCHAR(150),
    HostName VARCHAR(150), PrivateIPAddresses VARCHAR(150),
    PublicIPAddresses VARCHAR(150), Hypervisor VARCHAR(150),
    CPU VARCHAR(10), RAM VARCHAR(10), OSName VARCHAR(150), OSVersion VARCHAR(150));
```
- Apply the WSO2 User Engagement Server (UES) patch to the DAS dashboard. This is required to populate the metering dashboard.
  - Copy the ues-gadgets.js and ues-pubsub.js files from the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/ues-patch directory to the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/js directory.
  - Copy the dashboard.jag file from the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/ues-patch directory to the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/theme/templates directory.
- Add the ppaas-metering-service.car file, which is in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard directory, to the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the metering dashboard. If the <DAS_HOME>/repository/deployment/server/carbonapps directory does not exist, create it before copying in the CAR file.

You can navigate to the metering dashboard from the Private PaaS application topology view at the application or cluster level as shown below.
The following is a sample metering dashboard:
Step 2.3 - Create the PPaaS Monitoring Dashboard with DAS
- Add the Jaggery files, which are in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files directory, to the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.
- Manually create the MySQL database and tables using the queries in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files/monitoring-mysqlscript.sql file:

```sql
CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;

CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_MEMORY_CONSUMPTION_STATS(
    Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150),
    NetworkPartitionId VARCHAR(150), Value DOUBLE);

CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_MEMORY_CONSUMPTION_STATS(
    Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150),
    ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);

CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_LOAD_AVERAGE_STATS(
    Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150),
    NetworkPartitionId VARCHAR(150), Value DOUBLE);

CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_LOAD_AVERAGE_STATS(
    Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150),
    ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);

CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_IN_FLIGHT_REQUESTS(
    Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150),
    NetworkPartitionId VARCHAR(150), COUNT DOUBLE);

CREATE TABLE ANALYTICS_EVENT_STORE.SCALING_DETAILS(
    Time VARCHAR(50), ScalingDecisionId VARCHAR(150), ClusterId VARCHAR(150),
    MinInstanceCount INT, MaxInstanceCount INT,
    RIFPredicted INT, RIFThreshold INT, RIFRequiredInstances INT,
    MCPredicted INT, MCThreshold INT, MCRequiredInstances INT,
    LAPredicted INT, LAThreshold INT, LARequiredInstances INT,
    RequiredInstanceCount INT, ActiveInstanceCount INT,
    AdditionalInstanceCount INT, ScalingReason VARCHAR(150));
```
- Copy the CEP EventFormatter artifacts, which are in the <PPAAS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/eventformatters directory, to the <CEP_HOME>/repository/deployment/server/eventformatters directory.
- Copy the CEP OutputEventAdapter artifacts, which are in the <PPAAS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/outputeventadaptors directory, to the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory, and update the receiverURL and authenticatorURL with the DAS_HOSTNAME, DAS_TCP_PORT, and DAS_SSL_PORT values as follows:

```xml
<outputEventAdaptor name="DefaultWSO2EventOutputAdaptor"
    statistics="disable" trace="disable" type="wso2event"
    xmlns="http://wso2.org/carbon/eventadaptormanager">
    <property name="username">admin</property>
    <property name="receiverURL">tcp://<DAS_HOSTNAME>:<DAS_TCP_PORT></property>
    <property name="password">admin</property>
    <property name="authenticatorURL">ssl://<DAS_HOSTNAME>:<DAS_SSL_PORT></property>
</outputEventAdaptor>
```
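For example, with a hypothetical DAS host das.example.com and the default DAS thrift ports (7611 for TCP and 7711 for SSL, assuming no port offset), the two URLs would read:

```xml
<!-- Hypothetical values; 7611/7711 are the default DAS thrift TCP/SSL ports. -->
<property name="receiverURL">tcp://das.example.com:7611</property>
<property name="authenticatorURL">ssl://das.example.com:7711</property>
```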
- Add the ppaas-monitoring-service.car file, which is in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard directory, to the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the monitoring dashboard. If the <DAS_HOME>/repository/deployment/server/carbonapps directory does not exist, create it before copying in the CAR file.
- Navigate to the monitoring dashboard from the PPaaS Console using the Monitoring menu. The following is a sample monitoring dashboard:
- Once you have carried out all the configurations, start the DAS server. After the DAS server has started successfully, start the PPaaS server.