- Created by Former user, last modified on Oct 27, 2015
Follow the instructions below to deploy WSO2 Private PaaS (PPaaS) on a preferred IaaS in a single JVM:
- Step 1 - Configure external databases for PPaaS
- Step 2 - Setup ActiveMQ
- Step 3 - Setup and start WSO2 CEP
- Step 4 - Setup and start WSO2 DAS (Optional)
- Step 5 - Setup PPaaS
- Step 6 - Start the PPaaS server
Step 1 - Configure external databases for PPaaS
For testing purposes you can run your PPaaS setup on the internal database (DB), which is the H2 DB; in that case no additional database setup is required. However, in a production environment it is recommended to use an external RDBMS (e.g., MySQL).
Follow the instructions given below to configure PPaaS with external databases:
WSO2 Private PaaS 4.1.0 requires the following external databases: a user database, a governance database, and a config database. Create these DBs, as explained in Working with Databases, and then configure Private PaaS as described below.
Copy the MySQL JDBC driver to the <PRIVATE_PAAS_HOME>/repository/components/lib directory.
Create three empty databases with the following names in your MySQL server, and grant permissions on them so that they can be accessed from a remote server. The corresponding DB scripts are available in the <PRIVATE_PAAS_HOME>/dbscripts directory.
ppaas_registry_db
ppaas_user_db
ppaas_config_db
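The database-creation step can be scripted. The sketch below only writes the SQL to a file; the `ppaas` user name and the `'%'` host wildcard are illustrative assumptions, not values from this guide — adjust them to your security policy, then apply the script with `mysql -u root -p < create-ppaas-dbs.sql`.

```shell
# Hedged sketch: generate the DB-creation script for the three PPaaS databases.
# The 'ppaas' user and the '%' remote-host wildcard are assumptions; replace
# [PASSWORD] with a real password before applying the script.
cat > create-ppaas-dbs.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS ppaas_registry_db;
CREATE DATABASE IF NOT EXISTS ppaas_user_db;
CREATE DATABASE IF NOT EXISTS ppaas_config_db;
GRANT ALL PRIVILEGES ON ppaas_registry_db.* TO 'ppaas'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT ALL PRIVILEGES ON ppaas_user_db.* TO 'ppaas'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT ALL PRIVILEGES ON ppaas_config_db.* TO 'ppaas'@'%' IDENTIFIED BY '[PASSWORD]';
FLUSH PRIVILEGES;
SQL
grep -c 'CREATE DATABASE' create-ppaas-dbs.sql   # prints 3
```

Granting on `'%'` opens the databases to any host; in production, restrict the grant to the PPaaS node's hostname or IP.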
Navigate to the <PRIVATE_PAAS_HOME>/repository/conf/datasources directory and add the datasources that correspond to your DBs in the master-datasources.xml file.
Change the IP addresses and ports based on your environment.
<datasource>
    <name>WSO2_GOVERNANCE_DB</name>
    <description>The datasource used for governance MySQL database</description>
    <jndiConfig>
        <name>jdbc/registry</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_registry_db?autoReconnect=true</url>
            <username>[USERNAME]</username>
            <password>[PASSWORD]</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
<datasource>
    <name>WSO2_CONFIG_DB</name>
    <description>The datasource used for CONFIG MySQL database</description>
    <jndiConfig>
        <name>jdbc/ppaas_config</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_config_db?autoReconnect=true</url>
            <username>[USERNAME]</username>
            <password>[PASSWORD]</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
<datasource>
    <name>WSO2_USER_DB</name>
    <description>The datasource used for userstore MySQL database</description>
    <jndiConfig>
        <name>jdbc/userstore</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_user_db?autoReconnect=true</url>
            <username>[USERNAME]</username>
            <password>[PASSWORD]</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
Navigate to the <PRIVATE_PAAS_HOME>/repository/conf directory and change the datasource in both the user-mgt.xml and identity.xml files as follows:
<Property name="dataSource">jdbc/userstore</Property>
Navigate to the <PRIVATE_PAAS_HOME>/repository/conf directory and add the following configurations in the registry.xml file. Change the IP addresses and ports based on your environment.
<dbConfig name="governance">
    <dataSource>jdbc/registry</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
    <id>governance</id>
    <dbConfig>governance</dbConfig>
    <readOnly>false</readOnly>
    <registryRoot>/</registryRoot>
    <enableCache>true</enableCache>
    <cacheId>root@jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_registry_db</cacheId>
</remoteInstance>
<dbConfig name="config">
    <dataSource>jdbc/ppaas_config</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
    <id>config</id>
    <dbConfig>config</dbConfig>
    <readOnly>false</readOnly>
    <registryRoot>/</registryRoot>
    <enableCache>true</enableCache>
    <cacheId>root@jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_config_db</cacheId>
</remoteInstance>
<mount path="/_system/governance" overwrite="true">
    <instanceId>governance</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
<mount path="/_system/config" overwrite="true">
    <instanceId>config</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>
Step 2 - Setup ActiveMQ
PPaaS uses a Message Broker (MB) to handle communication among all of its components in a loosely coupled manner. Currently, PPaaS uses Apache ActiveMQ; however, it supports any Advanced Message Queuing Protocol (AMQP) message broker.
Follow the instructions below to run ActiveMQ in a separate host:
Download and unzip Apache ActiveMQ.
Start ActiveMQ
./activemq start
Step 3 - Setup and start WSO2 CEP
By default, PPaaS is shipped with an embedded WSO2 Complex Event Processor (CEP). It is recommended to use the embedded CEP only for testing purposes and to configure CEP externally in a production environment. Furthermore, the compatible CEP versions differ based on whether the CEP is internal or external. WSO2 CEP 3.0.0 is embedded into PPaaS. However, PPaaS uses CEP 3.1.0 when working with CEP externally.
If you want to use CEP externally, prior to carrying out the steps below, download WSO2 CEP 3.1.0 and unzip the ZIP file.
Configuring CEP internally
Follow the instructions below to configure the embedded CEP:
Update the MB_HOSTNAME and MB_LISTEN_PORT values in the JMSOutputAdaptor.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/deployment/server/outputeventadaptors directory, as follows:
<property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>
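The substitution can be done with a quick sed one-liner. The snippet below demonstrates it on a stand-in copy of the file so it can be tried as-is; localhost:61616 (the ActiveMQ default JMS port) is an assumed broker endpoint — use your broker's actual host and port, and point the path at the real JMSOutputAdaptor.xml.

```shell
# Demonstrated on a stand-in file; in a real setup, target
# <PRIVATE_PAAS_HOME>/repository/deployment/server/outputeventadaptors/JMSOutputAdaptor.xml.
# localhost:61616 is an assumption (ActiveMQ's default transport port).
echo '<property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>' > JMSOutputAdaptor.sample.xml
sed -i 's/MB_HOSTNAME/localhost/; s/MB_LISTEN_PORT/61616/' JMSOutputAdaptor.sample.xml
cat JMSOutputAdaptor.sample.xml   # ...tcp://localhost:61616...
```

Note that `sed -i` edits in place with no backup; add a suffix (e.g. `sed -i.bak`) if you want the original preserved.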
Configuring CEP externally
Follow the instructions below to configure CEP with PPaaS as an external component:
Step 1 - Configure the Thrift client
Enable thrift stats publishing in the thrift-client-config.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory. Here you can set multiple CEP nodes for a High Availability (HA) setup.
<cep>
    <node id="node-01">
        <statsPublisherEnabled>true</statsPublisherEnabled>
        <username>admin</username>
        <password>admin</password>
        <ip>localhost</ip>
        <port>7611</port>
    </node>
    <!--<node id="node-02">
        <statsPublisherEnabled>true</statsPublisherEnabled>
        <username>admin</username>
        <password>admin</password>
        <ip>10.10.1.1</ip>
        <port>7714</port>
    </node>-->
</cep>
Restart the PPaaS server without the internally embedded WSO2 CEP.
sh wso2server.sh -Dprofile=cep-excluded
Step 2 - Configure CEP
If you are configuring the external CEP in High Availability (HA) mode, create a CEP HA deployment cluster in full-active-active mode. Note that it is recommended to set up CEP in HA mode.
Skip this step if you are setting up the external CEP in a single node.
For more information on CEP clustering see the CEP clustering guide.
When following the steps in the CEP clustering guide, note that you need to configure all the CEP nodes in the cluster as mentioned in step 3 and only then carry out the preceding steps.
- Download the CEP extension from the PPaaS product page on the WSO2 website and uncompress the file. The extracted distribution is referred to as <PPAAS_CEP_DISTRIBUTION>.
- Copy the stream-manager-config.xml file from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/streamdefinitions directory to the <CEP_HOME>/repository/conf directory.
- Replace the content in the jndi.properties file, which is in the <CEP_HOME>/repository/conf directory, with the following configuration. Update the message-broker-ip and message-broker-port values.
connectionfactoryName=TopicConnectionFactory
java.naming.provider.url=tcp://[MB_IP]:[MB_Port]
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
# register some topics in JNDI using the form
# topic.[jndiName]=[physicalName]
topic.lb-stats=lb-stats
topic.instance-stats=instance-stats
topic.summarized-health-stats=summarized-health-stats
topic.topology=topology
topic.ping=ping
Add the following content to the siddhi.extension file, which is in the <CEP_HOME>/repository/conf/siddhi directory:
org.apache.stratos.cep.extension.GradientFinderWindowProcessor
org.apache.stratos.cep.extension.SecondDerivativeFinderWindowProcessor
org.apache.stratos.cep.extension.FaultHandlingWindowProcessor
org.apache.stratos.cep.extension.ConcatWindowProcessor
org.apache.stratos.cep.extension.MemeberRequestHandlingCapabilityWindowProcessor
org.wso2.ppaas.cep.extension.SystemTimeWindowProcessor
Copy the following JARs, which are in the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/lib directory, to the <CEP_HOME>/repository/components/lib directory:
org.apache.stratos.cep.310.extension-4.1.4.jar
org.wso2.ppaas.cep310.extension-4.1.0.jar
Copy the following JARs, which are in the <PPAAS_CEP_DISTRIBUTION>/lib directory, to the <CEP_HOME>/repository/components/lib directory:
org.apache.stratos.messaging-4.1.x.jar
org.apache.stratos.common-4.1.x.jar
Download ActiveMQ 5.10.0 or the latest stable ActiveMQ TAR file from activemq.apache.org and extract it. The extracted folder path is referred to as <ACTIVEMQ_HOME>. Copy the following ActiveMQ client JARs from the <ACTIVEMQ_HOME>/lib directory to the <CEP_HOME>/repository/components/lib directory:
activemq-broker-5.10.0.jar
activemq-client-5.10.0.jar
geronimo-j2ee-management_1.1_spec-1.0.1.jar
geronimo-jms_1.1_spec-1.1.1.jar
hawtbuf-1.10.jar
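The five JAR copies can be done in one loop. The sketch below runs against scratch directories (and placeholder files) so it can be tried as-is; in a real setup, set the two variables — names of my own choosing, not from the product — to <ACTIVEMQ_HOME>/lib and <CEP_HOME>/repository/components/lib and drop the touch line.

```shell
# Scratch directories stand in for <ACTIVEMQ_HOME>/lib and
# <CEP_HOME>/repository/components/lib so the loop is runnable as-is.
ACTIVEMQ_LIB=./demo-activemq-lib
CEP_LIB=./demo-cep-lib
mkdir -p "$ACTIVEMQ_LIB" "$CEP_LIB"
JARS="activemq-broker-5.10.0.jar
activemq-client-5.10.0.jar
geronimo-j2ee-management_1.1_spec-1.0.1.jar
geronimo-jms_1.1_spec-1.1.1.jar
hawtbuf-1.10.jar"
for jar in $JARS; do touch "$ACTIVEMQ_LIB/$jar"; done   # placeholder files for this demo only
for jar in $JARS; do cp "$ACTIVEMQ_LIB/$jar" "$CEP_LIB/"; done
ls "$CEP_LIB" | wc -l   # 5
```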
- Download the commons-lang3-3.4.jar and commons-logging-1.2.jar files from commons.apache.org and copy them to the <CEP_HOME>/repository/components/lib directory.
- Copy the following files from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/eventbuilders directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/eventbuilders directory:
HealthStatisticsEventBuilder.xml
LoadBalancerStatisticsEventBuilder.xml
- Copy the following file from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/inputeventadaptors directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/inputeventadaptors directory:
DefaultWSO2EventInputAdaptor.xml
- Copy the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/outputeventadaptors/JMSOutputAdaptor.xml file, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory.
- Update the MB_HOSTNAME and MB_LISTEN_PORT values in the JMSOutputAdaptor.xml file, which you copied in the above step, as follows:
<property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>
- Copy the following files from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/executionplans directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/executionplans directory:
AverageHeathRequest.xml
AverageInFlightRequestsFinder.xml
GradientOfHealthRequest.xml
GradientOfRequestsInFlightFinder.xml
SecondDerivativeOfHealthRequest.xml
SecondDerivativeOfRequestsInFlightFinder.xml
- If you are setting up the external CEP in a single node, change the siddhi.enable.distributed.processing property, in all the above-mentioned CEP 3.1.0 execution plans, from RedundantMode to false.
- Copy the following files from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/eventformatters directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/eventformatters directory:
AverageInFlightRequestsEventFormatter.xml
AverageLoadAverageEventFormatter.xml
AverageMemoryConsumptionEventFormatter.xml
FaultMessageEventFormatter.xml
GradientInFlightRequestsEventFormatter.xml
GradientLoadAverageEventFormatter.xml
GradientMemoryConsumptionEventFormatter.xml
MemberAverageLoadAverageEventFormatter.xml
MemberAverageMemoryConsumptionEventFormatter.xml
MemberGradientLoadAverageEventFormatter.xml
MemberGradientMemoryConsumptionEventFormatter.xml
MemberSecondDerivativeLoadAverageEventFormatter.xml
MemberSecondDerivativeMemoryConsumptionEventFormatter.xml
SecondDerivativeInFlightRequestsEventFormatter.xml
SecondDerivativeLoadAverageEventFormatter.xml
SecondDerivativeMemoryConsumptionEventFormatter.xml
Add the CEP URLs as a payload parameter to the network partition.
If you are deploying Private PaaS on Kubernetes, then add the CEP URLs to the Kubernetes cluster.
Example:
{
    "name": "payload_parameter.CEP_URLS",
    "value": "192.168.0.1:7712,192.168.0.2:7711"
}
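If the CEP node list is held in a shell variable during scripted deployment, the comma-separated CEP_URLS value can be produced with tr. The sketch below uses the example node addresses from the text above; the variable names are my own.

```shell
# Build the comma-separated CEP_URLS payload value from a space-separated
# node list (example addresses taken from the text above).
CEP_NODES="192.168.0.1:7712 192.168.0.2:7711"
CEP_URLS=$(printf '%s' "$CEP_NODES" | tr ' ' ',')
echo "$CEP_URLS" | tee cep-urls.txt   # 192.168.0.1:7712,192.168.0.2:7711
```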
After you have successfully configured the external CEP 3.1.0, start the CEP server (this is not applicable to the embedded CEP):
./wso2server.sh
Step 4 - Setup and start WSO2 DAS (Optional)
Optionally, you can configure PPaaS to work with WSO2 Data Analytics Server (DAS) to handle the monitoring and metering aspects of PPaaS. Skip this step if you do not want to enable monitoring and metering using DAS; however, it is recommended to enable them.
If you want to use DAS with PPaaS, prior to carrying out the steps below, download WSO2 DAS 3.0.0 and unzip the ZIP file.
Use MySQL 5.6 and the 5.1.x MySQL Connector for Java when carrying out the following configurations.
Follow the instructions below to manually setup DAS with PPaaS:
Step 1 - Configure PPaaS
Enable thrift stats publishing, with the DAS_HOSTNAME and DAS_TCP_PORT values, in the thrift-client-config.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory; set statsPublisherEnabled to true for the node. If needed, you can set multiple DAS nodes for a High Availability (HA) setup.
<!-- Apache thrift client configuration for publishing statistics to WSO2 CEP and WSO2 DAS -->
<thriftClientConfiguration>
    . . .
    <das>
        <node id="node-01">
            <statsPublisherEnabled>true</statsPublisherEnabled>
            <username>admin</username>
            <password>admin</password>
            <ip>[DAS_HOSTNAME]</ip>
            <port>[DAS_TCP_PORT]</port>
        </node>
        <!--<node id="node-02">
            <statsPublisherEnabled>true</statsPublisherEnabled>
            <username>admin</username>
            <password>admin</password>
            <ip>localhost</ip>
            <port>7613</port>
        </node>-->
    </das>
    </config>
</thriftClientConfiguration>
Configure the Private PaaS metering dashboard URL with the DAS_HOSTNAME and DAS_PORTAL_PORT values in the <PRIVATE_PAAS_HOME>/repository/conf/cartridge-config.properties file as follows:
das.metering.dashboard.url=https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/ppaas-metering-dashboard
Configure the monitoring dashboard URL by adding the following configuration to the <PRIVATE_PAAS_HOME>/repository/deployment/server/jaggeryapps/console/controllers/menu/menu.json file, after the applications menu entry. Set the DAS_HOSTNAME and DAS_PORTAL_PORT values in the link property.
{
    "link": "https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/ppaas-monitoring-dashboard",
    "linkexternal": true,
    "context": "/",
    "title": "Monitoring",
    "icon": "fa-laptop",
    "block-color": "#f1c40f",
    "permissionPaths": [
        "/permission",
        "/permission/admin"
    ],
    "description": "Monitor health statistics of clusters and members."
},
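A stray comma is the most common mistake when editing menu.json by hand; running the file through Python's json module catches it immediately. The sketch below validates the new entry in isolation (python3 on the host is an assumption, and the trailing comma that joins the entry to the next menu item is omitted so it parses standalone):

```shell
# Write the new menu entry to a scratch file and check that it parses as JSON.
# <DAS_HOSTNAME>:<DAS_PORTAL_PORT> stay as placeholders inside the URL string.
cat > menu-entry-demo.json <<'EOF'
{
    "link": "https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/ppaas-monitoring-dashboard",
    "linkexternal": true,
    "context": "/",
    "title": "Monitoring",
    "icon": "fa-laptop",
    "block-color": "#f1c40f",
    "permissionPaths": ["/permission", "/permission/admin"],
    "description": "Monitor health statistics of clusters and members."
}
EOF
python3 -m json.tool menu-entry-demo.json > /dev/null && echo "menu entry OK"
```

The same check can be run on the full menu.json after editing it.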
Step 2 - Configure DAS
Create the ANALYTICS_FS_DB, ANALYTICS_EVENT_STORE, and ANALYTICS_PROCESSED_DATA_STORE databases in MySQL using the following MySQL script:
CREATE DATABASE ANALYTICS_FS_DB;
CREATE DATABASE ANALYTICS_EVENT_STORE;
CREATE DATABASE ANALYTICS_PROCESSED_DATA_STORE;
Configure the DAS analytics-datasources.xml file, which is in the <DAS_HOME>/repository/conf/datasources directory, as follows to create the WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB, and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB datasources:
<datasources-configuration>
    <providers>
        <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
    </providers>
    <datasources>
        <datasource>
            <name>WSO2_ANALYTICS_FS_DB</name>
            <description>The datasource used for analytics file system</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_FS_DB</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
            <description>The datasource used for analytics record store</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_EVENT_STORE</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
            <description>The datasource used for analytics record store</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_PROCESSED_DATA_STORE</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
    </datasources>
</datasources-configuration>
Set the analytics datasources created in the above step (WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB, and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB) in the DAS analytics-config.xml file, which is in the <DAS_HOME>/repository/conf/analytics directory.
<analytics-dataservice-configuration>
    <!-- The name of the primary record store -->
    <primaryRecordStore>EVENT_STORE</primaryRecordStore>
    <!-- The name of the index staging record store -->
    <indexStagingRecordStore>INDEX_STAGING_STORE</indexStagingRecordStore>
    <!-- Analytics File System - properties related to index storage implementation -->
    <analytics-file-system>
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsFileSystem</implementation>
        <properties>
            <!-- the data source name mentioned in data sources configuration -->
            <property name="datasource">WSO2_ANALYTICS_FS_DB</property>
            <property name="category">large_dataset_optimized</property>
        </properties>
    </analytics-file-system>
    <!-- Analytics Record Store - properties related to record storage implementation -->
    <analytics-record-store name="EVENT_STORE">
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
        <properties>
            <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
            <property name="category">large_dataset_optimized</property>
        </properties>
    </analytics-record-store>
    <analytics-record-store name="INDEX_STAGING_STORE">
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
        <properties>
            <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
            <property name="category">limited_dataset_optimized</property>
        </properties>
    </analytics-record-store>
    <analytics-record-store name="PROCESSED_DATA_STORE">
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
        <properties>
            <property name="datasource">WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</property>
            <property name="category">large_dataset_optimized</property>
        </properties>
    </analytics-record-store>
    <!-- The data indexing analyzer implementation -->
    <analytics-lucene-analyzer>
        <implementation>org.apache.lucene.analysis.standard.StandardAnalyzer</implementation>
    </analytics-lucene-analyzer>
    <!-- The maximum number of threads used for indexing per node, -1 signals to auto detect the optimum value,
         where it would be equal to (number of CPU cores in the system - 1) -->
    <indexingThreadCount>-1</indexingThreadCount>
    <!-- The number of index shards, should be equal or higher to the number of indexing nodes that is going to be working,
         ideal count being 'number of indexing nodes * [CPU cores used for indexing per node]' -->
    <shardCount>6</shardCount>
    <!-- Data purging related configuration -->
    <analytics-data-purging>
        <!-- Below entry will indicate purging is enable or not. If user wants to enable data purging for cluster
             then this property need to be enable in all nodes -->
        <purging-enable>false</purging-enable>
        <cron-expression>0 0 0 * * ?</cron-expression>
        <!-- Tables that need include to purging. Use regex expression to specify the table name that need include to purging. -->
        <purge-include-tables>
            <table>.*</table>
            <!--<table>.*jmx.*</table>-->
        </purge-include-tables>
        <!-- All records that insert before the specified retention time will be eligible to purge -->
        <data-retention-days>365</data-retention-days>
    </analytics-data-purging>
    <!-- Receiver/Indexing flow-control configuration -->
    <analytics-receiver-indexing-flow-control enabled="true">
        <!-- maximum number of records that can be in index staging area before receiving is throttled -->
        <recordReceivingHighThreshold>10000</recordReceivingHighThreshold>
        <!-- the limit on number of records to be lower than, to reduce throttling -->
        <recordReceivingLowThreshold>5000</recordReceivingLowThreshold>
    </analytics-receiver-indexing-flow-control>
</analytics-dataservice-configuration>
Add the MySQL Java connector 5.1.x JAR file, which is supported by MySQL 5.6, to the <DAS_HOME>/repository/components/lib directory.
Step 2.1 - Download the DAS extension distribution
Download the DAS extension from the PPaaS product page on the WSO2 website and uncompress the file. The extracted distribution is referred to as <PPAAS_DAS_DISTRIBUTION>.
Step 2.2 - Create PPaaS Metering Dashboard with DAS
- Add the org.wso2.ppaas.das.extension-<PPAAS_VERSION>.jar file, which is in the <PPAAS_DAS_DISTRIBUTION>/lib directory, into the <DAS_HOME>/repository/components/lib directory.
- Add the following Java class path into the spark-udf-config.xml file in the <DAS_HOME>/repository/conf/analytics/spark directory:
<class-name>org.wso2.ppaas.das.extension.TimeUDF</class-name>
Add the Jaggery files, which are in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/jaggery-files directory, into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.
Manually create the MySQL databases and tables using the queries in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/metering-mysqlscript.sql file:
CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;
CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_STATUS(Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150), MemberId VARCHAR(150), MemberStatus VARCHAR(50));
CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_COUNT(Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150), CreatedInstanceCount int, InitializedInstanceCount int, ActiveInstanceCount int, TerminatedInstanceCount int);
CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_INFORMATION(MemberId VARCHAR(150), InstanceType VARCHAR(150), ImageId VARCHAR(150), HostName VARCHAR(150), PrivateIPAddresses VARCHAR(150), PublicIPAddresses VARCHAR(150), Hypervisor VARCHAR(150), CPU VARCHAR(10), RAM VARCHAR(10), OSName VARCHAR(150), OSVersion VARCHAR(150));
Apply a WSO2 User Engagement Server (UES) patch to the DAS dashboard. You need to do this to populate the metering dashboard.
Copy the ues-dashboard.js and ues-pubsub.js files from the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/ues-patch directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/js directory.
Copy the dashboard.jag file from the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/ues-patch directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/theme/templates directory.
Add the ppaas-metering-service.car file, which is in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard directory, into the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the metering dashboard. If the <DAS_HOME>/repository/deployment/server/carbonapps folder does not exist, create it before moving the CAR file.
You can navigate to the metering dashboard from the Private PaaS application topology view at the application or cluster level.
Step 2.3 - Create the PPaaS Monitoring Dashboard with DAS
- Add the Jaggery files, which are in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files directory, into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.
- Manually create the MySQL database and tables using the queries in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files/monitoring-mysqlscript.sql file:
CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;
CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_MEMORY_CONSUMPTION_STATS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_MEMORY_CONSUMPTION_STATS(Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_LOAD_AVERAGE_STATS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_LOAD_AVERAGE_STATS(Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_IN_FLIGHT_REQUESTS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), COUNT DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.SCALING_DETAILS(Time VARCHAR(50), ScalingDecisionId VARCHAR(150), ClusterId VARCHAR(150), MinInstanceCount INT, MaxInstanceCount INT, RIFPredicted INT, RIFThreshold INT, RIFRequiredInstances INT, MCPredicted INT, MCThreshold INT, MCRequiredInstances INT, LAPredicted INT, LAThreshold INT, LARequiredInstances INT, RequiredInstanceCount INT, ActiveInstanceCount INT, AdditionalInstanceCount INT, ScalingReason VARCHAR(150));
- Copy the CEP EventFormatter artifacts, which are in the <PPAAS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/eventformatters directory, into the <CEP_HOME>/repository/deployment/server/eventformatters directory.
- Copy the CEP OutputEventAdapter artifacts, which are in the <PPAAS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/outputeventadaptors directory, into the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory, and update the receiverURL and authenticatorURL with the DAS_HOSTNAME, DAS_TCP_PORT, and DAS_SSL_PORT values as follows:
<outputEventAdaptor name="DefaultWSO2EventOutputAdaptor"
    statistics="disable" trace="disable" type="wso2event"
    xmlns="http://wso2.org/carbon/eventadaptormanager">
    <property name="username">admin</property>
    <property name="receiverURL">tcp://<DAS_HOSTNAME>:<DAS_TCP_PORT></property>
    <property name="password">admin</property>
    <property name="authenticatorURL">ssl://<DAS_HOSTNAME>:<DAS_SSL_PORT></property>
</outputEventAdaptor>
Add the ppaas-monitoring-service.car file, which is in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard directory, into the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the monitoring dashboard. If the <DAS_HOME>/repository/deployment/server/carbonapps folder does not exist, create it before moving the CAR file.
- Navigate to the monitoring dashboard from the PPaaS Console using the Monitoring menu.
- Once you have carried out all the configurations, start the DAS server; after it has started successfully, start the PPaaS server.
After you have successfully configured DAS in a separate host, start the DAS server:
./wso2server.sh
Step 5 - Setup PPaaS
When using a VM setup or Kubernetes, you need to configure PPaaS accurately before attempting to deploy a WSO2 product on the PaaS.
Follow the instructions below to configure PPaaS:
Some steps are marked as optional as they are not applicable to all IaaS.
Therefore, only execute the instructions that correspond to the IaaS being used!
- Step 1 - Install Prerequisites
- Step 2 - Setup a Kubernetes Cluster (Optional)
- Step 3 - Setup Puppet Master (Optional)
- Step 4 - Create a cartridge base image (Optional)
- Step 5 - Disable the mock IaaS
- Step 6 - Carry out additional IaaS configurations (Optional)
- Step 7 - Configure the Cloud Controller (Optional)
- Step 8 - Define the Message Broker IP (Optional)
Step 1 - Install Prerequisites
Install the following prerequisites based on your environment and IaaS:
Oracle Java SE Development Kit (JDK)
Apache ActiveMQ
For more information on the prerequisites, see Prerequisites.
Download the Private PaaS binary distribution from the PPaaS product page and unzip it.
Step 2 - Setup a Kubernetes Cluster (Optional)
This step is only mandatory if you are using Kubernetes.
You can set up a Kubernetes cluster using one of the following approaches:
Step 3 - Setup Puppet Master (Optional)
This step is only mandatory if you are deploying PPaaS on a Virtual Machine (e.g., EC2, OpenStack, GCE).
Puppet is an open source configuration management utility, and Private PaaS uses it as the orchestration layer. Private PaaS does not keep any templates or configurations in Puppet; it contains only the product distributions. Puppet acts as a file server, while the Configurator performs the configuration at runtime.
Follow the instructions below to set up the Puppet Master.
Step 1 - Configure Puppet Master
Follow the steps given below to install Puppet Master on Ubuntu:
Download the Puppet Master distribution package for your Ubuntu release.
wget https://apt.puppetlabs.com/puppetlabs-release-<CODE_NAME>.deb
# For example, for Ubuntu 14.04 (Trusty):
wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
Install the downloaded distribution package.
sudo dpkg -i puppetlabs-release-<CODE_NAME>.deb
Install Puppet Master.
sudo apt-get update
sudo apt-get install puppetmaster
Install Passenger with Apache.
For more information, see Install Apache and Passenger.
- Change the Ubuntu hostname by following the steps given below:
Update the /etc/hosts file:
echo "127.0.0.1 puppet.test.org" | sudo tee -a /etc/hosts
Note that sudo echo ... >> /etc/hosts does not work, because the output redirection is performed by the unprivileged shell; tee -a performs the append with root privileges.
Change the value of the hostname.
sudo hostname puppet.test.org
- Add the following entry to the /etc/puppet/autosign.conf file:
*.test.org
- Add the server=puppet.test.org line to the puppet.conf file, which is in the /etc/puppet directory:
[main]
server=puppet.test.org
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates
dns_alt_names=puppet

[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
Restart the Puppet Master.
/etc/init.d/puppetmaster restart
Download the VM Tools by navigating to the following path via the PPaaS product page.
Cartridges > common >
wso2ppaas-vm-tools-4.1.1
Copy and replace the content in the Puppet Master's /etc/puppet folder with the content in the <VM_TOOLS>/Puppet directory.
- Configure the mandatory modules.
Mandatory modules
It is mandatory to configure the following modules when configuring Puppet Master for PPaaS:
Python Cartridge Agent Module
Download the Cartridge Agent via the PPaaS product page.
Copy the downloaded apache-stratos-python-cartridge-agent-4.1.4.zip file to the /etc/puppet/modules/python_agent/files directory.
Change the file permission of the apache-stratos-python-cartridge-agent-4.1.4.zip file to 0755.
chmod 755 apache-stratos-python-cartridge-agent-4.1.4.zip
Update the base.pp file, which is in the /etc/puppet/manifests/nodes directory, with the following Python agent variables.
$pca_name = 'apache-stratos-python-cartridge-agent'
$pca_version = '4.1.4'
$mb_ip = 'MB-IP'
$mb_port = 'MB-PORT'
$mb_type = 'activemq' # in the wso2mb case, the value should be 'wso2mb'
$cep_ip = "CEP-IP"
$cep_port = "7711"
$cep_username = "admin"
$cep_password = "admin"
$bam_ip = '192.168.30.96'
$bam_port = '7611'
$bam_secure_port = '7711'
$bam_username = 'admin'
$bam_password = 'admin'
$metadata_service_url = 'METADATA-SERVICE-URL'
$agent_log_level = 'INFO'
$enable_log_publisher = 'false'
Optionally, you can configure the MB_IP, MB_PORT, PUPPET_IP and PUPPET_HOSTNAME values in the network partition as shown below. Note that the values defined in the network partition receive higher priority than the values declared in the base.pp file (i.e., the values declared in the base.pp file are overwritten by the values declared in the network partition).
{
  "id": "network-partition-openstack",
  "provider": "openstack",
  "partitions": [
    {
      "id": "partition-1",
      "property": [
        { "name": "region", "value": "<REGION>" }
      ]
    },
    {
      "id": "partition-2",
      "property": [
        { "name": "region", "value": "<REGION>" }
      ]
    }
  ],
  "properties": [
    { "name": "payload_parameter.PUPPET_IP", "value": "<PUPPET_MASTER_IP>" },
    { "name": "payload_parameter.MB_IP", "value": "<MESSAGE_BROKER_IP>" },
    { "name": "payload_parameter.MB_PORT", "value": "<MESSAGE_BROKER_PORT>" },
    { "name": "payload_parameter.PUPPET_HOSTNAME", "value": "<PUPPET_MASTER_HOSTNAME>" }
  ]
}
Java Module
Copy the downloaded jdk-7u72-linux-x64.tar.gz file to the files folder, which is in the /etc/puppet/modules/java directory. You can download jdk-7u72-linux-x64.tar.gz from the Oracle Java SE download archive.
Change the file permission of the jdk-7u72-linux-x64.tar.gz file to 0755.
chmod 755 jdk-7u72-linux-x64.tar.gz
Update the base.pp file, which is in the /etc/puppet/manifests/nodes directory, with the following Java variables.
$java_distribution = 'jdk-7u72-linux-x64.tar.gz'
$java_folder = 'jdk1.7.0_72'
Configurator Module
Download the Configurator by navigating to the following path via the PPaaS product page.
Cartridges > common >
wso2ppaas-configurator-4.1.1
- Copy the Configurator (ppaas-configurator-4.1.1.zip) to the /etc/puppet/modules/configurator/files directory.
Change the file permission of the ppaas-configurator-4.1.1.zip file to 0755.
chmod 755 ppaas-configurator-4.1.1.zip
Update the base.pp file, which is in the /etc/puppet/manifests/nodes directory, with the following Configurator variables.
$configurator_name = 'ppaas-configurator'
$configurator_version = '4.1.1'
Step 2 - Update the cartridge-config.properties file
Update the values of the following parameters in the cartridge-config.properties file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory.
The values are as follows:
[PUPPET_IP] - The IP address of the running Puppet Master instance.
[PUPPET_HOST_NAME] - The hostname of the running Puppet Master instance.
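For reference, the relevant entries in cartridge-config.properties would then look similar to the following sketch. Note that the puppet.ip and puppet.hostname key names are assumptions based on the stock Apache Stratos file; verify them against the file shipped with your distribution.

```
puppet.ip=[PUPPET_IP]
puppet.hostname=[PUPPET_HOST_NAME]
```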
Step 4 - Create a cartridge base image (Optional)
This step is only mandatory if you are deploying PPaaS on a Virtual Machine (e.g., EC2, OpenStack, GCE).
Step 5 - Disable the mock IaaS
Mock IaaS is enabled by default. Therefore, if you are running PPaaS on another IaaS, you need to disable the Mock IaaS.
Follow the instructions below to disable the Mock IaaS:
Navigate to the <PRIVATE_PAAS_HOME>/repository/conf/mock-iaas.xml file and disable the Mock IaaS.
<mock-iaas enabled="false">
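If you prefer not to edit the file by hand, the flag can be flipped with sed. This is a sketch that assumes the stock single-line opening tag <mock-iaas enabled="true"> and that PRIVATE_PAAS_HOME is exported; adjust if your copy of mock-iaas.xml formats the attribute differently.

```shell
# Flip the mock IaaS flag in place in mock-iaas.xml.
sed -i 's/<mock-iaas enabled="true">/<mock-iaas enabled="false">/' \
  "$PRIVATE_PAAS_HOME/repository/conf/mock-iaas.xml"
```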
Navigate to the <PRIVATE_PAAS_HOME>/repository/deployment/server/webapps directory and delete the mock-iaas.war file.
When Private PaaS is run, the mock-iaas.war file is extracted and the mock-iaas folder is created. Therefore, if you have run PPaaS previously, delete the mock-iaas folder as well.
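A minimal sketch of the clean-up described above, assuming PRIVATE_PAAS_HOME is exported and points at your installation:

```shell
# Remove the mock IaaS web app and, if present, the folder extracted
# from it on a previous run. rm -f/-rf tolerate absent paths.
WEBAPPS="$PRIVATE_PAAS_HOME/repository/deployment/server/webapps"
rm -f  "$WEBAPPS/mock-iaas.war"
rm -rf "$WEBAPPS/mock-iaas"
```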
Step 6 - Carry out additional IaaS configurations (Optional)
This step is only applicable if you are using GCE.
When working on GCE, carry out the following instructions:
Step 7 - Configure the Cloud Controller (Optional)
This step is only mandatory if you are deploying PPaaS on a Virtual Machine (e.g., EC2, OpenStack, GCE).
Follow the instructions given below to configure the Cloud Controller (CC):
Configure the IaaS provider details based on the IaaS. You need to configure the details in the <PRIVATE_PAAS_HOME>/repository/conf/cloud-controller.xml file and comment out the IaaS provider details that are not being used.
Update the values of MB_IP and MB_PORT in the jndi.properties file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory. The default value of the message broker port is 61616.
The values are as follows:
MB_IP: The IP address used by ActiveMQ.
MB_PORT: The port used by ActiveMQ.
connectionfactoryName=TopicConnectionFactory
java.naming.provider.url=tcp://[MB_IP]:[MB_PORT]
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
Step 8 - Define the Message Broker IP (Optional)
This step is only mandatory if you have set up the Message Broker (MB), in this case ActiveMQ, on a separate host. In that case you need to define the Message Broker IP, so that the MB can communicate with PPaaS.
Update the value of the MB_IP in the JMSOutputAdaptor file, which is in the <PRIVATE_PAAS_HOME>/repository/deployment/server/outputeventadaptors directory.
[MB_IP]: The IP address used by ActiveMQ.
<property name="java.naming.provider.url">tcp://[MB_IP]:61616</property>
Step 6 - Start the PPaaS server
The way in which you need to start the PPaaS server varies based on your settings as follows:
We recommend starting the PPaaS server in background mode, so that the server keeps running even after the terminal session is closed.
If you want to use the internal database (H2) and the embedded CEP, start the PPaaS server as follows:
sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start
If you want to use an external database, start the Private PaaS server with the -Dsetup option as follows.
This creates the database schemas using the scripts in the <PRIVATE_PAAS_HOME>/dbscripts directory.
sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start -Dsetup
If you want to use an external CEP, disable the embedded CEP when starting the PPaaS server as follows:
sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start -Dprofile=cep-excluded
If you want to use an external database together with an external CEP, start the Private PaaS server as follows.
This creates the database schemas using the scripts in the <PRIVATE_PAAS_HOME>/dbscripts directory.
sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start -Dsetup -Dprofile=cep-excluded
You can tail the log to verify that the Private PaaS server starts without any issues.
tail -f <PRIVATE_PAAS_HOME>/repository/logs/wso2carbon.log
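As a one-shot alternative to tailing interactively, you can grep the log for the startup banner. This sketch assumes the standard WSO2 Carbon "Mgt Console URL" startup line and that PRIVATE_PAAS_HOME is exported.

```shell
# Prints the console URL line and exits 0 once the server has finished
# starting; exits non-zero if the banner has not been logged yet.
grep -m 1 "Mgt Console URL" "$PRIVATE_PAAS_HOME/repository/logs/wso2carbon.log"
```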
What's next?
After starting PPaaS on a preferred IaaS, configure the WSO2 cartridge, so that you can seamlessly deploy the WSO2 product on PPaaS.