
Deploying PPaaS on a Preferred IaaS

Follow the instructions below to deploy WSO2 Private PaaS (PPaaS) on a preferred IaaS, such as Kubernetes, Amazon Elastic Compute Cloud (EC2), OpenStack, or Google Compute Engine (GCE), in a single JVM:

Step 1 - Configure external databases for PPaaS

For testing purposes you can run your PPaaS setup on the internal database (DB), which is the H2 DB; in that case you do not need to set up an external DB. However, in a production environment it is recommended to use an external RDBMS (e.g., MySQL).


Follow the instructions given below to configure PPaaS with external databases:

WSO2 Private PaaS 4.1.0 requires the following external databases: a user database, a governance database, and a config database. Before using them, create these DBs as explained in Working with Databases, and configure Private PaaS as described below.

  1. Copy the MySQL JDBC driver to the <PRIVATE_PAAS_HOME>/repository/components/lib directory.

  2. Create three empty databases in your MySQL server with the following names, and grant permission to the databases so that they can be accessed from a remote server. (The corresponding DB scripts are available in the <PRIVATE_PAAS_HOME>/dbscripts directory.)

    ppaas_registry_db
    ppaas_user_db
    ppaas_config_db
     

  3. Navigate to the <PRIVATE_PAAS_HOME>/repository/conf/datasources directory and add the datasources that correspond to your DB in the master-datasources.xml file.
    Change the IP addresses and ports based on your environment.

    <datasource>
        <name>WSO2_GOVERNANCE_DB</name>
        <description>The datasource used for governance MySQL database</description>
        <jndiConfig>
            <name>jdbc/registry</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_registry_db?autoReconnect=true</url>
                <username>[USERNAME]</username>
                <password>[PASSWORD]</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
     </datasource>
     <datasource>
        <name>WSO2_CONFIG_DB</name>
        <description>The datasource used for CONFIG MySQL database</description>
        <jndiConfig>
            <name>jdbc/ppaas_config</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_config_db?autoReconnect=true</url>
                <username>[USERNAME]</username>
                <password>[PASSWORD]</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
     </datasource>
     <datasource>
        <name>WSO2_USER_DB</name>
        <description>The datasource used for userstore MySQL database</description>
        <jndiConfig>
            <name>jdbc/userstore</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_user_db?autoReconnect=true</url>
                <username>[USERNAME]</username>
                <password>[PASSWORD]</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>
  4. Navigate to the <PRIVATE_PAAS_HOME>/repository/conf directory and change the datasources in both the user-mgt.xml and identity.xml files as follows: 

    <Property name="dataSource">jdbc/userstore</Property>
  5. Navigate to the <PRIVATE_PAAS_HOME>/repository/conf directory and add the following configurations in the registry.xml file. Change your IP addresses and ports based on your environment.

    <dbConfig name="governance">
        <dataSource>jdbc/registry</dataSource>
    </dbConfig>
    <remoteInstance url="https://localhost:9443/registry">
        <id>governance</id>
        <dbConfig>governance</dbConfig>
        <readOnly>false</readOnly>
        <registryRoot>/</registryRoot>
        <enableCache>true</enableCache>
        <cacheId>root@jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_registry_db</cacheId>
    </remoteInstance>
    <dbConfig name="config">
        <dataSource>jdbc/ppaas_config</dataSource>
    </dbConfig>
    <remoteInstance url="https://localhost:9443/registry">
        <id>config</id>
        <dbConfig>config</dbConfig>
        <readOnly>false</readOnly>
        <registryRoot>/</registryRoot>
        <enableCache>true</enableCache>
        <cacheId>root@jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_config_db</cacheId>
    </remoteInstance>
    <mount path="/_system/governance" overwrite="true">
        <instanceId>governance</instanceId>
        <targetPath>/_system/governance</targetPath>
    </mount>
    <mount path="/_system/config" overwrite="true">
        <instanceId>config</instanceId>
        <targetPath>/_system/config</targetPath>
    </mount>
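For step 2 above, the three databases and their remote-access grants can be created as in the following sketch. The ppaas user name, the % host wildcard, and [PASSWORD] are placeholders to adapt to your environment:

```sql
CREATE DATABASE ppaas_registry_db;
CREATE DATABASE ppaas_user_db;
CREATE DATABASE ppaas_config_db;

-- Allow access from remote hosts; narrow '%' to specific hosts in production.
GRANT ALL PRIVILEGES ON ppaas_registry_db.* TO 'ppaas'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT ALL PRIVILEGES ON ppaas_user_db.*     TO 'ppaas'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT ALL PRIVILEGES ON ppaas_config_db.*   TO 'ppaas'@'%' IDENTIFIED BY '[PASSWORD]';
FLUSH PRIVILEGES;
```

The <username> and <password> entries in the master-datasources.xml datasources of step 3 must then match this user.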

 

Step 2 - Setup ActiveMQ

PPaaS uses the Message Broker (MB) to handle the communication among all the components in a loosely coupled manner. Currently, PPaaS uses Apache ActiveMQ; however, PPaaS supports any Advanced Message Queuing Protocol (AMQP) Message Broker.


Follow the instructions below to run ActiveMQ in a separate host:

  1. Download and unzip Apache ActiveMQ.

  2. Start ActiveMQ

    ./activemq start


 

 

Step 3 - Setup and start WSO2 CEP

By default, PPaaS is shipped with an embedded WSO2 Complex Event Processor (CEP). It is recommended to use the embedded CEP only for testing purposes and to configure CEP externally in a production environment. Furthermore, the compatible CEP versions differ based on whether the CEP is internal or external. WSO2 CEP 3.0.0 is embedded into PPaaS. However, PPaaS uses CEP 3.1.0 when working with CEP externally.

If you want to use CEP externally, prior to carrying out the steps below, download WSO2 CEP 3.1.0 and unzip the ZIP file.


Configuring CEP internally

Follow the instructions below to configure the embedded CEP:

Update the MB_HOSTNAME and MB_LISTEN_PORT with relevant values in the JMSOutputAdaptor.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/deployment/server/outputeventadaptors directory, as follows:

<property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>
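As a minimal, self-contained sketch of that substitution (the broker host and port below are example values, not taken from this guide), the replacement can be applied with sed. Against the real file you would run sed -i on the JMSOutputAdaptor.xml path above instead of a piped string:

```shell
# Example broker endpoint (assumed values; substitute your own).
MB_HOSTNAME=activemq.example.com
MB_LISTEN_PORT=61616

# Demonstrate the substitution on the single property line; to edit the
# real file, run the same sed expression with -i on JMSOutputAdaptor.xml.
printf '%s\n' '<property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>' |
  sed "s|MB_HOSTNAME:MB_LISTEN_PORT|${MB_HOSTNAME}:${MB_LISTEN_PORT}|"
```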

 

Configuring CEP externally

Follow the instructions below to configure CEP with PPaaS as an external component:

Step 1 - Configure the Thrift client
  1. Enable thrift stats publishing in the thrift-client-config.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory. Here you can set multiple CEP nodes for a High Availability (HA) setup.

    <cep>
       <node id="node-01">
          <statsPublisherEnabled>true</statsPublisherEnabled>
          <username>admin</username>
          <password>admin</password>
          <ip>localhost</ip>
          <port>7611</port>
       </node>
       <!--<node id="node-02">
          <statsPublisherEnabled>true</statsPublisherEnabled>
          <username>admin</username>
          <password>admin</password>
          <ip>10.10.1.1</ip>
          <port>7714</port>
       </node>-->
    </cep>
  2. Restart the PPaaS server without the internally embedded WSO2 CEP. 

    sh wso2server.sh -Dprofile=cep-excluded
Step 2 - Configure CEP
  1. If you are configuring the external CEP in High Availability (HA) mode, create a CEP HA deployment cluster in full-active-active mode. Note that it is recommended to set up CEP in HA mode.

    Skip this step if you are setting up the external CEP in a single node.

    For more information on CEP clustering see the CEP clustering guide.
    When following the steps in the CEP clustering guide, note that you need to configure all the CEP nodes in the cluster as mentioned in step 3, and only then carry out the remaining steps.

  2. Download the CEP extension from the PPaaS product page on the WSO2 website and uncompress the file. The extracted distribution is referred to as <PPAAS_CEP_DISTRIBUTION>.
  3. Copy the stream-manager-config.xml file from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/streamdefinitions directory to the <CEP_HOME>/repository/conf directory.
  4. Replace the content in the jndi.properties file, which is in the <CEP_HOME>/repository/conf directory, with the following configurations. Update the message-broker-ip and message-broker-port values.

    connectionfactoryName=TopicConnectionFactory
    java.naming.provider.url=tcp://[MB_IP]:[MB_Port]
    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
    
    # register some topics in JNDI using the form
    # topic.[jndiName]=[physicalName]
    topic.lb-stats=lb-stats
    topic.instance-stats=instance-stats
    topic.summarized-health-stats=summarized-health-stats
    topic.topology=topology
    topic.ping=ping
  5. Add the following content to the siddhi.extension file, which is in the <CEP_HOME>/repository/conf/siddhi directory.

    org.apache.stratos.cep.extension.GradientFinderWindowProcessor
    org.apache.stratos.cep.extension.SecondDerivativeFinderWindowProcessor
    org.apache.stratos.cep.extension.FaultHandlingWindowProcessor
    org.apache.stratos.cep.extension.ConcatWindowProcessor
    org.apache.stratos.cep.extension.MemeberRequestHandlingCapabilityWindowProcessor
    org.wso2.ppaas.cep.extension.SystemTimeWindowProcessor
  6. Copy the following JARs, which are in the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/lib directory, to the <CEP_HOME>/repository/components/lib directory.

    • org.apache.stratos.cep.310.extension-4.1.4.jar
    • org.wso2.ppaas.cep310.extension-4.1.0.jar
  7. Copy the following JARs, which are in the <PPAAS_CEP_DISTRIBUTION>/lib directory, to the <CEP_HOME>/repository/components/lib directory.

    • org.apache.stratos.messaging-4.1.x.jar

    • org.apache.stratos.common-4.1.x.jar

  8. Download ActiveMQ 5.10.0 (or the latest stable ActiveMQ release) as a TAR file from activemq.apache.org and extract it. The extracted folder path is referred to as <ACTIVEMQ_HOME>. Copy the following ActiveMQ client JARs from the <ACTIVEMQ_HOME>/lib directory to the <CEP_HOME>/repository/components/lib directory.

    • activemq-broker-5.10.0.jar 

    • activemq-client-5.10.0.jar 

    • geronimo-j2ee-management_1.1_spec-1.0.1.jar 

    • geronimo-jms_1.1_spec-1.1.1.jar 

    • hawtbuf-1.10.jar

  9. Download the commons-lang3-3.4.jar and commons-logging-1.2.jar files from commons.apache.org. Copy the downloaded files to the <CEP_HOME>/repository/components/lib directory.
  10. Copy the following files from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/eventbuilders directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/eventbuilders directory:
    • HealthStatisticsEventBuilder.xml
    • LoadBalancerStatisticsEventBuilder.xml
  11. Copy the following file from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/inputeventadaptors directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/inputeventadaptors directory:
    • DefaultWSO2EventInputAdaptor.xml
  12. Copy the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/outputeventadaptors/JMSOutputAdaptor.xml file, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory.
  13. Update the MB_HOSTNAME and MB_LISTEN_PORT with relevant values in the JMSOutputAdaptor.xml file, which you copied in the above step, as follows:

    <property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>
  14. Copy the following files from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/executionplans directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/executionplans directory:
    • AverageHeathRequest.xml
    • AverageInFlightRequestsFinder.xml
    • GradientOfHealthRequest.xml
    • GradientOfRequestsInFlightFinder.xml
    • SecondDerivativeOfHealthRequest.xml
    • SecondDerivativeOfRequestsInFlightFinder.xml
  15. If you are setting up the external CEP on a single node, change the siddhi.enable.distributed.processing property, in all of the above CEP 3.1.0 execution plans, from RedundantMode to false.
  16. Copy the following files from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/eventformatters directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/eventformatters directory:
    • AverageInFlightRequestsEventFormatter.xml
    • AverageLoadAverageEventFormatter.xml
    • AverageMemoryConsumptionEventFormatter.xml
    • FaultMessageEventFormatter.xml
    • GradientInFlightRequestsEventFormatter.xml
    • GradientLoadAverageEventFormatter.xml
    • GradientMemoryConsumptionEventFormatter.xml
    • MemberAverageLoadAverageEventFormatter.xml
    • MemberAverageMemoryConsumptionEventFormatter.xml
    • MemberGradientLoadAverageEventFormatter.xml
    • MemberGradientMemoryConsumptionEventFormatter.xml
    • MemberSecondDerivativeLoadAverageEventFormatter.xml
    • MemberSecondDerivativeMemoryConsumptionEventFormatter.xml
    • SecondDerivativeInFlightRequestsEventFormatter.xml
    • SecondDerivativeLoadAverageEventFormatter.xml
    • SecondDerivativeMemoryConsumptionEventFormatter.xml
  17. Add the CEP URLs as a payload parameter to the network partition. 

    If you are deploying Private PaaS on Kubernetes, then add the CEP URLs to the Kubernetes cluster.

    Example: 

    {
        "name": "payload_parameter.CEP_URLS",
        "value": "192.168.0.1:7712,192.168.0.2:7711"
    }
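Steps 10, 11, 12, 14, and 16 above all copy XML artifacts from the extracted distribution into the CEP server's deployment directories. Assuming the two root paths are known, those copies can be sketched as one shell function (a convenience sketch, not part of the official distribution):

```shell
# copy_cep_artifacts <PPAAS_CEP_DISTRIBUTION> <CEP_HOME>
# Copies the event builder, input/output event adaptor, execution plan,
# and event formatter XML artifacts into the CEP deployment directories.
copy_cep_artifacts() {
    src="$1/wso2cep-3.1.0"
    dest="$2/repository/deployment/server"
    for d in eventbuilders inputeventadaptors outputeventadaptors \
             executionplans eventformatters; do
        mkdir -p "$dest/$d"
        cp "$src/$d"/*.xml "$dest/$d/"
    done
}
```

For example: copy_cep_artifacts /opt/ppaas-cep-distribution /opt/wso2cep-3.1.0 (both paths are placeholders). JMSOutputAdaptor.xml still needs the MB_HOSTNAME and MB_LISTEN_PORT edit from step 13 after copying.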

If you have configured CEP 3.1.0 externally, start the CEP server after completing the above configurations:

./wso2server.sh

 

Step 4 - Setup and start WSO2 DAS (Optional)

Optionally, you can configure PPaaS to work with WSO2 Data Analytics Server (DAS), which handles the monitoring and metering aspects of PPaaS. Skip this step if you do not want to enable monitoring and metering; however, we recommend that you enable them.

If you want to use DAS with PPaaS, prior to carrying out the steps below, download WSO2 DAS 3.0.0 and unzip the ZIP file.


Use MySQL 5.6 and the 5.1.x MySQL Connector for Java when carrying out the following configurations.

Follow the instructions below to manually setup DAS with PPaaS:

Step 1 - Configure PPaaS

  1. Enable thrift stats publishing with the DAS_HOSTNAME and DAS_TCP_PORT values in the thrift-client-config.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory. If needed, you can set multiple DAS nodes for a High Availability (HA) setup.

    <!-- Apache thrift client configuration for publishing statistics to WSO2 CEP and WSO2 DAS-->
    <thriftClientConfiguration>
            .
            .
            .
           <das>
                <node id="node-01">
                     <statsPublisherEnabled>true</statsPublisherEnabled>
                     <username>admin</username>
                     <password>admin</password>
                     <ip>[DAS_HOSTNAME]</ip>
                     <port>[DAS_TCP_PORT]</port>
                </node>
                <!--<node id="node-02">
                     <statsPublisherEnabled>true</statsPublisherEnabled>
                     <username>admin</username>
                     <password>admin</password>
                     <ip>localhost</ip>
                     <port>7613</port>
                </node>-->
           </das>
       </config>
    </thriftClientConfiguration>
  2. Configure the Private PaaS metering dashboard URL with the DAS_HOSTNAME and DAS_PORTAL_PORT values in the <PRIVATE_PAAS_HOME>/repository/conf/cartridge-config.properties file as follows:

    das.metering.dashboard.url=https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/ppaas-metering-dashboard

     

  3. Configure the monitoring dashboard URL.
    To configure the DAS monitoring dashboard, add the following configuration to the <PRIVATE_PAAS_HOME>/repository/deployment/server/jaggeryapps/console/controllers/menu/menu.json file, after the applications menu entry. Set the DAS_HOSTNAME and DAS_PORTAL_PORT values in the link property.

    {
       "link": "https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/ppaas-monitoring-dashboard",
       "linkexternal": true,
       "context": "/",
       "title": "Monitoring",
       "icon": "fa-laptop",
       "block-color":"#f1c40f",
       "permissionPaths": [
           "/permission",
           "/permission/admin"
       ],
       "description": "Monitor health statistics of clusters and members."
    },

Step 2 - Configure DAS

  1. Create the ANALYTICS_FS_DB, ANALYTICS_EVENT_STORE, and ANALYTICS_PROCESSED_DATA_STORE databases in MySQL using the following MySQL scripts:

    CREATE DATABASE ANALYTICS_FS_DB;
    CREATE DATABASE ANALYTICS_EVENT_STORE;
    CREATE DATABASE ANALYTICS_PROCESSED_DATA_STORE;
  2. Configure the DAS analytics-datasources.xml file, which is in the <DAS_HOME>/repository/conf/datasources directory, as follows to create the WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB, and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB datasources.

    <datasources-configuration>
       <providers>
          <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
       </providers>
       <datasources>
          <datasource>
             <name>WSO2_ANALYTICS_FS_DB</name>
             <description>The datasource used for analytics file system</description>
             <definition type="RDBMS">
                <configuration>
                   <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_FS_DB</url>
                   <username>root</username>
                   <password>root</password>
                   <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                   <maxActive>50</maxActive>
                   <maxWait>60000</maxWait>
                   <testOnBorrow>true</testOnBorrow>
                   <validationQuery>SELECT 1</validationQuery>
                   <validationInterval>30000</validationInterval>
                   <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
             </definition>
          </datasource>
          <datasource>
             <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
             <description>The datasource used for analytics record store</description>
             <definition type="RDBMS">
                <configuration>
                   <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_EVENT_STORE</url>
                   <username>root</username>
                   <password>root</password>
                   <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                   <maxActive>50</maxActive>
                   <maxWait>60000</maxWait>
                   <testOnBorrow>true</testOnBorrow>
                   <validationQuery>SELECT 1</validationQuery>
                   <validationInterval>30000</validationInterval>
                   <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
             </definition>
          </datasource>
          <datasource>
             <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
             <description>The datasource used for analytics record store</description>
             <definition type="RDBMS">
                <configuration>
                   <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_PROCESSED_DATA_STORE</url>
                   <username>root</username>
                   <password>root</password>
                   <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                   <maxActive>50</maxActive>
                   <maxWait>60000</maxWait>
                   <testOnBorrow>true</testOnBorrow>
                   <validationQuery>SELECT 1</validationQuery>
                   <validationInterval>30000</validationInterval>
                   <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
             </definition>
          </datasource>
       </datasources>
    </datasources-configuration>
  3. Set the analytics datasources created in the above step (WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB, and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB) in the DAS analytics-config.xml file, which is in the <DAS_HOME>/repository/conf/analytics directory.

    <analytics-dataservice-configuration>
       <!-- The name of the primary record store -->
       <primaryRecordStore>EVENT_STORE</primaryRecordStore>
       <!-- The name of the index staging record store -->
       <indexStagingRecordStore>INDEX_STAGING_STORE</indexStagingRecordStore>
       <!-- Analytics File System - properties related to index storage implementation -->
       <analytics-file-system>
          <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsFileSystem</implementation>
          <properties>
                <!-- the data source name mentioned in data sources configuration -->
                <property name="datasource">WSO2_ANALYTICS_FS_DB</property>
                <property name="category">large_dataset_optimized</property>
          </properties>
       </analytics-file-system>
       <!-- Analytics Record Store - properties related to record storage implementation -->
       <analytics-record-store name="EVENT_STORE">
          <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
          <properties>
                <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
                <property name="category">large_dataset_optimized</property>
          </properties>
       </analytics-record-store>
       <analytics-record-store name="INDEX_STAGING_STORE">
          <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
          <properties>
                <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
                <property name="category">limited_dataset_optimized</property>
          </properties>
       </analytics-record-store>
       <analytics-record-store name = "PROCESSED_DATA_STORE">
          <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
          <properties>
                <property name="datasource">WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</property>
                <property name="category">large_dataset_optimized</property>
          </properties>
       </analytics-record-store>
       <!-- The data indexing analyzer implementation -->
       <analytics-lucene-analyzer>
       	<implementation>org.apache.lucene.analysis.standard.StandardAnalyzer</implementation>
       </analytics-lucene-analyzer>
       <!-- The maximum number of threads used for indexing per node, -1 signals to auto detect the optimum value,
            where it would be equal to (number of CPU cores in the system - 1) -->
       <indexingThreadCount>-1</indexingThreadCount>
       <!-- The number of index shards, should be equal or higher to the number of indexing nodes that is going to be working,
            ideal count being 'number of indexing nodes * [CPU cores used for indexing per node]' -->
       <shardCount>6</shardCount>
       <!-- Data purging related configuration -->
       <analytics-data-purging>
          <!-- Below entry will indicate purging is enable or not. If user wants to enable data purging for cluster then this property
           need to be enable in all nodes -->
          <purging-enable>false</purging-enable>
          <cron-expression>0 0 0 * * ?</cron-expression>
          <!-- Tables that need include to purging. Use regex expression to specify the table name that need include to purging.-->
          <purge-include-tables>
             <table>.*</table>
             <!--<table>.*jmx.*</table>-->
          </purge-include-tables>
          <!-- All records that insert before the specified retention time will be eligible to purge -->
          <data-retention-days>365</data-retention-days>
       </analytics-data-purging>
       <!-- Receiver/Indexing flow-control configuration -->
       <analytics-receiver-indexing-flow-control enabled = "true">
           <!-- maximum number of records that can be in index staging area before receiving is throttled -->
           <recordReceivingHighThreshold>10000</recordReceivingHighThreshold>
           <!-- the limit on number of records to be lower than, to reduce throttling -->
           <recordReceivingLowThreshold>5000</recordReceivingLowThreshold>    
       </analytics-receiver-indexing-flow-control>
    </analytics-dataservice-configuration>
  4. Add the MySQL Java connector 5.1.x JAR file, which is supported by MySQL 5.6, to the <DAS_HOME>/repository/components/lib directory.


Step 2.1 - Download the DAS extension distribution

Download the DAS extension from the PPaaS product page on the WSO2 website and uncompress the file. The extracted distribution is referred to as <PPAAS_DAS_DISTRIBUTION>.

 

Step 2.2 - Create PPaaS Metering Dashboard with DAS

  1. Add the org.wso2.ppaas.das.extension-<PPAAS_VERSION>.jar file, which is in the <PPAAS_DAS_DISTRIBUTION>/lib directory, into the <DAS_HOME>/repository/components/lib directory.
  2. Add the following Java class path into the spark-udf-config.xml file in the <DAS_HOME>/repository/conf/analytics/spark directory.

    <class-name>org.wso2.ppaas.das.extension.TimeUDF</class-name>
  3. Add the Jaggery files, which are in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/jaggery-files directory, into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.

  4. Manually create the MySQL databases and tables using the queries in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/metering-mysqlscript.sql file.

    CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
    CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
    CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;
    CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_STATUS(Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150), MemberId VARCHAR(150), MemberStatus VARCHAR(50));
    CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_COUNT(Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150), CreatedInstanceCount int, InitializedInstanceCount int, ActiveInstanceCount int, TerminatedInstanceCount int);
    CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_INFORMATION(MemberId VARCHAR(150), InstanceType VARCHAR(150), ImageId VARCHAR(150), HostName VARCHAR(150), PrivateIPAddresses VARCHAR(150), PublicIPAddresses VARCHAR(150), Hypervisor VARCHAR(150), CPU VARCHAR(10) , RAM VARCHAR(10), OSName VARCHAR(150), OSVersion VARCHAR(150));
  5. Apply a WSO2 User Engagement Server (UES) patch to the DAS dashboard.
    You need to do this to populate the metering dashboard.

    1. Copy the ues-dashboard.js and the ues-pubsub.js files from the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/ues-patch directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/js directory.

    2. Copy the dashboard.jag file from the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/ues-patch directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/theme/templates directory.

  6. Add the ppaas-metering-service.car file, which is in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard directory, into the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the metering dashboard.

    If the <DAS_HOME>/repository/deployment/server/carbonapps folder does not exist, create it before moving the CAR file.

    You can navigate to the metering dashboard from the Private PaaS application topology view at the application or cluster level as shown below.


    The following is a sample metering dashboard:

Step 2.3 - Create the PPaaS Monitoring Dashboard with DAS

  1. Add the Jaggery files, which are in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files directory, into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.
  2. Manually create the MySQL database and tables using the queries in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files/monitoring-mysqlscript.sql file. 

    CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
    CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
    CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;
    CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_MEMORY_CONSUMPTION_STATS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_MEMORY_CONSUMPTION_STATS(Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_LOAD_AVERAGE_STATS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_LOAD_AVERAGE_STATS(Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_IN_FLIGHT_REQUESTS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), COUNT DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.SCALING_DETAILS(Time VARCHAR(50), ScalingDecisionId VARCHAR(150), ClusterId VARCHAR(150), MinInstanceCount INT, MaxInstanceCount INT, RIFPredicted INT, RIFThreshold INT ,RIFRequiredInstances INT, MCPredicted INT, MCThreshold INT, MCRequiredInstances INT ,LAPredicted INT, LAThreshold INT,LARequiredInstances INT,RequiredInstanceCount INT ,ActiveInstanceCount INT, AdditionalInstanceCount INT, ScalingReason VARCHAR(150));
  3. Copy the CEP EventFormatter artifacts, which are in the <PPAAS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/eventformatters directory, into the <CEP_HOME>/repository/deployment/server/eventformatters directory.
  4. Copy the CEP OutputEventAdapter artifacts, which are in the <PPAAS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/outputeventadaptors directory, into the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory, and update the receiverURL and authenticatorURL with the DAS_HOSTNAME, DAS_TCP_PORT, and DAS_SSL_PORT values as follows:

    <outputEventAdaptor name="DefaultWSO2EventOutputAdaptor"
      statistics="disable" trace="disable" type="wso2event" xmlns="http://wso2.org/carbon/eventadaptormanager">
      <property name="username">admin</property>
      <property name="receiverURL">tcp://<DAS_HOSTNAME>:<DAS_TCP_PORT></property>
      <property name="password">admin</property>
      <property name="authenticatorURL">ssl://<DAS_HOSTNAME>:<DAS_SSL_PORT></property>
    </outputEventAdaptor>
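The placeholder substitution in the step above can be scripted. The sketch below demonstrates it on a scratch copy of the two URL properties; the host name and ports are example values, and in a real deployment you would point the sed command at the copied adaptor file in the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory instead.

```shell
# Demonstrated on a scratch file; point ADAPTOR at the real
# DefaultWSO2EventOutputAdaptor.xml when running this for real.
ADAPTOR=$(mktemp)
printf '%s\n' \
  '<property name="receiverURL">tcp://<DAS_HOSTNAME>:<DAS_TCP_PORT></property>' \
  '<property name="authenticatorURL">ssl://<DAS_HOSTNAME>:<DAS_SSL_PORT></property>' \
  > "$ADAPTOR"

DAS_HOSTNAME=das.example.com   # example value
DAS_TCP_PORT=7611              # example value
DAS_SSL_PORT=7711              # example value

# Replace all three placeholders in place.
sed -i \
  -e "s|<DAS_HOSTNAME>|$DAS_HOSTNAME|g" \
  -e "s|<DAS_TCP_PORT>|$DAS_TCP_PORT|g" \
  -e "s|<DAS_SSL_PORT>|$DAS_SSL_PORT|g" \
  "$ADAPTOR"
cat "$ADAPTOR"
```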
  5. Add the ppaas-monitoring-service.car file, which is in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard directory, into the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the monitoring dashboard.

    If the <DAS_HOME>/repository/deployment/server/carbonapps folder does not exist, initially create the folder before moving the CAR file.

  6. Navigate to the monitoring dashboard from the PPaaS Console using the Monitoring menu.

  7. Once you have carried out all the configurations, start the DAS server. After the DAS server has started successfully, start the PPaaS server.

After you have successfully configured DAS in a separate host, start the DAS server:

./wso2server.sh

 

Step 5 - Setup PPaaS

When using a VM setup or Kubernetes, you need to configure PPaaS accurately before attempting to deploy a WSO2 product on the PaaS.

 Click here for instructions...

Follow the instructions below to configure PPaaS:

Some steps are marked as optional because they do not apply to every IaaS.
Execute only the instructions that correspond to the IaaS you are using!

Step 1 - Install Prerequisites

Ensure that the following prerequisites have been met based on your environment and IaaS.

  1. Install the prerequisites listed below.

    • Oracle Java SE Development Kit (JDK)

    • Apache ActiveMQ

    For more information on the prerequisites, see Prerequisites.

  2. Download the Private PaaS binary distribution from the PPaaS product page and unzip it.

 

Step 2 - Setup a Kubernetes Cluster (Optional)

This step is only mandatory if you are using Kubernetes.

You can set up a Kubernetes cluster using one of the following approaches:

 Click here for instructions...

Step 3 - Setup Puppet Master (Optional)

This step is only mandatory if you are deploying PPaaS on a Virtual Machine (e.g., EC2, OpenStack, GCE).

Puppet is an open source configuration management utility. Private PaaS uses Puppet as its orchestration layer. Private PaaS does not store any templates or configurations in Puppet; it holds only the product distributions. Puppet acts as a file server, while the Configurator carries out the configuration at runtime.

Follow the instructions below to set up the Puppet Master.

Step 1 - Configure Puppet Master

 Click here for instructions...

Follow the steps given below to install Puppet Master on Ubuntu:

  1.  Download the Puppet Master distribution package for your Ubuntu release.

    wget https://apt.puppetlabs.com/puppetlabs-release-<CODE_NAME>.deb
     
    # For example for Ubuntu 14.04 Trusty:
    wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
  2. Install the downloaded distribution package.

    sudo dpkg -i puppetlabs-release-<CODE_NAME>.deb  
  3. Install Puppet Master.

    sudo apt-get update
    sudo apt-get install puppetmaster
  4. Install Passenger with Apache.

    For more information, see Install Apache and Passenger.

  5. Change the Ubuntu hostname by following the steps given below:
    1. Update the /etc/hosts file.

      echo "127.0.0.1 puppet.test.org" | sudo tee -a /etc/hosts
    2. Change the value of the hostname.

      sudo hostname puppet.test.org
  6. Add the following entry to the /etc/puppet/autosign.conf file:

     

    *.test.org
  7.  Add the server=puppet.test.org line to the puppet.conf file, which is in the /etc/puppet directory.

    [main]
    server=puppet.test.org
    logdir=/var/log/puppet
    vardir=/var/lib/puppet
    ssldir=/var/lib/puppet/ssl
    rundir=/var/run/puppet
    factpath=$vardir/lib/facter
    templatedir=$confdir/templates
    dns_alt_names=puppet
    
    [master]
    # These are needed when the puppetmaster is run by passenger
    # and can safely be removed if webrick is used.
    ssl_client_header = SSL_CLIENT_S_DN
    ssl_client_verify_header = SSL_CLIENT_VERIFY
  8. Restart the Puppet Master.

    /etc/init.d/puppetmaster restart
  9. Download the VM Tools by navigating to the following path via the PPaaS product page.

    Cartridges > common > wso2ppaas-vm-tools-4.1.1

  10. Copy and replace the content in the Puppet Master's /etc/puppet folder with the content in the <VM_TOOLS>/Puppet directory.

  11. Configure the mandatory modules.

Mandatory modules

It is mandatory to configure the following modules when configuring Puppet Master for PPaaS:

Python Cartridge Agent Module
  1. Download the Cartridge Agent via the PPaaS product page.

  2. Copy the downloaded apache-stratos-python-cartridge-agent-4.1.4.zip  to the /etc/puppet/modules/python_agent/files directory.

  3. Change the file permission value, of the apache-stratos-python-cartridge-agent-4.1.4.zip file, to 0755.

    chmod 755 apache-stratos-python-cartridge-agent-4.1.4.zip
  4. Update the base.pp file in the /etc/puppet/manifests/nodes directory, with the following Python agent variables.

      $pca_name             = 'apache-stratos-python-cartridge-agent'
      $pca_version          = '4.1.4'
      $mb_ip                = 'MB-IP'
      $mb_port              = 'MB-PORT'
      $mb_type              = 'activemq' #in wso2mb case, value should be 'wso2mb'
      $cep_ip               = "CEP-IP"
      $cep_port             = "7711"
      $cep_username         = "admin"
      $cep_password         = "admin"
      $bam_ip               = '192.168.30.96'
      $bam_port             = '7611'
      $bam_secure_port      = '7711'
      $bam_username         = 'admin'
      $bam_password         = 'admin'
      $metadata_service_url = 'METADATA-SERVICE-URL'
      $agent_log_level      = 'INFO'
      $enable_log_publisher = 'false'

    Optionally you can configure the MB_IP, MB_PORT, PUPPET_IP and the PUPPET_HOSTNAME in the network partition as shown below.

    Note that the values defined in the network partition receive higher priority over those declared in the base.pp file (i.e., the values declared in the base.pp file are overwritten by the values declared in the network partition).

    {
        "id": "network-partition-openstack",
        "provider": "openstack",
        "partitions": [
            {
                "id": "partition-1",
                "property": [
                    {
                        "name": "region",
                        "value": "<REGION>"
                    }
                ]
            },
            {
                "id": "partition-2",
                "property": [
                    {
                        "name": "region",
                        "value": "<REGION>"
                    }
                ]
            }
        ],
        "properties": [
            {
                "name": "payload_parameter.PUPPET_IP",
                "value": "<PUPPET_MASTER_IP>"
            },
            {
                "name": "payload_parameter.MB_IP",
                "value": "<MESSAGE_BROKER_IP>"
            },
            {
                "name": "payload_parameter.MB_PORT",
                "value": "<MESSAGE_BROKER_PORT>"
            },
            {
                "name": "payload_parameter.PUPPET_HOSTNAME",
                "value": "<PUPPET_MASTER_HOSTNAME>"
            }
        ]
    }
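Once saved to a file (e.g., network-partition-openstack.json), a definition like the one above is typically deployed through the PPaaS REST API. The sketch below only builds and prints the curl command rather than executing it, since it needs a live server; the endpoint path, port, and admin credentials are assumptions based on the Apache Stratos 4.1 defaults that PPaaS builds on, so verify them against your deployment.

```shell
# Assumed defaults -- verify against your own PPaaS deployment.
PPAAS_HOST=localhost
PPAAS_HTTPS_PORT=9443

# Build the deployment command; run it against a running PPaaS instance.
CMD="curl -k -u admin:admin -X POST -H 'Content-Type: application/json' \
-d @network-partition-openstack.json \
https://$PPAAS_HOST:$PPAAS_HTTPS_PORT/api/networkPartitions"
echo "$CMD"
```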
Java Module
  1. Copy the downloaded jdk-7u72-linux-x64.tar.gz file to the files folder, which is in the /etc/puppet/modules/java directory. 

    You can download jdk-7u72-linux-x64.tar.gz from the Oracle Java SE 7 archive downloads page.

  2. Change the file permission value, of the jdk-7u72-linux-x64.tar.gz file, to 0755.

    chmod 755 jdk-7u72-linux-x64.tar.gz
  3. Update the base.pp file, which is in the /etc/puppet/manifests/nodes directory, with the following Java variables.

    $java_distribution = 'jdk-7u72-linux-x64.tar.gz' 
    $java_folder = 'jdk1.7.0_72' 
Configurator Module
  1. Download the Configurator by navigating to the following path via the PPaaS product page.

    Cartridges > common > wso2ppaas-configurator-4.1.1

  2. Copy the Configurator (ppaas-configurator-4.1.1.zip) to the /etc/puppet/modules/configurator/files directory.
  3. Change the file permission value, of the ppaas-configurator-4.1.1.zip file, to 0755.

    chmod 755 ppaas-configurator-4.1.1.zip
  4.  Update the base.pp file, which is in the /etc/puppet/manifests/nodes directory, with the following configurator variables.

    $configurator_name = 'ppaas-configurator' 
    $configurator_version = '4.1.1'


 

 

Step 2 - Update the cartridge-config.properties file

Update the values of the following parameters in the cartridge-config.properties file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory.

The values are as follows:

  • [PUPPET_IP] - The IP address of the running Puppet instance.

  • [PUPPET_HOST_NAME] - The host name of the running Puppet instance.
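As a sketch, the relevant entries look similar to the excerpt below. The property keys shown are assumptions based on the Apache Stratos defaults that PPaaS builds on, and the IP address and host name are example values; verify both against the cartridge-config.properties file shipped with your distribution.

```properties
# cartridge-config.properties (excerpt) -- key names assumed from the
# Apache Stratos defaults; values are examples.
puppet.ip=192.168.1.10
puppet.hostname=puppet.test.org
```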

 

Step 4 - Create a cartridge base image (Optional)

This step is only mandatory if you are deploying PPaaS on a Virtual Machine (e.g., EC2, OpenStack, GCE).

 

Step 5 - Disable the mock IaaS

Mock IaaS is enabled by default. Therefore, if you are running PPaaS on another IaaS, you need to disable the Mock IaaS.

Follow the instructions below to disable the Mock IaaS:

  1. Navigate to the <PRIVATE_PAAS_HOME>/repository/conf/mock-iaas.xml file and disable the Mock IaaS.

    <mock-iaas enabled="false">
  2. Navigate to the <PRIVATE_PAAS_HOME>/repository/deployment/server/webapps directory and delete the mock-iaas.war file. 

    When Private PaaS is run, the mock-iaas.war file is extracted and the mock-iaas folder is created. Therefore, if you have run PPaaS previously, delete the mock-iaas folder as well.
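The two clean-up actions above can be sketched as the shell commands below. The sketch runs against a scratch directory so it is safe to try as-is; set PRIVATE_PAAS_HOME to your real unzip location when doing this for real.

```shell
# Scratch directory standing in for the real <PRIVATE_PAAS_HOME>.
PRIVATE_PAAS_HOME=$(mktemp -d)
WEBAPPS="$PRIVATE_PAAS_HOME/repository/deployment/server/webapps"
mkdir -p "$WEBAPPS/mock-iaas"      # stands in for the extracted folder
touch "$WEBAPPS/mock-iaas.war"     # stands in for the WAR file

rm -f  "$WEBAPPS/mock-iaas.war"    # delete the WAR file
rm -rf "$WEBAPPS/mock-iaas"        # delete the extracted folder as well
```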

 

Step 6 - Carry out additional IaaS configurations (Optional)

This step is only applicable if you are using GCE.

When working on GCE, carry out the following instructions:

  1. Create a service group.
  2. Add a firewall rule.

Step 7 - Configure the Cloud Controller (Optional)

This step is only mandatory if you are deploying PPaaS on a Virtual Machine (e.g., EC2, OpenStack, GCE).

 Click here for instructions...

Follow the instructions given below to configure the Cloud Controller (CC):

  1. Configure the IaaS provider details based on the IaaS.
    You need to configure the IaaS details in the <PRIVATE_PAAS_HOME>/repository/conf/cloud-controller.xml file and comment out the IaaS provider details that are not being used.

  2. Update the values of the MB_IP and MB_PORT in the jndi.properties file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory. 

    The default message broker port is 61616.

    The values are as follows:

    • MB_IP: The IP address used by ActiveMQ.

    • MB_PORT: The port used by ActiveMQ.
    connectionfactoryName=TopicConnectionFactory
    java.naming.provider.url=tcp://[MB_IP]:[MB_PORT]
    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory

 

Step 8 - Define the Message Broker IP (Optional)

This step is only mandatory if you have set up the Message Broker (MB), in this case ActiveMQ, in a separate host.

If you have set up ActiveMQ, which is the PPaaS Message Broker, in a separate host, you need to define the Message Broker IP so that PPaaS can communicate with the MB.

 Click here for instructions...

Update the value of the MB_IP in the JMSOutputAdaptor file, which is in the <PRIVATE_PAAS_HOME>/repository/deployment/server/outputeventadaptors directory.

 

[MB_IP]: The IP address used by ActiveMQ.
<property name="java.naming.provider.url">tcp://[MB_IP]:61616</property>

 

Step 6 - Start the PPaaS server

The way in which you need to start the PPaaS server varies based on your settings as follows:

We recommend starting the PPaaS server in background mode, so that the server keeps running after the terminal session is closed.

  • If you want to use the internal database (H2) and the embedded CEP, start the PPaaS server as follows:

    sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start
  • If you want to use an external database, start the Private PaaS server with the -Dsetup option as follows: 
    This creates the database schemas using the scripts in the <PRIVATE_PAAS_HOME>/dbscripts directory.

    sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start -Dsetup
  • If you want to use an external CEP, disable the embedded CEP when starting the PPaaS server as follows:

    sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start -Dprofile=cep-excluded
  • If you want to use an external database together with an external CEP, start the Private PaaS server as follows:
    This creates the database schemas using the scripts in the <PRIVATE_PAAS_HOME>/dbscripts directory.

    sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start -Dsetup -Dprofile=cep-excluded 

     

You can tail the log to verify that the Private PaaS server starts without any issues.

tail -f <PRIVATE_PAAS_HOME>/repository/logs/wso2carbon.log

What's next?

After starting PPaaS on a preferred IaaS, configure the WSO2 cartridge, so that you can seamlessly deploy the WSO2 product on PPaaS.

 

 
