Quick Start Guide
WSO2 Private PaaS (PPaaS) is an enterprise-grade Platform as a Service (PaaS) that delivers standard, on-premise application, integration, data, identity, governance, and analytics PaaS solutions for IT projects. WSO2 PPaaS can be deployed on Docker using Kubernetes, or on virtual machines using an IaaS that supports Apache jclouds, namely EC2, GCE, or OpenStack. At the core of WSO2 PPaaS lies Apache Stratos, which provides cloud-native capabilities such as multi-tenancy, elastic scaling, self-service provisioning, cloud bursting, network partitioning, metering, billing, and resource pooling, among other functionality. In addition, PPaaS can host pre-integrated, fully multi-tenant WSO2 Carbon middleware products as cartridges that deliver a range of cloud PaaS services. In PPaaS, a cartridge is a virtual machine (VM) on an IaaS or a container on Kubernetes that has the software components needed to interact with PPaaS.
This Quick Start guide explains how to deploy PPaaS using the demo EC2 AMI, deploy a sample application, and view statistics.
About the demo EC2 AMI
WSO2 Private PaaS (PPaaS) is a PaaS framework that is designed to be used on top of an underlying infrastructure. We have created an Amazon Machine Image (AMI) that is preconfigured with PPaaS 4.1.0 for use on Amazon Elastic Compute Cloud (EC2). This demo AMI includes everything you need to work with PPaaS, including the following key components:
- WSO2 Private PaaS 4.1.0
- WSO2 Private PaaS Cartridges: The demo AMI includes cartridges for several WSO2 products, such as WSO2 ESB and WSO2 API Manager. For a complete list, see Deploying a sample application.
- WSO2 Data Analytics Server (DAS) 3.0.0: Provides metering and monitoring services for the PPaaS.
- Apache Stratos Nginx Load Balancer extension: An executable program that manages the life-cycle of an Nginx instance according to the topology, composite application model, tenant application signups, and domain mapping information received from Stratos via the Message Broker.
- Apache ActiveMQ 5.12.0: Acts as the messaging bus to communicate with cartridge instances.
- WSO2 Configurator: A Python configuration tool based on the Jinja2 templating engine. It is used to configure cartridge instances as well as core services.
- Init scripts: These scripts look up configuration parameters via the EC2 instance metadata API, configure the core services, and deploy the cartridges.
- Puppet Master AMI: Contains a pre-configured Puppet Master, which WSO2 PPaaS uses to install the required WSO2 products on the spawned instances.
Understanding the key concepts in PPaaS
Let's take a look at some of the key concepts and terminology in PPaaS:
- Network partition: A partition depicts a division within an IaaS and defines an area of an IaaS cloud that is used by a service subscription. A network partition is an area of an IaaS that is bound by a single network, so it can contain one or more partitions. Network partitions use private IPs for internal communication. For more information, see Cloud Partitioning; for the properties you can define in a network partition definition JSON, see the Network Partition Resource Definition.
- Deployment policy: A deployment policy defines how (for example, which partition algorithm to use) and where to spawn cartridge instances, as well as the maximum number of instances allowed in a service cluster. DevOps define deployment policies based on deployment patterns. The network partitions that a deployment policy refers to must be added before the deployment policy itself. For the properties you can define in a deployment policy definition JSON, see the Deployment Policy Resource Definition.
- Auto-scaling policy: An auto-scaling policy defines the threshold values that govern scale-up and scale-down decisions. The Auto-scaler refers to the auto-scaling policy defined by DevOps (an illustrative sketch follows this list). For the properties you can define in an auto-scaling policy definition JSON, see the Auto-scaling Policy Resource Definition.
- Kubernetes cluster: A Kubernetes cluster is a collection of Docker hosts together with a set of management features for running Docker containers in a clustered environment. It contains node agents (minions) and master components. Kubernetes clusters the Docker hosts to provide high availability for the Docker containers, and an API is provided to manage the containers. For the properties you can define in a Kubernetes cluster definition JSON, see the Kubernetes Cluster Resource Definition.
- Cartridge: A cartridge is a virtual machine (VM) on an IaaS, such as EC2, OpenStack, or GCE, or a container on Kubernetes, that has the software components needed to interact with Private PaaS. You add a cartridge to a platform as a service (PaaS) for scalability; in WSO2 Private PaaS, cartridge runtimes create service runtimes. For example, if you want an Enterprise Service Bus (ESB) runtime, you can use a WSO2 ESB cartridge to get the runtime and deploy a mediation flow on it. A cartridge can be fully configured with all the required software and configurations, or zero-configured so that the cartridge user installs and configures what they want. For more information, see Cartridge; for the properties you can define in a cartridge definition JSON, see the Cartridge Resource Definition.
- Cartridge group: A cartridge group defines the relationship among a set of cartridge groups and a set of cartridges, such as the startup order, termination behavior, and any scalable dependencies among its children. Writing a cartridge group definition lets you reuse frequently used cartridges across different composite applications. The cartridges that belong to a cartridge group must be added to Private PaaS before the group itself. For the properties you can define in a cartridge group definition JSON, see the Cartridge Group Resource Definition.
- Application policy: An application policy defines the application's availability across network partitions, that is, whether to start the application in all the network partitions or to carry out cloud bursting. For the properties you can define in an application policy definition JSON, see the Application Policy Resource Definition.
- Application: In WSO2 PPaaS, an application is referred to as a composite application because it groups a set of cartridges together. The application provides the actual data required to create the clusters and start the instances, and specifies how to connect them at runtime. For the properties you can define in an application definition JSON, see the Application Resource Definition.
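For example, a minimal auto-scaling policy definition JSON might look roughly like the following sketch. The policy ID and threshold values are illustrative only, and the overall shape is an assumption based on the load average, memory consumption, and in-flight request metrics that PPaaS monitors; see the Auto-scaling Policy Resource Definition for the authoritative list of properties.

```
{
  "id": "autoscaling-policy-1",
  "loadThresholds": {
    "requestsInFlight": { "threshold": 80 },
    "memoryConsumption": { "threshold": 80 },
    "loadAverage": { "threshold": 120 }
  }
}
```

When a cluster's averaged statistics cross these thresholds, the Auto-scaler decides to spin instances up or down within the limits set by the deployment policy.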
Let's go through the basic use cases of PPaaS:
- Launching the preconfigured PPaaS instance using the Demo EC2 AMI
- Viewing the artifacts related to the sample application
- Deploying a sample application
- Viewing the application topology
- Viewing metering service information
Launching the preconfigured PPaaS instance using the demo EC2 AMI
Before you begin:
Get an EC2 account.
Create an Amazon Web Services (AWS) account if you do not have one already. This account must be authorized to manage EC2 instances (including starting and stopping instances and creating security groups and key pairs).
To create an AWS account:
Navigate to the following URL: http://aws.amazon.com/
Click Create an AWS Account.
Follow the instructions on the screens.
Follow the instructions below to launch the preconfigured PPaaS instance using the demo EC2 AMI:
- Navigate to http://aws.amazon.com/console/ and sign in to the AWS Management Console.
- Click EC2 on the home console.
- In the region drop-down list, select Asia Pacific (Singapore), which is the region where the demo instance is located.
- Identify your AWS access key ID and secret key as follows:
- On the EC2 account details menu, click My Account.
- Click Security Credentials on the left-bar menu.
- Switch to the Access Keys tab.
- Create an access key for this setup. Then note the Access Key ID and Secret Access Key, which you'll need later.
- Create a security group with rules that define the allowed port ranges, thereby identifying which incoming network traffic is delivered to your instance (all other traffic is ignored):
- On the Network and Security menu, click Security Groups.
- Click Create Security Group.
- Enter a name and description for the security group.
- Repeat the following steps to add two security group rules that open all the TCP and UDP ports:
  - Click Add Rule under the Inbound section.
  - Select the following rule types individually. The port range is set automatically as follows:
    - All TCP: 0 - 65535
    - All UDP: 0 - 65535
  - Set the source to Anywhere (0.0.0.0/0).
Note that setting the source to 0.0.0.0/0 and opening all TCP and UDP ports are demo-only settings that must be changed in a production environment. For more information, see Amazon EC2 Security Groups. In a production environment, you should instead add individual rules for only the following ports (an illustrative JSON sketch of such rules appears after these steps):
- 9443 - HTTPS
- 9763 - HTTP
- 9444 - HTTPS (DAS)
- 7611 - Thrift
- 7612 - Secure Thrift
- 8081 - HTTP (Spark manager)
- 11500 - HTTP (Spark worker)
- 5672 - TCP (AMQP)
- 8672 - Secure TCP (AMQP)
- 1883 - TCP (MQTT)
- 8883 - Secure TCP (MQTT)
- Click Create.
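If you prefer to script this step rather than use the console, the two demo-only rules above could, for example, be expressed as the --ip-permissions JSON document passed to the AWS CLI's authorize-security-group-ingress command. This is only an illustrative sketch, not part of the demo AMI setup; in a production environment, each entry would instead open a single port from the list above (for example, FromPort and ToPort both set to 9443).

```
[
  { "IpProtocol": "tcp", "FromPort": 0, "ToPort": 65535, "IpRanges": [{ "CidrIp": "0.0.0.0/0" }] },
  { "IpProtocol": "udp", "FromPort": 0, "ToPort": 65535, "IpRanges": [{ "CidrIp": "0.0.0.0/0" }] }
]
```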
Create a key pair and download it as follows:
- On the Network and Security menu, click Key Pairs.
- Click Create New Key Pair.
- Enter a name for your key pair.
- Click Create. The key pair is automatically downloaded as a .pem file.
- Save your private key in a safe place on your computer, and make note of the location so you can easily find it later.
- Navigate to the EC2 Dashboard.
- Click Launch Instance and then click Community AMIs.
- Search for the AMI that corresponds to your region, as listed below, and click Select:
  - Asia Pacific (Singapore): ami-e412df87
  - US West (Oregon): ami-be2f34df
Note that this demo AMI is only available in the Asia Pacific (Singapore) and US West (Oregon) regions.
- Select an instance type for the deployment. We recommend the m3.large instance type for the WSO2 PPaaS deployment.
- Click Next: Configure Instance Details.
- Click Advanced Details to expand the tab, and then enter the payload parameters for the demo AMI in the User data field. Specify the following parameters as comma-separated key-value pairs:
  - EC2_IDENTITY - Access key ID for the EC2 API.
  - EC2_SECRET - Secret access key for the EC2 API.
  - EC2_OWNER_ID - EC2 user's owner ID.
  - EC2_SECURITY_GROUP - Name of the security group that you created.
  - EC2_KEY_PAIR - EC2 key pair for accessing the instances via SSH.
  - EC2_BASE_AMI - Base image for the cartridge instances. The recommended value is EC2_BASE_AMI=ap-southeast-1/ami-9232f5f1.
  - EC2_INSTANCE_TYPE - EC2 instance type for the cartridge instances. The recommended value is EC2_INSTANCE_TYPE=m3.medium.
  - EC2_REGION - The EC2 region for spawning cartridge instances. Use EC2_REGION=ap-southeast-1.
  - EC2_AVAILABILITY_ZONE_ID_1 - The first EC2 availability zone for spawning cartridge instances. Use EC2_AVAILABILITY_ZONE_ID_1=ap-southeast-1a.
  - EC2_AVAILABILITY_ZONE_ID_2 - The second EC2 availability zone for spawning cartridge instances. Use EC2_AVAILABILITY_ZONE_ID_2=ap-southeast-1b.
  - EC2_TAG_USER - This EC2 tag is added to the cartridge instances under the key "User".
  The following is a sample payload:
  EC2_IDENTITY=XXX,EC2_SECRET=YYY,EC2_OWNER_ID=ZZZ,EC2_SECURITY_GROUP=allowall,EC2_KEY_PAIR=KKK,EC2_BASE_AMI=ap-southeast-1/ami-9232f5f1,EC2_INSTANCE_TYPE=m3.medium,EC2_REGION=ap-southeast-1,EC2_AVAILABILITY_ZONE_ID_1=ap-southeast-1a,EC2_AVAILABILITY_ZONE_ID_2=ap-southeast-1b,EC2_TAG_USER=Akila
  Replace the values in the payload template above with your actual values and paste the resulting text into the User data field.
- Click Next: Add Storage. You do not need to add or select any storage configurations.
- Click Next: Tag Instance. You do not need to add any tags for the EC2 instance.
- Click Next: Configure Security Group, click the Select an existing security group option, and then select the security group that you created.
- Click Review and Launch. After reviewing the instance details, click Launch.
- When prompted, select the key pair that you created.
- Select the acknowledgement and click Launch Instances.
- After you successfully configure the EC2 instance, you are redirected to the page that lists all your instances. It takes a short time for an instance to launch; until then, the status of the instance appears as pending. After the instance is launched, the status changes to running.
- Log in to the PPaaS Console (https://<PUBLIC_IP>:9443/console) using the default credentials (username=admin and password=admin). To find the console URL, get the public IP of the WSO2 Private PaaS demo instance from the EC2 dashboard. For example, if your public IP is 54.179.152.92, access the PPaaS Console using the following URL: https://54.179.152.92:9443/console
- Note the PPaaS dashboard that appears. The Monitoring tile appears here because PPaaS has been configured with WSO2 DAS in the demo AMI for monitoring and metering purposes.
Viewing the artifacts related to the sample application
The PPaaS EC2 demo AMI is preconfigured so that, when the instance is launched, all the artifacts that correspond to the sample applications, namely the network partition, auto-scaling policy, deployment policy, cartridge, and application policy, are created automatically. To learn more about these artifacts, see Understanding the key concepts in PPaaS.
Let's view the details of an artifact.
- Click Configurations in the PPaaS dashboard.
- Click on the respective artifact. For example, if you want to view details of the network partition, click Network Partition.
- Move your cursor over the available network partition tile and click Details in the popup menu.
- If you wish to view the respective artifact definition JSON, click Toggle View.
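For reference, a network partition definition for this EC2 setup might look roughly like the following sketch. The IDs and property values shown are illustrative (they mirror the region and availability zone used in the user data payload), and the exact structure may differ from what the console displays; the authoritative property list is in the Network Partition Resource Definition.

```
{
  "id": "network-partition-1",
  "provider": "ec2",
  "partitions": [
    {
      "id": "partition-1",
      "property": [
        { "name": "region", "value": "ap-southeast-1" },
        { "name": "zone",   "value": "ap-southeast-1a" }
      ]
    }
  ]
}
```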
Deploying a sample application
Let's deploy one of the sample applications.
- Click Applications in the PPaaS dashboard.
- Move your cursor over the wso2bps-350-application tile and click Deploy in the popup menu to deploy the BPS application.
In WSO2 PPaaS, an application is referred to as a composite application because it groups a set of cartridges together. When you deploy the application, PPaaS creates the relevant instances based on the application definition. A worker/manager cartridge includes worker nodes, which serve the requests received from clients, and a manager node, which deploys and configures artifacts. In this use case, consider an application that uses the wso2bps-350-application worker/manager cartridge with a cartridgeMin of 2 for the worker and 1 for the manager, and assume there is also a startup order requiring the workers to start before the manager. PPaaS first creates two worker nodes, based on the minimum count specified in the application definition: in a virtual machine (VM) scenario such as EC2, PPaaS spawns two worker nodes; in a Kubernetes scenario, it creates two pods. After the spawned nodes become active, PPaaS spawns the manager node.
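To make the worker/manager relationship concrete, the relevant parts of such an application definition JSON might look roughly like the following simplified fragment. The cartridge type names, aliases, cartridgeMax values, and dependency syntax are illustrative assumptions, and several required properties are omitted; the authoritative property list is in the Application Resource Definition.

```
{
  "applicationId": "wso2bps-350-application",
  "components": {
    "cartridges": [
      { "type": "wso2bps-350-worker",  "cartridgeMin": 2, "cartridgeMax": 4 },
      { "type": "wso2bps-350-manager", "cartridgeMin": 1, "cartridgeMax": 1 }
    ],
    "dependencies": {
      "startupOrders": [
        { "aliases": ["cartridge.wso2bps-350-worker", "cartridge.wso2bps-350-manager"] }
      ],
      "terminationBehaviour": "terminate-none"
    }
  }
}
```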
Viewing the application topology
Immediately after an application is deployed, all the nodes are initially in the created state; as the nodes in the clusters come up and become active, they gradually move to the active state. You can see this transition when viewing the respective application topology.
Let's view an application topology.
- Click Applications in the PPaaS dashboard.
- Move your cursor over the deployed wso2bps-350-application application tile and click View in the popup menu.
Viewing metering service information
Metering is useful for analyzing how PPaaS utilizes IaaS resources. For example, it helps you view how many instances were spawned, how long they ran, and more. You need this information to bill customers based on their resource consumption.
Let's view the metering service information related to a deployed application. PPaaS uses a WSO2 Data Analytics Server (DAS) analytics dashboard to provide metering service information.
Move your cursor to the application icon in the topology view and click Show Usage to access the DAS gadget.
If you are prompted, log in using the default user credentials (username=admin and password=admin).
The metering dashboard displays all the member life-cycle events (e.g., member started, activated, terminated, etc.) that occurred for this application, where members correspond to the instances spawned by PPaaS.
Viewing monitoring details of services
Monitoring is useful for analyzing the health status of each member and cluster, as it provides details on memory usage, load average, and in-flight requests. Monitoring also shows details about the instances that have been scaled.
Let's view the monitoring details of the services. PPaaS uses a WSO2 Data Analytics Server (DAS) analytics dashboard to provide monitoring service information.
Click Monitoring in the PPaaS dashboard.