WSO2 Carbon version 4.0.0 onwards supports deployment models that consist of 'worker' nodes and 'manager' nodes. A worker node serves requests received from clients, whereas a manager node deploys and configures artifacts (web applications, services, proxy services, and so on).
The worker/manager setup separates a WSO2 product's UI components, management console, and related functionality from its internal framework. Typically, the manager nodes are in read-write mode and authorized to make configuration changes, while the worker nodes are in read-only mode, authorized only to deploy artifacts and read configuration.
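For example, in a typical Carbon 4.x setup the worker nodes are started with the `-DworkerNode=true` system property, and the read-write/read-only split for artifacts is expressed through the Deployment Synchronizer section of `repository/conf/carbon.xml`: the manager commits artifacts to a shared SVN repository, while workers only check them out. A minimal sketch (the SVN URL and credentials are placeholders):

```xml
<!-- repository/conf/carbon.xml on the MANAGER node: read-write, so it
     commits new or changed artifacts to the shared SVN repository. -->
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/wso2-repo</SvnUrl>  <!-- placeholder URL -->
    <SvnUser>depsync</SvnUser>                         <!-- placeholder credentials -->
    <SvnPassword>password</SvnPassword>
    <SvnAnnotatedTagName>false</SvnAnnotatedTagName>
</DeploymentSynchronizer>

<!-- The same section on a WORKER node is read-only: it never commits,
     it only checks artifacts out of the repository. -->
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/wso2-repo</SvnUrl>
    <SvnUser>depsync</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnAnnotatedTagName>false</SvnAnnotatedTagName>
</DeploymentSynchronizer>
```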
Why separate the worker and manager nodes
- Improved security: Manager nodes typically sit behind a firewall that allows only admin access and are exposed only to clients running within the organization, while worker nodes can be exposed to external clients.
- Proper separation of concerns: Management nodes specialize in managing the setup, while worker nodes specialize in serving requests made to deployed artifacts. Only management nodes are authorized to add new artifacts to the system or make configuration changes.
- Specific worker-node tasks: Worker nodes can only deploy artifacts and read configuration, which limits them to a well-defined set of tasks.
- Lower memory requirements: Worker nodes have a lower memory footprint because the OSGi bundles related to the management console and UI are not loaded into them, which also benefits performance.
Worker/Manager separated clustering patterns
Since all WSO2 products are built on the cluster-enabled Carbon platform, most WSO2 products can be clustered in the same way. The process of separating the worker and manager nodes, and the configuration it requires, depends on the worker/manager clustering pattern you choose. You can select one of the following patterns based on your load and your targeted expenditure.
Although the examples below use WSO2 API Manager (APIM), the concepts apply equally to other WSO2 products.
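All of the patterns below start from the same base: Tribes-based clustering enabled in each node's `repository/conf/axis2/axis2.xml`. A minimal sketch, assuming an APIM cluster with placeholder hostnames and domain name:

```xml
<!-- repository/conf/axis2/axis2.xml: base clustering section shared by
     all of the patterns below. The domain name and hostname are
     illustrative placeholders. -->
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent"
            enable="true">
    <!-- "wka" (well-known address) membership avoids relying on multicast. -->
    <parameter name="membershipScheme">wka</parameter>
    <!-- All nodes that should form one cluster share the same domain. -->
    <parameter name="domain">wso2.apim.domain</parameter>
    <!-- This node's own address, as advertised to other members. -->
    <parameter name="localMemberHost">node1.apim.example.com</parameter>
    <parameter name="localMemberPort">4000</parameter>
</clustering>
```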
Worker/Manager clustering deployment pattern 1
This pattern consists of a single manager node and two worker nodes in a cluster; the workers are in high-availability mode while the manager is not. It is suitable in situations where deploying applications or modifying a running application is rare (hence a single management node is enough), but the application must run continuously (hence the worker nodes are clustered).
This pattern is rarely used; the preferred approach is to run two management nodes in Active/Passive mode, as in deployment pattern 2.
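With the `wka` membership scheme, each node's `<clustering>` section lists one or more well-known members through which it joins the cluster. A sketch for this pattern, where the two worker nodes act as the well-known members (hostnames and ports are placeholders):

```xml
<!-- Inside the <clustering> element of each node's axis2.xml: the two
     worker nodes are listed as well-known members, so any node
     (including the single manager) can join the cluster through them. -->
<members>
    <member>
        <hostName>worker1.apim.example.com</hostName>
        <port>4000</port>
    </member>
    <member>
        <hostName>worker2.apim.example.com</hostName>
        <port>4000</port>
    </member>
</members>
```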
Worker/Manager clustering deployment pattern 2
This pattern has two manager nodes in one cluster and two worker nodes in a separate cluster. It is similar to deployment pattern 1, but the management nodes run in Active/Passive mode for high availability. Active/Active mode for management nodes is generally not recommended, although it can be useful if you need high availability for your data center and location-based services. This pattern suits scenarios where application deployment or modification is frequent (hence the management nodes are clustered), but the overall load is low enough for the management cluster to share a single load balancer with the worker cluster.
Worker/Manager clustering deployment pattern 3
This pattern has two manager nodes in one sub-domain and two worker nodes in a separate sub-domain, with both sub-domains belonging to a single WSO2 product cluster domain. Each sub-domain uses its own load balancer while existing within the same cluster. Multiple load balancers introduce several unique configuration steps, so follow the relevant steps carefully. This pattern is similar to deployment pattern 2, but the application deployment/modification load (or any other administrative load) might be high, so the manager cluster gets a dedicated load balancer to prevent that load from affecting the worker cluster.
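In Carbon 4.x, the sub-domain split is typically expressed through a `subDomain` property in each node's `<clustering>` section; the `mgt` and `worker` names below follow WSO2's usual convention, but treat the exact values as illustrative:

```xml
<!-- axis2.xml on a MANAGER node: same cluster domain, "mgt" sub-domain. -->
<parameter name="domain">wso2.apim.domain</parameter>
<parameter name="properties">
    <property name="subDomain" value="mgt"/>
</parameter>

<!-- axis2.xml on a WORKER node: same cluster domain, "worker" sub-domain.
     Each load balancer is then configured to front exactly one sub-domain,
     so administrative traffic never competes with client traffic. -->
<parameter name="domain">wso2.apim.domain</parameter>
<parameter name="properties">
    <property name="subDomain" value="worker"/>
</parameter>
```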