Warning: This page is currently under construction, so this content is not yet finalized.
When a single WSO2 product is interacting with a service or web application, a session object is created and remains in the memory of that WSO2 product. However, in a clustered environment, where multiple WSO2 product servers are fronted by a load balancer that distributes the load among them, the situation is a little different. If WSO2 server A does task X, WSO2 server B does task Y, and WSO2 server C does task Z, and each one has its own session object, there is no way for one WSO2 server to know what the others are doing. So, to keep these servers in step, sticky sessions are used.
A sticky session ensures that all interactions with the WSO2 servers happen on one product server only, even though there are other WSO2 product servers present in the cluster. As a result, the session object remains the same for the duration of the interaction.
If the backend application is stateful, it may use sessions. Sessions can be managed in two different ways:
- Using sticky sessions: In this approach, once a session is created by the backend server, a session ID is sent back to the client in the response message. This session ID is tracked by the intermediate load balancers. If the user/client sends another request with the same session ID, that request is routed to the same backend server (see the routing sketch after this list).
- Session replication: In this approach, the backend servers replicate the sessions among all the nodes in the cluster, so the load balancers can send any request to any node.
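The following is a minimal sketch of the sticky-session idea from the load balancer's point of view. It is not WSO2 or load balancer code; the class name, the backend address format, and the round-robin fallback are all illustrative assumptions.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sticky-session router: requests that carry a known session ID
// are always sent back to the node that created that session.
public class StickySessionRouter {

    private final List<String> backends;                        // e.g. ["serverA:9443", "serverB:9443"]
    private final Map<String, String> sessionToBackend = new ConcurrentHashMap<>();
    private final AtomicInteger roundRobin = new AtomicInteger();

    public StickySessionRouter(List<String> backends) {
        this.backends = backends;
    }

    /** Returns the backend that should handle this request. */
    public String route(String sessionId) {
        if (sessionId == null) {
            // No session yet: pick any node (simple round robin here).
            return backends.get(roundRobin.getAndIncrement() % backends.size());
        }
        // Session already bound: keep routing it to the same node.
        return sessionToBackend.computeIfAbsent(sessionId,
                id -> backends.get(roundRobin.getAndIncrement() % backends.size()));
    }
}
```

With session replication, by contrast, the `sessionToBackend` mapping above would be unnecessary, because any node could serve any session.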
Sticky sessions can be implemented at the HTTP level or at the SOAP level. One downside of this approach is that if a node fails, the sessions associated with that node are lost and have to be re-established. It is therefore common to couple this solution with a shared database: with sticky sessions enabled, the session data is kept in memory, while persistent data is saved to a database.
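The sketch below shows this split in a plain servlet, assuming a servlet-based backend application; the `CheckoutServlet` class, the `orders` table, and the `item` parameter are hypothetical names used only for illustration.

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import javax.sql.DataSource;

// Illustrative backend servlet: transient state lives in the in-memory session
// (which sticky sessions pin to this node), while data that must survive a
// node failure is written to a shared database.
public class CheckoutServlet extends HttpServlet {

    private DataSource dataSource;  // assumed to be looked up or injected at startup

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Kept only in this node's memory; lost if this node goes down.
        HttpSession session = req.getSession();
        session.setAttribute("lastItem", req.getParameter("item"));

        // Persistent data goes to the shared database so it survives node failure.
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO orders (session_id, item) VALUES (?, ?)")) {
            ps.setString(1, session.getId());
            ps.setString(2, req.getParameter("item"));
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new IOException("Failed to persist order", e);
        }
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}
```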
Finally, if cluster nodes share state and that state is not stored in a shared persistent medium such as a database, all changes made on each node have to be disseminated to all the other nodes in the cluster. Often, this is implemented using some kind of group communication method. Group communication keeps track of the members of groups of nodes defined by users and updates the group membership when nodes join or leave. When a message is sent using group communication, it is guaranteed that all nodes in the current group will receive the message. For WSO2 products, the clustering implementation uses Hazelcast as the group communication mechanism to disseminate all changes to the other nodes in the cluster.

Again, there are two choices for how updates are propagated: synchronously or asynchronously. In the former case, the request does not return until all the replicas in the system are updated, whereas in the latter case the replicas are updated in the background, so there may be a delay before the new values are reflected on the other nodes. Coupled with sticky sessions, this can be acceptable if strict consistency is not a concern. For example, if all write operations only append data and do not edit previous values, lazy propagation of changes is sufficient, and sticky sessions ensure that the user who made a change continues to see it. Alternatively, if the read-write ratio is sufficiently high, all writes can be done on one node while the other nodes serve reads.
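As a rough illustration of the synchronous versus asynchronous choice, the sketch below uses the plain Hazelcast 3.x API directly. It is not the WSO2 clustering implementation itself, only the underlying idea; the map name "session-data" and the stored values are assumptions made for the example.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

// Illustrative use of Hazelcast to share state between cluster nodes.
public class SharedStateExample {

    public static void main(String[] args) {
        // Each cluster node creates an instance; Hazelcast handles group membership
        // (nodes joining and leaving) and delivers updates to the current members.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        IMap<String, String> sessions = hz.getMap("session-data");

        // Synchronous update: the call returns only after the owning node
        // has applied the change.
        sessions.put("session-123", "cart=book,pen");

        // Asynchronous update: returns immediately; other nodes may briefly
        // observe the old value until the change propagates.
        sessions.putAsync("session-123", "cart=book,pen,notebook");

        hz.shutdown();
    }
}
```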