Objective: Demonstrate simple load balancing among a set of endpoints.

Prerequisites

Start Synapse with the following sample configuration:

<definitions xmlns="http://ws.apache.org/ns/synapse">
    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <loadbalance>
                        <endpoint>
                            <address uri="http://localhost:9001/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9002/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9003/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                    </loadbalance>
                </endpoint>
            </send>
            <drop/>
        </in>
        <out>
            <!-- Forward the response to its implicit To EPR (i.e. back to the client) -->
            <send/>
        </out>
    </sequence>
    <sequence name="errorHandler">
        <makefault response="true">
            <code value="tns:Receiver" xmlns:tns="http://www.w3.org/2003/05/soap-envelope"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>
        <send/>
    </sequence>
</definitions>
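
One way to start Synapse with this configuration, assuming the default Apache Synapse directory layout, is to save it as <Synapse installation directory>/repository/conf/synapse.xml and launch the server from the bin directory (if your distribution packages this scenario as a numbered sample, ./synapse.sh -sample <number> can be used instead):

cd <Synapse installation directory>/bin
./synapse.sh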

Deploy the LoadbalanceFailoverService by switching to the <Synapse installation directory>/samples/axis2Server/src/LoadbalanceFailoverService directory and running ant.
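
For example, on Linux (assuming Apache Ant is on the PATH):

cd <Synapse installation directory>/samples/axis2Server/src/LoadbalanceFailoverService
ant
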
Start three instances of the sample Axis2 server on HTTP ports 9001, 9002, and 9003, giving each server a unique name.

Example commands to run the sample Axis2 servers on Linux from the <Synapse installation directory>/samples/axis2Server directory are listed below:

./axis2server.sh -http 9001 -https 9005 -name MyServer1
./axis2server.sh -http 9002 -https 9006 -name MyServer2
./axis2server.sh -http 9003 -https 9007 -name MyServer3

This completes the environment setup for load balancing. Start the load balance and failover client using the following command:

ant loadbalancefailover -Di=100

This client sends 100 requests to the LoadbalanceFailoverService through Synapse. Synapse distributes the load among the three endpoints in the configuration in a round-robin manner. The LoadbalanceFailoverService appends the name of the server to each response, so the client can determine which server processed the message. If you examine the console output of the client, you can see that the requests are processed by the three servers as follows:

[java] Request: 1 ==> Response from server: MyServer1
[java] Request: 2 ==> Response from server: MyServer2
[java] Request: 3 ==> Response from server: MyServer3
[java] Request: 4 ==> Response from server: MyServer1
[java] Request: 5 ==> Response from server: MyServer2
[java] Request: 6 ==> Response from server: MyServer3
[java] Request: 7 ==> Response from server: MyServer1
...
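
Round-robin is the default algorithm of the load balance endpoint, which is why the configuration above does not name it. The loadbalance element also accepts an algorithm attribute that takes an algorithm class name; the fragment below makes the default explicit (an illustrative variant, not a required change; verify the class name against your Synapse release):

<loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
    <endpoint>
        <address uri="http://localhost:9001/services/LBService1"/>
    </endpoint>
    <!-- remaining child endpoints as in the configuration above -->
</loadbalance>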

Now run the client without the -Di=100 parameter (i.e. ant loadbalancefailover) so that it keeps sending requests indefinitely. While the client is running, shut down the server named MyServer1. You can observe that requests are distributed only between MyServer2 and MyServer3 after MyServer1 is shut down. The console output before and after shutting down MyServer1 is listed below (MyServer1 was shut down after request 63):

...
[java] Request: 61 ==> Response from server: MyServer1
[java] Request: 62 ==> Response from server: MyServer2
[java] Request: 63 ==> Response from server: MyServer3
[java] Request: 64 ==> Response from server: MyServer2
[java] Request: 65 ==> Response from server: MyServer3
[java] Request: 66 ==> Response from server: MyServer2
[java] Request: 67 ==> Response from server: MyServer3

Now restart MyServer1. You can observe that requests are sent to all three servers again after roughly 60 seconds. This is because <suspendDurationOnFailure> is set to 60 seconds in the configuration: the load balance endpoint suspends a failed child endpoint for only 60 seconds after detecting the failure, and then retries it.
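
Since <suspendDurationOnFailure> controls how long a failed endpoint stays out of rotation, shortening it makes a recovered server rejoin sooner. For example, the following variant of one child endpoint (an illustrative change, not part of the sample) would suspend it for only 10 seconds:

<endpoint>
    <address uri="http://localhost:9001/services/LBService1">
        <enableAddressing/>
        <suspendDurationOnFailure>10</suspendDurationOnFailure>
    </address>
</endpoint>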
