
The following sections analyze the results of WSO2 API Manager performance tests done in the Amazon EC2 environment.

...

See: JMeter Remote Test. Two JMeter servers are used to simulate a high number of concurrent users.
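For illustration only, a typical distributed JMeter run with two remote servers looks roughly as follows; the host names, test plan, and result file below are placeholders rather than the exact commands used in these tests.

    # On each JMeter server node, start the remote engine (binding RMI to the node's IP).
    jmeter-server -Djava.rmi.server.hostname=<server-private-ip>

    # On the JMeter client node, run the test plan in non-GUI mode against both servers;
    # -R takes a comma-separated list of remote hosts, and the results from both servers
    # are aggregated into results.jtl on the client.
    jmeter -n -t apim-test-plan.jmx -R jmeter-server-01,jmeter-server-02 -l results.jtl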



The following are the EC2 instances that the API-M 2.5.0 performance tests were carried out on.

Synapse Gateway

Name                              | EC2 Instance Type | vCPU | Mem (GiB)
Apache JMeter Client              | c3.large          | 2    | 3.75
Apache JMeter Server 01           | c3.xlarge         | 4    | 7.5
Apache JMeter Server 02           | c3.xlarge         | 4    | 7.5
WSO2 API Manager (Synapse)        | c3.xlarge         | 4    | 7.5
Netty HTTP Echo Service (Backend) | c3.xlarge         | 4    | 7.5
MySQL (RDS)                       | db.m3.medium      | 1    | 3.75


Ballerina Gateway

Name                              | EC2 Instance Type | vCPU | Mem (GiB)
Apache JMeter Client              | c3.large          | 2    | 3.75
Apache JMeter Server 01           | c3.xlarge         | 4    | 7.5
Apache JMeter Server 02           | c3.xlarge         | 4    | 7.5
Microgateway                      | c3.large          | 2    | 3.75
Netty HTTP Backend                | c3.xlarge         | 4    | 7.5

Warning

As the EC2 instances listed above are now categorised by AWS as "previous generation" instance types, the following are the similar "current generation" EC2 instances. Note, however, that WSO2 API-M 2.5.0 has not been tested on these instances.

Synapse Gateway

Name                              | EC2 Instance Type | vCPU | Mem (GiB)
Apache JMeter Client              | c5.large          | 2    | 4
Apache JMeter Server 01           | c5.xlarge         | 4    | 8
Apache JMeter Server 02           | c5.xlarge         | 4    | 8
WSO2 API Manager (Synapse)        | c5.xlarge         | 4    | 8
Netty HTTP Backend                | c5.xlarge         | 4    | 8
MySQL (RDS)                       | db.m5.large       | 2    | 8


Ballerina Gateway

Name                              | EC2 Instance Type | vCPU | Mem (GiB)
Apache JMeter Client              | c5.large          | 2    | 4
Apache JMeter Server 01           | c5.xlarge         | 4    | 8
Apache JMeter Server 02           | c5.xlarge         | 4    | 8
Microgateway                      | c5.large          | 2    | 4
Netty HTTP Backend                | c5.xlarge         | 4    | 8

Refer to the following links for more details on Amazon EC2 instance types:

...

  • # Samples - The number of requests sent with the given number of concurrent users.

  • Error Count - The number of requests that resulted in an error.

  • Error % - The percentage of requests that resulted in an error.

  • Average - The average response time of a set of results.

  • Min - The shortest time taken for a request.

  • Max - The longest time taken for a request.

  • 90th Percentile - 90% of the requests took no more than this time; the remaining samples took at least as long.

  • 95th Percentile - 95% of the requests took no more than this time; the remaining samples took at least as long.

  • 99th Percentile - 99% of the requests took no more than this time; the remaining samples took at least as long.

  • Throughput - The number of requests processed per second (see the sketch following this list).

  • Received KB/sec - The throughput measured in kilobytes received per second.

  • Sent KB/sec - The throughput measured in kilobytes sent per second.
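The following is a minimal sketch, not the project's actual analysis script, of how these statistics can be derived from a JMeter CSV result log. It assumes JMeter's default CSV output, where results.jtl is a placeholder file name, column 1 is the timeStamp in epoch milliseconds, and column 2 is the elapsed time in milliseconds.

    # Sort the elapsed times so that percentiles can be read off by rank.
    tail -n +2 results.jtl | cut -d, -f2 | sort -n > elapsed.txt

    # Average, min, max and nearest-rank percentiles of the response times.
    awk '
        { sum += $1; v[NR] = $1 }
        END {
            n = NR
            printf "samples=%d average=%.2f min=%d max=%d\n", n, sum / n, v[1], v[n]
            printf "90th=%d 95th=%d 99th=%d\n", v[int(n * 0.90)], v[int(n * 0.95)], v[int(n * 0.99)]
        }' elapsed.txt

    # Throughput = number of samples divided by the wall-clock duration of the test.
    tail -n +2 results.jtl | cut -d, -f1 | sort -n | awk '
        NR == 1 { start = $1 }
        END     { printf "throughput=%.2f req/s\n", NR / (($1 - start) / 1000) }'

Received KB/sec and Sent KB/sec follow the same pattern using the bytes and sentBytes columns of the same log.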

In addition to the above, the following additional details were recorded for every test.

  • GC Throughput - The percentage of time that the application was not busy with garbage collection (GC).

GC throughput and other GC-related details were obtained from the GC logs produced by WSO2 API Manager.

The following are the GC flags used:

-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:"$CARBON_HOME/repository/logs/gc.log"
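As a rough illustration only, assuming the JDK 8 -XX:+PrintGCDetails log format produced by the flags above (the actual analysis scripts may use a dedicated GC log analyser instead), GC throughput can be estimated by summing the reported pause times and comparing them against the JVM uptime recorded with the last GC event:

    GC_LOG="$CARBON_HOME/repository/logs/gc.log"

    # Total time spent in GC pauses: sum every ", <seconds> secs]" entry in the log.
    gc_time=$(grep -oE ', [0-9]+\.[0-9]+ secs\]' "$GC_LOG" | awk '{ sum += $2 } END { print sum }')

    # JVM uptime (in seconds) at the last GC event: "<datestamp>: <uptime>: [" at the start of the line.
    uptime=$(grep -oE '^[^ ]+: [0-9]+\.[0-9]+: \[' "$GC_LOG" | tail -n 1 | awk -F': ' '{ print $2 }')

    # GC throughput = percentage of elapsed time the application was not paused for GC.
    awk -v gc="$gc_time" -v up="$uptime" 'BEGIN { printf "GC throughput: %.2f%%\n", 100 * (1 - gc / up) }'

This treats the uptime of the last recorded GC event as the total elapsed time, so it slightly overestimates throughput if the test run continues long after the last collection.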

Info

The process memory was not considered, as the JVM operates within an already reserved heap area.

Performance Test Scripts

All scripts used to run the performance tests and analyze results are in the following repositories.

...


Throughput Comparison

The Echo API recorded some errors with the 100 KiB message size for 1000 and 2000 concurrent users.

The following charts show what happens to the server throughput when considering all results.

  • Throughput (Requests/sec) vs Concurrent Users


  • Throughput (Requests/sec) vs Message Size (Bytes)


  • Throughput (Requests/sec) vs Sleep Time (ms)


Average Response Time Comparison


The following charts show what happens to the average response time when considering all results.

  • Average Response Time (ms) vs Concurrent Users

  • Average Response Time (ms) vs Message Size (Bytes)

  • Average Response Time (ms) vs Sleep Time (ms)

GC Throughput Comparison

The following charts show the GC throughput behavior when considering all results.



  • API Manager GC Throughput (%) vs Concurrent Users


  • API Manager GC Throughput (%) vs Message Size (Bytes)

  • API Manager GC Throughput (%) vs Sleep Time (ms)


Refer to Observations from all results for more details on the charts.