Performance Analysis Results

This section describes the performance analysis experiments conducted on stream query processing and event ingestion with persistence.

Summary

The following table summarizes the results presented in this section.

| Tested Item | Data Store | Query Type | Amount of Events Processed | Average Throughput (events per second) | Average Latency (milliseconds) | Message Payload Size (bytes) |
|---|---|---|---|---|---|---|
| Simple Pass-through | None | None | 30 million | 900K | 0.9 | 12 |
| Filter | None | Filter out all the events | 30 million | 900K | 1.5 | 12 |
| Window - Small (1 second) | None | Sliding time window | 30 million | 100K | 48 | 12 |
| Window - Large (1 minute) | None | Sliding time window | 30 million | 100K | 130 | 12 |
| Patterns | None | Temporal event sequence patterns | 1250 million | 500K | 550 | 76 |
| Event Ingestion with Persistence | Oracle Event Store | Insert | 252 million | 70K | 42 | 40 |
| Event Ingestion with Persistence | Oracle Event Store | Update | 75 million | 20K | 12 | 40 |
| Event Ingestion with Persistence | MS SQL Event Store | Insert | 198 million | 55K | 44.2 | 40 |
| Event Ingestion with Persistence | MS SQL Event Store | Update | 3.6 million | 1K | 4.6 | 40 |
| Event Ingestion with Persistence | MySQL Event Store | Insert | 12.2 million | 3.4K | 2.14 | 40 |
| Event Ingestion with Persistence | MySQL Event Store | Update | 3 million | 500 | 0.5 | 40 |
Notes

  • Event ingestion with persistence tests were conducted using the default Amazon RDS configurations.

  • All the event ingestion with persistence tests were run for one hour.

  • Performance results were aggregated over windows of 5000K events.

  • For the first four tests in the above table, the input rate was 1000K events per second.

  • All the tests were conducted using the TCP transport.

Stream query processing

 


Scenario: Running multiple Siddhi queries

Infrastructure used
  • The experiments were carried out on two c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g

  • One node operated as a client.

  • The other node operated as a Stream Processor node.

  • Experiments were carried out using TCP as the transport.

Query Type: Simple Passthrough

Sample query:

@App:name("TCP_Benchmark")
@source(type = 'tcp', context='inputStream', @map(type='binary'))
define stream inputStream (iijtimestamp long,value float);

from inputStream
select iijtimestamp,value
insert into tempStream;

Results: 30 million events processed, average throughput of 900K events per second, latency of 0.9 ms.

Query Type: Filter

Three filter queries with different filter conditions were tested; each processed 30 million events.

Sample query 1:

@App:name("TCP_Benchmark")
@source(type = 'tcp', context='inputStream', @map(type='binary'))
define stream inputStream (iijtimestamp long,value float);

from inputStream[value<=1]
select iijtimestamp,value
insert into tempStream;

Results: average throughput of 900K events per second, latency of 1.5 ms.

Sample query 2:

@App:name("TCP_Benchmark")
@source(type = 'tcp', context='inputStream', @map(type='binary'))
define stream inputStream (iijtimestamp long,value float);

from inputStream[value<=0.5]
select iijtimestamp,value
insert into tempStream;

Results: average throughput of 450K events per second, latency of 1.1 ms.

Sample query 3:

@App:name("TCP_Benchmark")
@source(type = 'tcp', context='inputStream', @map(type='binary'))
define stream inputStream (iijtimestamp long,value float);

from inputStream[value<=0.25]
select iijtimestamp,value
insert into tempStream;

Results: average throughput of 226K events per second, latency of 0.6 ms.

Query Type: Window

Sample query (1-second sliding time window):

@App:name("TCP_Benchmark")
@source(type='tcp', context='inputStream', @map(type='binary'))
define stream inputStream (iijtimestamp long,value float);

from inputStream#window.time(1 sec)
select iijtimestamp,value
insert into tempStream;

Results: 30 million events processed, average throughput of 100K events per second, latency of 48 ms.

Sample query (1-minute sliding time window):

@App:name("TCP_Benchmark")
@source(type='tcp', context='inputStream', @map(type='binary'))
define stream inputStream (iijtimestamp long,value float);

from inputStream#window.time(1 min)
select iijtimestamp,value
insert into tempStream;

Results: 30 million events processed, average throughput of 100K events per second, latency of 130 ms.

Scenario: Running a Siddhi pattern query on the debs-2013-grand-challenge-soccer-monitoring dataset

Infrastructure used
  • The experiments were carried out on two c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g

  • One node operated as a client.

  • The other node operated as a Stream Processor node.

  • Experiments were carried out using TCP as the transport.

Dataset

The data used in the 2013 DEBS Grand Challenge was collected by the Real-Time Locating System deployed on a football field of the Nuremberg Stadium in Germany. Data originates from sensors located near the players’ shoes (1 sensor per leg) and in the ball (1 sensor). Each goalkeeper is equipped with two additional sensors, one in each hand. The sensors in the players’ shoes and hands produce data at a frequency of 200 Hz, while the sensor in the ball produces data at a frequency of 2000 Hz. The total data rate reaches roughly 15,000 position events per second. Every position event describes the position of a given sensor in a three-dimensional coordinate system, with the center of the playing field at coordinate (0, 0, 0).

For more details about the dataset, see DEBS 2013 Grand Challenge: Soccer monitoring.

Pattern used

We created a pattern for a goal-scoring scenario. In this scenario, the pattern matching query involves two events, referred to as e1 and e2, that should occur one after the other with some preconditions being satisfied. These preconditions include the position (denoted by x, y, and z) and the acceleration of the ball (denoted by a_abs). The numerical constants (such as 29880, 22560, etc.) in the sample pattern matching query, against which the values of x, y, and z are compared, correspond to the boundary points of the goal region.

 

Sample Siddhi App

@App:name("TCP_Benchmark")

@source(type = 'tcp', context='inputStream', @map(type='binary'))
define stream innerStream (iij_timestamp long, sid int, eventtt long, x double, y double, z int, v_abs double, a_abs int, vx int, vy int, vz int, ax int, ay int, az int);

from e1=innerStream[(x>29880 or x<22560) and y>-33968 and y<33965 and (sid==4 or sid==12 or sid==10 or sid==8)] ->
     e2=innerStream[(x<=29898 and x>22579) and y<=-33968 and z<2440 and a_abs>=55000 and (sid==4 or sid==12 or sid==10 or sid==8)]
select e2.sid as sid, e2.eventtt as eventtt, e2.x as x, e2.y as y, e2.z as z, e2.v_abs as v_abs, e2.a_abs as a_abs, e2.vx as vx, e2.vy as vy, e2.vz as vz, e2.ax as ax, e2.ay as ay, e2.az as az, e1.iij_timestamp as iij_timestamp
insert into outputStream;

 

Summary Results

Throughput (events per second): 500,000
Latency (ms): 550

Ingesting events with persistence

All the event ingestion with persistence tests were conducted using the deployment configurations described below.

Oracle event store

Infrastructure used

The following infrastructure was used in both scenarios:

  • c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances operated as the SP node and the TCP client.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g

  • db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with Oracle operated as the database node.

  • Customized TCP client operated as the data publisher (TCP producer found in samples).

  • Experiments were carried out using Oracle 12c.

 

Scenario 1: Insert Query - Persisting 252 million process monitoring events in Oracle

This test involved persisting process monitoring events of approximately 180 bytes each. The test injected 252 million events into WSO2 Stream Processor with a publishing throughput of 70,000 events per second over a period of one hour.
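The exact Siddhi application used in this test is not included here. As a rough illustration, the following is a minimal sketch of how a stream can be persisted to an Oracle-backed event table via the Siddhi RDBMS store; the stream schema, table name, and connection details are illustrative assumptions rather than the actual test configuration.

@App:name("OracleEventStoreSketch")

-- Hypothetical process monitoring stream; the actual schema used in the test is not published.
@source(type = 'tcp', context = 'monitoringStream', @map(type = 'binary'))
define stream monitoringStream (processId string, eventTimestamp long, cpuUsage double, memoryUsage double);

-- Event table backed by the Oracle RDS instance; connection details are placeholders.
@store(type = 'rdbms',
       jdbc.url = "jdbc:oracle:thin:@<rds-endpoint>:1521/ORCL",
       username = "<username>", password = "<password>",
       jdbc.driver.name = "oracle.jdbc.driver.OracleDriver")
define table MonitoringTable (processId string, eventTimestamp long, cpuUsage double, memoryUsage double);

-- Insert query: every incoming event is persisted into the event table.
from monitoringStream
select processId, eventTimestamp, cpuUsage, memoryUsage
insert into MonitoringTable;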

Throughput Graph

Latency Graph

Summary Results

Throughput (events per second): 70,000
Latency (ms): 42

 

Scenario 2: Update Query - Updating 10 million events in Oracle data store

The test first persisted 10 million process monitoring events of approximately 180 bytes each in a properly indexed Oracle database. 75 million update queries were then performed with a publishing throughput of 20,000 events per second over a period of one hour.
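The update query used in the test is likewise not published. The sketch below shows the general shape of a Siddhi update query against the hypothetical event table defined in the previous sketch, assuming updates are matched on the indexed processId column.

-- Hypothetical stream carrying new values for already persisted events.
define stream updateStream (processId string, eventTimestamp long, cpuUsage double, memoryUsage double);

-- Update query: rows in MonitoringTable whose processId matches the incoming event
-- are overwritten with the new attribute values.
from updateStream
select processId, eventTimestamp, cpuUsage, memoryUsage
update MonitoringTable
    on MonitoringTable.processId == processId;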

Throughput Graph

Latency Graph

Summary Results

Number of Persisted Events: 10 million
Throughput (events per second): 20,000
Latency (ms): 12

 

Microsoft SQL Server event store

Infrastructure used
  • c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instance operated as the SP node.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g

  • db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with MS SQL Enterprise Edition 2016 operated as the database node.

  • Customized TCP client operated as the data publisher (sample TCP client found in samples).

Scenario 1: Insert Query - Persisting 198 million process monitoring events in MS SQL

This test involved persisting process monitoring events of approximately 180 bytes each. The test injected 198 million events into WSO2 Stream Processor with a publishing throughput of 55,000 events per second over a period of one hour.

Throughput Graph

Latency Graph


Summary Results

Throughput (events per second): 55,000
Latency (ms): 44.2

Scenario 2: Update Query - Updating 10 million events in MS SQL data store 

This test first persisted 10 million process monitoring events of approximately 180 bytes each in a properly indexed MS SQL database. 3.6 million update queries were then performed with a publishing throughput of 1000 events per second over a period of one hour.

Throughput Graph

Summary Results

Number of Persisted Events: 10 million
Throughput (events per second): 1000
Latency (ms): 4.6

 

MySQL event store

Infrastructure Used
  • c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances operated as the SP node and the TCP client.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g

  • db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with MySQL Community Edition version 5.7 operated as the database node.

  • Customized TCP client operated as the data publisher (TCP producer found in samples).

  • Experiments were carried out using MySQL Community Server 5.7.19.

 

Scenario 1: Insert Query - Persisting 12.2 million process monitoring events in MySQL

This test involved persisting process monitoring events of approximately 180 bytes each. The test injected 12.2 million events into WSO2 Stream Processor with a publishing throughput of 3400 events per second over a period of one hour.

Throughput Graph

 

Latency Graph

Summary Results

Throughput (events per second): 3400
Latency (ms): 2.14

MySQL Upper Limit

After about 12.2 million events are published, a sudden drop can be observed in the receiver performance. This number can be considered the upper limit of the MySQL event store with default settings. To continue receiving events without major performance degradation, data should be purged periodically from the event store before it reaches this upper limit.

 

Scenario 2: Update Query - Updating 100K events in MySQL data store

The test first persisted 100,000 process monitoring events of approximately 180 bytes each in a properly indexed MySQL database. Three million update queries were then performed with a publishing throughput of 500 events per second over a period of one hour.

Throughput Graph


Latency Graph

Summary Results

Number of Persisted Events: 100K
Throughput (events per second): 500
Latency (ms): 0.5

Conclusion

The performance test results indicate that the event persistence performance of WSO2 Stream Processor is largely determined by the performance of the underlying event store database.
