This page lists performance analysis experiments conducted focusing on stream query processing and event ingestion with persistence.
Summary
The following table shows a complete summary of the content presented on this page.
Tested Item | Data Store | Query Type | Amount of Events Processed | Average Throughput (events per second) | Average Latency (ms) |
---|---|---|---|---|---|
Simple passthrough | None | None | 30 million | 900K | 0.9 |
Filter | None | Filter out all the events | 30 million | 900K | 1.5 |
Window - small (1 second) | None | Sliding time window | 30 million | 100K | 48 |
Window - large (1 minute) | None | Sliding time window | 30 million | 100K | 130 |
Patterns | None | Temporal event sequence patterns | 1250 million | 500K | 550 |
Event ingestion with persistence | Oracle event store | Insert | 252 million | 70K | 42 |
Event ingestion with persistence | Oracle event store | Update | 75 million | 20K | 12 |
Event ingestion with persistence | MS SQL event store | Insert | 198 million | 55K | 44.2 |
Event ingestion with persistence | MS SQL event store | Update | 3.6 million | 1K | 4.6 |
Event ingestion with persistence | MySQL event store | Insert | 12.2 million | 3.4K | 2.14 |
Event ingestion with persistence | MySQL event store | Update | 3 million | 500 | 0.5 |
Stream Query Processing
Scenario: Running multiple Siddhi queries
Infrastructure used
- The experiments were carried out on two c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances.
- Linux kernel 4.44, Java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- One node operated as a client.
- The other node operated as a Stream Processor node.
- Experiments were carried out using TCP as the transport.
Query Type | Sample Query | Amount of Events Processed | Average Throughput (events per second) | Latency (ms) |
---|---|---|---|---|
Simple passthrough | | 30 million | 900K | 0.9 |
Filter | define stream inputStream (iijtimestamp long, value float); from inputStream[value<=1] select iijtimestamp, value insert into tempStream; | 30 million | 900K | 1.5 |
| | 30 million | 450K | 1.1 |
| | 30 million | 226K | 0.6 |
Window (1 second) | | 30 million | 100K | 48 |
Window (1 minute) | from inputStream#window.time(1 min) select iijtimestamp, value insert into tempStream; | 30 million | 100K | 130 |
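The passthrough and one-second window cells are blank in the source table. The following is a minimal sketch of what those queries could look like, assuming the same inputStream schema as the filter query above; it is a reconstruction for illustration, not the benchmark's published code.

```
-- Hypothetical reconstruction of the unpublished benchmark queries.
define stream inputStream (iijtimestamp long, value float);

-- Simple passthrough: forward every event unchanged.
from inputStream
select iijtimestamp, value
insert into tempStream;

-- One-second sliding time window, mirroring the one-minute variant in the table.
from inputStream#window.time(1 sec)
select iijtimestamp, value
insert into tempStream;
```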
Scenario: Running a Siddhi pattern query on the DEBS 2013 Grand Challenge soccer monitoring dataset
Infrastructure used
- The experiments were carried out on two c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances.
- Linux kernel 4.44, Java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- One node operated as a client.
- The other node operated as a Stream Processor node.
- Experiments were carried out using TCP as the transport.
Dataset
The data used in the 2013 DEBS Grand Challenge was collected by the real-time locating system deployed on a football field at the Nuremberg Stadium in Germany. Data originates from sensors located near the players' shoes (one sensor per leg) and in the ball (one sensor). The goalkeeper is equipped with two additional sensors, one in each hand. The sensors in the players' shoes and hands produce data at a frequency of 200 Hz, while the sensor in the ball produces data at a frequency of 2,000 Hz. The total data rate reaches roughly 15,000 position events per second. Every position event describes the position of a given sensor in a three-dimensional coordinate system whose origin (0, 0, 0) is the center of the playing field.
For more details about the dataset, see DEBS 2013 Grand Challenge: Soccer monitoring.
Pattern used
We created patterns for a goal-scoring scenario. In this scenario, the pattern matching query involves two events, referred to as e1 and e2, that should occur one after the other with some preconditions being satisfied. These preconditions include the position (denoted by x, y, and z) and the acceleration of the ball (denoted by a_abs). The numerical constants (such as 29880 and 22560) in the sample pattern matching query, against which the values of x, y, and z are compared, correspond to the boundary points of the goal region.
```
@App:name("TCP_Benchmark")

@source(type = 'tcp', context = 'inputStream', @map(type = 'binary'))
define stream innerStream (iij_timestamp long, sid int, eventtt long, x double, y double, z int, v_abs double, a_abs int, vx int, vy int, vz int, ax int, ay int, az int);

from e1=innerStream[(x>29880 or x<22560) and y>-33968 and y<33965 and (sid==4 or sid==12 or sid==10 or sid==8)]
    -> e2=innerStream[(x<=29898 and x>22579) and y<=-33968 and z<2440 and a_abs>=55000 and (sid==4 or sid==12 or sid==10 or sid==8)]
select e2.sid as sid, e2.eventtt as eventtt, e2.x as x, e2.y as y, e2.z as z, e2.v_abs as v_abs, e2.a_abs as a_abs, e2.vx as vx, e2.vy as vy, e2.vz as vz, e2.ax as ax, e2.ay as ay, e2.az as az, e1.iij_timestamp as iij_timestamp
insert into outputStream;
```
Summary Results
Throughput (events per second) | 500,000 |
---|---|
Latency (ms) | 550 |
Ingesting Events with Persistence
All the event ingestion with persistence tests were conducted using the following deployment configuration.
Oracle Event Store
Infrastructure used
The following infrastructure was used in both scenarios:
- c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances operated as the SP node and the TCP client.
- Linux kernel 4.44, Java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- A db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with Oracle operated as the database node.
- A customized TCP client operated as the data publisher (the TCP producer found in samples).
- Experiments were carried out using Oracle 12c.
Scenario 1: Insert Query - Persisting 252 million process monitoring events in Oracle
This test involved persisting process monitoring events, each of approximately 180 bytes. The test injected 252 million events into WSO2 Stream Processor at a publishing rate of 70,000 events per second over a period of one hour.
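As an illustration of how such an insert scenario can be wired up in Siddhi, the sketch below defines a store-backed table and persists every incoming event. The stream schema, table name, and all connection parameters (jdbc.url, username, password, jdbc.driver.name) are hypothetical placeholders rather than the benchmark's actual configuration; the MS SQL and MySQL scenarios below follow the same shape with only the JDBC URL and driver changed.

```
@App:name("EventPersistenceBenchmark")

-- Hypothetical schema; the actual ~180-byte process monitoring event format was not published.
@source(type = 'tcp', context = 'inputStream', @map(type = 'binary'))
define stream inputStream (processId string, ts long, payload string);

-- Store-backed table; all connection parameters are placeholders.
@store(type = 'rdbms',
       jdbc.url = 'jdbc:oracle:thin:@db-host:1521/ORCL',
       username = 'benchmark', password = 'benchmark',
       jdbc.driver.name = 'oracle.jdbc.driver.OracleDriver')
define table ProcessMonitoringTable (processId string, ts long, payload string);

-- Insert scenario: persist every incoming event to the event store.
from inputStream
select processId, ts, payload
insert into ProcessMonitoringTable;
```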
Throughput Graph
Latency Graph
Summary Results
Throughput (events per second) | 70,000 |
---|---|
Latency (ms) | 42 |
Scenario 2: Update Query - Updating 10 million events in the Oracle data store
The test injected 10 million events into a properly indexed Oracle database. The test involved persisting process monitoring events, each of approximately 180 bytes. 75 million update queries were performed at a publishing throughput of 20,000 events per second over a period of one hour.
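The update scenario can be sketched the same way, reusing the hypothetical table above. The key column and updated attributes are illustrative assumptions, since the actual indexed schema was not published.

```
-- Update scenario: overwrite the stored row whose key matches the incoming event.
from inputStream
select processId, ts, payload
update ProcessMonitoringTable
    set ProcessMonitoringTable.payload = payload, ProcessMonitoringTable.ts = ts
    on ProcessMonitoringTable.processId == processId;
```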
Throughput Graph
Latency Graph
Summary Results
Number Of Persisted Events | 10 million |
---|---|
Throughput (events per second) | 20,000 |
Latency (ms) | 12 |
Microsoft SQL Server Event Store
Infrastructure used
- A c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instance operated as the SP node.
- Linux kernel 4.44, Java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- A db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with MS SQL Enterprise Edition 2016 operated as the database node.
- A customized TCP client operated as the data publisher (the sample TCP client found in samples).
Scenario 1: Insert Query - Persisting 198 million process monitoring events in MS SQL
This test involved persisting process monitoring events, each of approximately 180 bytes. The test injected 198 million events into WSO2 Stream Processor at a publishing throughput of 55,000 events per second over a period of one hour.
Throughput Graph
Latency Graph
Summary Results
Throughput (events per second) | 55,000 |
---|---|
Latency (ms) | 44.2 |
Scenario 2: Update Query - Updating 10 million events in the MS SQL data store
This test injected 10 million events into a properly indexed MS SQL database. The test involved persisting process monitoring events, each of approximately 180 bytes. 3.6 million update queries were performed at a publishing throughput of 1,000 events per second over a period of one hour.
Throughput Graph
Summary Results
Number Of Persisted Events | 10 million |
---|---|
Throughput (events per second) | 1,000 |
Latency (ms) | 4.6 |
MySQL Event Store
Infrastructure used
- c4.2xlarge (8 vCPU, 16 GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances operated as the SP node and the TCP client.
- Linux kernel 4.44, Java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- A db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with MySQL Community Edition version 5.7 operated as the database node.
- A customized TCP client operated as the data publisher (the TCP producer found in samples).
- Experiments were carried out using MySQL Community Server 5.7.19.
Scenario 1: Insert Query - Persisting 12.2 million process monitoring events in MySQL
This test involved persisting process monitoring events, each of approximately 180 bytes. The test injected 12.2 million events into WSO2 Stream Processor at a publishing throughput of 3,400 events per second over a period of one hour.
Throughput Graph
Latency Graph
Summary Results
Throughput (events per second) | 3,400 |
---|---|
Latency (ms) | 2.14 |
Info: After about 12.2 million events are published, a sudden drop can be observed in the receiver performance. This number can be considered the upper limit of the MySQL event store with default settings. To continue receiving events without major performance degradation, data should be purged periodically from the event store before it reaches this limit.
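One possible shape for the periodic purging suggested above is a time trigger that deletes old rows from the store-backed table. This is a sketch only; the trigger interval, retention period, table, and column names are hypothetical.

```
-- Fires once every hour; triggered_time is the trigger event's timestamp in milliseconds.
define trigger PurgeTrigger at every 1 hour;

-- Purge sketch: delete rows older than one hour from the (hypothetical) event store table.
from PurgeTrigger
delete ProcessMonitoringTable
    on ProcessMonitoringTable.ts < (triggered_time - 3600000L);
```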
Scenario 2: Update Query - Updating 100K events in the MySQL data store
The test injected 100,000 events into a properly indexed MySQL database. The test involved persisting process monitoring events, each of approximately 180 bytes. Three million update queries were performed at a publishing throughput of 500 events per second over a period of one hour.
Throughput Graph
Latency Graph
Summary Results
Number Of Persisted Events | 100K |
---|---|
Throughput (events per second) | 500 |
Latency (ms) | 0.5 |
Info: The test results indicate that the event persistence performance of WSO2 Stream Processor is characterized by the performance of the event store database.