This page lists the performance analysis experiments conducted on stream query processing and event ingestion with persistence. The following table summarizes the content presented on this page.
...
Average Latency (milliseconds)
...
Oracle Event Store
...
MS SQL Event Store
...
MySQL Event Store
...
Stream Query Processing
Infrastructure Used
The experiments were carried out on two c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances.
- Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- One node as the client
- Another node as the Stream Processor node
- Experiments were carried out using TCP as the transport.
Scenario: Running Multiple Siddhi Queries
...
@App:name("TCP_Benchmark")
@source(type = 'tcp', context='inputStream',@map(type='binary'))
define stream inputStream (iijtimestamp long,value float);
from inputStream
select iijtimestamp,value
insert into tempStream;
...
define stream inputStream (iijtimestamp long,value float);
...
@App:name("TCP_Benchmark")
...
@App:name("TCP_Benchmark")
...
Window
...
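The Siddhi application used for the window scenario is elided above. Purely as an illustrative sketch (the window size and aggregation below are assumptions, not the query used in the benchmark), a windowed variant of the same TCP benchmark could look as follows:

@App:name("TCP_Benchmark_Window")
@source(type = 'tcp', context='inputStream', @map(type='binary'))
define stream inputStream (iijtimestamp long, value float);

-- aggregate over a 1-second time batch window (illustrative window size)
from inputStream#window.timeBatch(1 sec)
select iijtimestamp, avg(value) as avgValue
insert into tempStream;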
Scenario: Running a Siddhi Pattern Query on the debs-2013-grand-challenge-soccer-monitoring dataset
Infrastructure Used
The experiments were carried out on two c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances.
- Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- One node as the client
- Another node as the Stream Processor node
- Experiments were carried out using TCP as the transport.
Data Set
The data used in the 2013 DEBS Grand Challenge was collected by the real-time locating system deployed on the football field of the Nuremberg Stadium in Germany. Data originates from sensors located near the players' shoes (1 sensor per leg) and in the ball (1 sensor). The goalkeeper is equipped with two additional sensors, one at each hand. The sensors in the players' shoes and hands produce data at a frequency of 200 Hz, while the sensor in the ball produces data at 2000 Hz. The total data rate reaches roughly 15,000 position events per second. Every position event describes the position of a given sensor in a three-dimensional coordinate system whose origin, coordinate (0, 0, 0), is the center of the playing field; the dimensions of the playing field and the coordinates of the kick-off are defined relative to this origin.
For more details about the dataset, refer to the DEBS 2013 Grand Challenge dataset description.
Pattern Used
We created a pattern for a goal-scoring scenario. The pattern matching query involves two events, e1 and e2, which should occur one after the other with certain preconditions being satisfied, such as the position of the ball (denoted by x, y, and z) and its acceleration (denoted by a_abs). The numerical constants (such as 29880, 22560, etc.) with which the x, y, and z values are compared correspond to the boundary points of the goal region.
@App:name("TCP_Benchmark")
@source(type = 'tcp', context='inputStream',@map(type='binary'))
define stream innerStream(iij_timestamp long,sid int, eventtt long, x double, y, double, z int, v_abs double, a_abs int, vx int, vy int, vz int, ax int,ay int, az int);
from e1=innerStream[(x>29880 or x<22560) and y>-33968 and y<33965 and (sid==4 or sid ==12 or sid==10 or sid==8)]
-> e2=innerStream[(x<=29898 and x>22579) and y<=-33968 and z<2440 and a_abs>=55000 and (sid==4 or sid ==12 or sid==10 or sid==8)]
select
e2.sid as sid, e2.eventtt as eventtt, e2.x as x,e2.y as y, e2.z as z,e2.v_abs as v_abs,e2.a_abs as a_abs, e2.vx as vx,e2.vy as vy, e2.vz as
vz, e2.ax as ax,e2.ay as ay, e2.az as az, e1.iij_timestamp as iij_timestamp
insert into outputStream; |
Summary Results
...
Event Ingestion with Persistence
All the event ingestion with persistence tests were conducted using the following deployment configuration.
Oracle Event Store
Infrastructure Used
- c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instance as the SP node
- Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with Oracle as the database node
- Customized TCP client as the data publisher (TCP producer found in samples)
- Experiments were carried out using Oracle 12c.
Scenario 1: Insert Query - Persisting 252 Million Process Monitoring Events on Oracle
This test involved persisting process monitoring events, each approximately 180 bytes in size. The test injected 252 million events into the Stream Processor with a publishing TPS of 70,000 events/second over a one-hour period.
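The exact Siddhi application used for these persistence tests is not included on this page. The following is a minimal sketch, assuming a hypothetical process monitoring schema and placeholder JDBC settings, of how event ingestion with persistence is typically wired: a source stream feeding an RDBMS-backed table. The same structure applies to the MS SQL and MySQL scenarios below.

@App:name("EventPersistenceSketch")

@source(type = 'tcp', context='ProcessMonitoringStream', @map(type='binary'))
define stream ProcessMonitoringStream (iijtimestamp long, processId string, memoryUsage double, cpuUsage double);

-- hypothetical RDBMS-backed event store; the JDBC URL, credentials, and driver are placeholders
@store(type='rdbms', jdbc.url='jdbc:oracle:thin:@<rds-endpoint>:1521/ORCL', username='<user>', password='<password>', jdbc.driver.name='oracle.jdbc.driver.OracleDriver')
define table ProcessMonitoringTable (iijtimestamp long, processId string, memoryUsage double, cpuUsage double);

-- persist every incoming event into the event store
from ProcessMonitoringStream
select *
insert into ProcessMonitoringTable;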
Throughput Graph
Latency Graph
Summary Results
...
Scenario 2: Update Query - Updating 10 million events on Oracle Data store
The test injected 10 million events into a properly indexed Oracle database. This test involved persisting process monitoring events, each approximately 180 bytes in size. 75 million update queries were performed with a publishing throughput of 20,000 events/second over a one-hour period.
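Again purely as an illustrative sketch (the stream, table, and attribute names are hypothetical and reuse the table from the previous sketch), an update scenario of this kind corresponds to a Siddhi update query that modifies matching rows in the store:

define stream ProcessMonitoringUpdateStream (processId string, memoryUsage double, cpuUsage double);

-- update the matching rows in the hypothetical ProcessMonitoringTable defined in the previous sketch
from ProcessMonitoringUpdateStream
update ProcessMonitoringTable
    set ProcessMonitoringTable.memoryUsage = memoryUsage, ProcessMonitoringTable.cpuUsage = cpuUsage
    on ProcessMonitoringTable.processId == processId;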
Throughput Graph
Latency Graph
Summary Results
...
Microsoft SQL Server Event Store
Infrastructure Used
- c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instance as the SP node
- Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with MS SQL Enterprise Edition 2016 as the database node
- Customized TCP client as the data publisher (sample TCP client found in samples)
Scenario 1: Insert Query - Persisting 198 Million Process Monitoring Events on MS SQL
This test involved persisting process monitoring events, each approximately 180 bytes in size. The test injected 198 million events into the Stream Processor with a publishing TPS of 55,000 events/second over a one-hour period.
Throughput Graph
Latency Graph
Summary Results
...
Scenario 2: Update Query - Updating 10 million events on MS SQL Data store
The test injected 10 million events into a properly indexed MS SQL database. This test involved persisting process monitoring events, each approximately 180 bytes in size. 3.6 million update queries were performed with a publishing throughput of 1,000 events/second over a one-hour period.
Throughput Graph
Summary Results
...
MySQL Event Store
Infrastructure Used
- c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instance as the SP node
- Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
- db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with MySQL Community Edition version 5.7 as the database node
- Customized TCP client as the data publisher (TCP producer found in samples)
- Experiments were carried out using MySQL Community Server 5.7.19.
Scenario 1: Insert Query - Persisting 12.2 Million Process Monitoring Events on MySQL
This test involved persisting process monitoring events, each approximately 180 bytes in size. The test injected 12.2 million events into the Stream Processor with a publishing throughput of 3,400 events/second over a one-hour period.
Throughput Graph
Latency Graph
Summary Results
...
Info: After around 12.2 million events are published, a sudden drop can be observed in the receiver performance; this can be considered the upper limit of the MySQL event store with default settings. To continue receiving events without major performance degradation, the data has to be purged periodically before it reaches this upper limit.
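A minimal sketch of such periodic purging, assuming the hypothetical ProcessMonitoringTable from the earlier sketches, a 15-minute purge interval, and a one-hour retention window, uses a Siddhi trigger to drive a delete query against the store:

-- fire a purge event every 15 minutes (interval is an assumption)
define trigger PurgeTrigger at every 15 min;

-- delete rows whose injection timestamp is older than one hour
from PurgeTrigger
delete ProcessMonitoringTable
    on ProcessMonitoringTable.iijtimestamp < (triggered_time - 3600000L);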
Scenario 2: Update Query - Updating 100K events on MySQL Data store
The test injected 100,000 events into a properly indexed MySQL database. This test involved persisting process monitoring events, each approximately 180 bytes in size. 3 million update queries were performed with a publishing throughput of 500 events/second over a one-hour period.
Throughput Graph
Latency Graph
Summary Results
...
Conclusion
...
This section presents the results of the latest performance test carried out for WSO2 Stream Processor.
Info: These performance statistics were taken when the load average was below 3.8 on the 4-core instance.
Consuming events using a Kafka source
Specifications for EC2 instances
Stream Processor: c5.xlarge
Kafka server: c5.xlarge
Kafka publisher: c5.xlarge
Siddhi application used
@App:name("HelloKafka")
@App:description('Consume events from a Kafka Topic and publish to a different Kafka Topic')
@source(type='kafka',
topic.list='kafka_topic',
partition.no.list='0',
threading.option='single.thread',
group.id="group",
bootstrap.servers='172.31.0.135:9092',
@map(type='json'))
define stream SweetProductionStream (name string, amount double);
@sink(type='log')
define stream KafkaSourceThroughputStream(count long);
-- compute the average consuming TPS over 5-second batches and log it
from SweetProductionStream#window.timeBatch(5 sec)
select count(*)/5 as count
insert into KafkaSourceThroughputStream;
Results
Average Publishing TPS to Kafka: 1.1M
Average Consuming TPS from Kafka: 180K
Consuming messages from an HTTP source
Specifications for EC2 instances
Stream Processor: c5.xlarge
JMeter: c5.xlarge
Siddhi application used
@App:name("HttpSource")
@App:description('Consume events from http clients')
@source(type='http', worker.count='20', receiver.url='http://172.31.2.99:8081/service',
@map(type='json'))
define stream SweetProductionStream (name string, amount double);
@sink(type='log')
define stream HttpSourceThroughputStream(count long);
from SweetProductionStream#window.timeBatch(5 sec)
select count(*)/5 as count
insert into HttpSourceThroughputStream;
Results
Average Publishing TPS to the HTTP source: 30K
Average Consuming TPS from the HTTP source: 30K
Sending HTTP requests and consuming the responses
Specifications for EC2 instances
Stream Processor: c5.xlarge
JMeter: c5.xlarge
Web server: c5.xlarge
Siddhi application used
@App:name("HttpRequestResponse")
@App:description('Consume events from an HTTP source, send HTTP requests, and consume the responses')
@source(type='http', worker.count='20', receiver.url='http://172.31.2.99:8081/service',
@map(type='json'))
define stream SweetProductionStream (name string, amount double);
@sink(type='http-request', sink.id='production-request', publisher.url='http://172.17.0.1:8688/netty_echo_server', @map(type='json'))
define stream HttpRequestStream (batchNumber double, lowTotal double);
@source(type='http-response' , sink.id='production-request', http.status.code='200',
@map(type='json'))
define stream HttpResponseStream(batchNumber double, lowTotal double);
@sink(type='log')
define stream FinalThroughputStream(count long);
@sink(type='log')
define stream InputThroughputStream(count long);
from SweetProductionStream
select 1D as batchNumber, 1200D as lowTotal
insert into HttpRequestStream;
from SweetProductionStream#window.timeBatch(5 sec)
select count(*)/5 as count
insert into InputThroughputStream;
from HttpResponseStream#window.timeBatch(5 sec)
select count(*)/5 as count
insert into FinalThroughputStream;
Results
Average Publishing TPS to the HTTP source: 29K
Average Publishing TPS from the HTTP request sink: 29K
Average Consuming TPS from the HTTP response source: 29K