WSO2 DAS Performance Analysis
This section summarizes the results of performance tests carried out on the minimum fully distributed DAS deployment setup, with RDBMS (MySQL) and HBase event stores tested separately.
Infrastructure used
- c4.2xlarge Amazon EC2 instances as the DAS nodes
- One DAS node was used as the publisher
- c3.2xlarge Amazon EC2 instances as the database nodes
Receiver Node Data Persistence Performance
A reduction in throughput is observed after the first 1.2 million events in both DAS receiver nodes, as shown below. This reduction is caused by limitations of MySQL. The receiver performance variation of the second node of the two-node receiver cluster is also given below. Only the event rate after the first 1.2 million events was considered for the following graph, because the initial filling of the receiver queue buffers produces an artificially high receive rate at the start of event publishing.
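For context, events in these tests are pushed from the publisher node to the receivers through the WSO2 data bridge agent. The following is a minimal sketch of such a publisher, assuming a hypothetical Smart Home stream, placeholder host names, credentials, and payload fields; the load-balancing receiver URL syntax shown is also an assumption and should be verified against the DAS data agent documentation.

```java
import org.wso2.carbon.databridge.agent.DataPublisher;
import org.wso2.carbon.databridge.commons.Event;

public class SmartHomeEventPublisher {

    public static void main(String[] args) throws Exception {
        // The client trust store must contain the DAS server certificate
        // (path and password below are placeholders).
        System.setProperty("javax.net.ssl.trustStore", "/path/to/client-truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");

        // Assumed load-balancing URL sets spanning the two receiver nodes;
        // verify the exact syntax against the DAS data agent documentation.
        String receiverUrlSet = "{tcp://das-receiver1:7611,tcp://das-receiver2:7611}";
        String authUrlSet = "{ssl://das-receiver1:7711,ssl://das-receiver2:7711}";

        DataPublisher publisher = new DataPublisher(receiverUrlSet, authUrlSet, "admin", "admin");

        // Hypothetical stream ID (streamName:version) matching a stream
        // already deployed on the DAS receivers.
        String streamId = "org.wso2.das.sample.smart.home:1.0.0";

        for (int houseId = 0; houseId < 1000; houseId++) {
            // Illustrative payload: house_id, metro_area, state, power_reading, is_peak
            Object[] payload = new Object[]{houseId, "Seattle", "WA", 120.5f, true};
            Event event = new Event(streamId, System.currentTimeMillis(), null, null, payload);
            publisher.publish(event);
        }

        publisher.shutdown();
    }
}
```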
MySQL Breakpoint
After around 30 million events are published, a sudden drop can be observed in receiver performance. This can be considered the breaking point of the MySQL event store. A different type of event store, such as the HBase event store, should be used when receiver performance must remain constant.
Testing with large events
The following results were obtained by testing the two-node HA DAS cluster with 10 million events published via the Analysing Wikipedia Data sample. Each event in this sample is several kilobytes in size, representing large events.
In the above graph, TPS represents the total number of events published per second. This stabilizes at about 8500 events per second.
The above graph shows the amount of data published per second (referred to as the data rate). The data rate is significantly reduced at the initial stages due to the flow control mechanisms of the receiver, and it stabilizes at around 25 MB per second.
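Dividing the stabilized data rate by the stabilized event rate gives roughly 25 MB/s ÷ 8500 events/s ≈ 3 KB per event, which is consistent with the several-kilobyte event size of this sample.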
With MySQL RDBMS event store
DAS data persistence was measured by publishing to two load-balanced receiver nodes with a MySQL database.
Data set | Number of Events | Mean Event Rate |
---|---|---|
Smart Home sample | 100000000 | 5741 events per second |
Wikipedia sample | 15901127 | 4438 events per second |
With HBase event store
DAS data persistence was measured by publishing to two load-balanced receiver nodes with a three-node HBase database cluster.
Data set | Number of Events | Mean Event Rate |
---|---|---|
Smart Home sample | 500000000 | 12638 events per second |
Wikipedia sample | 15901127 | 1640 events per second |
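Comparing the two tables above: for the Smart Home sample, the HBase event store sustains roughly 12638 / 5741 ≈ 2.2 times the mean event rate observed with the MySQL event store (although the two runs published different event counts), whereas for the larger Wikipedia events the MySQL store shows the higher mean rate (4438 versus 1640 events per second).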
Analyzer Performance
This section provides information about the Spark analyzing performance with different event store types.
With MySQL RDBMS event store
Spark analyzing performance (time to complete execution) was measured using a two-node DAS analyzer cluster with a MySQL database.
The time taken for each type of Spark query is given below.
Data set | Event Count | Query Type | Time Taken (seconds) |
---|---|---|---|
Smart Home | 10000000 | INSERT OVERWRITE TABLE cityUsage SELECT metro_area, avg(power_reading) AS avg_usage, min(power_reading) AS min_usage, max(power_reading) AS max_usage FROM smartHomeData GROUP BY metro_area | 26.304 |
Smart Home | 10000000 | INSERT OVERWRITE TABLE peakDeviceUsageRange SELECT house_id, (max(power_reading) - min(power_reading)) AS usage_range FROM smartHomeData WHERE is_peak = true AND metro_area = "Seattle" GROUP BY house_id | 21.659 |
Smart Home | 10000000 | INSERT OVERWRITE TABLE stateAvgUsage SELECT state, avg(power_reading) AS state_avg_usage FROM smartHomeData GROUP BY state | 21.003 |
Smart Home | 10000000 | INSERT OVERWRITE TABLE stateUsageDifference SELECT a2.state, (a2.state_avg_usage-a1.overall_avg) AS avg_usage_difference FROM (select avg(state_avg_usage) as overall_avg from stateAvgUsage) as a1 join stateAvgUsage as a2 | 0.759 |
Wikipedia | 10000000 | INSERT INTO TABLE wikiAvgArticleLength SELECT AVG(length) as avg_article_length FROM wiki | 2883.66 |
Wikipedia | 10000000 | INSERT INTO TABLE wikiContributorSummary SELECT contributor_username, COUNT(*) as page_count FROM wiki GROUP BY contributor_username | 6288.236 |
Wikipedia | 10000000 | INSERT INTO TABLE wikiTotalArticleLength SELECT SUM(length) as total_article_chars FROM wiki | 2619.713 |
Wikipedia | 10000000 | INSERT INTO TABLE wikiTotalArticlePages SELECT COUNT(*) as total_pages FROM wiki | 4626.654 |
With HBase event store
Spark analyzing performance (time to complete execution) was measured using a two-node DAS analyzer cluster with a three-node HBase database cluster.
The time taken for each type of Spark query is given below.
Data set | Event Count | Query Type | Time Taken (seconds) |
---|---|---|---|
Smart Home | 500000000 | INSERT OVERWRITE TABLE cityUsage SELECT metro_area, avg(power_reading) AS avg_usage, min(power_reading) AS min_usage, max(power_reading) AS max_usage FROM smartHomeData GROUP BY metro_area | 2218.23 |
Smart Home | 500000000 | INSERT OVERWRITE TABLE peakDeviceUsageRange SELECT house_id, (max(power_reading) - min(power_reading)) AS usage_range FROM smartHomeData WHERE is_peak = true AND metro_area = "Seattle" GROUP BY house_id | 2229.134 |
Smart Home | 500000000 | INSERT OVERWRITE TABLE stateAvgUsage SELECT state, avg(power_reading) AS state_avg_usage FROM smartHomeData GROUP BY state | 2185.097 |
Smart Home | 500000000 | INSERT OVERWRITE TABLE stateUsageDifference SELECT a2.state, (a2.state_avg_usage-a1.overall_avg) AS avg_usage_difference FROM (select avg(state_avg_usage) as overall_avg from stateAvgUsage) as a1 join stateAvgUsage as a2 | 0.923 |
Wikipedia | 15901127 | INSERT INTO TABLE wikiContributorSummary SELECT contributor_username, COUNT(*) as page_count FROM wiki GROUP BY contributor_username | 829.075 |
Wikipedia | 15901127 | INSERT INTO TABLE wikiTotalArticleLength SELECT SUM(length) as total_article_chars FROM wiki | 741.101 |
Wikipedia | 15901127 | INSERT INTO TABLE wikiTotalArticlePages SELECT COUNT(*) as total_pages FROM wiki | 643.101 |
Wikipedia | 15901127 | INSERT INTO TABLE wikiAvgArticleLength SELECT AVG(length) as avg_article_length FROM wiki | 709.001 |
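For the Wikipedia data set, the aggregate queries above complete in roughly 640 to 830 seconds against the HBase event store, compared with roughly 2600 to 6300 seconds against the MySQL event store for a smaller data set of 10 million events (see the previous table).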
Indexing Performance
shardIndexRecordBatchSize
: The amount of index data (in bytes) to be processed at a time by a shard index worker.
Mode | Data set | shardIndexRecordBatchSize | Replication Factor | Event Count | Time Taken (seconds) | Average TPS |
---|---|---|---|---|---|---|
Stand alone | Wikipedia | 10MB | NA | 15901127 | 7975 | 1993.871724 |
Stand alone | Wikipedia | 20MB | NA | 15901127 | 6765 | 2350.499187 |
Stand alone | Smart Home | 20MB | NA | 20000000 | 1385 | 14440.43321 |
Minimum Fully Distributed | Wikipedia | 20MB | 1 | 15901127 | 6870 | 2314.574527 |
Minimum Fully Distributed | Wikipedia | 20MB | 0 | 15901127 | 7280 | 2184.220742 |
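In the table above, Average TPS is the event count divided by the time taken; for example, 15901127 events / 7975 seconds ≈ 1994 events per second for the first standalone Wikipedia run.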