...
```xml
<analytics-dataservice-configuration>
   <!-- The name of the primary record store -->
   <primaryRecordStore>EVENT_STORE</primaryRecordStore>
   <!-- Analytics Record Store - properties related to record storage implementation -->
   <analytics-record-store name="EVENT_STORE">
      <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
      <properties>
         <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
         <property name="category">large_dataset_optimized</property>
      </properties>
   </analytics-record-store>
   <analytics-record-store name="PROCESSED_DATA_STORE">
      <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
      <properties>
         <property name="datasource">WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</property>
         <property name="category">large_dataset_optimized</property>
      </properties>
   </analytics-record-store>
   <!-- The data indexing analyzer implementation -->
   <analytics-lucene-analyzer>
      <implementation>org.apache.lucene.analysis.standard.StandardAnalyzer</implementation>
   </analytics-lucene-analyzer>
   <!-- The maximum number of threads used for indexing per node. -1 signals to auto-detect
        the optimum value, which is equal to (number of CPU cores in the system - 1) -->
   <indexingThreadCount>-1</indexingThreadCount>
   <!-- The number of index data replicas the system should keep. For H/A, this should be
        at least 1; the value 0 means there are no copies of the data -->
   <indexReplicationFactor>1</indexReplicationFactor>
   <!-- The number of index shards. This should be equal to or higher than the number of indexing
        nodes, the ideal count being 'number of indexing nodes * CPU cores used for indexing per node' -->
   <shardCount>6</shardCount>
   <!-- The number of batch index records the indexing node will process per indexing thread
        at a given time. A batch index record encapsulates a batch of records retrieved from
        the receiver to be indexed -->
   <shardIndexRecordBatchSize>100</shardIndexRecordBatchSize>
   <!-- The interval in milliseconds that a shard index processing worker thread sleeps between
        index processing operations. Together with 'shardIndexRecordBatchSize', this controls the
        amount of data the indexer processes at a given time. Usually, the higher the batch data
        amount, the higher the throughput of the indexing operations, but the higher the latency
        of record insertion to indexing. Minimum value is 10, maximum is 60000 (1 minute) -->
   <shardIndexWorkerInterval>1500</shardIndexWorkerInterval>
   <!-- Data purging related configuration -->
   <analytics-data-purging>
      <!-- Indicates whether purging is enabled. To enable data purging for a cluster,
           this property must be enabled on all nodes -->
      <purging-enable>false</purging-enable>
      <cron-expression>0 0 0 * * ?</cron-expression>
      <!-- Tables to include in purging. Use a regular expression to specify the table names to include -->
      <purge-include-tables>
         <table>.*</table>
         <!--<table>.*jmx.*</table>-->
      </purge-include-tables>
      <!-- All records inserted before the specified retention period are eligible for purging -->
      <data-retention-days>365</data-retention-days>
   </analytics-data-purging>
   <!-- Receiver/Indexing flow-control configuration -->
   <analytics-receiver-indexing-flow-control enabled="true">
      <!-- Maximum number of records that can be in the index staging area before receiving is throttled -->
      <recordReceivingHighThreshold>10000</recordReceivingHighThreshold>
      <!-- The limit the number of records must fall below to stop throttling -->
      <recordReceivingLowThreshold>5000</recordReceivingLowThreshold>
   </analytics-receiver-indexing-flow-control>
</analytics-dataservice-configuration>
```
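The record stores above refer to datasources by name (for example, WSO2_ANALYTICS_EVENT_STORE_DB), which are typically defined in `<DAS_HOME>/repository/conf/datasources/analytics-datasources.xml`. The following is a minimal sketch of such a datasource definition; the JDBC URL, credentials, and pool settings shown are illustrative defaults and should be replaced with your own database details.

```xml
<datasource>
   <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
   <description>The datasource used for the analytics record store</description>
   <definition type="RDBMS">
      <configuration>
         <!-- Illustrative H2 settings; point these at your production database -->
         <url>jdbc:h2:repository/database/ANALYTICS_EVENT_STORE;AUTO_SERVER=TRUE</url>
         <username>wso2carbon</username>
         <password>wso2carbon</password>
         <driverClassName>org.h2.Driver</driverClassName>
         <maxActive>50</maxActive>
         <maxWait>60000</maxWait>
         <testOnBorrow>true</testOnBorrow>
         <validationQuery>SELECT 1</validationQuery>
      </configuration>
   </definition>
</datasource>
```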
Analytics Record Store
...
Info |
---|
Once a record store is configured in the ...
Analytics Indexing
By default, WSO2 DAS executes indexing operations when the server is started. The following system property can be used to disable indexing if required.
...
This option allows you to create servers that are dedicated to specific operations such as event receiving, analytics, or indexing.
All index data is stored in the file system, partitioned into units known as shards. For detailed information about shard configuration and allocation, see Storing Index Data.
Configuring common parameters
The following parameters are common to both the Analytics Record Store and the Analytics File System.
Data purging parameters
Parameter | Description | Default Value |
---|---|---|
<purging-enable> | This parameter specifies whether the functionality to purge data from event tables is enabled or not. | false |
<cron-expression> | The cron expression that defines the schedule on which the data purging task is run. | 0 0 0 * * ?
<purge-include-tables> | The event tables from which data should be purged, defined as <table> subelements of this element. Table names can be specified as regular expressions. |
<data-retention-days> | The number of days for which data is retained in the event tables selected for purging. Records older than this retention period are cleared. | 365
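As an example, the following is a minimal sketch of a purging configuration that clears records older than 30 days from tables whose names contain "jmx"; the table pattern and retention period are illustrative values, not defaults.

```xml
<analytics-data-purging>
   <!-- Enable the purging task on this node (must be enabled on every node in a cluster) -->
   <purging-enable>true</purging-enable>
   <!-- Run the purging task daily at midnight -->
   <cron-expression>0 0 0 * * ?</cron-expression>
   <!-- Only purge tables whose names match this regular expression (illustrative pattern) -->
   <purge-include-tables>
      <table>.*jmx.*</table>
   </purge-include-tables>
   <!-- Records older than 30 days become eligible for purging (illustrative value) -->
   <data-retention-days>30</data-retention-days>
</analytics-data-purging>
```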
Other Parameters
Parameter | Description | Default Value |
---|---|---|
<analytics-lucene-analyzer> | The Analytics Lucene Analyzer implementation, defined as a subelement of this parameter. e.g., <implementation>org.apache.lucene.analysis.standard.StandardAnalyzer</implementation> |
<indexingThreadCount> | The maximum number of threads used for indexing per node. When set to -1, the optimum value is detected automatically, which is equal to (number of CPU cores in the system - 1). | -1
<shardCount> | The number of index shards. This should be equal to or higher than the number of indexing nodes, the ideal count being (number of indexing nodes * CPU cores used for indexing per node). Note: this parameter can only be set once for the lifetime of the cluster, and cannot be changed later on. | 6
<shardIndexRecordBatchSize> | The number of batch index records the indexing node should process per indexing thread at a given time. An index record contains the data of a record batch inserted in a single put operation. This batch can be as large as the event receiver queue data size, which is 10MB by default. Therefore, the highest amount of in-memory record data that an index processing thread can hold is 10MB * 100. Configure this parameter to change the maximum amount of memory available to the indexing node, based on your requirements. | 100
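As a rough sketch of how these values interact, consider an assumed two-node indexing cluster with four CPU cores reserved for indexing on each node (the node and core counts below are assumptions for illustration, not recommendations):

```xml
<!-- Illustrative indexing settings for an assumed 2-node cluster,
     with 4 CPU cores used for indexing on each node -->
<indexingThreadCount>4</indexingThreadCount>
<!-- shardCount = indexing nodes (2) * indexing cores per node (4);
     fixed for the lifetime of the cluster, so size it for planned growth -->
<shardCount>8</shardCount>
<!-- With 100 records per batch and a 10MB receiver queue, each indexing thread
     may hold up to 10MB * 100 of record data in memory -->
<shardIndexRecordBatchSize>100</shardIndexRecordBatchSize>
```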