
Data analytics refers to aggregating, analyzing, and presenting information about business activities. This definition is paramount when designing a solution for a data analysis use case. Aggregation refers to the collection of data, analysis to the manipulation of data to extract information, and presentation to representing that information visually or in other ways, such as alerts. The WSO2 DAS architecture reflects this natural flow in its design, as illustrated below.

WSO2 DAS 3.0.0 architecture

The WSO2 DAS architecture consists of the components described below.

Data Agents

Data Agents are external sources that publish data to WSO2 DAS. Data Agent components reside in external systems and push data as events to DAS. For more information on Data Agents, see /wiki/spaces/PS/pages/10519883.

Event Receivers

WSO2 DAS receives data published by Data Agents through Event Receivers. Each Event Receiver is associated with an event stream, which you can then persist and/or process using Event Processors. WSO2 DAS supports many transport types as entry protocols for Event Receivers. For more information on Event Receivers, see Receiving Events.
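To make the stream concept concrete, the following sketch builds an event in the meta/correlation/payload shape that JSON-based event receivers map onto a stream definition. The stream attributes (`clientType`, `requestId`, `orderId`, `amount`) are hypothetical and exist only for illustration; real attribute names come from your own stream definition.

```python
import json

def build_event(meta, correlation, payload):
    """Build a JSON event in the metaData/correlationData/payloadData
    shape that a JSON event receiver maps onto a stream definition."""
    return json.dumps({"event": {
        "metaData": meta,
        "correlationData": correlation,
        "payloadData": payload,
    }})

# Hypothetical stream attributes for illustration only.
event = build_event(
    meta={"clientType": "mobile"},
    correlation={"requestId": "r-1001"},
    payload={"orderId": "o-42", "amount": 129.50},
)
print(event)
```

The three sections mirror how a stream definition separates transport metadata, correlation identifiers, and the business payload itself.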

Analytics REST API

In addition to Event Receivers, WSO2 DAS supports REST API based data publishing. You can use the REST API from Web applications and Web services. For more information on the REST API, see the Analytics REST API Guide.
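As a sketch of REST-based publishing, the snippet below composes (but does not send) an HTTP request carrying records to a DAS server. The host, credentials, endpoint path, and record shape are assumptions made for illustration and may differ between versions; consult the Analytics REST API Guide for the exact contract.

```python
import base64
import json
import urllib.request

# Hypothetical DAS host, credentials, and record body -- check the
# Analytics REST API Guide for the exact endpoint and payload shape.
host = "https://localhost:9443"
records = [{"tableName": "ORDERS",
            "values": {"orderId": "o-42", "amount": 129.50}}]

credentials = base64.b64encode(b"admin:admin").decode("ascii")
request = urllib.request.Request(
    url=host + "/analytics/records",
    data=json.dumps(records).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Basic " + credentials,
    },
    method="POST",
)
# urllib.request.urlopen(request) would publish the records; it is not
# called here because it requires a running DAS server.
print(request.get_method(), request.full_url)
```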

Data Store

WSO2 DAS supports several Data Stores (e.g., Cassandra, HBase, and RDBMS) for data persistence. There are three main types of Data Stores: the Event Store, the Processed Event Store, and the File Store. The Event Store holds events that are published directly to DAS. The Processed Event Store holds the results of data processed using Apache Spark analytics. The File Store holds Apache Lucene indexes. For more information on Data Stores, see DAS Data Access Layer.
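To show how the Event Store and Processed Event Store relate, an analytics script typically maps persisted tables into Spark, aggregates the raw events, and writes the result back. A sketch of such a script, assuming hypothetical tables named ORDERS and ORDER_SUMMARY:

```sql
-- Map the persisted event table into Spark (table names are assumptions)
CREATE TEMPORARY TABLE orders USING CarbonAnalytics
    OPTIONS (tableName "ORDERS");

-- Map the table that will receive the processed results
CREATE TEMPORARY TABLE order_summary USING CarbonAnalytics
    OPTIONS (tableName "ORDER_SUMMARY");

-- Aggregate raw events into the Processed Event Store
INSERT OVERWRITE TABLE order_summary
    SELECT orderId, SUM(amount) AS total
    FROM orders
    GROUP BY orderId;
```

Here the raw events live in the Event Store, while the aggregated `order_summary` rows land in the Processed Event Store, ready for dashboards to query.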


Data that needs to be monitored goes through these modules in order. The data flow is as follows:
  1. Data is sent from event adapters and the Analytics REST API to the DAS server.
  2. Received data is stored through the data layer in the underlying data store (RDBMS or HBase).
  3. A background indexing process fetches the data from the data store and performs the indexing operations.
  4. The analyzer engine, powered by Apache Spark, analyzes this data according to the defined analytic queries. This usually follows a pattern of retrieving data from the data store, performing a data operation such as aggregation, and storing the results back in a data store.
  5. The dashboard queries the data store for the analyzed data and displays it graphically.
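The five steps above can be sketched as a conceptual, in-memory pipeline. Everything here is illustrative: real DAS persists events to RDBMS or HBase, indexes with Apache Lucene, and analyzes with Apache Spark, whereas this sketch uses plain Python structures to show the order of operations.

```python
# A conceptual, in-memory sketch of the DAS data flow described above.
event_store = []          # step 2: raw events, as persisted by the data layer
index = {}                # step 3: built by the background indexing process
processed_store = {}      # step 4: results written back by the analyzer

def receive(event):
    """Steps 1-2: an event arrives and is stored through the data layer."""
    event_store.append(event)

def run_indexer():
    """Step 3: a background process indexes the stored events
    (here, simply by orderId -> positions in the store)."""
    for position, event in enumerate(event_store):
        index.setdefault(event["orderId"], []).append(position)

def run_analyzer():
    """Step 4: read raw events, aggregate, and store the result back."""
    for event in event_store:
        processed_store[event["orderId"]] = (
            processed_store.get(event["orderId"], 0) + event["amount"])

def dashboard_query(order_id):
    """Step 5: the dashboard reads analyzed data for display."""
    return processed_store.get(order_id)

receive({"orderId": "o-42", "amount": 100.0})
receive({"orderId": "o-42", "amount": 29.5})
run_indexer()
run_analyzer()
print(dashboard_query("o-42"))  # -> 129.5
```

The point of the sketch is the ordering: storage happens before indexing, indexing and analysis run as separate passes over the store, and the dashboard only ever reads processed results.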