
Scheduling Report Generation

WSO2 SP allows you to generate reports with visualizations of processed data, as explained in Generating Reports. You can also schedule these reports to be generated and sent to the required stakeholders at specified times. WSO2 SP integrates with Jasper Reports to provide this feature.

To create a Siddhi application that generates reports and sends them as email attachments to the required addresses in a scheduled manner, follow the steps below.

Before you begin:

  • Download the following from here.
    • Jasper Reports ® Server
    • Jasper Reports ® Library
  • Set up a database of your choice. In this example, a MySQL database is configured.

     Click here for instructions to configure a MySQL database.
    1. Download and install MySQL Server.

    2. Download the MySQL JDBC driver.

    3. Unzip the downloaded MySQL driver zipped archive, and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <SP_HOME>/lib directory.

    4. Enter the following command in a terminal/command window, where username is the username you want to use to access the databases.
      mysql -u username -p 
    5. When prompted, specify the password you are using to access the databases with the username you specified.
    6. Add the following configuration under the Data Sources Configuration section of the <SP_HOME>/conf/editor/deployment.yaml file.

      You need to change the values for the username and password parameters to the username and password that you are using to access the MySQL database.

          - name: SweetFactoryDB
            description: Datasource used for Sweet Factory Supply Records
            jndiConfig:
              name: jdbc/SweetFactoryDB
              useJndiReference: true
            definition:
              type: RDBMS
              configuration:
                jdbcUrl: 'jdbc:mysql://localhost:3306/SweetFactoryDB'
                username: root
                password: root
                driverClassName: com.mysql.jdbc.Driver
                maxPoolSize: 50
                idleTimeout: 60000
                connectionTestQuery: SELECT 1
                validationTimeout: 30000
                isAutoCommit: false 
    7. To create a database named SweetFactoryDB, issue the following commands from the MySQL terminal.
      mysql> create database SweetFactoryDB;
      mysql> use SweetFactoryDB;
      mysql> source <SP_HOME>/wso2/editor/dbscripts/metrics/mysql.sql;
      mysql> grant all on SweetFactoryDB.* TO username@localhost identified by "password";
       
  1. Start WSO2 Stream Processor in the editor mode by issuing one of the following commands from the <SP_HOME>/bin directory.
    • For Windows: editor.bat
    • For Linux: ./editor.sh

    Then access the Stream Processor Studio via the URL displayed in the start up logs.

    The default URL is http://localhost:9390/editor

  2. Click New to start creating a new Siddhi application. 
  3. In the new Siddhi file that opens, enter a name and a description for the application as shown in the example below.
  4. Enter the following queries to create a basic Siddhi application.

    define stream ProductionStream (name string, amount long);


    define table ProductionData(name string, totalProduction long);

    @info(name='ProductionTotalQuery')
    from ProductionStream
    select name, sum(amount) as totalProduction
    group by name
    insert into ProductionData;


    In the above Siddhi application, basic production data (i.e., the name of the product and the amount produced) is captured via the ProductionStream input stream. Then the total production for each product is calculated via the sum() function, and the results are inserted into the ProductionData database table.
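The running aggregation performed by ProductionTotalQuery can be sketched in Python. The sample events below are hypothetical; the attribute names match the Siddhi definitions above.

```python
from collections import defaultdict

# Hypothetical sample events arriving on ProductionStream: (name, amount)
production_events = [
    ("toffee", 100),
    ("gateau", 50),
    ("toffee", 25),
]

# Mimics: select name, sum(amount) as totalProduction ... group by name
production_data = defaultdict(int)
for name, amount in production_events:
    production_data[name] += amount

print(dict(production_data))  # {'toffee': 125, 'gateau': 50}
```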

  5. To publish the information processed and saved in the ProductionData table as a report, you need an output stream to which a sink can be connected. Define the output stream as follows.

    define stream ProductionStream (name string, amount long);

    define stream ProductionTotalStream(name string, totalProduction long);

    define table ProductionData(name string, totalProduction long);

    @info(name='ProductionTotalQuery')
    from ProductionStream
    select name, sum(amount) as totalProduction
    group by name
    insert into ProductionData;

  6. To publish the information to be included in the report, connect a sink to the output stream as follows.

    define stream ProductionStream (name string, amount long);

    @sink(type = 'report', title='Weekly Production', outputpath='/home/XXX/Projects/Jasper/Reports', @map(type = 'json'))
    define stream ProductionTotalStream(name string, totalProduction long);

    define table ProductionData(name string, totalProduction long);
     

    @info(name='ProductionTotalQuery')
    from ProductionStream
    select name, sum(amount) as totalProduction
    group by name
    insert into ProductionData;

    Here, the sink connected to the output stream must be of the report type. The parameters and annotations configured for this sink are as follows.

    • title: The title of the report. In this example, it is Weekly Production.
    • outputpath: The path to the directory on your machine where the generated report needs to be saved.
    • @map: The format in which the report needs to be published. In this scenario, you need to generate a report based on a chunk of events. To enable this, the map type needs to be json. If not, a report is generated per event.

    For information about all the possible parameters and annotations that can be configured for the report sink type, see documentation for the siddhi-io-report extension.
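The difference between chunked and per-event publishing can be illustrated in Python. The exact wire format is determined by the siddhi-map-json extension; the dictionaries below are illustrative stand-ins, not the extension's actual payload shape.

```python
import json

# Hypothetical events bound for ProductionTotalStream
events = [
    {"name": "toffee", "totalProduction": 125},
    {"name": "gateau", "totalProduction": 50},
]

# With a JSON map, a chunk of events can be serialized together,
# letting the sink render all of them in a single report
chunk_payload = json.dumps({"events": events})

# Without chunking, each event is published on its own,
# which would produce one report per event
per_event_payloads = [json.dumps({"event": e}) for e in events]

print(chunk_payload)
print(len(per_event_payloads))  # 2
```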

  7. To trigger the periodical generation of reports, define a trigger as follows.

    define stream ProductionStream (name string, amount long);

    @sink(type = 'report', title='Weekly Production', outputpath='/home/XXX/Projects/Jasper/Reports', @map(type = 'json'))
    define stream ProductionTotalStream(name string, totalProduction long);

    define table ProductionData(name string, totalProduction long);

    define trigger SetupTrigger at '0 00 00 ? * SAT';

    @info(name='ProductionTotalQuery')
    from ProductionStream
    select name, sum(amount) as totalProduction
    group by name
    insert into ProductionData;

    This trigger fires at midnight every Saturday. It is used to poll the ProductionData table and fetch the latest production data available at that time.
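The trigger uses a Quartz-style cron expression, whose first field is seconds and where '?' means "no specific value". A small sketch labeling each field of the expression above:

```python
# Quartz cron expressions have six (optionally seven) fields, starting
# with seconds; '?' means "no specific value" for that field
fields = ["seconds", "minutes", "hours", "day-of-month", "month", "day-of-week"]
expression = "0 00 00 ? * SAT"

schedule = dict(zip(fields, expression.split()))
print(schedule)
# {'seconds': '0', 'minutes': '00', 'hours': '00',
#  'day-of-month': '?', 'month': '*', 'day-of-week': 'SAT'}
```

That is, second 0 of minute 00 of hour 00 (midnight), in any month, on any Saturday.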

  8. To schedule the generation of reports, add another query to the Siddhi application as follows.

    @info(name='TriggerQuery')
    from SetupTrigger#rdbms:query('SweetFactoryDB',
    "SELECT * FROM ProductionData;", 'name string, totalProduction long')
    select name,totalProduction
    insert into ProductionTotalStream;

    Here, the SetupTrigger trigger you previously defined is applied to the ProductionData table. An RDBMS query gets all the data in this table and inserts it into the ProductionTotalStream output stream. This is the stream to which the sink of the report type is connected. Therefore, when the Siddhi application is executed, the information extracted from the table and inserted into the output stream is published in a report.
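The effect of the rdbms:query call, running a SQL SELECT against the datasource and mapping each row onto the declared stream attributes, can be sketched with an in-memory SQLite database standing in for the MySQL datasource. This is illustrative only; the real extension resolves 'SweetFactoryDB' via the datasource configured in deployment.yaml, and the rows inserted below are hypothetical.

```python
import sqlite3

# In-memory stand-in for the SweetFactoryDB MySQL datasource
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ProductionData (name TEXT, totalProduction INTEGER)")
conn.executemany(
    "INSERT INTO ProductionData VALUES (?, ?)",
    [("toffee", 125), ("gateau", 50)],  # hypothetical rows
)

# Mimics rdbms:query(...): run the SELECT and map each row onto the
# declared attribute list 'name string, totalProduction long'
rows = conn.execute("SELECT * FROM ProductionData").fetchall()
production_total_stream = [
    {"name": name, "totalProduction": total} for name, total in rows
]
print(production_total_stream)
```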

The complete Siddhi application looks as follows:

@App:name("ProductionReportApp")
@App:description("Scheduled Production Reports")

-- Please refer to https://docs.wso2.com/display/SP400/Quick+Start+Guide on getting started with SP editor. 

define stream ProductionStream (name string, amount long);

@sink(type = 'report', title='Weekly Production', outputpath='/home/XXX/Projects/Jasper/Reports', @map(type = 'json'))
define stream ProductionTotalStream(name string, totalProduction long);


define table ProductionData(name string, totalProduction long);

define trigger SetupTrigger at '0 00 00 ? * SAT';

@info(name='ProductionTotalQuery')
from ProductionStream
select name, sum(amount) as totalProduction
group by name 
insert into ProductionData;


@info(name='TriggerQuery')
from SetupTrigger#rdbms:query('SweetFactoryDB', 
"SELECT * FROM ProductionData;", 'name string, totalProduction long')
select name, totalProduction
insert into ProductionTotalStream;

Using an external JRXML file to generate reports

If you want to use an external JRXML file to generate reports, you must include the following in the sink definition.

  • The path to the required JRXML file.
  • The dataset parameter that is defined in the JRXML file.

For example, in the sink definition below, the path to the JRXML file is defined as template='/home/../scheduled-reporting/template.jrxml', and the dataset parameter is included as dataset='TableDataSource'.

@sink(type = 'report' , template='/home/../scheduled-reporting/template.jrxml', dataset='TableDataSource', outputpath='/home/../Jasper/Reports',@map(type = 'json'))

define stream ProductionTotalStream(name string, totalProduction long);

Including multiple charts in a report

To enable multiple query support, the sink definition should include the following parameters:

  • The query.mode parameter needs to be set to true.
  • The datasource.name parameter needs to be included. Its value should be the same as the datasource you defined under the Data Sources Configuration section of the <SP_HOME>/conf/editor/deployment.yaml file when setting up the database for your scheduled report generation scenario.

Then the queries can be defined as a JSON string as shown in the example below.

@sink(type='report',
  	outputpath='/abc/example.pdf',
  	query.mode='true',
  	datasource.name='POPULATION_DATA',
  	queries="""[{"query":"SELECT * FROM SampleTable;","chart":"table"},
              	{"query":"SELECT Value, Selection FROM SampleTable;","chart":"line","series":"Value","category":"Selection",
               	"chart.title":"Sample chart"}]""",
  	@map(type='json')
  	)

The queries should be provided in JSON format, specifying for each query definition the SQL query, the chart type in which the query results should be represented, the series and category column names, and the chart title.
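The queries parameter is a JSON array of query definitions. A sketch that builds and round-trips such a string with Python's json module, using the table and column names from the example above:

```python
import json

# The same two query definitions shown in the sink example above
queries = [
    {"query": "SELECT * FROM SampleTable;", "chart": "table"},
    {
        "query": "SELECT Value, Selection FROM SampleTable;",
        "chart": "line",
        "series": "Value",
        "category": "Selection",
        "chart.title": "Sample chart",
    },
]

# The sink expects this list serialized as a single JSON string
queries_param = json.dumps(queries)

# Round-trip to confirm the string is valid JSON
assert json.loads(queries_param)[1]["chart"] == "line"
print(queries_param)
```

Building the string with json.dumps avoids the quoting and escaping mistakes that hand-written JSON inside a Siddhi annotation can introduce.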

