
Creating a Storm-Based Distributed Execution Plan

WSO2 DAS uses a Storm-based distributed execution plan to store the processing logic used in a distributed mode deployment.

Writing an execution plan

The procedure for creating an execution plan is the same as that in Creating a Standalone Execution Plan. In addition, the following annotations are used in the Siddhi queries.

Annotation: @dist(parallel='<number of Storm tasks>')
Description: The number of Storm tasks in which the query should be run in parallel.
Example: @dist(parallel='4')

Annotation: @dist(execGroup='<name of the group>')
Description: All the Siddhi queries in a particular execGroup are executed in a single Siddhi bolt.
Example: @dist(execGroup='Filtering')

Annotation: @Plan:dist(receiverParallelism='<number of receiver spouts>')
Description: The number of event receiver spouts to be spawned for the Storm topology.
Example: @Plan:dist(receiverParallelism='1')

Annotation: @Plan:dist(publisherParallelism='<number of publisher bolts>')
Description: The number of event publisher bolts to be spawned for the Storm topology.
Example: @Plan:dist(publisherParallelism='4')

The following execution plan uses the above-mentioned annotations.
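A minimal sketch of such a plan is given below. The stream definitions, names, versions, and filter conditions are illustrative assumptions rather than content from this page, and combining parallel and execGroup in a single @dist annotation is assumed from the annotation descriptions above.

@Plan:name('DistributedStockAnalysisPlan')
@Plan:dist(receiverParallelism='1')
@Plan:dist(publisherParallelism='4')

@Import('stockStream:1.0.0')
define stream stockStream (symbol string, price float, volume long);

@Export('filteredStockStream:1.0.0')
define stream filteredStockStream (symbol string, price float);

/* Both queries share the 'Filtering' execGroup, so they run in the same Siddhi bolt with four parallel tasks */
@dist(parallel='4', execGroup='Filtering')
from stockStream[price > 100]
select symbol, price, volume
insert into highPriceStockStream;

@dist(parallel='4', execGroup='Filtering')
from highPriceStockStream[volume > 50]
select symbol, price
insert into filteredStockStream;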

Note:
Every Siddhi query in a particular execGroup should specify the same number of parallel tasks, as shown in the execution plan above (e.g., parallel='4'). If queries need to be distributed across different Siddhi bolts, their execGroup names should differ from each other.
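For example, a filtering query and a windowing aggregation could be assigned to separate Siddhi bolts by giving them differently named execGroups, each with its own parallel count. The queries below are an illustrative sketch, not taken from this page.

/* Runs in one Siddhi bolt with four parallel tasks */
@dist(parallel='4', execGroup='Filtering')
from stockStream[price > 100]
select symbol, price
insert into highPriceStockStream;

/* Runs in a separate Siddhi bolt with two parallel tasks */
@dist(parallel='2', execGroup='Aggregation')
from highPriceStockStream#window.time(1 min)
select symbol, avg(price) as avgPrice
group by symbol
insert into averagedStockStream;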