...

Once the topology is successfully submitted to Apache Storm, a log similar to the following example is printed on the CLI and in the wso2carbon.log file of the CEP manager.

Naming execution plans

...

For example, if an execution plan is created with the @Plan:name('StockAnalysis') annotation for the super-tenant (tenant ID -1234), the Storm topology can be viewed as follows in the Storm UI.
[Screenshot: the StockAnalysis topology displayed in the Storm UI]

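For reference, the following is a minimal sketch of such an execution plan. Only the @Plan:name annotation is taken from the example above; the stream definition and query are illustrative assumptions, not part of the original example.

    @Plan:name('StockAnalysis')

    -- Illustrative input stream (attribute names and types are assumptions)
    define stream StockStream (symbol string, price float, volume long);

    -- Illustrative query: forward trades above a volume threshold
    from StockStream[volume > 10000]
    select symbol, price, volume
    insert into HighVolumeStockStream;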
...

Checking the status of the execution plan

Before publishing data to an event flow with a distributed execution plan, check whether the following conditions are met to ensure that the execution plan is ready to process data.

...

Things to note
  • If you change the name of an execution plan after creating and saving it, remove the Storm topology generated with the previous name from the Apache Storm UI.
  • The number of execution plans that can be created in a CEP distributed deployment is determined by the number of slots (i.e., workers) in the Storm cluster. By default, each execution plan requires at least one slot to deploy its Storm topology. This value can be changed by adding the following configuration to the <CEP_HOME>/repository/conf/cep/storm/storm.yaml file.
    topology.workers : <number> 
    For example, if you add the configuration topology.workers : 2, and the number of slots in the cluster is 10, then the maximum number of execution plans allowed to be created for the cluster is 5 (10/2).
  • At present, there is no way to control how bolts/spouts are spread across the Storm cluster; the default Storm scheduler distributes them in a round-robin manner.
  • When adding RDBMS event tables, use the following notation to define them, because the datasource-based approach does not work with Storm (see the sketch after this list).

    @From(eventtable='rdbms', jdbc.url='jdbc:mysql://localhost:3306/cepdb', username='root', password='root', driver.name='com.mysql.jdbc.Driver', table.name='RoomTable')
  • If you are using Hazelcast event tables, run Hazelcast on a separate node in the network and point to that node when defining the Hazelcast event table.
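As a minimal sketch of the RDBMS notation above, the @From annotation is placed immediately before the table definition inside the execution plan. The connection details repeat the example values from the list; the table schema is illustrative.

    @From(eventtable='rdbms', jdbc.url='jdbc:mysql://localhost:3306/cepdb', username='root', password='root', driver.name='com.mysql.jdbc.Driver', table.name='RoomTable')
    -- Illustrative schema for the backing table
    define table RoomTable (roomNo int, type string);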

See the following samples for more information.