
Scheduling Tasks

Task scheduling is used to invoke an operation periodically or a specified number of times. Scheduling is particularly useful when a data service operation is associated with an event trigger: each time the scheduled task runs, the event-trigger criteria are evaluated and the event fires automatically if they are met. For example, you can schedule a task on a getProductQuantity operation and set an event (e.g., sending an email) that fires when the quantity drops below a certain level.

Task scheduling functionality is provided by the following feature in the WSO2 feature repository:

Name : Data Service Tasks Feature
Identifier : org.wso2.carbon.dataservices.task.feature.group

The following topics are covered:

Tasks Configuration

The scheduled tasks configuration is a generic configuration used by any component that requires scheduled-task functionality. Scheduled tasks support several modes of operation, including full support for load balancing and fail-over of tasks. The tasks configuration file can be found at "/repository/conf/etc/tasks-config.xml". The default configuration is shown below.

tasks-config.xml
<tasks-configuration xmlns:svns="http://org.wso2.securevault/configuration">

    <!-- 
      The currently running server mode; possible values are:-
      STANDALONE, CLUSTERED, REMOTE, AUTO.
      In AUTO mode, the server checks at startup whether clustering is enabled;
      if so, CLUSTERED mode is used; otherwise, the server mode will be STANDALONE.
    -->
    <taskServerMode>AUTO</taskServerMode>

    <!-- 
      Used in CLUSTERED mode to specify how many servers are in the
      task server cluster; the servers wait until this number of servers
      is active before the tasks are scheduled -->
    <taskServerCount>2</taskServerCount>

    <!-- The address to which the remote task server should dispatch the trigger messages;
      usually this would be an endpoint to a load balancer -->
    <taskClientDispatchAddress>https://localhost:9448</taskClientDispatchAddress>

    <!-- The address of the remote task server -->
    <remoteServerAddress>https://localhost:9443</remoteServerAddress>

    <!-- The username to authenticate to the remote task server -->
    <remoteServerUsername>admin</remoteServerUsername>

    <!-- The password to authenticate to the remote task server -->
    <remoteServerPassword>admin</remoteServerPassword>

    <!-- Below is a sample of the above property for use with secure vault -->
    <!--remoteServerPassword svns:secretAlias="remote.task.server.password"></remoteServerPassword-->

</tasks-configuration>

The default values in tasks-config.xml are chosen so that minimal changes are needed when running in either standalone or clustered mode. The task server mode is set to "AUTO" by default, which automatically detects whether clustering is enabled in the server and, if so, switches to the clustered mode of scheduled tasks. The task server count is set to "2" by default, since a clustered setup has at least two nodes. This setting represents the number of servers that must be active at startup before scheduled tasks are shared among them. For example, suppose 10 tasks were saved and scheduled earlier and the cluster was later brought down. When the individual servers come back up, we do not want the first server up to schedule all 10 tasks; rather, we want several servers to come up and share the 10 tasks among them.
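As a rough illustration of this sharing behavior (not the actual WSO2 task-allocation algorithm, which is internal to the task component), the 10 previously scheduled tasks could be divided among the active servers round-robin:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: assigns task indices to servers in
// round-robin order to illustrate how tasks can be shared at startup.
public class TaskSharingSketch {

    // Returns one list of task indices per server.
    public static List<List<Integer>> share(int taskCount, int serverCount) {
        List<List<Integer>> assignments = new ArrayList<>();
        for (int s = 0; s < serverCount; s++) {
            assignments.add(new ArrayList<>());
        }
        for (int t = 0; t < taskCount; t++) {
            assignments.get(t % serverCount).add(t);
        }
        return assignments;
    }

    public static void main(String[] args) {
        // 10 saved tasks, taskServerCount = 2
        List<List<Integer>> shared = share(10, 2);
        System.out.println("Server 0: " + shared.get(0)); // [0, 2, 4, 6, 8]
        System.out.println("Server 1: " + shared.get(1)); // [1, 3, 5, 7, 9]
    }
}
```

With taskServerCount set to 2, neither server schedules anything until both are up, at which point each ends up with 5 of the 10 tasks.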

Task clustering is based on a peer-to-peer communication mechanism, and during fail-over it can, in rare cases, result in split-brain scenarios, where the same task is scheduled twice without either node knowing it is already scheduled elsewhere. Task implementers should therefore make a best effort to keep the task functionality idempotent, or provide a mechanism to detect whether the current task is already running elsewhere.
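For example, a task implementation could guard its body with an idempotency check keyed on a run identifier. The sketch below uses an in-memory map purely for illustration; in a real cluster the marker store would have to be shared across nodes (e.g. a database table with a unique key on the run id), and the class name here is hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical idempotency guard. A run id (e.g. task name plus the
// scheduled fire time) is claimed atomically before the body runs, so a
// duplicate fire of the same run becomes a no-op.
public class IdempotentTaskGuard {

    private final ConcurrentMap<String, Boolean> claimedRuns =
            new ConcurrentHashMap<>();

    // Executes the body only if this run id has not been claimed yet.
    // Returns true if the body was executed by this call.
    public boolean runOnce(String runId, Runnable body) {
        if (claimedRuns.putIfAbsent(runId, Boolean.TRUE) != null) {
            return false; // already claimed by another thread or node
        }
        body.run();
        return true;
    }
}
```

Inside a DataTask's execute method, the run id could be derived from the task name and the scheduled execution time, so that two nodes firing the same scheduled run collapse to a single execution.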

Adding Scheduled Tasks

Follow the steps below to schedule a task.

  1. Log in to the Data Services Server Management Console and select Data Services > Scheduled Tasks in the Main menu.    
  2. The Scheduled Tasks window opens. Click Add New Task and fill in the required information.

    You can configure a task to invoke a data service operation or to use a custom Java class that implements the org.wso2.carbon.dataservices.task.DataTask interface. To successfully create a task, provide the following set of properties:

    • Task Name : Name of the scheduled task
    • Task Repeat Count : Number of times the task execution is repeated. If you enter 0, the task executes once; if you enter 1, it executes twice, and so on (i.e., the task runs repeat count + 1 times).
    • Task Interval : Time gap between two consecutive task executions
    • Start Time : Starting time of the scheduled task. If this is not given, the task starts at the moment it is scheduled.
    Parameters required to define a task that uses a data service operation
    • Data Service Name : Name of the relevant data service.
    • Operation Name : Data service operation to be executed from the task.

      Note: Only data services with HTTP endpoints are available when scheduling tasks to invoke data service operations. Also, you can use only operations with no input parameters when scheduling.

    Parameters required to define a task that uses a custom Java class
    • Data Service Task Class : Name of the Java class that implements the org.wso2.carbon.dataservices.task.DataTask interface. The definition of the interface is as follows:

      package org.wso2.carbon.dataservices.task;
      
      /**
       * This interface represents a scheduled data task.
       */
      public interface DataTask {
          void execute(DataTaskContext ctx);
      }

      The following code snippet shows a sample DataTask implementation:

      package samples;
      import java.util.HashMap;
      import java.util.Map;
      import org.wso2.carbon.dataservices.core.DataServiceFault;
      import org.wso2.carbon.dataservices.core.engine.ParamValue;
      import org.wso2.carbon.dataservices.task.DataTask;
      import org.wso2.carbon.dataservices.task.DataTaskContext;
      
      public class SampleDataTask implements DataTask {

          @Override
          public void execute(DataTaskContext ctx) {
              Map<String, ParamValue> params = new HashMap<String, ParamValue>();
              params.put("increment", new ParamValue("1000"));
              params.put("employeeNumber", new ParamValue("1002"));
              try {
                  ctx.invokeOperation("RDBMSSample", "incrementEmployeeSalary", params);
              } catch (DataServiceFault e) {
                  // handle exception
              }
          }
      }