A workflow is loaded and started on an Execution Context according to configured distribution criteria, for example based on machine load, or by explicitly specifying the Execution Contexts where the workflow should run.

There are two kinds of Execution Contexts: one that can execute any type of workflow (EC) and one that can run stand-alone (ECSA). The stand-alone version only works with real-time workflows that are configured not to depend on external entities. The purpose of a stand-alone workflow is to allow it to run without relying on the Platform. For example, assume a work environment where either the network is unreliable, or the workflow must guarantee uptime, even if the Platform has terminated. If the Platform is down, a stand-alone Execution Context keeps track of events that occurred, and once the Platform is up and running again, these events are propagated to the Platform.

Execution Tab in Batch Workflow

The Execution tab has settings that are related to where the workflow will be executed and how often it will execute.

The Batch Workflow Execution tab


Item | Description

Execution Settings

Select Enable to enable setup of the execution parameters.

Distribution

A workflow executes on an EC, or groups of ECs. You can specify these ECs, or the system can select them automatically.

Note!

If you choose to configure the distribution using EC groups, the selected distribution type is also applied to the ECs within the groups.

Hint!

You can combine individual ECs and EC groups in the Execution Contexts list. The selected distribution is then applied to all ECs listed either individually or in groups.

The following options exist:

Sequential - Valid only if ECs are defined. Starts the workflow on the first EC/EC group in the list. If this EC/EC group is not available, it proceeds with the next in line.

Workflow Count - Starts the workflow on the EC running the fewest number of workflows. If the Execution Contexts list contains at least one entry, only the listed ECs/EC groups are considered.

Machine Load - Starts the workflow on the EC with the lowest machine load. If the Execution Contexts list contains at least one entry, only the listed ECs/EC groups are considered. Which EC to select is based on information from the System Statistics sub-system.

Round Robin - Starts the workflow on the available ECs/EC groups in turn, but not necessarily in a sequential order. If ec1, ec2 and ec3 are defined, the workflow may first attempt to start on ec2. The next time it may start on ec3 and then finally on ec1. This order is then repeated. If an EC is not available, the workflow will be started on any other available EC.
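The Round Robin behavior above, including the fallback to another available EC, can be sketched as follows. This is an illustrative Python model, not product code; the EC names and the availability check are hypothetical.

```python
class RoundRobinDistributor:
    """Sketch of round-robin EC selection with fallback: each EC is used
    once per cycle, and an unavailable EC is skipped in favor of any
    other available one."""

    def __init__(self, ecs):
        self.ecs = list(ecs)
        self.next_index = 0

    def pick(self, is_available):
        # Try each EC starting from the current cycle position; if the
        # preferred EC is unavailable, fall back to the next available one.
        for offset in range(len(self.ecs)):
            ec = self.ecs[(self.next_index + offset) % len(self.ecs)]
            if is_available(ec):
                self.next_index = (self.next_index + offset + 1) % len(self.ecs)
                return ec
        return None  # no EC available at all


dist = RoundRobinDistributor(["ec1", "ec2", "ec3"])
always_up = lambda ec: True
picks = [dist.pick(always_up) for _ in range(6)]
# Each EC is started on once per cycle of three workflow starts.
```

Note that the real selection order is not necessarily sequential, as described above; the sketch only models the "each EC once per cycle, with fallback" property.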

Debug Type

Select Event to enable debug output (e.g. output from a debug call in the APL code) to appear in the Workflow Monitor.

Select File to save debug results in $MZ_HOME/tmp/debug. The file name is made up of the name of the workflow template and the name of the workflow itself, for example: $MZ_HOME/tmp/debug/Default.radius_wf.workflow_2. Only debug events are written to the file. To save all the events that are shown when you monitor events in the Workflow Monitor to a file, you must add an event notifier with file output to the relevant Agent Message Event. For further information, see 4. Event Notifications.

If you save debug results in a file and then restart the workflow, the file is overwritten by the debug information generated by the second execution. To avoid losing debug data from earlier executions, set Number of Files to Keep to a number higher than 0 (zero).

Number of Files to Keep: Enter the number of debug output files that you want to keep. When this limit is reached, the oldest file is overwritten. If you set this limit to 0 (zero), the log file is overwritten every time the workflow starts.


Example - Debug output

The workflow configuration Default.radius_wf includes a workflow that is called workflow_2. Number of Files to Keep is set to 10.

The debug output folder contains the following files:

Default.radius_wf.workflow_2    (current debug file)
Default.radius_wf.workflow_2.1  (newest rotated file)
Default.radius_wf.workflow_2.2
Default.radius_wf.workflow_2.3
Default.radius_wf.workflow_2.4
Default.radius_wf.workflow_2.5
Default.radius_wf.workflow_2.6
Default.radius_wf.workflow_2.7
Default.radius_wf.workflow_2.8
Default.radius_wf.workflow_2.9
Default.radius_wf.workflow_2.10 (oldest rotated file)

In this example there are 11 files in total, which are overwritten one by one. The rotation order is:

Default.radius_wf.workflow_2
             |
             V
Default.radius_wf.workflow_2.1
             |
             V
Default.radius_wf.workflow_2.2
             |
             V
             :
             :
             |
             V
Default.radius_wf.workflow_2.n
             |
             V
          Deleted
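The rotation above can be modeled as follows. This is a minimal Python sketch of the renaming scheme, not the product's own implementation; the product performs the rotation itself.

```python
def rotate(files, base, keep):
    """Sketch of debug-file rotation: the current file base becomes base.1,
    base.1 becomes base.2, and so on; files whose new suffix would exceed
    `keep` are deleted. Returns the file names after one rotation, with a
    fresh current file at the front."""
    rotated = []
    for name in files:
        if name == base:
            rotated.append(f"{base}.1")            # current file becomes .1
        elif name.startswith(base + "."):
            n = int(name.rsplit(".", 1)[1]) + 1
            if n <= keep:                           # older than .keep -> deleted
                rotated.append(f"{base}.{n}")
    return [base] + sorted(rotated, key=lambda f: int(f.rsplit(".", 1)[1]))


base = "Default.radius_wf.workflow_2"
files = [base] + [f"{base}.{i}" for i in range(1, 11)]   # the 11 files above
after = rotate(files, base, keep=10)
# Still 11 files: the old base.10 is gone, everything else shifted by one.
```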

Example - Using the option Always Create a New Log File

If you have a workflow named Default.radius_wf with an instance called workflow_2 and you create new debug output files every time the workflow executes, the debug output folder contains files like the following:

Default.radius_wf.workflow_2.1279102896375
Default.radius_wf.workflow_2.1279102902908
Default.radius_wf.workflow_2.1279102907149

Note!

The system does not manage the debug output files when this option is used. It is up to the user to make sure that the disk does not fill up.

Transaction Data Storage

This field is only applicable if you have enabled Scalable Batch Transaction Service (SBTS).

When SBTS is disabled, batch transaction state information is always stored in a database, i.e. Derby, Oracle, or PostgreSQL.

When SBTS is enabled, use this drop-down list to select the storage:

  • Default Handler - Database.
  • Local File-Based Handler - EC filesystem.

Note!

When you change the transaction data storage, the latest state will remain in the previous storage but will not be transferred to the new storage.

For further information about how to enable SBTS, see 2.13 Enabling Scalable Batch Transaction Service in the System Administrator's Guide.

Throughput Calculation

The system contains an algorithm that calculates the throughput of a running workflow. It locates the first agent in the workflow configuration that delivers UDRs, usually the decoder, and counts the number of UDRs passed per second. If no UDRs are passing through the workflow, the first agent delivering raw data is used instead. The statistics can be viewed in System Statistics.

If a MIM value other than the default is preferred for calculating the throughput, select the User Defined check box. The browse button opens a MIM Browser dialog that shows the available MIM values for the workflow configuration, where you can select a new calculation point.

Since the MIM value must represent the amount of data that has entered the workflow since the start (for batch workflows, since the start of the current transaction), it must be of a dynamic numeric type, as it changes while the workflow is running.
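The calculation above amounts to sampling a monotonically increasing counter and dividing the delta by the elapsed time. A hedged Python sketch, assuming hypothetical (timestamp, cumulative count) samples of a dynamic numeric MIM value:

```python
def throughput(samples):
    """Compute UDRs per second from (timestamp_seconds, cumulative_count)
    samples of a monotonically increasing counter. Returns 0.0 with fewer
    than two samples or no elapsed time."""
    if len(samples) < 2:
        return 0.0
    (t0, c0), (t1, c1) = samples[0], samples[-1]
    elapsed = t1 - t0
    return (c1 - c0) / elapsed if elapsed > 0 else 0.0


# 500 UDRs counted over 10 seconds of sampling -> 50 UDRs/s.
rate = throughput([(0, 0), (10, 500)])
```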


Execution Tab in Real-Time Workflow


The Realtime Execution tab


Item | Description

Execution Settings

Select Enable to enable setup of the execution parameters. This is required to configure a stand-alone workflow, i.e. when Execution Context Type is set to ECSA.

Autostarted Workflow

Select Enable to enable the automatic startup of all the workflows after they have been enabled in the Execution Manager.

For Instances / EC, set the number of workflow instances to be started per EC.

For Abort Behavior, you can select either Abort or Retry. If you select Abort, a workflow aborts when an error occurs. If you select Retry, a workflow tries to restart when an error occurs, and continues to retry until the workflow restarts.

You can define only one EC per autostarted workflow template.

Distribution

A workflow executes on an EC, or groups of ECs. You can specify these ECs, or the system can select them automatically.

Note!

If you choose to configure the distribution using EC groups, the selected distribution type is also applied to the ECs within the groups.

Hint!

You can combine individual ECs and EC groups in the Execution Contexts list. The selected distribution is then applied to all ECs listed either individually or in groups.

The following options exist:

Sequential - Valid only if ECs are defined. Starts the workflow on the first EC/EC group in the list. If this EC/EC group is not available, it proceeds with the next in line.

Workflow Count - Starts the workflow on the EC running the fewest number of workflows. If the Execution Contexts list contains at least one entry, only the listed ECs/EC groups are considered.

Machine Load - Starts the workflow on the EC with the lowest machine load. If the Execution Contexts list contains at least one entry, only the listed ECs/EC groups are considered. Which EC to select is based on information from the System Statistics sub-system.

Round Robin - Starts the workflow on the available ECs/EC groups in turn, but not necessarily in a sequential order. If ec1, ec2 and ec3 are defined, the workflow may first attempt to start on ec2. The next time it may start on ec3 and then finally on ec1. This order is then repeated. If an EC is unavailable, the workflow will be started on any other available EC.

Execution Context Type

Select an execution context type that the workflow should execute on.

The following options exist:

EC - This setting enables execution of the workflow on one or more ECs. If several ECs/EC groups are added to the Execution Contexts list, the selected Distribution is considered. If no EC/EC group is selected, the system considers all available ECs as possible targets.

ECSA - This setting enables execution of a stand-alone workflow that is independent of the Platform.

When ECSA is selected you must configure the workflow to execute on a specific ECSA/ECSA group.

Note!

Agents in the workflow will validate against the configured ECSA. If an agent depends on another ECSA, the workflow will be invalid.

To add an EC/EC group, click Add and select an item in the drop-down list.

Queue Size

The number of unprocessed entries (backlog) that the workflow can buffer before the collector is slowed down. The workflow and its back-end systems might slow their processing as the number of requests rises. To avoid congestion while records or decoding tasks are in the queue, the queue intake is delayed to keep the backlog from growing too fast. The default value is 1000.

The value that you enter here is the size of each route's queue in the workflow.
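The backpressure behavior described above can be sketched as a bounded queue whose producer blocks when the backlog is full. This is illustrative Python, not the product's internals; the class name and the blocking policy are assumptions.

```python
import queue


class BackpressureQueue:
    """Sketch of a route queue: the collector blocks when the queue is
    full, which slows intake until worker threads drain the backlog."""

    def __init__(self, size=1000):           # 1000 mirrors the default queue size
        self._q = queue.Queue(maxsize=size)

    def offer(self, udr, timeout=None):
        # Blocks (slowing the collector) while the queue is full;
        # with a timeout, raises queue.Full instead of waiting forever.
        self._q.put(udr, timeout=timeout)

    def poll(self):
        # Worker side: take the next UDR without waiting.
        return self._q.get_nowait()
```

Each route in the workflow gets its own queue of this size, as noted above.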

Queue Strategy

Blocking queue

The default method.

Ordered Routing

Use this option in combination with Ordered Services in the Service tab to preserve the order of incoming UDRs.

To maintain the order of the UDRs as the agent sees them from the source, you can use the ordered routing function. This ensures that the routing order is retained even if the work is divided over several threads. When using this function you must define how to capture the order of the UDRs. This is done with APL in the Services tab in Workflow Properties:

    void route(order.SessionIdentifier si, any input) {....}

  1. Inspect the 'input' value, typically with instanceOf checks for each routed type.
  2. Call ordered.addInteger(si, i) etc with values from the session-defining fields; hashing then selects the partition. Alternatively, call ordered.setPartition(si, N) to explicitly choose a partition.
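The hash-based partition selection mentioned above can be illustrated as: the session-defining field values are combined into one hash, and the hash picks the partition, so UDRs with the same session key always land on the same partition and keep their relative order. A Python sketch, not APL product code; the field values and hash function are hypothetical.

```python
def select_partition(session_fields, num_partitions):
    """Sketch of hash-based partition selection for ordered routing:
    equal session-defining fields always map to the same partition."""
    h = 0
    for field in session_fields:
        # Simple stable 31-based rolling hash over the string form of
        # each field (illustrative; the real hash is product-internal).
        for ch in str(field):
            h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h % num_partitions


# Same session key -> same partition, so per-session order is preserved.
p = select_partition(["subscriber-42", 7], 4)
```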

Queue Worker Strategy

By selecting Queue Worker Strategy, you can determine how the workflow should handle queue selection, which may be useful if you have several different collectors.

You have the following options:

RoundRobin

The RoundRobin strategy works in the same way as the InsertionOrder strategy, except that each workflow thread will be given its own starting position in the routing queue list. This means that as long as the number of workflow threads is equal to, or greater than, the number of routing queues, no queue will suffer from starvation.

Faster routes will get more load than slower ones. This option provides a reasonably fair distribution.

Use this strategy if the number of workflow threads is equal to, or greater than, the number of routing queues, and it is desirable to prioritize faster routes over slower ones.

RoundRobin is the default strategy.
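The difference from InsertionOrder, each thread scanning from its own starting offset, can be sketched as follows. Illustrative Python, not product code; queues are modeled as plain lists.

```python
def round_robin_poll(queues, start):
    """Sketch of RoundRobin queue selection: scan from this worker
    thread's own starting offset, wrapping around, and return the index
    of the first non-empty queue, or None if all are empty. With one
    offset per thread, no queue starves when threads >= queues."""
    n = len(queues)
    for off in range(n):
        i = (start + off) % n
        if queues[i]:
            return i
    return None


# A worker starting at offset 1 serves queue 1 first even though
# queue 0 also has work; a worker at offset 0 would serve queue 0.
chosen = round_robin_poll([["u1"], ["u2"], ["u3"]], start=1)
```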


DedicatedAndRoundRobin

In the DedicatedAndRoundRobin strategy, each queue has one thread dedicated to it by default, and the number of workflow threads (given in the Threads column) minus one serve the queues in round-robin fashion. The number of threads indicates the maximum number of threads that can collect from a queue at any one time. A single workflow thread guarantees the order of the UDRs in the workflow.


InsertionOrder

With the InsertionOrder strategy, queues are selected in route insertion order. As long as there are queued UDRs available on the first queue, that queue is polled. This means that routes with a later insertion order may not receive as many UDRs as they have capacity for, and may get little or no throughput. This condition can be detected by looking at the Queue Throughput for workflows in the System Statistics view.

Only use this strategy if such starvation is not an issue.

This is the preferred choice when you work synchronously with responses and process small amounts of UDRs at any given time (which is not the same as low throughput).
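The starvation condition described above can be illustrated with a minimal Python sketch: an insertion-order worker always drains the first non-empty queue, so later queues are only served once all earlier queues are empty. Queues are modeled as plain lists; this is not product code.

```python
def insertion_order_poll(queues):
    """Sketch of InsertionOrder queue selection: return the index of the
    first non-empty queue (in route insertion order), or None if all are
    empty. Later queues are only served when every earlier queue is
    drained -- the starvation condition noted above."""
    for i, q in enumerate(queues):
        if q:
            return i
    return None


# Queue 2 has work, but it is only chosen because queues 0 and 1 are empty.
chosen = insertion_order_poll([[], [], ["u3"]])
```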

Note!

The insertion order depends primarily on how close the queues are to an "exit", i.e. an agent without any configured output. The queues that are closest to an exit are inserted first, and the queues that are furthest from an exit are inserted last. However, if the distance to an exit is equal for two or more queues, the insertion order is dictated by the sequence of the agents in the workflow configuration, i.e. the agent that was added first to the configuration has higher priority.

Threads

The number of workflow threads. The default value is 8.

Throughput Calculation

The system contains an algorithm that calculates the throughput of a running workflow. It locates the first agent in the workflow configuration that delivers UDRs, usually the decoder, and counts the number of UDRs passed per second. If no UDRs are passing through the workflow, the first agent delivering raw data is used instead. The statistics can be viewed in System Statistics.

If a MIM value other than the default is preferred for calculating the throughput, select the User Defined check box. The browse button opens a MIM Browser dialog that shows the available MIM values for the workflow configuration, where you can select a new calculation point.

Since the MIM value must represent the amount of data that has entered the workflow since the start (for batch workflows, since the start of the current transaction), it must be of a dynamic numeric type, as it changes while the workflow is running.

Processed UDRs Count Interval (min)

Select this option to specify the interval, in minutes, for counting the number of processed UDRs. The default value is 1. The maximum permitted value is 1440 minutes (one day).