Batch Scaling workflow design guide (4.3)

Private Edition

When creating a scalable batch workflow in Usage Engine, it’s important to ensure that all agents with storage capabilities are configured to use Kafka storage. Additionally, scalable workflows require scalable Inter Workflow Collection and Forwarding Agents, as regular Inter Workflow Agents are not compatible. Mixing agents with different storage types, such as a Data Aggregator agent configured with Kafka storage and another with file storage, within the same workflow is not supported.

Creating a scalable solution (example)

These are the high-level steps for creating a scalable batch solution in Usage Engine. The example solution below is made up of several profiles, including the newly introduced Partition Profile (4.3) and Scalable Inter Workflow Profile (4.3), and two workflow types, Batch Scaling Collection and Batch Scaling Processing.

  1. Decide on your scaling factor. This is the maximum number of workflows that can effectively cooperate to process a batch. It is an important choice and will be difficult to change once your workflows are in production.

Warning!
Try to pick a Max Scale Factor that is divisible by many other numbers, such as 6 or 12. It must be high enough to handle the incoming data volume, but not so high that it overloads resources.
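
To see why a highly divisible factor helps, here is a small Python sketch (illustration only; the round-robin split is an assumed distribution model, not Usage Engine's actual assignment algorithm). A Max Scale Factor of 12 divides evenly across 1, 2, 3, 4, 6, or 12 cooperating workflows, whereas a prime value such as 7 balances only at 1 and 7:

```python
# Check how evenly a Max Scale Factor spreads over possible workflow
# counts, assuming partitions are dealt out round-robin.
def partition_spread(max_scale_factor: int) -> None:
    for workflows in range(1, max_scale_factor + 1):
        per_wf = [len(range(i, max_scale_factor, workflows))
                  for i in range(workflows)]
        even = "even" if min(per_wf) == max(per_wf) else "skewed"
        print(f"{workflows:2d} workflows -> {per_wf} ({even})")

partition_spread(12)  # even at 1, 2, 3, 4, 6 and 12 workflows
partition_spread(7)   # prime: even only at 1 and 7 workflows
```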

  2. Choose one or more fields in your UDRs to use for partitioning the data. These fields are typically based on a record group, such as a customer ID or an account number (see the sketch below).
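
As an illustration of how partitioning fields drive data distribution, here is a sketch of the generic hash-and-modulo technique. The field name, hash function, and mapping are assumptions for this example, not Usage Engine's internal implementation:

```python
import hashlib

MAX_SCALE_FACTOR = 12  # must match the Partition Profile

def partition_for(udr: dict) -> int:
    """Map a UDR to a partition based on its partitioning field.

    'customer_id' is a hypothetical field name, and hash-modulo is
    the generic technique, not Usage Engine's exact implementation.
    """
    key = str(udr["customer_id"]).encode("utf-8")
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % MAX_SCALE_FACTOR

# Records for the same customer always land in the same partition.
print(partition_for({"customer_id": "ACME-1001"}))
print(partition_for({"customer_id": "ACME-1002"}))
```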

  3. Create a Kafka Profile pointing to your cluster.

  4. Create a Partition Profile where you define your Max Scale Factor and your partitioning fields.

  5. Create the Aggregation, Duplicate UDR, and Scalable Inter Workflow profiles, and link the Partition Profile created in Step 4 to each.

  6. Create your workflows:

    • Standard workflows - prepare data for scaling by sending it to the Scalable InterWF Forwarder.

    • Scalable processing workflows - collect data with a Scalable InterWF Collector (the underlying Kafka pattern is sketched after the note below).

Warning!
When creating a scalable workflow, you must add the Kafka Profile in the Execution tab of the workflow properties.

Note!
You can include multiple Aggregation and Duplicate UDR agents within the same workflow. These agents can either share the same Partition Profile or use different Aggregation and Duplicate UDR Profiles. For instance, you might use different profiles if you need to apply a different ID field as the Key in storage.
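
For intuition, here is a sketch of the Kafka pattern that underlies the forwarder/collector split, written with the kafka-python client. Usage Engine's Scalable InterWF agents handle all of this internally; the topic name, broker address, key, and payload below are illustrative assumptions only.

```python
from kafka import KafkaProducer, KafkaConsumer

# Forwarding side: key each record by its partitioning field so that
# all records for one customer land in the same partition.
producer = KafkaProducer(bootstrap_servers="kafka:9092")
producer.send("interwf.batches",
              key=b"ACME-1001",          # hypothetical partition key
              value=b'{"amount": 42}')   # hypothetical UDR payload
producer.flush()

# Collecting side: every scaled-out processing workflow joins the same
# consumer group, and Kafka assigns each one a share of the partitions.
consumer = KafkaConsumer("interwf.batches",
                         bootstrap_servers="kafka:9092",
                         group_id="batch-scaling-processing")
for record in consumer:
    print(record.partition, record.key, record.value)
```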

Scaling Batch Workflows

Usage Engine automatically scales scalable batch workflows out and in and rebalances them; you can also schedule when a scale-out or scale-in starts.
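
As a rough picture of what such a rebalance means under the partition model above: with a Max Scale Factor of 12, scaling out from 3 to 6 processing workflows halves each workflow's share of the partitions. The round-robin assignment below is an assumption for illustration, not Usage Engine's actual algorithm:

```python
MAX_SCALE_FACTOR = 12

def assign(workflows: int) -> dict:
    # Deal partitions out round-robin across the running workflows.
    return {wf: list(range(wf, MAX_SCALE_FACTOR, workflows))
            for wf in range(workflows)}

print(assign(3))  # {0: [0, 3, 6, 9], 1: [1, 4, 7, 10], 2: [2, 5, 8, 11]}
print(assign(6))  # each workflow now owns only two partitions
```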

Deploying a scale-out configuration with ECDs:

Use https://infozone.atlassian.net/wiki/x/IgMkEg with Dynamic Workflows to define how to package a scale-out. See the tabs on https://infozone.atlassian.net/wiki/x/VgQkEg for more information.
