

Overview

Usage Engine Private Edition now supports horizontal scaling of batch workflows, so that processing capacity can be increased or decreased as needed. As a general concept, batch scaling speeds up processing by splitting the workload between multiple "workers" or resources, enabling them to complete tasks in parallel rather than sequentially. Scaling is controlled by the scaling configuration of your ECD, which you set up based on a known metric - for example, scaling out when the number of collected data files exceeds a certain threshold.

The solution consists of two new agents - the Scalable Inter Workflow (Scalable InterWF) Forwarding agent and the Scalable InterWF Collector agent - and a new profile, the Partition Profile. The partitioning is performed by the agents that use the Partition Profile. This is what distinguishes batch scaling from Automatic Scale Out and Rebalancing, where Kafka performs the partitioning based on what is configured in the Kafka agent. The existing Data Aggregator and Deduplication agents have been updated to include a Kafka storage profile, and Kafka must be configured for all storage within your batch scaling solution.

The Kafka profile must also be configured in the Execution tab of the workflow properties for the scalable workflows.
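Conceptually, the parallel workers are Kafka consumers in a consumer group: Kafka assigns each worker a subset of a topic's partitions, so the workers process the data in parallel. The sketch below illustrates that mechanism using the plain Kafka Java client. It is not Usage Engine's internal implementation, and the topic name, group id, and bootstrap server are assumptions made for the example.

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

/**
 * Conceptual sketch only: each running instance of a scaled-out batch
 * workflow behaves like one consumer in a Kafka consumer group. Kafka
 * assigns each instance a subset of the topic's partitions, so the
 * instances work through the data in parallel rather than sequentially.
 */
public class ParallelWorkerSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // All instances of the same workflow share one group id (assumed name).
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "duplication-check-workflow");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Every instance subscribes to the same inter-workflow topic (assumed name);
            // Kafka balances the topic's partitions across the instances.
            consumer.subscribe(List.of("interwf.file.collection"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each UDR is processed by exactly one instance, determined by
                    // the partition it was written to.
                    System.out.printf("partition=%d key=%s%n", record.partition(), record.key());
                }
            }
        }
    }
}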

How it works

This example shows a batch processing setup where you collect files and perform duplication checks and aggregation. We want to make this solution scalable to improve the processing times of our data during periods of high usage. The batch scaling solution in this example consists of three workflows: File Collection, Duplication Check, and Aggregation. There is no minimum or maximum number of workflows required for a working solution.

batchScaling.png
  1. The Scalable InterWF Forwarding agent in the File Collection workflow manages the partitions. Like any agent that uses the Partition Profile, it uses an ID Field (for example, a customer ID) to determine which partition a UDR belongs to (see the sketch after this walkthrough).

  2. The maximum number of partitions that can be created, and therefore the maximum number of parallel workflows, is determined by the Max Scale Factor parameter in the Partition Profile.

Note!

The number of partitions will be the same across all topics. Storage will occur at several points, for example:

  • With the passing of UDRs between workflows.

  • When duplicate UDR keys are detected.

  • For aggregated sessions.

  3. The Duplication Check workflow checks for duplicates across all partitions. Checked UDRs are placed in an additional topic with the same partitions as the corresponding collection workflow topic. Any duplicate keys are saved in a separate topic.

  4. The Aggregation workflow collects data from an inter-workflow topic and uses a separate aggregation session storage topic.
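The routing in step 1 can be pictured with the plain Kafka Java client: keying each record on the ID Field makes Kafka's default partitioner hash the key, so every UDR with the same ID lands in the same partition and is handled by the same downstream workflow instance. This is a sketch of the concept only, not the agents' implementation; the topic name, field values, and a scale factor of 8 are assumptions made for the example.

import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.common.utils.Utils;

/**
 * Sketch of ID-field based partition routing. Keying each UDR on its ID field
 * (here a customer ID) lets Kafka's default partitioner hash the key, so all
 * records for the same customer end up in the same partition.
 */
public class PartitionRoutingSketch {

    // Corresponds conceptually to the Max Scale Factor: the number of
    // partitions, and therefore the maximum number of parallel workflows.
    static final int MAX_SCALE_FACTOR = 8;

    /** Same formula as Kafka's default partitioner: murmur2 hash modulo the partition count. */
    static int partitionFor(String idField) {
        return Utils.toPositive(Utils.murmur2(idField.getBytes(StandardCharsets.UTF_8))) % MAX_SCALE_FACTOR;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String customerId = "CUST-1042";   // the ID Field value of the UDR (made-up example)
            String udrPayload = "{\"customerId\":\"CUST-1042\",\"bytes\":1234}";

            // Keying the record on the ID field is what makes the routing deterministic.
            producer.send(new ProducerRecord<>("interwf.file.collection", customerId, udrPayload));

            System.out.println("CUST-1042 maps to partition " + partitionFor(customerId));
        }
    }
}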

Prerequisites for batch scaling

Your workflows must be designed so that the data can be processed in scalable batch workflows. For example, there must be at least one common denominator in the data, such as a customer ID, that links individual records.
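As a purely illustrative check, with made-up field names and record contents, the snippet below verifies that a candidate field is present in every record of a batch before it is chosen as the ID Field, and therefore as the partition key.

import java.util.List;
import java.util.Map;

/**
 * Illustration only: a batch can be partitioned when every record carries a
 * common field (the "common denominator"), such as a customer ID, that can
 * be used as the partition key.
 */
public class PartitionKeyCheck {

    /** Returns true if every record contains a non-null value for the candidate key field. */
    static boolean usableAsPartitionKey(List<Map<String, Object>> records, String field) {
        return records.stream().allMatch(r -> r.get(field) != null);
    }

    public static void main(String[] args) {
        List<Map<String, Object>> batch = List.of(
                Map.of("customerId", "CUST-1042", "bytes", 1234),
                Map.of("customerId", "CUST-0007", "bytes", 98));

        // customerId appears in every record, so it links the records and
        // could serve as the ID Field in the Partition Profile.
        System.out.println("customerId usable: " + usableAsPartitionKey(batch, "customerId"));
        System.out.println("sessionId usable:  " + usableAsPartitionKey(batch, "sessionId"));
    }
}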

Example use case for batch scaling

Subsections

This section contains the following subsections:



