...
Usage Engine Private Edition now supports horizontal scaling of batch workflows, increasing or decreasing processing capacity as needed without manual intervention. As a general concept, batch scaling is a way to speed up processing by splitting the workload between multiple ‘workers,’ enabling them to complete tasks in parallel rather than sequentially. Usage Engine’s solution consists of two new agents: the Scalable Inter Workflow Forwarding agent and the Scalable Inter Workflow Collection agent (Scalable InterWF). Two new profiles have also been created: the Partition Profile and the Scalable Inter Workflow Profile. The feature also uses the existing Data Aggregator and Deduplication agents, which have been updated to support a Kafka storage type. Kafka must be configured for all storage within your scalable batch solution.
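For illustration only, the sketch below shows how a Kafka topic with multiple partitions could be created to back such shared storage. It is not part of the product configuration; the broker address, topic name, partition count, and replication factor are assumptions you would replace with your own values.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateScalingTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder broker address - replace with your Kafka bootstrap servers.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // Hypothetical topic name; 8 partitions allows up to 8 workers
                // to share the load, with a replication factor of 3 for resilience.
                NewTopic topic = new NewTopic("batch-scaling-storage", 8, (short) 3);
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }

In Kafka terms, the partition count sets the upper bound on how many consumers in one group can process the data in parallel.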
How it works
Scalable workflows operate by splitting batch data into partitions so that multiple workflows can cooperate to process a batch. Each scaled workflow is assigned one or more partitions and processes all the data in those partitions. When workflows are started or stopped, a rebalance is performed and the partitions are reassigned across the new set of workflows.
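Conceptually, this resembles Kafka’s consumer-group rebalancing, where a topic’s partitions are redistributed whenever a consumer joins or leaves the group. The sketch below only illustrates that mechanism; it is not Usage Engine’s internal implementation, and the topic name, group id, and broker address are placeholders.

    import java.time.Duration;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ScaledWorker {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");      // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "batch-scaling-workers");    // all workers share one group
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Subscribing with a shared group id makes Kafka assign each worker a subset
                // of the topic's partitions and rebalance when workers start or stop.
                consumer.subscribe(Collections.singletonList("batch-scaling-storage"),
                    new ConsumerRebalanceListener() {
                        @Override
                        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                            System.out.println("Rebalance: partitions revoked " + partitions);
                        }
                        @Override
                        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                            System.out.println("Rebalance: partitions assigned " + partitions);
                        }
                    });

                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // Process only the data in this worker's assigned partitions.
                        System.out.printf("partition %d: %s%n", record.partition(), record.value());
                    }
                }
            }
        }
    }

Starting a second instance with the same group id triggers a rebalance that splits the partitions between the two instances; stopping one hands its partitions back to the remaining instance.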
This example shows a batch processing setup where you collect files, check for duplicates, and perform aggregation. We have set up two workflows in our batch scaling solution.
...