Overview
Usage Engine Private Edition now supports batch scaling, making it possible to increase or decrease processing capacity as needed without manual intervention. As a general concept, batch scaling speeds up processing by splitting the workload between multiple “workers” or resources, so that they complete tasks in parallel rather than sequentially. Usage Engine’s solution consists of two new agents, the Scalable InterWF Forwarder and the Scalable InterWF Collector, and a new profile, the Partition Profile. It also uses the existing Data Aggregator and Deduplication agents, which have been updated with a Kafka storage profile option. Batch scaling is recommended for high-volume batch use cases, for example pipelines that collect large numbers of files and perform duplication checks and aggregation on the data.
Prerequisites
Batch scaling relies on Kafka for all of its shared storage: UDRs passed between workflows, duplicate UDR keys, and aggregation sessions are all kept in Kafka topics. Before you configure batch scaling, you therefore need a Kafka cluster that is reachable from your Usage Engine environment.
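As a quick way to verify this prerequisite, the sketch below uses the standard Apache Kafka Java AdminClient to check that the cluster is reachable. The bootstrap address is a placeholder; substitute your own cluster's address.

    // Minimal connectivity check (assumption: standard Apache Kafka Java
    // client). Confirms that the Kafka cluster required by the Kafka
    // storage profiles is reachable before you configure batch scaling.
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class KafkaPrecheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // "kafka:9092" is a placeholder; use your cluster's bootstrap address.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                String clusterId = admin.describeCluster().clusterId().get();
                System.out.println("Connected to Kafka cluster: " + clusterId);
            }
        }
    }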
How it works
You collect a large number of files and want to process the data in them more efficiently. This can be achieved by splitting the processing into several workflows and partitioning the data between them, so that multiple workflow instances can work in parallel.
Assume that you have a batch use case where you collect files, perform duplication checks, and aggregate the data (a common pattern in batch workflow groups), and you want to be able to scale. You split the processing into two or three workflows; in the picture below we use three.

The File Collection workflow(s) use the configured ID Fields (for example, a customer ID) to determine which partition a UDR belongs to, and in this way they manage the InterWF partitions. The Scalable InterWF Forwarder writes each UDR to its partition.

The Duplication Check workflow(s) use the Scalable InterWF Collector to pick up the UDRs from the InterWF partitions, the Deduplicate agent to check for duplicates, and the Scalable InterWF Forwarder to feed the non-duplicated UDRs to the aggregation partitions. The Aggregation workflow(s), finally, use the Scalable InterWF Collector followed by the Data Aggregator. Deduplicate and Data Aggregator are the existing agents, but both now offer a Kafka storage profile option that you need to configure.
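The exact hash that the Scalable InterWF Forwarder applies to the ID Fields is not described here, but the principle can be illustrated with a minimal sketch, assuming a hash-then-modulo mapping similar to Kafka's default key partitioning (all names in the sketch are hypothetical):

    // Sketch of the partitioning principle: the configured ID Fields (here
    // a hypothetical customer id string) are hashed, and the hash is mapped
    // onto a fixed number of partitions. The actual hash used by the
    // Scalable InterWF Forwarder may differ.
    public final class PartitionAssigner {
        private final int numPartitions; // same as the Max Scale Factor

        public PartitionAssigner(int numPartitions) {
            this.numPartitions = numPartitions;
        }

        public int partitionFor(String idFields) {
            // Mask off the sign bit so the result is a valid partition index.
            return (idFields.hashCode() & 0x7fffffff) % numPartitions;
        }
    }

Because the mapping is deterministic, every UDR with the same ID Field values ends up in the same partition, which is what allows the duplicate checks and aggregation sessions for, say, one customer to be handled by a single workflow instance.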
The number of partitions is determined by the Max Scale Factor parameter and is the same for all of the different storages needed (see the sketch after this list):
Passing of UDRs between workflows.
Duplicate UDR keys.
Aggregation sessions.
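To see why a single Max Scale Factor works for all three storages, consider how the backing Kafka topics could be created. The sketch below uses the standard Kafka AdminClient; the topic names and replication factor are assumptions for illustration, not the names Usage Engine actually creates:

    // Sketch: every storage topic is created with the same partition
    // count, taken from the Max Scale Factor.
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateScalingTopics {
        public static void main(String[] args) throws Exception {
            int maxScaleFactor = 3; // number of partitions for every storage
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                short rf = 1; // replication factor; raise in production
                admin.createTopics(List.of(
                        new NewTopic("interwf-udrs", maxScaleFactor, rf),
                        new NewTopic("duplicate-keys", maxScaleFactor, rf),
                        new NewTopic("aggregation-sessions", maxScaleFactor, rf)
                )).all().get();
            }
        }
    }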
The Duplication Check workflow(s) check for duplicates across all partitions. Checked UDRs are placed in another topic with the same corresponding partitions as the topic the workflow collected from. The duplicate UDR keys are saved in a separate topic with the same number of partitions, partitioned on the same ID Fields.
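The following sketch illustrates this principle with hypothetical names, using an in-memory key set as a stand-in for the Kafka-backed duplicate-key storage. A UDR collected from partition p is, if new, forwarded to partition p of the checked-UDR topic, and its key is recorded:

    // Sketch (hypothetical names; not the agent's internal code) of the
    // duplication-check principle.
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public final class DuplicationCheck {
        // Stand-in for the Kafka-backed duplicate-key storage.
        private final Set<String> seenKeys = new HashSet<>();
        private final Producer<String, byte[]> producer;

        public DuplicationCheck(Producer<String, byte[]> producer) {
            this.producer = producer;
        }

        public void process(String idFields, byte[] udr, int partition) {
            if (!seenKeys.add(idFields)) {
                return; // duplicate: drop (or route to an error stream)
            }
            // Same partition number as the input topic, so the downstream
            // aggregation workflows see a consistent key-to-partition mapping.
            producer.send(new ProducerRecord<>("checked-udrs", partition, idFields, udr));
            producer.send(new ProducerRecord<>("duplicate-keys", partition, idFields, new byte[0]));
        }
    }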
The Aggregation workflow(s) collect from an inter-workflow topic and work against a separate aggregation session storage topic.
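As a rough illustration of the session storage idea (the real Data Aggregator persists its sessions through the Kafka storage profile; this sketch keeps them in memory), aggregation is keyed on the same ID Fields, so identical partitioning guarantees that each key's session is owned by exactly one workflow instance:

    // Sketch (hypothetical) of aggregation sessions keyed on the ID Fields.
    import java.util.HashMap;
    import java.util.Map;

    public final class AggregationSessions {
        // key -> running total for that key's open session
        private final Map<String, Long> sessions = new HashMap<>();

        public void aggregate(String idFields, long amount) {
            sessions.merge(idFields, amount, Long::sum);
        }

        public Long close(String idFields) {
            return sessions.remove(idFields); // flush the finished session downstream
        }
    }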
Subsections
This section contains the following subsections: