...
Usage Engine Private Edition now supports batch scaling, making it possible to increase or decrease processing capacity as needed without manual intervention. As a general concept, batch scaling speeds up processing by splitting the workload between multiple workers or resources, enabling them to complete tasks in parallel rather than sequentially. Usage Engine's solution consists of three new agents - the Scalable File Collection agent and the Scalable InterWF Forwarder and Collector agents - and a new profile, the Partition Profile. The feature also uses the existing Data Aggregator and Deduplication agents, which have been updated to include a Kafka storage profile. Kafka must be configured for all storage within your batch scaling solution.
How it works
Assume that you have a batch processing setup where you collect files and need to perform duplication checks and aggregation. You want to make this solution scalable to improve the processing times of your data during periods of high usage. You will need to create two to three workflows in your batch scaling solution; in this example, we use three.
...
A partition is a subset of the data stream that can be processed independently of the other partitions. Because each partition can be consumed by a separate worker, splitting the data into partitions is what enables the workflows to process data in parallel and to scale out. For more information, see Automatic Scale Out and Rebalancing (4.3).
The file collection workflow(s) manage the Inter workflow (InterWF) partitions. They use an ID Field (e.g. customer ID) to determine which partition a UDR belongs to.
The number of partitions created is determined by the Max Scale Factor parameter. This is configured in ….
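The idea behind ID-field partitioning can be sketched as follows. The function name and hashing scheme here are illustrative assumptions for the sketch, not Usage Engine's actual implementation; the key property is that the same ID value always maps to the same partition, so all related UDRs are processed together:

```python
import zlib

def assign_partition(id_field_value: str, num_partitions: int) -> int:
    """Map a UDR's ID field (e.g. customer ID) to a partition number.

    A stable hash ensures that the same customer ID always lands in the
    same partition, so duplication checks and aggregation for that
    customer all happen within one partition. The number of partitions
    would correspond to the configured Max Scale Factor.
    """
    return zlib.crc32(id_field_value.encode("utf-8")) % num_partitions

# The same ID always resolves to the same partition:
p = assign_partition("customer-42", 8)
assert p == assign_partition("customer-42", 8)
assert 0 <= p < 8
```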
...