...
Overview
Usage Engine Private Edition now supports batch scaling, making it possible to increase or decrease processing capacity as needed without manual intervention. As a general concept, batch scaling speeds up processing by splitting the workload across multiple “workers” or resources, so that tasks complete in parallel rather than sequentially. Usage Engine’s solution consists of two new agents, the Scalable InterWF Forwarder and the Scalable InterWF Collector, and a new profile, the Partition Profile. It also uses the existing Data Aggregator and Deduplication agents, which have been updated to include a Kafka storage profile.
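The general idea of splitting a batch across parallel workers can be sketched in a few lines of Python. This is purely illustrative, not Usage Engine code: the file names and the `process_file` function are invented placeholders, and Usage Engine performs this scaling with its agents and Kafka rather than with a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def process_file(name):
    # Placeholder for the real per-file work (parsing, enrichment, etc.).
    return sum(ord(c) for c in name)

files = [f"usage_{i}.csv" for i in range(8)]

# Sequential baseline: a single worker handles every file in turn.
sequential = [process_file(f) for f in files]

# Batch scaling: the same workload split across several workers that
# run in parallel; the results are identical, only the wall-clock
# time changes.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(process_file, files))
```

The key property is that the parallel run produces exactly the same results as the sequential one; scaling only changes how the work is distributed.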
How it works
Assume that you have a batch use case in which you collect files and must perform deduplication checks and aggregation. You want to make your solution scalable to improve processing times during periods of high usage. A batch scaling solution like this requires two or three workflows; in this example, we use three.
...
The Aggregation workflow(s) will collect data from an inter-workflow topic and use a separate aggregation session storage topic.
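To make the partitioned-topic idea concrete, here is a minimal, self-contained Python sketch. It is not Usage Engine code and every name in it (`partition_of`, `run_aggregation_worker`, the record fields) is hypothetical; the real solution uses Kafka topics, the Partition Profile, and the agents' Kafka storage profiles. The sketch shows why routing all records with the same key to the same partition lets each aggregation worker keep its deduplication and aggregation state local:

```python
NUM_PARTITIONS = 2

def partition_of(key):
    # Stand-in for key-based partitioning: records with the same key
    # always land in the same partition, and thus the same worker.
    return sum(ord(c) for c in key) % NUM_PARTITIONS

def run_aggregation_worker(batch):
    seen = set()    # stands in for the deduplication storage
    totals = {}     # stands in for the aggregation session storage
    for rec in batch:
        if rec["id"] in seen:
            continue  # duplicate record: skip it
        seen.add(rec["id"])
        totals[rec["user"]] = totals.get(rec["user"], 0) + rec["bytes"]
    return totals

# Records as they might arrive on the inter-workflow topic.
records = [
    {"id": "r1", "user": "alice", "bytes": 100},
    {"id": "r2", "user": "bob",   "bytes": 50},
    {"id": "r1", "user": "alice", "bytes": 100},  # duplicate of r1
    {"id": "r3", "user": "alice", "bytes": 25},
]

# Route each record to its partition.
partitions = {p: [] for p in range(NUM_PARTITIONS)}
for rec in records:
    partitions[partition_of(rec["user"])].append(rec)

# Each aggregation worker consumes only its own partition, so per-key
# results never have to be merged across workers.
results = {}
for batch in partitions.values():
    results.update(run_aggregation_worker(batch))
```

Because partitioning is key-based, no two workers ever hold state for the same user, which is what allows the aggregation workflows to scale out independently.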
Subsections
This section contains the following subsections:
...