The following guide assists you in creating your own batch scaling solution. An important thing to remember is that you cannot mix standard agents with scaling agents in the same workflow: workflows with standard agents save their state in Usage Engine, while workflows with batch scaling agents save their state in Kafka.
Creating a scalable solution
Always start with a Batch Scaling Collection Workflow that collects from the original file source and forwards UDRs to Kafka.
The batch scaling processing can consist of a single workflow or a series of workflows. Batch Duplication Check and Aggregation can be part of the same workflow, but there can only be one Aggregation agent and one Deduplication agent per workflow.
Decide the maximum number of workflows that should execute in parallel. Think about how you can evenly distribute (shard) your data into different groups. Finally, you will need to select an identifier that the workflow will use to distribute the UDRs. Typically, this would be a field based on the record group, such as a customer ID or an account number; for example, sharding on customer ID ensures that all UDRs belonging to the same customer end up in the same group and are processed by the same workflow. You also have the option to create and populate such a field using APL.
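To make the collection-to-processing handoff concrete, here is a minimal sketch in Python using the confluent_kafka client. This is not Usage Engine configuration: the Batch Scaling agents perform this keyed hand-over internally, and the topic name, ID field, and serialization below are illustrative assumptions only. The point is that Kafka's key-based partitioning sends every UDR with the same ID-field value to the same partition, and therefore to the same processing workflow.

```python
import json
from confluent_kafka import Producer, Consumer

# Collection side: forward each UDR to Kafka, keyed on the ID field
# ("customer_id" is a hypothetical field name).
producer = Producer({"bootstrap.servers": "localhost:9092"})
for udr in [{"customer_id": "4711", "bytes": 1024},
            {"customer_id": "4712", "bytes": 2048}]:
    producer.produce(
        topic="batch-scaling-udrs",              # illustrative topic name
        key=udr["customer_id"].encode("utf-8"),  # same key -> same partition
        value=json.dumps(udr).encode("utf-8"),
    )
producer.flush()

# Processing side: each parallel workflow instance joins the same consumer
# group, so Kafka assigns each partition to exactly one instance.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "processing-workflows",  # one group shared by all instances
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["batch-scaling-udrs"])
msg = consumer.poll(5.0)
if msg is not None and msg.error() is None:
    print(msg.partition(), json.loads(msg.value()))
consumer.close()
```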
UI Parameters
Parameter | Comment |
---|---|
ID Field | Defines how to match a UDR to a partition. |
Max Scale Factor (located in the Partition profile configuration) | The number of partitions, which is also the maximum number of workflows that can execute in parallel. Fewer workflows than this can run, but not more. Note! If any of these parameters needs to be changed, it is considered a new configuration, which must start with empty topics. To reuse the existing data, you must migrate it using the standard Kafka agents. |
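The Note! above follows from how IDs map to partitions: the assignment depends on the partition count, so changing Max Scale Factor re-shuffles which partition each ID lands on. Below is a minimal sketch, assuming a stable hash-modulo mapping (the actual mapping used by Usage Engine is internal and may differ):

```python
import hashlib

def partition_for(udr_id: str, max_scale_factor: int) -> int:
    """Map an ID field value to a stable partition number.

    A stable hash (not Python's randomized hash()) guarantees that the
    same ID always lands in the same partition for a fixed partition count.
    """
    digest = hashlib.md5(udr_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % max_scale_factor

ids = ["cust-1001", "cust-1002", "cust-1003"]
before = [partition_for(i, 8) for i in ids]   # original Max Scale Factor
after = [partition_for(i, 12) for i in ids]   # after raising it to 12

# Most IDs now map to different partitions, so data already sitting in the
# old partitions no longer lines up with the new assignment - which is why
# a changed parameter counts as a new configuration with empty topics.
print(before, after)
```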
Scaling Batch Workflows
Usage Engine scales scalable batch workflows out and in and re-balances them automatically. You can also schedule when a scale-out or scale-in should start.
Deploying/grouping a scale-out configuration with ECDs:
Use the regular ECD (Execution Context Deployment) definition with Dynamic Workflows to define how to package a scale-out. You have to manually define when these ECDs will activate. For instance:

- A Collection Workflow scales with 1 extra Workflow per ECD.
- A Processing Workflow scales with 3 extra Workflows per ECD.
- Or combine the above into the same ECD.
What is an ECD?

An Execution Context Deployment (ECD) is a setup that defines the environment in which a specific part of the software runs. Think of it as a container that holds everything needed for certain tasks to execute smoothly: where the code runs, the necessary resources (such as memory and processing power), and specific settings or permissions. In simple terms, an ECD is a workspace that is fully equipped for a job, so that tasks can start and run without interruptions or missing tools.
Scheduling a scale-out configuration:
You can schedule the ECD and workflow to start or stop at specific times; alternatively, they can be started manually. This is configured in… If no schedule for scaling is created, the system scales automatically based on metrics.
Automatic Scaling | Manual Scaling |
---|---|
Used when no schedule for scaling is created. Usage Engine scales out, scales in, and re-balances the workflows automatically based on metrics. | Used when you want to control when scaling happens. The ECDs and workflows start or stop at the scheduled times, or are started manually. |
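To illustrate how the two modes relate, here is a conceptual sketch of the decision logic. The window format, the lag metric, and the thresholds are all hypothetical assumptions, not actual Usage Engine configuration; the point is that a defined schedule takes precedence, and metric-based automatic scaling is the fallback.

```python
from datetime import datetime, time

# Hypothetical scale-out windows: (start, end, number of workflows).
SCHEDULE = [
    (time(0, 0), time(6, 0), 4),  # nightly batch peak: four workflows
]

MAX_SCALE_FACTOR = 8  # upper bound: never more workflows than partitions

def target_workflows(now: datetime, consumer_lag: int) -> int:
    """A matching schedule window wins; otherwise fall back to metrics."""
    for start, end, count in SCHEDULE:
        if start <= now.time() < end:
            return count
    # No schedule for this time of day: scale on a metric such as Kafka
    # consumer lag (here: one extra workflow per 10,000 lagging records).
    return min(1 + consumer_lag // 10_000, MAX_SCALE_FACTOR)

print(target_workflows(datetime(2024, 5, 1, 2, 30), consumer_lag=0))       # 4 (scheduled)
print(target_workflows(datetime(2024, 5, 1, 12, 0), consumer_lag=25_000))  # 3 (metric-based)
```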