...

Your workflow must be designed so that it can process data in batches. This means there has to be at least one common denominator in the data that links individual records, such as an id field. This field is then used as the customer 'key' and determines which partition the data is assigned to, as illustrated in the sketch below.
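A minimal sketch (plain Python, not product configuration) of how a key field such as id can drive partition assignment. The function and field names are illustrative only; the point is that records sharing the same key always land in the same partition.

    import zlib

    def assign_partition(record: dict, num_partitions: int) -> int:
        # Use the id field as the partition key; a stable hash keeps the
        # same id on the same partition across runs.
        key = str(record["id"]).encode()
        return zlib.crc32(key) % num_partitions

    records = [{"id": "A-100", "value": 1}, {"id": "B-200", "value": 2}]
    for r in records:
        print(r["id"], "-> partition", assign_partition(r, num_partitions=4))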

Info

Example use case for batch scaling

...

Child pages (Children Display)

...

From chat with Michal (internal notes):

How does the new solution differ from what users can configure now? The information on Automatic Scale Out and Rebalancing (4.3) is not related to batch scaling. It refers to Kafka doing some partitioning work based on what is configured in the Kafka agent. DR's new Batch Scaling solution does the partitioning work within the inter-WF agents.

How does the new solution know when to scale? Is it based on the number of raw data files that get collected at any one time? Right now you have to manually configure your ECD to scale based on a known metric, e.g. if the number of data files exceeds 1000…
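As a rough illustration of the current, manually configured approach described above: the operator picks a metric and a threshold, and scale-out only triggers once that threshold is crossed. The metric name and threshold here are made up for the example, not product defaults.

    # Hypothetical sketch of metric-based scale-out as configured today.
    PENDING_FILE_THRESHOLD = 1000   # example threshold chosen by the operator

    def should_scale_out(pending_files: int) -> bool:
        # Scale out only when the chosen metric exceeds the configured limit.
        return pending_files > PENDING_FILE_THRESHOLD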

Look at the example image from the doc: 

Is it the File collection workflow that creates the partitions? Not really, but sort of - it is the scalable InterWF forwarding agent or, as Michal says, any agent using the Partition profile.

Does it create the partitions based on the Max Scale Factor parameter? True, says Michal - this also sets the maximum number of parallel workflows.

Where is the Max Scale Factor parameter located? In the Partition Profile configuration.
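A rough sketch of the relationship described above. Only "Max Scale Factor" and "Partition Profile" come from the notes; the PartitionProfile class and plan_partitions function are hypothetical, illustrating that the configured factor caps both the number of partitions and the number of parallel workflows.

    from dataclasses import dataclass

    @dataclass
    class PartitionProfile:
        max_scale_factor: int   # configured in the Partition Profile

    def plan_partitions(profile: PartitionProfile, record_keys: set) -> int:
        # Never create more partitions than the Max Scale Factor allows;
        # each partition can then be served by one parallel workflow.
        return min(profile.max_scale_factor, len(record_keys)) or 1

    profile = PartitionProfile(max_scale_factor=3)
    print(plan_partitions(profile, {"A-100", "B-200", "C-300", "D-400"}))  # -> 3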

Our example shows 3 workflows - does there have to be exactly 3 workflows in a solution? Is there a minimum or maximum number of workflows needed to create a working solution? There is no minimum or maximum number of workflows required.

...