...
Click on an example to know more:
...
Single connection TCP based collection workflows
First, let's consider the case where a realtime collection workflow exposes a raw TCP or UDP port without fronting it with a load balancer. In this scenario, there is a one-to-one mapping between the client connection and the workflow. Scaling the ECD is not applicable here. Instead, you scale to multiple connections by creating additional ECDs, each configured with a different IP port.
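One way to realize this, sketched here with assumed names and port numbers (none are taken from the example workflow), is to give each ECD its own Kubernetes Service of type NodePort, so that each ECD is reachable on a distinct external port:

```yaml
# Hypothetical sketch: one Service per ECD, each on its own port.
apiVersion: v1
kind: Service
metadata:
  name: tcp-collect-1
spec:
  type: NodePort
  selector:
    app: tcp-collect-1        # assumed label on the first ECD's pod
  ports:
    - name: raw-tcp
      protocol: TCP
      port: 4711
      targetPort: 4711
      nodePort: 30711         # external port for the first ECD
---
apiVersion: v1
kind: Service
metadata:
  name: tcp-collect-2
spec:
  type: NodePort
  selector:
    app: tcp-collect-2        # assumed label on the second ECD's pod
  ports:
    - name: raw-tcp
      protocol: TCP
      port: 4712
      targetPort: 4712
      nodePort: 30712         # a different external port for the second ECD
```

Each client connects to the node port that belongs to its dedicated ECD, preserving the one-to-one mapping.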
...
Example workflow export: https://github.com/digitalroute/mz-example-workflows/tree/master/tcpcollect/export
...
Scalable TCP based collection workflows
Next, let's consider a case where a TCP load balancer is used to distribute load across a number of backend workflows. An external load balancer together with a Kubernetes Service resource distributes the traffic across the workflows. The workflows can all expose the same IP port, since it is a cluster-internal port. Kubernetes networking takes care of routing the traffic to the correct EC. TCP based load balancing is classified as OSI Layer 4 (transport layer) load balancing.
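A minimal sketch of the Service involved, assuming the workflow pods carry an `app: tcp-collect` label and listen on port 4711 (both values are assumptions, not taken from the example workflow), is a single Service of type LoadBalancer:

```yaml
# Hypothetical sketch: one Service load-balancing across all workflow replicas.
apiVersion: v1
kind: Service
metadata:
  name: tcp-collect
spec:
  type: LoadBalancer          # provisions an external L4 load balancer
  selector:
    app: tcp-collect          # assumed label shared by all workflow pods
  ports:
    - name: raw-tcp
      protocol: TCP
      port: 4711              # all workflows expose the same cluster-internal port
      targetPort: 4711
```

New TCP connections are spread across the matching pods, so adding replicas behind the same Service scales the collection capacity.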
...
Example workflow export: https://github.com/digitalroute/mz-example-workflows/tree/master/tcpcollect/export
...
Single connection peer-to-peer UDP based workflows
Certain low level protocols, especially in the telecom domain, require the IP address of the sender to be known to the receiver. In such scenarios, it can be necessary to map the client connection directly to a physical node, to avoid the traffic being proxied between cluster nodes. The ECD needs to be tied to the physical node using a node selector. This is not a scalable setup, but it does solve the problem of peer-to-peer protocols that require source address verification.
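A rough sketch of the pod-level settings involved, using assumed names (the actual ECD configuration generates the pod spec for you, so treat this only as an illustration of the mechanism):

```yaml
# Hypothetical pod template fragment pinning the workflow to one node.
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-node-1   # assumed node name; ties the pod to one physical node
  hostNetwork: true                         # pod uses the node's network, so no cross-node proxying
  containers:
    - name: radius-collect                  # assumed container name
      ports:
        - containerPort: 1812               # assumed RADIUS port
          protocol: UDP
```

With the pod pinned to a known node and using that node's network directly, the peer sees the expected source IP address on traffic in both directions.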
...
Example workflow export: https://github.com/digitalroute/mz-example-workflows/tree/master/radiuscollect/export
...
...
ECD for Scalable Realtime Processing
For a processing workflow, scaling can be very useful to dynamically distribute the load between worker nodes and adapt to traffic changes. A processing workflow must be connected to the collection node to receive payload data. There are different tools to achieve this: either standard protocols such as HTTP, or the proprietary Workflow Bridge or Inter Workflow features. If HTTP is used, the setup is very similar to the HTTP collection example, with the difference that the ports do not have to be published externally to the cluster. HTTP has the advantage that standard Kubernetes tools for traffic management and similar can be used. For instance, Istio can provide very powerful traffic shaping capabilities to be used together with processing workflows.
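As an illustration of the HTTP alternative, a cluster-internal Service (ClusterIP, the Kubernetes default) is enough to make the processing workflows reachable from the collection workflow, and an Istio VirtualService can then shape the traffic, for example splitting it between two workflow versions. All names, ports, and weights below are assumptions for the sketch:

```yaml
# Hypothetical cluster-internal Service; no external exposure needed.
apiVersion: v1
kind: Service
metadata:
  name: processing-wf
spec:
  selector:
    app: processing-wf        # assumed label on the processing workflow pods
  ports:
    - protocol: TCP
      port: 8080              # assumed internal HTTP port
      targetPort: 8080
---
# Hypothetical Istio traffic-shaping example: 90/10 split between versions.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: processing-wf
spec:
  hosts:
    - processing-wf
  http:
    - route:
        - destination:
            host: processing-wf-v1   # assumed Service for version 1
          weight: 90
        - destination:
            host: processing-wf-v2   # assumed Service for version 2
          weight: 10
```

This kind of weighted routing makes it possible to, for instance, canary-test a new processing workflow version on a fraction of the traffic.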
...
Example workflow export: https://github.com/digitalroute/mz-example-workflows/tree/master/wfbstream/export
...
ECD for external HTTP interface
You can also configure an ECD to expose an HTTP interface externally using a DNS name and a path. This requires no changes to the HTTP server workflow and only minor changes to the ECD. You need to change the networking configuration to use an Ingress resource. A DNS resolvable 'host' must also be assigned (in a public cloud setup, this is typically set up during the installation), as well as a path. The DNS name and the path together form the URL on which the interface is exposed. Finally, to expose an interface publicly in a secure manner, you should use encryption, which requires a certificate. The certificate is stored in a Kubernetes secret. In the example below, the system certificate 'mz-cert' is used.
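A sketch of such an Ingress, assuming the hostname mz.example.com, the path /api, and a backend Service named http-server on port 8080 (only the 'mz-cert' secret name comes from the text above; everything else is an assumption):

```yaml
# Hypothetical Ingress exposing the HTTP workflow on https://mz.example.com/api
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-server
spec:
  tls:
    - hosts:
        - mz.example.com      # assumed DNS-resolvable host
      secretName: mz-cert     # Kubernetes secret holding the system certificate
  rules:
    - host: mz.example.com
      http:
        paths:
          - path: /api        # assumed path; host + path form the exposed URL
            pathType: Prefix
            backend:
              service:
                name: http-server   # assumed Service in front of the workflow
                port:
                  number: 8080
```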
...
Click on the following example to know more:
...
Disk Collection workflows added in a Workflow Group
This example shows how to add a Workflow Group in an ECD YAML file. It adds two members: one dynamic workflow inside a Workflow Package, and one static workflow. The dynamic workflow is also created by the ECD. The workflow group has a simple schedule that runs every minute, every day.
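The actual schema should be taken from the linked example; the fragment below is only an illustrative sketch with invented field names, showing the idea of two members and a cron schedule that fires every minute:

```yaml
# Illustrative sketch only -- these field names are NOT the real ECD schema.
workflowGroup:
  name: disk-collect-group
  members:
    - diskPackage.dynamicCollect   # hypothetical dynamic workflow in a Workflow Package
    - staticCollect                # hypothetical static workflow
  schedule:
    cron: "* * * * *"              # standard cron: every minute, every day
```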
...