Networking

Network Plugin 

A Kubernetes cluster requires a networking plugin to function. Usage Engine works well with several different plugins, including those provided by the hyperscaler platforms. For on-premises deployments, Calico is a powerful, robust, fast, and heavily tested network plugin.

Kubernetes Networking 

To operate, Usage Engine needs to expose its services on IP ports reachable from outside the Kubernetes cluster where it is hosted. Kubernetes offers multiple ways to achieve this, and Usage Engine offers flexibility in how to utilize Kubernetes networking features.

The following Kubernetes networking resources are available for an application to use.

Service 

A Service bridges a group of running Pods with external or internal IP ports. A Service consists of three pieces of information: a name, a way to identify the Pods in its group (typically a label selector), and a way to access those Pods (port and protocol).
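
As a sketch, a minimal Service manifest capturing these three pieces could look as follows (the name `my-service`, the label `app: my-workflow`, and the port numbers are illustrative, not taken from a real Usage Engine deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service            # the name
spec:
  selector:
    app: my-workflow          # identifies the Pods in the group (label selector)
  ports:
    - protocol: TCP           # how to access the Pods: protocol...
      port: 80                # ...port exposed by the Service...
      targetPort: 8080        # ...and port the Pods listen on
```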

Services come in a few distinct types:  

  • ClusterIP – Accessible only from within the cluster. 
  • NodePort – Accessible on an externally exposed IP port on each node in the cluster. NodePorts are easy to use but expose only one Service per port, are limited to the 30000-32767 port range, and lack security policies.
  • LoadBalancer – Connects a port in the cluster with an external load balancer provided by the cloud environment. LoadBalancers support multiple protocols and multiple ports per Service, but each Service consumes one IP address, which can add cost and overhead.

Usage Engine supports dynamic creation of all these Services. 

The task of translating Service definitions into routing rules, effectively mapping incoming traffic to the right Pod, is handled by native Kubernetes components: kube-proxy and the endpoints controller. These work silently in the background; usually all you need to know is that they exist and reliably ensure that the Service objects you define result in traffic being routed correctly, regardless of how Pods move between nodes or scale.

Ingress 

An Ingress is a routing rule applied to incoming traffic on a single externally exposed port. With Ingresses, payload data can be load balanced inside the Kubernetes cluster without the overhead of assigning an IP address or port per endpoint. Ingress controllers can work on the transport layer (OSI layer 4, TCP/UDP) or the application layer (OSI layer 7, HTTP/HTTPS), though Usage Engine currently limits its support to the application layer.
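
As a sketch, a minimal Ingress that routes HTTP traffic for one hostname to a backing Service could look as follows (the hostname and Service name are placeholders, not actual Usage Engine endpoints):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: api.example.com         # route traffic arriving for this hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service    # ...to this backing Service inside the cluster
                port:
                  number: 80
```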

Ingress Controller 

Ingresses require an Ingress Controller to work. The Ingress Controller applies the routing rules defined in the Ingress resources and routes the incoming traffic to the correct backing Service. The backing Service does not have to be exposed outside the cluster, since the Ingress Controller exposes its own port, usually through a NodePort.

Usage Engine and Network Resources 

An overview of how networking resources can be used by Usage Engine is illustrated below. Note that this is only an example, and that the details can vary depending on the implemented solution and on cloud environment specifics.

[Figure: Usage Engine and Network Resources]

We distinguish between the networking of the Usage Engine platform itself and that of the solutions implemented on it using ECDs. Note that there is no built-in network-level separation between these two kinds of traffic, although it is possible to achieve this through configuration in the environment.

Platform Networking 

The Usage Engine platform and web backend processes need to expose ports to enable access to their REST APIs. To connect an external Desktop client, TCP-level access must also be exposed.

The ports are:

| Pod Port | Cluster Port | Used for | Protocol |
|----------|--------------|----------|----------|
| 9000 | platform:80 or platform:443 | Operational APIs, Desktop client authentication | HTTP(S) |
| 6790 | platform:6790 | Desktop control interface | TCP |
| 9999 | wd:9999 | Web Desktop - Web UI | HTTP(S) |
| 8080 | mzonline:80 | DROnline - Web UI | HTTP(S) |

The ports in the table are exposed as NodePorts by default, but this can be changed during installation.
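
How the exposure type is changed depends on the installation tooling. As a purely hypothetical sketch, a Helm values override switching the platform Service from NodePort to LoadBalancer could look like this (the key names are illustrative and must be checked against the actual chart):

```yaml
# Hypothetical values.yaml override - key names are illustrative,
# not taken from the actual Usage Engine Helm chart.
platform:
  service:
    type: LoadBalancer   # instead of the default NodePort
```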

Solution Networking 

Solutions deployed in Usage Engine are defined using ECD descriptor files. As part of an ECD descriptor, any type of Service object can be created to expose IP ports bound by a solution to the outside of the cluster.

Be aware that creating certain types of Services can incur cloud infrastructure costs. When configuring Services as part of ECDs, consider the implications of your choice carefully.

ClusterIP 

ClusterIP is the natural choice if the port should be exposed for intra-cluster communication only, for example to connect workflows running in two different Pods with each other.
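
A sketch of such an intra-cluster Service, assuming a hypothetical workflow listening on port 9090 behind the label `app: collector`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: collector            # hypothetical name; reachable as "collector" from inside the cluster
spec:
  type: ClusterIP            # the default type; not reachable from outside the cluster
  selector:
    app: collector           # hypothetical label on the Pod hosting the server workflow
  ports:
    - protocol: TCP
      port: 9090
      targetPort: 9090
```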

NodePort 

NodePort is the natural choice for exposing ports for TCP- or UDP-based peer-to-peer communication. For some protocols that require source IP validation (for instance, RADIUS), it is also necessary to bind the Pod hosting the server workflow to a specific physical node and to configure the NodePort with "External Traffic Policy" set to Local, to avoid translation of source IP addresses.
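
For example, a NodePort Service for a hypothetical RADIUS server workflow, preserving client source IPs, could be sketched as follows (the name, label, and node port number are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: radius                   # hypothetical name
spec:
  type: NodePort
  externalTrafficPolicy: Local   # preserve client source IPs; traffic is only accepted on nodes running the Pod
  selector:
    app: radius-server           # hypothetical label on the Pod hosting the server workflow
  ports:
    - protocol: UDP
      port: 1812                 # standard RADIUS authentication port
      targetPort: 1812
      nodePort: 31812            # must be within the 30000-32767 range
```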

LoadBalancer 

LoadBalancer can be a viable choice if you want to leverage load balancers from the cloud environment and are not concerned about the cost or overhead that comes from each configured load balancer allocating an IP address and a load balancer resource in the cloud environment.
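
A minimal sketch of such a Service (the name, label, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: http-endpoint        # hypothetical name
spec:
  type: LoadBalancer         # the cloud environment provisions an external load balancer and IP
  selector:
    app: http-server         # hypothetical label
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8443
```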

Ingress 

Ingress is the most cost-effective and powerful choice for enabling advanced routing and load balancing of incoming HTTP(S)-based traffic.
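
For example, a single Ingress can fan incoming HTTP(S) traffic out to several solution Services based on path, without allocating an external IP per Service (the hostname, paths, and Service names below are placeholders, not actual Usage Engine endpoints):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: solutions                  # hypothetical name
spec:
  rules:
    - host: solutions.example.com
      http:
        paths:
          - path: /billing         # route /billing traffic...
            pathType: Prefix
            backend:
              service:
                name: billing-api  # ...to one solution Service
                port:
                  number: 8080
          - path: /usage           # ...and /usage traffic...
            pathType: Prefix
            backend:
              service:
                name: usage-api    # ...to another, through the same external port
                port:
                  number: 8080
```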