Diameter Server Inside Kubernetes Cluster and Diameter Client Outside Kubernetes Cluster (4.2)

There are two options (at least described here) for when one Diameter peer is inside the cluster and one is outside. In this example we assume that the server is inside the cluster and the client is outside, but the suggested solution works for a client inside too. One option is for a cluster hosted in a public cloud, and one is for a privately hosted cluster. The public cloud solution could work for a private cloud too, depending on the CNI and other networking infrastructure used. The public cloud solution assumes that a LoadBalancer service is assigned an external IP, which is handled automatically by most public cloud providers, whereas that is not necessarily the case in a private cluster, depending on how it was set up.

If the cluster is a private cluster where the user is in control, the pod/ECD can be bound to a specific node using a so called "nodeSelector", so that the pod running the Diameter Workflow always ends up on the same node. This is described after the instructions for public cloud.

Public Cloud

If the installation is running in a public cloud environment such as GCP or AWS, first set up a Kubernetes Service of type LoadBalancer in the same namespace as the pod will run in.

This is done using a yaml-file like this:

apiVersion: v1
kind: Service
metadata:
  name: diameter-hosts
  namespace: uepe
spec:
  selector:
    ecd: server
  ports:
    - port: 3868
      targetPort: 3868
      name: server
    - port: 3869
      targetPort: 3869
      name: backupserver
  type: LoadBalancer

Apply it using:

kubectl apply -f diameter_loadbalancer.yaml

This opens up the selected ports on an IP address that is assigned once the service is up and running, and maps that traffic to the targetPort inside the cluster (in this case assumed to be the Diameter Stack Agent's port). To see the IP address after the service has started (the diameter-hosts service on line 8 of the output below):

kubectl get svc -n uepe
NAME                                                TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                       AGE
desktop-online                                      LoadBalancer   10.12.7.158   34.88.174.174    9001:30416/TCP                                29d
external-dns                                        ClusterIP      10.12.3.173   <none>           7979/TCP                                      30d
ingress-nginx-controller                            NodePort       10.12.7.7     <none>           80:31666/TCP,443:30005/TCP                    30d
platform                                            LoadBalancer   10.12.1.178   35.228.105.131   9000:30593/TCP,6790:32665/TCP,443:31575/TCP   29d
prometheus-adapter                                  ClusterIP      10.12.4.140   <none>           443/TCP                                       23d
diameter-hosts                                      LoadBalancer   10.12.0.176   34.88.188.186    3868:31391/TCP,3869:30910/TCP                 26d
proxy2-ip                                           LoadBalancer   10.12.3.29    35.228.109.202   3869:32761/TCP,3871:30548/TCP                 26d
uepe-operator-controller-manager-metrics-service    ClusterIP      10.12.3.150   <none>           8443/TCP                                      29d

Next, create the ECD and set up hostAliases for it. The ECD is connected to the service through the field under "selector" above, i.e. "ecd: server", which must be present in the ECD yaml for this to work. Here is an example of an ECD set up like that, with the selector entered under "labels" to connect it to the service:
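A minimal sketch of the relevant parts of such an ECD is shown below. The exact structure of the ECD resource may differ between installations; the important parts are the "ecd: server" label, which matches the selector of the diameter-hosts service, and the hostAliases entries:

metadata:
  name: diameter-server
  namespace: uepe
  labels:
    ecd: server               # matches the "selector" of the diameter-hosts service
spec:
  # ... other ECD configuration omitted ...
  hostAliases:
    - ip: "34.88.188.186"     # EXTERNAL-IP of the diameter-hosts service (127.0.0.1 also works, see below)
      hostnames:
        - "server"
    - ip: "192.0.2.10"        # placeholder - the public IP of the external Diameter client
      hostnames:
        - "client"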

As can be seen here, there are two IP addresses mapped to two host names, "client" and "server", where the server is mapped to the IP of the service/load balancer, and the client host name should match the host name and IP address of the external Diameter peer. Whereas the client IP always has to match that of the client (or rather, the peer outside the cluster must be mapped to its public IP), the server can also be mapped to localhost (127.0.0.1), since the IP mapping defined here is only used by the server workflow itself.

For this to work (using Diameter in an ECD in Kubernetes), the checkbox "Bind On All Interfaces" under the "Advanced" tab of the Diameter Stack agent needs to be checked. This applies regardless of whether the localhost IP or the IP of the LoadBalancer service is used.

Private Cluster

The above solution can be used for a private cluster as well, and may be preferable, but there is also another option better suited for a privately hosted cluster: a so called node selector, which binds a pod to a specific node in the cluster, so that the IP of the node can be used by the client to connect to the workflow.

This is done by pointing out a label (tag) that must be present on the node on which the Diameter workflow should execute. Start by adding the label to the node itself:
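The general form is (placeholders in angle brackets):

kubectl label nodes <node-name> <label-key>=<label-value>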

For example:
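(The node name and the label key/value below are only illustrative; any key/value pair can be used as long as the nodeSelector in the ECD matches it.)

kubectl label nodes worker-node-1 diameter=server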

The ECD definition will then look like this (nodeSelector):
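A sketch of the relevant part of the ECD definition is shown below; the surrounding structure is an assumption and may differ between installations. The key parts are the nodeSelector, which must match the node label added above, and the hostAliases:

spec:
  # ... other ECD configuration omitted ...
  nodeSelector:
    diameter: server          # must match the label added to the node above
  hostAliases:
    - ip: "127.0.0.1"         # "server" is mapped to localhost, see the explanation below
      hostnames:
        - "server"
    - ip: "192.0.2.10"        # placeholder - the public IP of the external Diameter client
      hostnames:
        - "client"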

The yaml-file is then applied:
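For example (the file name is a placeholder):

kubectl apply -f diameter_ecd.yaml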

In the above case the IP used is localhost. The idea is that the client connects to the IP address of the node, using the same host name, in this case "server", as the Diameter Server Stack. Localhost is used in the hostAliases mapping since the Diameter Stack Agent does not regard the node's IP address as its own; this, together with checking the checkbox "Bind On All Interfaces" under the "Advanced" tab of the Diameter Stack Agent, makes the stack accept the traffic arriving on the node's IP.

The pod should then always start on that node. This does of course have the downside that the pod will not be able to switch to a different node if the assigned node becomes unavailable for some reason. To reduce the risk of overloading that node with workflows/pods that do not need to run there, so called "taints" can be used. Note, however, that using taints limits all workloads (including DaemonSets, k8s control plane components etc.) that can run on that particular node to the ones with a matching Toleration configured.

To do so, add a taint to the node like this:
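The general form is (placeholders in angle brackets):

kubectl taint nodes <node-name> <key>=<value>:NoSchedule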

For example:
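(The node name and the key/value pair are illustrative and must match the Toleration configured in the ECD below.)

kubectl taint nodes worker-node-1 diameter=server:NoSchedule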

Then add the following section to the ECD definition of any ECD that should be allowed to run on that node:
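A sketch of such a tolerations section, matching the example taint above (the exact placement within the ECD definition may differ between installations):

spec:
  tolerations:
    - key: "diameter"
      operator: "Equal"
      value: "server"
      effect: "NoSchedule"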

Now you should be ready to set up your workflow, in this case using "server" as the host for the Diameter stack in the workflow inside the cluster.
