Diameter Server and Client Inside Same Kubernetes Cluster (3.3)
The following examples show how to set up both the Diameter server and client inside the same cluster.
First, set up the server and client ClusterIPs in their respective yaml files, as shown in the following examples. The selector part binds the IP address to the correct ECD. The ECD should contain only one pod. The port is cluster-global, which means that two ECDs cannot use the same listening port.
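As a sketch, the two Service yaml files might look like the following. The service names clientip and serverip match the kubectl get svc output shown later; the ports and the ecd labels follow the text in this section, but the exact manifests in your installation may differ:

```yaml
# Client ClusterIP service. The selector binds the service
# to the pod of the ECD labeled "ecd: client".
apiVersion: v1
kind: Service
metadata:
  name: clientip
spec:
  type: ClusterIP
  selector:
    ecd: client
  ports:
    - port: 3869        # client listening port
      targetPort: 3869
      protocol: TCP
---
# Server ClusterIP service, bound to the ECD labeled "ecd: server".
apiVersion: v1
kind: Service
metadata:
  name: serverip
spec:
  type: ClusterIP
  selector:
    ecd: server
  ports:
    - port: 3868        # server listening port
      targetPort: 3868
      protocol: TCP
```

Because the ports are cluster-global, note that each Service exposes a distinct port (3869 and 3868).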
The listening ports for the two server workflows are 3868 (server) and 3870 (backup server); the client workflow listens on 3869. The string ecd: client matches the same string under labels in the ECD yaml shown further below (see the example). Run the following command to get the IP address for the ECD:
kubectl get svc
For example:
NAME                                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
clientip                                         ClusterIP   10.98.186.113    <none>        3869/TCP                        41d
mz-operator-controller-manager-metrics-service   ClusterIP   10.102.28.47     <none>        8443/TCP                        43d
mzonline                                         NodePort    10.106.161.255   <none>        80:32008/TCP                    43d
platform                                         NodePort    10.109.197.192   <none>        9000:30022/TCP,6790:32193/TCP   43d
serverip                                         ClusterIP   10.110.121.222   <none>        3868/TCP                        38d
Run the following command:
kubectl apply -f path/to/file.yaml
Since the server needs information about both the client and server hostnames, both hostnames must be configured in both ECDs.
The following examples (for the client and server) are excerpts of the full ECDs, showing how to use hostnames:
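A minimal sketch of how the hostname entries might appear in an ECD yaml. The apiVersion, the hostnames, and the use of hostAliases are assumptions for illustration; consult the full ECD examples in your installation for the exact schema. The IP addresses are the ClusterIPs from the kubectl get svc output above:

```yaml
# Hypothetical excerpt -- field names and hostnames are illustrative only.
apiVersion: mz.digitalroute.com/v1alpha1   # assumed API group
kind: ECDeployment
metadata:
  name: client
  labels:
    ecd: client              # matched by the clientip Service selector
spec:
  # Both the client and the server hostnames are listed in both ECDs,
  # so each Diameter peer can resolve the other inside the cluster.
  hostAliases:
    - ip: 10.98.186.113      # clientip ClusterIP
      hostnames:
        - "client-host"      # assumed hostname
    - ip: 10.110.121.222     # serverip ClusterIP
      hostnames:
        - "server-host"      # assumed hostname
```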
The following is an example of a full ECD - one Diameter client and two Diameter servers (three peers). The hostnames must be included in the yaml file regardless of whether it is for the client or the server. Run the following command:
kubectl apply -f path/to/file.yaml
Note! The label ecd: server connects the cluster IP to the pod.
Example - Full client ECD
nodeHost
The nodeHost field is needed to bind the pod to a host. If you do not specify the workflow as part of the ECD yaml, you need to configure the workflow to run on the ECD's ECGroup after the ECD has been created.
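For illustration, a nodeHost entry might look like the following sketch; the surrounding structure and the node name are assumptions, since the full ECD schema is not shown here:

```yaml
# Hypothetical excerpt -- binds the ECD's pod to a specific host.
spec:
  nodeHost: worker-node-1   # assumed node name in the cluster
```

You can list the available node names with kubectl get nodes.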