The following examples show how to set up both a Diameter server and a Diameter client inside the same cluster.
First, set up the Server and Client ClusterIPs in their respective yaml files, as shown in the following examples. The selector part binds the IP address to the correct ECD. The ECD should only contain one pod. The port is cluster-global, which means that two ECDs cannot use the same listening port.
The listening ports are 3868 (server) and 3870 (backupserver) for the two server workflows, and 3869 for the client workflow.
Example - Server ClusterIP yaml
apiVersion: v1
kind: Service
metadata:
  name: serverip
spec:
  type: ClusterIP
  ports:
  - targetPort: 3868
    port: 3868
    name: server
  - targetPort: 3870
    port: 3870
    name: backupserver
  selector:
    ecd: server
Example - Client ClusterIP yaml
apiVersion: v1
kind: Service
metadata:
  name: clientip
spec:
  type: ClusterIP
  ports:
  - targetPort: 3869
    port: 3869
    name: client
  selector:
    ecd: client
The string ecd: client matches the same string under labels in the ECD yaml shown further below. See example.
IP addresses
For two peers inside the cluster, use the IP of the ClusterIP service that is connected to the ECD in question. So first create the ClusterIP for both client and server, then get the IP addresses using the kubectl command shown below, and put them in the hostAliases section of the ECD yaml.
For one peer inside the cluster and one outside, the IP address of the node should be used for the peer that is inside the cluster, and the IP address of the peer outside the cluster must be reachable from the cluster.
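If one of the peers is outside the cluster and needs to reach the peer inside via the node IP, the node addresses can be listed with kubectl as well (a minimal sketch using standard kubectl options):
kubectl get nodes -o wide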
Run the following command to get the IP address for the ECD:
kubectl get svc
For example:
NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
clientip                                          ClusterIP   10.98.186.113    <none>        3869/TCP                        41d
mz-operator-controller-manager-metrics-service    ClusterIP   10.102.28.47     <none>        8443/TCP                        43d
mzonline                                          NodePort    10.106.161.255   <none>        80:32008/TCP                    43d
platform                                          NodePort    10.109.197.192   <none>        9000:30022/TCP,6790:32193/TCP   43d
serverip                                          ClusterIP   10.110.121.222   <none>        3868/TCP                        38d
wd                                                NodePort    10.99.248.181    <none>        9999:31687/TCP                  43d
Note!
The IP addresses retrieved from this command are then used for the respective hostAliases.
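If you only need the cluster IP of a single service, a jsonpath query can be used instead of reading the table output (a minimal sketch; the service names match the ClusterIP examples above):
kubectl get svc serverip -o jsonpath='{.spec.clusterIP}'
kubectl get svc clientip -o jsonpath='{.spec.clusterIP}'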
Run the following command to apply each ClusterIP yaml file:
kubectl apply -f path/to/file.yaml
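For example, assuming the Server and Client ClusterIP definitions above are saved as serverip.yaml and clientip.yaml (hypothetical file names):
kubectl apply -f serverip.yaml
kubectl apply -f clientip.yaml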
Since the server needs to have information about both the client and server hostnames, both hostnames need to be configured for both ECDs.
The following examples (for the client and server) are just excerpts of the full ECDs, showing how to use the hostnames:
Example - hostAliases for Client
- ip: "10.110.121.222" hostnames: - "server.digitalroute.com" - "server" - "backupserver.digitalroute.com" - "backupserver" - ip: "10.98.186.113" hostnames: - "client.digitalroute.com" - "client"
Example - hostAliases for Server
- ip: "10.110.121.222" hostnames: - "server.digitalroute.com" - "server" - "backupserver.digitalroute.com" - "backupserver" - ip: "10.98.186.113" hostnames: - "client.digitalroute.com" - "client"
The following is an example of a full ECD with one Diameter client and two Diameter servers (three peers). The hostnames must be included in the yaml file regardless of whether it is for the client or the server.
Example - Full server-ecd
metadata: name: "diameter-server" annotations: meta.helm.sh/release-name: "mediationzone-ecd" meta.helm.sh/release-namespace: "davids" labels: app.kubernetes.io/managed-by: "Helm" app.kubernetes.io/component: "ecd" ecd: "server" apiVersion: "mz.digitalroute.com/v1alpha1" kind: "ECDeployment" spec: jvmArgs: - "Xms256m" - "Xmx512m" nodeHost: "dig-srv-test03.dev.drint.net" resources: requests: memory: "320Mi" limits: memory: "640Mi" hostAliases: - ip: "10.110.121.222" hostnames: - "server.digitalroute.com" - "server" - "backupserver.digitalroute.com" - "backupserver" - ip: "10.98.186.113" hostnames: - "client.digitalroute.com" - "client" manualUpgrade: false
Run the following command to apply the server ECD:
kubectl apply -f /path/to/file/serverecd.yaml
Note!
The label ecd: server connects the cluster IP to the pod.
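To verify that the selector picks up the pod, you can list the pods carrying the label and check the endpoints of the service (a minimal sketch, assuming the davids namespace used in the ECD examples):
kubectl get pods -l ecd=server -n davids
kubectl get endpoints serverip -n davids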
Example - Full client ecd
metadata: name: "diameter-client" annotations: meta.helm.sh/release-name: "mediationzone-ecd" meta.helm.sh/release-namespace: "davids" labels: app.kubernetes.io/managed-by: "Helm" app.kubernetes.io/component: "ecd" ecd: "client" apiVersion: "mz.digitalroute.com/v1alpha1" kind: "ECDeployment" spec: jvmArgs: - "Xms256m" - "Xmx512m" nodeHost: "dig-srv-test02.dev.drint.net" resources: requests: memory: "320Mi" limits: memory: "640Mi" hostAliases: - ip: "10.110.121.222" hostnames: - "server.digitalroute.com" - "server" - "backupserver.digitalroute.com" - "backupserver" - ip: "10.109.59.222" hostnames: - "client.digitalroute.com" - "client" manualUpgrade: false
nodeHost
The nodeHost field is needed to bind the pod to a host. If you do not include the workflow as part of the ECD yaml, you need to configure the workflow to run on the ECD's ECGroup after the ECD has been created.
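Assuming nodeHost refers to a Kubernetes node name, as in the examples above, the available node names can be listed with kubectl (a minimal sketch):
kubectl get nodes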
Note!
Since this is for internal cluster communication, the hostnames configured above together with the ports from the cluster IP can be used in the Diameter Stack agents and routing profiles. In this case, “client”, “server” and “backupserver” can be used as Diameter Host and “digitalroute.com” as Diameter Realm, and the ports are 3869 for the client and 3868 and 3870 for the servers.