The GCP and OCI Kubernetes clusters mentioned below are assumed to have been created beforehand, according to the respective cloud provider's installation guide here.

This section describes how to set up a Diameter Server and a Diameter Client hosted in the same Kubernetes cluster, for example a Diameter Server and Diameter Client hosted in a GCP Kubernetes cluster.

It is assumed that the Diameter Server and Diameter Client workflows have been created with the following configurations:
| | Diameter Server | Diameter Client |
|---|---|---|
| Hostname | server-digitalroute-com | client-digitalroute-com |
| Dynamic Workflow enabled | Yes | Yes |
| Bind On All Interfaces | Yes | Yes |
| Diameter Debug Event enabled | Yes | Yes |
| Diameter Port | 3868 | 3869 |
| Network Protocol | TCP | TCP |
| Diameter Realm | digitalroute.com | digitalroute.com |
For this to work (using Diameter in an ECD in Kubernetes), the “Bind On All Interfaces” checkbox under the “Advanced” tab of the Diameter Stack agent must be checked. This applies regardless of whether the localhost IP or the IP connected with the LoadBalancer service is used.
To run the Diameter Server and Diameter Client workflows, deploy one ECD pod for each workflow. Log in to the desktop-online Web UI, go to the Manage menu, and select EC Deployment > New EC Deployment.
To deploy an ECD pod for the Diameter Server workflow on the cluster, follow these steps:
1. On the Configure EC tab, fill in the EC Deployment Name and, if needed, the rest of the fields. Click Next to continue.
2. On the Configure Auto Scale tab, do not enable Auto Scaling. Click Next to continue.
3. On the Configure Workflow tab, create a new real-time workflow from the selected workflow template, i.e. Diameter Server or Diameter Client (either one). Click Next to continue.
4. On the Configure Network tab, create a Network Service of type ClusterIP with the appropriate port. The Service Name must be set to the Diameter Server or Diameter Client hostname, as shown in the table below.
| | Diameter Server | Diameter Client |
|---|---|---|
| Service Name | server-digitalroute-com | client-digitalroute-com |
| ClusterIP Port | 3868 | 3869 |
Before finalising the EC Deployment, check and verify it by pressing “View YAML”.
For instance, the Diameter Server YAML content looks like this:
apiVersion: "mz.digitalroute.com/v1alpha1" kind: "ECDeployment" metadata: creationTimestamp: "2024-09-09T03:41:05.000000Z" name: "ecd-server-digitalroute-com" namespace: "uepe" resourceVersion: "6342188" uid: "260e7271-14b7-43e7-8fb5-4535bb630f1d" spec: jvmArgs: - "Xmx512M" - "Xms512M" manualUpgrade: false persistence: pvcName: "ec1-filestore-pvc" ports: - containerPort: 3868 protocol: "TCP" resources: limits: memory: "640Mi" requests: memory: "640Mi" services: - name: "server-digitalroute-com" spec: ports: - name: "port-1" port: 3868 protocol: "TCP" targetPort: 3868 type: "ClusterIP" workflows: - instances: - name: "Diameter_Server" parameters: "{}" useExtRef: "{}" template: "Diameter.DiameterDCCAServerSimulator"
For instance, the Diameter Client YAML content looks like this:
apiVersion: "mz.digitalroute.com/v1alpha1" kind: "ECDeployment" metadata: creationTimestamp: "2024-09-09T03:42:56.000000Z" name: "ecd-client-digitalroute-com" namespace: "uepe" resourceVersion: "6343337" uid: "b4e223b7-75ff-453f-bdea-c8f73b5e0342" spec: jvmArgs: - "Xmx512M" - "Xms512M" manualUpgrade: false persistence: pvcName: "ec1-filestore-pvc" ports: - containerPort: 3869 protocol: "TCP" resources: limits: memory: "640Mi" requests: memory: "640Mi" services: - name: "client-digitalroute-com" spec: ports: - name: "port-1" port: 3869 protocol: "TCP" targetPort: 3869 type: "ClusterIP" workflows: - instances: - name: "Diameter_Client" parameters: "{}" useExtRef: "{}" template: "Diameter.DiameterClientSimulator"
Now, finalise the EC Deployment by clicking Finish. Choose not to enable the ECD for now.
Repeat steps 1 to 4 to deploy another ECD pod for the Diameter Client workflow on the same cluster.
Following the creation of both ECDs, let's check the Kubernetes Services of type ClusterIP that have been created.
Execute this command:
```
kubectl get svc -n uepe
```
On the cluster, this is the output:
```
NAME                                                TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                                       AGE
client-digitalroute-com                             ClusterIP      10.12.6.12    <none>          3869/TCP                                      35m
desktop-online                                      NodePort       10.12.5.41    <none>          9001:31441/TCP                                5d23h
external-dns                                        ClusterIP      10.12.2.229   <none>          7979/TCP                                      6d
ingress-nginx-controller                            NodePort       10.12.1.201   <none>          80:31079/TCP,443:31170/TCP                    6d
platform                                            LoadBalancer   10.12.2.222   34.142.171.15   9000:30870/TCP,6790:31985/TCP,443:31140/TCP   5d23h
server-digitalroute-com                             ClusterIP      10.12.5.153   <none>          3868/TCP                                      36m
uepe-operator-controller-manager-metrics-service    ClusterIP      10.12.7.111   <none>          8443/TCP                                      5d23h
```
Kubernetes creates DNS records for Services and Pods, so you can contact Services with consistent DNS names instead of IP addresses. A Service name in the form `<service-name>.<namespace>` resolves to the cluster IP of the Service. `<namespace>` can be omitted if both ECDs are located in the same namespace, i.e. uepe.
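To check that these names resolve inside the cluster, you can, for example, run a lookup from one of the pods in the namespace. This is only a sketch: `<client-ecd-pod>` is a placeholder for your actual pod name, and it assumes the container image provides getent (or an equivalent such as nslookup).

```
# Resolve the server Service name from inside a pod in the uepe namespace.
# <client-ecd-pod> is a placeholder; getent availability depends on the image.
kubectl exec -n uepe <client-ecd-pod> -- getent hosts server-digitalroute-com.uepe
```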
Next, select and enable both ECDs on the cluster, one by one, via the desktop-online Web UI.
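Once both ECDs are enabled, you can, for example, confirm that the pods are running and that the ClusterIP Services have been given pod endpoints. A minimal sketch, using the namespace and Service names from the examples above:

```
# List the ECD pods, then check that each Service now has a pod endpoint.
kubectl get pods -n uepe
kubectl get endpoints -n uepe server-digitalroute-com client-digitalroute-com
```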
To view the Diameter Server workflow debug log, go back to the Manage menu. Select Execution Manager > Running Workflows.
Select the Diameter Server workflow that is currently running and click Open Monitor.
Then select Enable Debug, and select the Diameter Stack agent in the agent debug panel to view the debug log.
If the connection between the Diameter Server and the Diameter Client is established successfully, a pair of Capabilities-Exchange messages and repeated Device-Watchdog messages are shown in the agent debug panel.
```
2024-09-09 11:46:28 Diameter_Stack_1 Received Capabilities-Exchange-Request from peer /10.11.1.20:49810 {Additional_AVPs=null, Vendor_Id=9008, Auth_Application_Id=[4], Acct_Application_Id=null, EndToEndIdentifier=-113390639, Product_Name=MediationZone, className=D_Capabilities_Exchange_Request, Vendor_Specific_Application_Id=null, rawCmdflags=0x80, Origin_Host=client-digitalroute-com, Inband_Security_Id=[0], Is_Error=false, Supported_Vendor_Id=null, Firmware_Revision=-1, Origin_State_Id=1725853587, Is_Request=true, HopByHopIdentifier=2015775414, Host_IP_Address=[10.11.1.20], Origin_Realm=digitalroute.com, Is_Retransmit=false, Is_Proxiable=false}
2024-09-09 11:46:28 Diameter_Stack_1 transmitting targeted Capabilities-Exchange-Answer e2e=-113390639 hbh=2015775414 target=/10.11.1.20:49810 cmd={Additional_AVPs=null, Vendor_Id=9008, Auth_Application_Id=[16777238, 4], Acct_Application_Id=null, EndToEndIdentifier=-113390639, Product_Name=MediationZone, className=D_Capabilities_Exchange_Answer, Vendor_Specific_Application_Id=null, rawCmdflags=0x0, Origin_Host=server-digitalroute-com, Inband_Security_Id=[0], Is_Error=false, Supported_Vendor_Id=[9, 13019, 10415, 5535], Firmware_Revision=-1, Error_Message=Successful handshake, Origin_State_Id=1725853494, Is_Request=false, HopByHopIdentifier=2015775414, Host_IP_Address=[10.11.0.12], Result_Code=2001, Origin_Realm=digitalroute.com, Failed_AVP=null, Is_Retransmit=false, Is_Proxiable=false}
2024-09-09 11:46:59 Diameter_Stack_1 Received Device-Watchdog-Request from peer client-digitalroute-com {rawCmdflags=0x80, Origin_Host=client-digitalroute-com, Is_Error=false, EndToEndIdentifier=-113390638, className=D_Device_Watchdog_Request, Origin_State_Id=null, Is_Request=true, HopByHopIdentifier=2015775415, Origin_Realm=digitalroute.com, Is_Retransmit=false, Is_Proxiable=false}
2024-09-09 11:46:59 Diameter_Stack_1 transmitting targeted Device-Watchdog-Answer e2e=-113390638 hbh=2015775415 target=/10.11.1.20:49810 cmd={EndToEndIdentifier=-113390638, className=D_Device_Watchdog_Answer, rawCmdflags=0x0, Origin_Host=server-digitalroute-com, Is_Error=false, Error_Message=null, Origin_State_Id=null, Is_Request=false, HopByHopIdentifier=2015775415, Result_Code=2001, Origin_Realm=digitalroute.com, Failed_AVP=null, Is_Retransmit=false, Is_Proxiable=false}
```
(The following section is Deprecated)
The following examples show how to set up both the Diameter server and client inside the same cluster.
First, set up the Server and Client ClusterIPs in their respective yaml files as shown in the following examples. The selector part binds the IP address to the correct ECD. The ECD should only contain one pod. The port is cluster-global, which means that two ECDs cannot use the same port (as listening port). The listening ports are 3868 (server) and 3870 (backupserver) for the two server workflows, and 3869 for the client.

Example - Server ClusterIP yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: serverip
spec:
  type: ClusterIP
  ports:
    - targetPort: 3868
      port: 3868
      name: server
    - targetPort: 3870
      port: 3870
      name: backupserver
  selector:
    ecd: server
```
Example - Client ClusterIP yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: clientip
spec:
  type: ClusterIP
  ports:
    - targetPort: 3869
      port: 3869
      name: client
  selector:
    ecd: client
```
The string `ecd: client` matches the same string under `labels` in the ECD yaml shown further below; see the example.

IP addresses
For both peers inside the cluster, the IP address to use is the IP of the ClusterIP Service that is connected to the ECD in question. So first create the ClusterIP Services for both the client and the server, then get the IP addresses using the kubectl command as shown below, and put them in the hostAliases section of the ECD yaml.

For one peer inside the cluster and one outside, the IP address of the node should be used for the peer that is inside the cluster, and the IP address of the peer outside the cluster must be reachable from the cluster.
Run the following command to get the IP address for the ECD:
```
kubectl get svc
```
For example:
```
NAME                                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
clientip                                            ClusterIP   10.98.186.113    <none>        3869/TCP                        41d
uepe-operator-controller-manager-metrics-service    ClusterIP   10.102.28.47     <none>        8443/TCP                        43d
desktop-online                                      NodePort    10.106.161.255   <none>        80:32008/TCP                    43d
platform                                            NodePort    10.109.197.192   <none>        9000:30022/TCP,6790:32193/TCP   43d
serverip                                            ClusterIP   10.110.121.222   <none>        3868/TCP                        38d
```
Note!
The IP addresses retrieved from this command are then used for the respective hostAliases.
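If you prefer to script this step, the cluster IP of a single Service can be extracted directly; the following sketch uses kubectl's standard jsonpath output with the Service names from the example above:

```
# Print only the cluster IP of the serverip and clientip Services.
kubectl get svc serverip -o jsonpath='{.spec.clusterIP}'
kubectl get svc clientip -o jsonpath='{.spec.clusterIP}'
```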
Run the following command to apply each ClusterIP Service yaml:
```
kubectl apply -f path/to/file.yaml
```
Since the server needs to have information about both the client and server hostnames, both hostnames need to be configured for both ECDs.
The following examples (for the client and server) are just excerpts of the full ECD to show how to use hostnames:

Example - hostAliases for Client
- ip: "10.110.121.222" hostnames: - "server.digitalroute.com" - "server" - "backupserver.digitalroute.com" - "backupserver" - ip: "10.98.186.113" hostnames: - "client.digitalroute.com" - "client"
Example - hostAliases for Server
- ip: "10.110.121.222" hostnames: - "server.digitalroute.com" - "server" - "backupserver.digitalroute.com" - "backupserver" - ip: "10.98.186.113" hostnames: - "client.digitalroute.com" - "client"
The following is an example of a full ECD - one Diameter client and two Diameter servers (three peers). The hostnames must be included in the yaml file irrespective of whether it is for the client or the server.
Example - Full server ecd

```yaml
metadata:
  name: "diameter-server"
  annotations:
    meta.helm.sh/release-name: "mediationzone-ecd"
    meta.helm.sh/release-namespace: "davids"
  labels:
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/component: "ecd"
    ecd: "server"
apiVersion: "mz.digitalroute.com/v1alpha1"
kind: "ECDeployment"
spec:
  jvmArgs:
    - "Xms256m"
    - "Xmx512m"
  nodeHost: "dig-srv-test03.dev.drint.net"
  resources:
    requests:
      memory: "320Mi"
    limits:
      memory: "640Mi"
  hostAliases:
    - ip: "10.110.121.222"
      hostnames:
        - "server.digitalroute.com"
        - "server"
        - "backupserver.digitalroute.com"
        - "backupserver"
    - ip: "10.98.186.113"
      hostnames:
        - "client.digitalroute.com"
        - "client"
  manualUpgrade: false
```
Run the following command:
```
kubectl apply -f /path/to/file/serverecd.yaml
```
Note!
The label `ecd: server` connects the cluster IP to the pod.
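As a quick sanity check, you can, for example, confirm that the Service selector actually matches the pod; a minimal sketch using the label from these examples:

```
# List pods carrying the label that the serverip Service selects on.
kubectl get pods -l ecd=server -o wide
```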
Example - Full client ecd
metadata: name: "diameter-client" annotations: meta.helm.sh/release-name: "mediationzone-ecd" meta.helm.sh/release-namespace: "davids" labels: app.kubernetes.io/managed-by: "Helm" app.kubernetes.io/component: "ecd" ecd: "client" apiVersion: "mz.digitalroute.com/v1alpha1" kind: "ECDeployment" spec: jvmArgs: - "Xms256m" - "Xmx512m" nodeHost: "dig-srv-test02.dev.drint.net" resources: requests: memory: "320Mi" limits: memory: "640Mi" hostAliases: - ip: "10.110.121.222" hostnames: - "server.digitalroute.com" - "server" - "backupserver.digitalroute.com" - "backupserver" - ip: "10.109.59.222" hostnames: - "client.digitalroute.com" - "client" manualUpgrade: false
nodeHost

The nodeHost field is needed to bind the pod to a host. If you do not include the workflow as part of the ECD yaml, you need to configure the workflow to run on the ECD's ECGroup (after the ECD has been created).
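If you do include the workflow in the ECD yaml instead, it goes under spec.workflows, following the same schema as the ECDeployment examples earlier in this section; a minimal sketch, with instance and template names that are illustrative only:

```yaml
spec:
  workflows:
    - instances:
        - name: "Diameter_Client"   # illustrative instance name
          parameters: "{}"
          useExtRef: "{}"
      template: "Diameter.DiameterClientSimulator"   # illustrative template
```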
Note!
Since this is for internal cluster communication, the hostnames configured above together with the ports from the cluster IP can be used in the Diameter Stack agents and routing profiles. In this case, “client”, “server” and “backupserver” can be used as Diameter Host and “digitalroute.com” as Diameter Realm, and the ports are 3869 for the client and 3868 and 3870 for the servers.
Please note also that when configuring the Diameter Stack agent, the option “Bind On All Interfaces” must be used for the agent to be able to connect to its hostname, since the hostname is defined as a service.