An ECD Patch is meant to provide a flexible option for you to define and provision Kubernetes objects that suit your system architecture, or to tailor the ECD to your preferred flavor.

Note!

An ECD Patch is NOT the same as using the kubectl patch command. Although they are conceptually similar, they do not necessarily behave in the same way.

How it Works

Introduction

The ECD patch functionality enables you to add, change, and remove certain fields and functionality, which might not be supported directly in the ECD specification, from the different Kubernetes objects created by the Operator through the ECD specification. The patch and patchType fields are part of the ECD CRD structure, and they are also available on the child objects of the ECD. While the typical use case is to have the ECD (and patch) created by MZ Online, an ECD can technically also be created through the Kubernetes CLI. Since the Operator reconciles and monitors the cluster through the Kubernetes API Server, there is no dependency on who or what creates the ECD. Once an ECD is created in the cluster, the Operator detects the change in desired state and acts accordingly to match the actual state - the reconciliation process.


The operator expects the ECD patch to be in YAML format with respective parameters according to the patching strategy. The operator will attempt to patch the user-defined YAML with the original YAML, resulting in one YAML before applying it to the Kubernetes cluster.

The ECD patch functionality can be used either from Desktop Online or directly in the ECD specification YAML.

Note that parameters defined by Usage Engine in the ECD specification (Workflows, Workflow Groups) cannot be patched with the ECD Patch functionality. You can, however, edit these parameters directly in the ECD specification and apply the changes to the cluster.

Patch Format

The Patch format consists of two fields, patch and patchType, embedded under different Kubernetes objects. The patch field is the payload itself, which will be used to patch the ECD Kubernetes objects. The patchType field is where you define the patching strategy used to apply the payload.

Currently, the following objects can be patched through the ECD:

  1. ECD (Deployments and Pods)

  2. Services

  3. HPA/autoscaling

  4. Ingress

Below is an example of the structure under ECD (spec.patch and spec.patchType):

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: anyECDeployment
  namespace: anyNamespace
spec:
  ...
  ...
  patchType: "application/merge-patch+json"
  patch: |
        ...
        ...

Below is an example of the structure under HPA (spec.autoscale.patch and spec.autoscale.patchType):
Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  ...
spec:
  autoscale:
    ...
    ...
    patchType: "application/merge-patch+json"
    patch: |
      spec:
        ...

Note!

There is a pipe "|" right after patch, to indicate that the lines below are multi-line YAML.
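A patch can similarly be embedded under an entry in the services list (spec.services[].patch and spec.services[].patchType). Below is a minimal sketch of that structure; the label payload and the elided fields are illustrative only:

```yaml
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: anyECDeployment
spec:
  services:
    - spec:
        type: ClusterIP
      patchType: "application/merge-patch+json"
      patch: |
        metadata:
          labels:
            example-label: example   # illustrative payload, not a required field
```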

In Desktop Online you can find the corresponding patch for ECD (deployment and pods), Services, HPA/autoscaling, and Ingress (Ingress also being under networking) under their respective ECD sections:

...

Patching Strategies

There are 3 types of strategies supported by the ECD Patch feature:

  1. JSON Patch (RFC6902)

  2. Merge Patch (RFC7386)

  3. Strategic Merge Patch (Kubernetes custom implementation of Merge Patch)
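As a quick orientation before the detailed sections, the sketch below (illustrative payloads only, not taken from a real ECD) expresses the same label addition in each of the three strategies:

```yaml
# 1. JSON Patch - a list of explicit operations
patchType: "application/json-patch+json"
patch: |
  - op: add
    path: /metadata/labels/tier
    value: backend
---
# 2. Merge Patch - a partial document merged into the resource
patchType: "application/merge-patch+json"
patch: |
  metadata:
    labels:
      tier: backend
---
# 3. Strategic Merge Patch - like Merge Patch, but list-aware for Kubernetes objects
patchType: "application/strategic-merge-patch+json"
patch: |
  metadata:
    labels:
      tier: backend
```

For maps, Merge Patch and Strategic Merge Patch behave the same; the strategies differ mainly in how lists are handled, as shown in the sections below.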

JSON Patch

As defined in RFC6902, a JSON Patch is a sequence of operations that are executed on the resource, e.g. {"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}. For more details on how to use the JSON Patch, see the RFC.

The example below shows how you annotate an Ingress resource so that it can be managed by Istio:

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  ...
spec:
  ...
  ingress:
    patchType: "application/json-patch+json"
    patch: |
      - op: replace
        path: /metadata/annotations/kubernetes.io~1ingress.class
        value: istio 


Changing an item in a list

You can conveniently change an item in a list with JSON Patch. In the example below we change the service port from 1234 to 1235. The zero in the path (/spec/ports/0/port) specifies that the first item in the list should be changed.
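The payload for this could look like the following sketch (hedged; the surrounding service fields and the original port 1234 are assumed from the text above):

```yaml
patchType: "application/json-patch+json"
patch: |
  - op: replace
    path: /spec/ports/0/port
    value: 1235
```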

...

Merge Patch

As defined in RFC7386, a Merge Patch is essentially a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC.
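One consequence of RFC7386 worth noting: setting a field to null in the merge patch removes that field from the resource. A hedged sketch (the annotation name is illustrative):

```yaml
patchType: "application/merge-patch+json"
patch: |
  metadata:
    annotations:
      example-annotation: null   # null deletes this annotation from the resource
```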

The example below shows how you add a node selector to restrict this deployment (pod) to run only on nodes labeled with disk type SSD:

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  ...
spec:
  ...
  ...
  patchType: "application/merge-patch+json"
  patch: |
    spec:
      template:
        spec:
          nodeSelector:
            disktype: ssd

Strategic Merge Patch

Strategic Merge Patch is a custom implementation of Merge Patch for Kubernetes. For a detailed explanation of how it works and why it had to be introduced, see API Conventions on Patch - Strategic Merge. In general, Strategic Merge Patch works better when it comes to merging Kubernetes objects in a list.

The example below shows how to add a host alias to the deployment (pod), which adds an entry to /etc/hosts:

Code Block
patchType: "application/strategic-merge-patch+json"
patch: |
  spec:
    template:
      spec:
        hostAliases:
        - ip: "127.0.0.1"
          hostnames:
          - "dummy"

In this ECD's Services, port 9092 is already defined. Using Strategic Merge Patch, you can add two more ports, 9093 and 9094. If you were to change the type from Strategic Merge Patch to Merge Patch in this case, port 9092 would have been removed after the patch.

Code Block
services:
  - spec:
      type: ClusterIP
      ports:
      - port: 9092
        protocol: TCP
        targetPort: 9092
    ...
    ...
    patchType: "application/strategic-merge-patch+json"
    patch: |
      spec:
        ports:
        - name: "port-1"
          port: 9093
          protocol: TCP
          targetPort: 9093
        - name: "port-2"
          port: 9094
          protocol: UDP
          targetPort: 9094

Here is an example changing multiple (sub-)paths in the same deployment/pod patch (also using Strategic Merge Patch):

Code Block
patchType: "application/strategic-merge-patch+json"
patch: |
  spec:
    template:
      spec:
        hostAliases:
        - ip: 34.88.208.176
          hostnames:
          - "client"
          - "client-simulator"
        - ip: 35.228.46.60
          hostnames:
          - "proxy"
          - "proxy2"

Samples

Below are some samples to help you get started with the ECD patch. Note that "Before" is based on the ECD, which is the definition file for the desired state, while "After" is based on the conversion and logic processing done by the Operator, which is the actual object-provisioning YAML applied to the cluster. As you might notice, there are many more objects that will be provisioned and handled by the Operator itself.

Changing Rollout Strategy

Creating an ECD will result in the creation of different Kubernetes objects, one of which is a Deployment object. The rollout strategy defaults to RollingUpdate, but through an ECD patch we can change it to another strategy such as Recreate. The change can be seen in spec.strategy.type in the Deployment object after the ECD Patch.

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-rolling-strategy
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:
      strategy:
        type: Recreate
  image: dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

kubectl get deploy ecd-test-rolling-strategy -o yaml

Code Block
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  ...
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ecd-test-rolling-strategy
  strategy:
    type: Recreate
  template:
    ...
    ...

Setting Toleration

In the example below, with a three-node Kubernetes cluster where two nodes are tainted color=blue and one node is tainted color=red, the test is to add a toleration to the ECD so that it gets deployed to the node tainted with color=red.

Code Block
$ kubectl taint nodes kl-kube-node01.digitalroute.com kl-kube-node02.digitalroute.com color=blue:NoSchedule
node/kl-kube-node01.digitalroute.com tainted
node/kl-kube-node02.digitalroute.com tainted
$ kubectl taint nodes kl-kube-node03.digitalroute.com color=red:NoSchedule
node/kl-kube-node03.digitalroute.com tainted

Observe how the toleration is added and the pod gets scheduled to the node tainted with color=red.

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-tolerations
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:                                 # Spec for Deployment
      template:                           # Template for Pods
        spec:                             # Spec for Pods
          tolerations:                    # Toleration added to each Pod
          - key: "color"
            value: "red"                  
            operator: "Equal"
            effect: "NoSchedule"
  image: dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

kubectl get pods ecd-test-tolerations-5d646c45cd-g9x8n -o wide

Code Block
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE                              NOMINATED NODE   READINESS GATES
ecd-test-tolerations-5d646c45cd-g9x8n   1/1     Running   0          80s   10.244.2.10   kl-kube-node03.digitalroute.com   <none>           <none>

kubectl describe pods ecd-test-tolerations-5d646c45cd-g9x8n | grep -i toleration

Code Block
Tolerations:     color=red:NoSchedule


Setting Environment Variable

You can also add an environment variable. In the example below, the environment variable ENV is added with the value "dev".

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-2
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:
      template:
        spec:
          containers:
          - name: ecd-test-2
            env:
            - name: ENV
              value: dev
  image: dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

kubectl exec ecd-test-2-7487469546-s77xx -- printenv | grep ENV

Code Block
ENV=dev

kubectl describe pods ecd-test-2-7487469546-s77xx

Code Block
Name:         ecd-test-2-7487469546-s77xx
Namespace:    castle-black
Priority:     0
Node:         kl-kube-node03.digitalroute.com/10.60.10.143
Start Time:   Tue, 25 Aug 2020 17:05:04 +0800
Labels:       ECDeployment=ecd-test-2
              app=ecd-test-2
              pod-template-hash=7487469546
Status:       Running
IP:           10.244.2.14
Controlled By:  ReplicaSet/ecd-test-2-7487469546
Containers:
  ecd-test-2:
    Container ID:  docker://a07de37d1cfff80b7ce240d7a6d3821cea393a49b58f8a9f43f97a229efd236f
    Image:         dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
    Image ID:      docker-pullable://dtr.digitalroute.com/dr/mz10@sha256:6e5efb5bb8e526679d2e0878f5cf69011d0f8724be1dc90f26e631f33afe8227
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/mz/entrypoint/docker-entrypoint.sh
    Args:
      -e accepts.any.scheduling.criteria=false
    State:          Running
      Started:      Tue, 25 Aug 2020 17:05:05 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:9090/health/live delay=90s timeout=10s period=15s #success=1 #failure=3
    Readiness:      http-get http://:9090/health/ready delay=0s timeout=1s period=5s #success=1 #failure=60
    Environment:
      ENV:  dev
      TZ:   UTC
    ...
    ...


Mounting a Storage

In this scenario, you might want to attach storage (temporary or permanent) to the ECD Pods, for example for Batch workflow processing files. In the example below, we attach a temporary storage (which lives only as long as the Pod's lifespan) and mount it to the pod.

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-2
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:
      template:
        spec:
          containers:
          - name: ecd-test-2
            volumeMounts:
            - mountPath: /cdr_volume
              name: cdr-volume
          volumes:
          - name: cdr-volume
            emptyDir: {}
  image: dtr.digitalroute.com/dr/mz10:10.2.0-xe-2080-bugfix-latest-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

kubectl get pods ecd-test-2-678ccb76d6-s49ql -o yaml

Code Block
apiVersion: v1
kind: Pod
metadata:
  ...
  ...
  name: ecd-test-2-678ccb76d6-s49ql
  ...
  ...
spec:
  containers:
  - name: ecd-test-2
    ...
    ...
    volumeMounts:
    - mountPath: /cdr_volume
      name: cdr-volume
    - mountPath: /etc/config/common
      name: common-config
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4dc54
      readOnly: true
  ...
  ...
  volumes:
  - emptyDir: {}
    name: cdr-volume
  - configMap:
      defaultMode: 420
      name: common-config
    name: common-config
  - name: default-token-4dc54
    secret:
      defaultMode: 420
      secretName: default-token-4dc54
status:
  ...
  ...

Removing an Object

You can also use this functionality to remove a provisioned Kubernetes object. In the example below, the directive marker ($patch: delete) is used to remove a volume and volumeMount.

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-2
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:  
      template:              
        spec:     
          containers:
          - name: ecd-test-2
            volumeMounts:
            - mountPath: /cdr_volume
              name: cdr-volume
              $patch: delete
          volumes:
          - name: cdr-volume
            emptyDir: {}
            $patch: delete
  image: dtr.digitalroute.com/dr/mz10:10.2.0-xe-2080-bugfix-latest-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }
kubectl get pods ecd-test-2-678ccb76d6-s49ql -o yaml

Code Block
apiVersion: v1
kind: Pod
metadata:
  ...
  ...
  name: ecd-test-2-678ccb76d6-s49ql
  ...
  ...
spec:
  containers:
  - name: ecd-test-2
    ...
    ...
    volumeMounts:
    - mountPath: /etc/config/common
      name: common-config
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4dc54
      readOnly: true
  ...
  ...
  volumes:
  - configMap:
      defaultMode: 420
      name: common-config
    name: common-config
  - name: default-token-4dc54
    secret:
      defaultMode: 420
      secretName: default-token-4dc54
status:
  ...
  ...