An ECD Patch provides a flexible option for you to define and provision Kubernetes objects that suit your system architecture, or to tailor the ECD to your preferred flavor.

...

Note!

An ECD Patch is NOT the same as using the kubectl patch command. Although they are conceptually similar, they do not necessarily behave in the same way.

Introduction

The ECD patch functionality enables you to add, change, and remove certain fields and functionality from the different Kubernetes objects created by the Operator through the ECD specification, that might not be supported directly in the ECD specification. The patch and patchType fields are part of the ECD CRD structure.

The operator expects the ECD patch to be in YAML format, with the respective parameters according to the patching strategy. The operator will attempt to patch the original YAML with the user-defined YAML, resulting in one YAML, before applying it to the Kubernetes cluster.

The ECD patch functionality can be used either from Desktop Online or directly in the ECD specification YAML.


Note!

Parameters defined by Usage Engine in the ECD specification (Workflows, Workflow Groups) cannot be patched with the ECD Patch functionality. You can, however, edit these parameters directly in the ECD specification and apply the changes to the cluster.

Patch Format

The Patch format consists of 2 fields, patch and patchType, embedded under different Kubernetes objects. The patch field is the payload itself, which will be used to patch the ECD Kubernetes objects. patchType is the field where you define the patching strategy used to apply the payload.

Currently, the following objects can be patched through the ECD:

  1. ECD (Deployments and Pods)

  2. Services

  3. HPA/autoscaling

  4. Ingress

Below is an example of the structure under ECD (spec.patch and spec.patchType):

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: anyECDeployment
  namespace: anyNamespace
spec:
  ...
  ...
  patchType: "application/merge-patch+json"
  patch: |
        ...
        ...

Below is an example of the structure under HPA (spec.autoscale.patch and spec.autoscale.patchType):

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  ...
spec:
  autoscale:
    ...
    ...
    patchType: "application/merge-patch+json"
    patch: |
      spec:
        ...

Note!

There is a pipe “|” right after patch, to indicate that the lines below are multi-line YAML.

In Desktop Online you can find the corresponding patch for ECD (deployment and pods), Services, HPA/autoscaling, and Ingress (Ingress also being under networking) under their respective ECD sections:

...

Patching Strategies

There are 3 types of strategies supported by the ECD Patch feature:

  1. JSON Patch (RFC6902)

  2. Merge Patch (RFC7386)

  3. Strategic Merge Patch (Kubernetes custom implementation of Merge Patch)

JSON Patch

As defined in RFC6902, a JSON Patch is a sequence of operations that are executed on the resource, e.g. {"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}. For more details on how to use the JSON Patch, see the RFC.
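For illustration, a single JSON Patch payload can carry several operations, which are applied in order. The label names in this sketch are hypothetical and not taken from the original examples:

Code Block
patchType: "application/json-patch+json"
patch: |
  - op: add
    path: /metadata/labels/env
    value: dev
  - op: remove
    path: /metadata/labels/obsolete

Note that a remove operation fails if the target path does not exist, so order the operations with that in mind.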

The example below shows how you can annotate an Ingress resource so that it can be managed by Istio:

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  ...
spec:
  ...
  ingress:
    patchType: "application/json-patch+json"
    patch: |
      - op: replace
        path: /metadata/annotations/kubernetes.io~1ingress.class
        value: istio

Changing an item in a list

You can conveniently change an item in a list with JSON Patch. In the example below, the service port is changed from 1234 to 1235. The zero in the path (/spec/ports/0/port) specifies that the first item in the list should be changed.

...
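A minimal sketch of such a patch, assuming the service defines port 1234 as the first entry in its ports list:

Code Block
patchType: "application/json-patch+json"
patch: |
  - op: replace
    path: /spec/ports/0/port
    value: 1235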

Merge Patch

As defined in RFC7386, a Merge Patch is essentially a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC.
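One Merge Patch behavior worth noting (per RFC 7386, not shown in the original examples) is that setting a field to null removes it from the resource. A sketch, assuming the deployment carries a hypothetical example-label:

Code Block
patchType: "application/merge-patch+json"
patch: |
  metadata:
    labels:
      example-label: null

This also means a Merge Patch cannot set a field to a literal null value; use a JSON Patch or Strategic Merge Patch if you need that.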

The example below shows how you can add a node selector to restrict this deployment (pod) to run only on nodes labeled with the disk type SSD:

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  ...
spec:
  ...
  ...
  patchType: "application/merge-patch+json"
  patch: |
    spec:
      template:
        spec:
          nodeSelector:
            disktype: ssd

Strategic Merge Patch

Strategic Merge Patch is a custom implementation of Merge Patch for Kubernetes. For a detailed explanation of how it works and why it had to be introduced, see API Conventions on Patch - Strategic Merge. In general, Strategic Merge Patch works better when it comes to merging Kubernetes objects in a list.

The example below shows how to add a host alias to the deployment (pod), which adds an entry to /etc/hosts.

Code Block
  ...
  ...
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:
      template:
        spec:
          hostAliases:
          - ip: "127.0.0.1"
            hostnames:
            - "dummy"

In this ECD Services, port 9092 is already defined. Using Strategic Merge Patch, you can add two more ports, 9093 and 9094. If you were to change the type from a Strategic Merge Patch to a Merge Patch in this case, port 9092 would have been removed after the patch.

Code Block
services:
- spec:
    type: ClusterIP
    ports:
    - port: 9092
      protocol: TCP
      targetPort: 9092
  ...
  ...
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:
      ports:
      - name: "port-1"
        port: 9093
        protocol: TCP
        targetPort: 9093
      - name: "port-2"
        port: 9094
        protocol: UDP
        targetPort: 9094

Here is an example of changing multiple (sub-)paths in the same deployment/pod patch (also using Strategic Merge Patch):

Code Block
patchType: "application/strategic-merge-patch+json"
patch: |
  spec:
    template:
      spec:
        hostAliases:
        - ip: 34.88.208.176
          hostnames:
          - "client"
          - "client-simulator"
        - ip: 35.228.46.60
          hostnames:
          - "proxy"
          - "proxy2"
...


Samples

Below are samples that can help you get started with an ECD patch. The “Before” section is based on the ECD, which is the definition file for the desired state, while the “After” section is based on the conversion and logic processing done by the Operator, which is the actual object-provisioning YAML applied to the cluster. As you can see, there are several more objects that are provisioned and handled by the Operator itself.

Changing Rollout Strategy

Creating an ECD will result in the creation of different Kubernetes objects, one of which is a Deployment object. The rollout strategy defaults to RollingUpdate, but through an ECD patch you can change it to another strategy such as Recreate. The change can be seen on the spec.strategy.type in the Deployment object after the ECD Patch.

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-rolling-strategy
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:
      strategy:
        type: Recreate
  image: dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }
kubectl get deploy ecd-test-rolling-strategy -o yaml

Code Block
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  ...
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ecd-test-rolling-strategy
  strategy:
    type: Recreate
  template:
    ...
    ...

Setting Toleration

In the example below, with a 3-node Kubernetes cluster, 2 nodes are tainted color=blue and 1 node is tainted color=red. The test adds a toleration to the ECD so that it gets deployed to the node tainted with color=red.

Code Block
kubectl taint nodes kl-kube-node01.digitalroute.com kl-kube-node02.digitalroute.com color=blue:NoSchedule
node/kl-kube-node01.digitalroute.com tainted
node/kl-kube-node02.digitalroute.com tainted
kubectl taint nodes kl-kube-node03.digitalroute.com color=red:NoSchedule
node/kl-kube-node03.digitalroute.com tainted

Observe how the toleration is added and the pod gets scheduled to the node tainted with color=red.

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-tolerations
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:                                 # Spec for Deployment
      template:                           # Template for Pods
        spec:                             # Spec for Pods
          tolerations:                    # Toleration added to each Pod
          - key: "color"
            value: "red"
            operator: "Equal"
            effect: "NoSchedule"
  image: dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

kubectl get pods ecd-test-tolerations-5d646c45cd-g9x8n -o wide

Code Block
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE                              NOMINATED NODE   READINESS GATES
ecd-test-tolerations-5d646c45cd-g9x8n   1/1     Running   0          80s   10.244.2.10   kl-kube-node03.digitalroute.com   <none>           <none>

kubectl describe pods ecd-test-tolerations-5d646c45cd-g9x8n | grep -i toleration

Code Block
Name:         ecd-test-tolerations-5d646c45cd-g9x8n
Labels:       ECDeployment=ecd-test-tolerations
              app=ecd-test-tolerations
Controlled By:  ReplicaSet/ecd-test-tolerations-5d646c45cd
  ecd-test-tolerations:
Tolerations:     color=red:NoSchedule
  Normal   Scheduled  5m21s  default-scheduler                         Successfully assigned castle-black/ecd-test-tolerations-5d646c45cd-g9x8n to kl-kube-node03.digitalroute.com
  Normal   Created    5m21s  kubelet, kl-kube-node03.digitalroute.com  Created container ecd-test-tolerations
  Normal   Started    5m20s  kubelet, kl-kube-node03.digitalroute.com  Started container ecd-test-tolerations

Setting Environment Variable

You can also add an environment variable. In the example below, the environment variable ENV is added with the value “dev”.

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-2
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:
      template:
        spec:
          containers:
          - name: ecd-test-2
            env:
            - name: ENV
              value: dev
  image: dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

kubectl exec ecd-test-2-7487469546-s77xx -- printenv | grep ENV

Code Block
ENV=dev

kubectl describe pods ecd-test-2-7487469546-s77xx

Code Block
Name:         ecd-test-2-7487469546-s77xx
Namespace:    castle-black
Priority:     0
Node:         kl-kube-node03.digitalroute.com/10.60.10.143
Start Time:   Tue, 25 Aug 2020 17:05:04 +0800
Labels:       ECDeployment=ecd-test-2
              app=ecd-test-2
              pod-template-hash=7487469546
Annotations:
Status:       Running
IP:           10.244.2.14
IPs:
  IP:           10.244.2.14
Controlled By:  ReplicaSet/ecd-test-2-7487469546
Containers:
  ecd-test-2:
    Container ID:  docker://a07de37d1cfff80b7ce240d7a6d3821cea393a49b58f8a9f43f97a229efd236f
    Image:         dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
    Image ID:      docker-pullable://dtr.digitalroute.com/dr/mz10@sha256:6e5efb5bb8e526679d2e0878f5cf69011d0f8724be1dc90f26e631f33afe8227
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/mz/entrypoint/docker-entrypoint.sh
    Args:
      -e accepts.any.scheduling.criteria=false
    State:          Running
      Started:      Tue, 25 Aug 2020 17:05:05 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:9090/health/live delay=90s timeout=10s period=15s #success=1 #failure=3
    Readiness:      http-get http://:9090/health/ready delay=0s timeout=1s period=5s #success=1 #failure=60
    Environment:
      ENV:  dev
      TZ:   UTC
...

Mounting a storage

In this scenario, you might want to attach storage (be it temporary or permanent) to the ECD Pods, perhaps for batch workflow processing files. In the example below, a temporary storage (which lives only as long as the Pod's lifespan) is attached and mounted to the pod.

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-2
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:
      template:
        spec:
          containers:
          - name: ecd-test-2
            volumeMounts:
            - mountPath: /cdr_volume
              name: cdr-volume
          volumes:
          - name: cdr-volume
            emptyDir: {}
  image: dtr.digitalroute.com/dr/mz10:10.2.0-xe-2080-bugfix-latest-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

kubectl get pods ecd-test-2-678ccb76d6-s49ql -o yaml

Code Block
apiVersion: v1
kind: Pod
metadata:
  ...
  ...
  name: ecd-test-2-678ccb76d6-s49ql
  ...
  ...
spec:
  containers:
  - name: ecd-test-2
    ...
    ...
    volumeMounts:
    - mountPath: /cdr_volume
      name: cdr-volume
    - mountPath: /etc/config/common
      name: common-config
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4dc54
      readOnly: true
  ...
  ...
  volumes:
  - emptyDir: {}
    name: cdr-volume
  - configMap:
      defaultMode: 420
      name: common-config
    name: common-config
  - name: default-token-4dc54
    secret:
      defaultMode: 420
      secretName: default-token-4dc54
status:
  ...
  ...

Removing an Object

You can also use this functionality to remove a provisioned Kubernetes object. Following on from the storage-mounting example, in the example below the directive marker ($patch: delete) is used to remove the volume and volumeMount.

Before ECD Patch

After ECD Patch

kubectl apply -f file.yaml

Code Block
apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-2
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:  
      template:              
        spec:     
          containers:
          - name: ecd-test-2
            volumeMounts:
            - mountPath: /cdr_volume
              name: cdr-volume
              $patch: delete
          volumes:
          - name: cdr-volume
            emptyDir: {}
            $patch: delete
  image: dtr.digitalroute.com/dr/mz10:10.2.0-xe-2080-bugfix-latest-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }
kubectl get pods ecd-test-2-678ccb76d6-s49ql -o yaml

Code Block
apiVersion: v1
kind: Pod
metadata:
  ...
  ...
  name: ecd-test-2-678ccb76d6-s49ql
  ...
  ...
spec:
  containers:
  - name: ecd-test-2
    ...
    ...
    volumeMounts:
    - mountPath: /etc/config/common
      name: common-config
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4dc54
      readOnly: true
  ...
  ...
  volumes:
  - configMap:
      defaultMode: 420
      name: common-config
    name: common-config
  - name: default-token-4dc54
    secret:
      defaultMode: 420
      secretName: default-token-4dc54
status:
  ...
  ...
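Besides $patch: delete, the Strategic Merge Patch implementation also supports a $patch: replace directive, which replaces a list wholesale instead of merging it entry by entry. A sketch following the same ECD structure (the toleration values here are illustrative, reusing the earlier toleration example):

Code Block
patchType: "application/strategic-merge-patch+json"
patch: |
  spec:
    template:
      spec:
        tolerations:
        - $patch: replace
        - key: "color"
          value: "red"
          operator: "Equal"
          effect: "NoSchedule"

With this patch, any tolerations already present on the Pod template are discarded and only the listed toleration remains.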