
An ECD Patch provides a flexible way to define and provision Kubernetes objects that suit your system architecture, or to tailor the ECD to your preferred flavor.

Note!

An ECD Patch is NOT the same as using the kubectl patch command. Although they are conceptually similar, they do not necessarily behave in the same way.

Introduction

The ECD patch functionality enables you to add, change, and remove fields in the Kubernetes objects that the Operator creates from the ECD specification, including fields and functionality that are not directly supported by the ECD specification itself. The patch and patchType fields are part of the ECD CRD structure.

The Operator expects the ECD patch to be in YAML format, with parameters matching the selected patching strategy. The Operator merges the user-defined YAML with the original YAML, producing a single YAML that is then applied to the Kubernetes cluster.

The ECD patch functionality can be used either from Desktop Online or directly in the ECD specification YAML.

Note that parameters defined by Usage Engine in the ECD specification (Workflows, Workflow Groups) cannot be patched with the ECD Patch functionality. You can, however, edit these parameters directly in the ECD specification and apply the changes to the cluster.

Patch Format

The patch format consists of two fields, patch and patchType, embedded under the different Kubernetes objects. The patch field is the payload itself, which is applied to the ECD Kubernetes objects. The patchType field defines the patching strategy used to apply the payload.

Currently, the following objects can be patched through ECD:

  1. ECD (Deployments and Pods)

  2. Services

  3. HPA/autoscaling

  4. Ingress

Below is an example of the structure under ECD (spec.patch and spec.patchType):

apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: anyECDeployment
  namespace: anyNamespace
spec:
  ...
  ...
  patchType: "application/merge-patch+json"
  patch: |
        ...
        ...

Below is an example of the structure under HPA (spec.autoscale.patch and spec.autoscale.patchType):

apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  ...
spec:
  autoscale:
    ...
    ...
    patchType: "application/merge-patch+json"
    patch: |
      spec:
        ...

Note!

There is a pipe "|" right after patch, indicating that the lines below are multi-line YAML.

In Desktop Online, you can find the corresponding patch fields for ECD (Deployments and Pods), Services, HPA/autoscaling, and Ingress (Ingress is also found under networking) in their respective ECD sections:

ECD-patch.png

Patching Strategies

There are 3 types of strategies supported by the ECD Patch feature:

  1. JSON Patch (RFC6902)

  2. Merge Patch (RFC7386)

  3. Strategic Merge Patch (Kubernetes custom implementation of Merge Patch)
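
Each strategy has a corresponding patchType value, used in the examples throughout this section:

  1. JSON Patch: application/json-patch+json

  2. Merge Patch: application/merge-patch+json

  3. Strategic Merge Patch: application/strategic-merge-patch+json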

JSON Patch

As defined in RFC6902, a JSON Patch is a sequence of operations that are executed on the resource, e.g. {"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}. For more details on how to use the JSON Patch, see the RFC.

The example below shows how you annotate an Ingress resource so that it can be managed by Istio:

apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  ...
spec:
  ...
  ingress:
    patchType: "application/json-patch+json"
    patch: |
      - op: replace
        path: /metadata/annotations/kubernetes.io~1ingress.class
        value: istio 

Changing an item in a list

You can conveniently change an item in a list with JSON Patch. In the example below, the service port is changed from 1234 to 1235. The zero in the path (/spec/ports/0/port) specifies that the first item in the list is changed.
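
Below is a minimal sketch of such a patch under services; the surrounding service definition (a single port, 1234) is illustrative:

services:
    - spec:
        type: ClusterIP
        ports:
          - port: 1234
            protocol: TCP
            targetPort: 1234
      patchType: "application/json-patch+json"
      patch: |
        - op: replace
          path: /spec/ports/0/port
          value: 1235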

Merge Patch

As defined in RFC7386, a Merge Patch is a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC.

The example below shows how to add a node selector that restricts this deployment (pod) to run only on nodes labeled with disk type ssd:

apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  ...
spec:
  ...
  ...
  patchType: "application/merge-patch+json"
  patch: |
    spec:
      template:
        spec:
          nodeSelector:
            disktype: ssd

Strategic Merge Patch

Strategic Merge Patch is a custom implementation of Merge Patch for Kubernetes. For a detailed explanation of how it works and why it had to be introduced, see API Conventions on Patch - Strategic Merge. In general, Strategic Merge Patch works better when merging Kubernetes objects in a list.

In this ECD's services section, port 9092 is already defined. Using a Strategic Merge Patch, you can add two more ports, 9093 and 9094. If you changed the patch type from Strategic Merge Patch to Merge Patch, port 9092 would be removed after the patch is applied.

services:
    - spec:
        type: ClusterIP
        ports:
          - port: 9092
            protocol: TCP
            targetPort: 9092
      ...
      ...
      patchType: "application/strategic-merge-patch+json"
      patch: |
        spec:
          ports:
            - name: "port-1"
              port: 9093
              protocol: TCP
              targetPort: 9093
            - name: "port-2"
              port: 9094
              protocol: UDP
              targetPort: 9094
          ...
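
For reference, below is a sketch of the resulting Service ports after the Strategic Merge Patch above has been applied; the exact output depends on the rest of the ECD, and the name of the pre-existing port is omitted here:

spec:
  type: ClusterIP
  ports:
    - port: 9092
      protocol: TCP
      targetPort: 9092
    - name: "port-1"
      port: 9093
      protocol: TCP
      targetPort: 9093
    - name: "port-2"
      port: 9094
      protocol: UDP
      targetPort: 9094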

Here is an example changing multiple (sub-)paths in the same patch:

spec:
  template:
    spec:
      hostAliases:
      - ip: 34.88.208.176
        hostnames:
        - "client"
        - "client-simulator"
      - ip: 35.228.46.60
        hostnames:
        - "proxy"
        - "proxy2"
      containers:
      - name: ec1
        resources:
          limits:
            memory: 1536Mi
          requests:
            memory: 1024Mi

Samples

Below are samples that can help you get started with an ECD patch. The "Before" section shows the ECD, which is the definition file for the desired state, while the "After" section shows the result of the conversion and logic processing done by the Operator, that is, the actual YAML for the objects provisioned in the cluster. As you can see, several objects are provisioned and handled by the Operator itself.

Changing Rollout Strategy

Creating an ECD results in the creation of several Kubernetes objects, one of which is a Deployment object. The rollout strategy defaults to RollingUpdate, but with an ECD patch you can change it to another strategy, such as Recreate. The change can be seen in spec.strategy.type in the Deployment object after the ECD patch.

Before ECD Patch

kubectl apply -f file.yaml

apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-rolling-strategy
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:
      strategy:
        type: Recreate
  image: dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

After ECD Patch

kubectl get deploy ecd-test-rolling-strategy -o yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  ...
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ecd-test-rolling-strategy
  strategy:
    type: Recreate
  template:
    ...
    ...

Setting Toleration

In the example below, a Kubernetes cluster with three nodes is used: two nodes are tainted with color=blue and one node is tainted with color=red. The test is to add a toleration to the ECD so that it gets deployed on the node tainted with color=red.

$ kubectl taint nodes kl-kube-node01.digitalroute.com kl-kube-node02.digitalroute.com color=blue:NoSchedule
node/kl-kube-node01.digitalroute.com tainted
node/kl-kube-node02.digitalroute.com tainted
$ kubectl taint nodes kl-kube-node03.digitalroute.com color=red:NoSchedule
node/kl-kube-node03.digitalroute.com tainted

Observe how the toleration is added and how the pod gets scheduled on the node tainted with color=red.

Before ECD Patch

kubectl apply -f file.yaml

apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-tolerations
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:                                 # Spec for Deployment
      template:                           # Template for Pods
        spec:                             # Spec for Pods
          tolerations:                    # Toleration added to each Pod
          - key: "color"
            value: "red"                  
            operator: "Equal"
            effect: "NoSchedule"
  image: dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

After ECD Patch

kubectl get pods ecd-test-tolerations-5d646c45cd-g9x8n -o wide

NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE                              NOMINATED NODE   READINESS GATES
ecd-test-tolerations-5d646c45cd-g9x8n   1/1     Running   0          80s   10.244.2.10   kl-kube-node03.digitalroute.com   <none>           <none>

 

kubectl describe pods ecd-test-tolerations-5d646c45cd-g9x8n | grep -i toleration

Name:         ecd-test-tolerations-5d646c45cd-g9x8n
Labels:       ECDeployment=ecd-test-tolerations
              app=ecd-test-tolerations
Controlled By:  ReplicaSet/ecd-test-tolerations-5d646c45cd
  ecd-test-tolerations:
Tolerations:     color=red:NoSchedule
  Normal   Scheduled  5m21s  default-scheduler                         Successfully assigned castle-black/ecd-test-tolerations-5d646c45cd-g9x8n to kl-kube-node03.digitalroute.com
  Normal   Created    5m21s  kubelet, kl-kube-node03.digitalroute.com  Created container ecd-test-tolerations
  Normal   Started    5m20s  kubelet, kl-kube-node03.digitalroute.com  Started container ecd-test-tolerations

Setting Environment Variable

You can also add an environment variable. In the example below, the environment variable ENV is added with the value "dev".

Before ECD Patch

kubectl apply -f file.yaml

apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-2
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:  
      template:              
        spec:     
          containers:
          - name: ecd-test-2
            env:
            - name: ENV 
              value: dev
  image: dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

After ECD Patch

kubectl exec ecd-test-2-7487469546-s77xx -- printenv | grep ENV

ENV=dev

 

kubectl describe pods ecd-test-2-7487469546-s77xx

Name:         ecd-test-2-7487469546-s77xx
Namespace:    castle-black
Priority:     0
Node:         kl-kube-node03.digitalroute.com/10.60.10.143
Start Time:   Tue, 25 Aug 2020 17:05:04 +0800
Labels:       ECDeployment=ecd-test-2
              app=ecd-test-2
              pod-template-hash=7487469546
Annotations:  Status:  Running
IP:           10.244.2.14
IPs:
  IP:           10.244.2.14
Controlled By:  ReplicaSet/ecd-test-2-7487469546
Containers:
  ecd-test-2:
    Container ID:  docker://a07de37d1cfff80b7ce240d7a6d3821cea393a49b58f8a9f43f97a229efd236f
    Image:         dtr.digitalroute.com/dr/mz10:10.1.0.0-dev-20200813052033.a224284-ec
    Image ID:      docker-pullable://dtr.digitalroute.com/dr/mz10@sha256:6e5efb5bb8e526679d2e0878f5cf69011d0f8724be1dc90f26e631f33afe8227
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/mz/entrypoint/docker-entrypoint.sh
    Args:
      -e accepts.any.scheduling.criteria=false
    State:          Running
      Started:      Tue, 25 Aug 2020 17:05:05 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:9090/health/live delay=90s timeout=10s period=15s #success=1 #failure=3
    Readiness:      http-get http://:9090/health/ready delay=0s timeout=1s period=5s #success=1 #failure=60
    Environment:
      ENV:  dev
      TZ:   UTC

Removing an Object

You can also use this functionality to remove parts of a provisioned Kubernetes object. In the example below, the directive marker ($patch: delete) is used to remove a volume and a volumeMount.

Before ECD Patch

kubectl apply -f file.yaml

apiVersion: mz.digitalroute.com/v1alpha1
kind: ECDeployment
metadata:
  name: ecd-test-2
spec:
  enabled: true
  patchType: "application/strategic-merge-patch+json"
  patch: |
    spec:  
      template:              
        spec:     
          containers:
          - name: ecd-test-2
            volumeMounts:
            - mountPath: /cdr_volume
              name: cdr-volume
              $patch: delete
          volumes:
          - name: cdr-volume
            emptyDir: {}
            $patch: delete
  image: dtr.digitalroute.com/dr/mz10:10.2.0-xe-2080-bugfix-latest-ec
  workflows:
  - template: Default.http2
    instances:
      - name: server-1
        parameters: |
          {
            "port": 8989
          }

After ECD Patch

kubectl get pods ecd-test-2-678ccb76d6-s49ql -o yaml

apiVersion: v1
kind: Pod
metadata:
  ...
  ...
  name: ecd-test-2-678ccb76d6-s49ql
  ...
  ...
spec:
  containers:
  - name: ecd-test-2
    ...
    ...
    volumeMounts:
    - mountPath: /etc/config/common
      name: common-config
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4dc54
      readOnly: true
  ...
  ...
  volumes:
  - configMap:
      defaultMode: 420
      name: common-config
    name: common-config
  - name: default-token-4dc54
    secret:
      defaultMode: 420
      secretName: default-token-4dc54
status:
  ...
  ...