Upgrade - AWS (3.1)

Upgrade Usage Engine

  1. Validate the Helm Chart Contents

    To ensure there are no errors in the Helm chart content, you can run the following:

    $ helm lint usage-engine-private-edition

    You can also render the chart with helm template to inspect the resulting YAML with your values applied:

    $ helm template usage-engine-private-edition
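    The two validation commands above can be combined into a small guard. This is a sketch only; the helper name validate_chart is ours, and the chart path argument is an assumption, not a fixed location:

```shell
# Sketch: lint the chart and render its templates, aborting on the first
# failure. The default chart path is an assumption.
validate_chart() {
  chart="${1:-usage-engine-private-edition}"
  helm lint "$chart" || return 1
  helm template "$chart" > /dev/null || return 1
  echo "chart $chart validated"
}
```

    You would then run, for example, validate_chart usage-engine-private-edition before proceeding to the upgrade.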
  2. Upgrade all pods.

    - Use the new License Key for the upgrade. If you have not received a new License Key, see the Release Note for the alternative method.

    - If the upgrade fails, it is rolled back automatically, and the new resources created by the upgrade are removed.

    Before you upgrade, validate the helm chart with a dry-run:
    $ helm upgrade <release_name_platform> usage-engine-private-edition --set-file licenseKey=<licensekey_file> --dry-run --debug --namespace <namespace>
    If the validation is ok, continue with the upgrade:
    $ helm upgrade <release_name_platform> usage-engine-private-edition --set-file licenseKey=<licensekey_file> --atomic --cleanup-on-fail --debug --namespace <namespace>
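    As a sketch, the dry-run and upgrade commands above can be wrapped so that the real upgrade only runs if validation succeeds; the function name dry_run_then_upgrade and its argument order are our own, not part of the product:

```shell
# Sketch: run the helm dry-run first and only perform the real upgrade
# (with --atomic and --cleanup-on-fail, as above) if validation passes.
dry_run_then_upgrade() {
  release="$1"; license_file="$2"; ns="$3"
  if helm upgrade "$release" usage-engine-private-edition \
       --set-file licenseKey="$license_file" \
       --dry-run --debug --namespace "$ns"
  then
    helm upgrade "$release" usage-engine-private-edition \
      --set-file licenseKey="$license_file" \
      --atomic --cleanup-on-fail --debug --namespace "$ns"
  else
    echo "dry-run validation failed; upgrade not attempted" >&2
    return 1
  fi
}
```

    You would call it with the same placeholders as above, for example dry_run_then_upgrade <release_name_platform> <licensekey_file> <namespace>.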
  3. Verify the installation. STATUS should be Running, and the READY state (ready/desired) should be 1/1, 2/2, and so on. It can take a few minutes before everything is up and running.

    # Verify pods
    $ kubectl get pods --namespace <namespace>
    NAME                                          READY   STATUS    RESTARTS   AGE
    aws-alb-ingress-controller-5fbf5b59d9-wsgzt   1/1     Running   0          24h
    efs-provisioner-69854b5db8-2bhds              1/1     Running   0          24h
    external-dns-7b79999d56-tq6pl                 1/1     Running   0          24h
    mz-ingress-nginx-5767cbbcf8-d6crt             1/1     Running   0          24h
    mz-operator-controller-manager-0              2/2     Running   0          24h
    mzonline-574cb89f54-7vkrg                     1/1     Running   0          24h
    platform-0                                    1/1     Running   0          24h
    wd-c8f5d77d8-pls97                            1/1     Running   0          23h
    # Verify service contexts to connect
    $ kubectl get services --namespace <namespace>
    NAME                                             TYPE           CLUSTER-IP   EXTERNAL-IP                                                        PORT(S)                                       AGE
    external-dns                                     ClusterIP                   <none>                                                             7979/TCP                                      24h
    ingress-nginx                                    NodePort                    <none>                                                             80:32266/TCP,443:30403/TCP                    24h
    kubernetes                                       ClusterIP                   <none>                                                             443/TCP                                       46h
    mz-operator-controller-manager-metrics-service   ClusterIP                   <none>                                                             8443/TCP                                      24h
    mz-operator-controller-manager-service           ClusterIP                   <none>                                                             443/TCP                                       24h
    mzonline                                         NodePort                    <none>                                                             80:31250/TCP,443:30738/TCP                    24h
    platform                                         LoadBalancer                a21e6b94b01a0f02fe62803350-284407273.eu-west-1.elb.amazonaws.com   9000:31561/TCP,6790:31654/TCP,443:31216/TCP   24h
    wd                                               NodePort                    <none>                                                             9999:31135/TCP                                24h


    The output of kubectl get services shows the exposed ports. This example uses the default namespace and the Helm charts for efs-provisioner, external-dns and aws-alb-ingress-controller from 3PP; see the "Readme" instructions in the zip file at /wiki/spaces/DRXXE/pages/6194875.
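    The READY check in step 3 can also be scripted. The following is a sketch, not part of the product tooling; the helper name check_pods_ready is ours:

```shell
# Sketch: read `kubectl get pods` output on stdin and fail if any pod is
# not fully ready (ready != desired) or not in STATUS Running.
check_pods_ready() {
  awk 'NR > 1 {
    split($2, c, "/")
    if (c[1] != c[2] || $3 != "Running") { print "NOT READY: " $1; bad = 1 }
  } END { exit bad }'
}
```

    You would pipe the pod listing into it, for example kubectl get pods --namespace <namespace> | check_pods_ready.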

    Manual Downgrade

    If you need to downgrade the system manually, see Downgrade (3.0).
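    A downgrade can also be sketched with helm rollback, assuming the previous revision is still in the helm release history; the helper name rollback_release is ours, and this is not the documented downgrade procedure:

```shell
# Sketch: list the release history, then roll back to a given revision.
# Assumes the target revision still exists in the helm release history.
rollback_release() {
  release="$1"; revision="$2"; ns="$3"
  helm history "$release" --namespace "$ns"
  helm rollback "$release" "$revision" --namespace "$ns"
}
```

    For example, rollback_release <release_name_platform> 1 <namespace> would roll the release back to revision 1.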

Manual ECD rolling update

  • If Automatic Rolling Update was not configured for the ECDs, you must open the Web Interface and manually upgrade the ECDs to the new image by clicking Upgrade in the EC Deployment interface.

  1. Connect to the Web Interface:


    Web interface: https://mzonline.<domain>/
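    Before logging in, you can check that the Web Interface answers over HTTPS. This is an optional sketch; the helper name check_webui is ours, and -k skips certificate verification, which you may not want outside a quick test:

```shell
# Sketch: print the HTTP status code returned by the Web Interface.
# Pass your domain as the argument, e.g. check_webui example.com
check_webui() {
  curl -sk -o /dev/null -w '%{http_code}\n' "https://mzonline.$1/"
}
```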

  2. Log in to the Web Interface and click Upgrade for the ECDs marked with a warning sign to apply the upgrades.

    Example: the ECD ecd01 is marked with a warning sign, indicating it is ready for the image upgrade.