
Preparations

Before doing anything to the running installation, the config file for the new installation should be prepared by following these steps:

  1. Retrieve the values.yaml file that you have used previously, or, if you want to start from scratch, extract it from the installation by running these commands:

    helm -n <namespace> get all <helm name>

    For example:

    helm -n uepe get all uepe

    Where uepe is the helm release name you have selected for your installation. If you are unsure of the release name, run helm list and you will see a list similar to the one below.

    helm list
    NAME         	NAMESPACE	REVISION	UPDATED                                 	STATUS  	CHART                             	APP VERSION
    external-dns 	uepe     	1       	2024-05-08 15:27:48.258978207 +0200 CEST	deployed	external-dns-7.2.0                	0.14.1     
    ingress-nginx	uepe     	1       	2024-05-08 16:18:43.919980224 +0200 CEST	deployed	ingress-nginx-4.10.0              	1.10.0     
    uepe         	uepe     	3       	2024-05-10 14:16:17.724426589 +0200 CEST	deployed	usage-engine-private-edition-4.0.0	4.0.0      
  2. Extract the values manually from the output. Copy the lines below “USER-SUPPLIED VALUES:” and stop at the blank line before “COMPUTED VALUES:”. Save the copied content to the config file valuesFromSystem.yaml.
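
    Alternatively, helm can print only the user-supplied values, which saves the manual copying. A minimal sketch, using the same release and namespace as above; the -o yaml flag makes helm print plain YAML without the "USER-SUPPLIED VALUES:" header:

    helm -n uepe get values uepe -o yaml > valuesFromSystem.yaml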

  3. Update the helm repositories to get the latest helm chart versions by running the following commands.

    helm repo list
    helm repo update
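
    The output should confirm that the repository containing the chart was updated. A sketch of typical helm output, assuming the repository is named digitalroute:

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "digitalroute" chart repository
    Update Complete. ⎈Happy Helming!⎈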
  4. Retrieve the new version from the repository by running the following command. Refer to Release Information for the Helm Chart version.

    helm fetch <repo name>/usage-engine-private-edition --version <version> --untar

    For example:

    helm fetch digitalroute/usage-engine-private-edition --version 4.0.0 --untar
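
    The chart is extracted into a folder named after the chart, in this case usage-engine-private-edition. The exact file listing may differ between versions, but you should find the changelog at the top level, roughly like this:

    ls usage-engine-private-edition
    CHANGELOG.md  Chart.yaml  templates  values.yaml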
  5. Next, check the file CHANGELOG.md inside the created folder to find out what has changed in the values file in the new version.
    If you are uncertain about how to interpret the content of the file, see the examples below of keys and how to interpret them:

    The following values have been removed:
    * ```mzOperator.clusterWide```
    * ```mzOperator.experimental.performPeriodicWorkflowCleanup```
    * ```jmx.remote```
    * ```platform.debug.jmx```
    

    This means that the keys correspond to the following structure in the values file:

    mzOperator:
      clusterWide:
      experimental:
        performPeriodicWorkflowCleanup
    jmx:
      remote:
    platform:
      debug:
        jmx:

    Each part of a key does not necessarily appear on the line directly following the previous part; other keys may come in between, but a child key always appears before the next key at the same level as its parent. So in this example of a values.yaml file:

    debug:
      script:
        enabled: false
      log:
        level:
          codeserver: info
          jetty: 'off'
          others: warn

    an example of a key could be debug.log.level.jetty.
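
    To quickly check whether your valuesFromSystem.yaml still sets any of the removed keys, you can query each key path. A minimal sketch, assuming the yq (v4) tool is installed; it prints null when a key is not set:

    yq '.mzOperator.clusterWide' valuesFromSystem.yaml
    yq '.platform.debug.jmx' valuesFromSystem.yaml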

  6. Make any necessary updates based on changed fields you may be using in the valuesFromSystem.yaml file you got from the existing installation, so that it matches the new version.

  7. Take note of any fields that have been deprecated or removed since the last version so any configuration of those fields can be replaced.

Note!

Before proceeding with the upgrade, make sure:

  • you are logged in and have access to the container registry.

  • you have a valid image pull secret that allows the Kubernetes cluster to pull the container images from the container registry.

  • you have updated the Image Pull Secret (if needed) in the valuesFromSystem.yaml file.

  • you have updated the License Key for the upgrade version in the valuesFromSystem.yaml file.
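
For example, you can verify that the image pull secret exists in the namespace with a standard kubectl command (replace <pull-secret-name> with the name used in your installation):

kubectl get secret <pull-secret-name> -n uepe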

  8. When you have updated the valuesFromSystem.yaml file, you can test it by running this command:

helm upgrade --install uepe digitalroute/usage-engine-private-edition --atomic --cleanup-on-fail --version 4.0.0 -n uepe -f valuesFromSystem.yaml --dry-run=server
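
If you want a more detailed preview of what the upgrade would change, the optional helm-diff plugin can compare the manifests of the running release against the new version. A sketch, assuming you choose to install the plugin:

helm plugin install https://github.com/databus23/helm-diff
helm diff upgrade uepe digitalroute/usage-engine-private-edition --version 4.0.0 -n uepe -f valuesFromSystem.yaml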

Preparing ECDs

Before you start the actual upgrade, these steps are recommended to avoid issues in processing caused by the restarts during the upgrade:

  1. Disable any batch workflow groups and let any running batch workflows finish their runs.

  2. For real-time workflows, check which types of real-time workflows the ECs are running. If an ECD hosts workflows that allow for scaling and use an ingress for incoming traffic, the ECD will, by default, be upgraded through a rolling upgrade, which means that there will always be at least one workflow running even during the upgrade.

    However, if the real-time workflow does not support scaling, for example because it uses fixed ports or storage that is not shared, the EC will become unavailable for a certain time during the upgrade. To gain control over when the EC becomes unavailable, you can edit the ECD by setting manualUpgrade to true before the upgrade. With this setting, the ECD will keep running on the old version until the upgrade has been performed, and it can then be restarted on the new version in the EC Deployment Interface (4.2).

Example - Editing ECD to Manual Upgrade

Option 1

Run the following command:

kubectl get ecd -n <namespace>

kubectl edit ecd <ecd-name> -n <namespace>

And change manualUpgrade to true:

spec:
    .....
    manualUpgrade: true

Option 2

Run the following command:

kubectl patch ecdeployment <ecd-name> -n <namespace> --type=merge -p $'spec:\n  manualUpgrade: true'
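
Regardless of which option you use, you can verify the setting with a standard kubectl query, which should print true:

kubectl get ecd <ecd-name> -n <namespace> -o jsonpath='{.spec.manualUpgrade}'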

When the upgrade is completed, each ECD set to manual upgrade can be upgraded by editing it in Desktop Online.
