
Preparations

Before doing anything to the running installation, the config file for the new installation should be prepared by following these steps:

  1. Retrieve the values.yaml file that you have used previously, or, if you want to start from scratch, extract it from the installation by running the following command:

    helm -n <namespace> get all <helm name>

    For example:

    helm -n uepe get all uepe

    Where uepe is the helm name you have selected for your installation. You will see a list similar to the one below.

    helm list
    NAME         	NAMESPACE	REVISION	UPDATED                                 	STATUS  	CHART                             	APP VERSION
    external-dns 	uepe     	1       	2024-05-08 15:27:48.258978207 +0200 CEST	deployed	external-dns-7.2.0                	0.14.1     
    ingress-nginx	uepe     	1       	2024-05-08 16:18:43.919980224 +0200 CEST	deployed	ingress-nginx-4.10.0              	1.10.0     
    uepe         	uepe     	3       	2024-05-10 14:16:17.724426589 +0200 CEST	deployed	usage-engine-private-edition-4.0.0	4.0.0      
  2. Extract the values manually from the output. Copy the lines below “USER-SUPPLIED VALUES:” and stop at the first blank line. Save the copied content to a config file named valuesFromSystem.yaml.
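If you prefer to script the extraction rather than copy the lines by hand, the block between “USER-SUPPLIED VALUES:” and the first blank line can be pulled out with awk. This is a sketch under the assumption that you have saved the `helm get all` output to a file first; the file names are placeholders:

```shell
# Save the release output first, e.g.:
#   helm -n uepe get all uepe > helm-all.txt
# Then extract the block between "USER-SUPPLIED VALUES:" and the first blank line:
awk '/^USER-SUPPLIED VALUES:/ {found=1; next} found && /^$/ {exit} found' \
    helm-all.txt > valuesFromSystem.yaml
```

Alternatively, `helm -n uepe get values uepe` prints only the user-supplied values, which may let you skip the manual extraction entirely.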

  3. Update the helm repository to get the latest helm chart versions by running the following commands.

    helm repo list
    helm repo update
  4. Retrieve the new version from the repository by running the following command. Refer to Release Information for the Helm Chart version.

    helm fetch <repo name>/usage-engine-private-edition --version <version> --untar

    For example:

    helm fetch digitalroute/usage-engine-private-edition --version 4.0.0 --untar
  5. Next, check the file CHANGELOG.md inside the created folder to find out what may have changed in the new version with regard to the values file.
    If you are uncertain about how to interpret the content of the file, see the examples of keys below and how to interpret them:

    The following values have been removed:
    * ```mzOperator.clusterWide```
    * ```mzOperator.experimental.performPeriodicWorkflowCleanup```
    * ```jmx.remote```
    * ```platform.debug.jmx```
    

    this means that the keys correspond to the following entries in the values file:

    mzOperator:
      clusterWide:
      experimental:
        performPeriodicWorkflowCleanup:
    jmx:
      remote:
    platform:
      debug:
        jmx:

    Each part of the key does not necessarily appear on the line directly after the previous part, but it always appears before the next key at the same level as its parent. So in this example of a values.yaml file:

    debug:
      script:
        enabled: false
      log:
        level:
          codeserver: info
          jetty: 'off'
          others: warn

    an example of a key could be debug.log.level.jetty.
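    To list the dotted form of every key in a values file, a small awk sketch like the one below can flatten simple YAML maps into dotted keys. This is only an illustration of the mapping described above, and it assumes plain two-space-indented maps (no lists, no multi-line scalars); values.yaml is a placeholder file name:

```shell
# Flatten simple 2-space-indented YAML maps into dotted keys.
# Limitations: no list items, no multi-line scalars, fixed 2-space indentation.
awk -F': *' '
  /^[[:space:]]*[^[:space:]#]/ {
    indent = match($0, /[^ ]/) - 1     # leading spaces
    level = indent / 2                 # nesting depth
    k = $1; sub(/^ +/, "", k); key[level] = k
    path = key[0]
    for (i = 1; i <= level; i++) path = path "." key[i]
    if ($2 != "") print path " = " $2  # only leaf keys carry a value
  }' values.yaml
```

    Run against the example above, this prints lines such as debug.log.level.jetty = 'off', matching the dotted keys used in the changelog.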

  6. Make any necessary updates to the valuesFromSystem.yaml file that you retrieved from the existing installation, based on any changed fields you may be using, so that it matches the new version.

  7. Take note of any fields that have been deprecated or removed since the last version so any configuration of those fields can be replaced.

  8. When you have updated the valuesFromSystem.yaml file, you can test it by running this command:

helm upgrade --install uepe digitalroute/usage-engine-private-edition --atomic --cleanup-on-fail --version 4.0.0 -n uepe -f valuesFromSystem.yaml --dry-run=server


Backup and Database Upgrade

When all the running batch workflows have stopped, make a backup so that the system can be restored in case of any issues during the upgrade.

Note!

Before proceeding with the backup, you must shut down the platform. This is very important, since otherwise the backup of the database may become corrupted.

The platform can be shut down in various ways, see examples below.

Examples - Shutting Down the Platform

Option 1

Reduce the number of replicas (under “spec”) to 0 by running the following command:

kubectl edit statefulset platform -n uepe

where uepe is the namespace used.

Option 2

Run this command:

kubectl scale --replicas=0 sts/platform -n uepe

Then run this command:

kubectl get pods -n uepe

and ensure that the pod platform-0 is no longer present.
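The two commands in Option 2 can also be combined with kubectl wait, so that a script blocks until the pod has actually terminated. A sketch, assuming the same statefulset name and namespace as above:

```shell
# Scale the platform statefulset down to zero replicas...
kubectl scale --replicas=0 sts/platform -n uepe
# ...and block until platform-0 has terminated (or give up after 2 minutes)
kubectl wait --for=delete pod/platform-0 -n uepe --timeout=120s
```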

Note!

The instructions below for backup and upgrade of the database are only relevant if you are using Azure Database for PostgreSQL as the platform database. If the platform database used is Derby, the backup of the Azure Storage covers the database as well (assuming persistent storage for the platform is enabled).

For database backup, please refer to https://learn.microsoft.com/en-us/azure/backup/backup-azure-database-postgresql-flex for guidance.
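In addition to the Azure-managed backup described in the linked guide, a manual logical dump can be taken with PostgreSQL's own tooling. The sketch below is not part of the official procedure; the server, database, and user names are placeholders you must replace with your own values:

```shell
# Hypothetical example: manual logical backup of the platform database with pg_dump.
# <server>, <database> and <user> are placeholders for your PostgreSQL flexible server.
pg_dump "host=<server>.postgres.database.azure.com port=5432 dbname=<database> user=<user> sslmode=require" \
    --format=custom --file=platform-db-backup.dump
```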

It is now time to make a backup of the file system used.

Note!

If there are standalone ECs that are still running and writing their logs to the same file share, any changes made after the backup has been initiated will not be included in the backup.

To create an Azure File share backup, see https://learn.microsoft.com/en-us/azure/backup/backup-azure-files?tabs=backup-center and https://learn.microsoft.com/en-us/azure/backup/backup-afs-cli for instructions.

The section below contains an example of creating a backup vault, followed by enabling Azure File share backup protection and performing an on-demand backup through the command line.

export RESOURCE_GROUP=PT_Stratus
export LOCATION="Southeast Asia"
export STORAGE_ACCOUNT_NAME=uepeaks
export STORAGE_ACCOUNT_KEY=$(az storage account keys list --account-name $STORAGE_ACCOUNT_NAME --query "[].{Value:value}" | jq -rc ".[0].Value")
export STORAGE_ACCOUNT_ID=$(az storage account show --resource-group $RESOURCE_GROUP --name $STORAGE_ACCOUNT_NAME | jq -rc ".id")
export SUBSCRIPTION_ID=$(az account subscription list | jq -rc ".[0].subscriptionId")
export FILE_SHARE=$(az storage share list --account-name $STORAGE_ACCOUNT_NAME --account-key $STORAGE_ACCOUNT_KEY --query "[].{Name:name}" | jq -rc ".[0].Name")
export FILE_BACKUP_VAULT=azurefilesvault
export FILE_BACKUP_POLICY=MyBackupPolicy

# Create new file backup vault
az backup vault create --resource-group $RESOURCE_GROUP --name $FILE_BACKUP_VAULT --location $LOCATION --output table
az backup vault list --query "[].{Name:name}"

# Create new file backup policy for scheduled backup
# https://learn.microsoft.com/en-us/azure/backup/manage-afs-backup-cli#create-policy
# https://learn.microsoft.com/en-us/azure/templates/microsoft.recoveryservices/vaults/backuppolicies?pivots=deployment-language-bicep#property-values
cat <<-EOF > /path/to/$FILE_BACKUP_POLICY.json
{
  "eTag": null,
  "id": "/Subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.RecoveryServices/vaults/$FILE_BACKUP_VAULT/backupPolicies/$FILE_BACKUP_POLICY",
  "location": null,
  "name": "$FILE_BACKUP_POLICY",
  "properties": {
    "backupManagementType": "AzureStorage",
    "protectedItemsCount": 0,
    "retentionPolicy": {
      "dailySchedule": {
        "retentionDuration": {
          "count": 30,
          "durationType": "Days"
        },
        "retentionTimes": [
          "2024-07-19T03:00:00+00:00"
        ]
      },
      "monthlySchedule": null,
      "retentionPolicyType": "LongTermRetentionPolicy",
      "weeklySchedule": null,
      "yearlySchedule": null
    },
    "schedulePolicy": {
      "schedulePolicyType": "SimpleSchedulePolicy",
      "scheduleRunDays": null,
      "scheduleRunFrequency": "Daily",
      "scheduleRunTimes": [
        "2024-07-19T03:00:00+00:00"
      ],
      "scheduleWeeklyFrequency": 0
    },
    "timeZone": "UTC",
    "workLoadType": "AzureFileShare"
  },
  "resourceGroup": "$RESOURCE_GROUP",
  "tags": null,
  "type": "Microsoft.RecoveryServices/vaults/backupPolicies"
}
EOF

az backup policy list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --query "[].{Name:name}"
az backup policy create --policy /path/to/$FILE_BACKUP_POLICY.json --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --name $FILE_BACKUP_POLICY --backup-management-type AzureStorage
az backup policy show --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --name $FILE_BACKUP_POLICY

# Enable Azure File share backup protection
az backup protection enable-for-azurefileshare --vault-name $FILE_BACKUP_VAULT --resource-group $RESOURCE_GROUP --policy-name $FILE_BACKUP_POLICY --storage-account $STORAGE_ACCOUNT_NAME --azure-file-share $FILE_SHARE  --output table

# Result output as in following:
# Name                                  ResourceGroup
# ------------------------------------  ---------------
# 2b85d01d-9a27-4a5a-aa9d-cbdad082cac2  PT_Stratus

# Track job status
az backup job show --name 2b85d01d-9a27-4a5a-aa9d-cbdad082cac2 --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT

# Retrieve container registered to the Recovery services vault and export as env variable
export CONTAINER_NAME=$(az backup container list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --backup-management-type AzureStorage | jq -rc ".[].name")

# Retrieve backed up item and export as env variable
export ITEM_NAME=$(az backup item list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT | jq -rc ".[].name")

# Perform on-demand backup
az backup protection backup-now --vault-name $FILE_BACKUP_VAULT --resource-group $RESOURCE_GROUP --container-name $CONTAINER_NAME --item-name $ITEM_NAME --retain-until 20-01-2025 --output table

# Result output as in following:
# Name                                  Operation    Status      Item Name               Backup Management Type    Start Time UTC                    Duration
# ------------------------------------  -----------  ----------  ----------------------  ------------------------  --------------------------------  --------------
# 23300e34-b1e0-409c-804e-c247d4587f8f  Backup       InProgress  uepe-aks-storage-share  AzureStorage              2024-07-19T11:01:07.436164+00:00  0:00:02.178697

# Track job status
az backup job show --name 23300e34-b1e0-409c-804e-c247d4587f8f --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT

Restoring from Backup

If restoring becomes necessary, you can restore the recovery point into Azure Blob Storage and use the DB native tool pg_restore to restore the data as a new PostgreSQL flexible server. See the Azure guide https://learn.microsoft.com/en-us/azure/backup/restore-azure-database-postgresql-flex for more information.
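As a sketch of the pg_restore step mentioned above (all connection details are placeholders, and the target server must already exist):

```shell
# Hypothetical example: restore the custom-format dump into a new database.
# <server>, <user> and <new-database> are placeholders you must replace.
createdb -h <server>.postgres.database.azure.com -U <user> <new-database>
pg_restore -h <server>.postgres.database.azure.com -U <user> -d <new-database> \
    --no-owner platform-db-backup.dump
```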

To restore the Azure File share, follow the instructions in https://learn.microsoft.com/en-us/azure/backup/restore-afs?tabs=full-share-recovery and https://learn.microsoft.com/en-us/azure/backup/restore-afs-cli.

The section below contains an example of how to restore the Azure File share backup using the command line. In this example, the backup is restored to a new File share. If you wish to restore the backup to the existing File share instance, you need to adjust the commands accordingly.

export RESOURCE_GROUP=PT_Stratus
export LOCATION="Southeast Asia"
export STORAGE_ACCOUNT_NAME=uepeaks
export STORAGE_ACCOUNT_KEY=$(az storage account keys list --account-name $STORAGE_ACCOUNT_NAME --query "[].{Value:value}" | jq -rc ".[0].Value")
export STORAGE_ACCOUNT_ID=$(az storage account show --resource-group $RESOURCE_GROUP --name $STORAGE_ACCOUNT_NAME | jq -rc ".id")
export SUBSCRIPTION_ID=$(az account subscription list | jq -rc ".[0].subscriptionId")
export FILE_SHARE=$(az storage share list --account-name $STORAGE_ACCOUNT_NAME --account-key $STORAGE_ACCOUNT_KEY --query "[].{Name:name}" | jq -rc ".[0].Name")
export FILE_BACKUP_VAULT=azurefilesvault
export FILE_BACKUP_POLICY=MyBackupPolicy
export CONTAINER_NAME=$(az backup container list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --backup-management-type AzureStorage | jq -rc ".[].name")
export ITEM_NAME=$(az backup item list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT | jq -rc ".[].name")

# Fetch recovery points
az backup recoverypoint list --vault-name $FILE_BACKUP_VAULT --resource-group $RESOURCE_GROUP --container-name $CONTAINER_NAME --backup-management-type azurestorage --item-name $ITEM_NAME --workload-type azurefileshare --out table

# Result output as in following:
# Name            Time                       Consistency
# --------------  -------------------------  --------------------
# 68988215529834  2024-07-19T11:01:09+00:00  FileSystemConsistent

# Create a new file share for restore purpose
az storage share create --account-name $STORAGE_ACCOUNT_NAME --name $FILE_SHARE-restored

# Full restore snapshot to new file share
az backup restore restore-azurefileshare --vault-name $FILE_BACKUP_VAULT --resource-group $RESOURCE_GROUP --rp-name 68988215529834 --container-name $CONTAINER_NAME --item-name $ITEM_NAME --restore-mode alternatelocation --target-storage-account $STORAGE_ACCOUNT_NAME --target-file-share $FILE_SHARE-restored --target-folder restoredata --resolve-conflict overwrite --out table

# Track job status, using the job name from the restore command output
az backup job show --name 249c1bbb-da9f-4b3b-b612-f9917ea2cecd --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT
