Preparations

Before doing anything to the running installation, the config file for the new installation should be prepared by following these steps:

1. Retrieve the `values.yaml` file that you have used previously, or, if you want to start from scratch, extract it from the installation by running this command:

```
helm -n <namespace> get all <helm name>
```

E.g:

```
helm -n uepe get all uepe
```

where `uepe` is the helm name you have selected for your installation. Running `helm list` will show a list similar to the one below:

```
NAME           NAMESPACE  REVISION  UPDATED                                   STATUS    CHART                               APP VERSION
external-dns   uepe       1         2024-05-08 15:27:48.258978207 +0200 CEST  deployed  external-dns-7.2.0                  0.14.1
ingress-nginx  uepe       1         2024-05-08 16:18:43.919980224 +0200 CEST  deployed  ingress-nginx-4.10.0                1.10.0
uepe           uepe       3         2024-05-10 14:16:17.724426589 +0200 CEST  deployed  usage-engine-private-edition-4.0.0  4.0.0
```

2. Extract the values manually from the output: copy the lines below “USER-SUPPLIED VALUES:”, stop at the blank line, and save the copied content to the config file `valuesFromSystem.yaml`.

3. Update the helm repository to get the latest helm chart versions by running the following commands:

```
helm repo list
helm repo update
```

4. Retrieve the new version from the repository by running the following command. Refer to Release Information for the Helm Chart version.

```
helm fetch <repo name>/usage-engine-private-edition --version <version> --untar
```

For example:

```
helm fetch digitalroute/usage-engine-private-edition --version 4.0.0 --untar
```

5. Next, check the file CHANGELOG.md inside the created folder to find out what may have changed in the new version when it comes to the values file. If you are uncertain about how to interpret the content of the file, see below for some examples of keys and how to interpret them.

A changelog entry such as:

The following values have been removed:

* ```mzOperator.clusterWide```
* ```mzOperator.experimental.performPeriodicWorkflowCleanup```
* ```jmx.remote```
* ```platform.debug.jmx```

means that in the values file they are entered as:

```
mzOperator:
  clusterWide:
  experimental:
    performPeriodicWorkflowCleanup
jmx:
  remote:
platform:
  debug:
    jmx:
```

Each part of the key does not necessarily follow directly after the previous one, but it always comes before any other “parent” key on the same level. So in this example of a `values.yaml` file:

```
debug:
  script:
    enabled: false
  log:
    level:
      codeserver: info
      jetty: 'off'
      others: warn
```

an example of a key could be `debug.log.level.jetty`.

6. Make any necessary updates to the `valuesFromSystem.yaml` file you got from the existing installation, based on any changed fields you may be using, so that it matches the new version. Take note of any fields that have been deprecated or removed since the last version so that any configuration of those fields can be replaced.

7. When you have updated the `valuesFromSystem.yaml` file, you can test it by running this command:

```
helm upgrade --install uepe digitalroute/usage-engine-private-edition --atomic --cleanup-on-fail --version 4.0.0 -n uepe -f valuesFromSystem.yaml --dry-run=server
```
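Copying the section below “USER-SUPPLIED VALUES:” by hand is error-prone, and the extraction can be scripted instead. The block below is a sketch: the helm dump is simulated with a here-doc containing made-up values, whereas in practice you would redirect the output of `helm -n uepe get all uepe` to `helm_dump.txt` first:

```
# Extract everything between "USER-SUPPLIED VALUES:" and the first blank line.
# helm_dump.txt stands in for real `helm -n uepe get all uepe` output here.
cat > helm_dump.txt <<'EOF'
NAME: uepe
USER-SUPPLIED VALUES:
global:
  imagePullSecrets:
    - name: dockerhub
licenseKey: example

COMPUTED VALUES:
affinity: {}
EOF

awk '/^USER-SUPPLIED VALUES:/ {found=1; next} found && /^$/ {exit} found' \
  helm_dump.txt > valuesFromSystem.yaml
cat valuesFromSystem.yaml
```

Note that recent Helm versions can also print just this section directly with `helm -n uepe get values uepe` (preceded by the same header line).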
Backup and Database Upgrade

When all the running batch workflows have stopped, you should make a backup so that the system can be restored in case of any issues during the upgrade.

Note!
Before proceeding with the backup you must shut down the platform. This is very important, since otherwise the backup of the database may become corrupt. The platform can be shut down in various ways, see the examples below.

Examples - Shutting Down the Platform

Option 1

Reduce the number of replicas (under “spec”) to 0 by running the following command:

```
kubectl edit statefulset platform -n uepe
```

where uepe is the namespace used.

Option 2

Run this command:

```
kubectl scale --replicas=0 sts/platform -n uepe
```

and then this command:

```
kubectl get pods -n uepe
```

and ensure that the pod platform-0 is no longer present.
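Rather than checking the pod list by eye, the absence of platform-0 can be verified with grep. The listing below is a simulated sample for the sketch; against a live cluster you would pipe `kubectl get pods -n uepe` into the check instead of using a sample file:

```
# pods.txt stands in for live `kubectl get pods -n uepe` output in this sketch.
cat > pods.txt <<'EOF'
NAME                          READY   STATUS    RESTARTS   AGE
external-dns-7f9c6c5b-x2x2x   1/1     Running   0          2d
ingress-nginx-abcde-12345     1/1     Running   0          2d
EOF

if grep -q '^platform-0' pods.txt; then
  echo "platform-0 still present - wait before taking the backup"
else
  echo "platform-0 gone - safe to proceed"
fi
```

On a live cluster you can also block until the pod has actually terminated with `kubectl wait --for=delete pod/platform-0 -n uepe --timeout=300s`.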
Note!
The instructions for backup and upgrade of the database below are only relevant if you are using RDS as the platform database. If the platform database used is Derby, the backup of the EFS covers the database as well (assuming persistent storage of the platform is enabled).
List the databases and locate the one used for Usage Engine with this command:

```
aws rds describe-db-instances --query 'DBInstances[].DBInstanceIdentifier[]'
```

Perform a backup of the RDS database with this command:

```
aws rds create-db-snapshot --db-snapshot-identifier <database backup name> --db-instance-identifier <database instance name>
```

For example:

```
aws rds create-db-snapshot --db-snapshot-identifier uepe-eks-db-postgresql-backup --db-instance-identifier uepe-eks-db-postgresql
```

Check that the backup was created successfully by running this command:

```
aws rds describe-db-snapshots --snapshot-type manual --db-snapshot-identifier uepe-eks-db-postgresql-backup
```
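The describe call prints JSON in which the snapshot's `Status` field should read `available` (with `PercentProgress` at 100) before you rely on the backup. A sketch of checking the field, using an abridged, made-up sample of the response shape rather than real command output:

```
# Abridged sample of describe-db-snapshots JSON; substitute the real output.
cat > snapshot.json <<'EOF'
{
  "DBSnapshots": [
    {
      "DBSnapshotIdentifier": "uepe-eks-db-postgresql-backup",
      "Status": "available",
      "PercentProgress": 100
    }
  ]
}
EOF

# Extract the status field without extra tooling
status=$(sed -n 's/.*"Status": "\([a-z]*\)".*/\1/p' snapshot.json)
echo "snapshot status: $status"
```

Against the live AWS API you can skip manual polling entirely with `aws rds wait db-snapshot-available --db-snapshot-identifier uepe-eks-db-postgresql-backup`, which blocks until the snapshot reaches the available state.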
It is now time to do a backup of the file system used.
Note!
If there are standalone ECs that are still running and writing their logs to the same EFS, anything they write after the backup has been initiated will not be included in the backup.
To create an EFS backup using the console, see https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html for instructions.
The section below contains an example of how to run an on-demand backup job using the command line. The snapshot will in this case be stored under the default backup vault.
```
export EFS_NAME=uepe-eks-efs-disk
export EFS_FILE_SYSTEM_ID=$(aws efs describe-file-systems --query "FileSystems[?Name==\`$EFS_NAME\`].FileSystemId" --output text)
export EFS_ARN=$(aws efs describe-file-systems --query "FileSystems[?Name==\`$EFS_NAME\`].FileSystemArn" --output text)
export VAULT_NAME=Default
export BACKUP_ROLE_ARN=$(aws iam get-role --role-name AWSBackupDefaultServiceRole --query "Role.Arn" --output text)

# Run on demand backup job
aws backup start-backup-job \
  --backup-vault-name $VAULT_NAME \
  --resource-arn $EFS_ARN \
  --iam-role-arn $BACKUP_ROLE_ARN

# View backup job status
aws backup list-backup-jobs --by-resource-type EFS
```
Restoring from Backup
If restoring becomes necessary, you can restore the DB instance from a snapshot backup, see the AWS guide https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html for more information.
You can also restore to a new DB instance using the commands below:
```
export EXISTING_DB=uepe-eks-db-postgresql
export NEW_DB=uepe-eks-db-postgresql-2
export SNAPSHOT=uepe-eks-db-postgresql-backup
export INSTANCE_CLASS=db.t3.small
export SUBNET_GROUP_NAME=$(aws rds describe-db-instances --query "DBInstances[?DBInstanceIdentifier==\`$EXISTING_DB\`].DBSubnetGroup.DBSubnetGroupName" --output text)
export SECURITY_GROUP_ID=$(aws rds describe-db-instances --query "DBInstances[?DBInstanceIdentifier==\`$EXISTING_DB\`].VpcSecurityGroups[].VpcSecurityGroupId" --output text)

aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier $NEW_DB \
  --db-snapshot-identifier $SNAPSHOT \
  --db-instance-class $INSTANCE_CLASS \
  --db-subnet-group-name $SUBNET_GROUP_NAME \
  --vpc-security-group-ids=$SECURITY_GROUP_ID
```
If you are using the console to do the RDS restore, remember to include the existing database security group so that it can be accessible by the cluster.
To restore EFS, follow the instructions in https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-resource.html and https://repost.aws/knowledge-center/aws-backup-restore-efs-file-system-cli.
If you want to restore the backup into a new file system, the EFS mount target needs to be manually re-configured to allow access from the cluster, see https://docs.aws.amazon.com/efs/latest/ug/manage-fs-access.html#manage-fs-access-create-delete-mount-targets for more information. If you are using access points, you need to configure an access point for the new file system after the restore is done.

The section below contains an example of how to restore the EFS backup using the command line. In this example the volume mount uses the access point path /uepe, the snapshot is stored under the default vault, and the backup is restored as a new file system. If this is not how you have set it up, or if you wish to restore the backup to the existing EFS instance, you need to adjust accordingly.
```
#################### Retrieve backup ARN id ####################
aws backup list-recovery-points-by-backup-vault --backup-vault-name $VAULT_NAME
# NOTE: Record the RecoveryPointArn that you wish to recover from
# e.g. arn:aws:backup:ap-southeast-1:027763730008:recovery-point:0a82d94c-3d56-481d-98e3-b810d3df363b

# To view the recovery point restore metadata
aws backup get-recovery-point-restore-metadata \
  --backup-vault-name $VAULT_NAME \
  --recovery-point-arn <RECOVERY_POINT_ARN>

#################### Restore from the backup ####################
# Prerequisites:
# 1) Generate an UUID, "uuidgen" (Mac) or "uuid -r" (Linux)
# 2) Create a metadata json file, properties details are mentioned in
#    https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-efs.html#efs-restore-cli
#    NOTE: If newFileSystem=true, the file-system-id parameter will be ignored.
# 3) Substitute the "CreationToken" value with the generated UUID.
# 4) If the existing file system is encrypted, you may use the existing KMS key.
#
# Example metadata json:
# cat <<-EOF > /path/to/metadata_json_file
# {
#   "file-system-id": "fs-6a1dcba2",
#   "Encrypted": "true",
#   "KmsKeyId": "arn:aws:kms:ap-southeast-1:027763730008:key/4859a845-3ef2-464d-80d2-16c1b2c58ff4",
#   "PerformanceMode": "generalPurpose",
#   "CreationToken": "944713C9-C6BB-42A4-AF91-E7DB5761FDBD",
#   "newFileSystem": "true"
# }
# EOF
aws backup start-restore-job --recovery-point-arn <RECOVERY_POINT_ARN> --iam-role-arn "$BACKUP_ROLE_ARN" --metadata file:///path/to/metadata_json_file

watch aws backup list-restore-jobs --by-resource-type EFS

#################### Export new file system id ####################
# If you recover as a new file system (newFileSystem=true), use the command
# `aws efs describe-file-systems` to find out the new file system id.
# After that, export the new file system id env variable.
```
```
export NEW_EFS_FILE_SYSTEM_ID="fs-xxxxxxxxxxxxxxxxx"

#################### Create mount targets for new file system ####################
# Retrieve mount targets from the existing file system and create the same for the new file system.
for mountTarget in $(jq -c '.[]' <<< $(aws efs describe-mount-targets --file-system-id $EFS_FILE_SYSTEM_ID --query "MountTargets[?MountTargetId!=null]")); do
  zoneName=$(jq -r '.AvailabilityZoneName' <<< $mountTarget)
  mountTargetId=$(jq -r '.MountTargetId' <<< $mountTarget)
  subnetId=$(jq -r '.SubnetId' <<< $mountTarget)
  securityGroup=$(aws efs describe-mount-target-security-groups --mount-target-id $mountTargetId --query "SecurityGroups" --output text)
  echo "Creating mount target for file system id $NEW_EFS_FILE_SYSTEM_ID on zone $zoneName."
  aws efs create-mount-target \
    --file-system-id $NEW_EFS_FILE_SYSTEM_ID \
    --subnet-id $subnetId \
    --security-groups $securityGroup \
    --no-cli-pager
done

#################### Create root path access point to manage recovered data ####################
aws efs create-access-point \
  --file-system-id $NEW_EFS_FILE_SYSTEM_ID \
  --posix-user Uid=6000,Gid=6000 \
  --root-directory 'Path="/",CreationInfo={OwnerUid=6000,OwnerGid=6000,Permissions="0755"}'

#################### Create access point for application access ####################
aws efs create-access-point \
  --file-system-id $NEW_EFS_FILE_SYSTEM_ID \
  --posix-user Uid=6000,Gid=6000 \
  --root-directory 'Path="/uepe",CreationInfo={OwnerUid=6000,OwnerGid=6000,Permissions="0755"}'

aws efs describe-access-points

#################### Create a static persistent yaml ####################
# NOTE: Update volumeHandle to your file system id and access points accordingly.
# The example below uses two sets of PV and PVC, corresponding to the root path ("/")
# and the application path ("/uepe").
# Use the command `aws efs describe-access-points` to find out the access point ids.
```
```
cat > efs_uepe_persistent.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: root-persistent
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: aws-efs
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0faa7c3cdc681af41::fsap-08232180e9af33cab
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: root-persistent
spec:
  volumeName: root-persistent
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: platform-persistent
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: aws-efs
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0faa7c3cdc681af41::fsap-06ee3201e68a278cd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: platform-persistent
spec:
  volumeName: platform-persistent
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF

#################### Kubectl apply persistence yaml ####################
kubectl apply -f efs_uepe_persistent.yaml

#################### Create temporary pods to manage volume ####################
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: root-pv-pod
spec:
  volumes:
    - name: root-persistent
      persistentVolumeClaim:
        claimName: root-persistent
  containers:
    - name: root-pv-container
      image: nginx
      volumeMounts:
        - mountPath: /root
          name: root-persistent
---
apiVersion: v1
kind: Pod
metadata:
  name: platform-pv-pod
spec:
  volumes:
    - name: platform-persistent
      persistentVolumeClaim:
        claimName: platform-persistent
  containers:
    - name: platform-pv-container
      image: nginx
      volumeMounts:
        - mountPath: /uepe
          name: platform-persistent
EOF

#################### Locate and move up the application backup directory (uepe) ####################
# The purpose of the steps below is to lift the restored folder up to the root path,
# to allow the data to be accessible via the application access point.
kubectl exec -ti root-pv-pod -- ls -al /root/
kubectl exec -ti root-pv-pod -- ls -al /root/aws-backup-restore_2024-06-17T07-36-15-412650687Z
kubectl exec -ti root-pv-pod -- ls -al /root/aws-backup-restore_2024-06-17T07-36-15-412650687Z/uepe
kubectl exec -ti root-pv-pod -- cp -rf /root/aws-backup-restore_2024-06-17T07-36-15-412650687Z/uepe /root/
kubectl exec -ti root-pv-pod -- ls -al /root/uepe

#################### Verify restored data is visible by application mount point ####################
kubectl exec -ti platform-pv-pod -- ls -al /uepe

#################### Clean up unused pod, pv and pvc ####################
kubectl delete pod root-pv-pod
kubectl delete pod platform-pv-pod
kubectl delete pvc root-persistent
kubectl delete pv root-persistent

#################### Helm install PE with existing claim ####################
# The persistent volume has now been restored; you can install PE with the
# existing claim "platform-persistent".
```