
...

Insert excerpt
Upgrade Excerpts for Cloud (4.2)
Upgrade Excerpts for Cloud (4.2)
namePreparingECDsForUpgrade
nopaneltrue

...

Note

Note!

The instructions for backup and upgrade of the database below are only relevant if you are using Azure Database for PostgreSQL - Flexible Server as the platform database. If the platform database used is Derby, the backup of the Azure Files storage covers the database as well (assuming persistent storage of the platform is enabled).

For database backup, please refer to https://learn.microsoft.com/en-us/azure/backup/backup-azure-database-postgresql-flex for guidance.
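In addition to the managed backup described in the guide above, a logical dump taken with the standard PostgreSQL client tools can serve as an extra safety net before the upgrade. This is only a sketch; the server name, user, and database below are placeholder assumptions, not values from this guide:

```shell
# Placeholder connection details - substitute your own server FQDN, admin user and database.
export PGHOST=myserver.postgres.database.azure.com
export PGUSER=uepeadmin
export PGDATABASE=uepe

# Take a compressed logical dump in custom format (restorable later with pg_restore).
pg_dump --format=custom --no-owner --file=uepe_backup.dump

# Sanity check: list the table of contents of the dump.
pg_restore --list uepe_backup.dump | head
```

The custom format is chosen here because it is compressed and allows selective restore with pg_restore, unlike a plain SQL dump.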

The next step is to back up the file system used.

Note

Note!

If there are standalone ECs that are still running and writing their logs to the same file storage, anything written after the backup has been initiated will not be included in the backup.

To create an Azure File share backup using the console, see https://learn.microsoft.com/en-us/azure/backup/backup-azure-files?tabs=backup-center for instructions. Alternatively, see https://learn.microsoft.com/en-us/azure/backup/backup-afs-cli for instructions using the Azure CLI.

The section below contains an example of how to create a backup vault, enable Azure File share backup protection, and perform an on-demand backup job via the command line. The snapshot will in this case be stored under the default backup vault.

Code Block
export RESOURCE_GROUP=PT_Stratus
export LOCATION="Southeast Asia"
export STORAGE_ACCOUNT_NAME=uepeaks
export STORAGE_ACCOUNT_KEY=$(az storage account keys list --account-name $STORAGE_ACCOUNT_NAME --query "[0].value" | tr -d '"')
export STORAGE_ACCOUNT_ID=$(az storage account show --resource-group $RESOURCE_GROUP --name $STORAGE_ACCOUNT_NAME --query "id" | tr -d '"')
export SUBSCRIPTION_ID=$(az account subscription list --query "[0].subscriptionId" | tr -d '"')
export FILE_SHARE=$(az storage share list --account-name $STORAGE_ACCOUNT_NAME --account-key $STORAGE_ACCOUNT_KEY --query "[0].name" | tr -d '"')
export FILE_BACKUP_VAULT=azurefilesvault
export FILE_BACKUP_POLICY=FileBackupPolicy

# Create new file backup vault
az backup vault create --resource-group $RESOURCE_GROUP --name $FILE_BACKUP_VAULT --location $LOCATION --output table
az backup vault list --query "[].{Name:name}"

# Create new file backup policy for scheduled backup
# References:
# https://learn.microsoft.com/en-us/azure/backup/manage-afs-backup-cli#create-policy
# https://learn.microsoft.com/en-us/azure/templates/microsoft.recoveryservices/vaults/backuppolicies?pivots=deployment-language-bicep#property-values
cat <<-EOF > /path/to/$FILE_BACKUP_POLICY.json
{
  "eTag": null,
  "id": "/Subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.RecoveryServices/vaults/$FILE_BACKUP_VAULT/backupPolicies/$FILE_BACKUP_POLICY",
  "location": null,
  "name": "$FILE_BACKUP_POLICY",
  "properties": {
    "backupManagementType": "AzureStorage",
    "protectedItemsCount": 0,
    "retentionPolicy": {
      "dailySchedule": {
        "retentionDuration": {
          "count": 30,
          "durationType": "Days"
        },
        "retentionTimes": [
          "2024-07-19T03:00:00+00:00"
        ]
      },
      "monthlySchedule": null,
      "retentionPolicyType": "LongTermRetentionPolicy",
      "weeklySchedule": null,
      "yearlySchedule": null
    },
    "schedulePolicy": {
      "schedulePolicyType": "SimpleSchedulePolicy",
      "scheduleRunDays": null,
      "scheduleRunFrequency": "Daily",
      "scheduleRunTimes": [
        "2024-07-19T03:00:00+00:00"
      ],
      "scheduleWeeklyFrequency": 0
    },
    "timeZone": "UTC",
    "workLoadType": "AzureFileShare"
  },
  "resourceGroup": "$RESOURCE_GROUP",
  "tags": null,
  "type": "Microsoft.RecoveryServices/vaults/backupPolicies"
}
EOF

az backup policy list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --query "[].{Name:name}"
az backup policy create --policy /path/to/$FILE_BACKUP_POLICY.json --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --name $FILE_BACKUP_POLICY --backup-management-type AzureStorage
az backup policy show --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --name $FILE_BACKUP_POLICY

# Enable Azure File share backup protection
az backup protection enable-for-azurefileshare --vault-name $FILE_BACKUP_VAULT --resource-group $RESOURCE_GROUP --policy-name $FILE_BACKUP_POLICY --storage-account $STORAGE_ACCOUNT_NAME --azure-file-share $FILE_SHARE  --output table

# Command output as below:
# Name                                  ResourceGroup
# ------------------------------------  ---------------
# 2b85d01d-9a27-4a5a-aa9d-cbdad082cac2  PT_Stratus

# Track job status
az backup job show --name 2b85d01d-9a27-4a5a-aa9d-cbdad082cac2 --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT

# Retrieve container registered to the Recovery services vault and export as env variable
export CONTAINER_NAME=$(az backup container list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --backup-management-type AzureStorage --query "[0].name" | tr -d '"')

# Retrieve backed up item and export as env variable
export ITEM_NAME=$(az backup item list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --query "[0].name" | tr -d '"')

# Perform on-demand backup
az backup protection backup-now --vault-name $FILE_BACKUP_VAULT --resource-group $RESOURCE_GROUP --container-name $CONTAINER_NAME --item-name $ITEM_NAME --retain-until 20-01-2025 --output table

# Command output as below:
# Name                                  Operation    Status      Item Name               Backup Management Type    Start Time UTC                    Duration
# ------------------------------------  -----------  ----------  ----------------------  ------------------------  --------------------------------  --------------
# 23300e34-b1e0-409c-804e-c247d4587f8f  Backup       InProgress  uepe-aks-storage-share  AzureStorage              2024-07-19T11:01:07.436164+00:00  0:00:02.178697

# Track job status
az backup job show --name 23300e34-b1e0-409c-804e-c247d4587f8f --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT
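Rather than polling `az backup job show` manually, the CLI can also block until the job reaches a terminal state. A minimal sketch, assuming the job name returned by the backup command above; the timeout value (in seconds) is an arbitrary choice:

```shell
# Wait for the on-demand backup job to finish, up to one hour.
az backup job wait --name 23300e34-b1e0-409c-804e-c247d4587f8f \
  --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --timeout 3600

# Confirm the final job status (expected: Completed).
az backup job show --name 23300e34-b1e0-409c-804e-c247d4587f8f \
  --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT \
  --query "properties.status"
```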

Restoring from Backup

...

Insert excerpt
Upgrade Excerpts for Cloud (4.2)
Upgrade Excerpts for Cloud (4.2)
nameActualUpgrade
nopaneltrue

Insert excerpt
Upgrade Excerpts for Cloud (4.2)
Upgrade Excerpts for Cloud (4.2)
nameAfterUpgrade
nopaneltrue

Insert excerpt
Upgrade Excerpts for Cloud (4.2)
Upgrade Excerpts for Cloud (4.2)
nameECDsAfterUpgrade
nopaneltrue

Insert excerpt
Preparations for Upgrade
Preparations for Upgrade
nameRollback
nopaneltrue

Restore Database Backup

You can restore a database backup into Azure Blob Storage and use the PostgreSQL native tool pg_restore to restore the data to a new PostgreSQL flexible server database, see https://learn.microsoft.com/en-us/azure/backup/restore-azure-database-postgresql-flex for detailed steps.
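As a sketch of the final pg_restore step, assuming the dump has already been downloaded from Blob Storage; the server name, user, database, and dump file path are placeholder assumptions:

```shell
# Placeholder values - substitute the new flexible server created during the restore.
export NEW_PGHOST=myserver-restored.postgres.database.azure.com
export PGUSER=uepeadmin
export TARGET_DB=uepe

# Restore the dump into the target database on the new server.
pg_restore --host=$NEW_PGHOST --username=$PGUSER --dbname=$TARGET_DB \
  --no-owner --clean --if-exists /path/to/database.dump
```

The `--no-owner` flag avoids failures when the original role names do not exist on the new server.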

Note

Note!

The restored PostgreSQL flexible server is a new database instance and is not managed by Terraform. If you plan to destroy the cluster later, ensure that the new database instance is deleted first.
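If the cluster is to be destroyed later, the restored server can be removed first with a command along these lines; the server name is a placeholder assumption:

```shell
# Delete the restored (Terraform-unmanaged) PostgreSQL flexible server.
# "myserver-restored" is a placeholder - substitute the name of the restored instance.
az postgres flexible-server delete --resource-group $RESOURCE_GROUP \
  --name myserver-restored --yes
```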

Restore File System Snapshot

To restore an Azure File share, follow the instructions in https://learn.microsoft.com/en-us/azure/backup/restore-afs?tabs=full-share-recovery or https://learn.microsoft.com/en-us/azure/backup/restore-afs-cli.

The section below contains an example of how to restore an Azure File share backup using the command line. In this example the backup is restored to the existing File share. If you wish to restore to a new File share instance, you need to adjust accordingly.

Code Block
export RESOURCE_GROUP=PT_Stratus
export LOCATION="Southeast Asia"
export STORAGE_ACCOUNT_NAME=uepeaks
export STORAGE_ACCOUNT_KEY=$(az storage account keys list --account-name $STORAGE_ACCOUNT_NAME --query "[0].value" | tr -d '"')
export STORAGE_ACCOUNT_ID=$(az storage account show --resource-group $RESOURCE_GROUP --name $STORAGE_ACCOUNT_NAME --query "id" | tr -d '"')
export SUBSCRIPTION_ID=$(az account subscription list --query "[0].subscriptionId" | tr -d '"')
export FILE_SHARE=$(az storage share list --account-name $STORAGE_ACCOUNT_NAME --account-key $STORAGE_ACCOUNT_KEY --query "[0].name" | tr -d '"')
export FILE_BACKUP_VAULT=azurefilesvault
export FILE_BACKUP_POLICY=FileBackupPolicy
export CONTAINER_NAME=$(az backup container list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --backup-management-type AzureStorage --query "[0].name" | tr -d '"')
export ITEM_NAME=$(az backup item list --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT --query "[0].name" | tr -d '"')

# Fetch recovery points
az backup recoverypoint list --vault-name $FILE_BACKUP_VAULT --resource-group $RESOURCE_GROUP --container-name $CONTAINER_NAME --backup-management-type azurestorage --item-name $ITEM_NAME --workload-type azurefileshare --out table

# Command output as below:
# Name            Time                       Consistency
# --------------  -------------------------  --------------------
# 68988215529834  2024-07-19T11:01:09+00:00  FileSystemConsistent

# Full restore snapshot to existing file share
az backup restore restore-azurefileshare --vault-name $FILE_BACKUP_VAULT --resource-group $RESOURCE_GROUP --rp-name 68988215529834 --container-name $CONTAINER_NAME --item-name $ITEM_NAME --restore-mode originallocation --resolve-conflict overwrite --out table

# Track job status (use the job name returned by the restore command output)
az backup job show --name 249c1bbb-da9f-4b3b-b612-f9917ea2cecd --resource-group $RESOURCE_GROUP --vault-name $FILE_BACKUP_VAULT
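Once the restore job has completed, the restored content can be spot-checked directly in the File share. A minimal sketch using the environment variables exported above:

```shell
# List the top-level entries of the restored File share.
az storage file list --share-name $FILE_SHARE \
  --account-name $STORAGE_ACCOUNT_NAME --account-key $STORAGE_ACCOUNT_KEY \
  --query "[].name" --output tsv
```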

Insert excerpt
Preparations for Upgrade
Preparations for Upgrade
nameRollback command
nopaneltrue