...
This page describes how to configure your deployment for persistent storage.
Pre-requisites
Before your Helm chart can be configured for persistent storage, the PersistentVolumeClaim resource that you are going to use must exist, and it must exist in the same namespace where Usage Engine will be deployed.
Tip: If you are unfamiliar with how persistent storage works in Kubernetes, please check out the official Kubernetes documentation on this topic.
...
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mz-pv
spec:
  storageClassName: mz-storage-class
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/nfs_share/persistent"
    server: 10.97.201.232
    readOnly: false
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mz-pvc
spec:
  storageClassName: mz-storage-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```
Note: Here we assume that the /nfs_share/persistent path in the persistent volume has been set up beforehand with the appropriate permissions.
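As an illustration, the NFS export directory could be prepared like this on the NFS server. This is only a sketch; the UID/GID and permission mode are assumptions and should be adjusted to match the user that your pods run as.

```shell
# Hypothetical example: create the export directory used by the
# PersistentVolume above and make it writable by the pods.
# Adjust the UID/GID (here 1000:1000) and mode to your deployment.
mkdir -p /nfs_share/persistent
chown -R 1000:1000 /nfs_share/persistent
chmod -R 775 /nfs_share/persistent
```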
Applying the above YAML using kubectl will create a PersistentVolumeClaim called mz-pvc that you can refer to from the Helm chart.
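For example, assuming the manifests above are saved in a file named mz-storage.yaml (a hypothetical file name), they can be applied and verified like this:

```shell
# Apply the PersistentVolume and PersistentVolumeClaim manifests.
kubectl apply -f mz-storage.yaml --namespace <namespace name>

# Verify that the claim exists and is bound before proceeding.
kubectl get pvc mz-pvc --namespace <namespace name>
```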
Configuring the Helm Chart
Now that the PersistentVolumeClaim resource exists, it is time to configure the Helm chart to use it.
To do this, you set persistence.enabled=true and persistence.existingClaim=mz-pvc, and then do a helm upgrade/install.
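The upgrade/install step could look like the following sketch, where the release name and chart reference are placeholders for your actual Usage Engine release and chart:

```shell
# Enable persistence and point the chart at the existing claim.
helm upgrade --install <release name> <chart> \
  --namespace <namespace name> \
  --set persistence.enabled=true \
  --set persistence.existingClaim=mz-pvc
```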
Runtime
Below is an explanation of how the persistent storage is used at runtime. All the directories listed can be found within the /opt/mz/persistent directory in your platform, web desktop, and EC pod(s).
| Path | Description |
| --- | --- |
| /opt/mz/persistent/3pp | This is where additional 3PP jar files needed for Usage Engine are stored. Additional information about how this works can be found below. |
| /opt/mz/persistent/jni | This is where JNI files are stored. For example, the SAP RFC native library is stored here. |
| /opt/mz/persistent/log | This is where the platform, EC, and web desktop logs are stored. |
| /opt/mz/persistent/backup | This is where the backups of your configurations are stored in zip format. |
| /opt/mz/persistent/keys | Disk-based keystore is a deprecated feature. Please refer to Bootstrapping System Certificates and Secrets - AWS(3.0) for information about the preferred way to do this. |
| /opt/mz/persistent/storage | This is where the mzdb storage for your deployment is stored when it uses Derby as the platform database. |
You are free to create whatever additional files/directories under /opt/mz/persistent that your use case may require.
If you, for example, need to share data between the platform and EC pod(s), you can create a directory /opt/mz/persistent/data and use that to exchange information.
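As a rough illustration of such an exchange, a file written from the platform pod becomes visible in the EC pod(s) through the shared volume. The EC pod name below is a placeholder:

```shell
# Write a file into the shared directory from the platform pod.
kubectl exec platform-0 --namespace <namespace name> -- \
  sh -c 'mkdir -p /opt/mz/persistent/data && echo hello > /opt/mz/persistent/data/example.txt'

# Read the same file from an EC pod via the shared persistent volume.
kubectl exec <ec pod name> --namespace <namespace name> -- \
  cat /opt/mz/persistent/data/example.txt
```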
Adding 3PP or Java Native Library Files
You can take two different approaches when adding the 3PP jar files or any JNI files needed for certain agents or functionalities in Usage Engine.
Before Installation
Follow the steps below to add 3pp or jni files before you install Usage Engine. Perform these steps after the PersistentVolumeClaim has been set up.
- Create the following directories in your persistent volume. This example assumes that the NFS storage example in the Pre-requisites section has been used.
Example - Creating directories:

```shell
mkdir /nfs_share/persistent/3pp
mkdir /nfs_share/persistent/jni
```
- Add the 3pp jar files or the jni files into their respective folders.
- Proceed with the installation. Usage Engine will detect the existence of the directories and the files. It will not overwrite or remove the folders.
Post Installation
Follow the steps below to add the 3PP or JNI files at any time after the installation of Usage Engine.
- Add the 3pp jar files or the jni files into their respective folders.
- Restart the platform by deleting the platform pod. The pod should reinitialize shortly afterwards.
```shell
kubectl delete pod platform-0 --namespace <namespace name>
```
...