Persistent Storage (3.0)

This page describes how to configure your deployment for persistent storage.


Before your Helm chart can be configured for persistent storage, the PersistentVolumeClaim resource that you are going to use must exist, and it must exist in the same namespace where Usage Engine will be deployed.


If you are unfamiliar with how persistent storage works in Kubernetes, please check out the official Kubernetes documentation on this topic.

If you do not already have a suitable PersistentVolumeClaim resource, the example below shows how to set one up. In this example NFS storage is used, but you can of course use any kind of storage that the Kubernetes API supports.

Example - PersistentVolume & PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mz-pv
spec:
  storageClassName: mz-storage-class
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  nfs:
    server: <nfs server address>
    path: "/nfs_share/persistent"
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mz-pvc
spec:
  storageClassName: mz-storage-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi


Here we assume that the /nfs_share/persistent path in the persistent volume has been set up beforehand with the appropriate permissions. For instance like this:

mkdir /nfs_share/persistent
chown -R 6000:6000 /nfs_share/persistent 
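The path must also be exported by the NFS server itself. As an illustration only (the client subnet and export options below are assumptions; use whatever matches your environment):

```shell
# Hypothetical /etc/exports entry on the NFS server
echo "/nfs_share/persistent 10.0.0.0/16(rw,sync,no_subtree_check)" >> /etc/exports
# Re-export all directories so the new entry takes effect
exportfs -ra
```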

Applying the above YAML using kubectl will create a PersistentVolumeClaim called mz-pvc that you can refer to from the Helm chart.
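For example, assuming the manifests above are saved in a file named mz-pv-pvc.yaml (the filename is hypothetical):

```shell
# Create the PersistentVolume and PersistentVolumeClaim
kubectl apply -f mz-pv-pvc.yaml --namespace <namespace name>
# Verify that the claim has been bound to the volume
kubectl get pvc mz-pvc --namespace <namespace name>
```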

Configuring the Helm Chart

Now that the PersistentVolumeClaim resource exists, it is time to configure the Helm chart to use it.

To do this, set persistence.enabled=true and persistence.existingClaim=mz-pvc, and then run a helm upgrade/install.
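A minimal sketch, assuming a release named usage-engine and a chart reference usage-engine/usage-engine (both hypothetical; substitute your actual release name and chart reference):

```shell
helm upgrade --install usage-engine usage-engine/usage-engine \
  --namespace <namespace name> \
  --set persistence.enabled=true \
  --set persistence.existingClaim=mz-pvc
```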


Below is an explanation of how the persistent storage is used at runtime. All the directories listed can be found under the /opt/mz/persistent directory in your Platform, Web Desktop and EC pod(s).


/opt/mz/persistent/3pp
This is where additional 3PP jar files needed for Usage Engine are stored. Additional information about how this works can be found below.

/opt/mz/persistent/jni
This is where JNI files are stored. For example, the SAP RFC native library is stored here.

/opt/mz/persistent/log
This is where the Platform, EC and Web Desktop logs are stored.

Note that you will need to periodically archive the EC and Web Desktop logs manually, as the log4j mechanism that automatically archives log files does not work for these two particular logs.

/opt/mz/persistent/backup
This is where backups of your configurations are stored in zip format.

/opt/mz/persistent/keys
The disk-based keystore is a deprecated feature. Please refer to Bootstrapping System Certificates and Secrets - AWS (3.0) for information about how to do this in the preferred way.

/opt/mz/persistent/storage
This is where the mzdb storage for your deployment is stored when Derby is used as the platform database.

You are free to create whatever additional files/directories under /opt/mz/persistent that your use case may require.

If you, for example, need to share data between the Platform and EC pod(s), you can create a directory /opt/mz/persistent/data and use it to exchange information.
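Such a shared directory can be created from inside the Platform pod, for instance (the data directory name is just an example):

```shell
# Create a shared exchange directory on the persistent volume
kubectl exec platform-0 --namespace <namespace name> -- mkdir -p /opt/mz/persistent/data
```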

Adding 3PP or Java Native Library Files

You can take two different approaches to adding the 3PP jar files or any JNI files needed for certain agents or functionality.

Before Installation

Follow the steps below to add 3pp or jni files before you install Usage Engine. Perform these steps after the PersistentVolumeClaim has been set up.

  1. Create the following directories in your persistent volume. This example assumes that the NFS storage example in the Pre-requisites section has been used.

    Example - Creating directories
    mkdir /nfs_share/persistent/3pp
    mkdir /nfs_share/persistent/jni
  2. Add the 3pp jar files or the jni files into their respective folders.
  3. Proceed with the installation. Usage Engine will detect the existing directories and files, and will not overwrite or remove them.

Post Installation

Follow the steps below to add the 3pp or jni files any time after the installation of Usage Engine.

  1. Add the 3pp jar files or the jni files into their respective folders.
  2. Restart the Platform by deleting the Platform pod. The pod will be recreated and reinitialized shortly after.

    kubectl delete pod platform-0 --namespace <namespace name>
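You can then wait for the pod to become ready again before proceeding, for example:

```shell
# Block until the Platform pod reports Ready, or time out after five minutes
kubectl wait --for=condition=Ready pod/platform-0 --namespace <namespace name> --timeout=300s
```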