
Add Helm Repository

Add the helm repository where the Usage Engine Private Edition helm chart is located by running the following command:

helm repo add digitalroute https://digitalroute-public.github.io/usage-engine-private-edition

Although it is not a strict requirement, the install commands used throughout this installation guide assume that the repository has been added like this.
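To verify that the repository was added and to see which chart versions are available, you can for example run:

helm repo update
helm search repo digitalroute --versions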

Container Images

Usage Engine Private Edition consists of the following container images, hosted in the Digital Route AWS ECR registry:

  • 462803626708.dkr.ecr.eu-west-1.amazonaws.com/usage-engine-private-edition:<version>
    The container image used by the platform pod.

  • 462803626708.dkr.ecr.eu-west-1.amazonaws.com/usage-engine-private-edition:<version>-ec
    The container image used by the EC pods.

  • 462803626708.dkr.ecr.eu-west-1.amazonaws.com/usage-engine-private-edition:<version>-operator
    The container image used by the uepe-operator pod.

  • 462803626708.dkr.ecr.eu-west-1.amazonaws.com/usage-engine-private-edition:<version>-ui
    The container image used by the desktop-online pod.

Where <version> is the desired Usage Engine Private Edition version, for example 4.0.0.

Note!

Since Usage Engine Private Edition 3.1, the container images have multi-architecture support (AMD and ARM).
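If you want to verify which architectures a given image supports, one way is to inspect its manifest list, for example (this assumes that you have already authenticated against the registry, as described below):

docker manifest inspect 462803626708.dkr.ecr.eu-west-1.amazonaws.com/usage-engine-private-edition:4.0.0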

Hosting Container Images in Your Own Container Registry

If you have your own container registry, it is recommended that you host the Usage Engine Private Edition container images there, rather than pulling them directly from the Digital Route AWS ECR registry at runtime.

In order to access the container images in the Digital Route AWS ECR registry, you will need to authenticate yourself first. Here is how you can do this using the docker CLI:

docker login -u AWS \
-p $(AWS_ACCESS_KEY_ID=<your aws access key> AWS_SECRET_ACCESS_KEY=<your aws secret access key> aws ecr get-login-password --region eu-west-1) \
462803626708.dkr.ecr.eu-west-1.amazonaws.com

Where <your aws access key> and <your aws secret access key> are the access keys provided by Digital Route (see https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/161481605/Common+Pre-requisites#ECR-Access-Keys in case you have not received any access keys yet).

Once authenticated, you can pull the container images, re-tag them and then finally push them to your own container image repository.
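For example, assuming that your own registry is reachable at registry.example.com (a placeholder), the platform image for version 4.0.0 could be mirrored like this:

docker pull 462803626708.dkr.ecr.eu-west-1.amazonaws.com/usage-engine-private-edition:4.0.0
docker tag 462803626708.dkr.ecr.eu-west-1.amazonaws.com/usage-engine-private-edition:4.0.0 registry.example.com/usage-engine-private-edition:4.0.0
docker push registry.example.com/usage-engine-private-edition:4.0.0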

Depending on how your container registry is configured, you will likely need to set up an image pull secret that allows the Kubernetes cluster to pull the container images from your container registry at runtime.
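How such a secret is created depends on your registry. For a registry that uses basic authentication, it could for example look like this (the server address and credentials are placeholders):

kubectl create secret docker-registry my-registry-cred \
    --docker-server=registry.example.com \
    --docker-username=<your registry user> \
    --docker-password=<your registry password> \
    -n uepe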

Image Pull Secret for Digital Route AWS ECR

If you do not have your own container image registry, you instead need to set up an image pull secret that allows the Kubernetes cluster to pull the container images directly from the Digital Route AWS ECR at runtime.

Such a secret can be created like this:

kubectl create secret docker-registry ecr-cred \
    --docker-server=https://462803626708.dkr.ecr.eu-west-1.amazonaws.com  \
    --docker-username=AWS \
    --docker-password=$(AWS_ACCESS_KEY_ID=<your aws access key> AWS_SECRET_ACCESS_KEY=<your aws secret access key> aws ecr get-login-password --region eu-west-1) \
    -n uepe

Where <your aws access key> and <your aws secret access key> are the access keys provided by Digital Route (see https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/161481605/Common+Pre-requisites#ECR-Access-Keys in case you have not received any access keys yet).

Since AWS ECR credentials expire after 12 hours, the image pull secret needs to be refreshed regularly. This can be automated through a cron job. The following yaml spec is an example of such a cron job:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ecr-credentials-sync
  namespace: uepe
rules:
- apiGroups: [""]
  resources:
  - secrets
  verbs:
  - get
  - create
  - patch
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ecr-credentials-sync
  namespace: uepe
subjects:
- kind: ServiceAccount
  name: ecr-credentials-sync
  namespace: uepe
roleRef:
  kind: Role
  name: ecr-credentials-sync
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ecr-credentials-sync
  namespace: uepe
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-credentials-sync
  namespace: uepe
spec:
  suspend: false
  schedule: "0 */8 * * *"
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ecr-credentials-sync
          restartPolicy: Never
          volumes:
          - name: token
            emptyDir:
              medium: Memory
          initContainers:
          - image: amazon/aws-cli
            name: get-token
            imagePullPolicy: IfNotPresent
            env:
            - name: AWS_ACCESS_KEY_ID
              value: <your aws access key>
            - name: AWS_SECRET_ACCESS_KEY
              value: <your aws secret access key>
            - name: REGION
              value: eu-west-1
            volumeMounts:
            - mountPath: /token
              name: token
            command:
            - /bin/sh
            - -ce
            - aws ecr get-login-password --region ${REGION} > /token/ecr-token
          containers:
          - image: bitnami/kubectl
            name: create-secret
            imagePullPolicy: IfNotPresent
            env:
            - name: SECRET_NAME
              value: ecr-cred
            volumeMounts:
            - mountPath: /token
              name: token
            command:
            - /bin/sh
            - -ce
            - |-
              kubectl create secret docker-registry $SECRET_NAME \
                --dry-run=client \
                --docker-server=https://462803626708.dkr.ecr.eu-west-1.amazonaws.com \
                --docker-username=AWS \
                --docker-password="$(cat /token/ecr-token)" \
                -n uepe \
                -o yaml | kubectl apply -f -              

Where <your aws access key> and <your aws secret access key> are the access keys provided by Digital Route (see https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/161481605/Common+Pre-requisites#ECR-Access-Keys in case you have not received any access keys yet).

Put the above yaml spec into a file called ecr-credentials-sync.yaml, and then use the following command to create the resources in your Kubernetes cluster:

kubectl apply -f ecr-credentials-sync.yaml -n uepe
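To verify that the cron job was created, and to trigger an immediate run without waiting for the schedule, you can for example use:

kubectl get cronjob ecr-credentials-sync -n uepe
kubectl create job --from=cronjob/ecr-credentials-sync ecr-credentials-sync-manual -n uepe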

System Database

The Usage Engine Private Edition helm chart is capable of automatically creating the system database at install time. However, that assumes that you are able to supply database administrator credentials (see Bootstrapping System Credentials).

If, for one reason or another, you are unable to supply that, the system database must be created manually prior to installing the Usage Engine Private Edition helm chart.

A tool called uepe-sys-db-tool.jar is provided to facilitate this.

To use it, simply go to Release Information, download it for the relevant version, and then execute it like this:

java -jar uepe-sys-db-tool.jar

The on-screen instructions will guide you through the process of configuring the database. Once done, a set of database scripts is generated; these scripts are used to create the system database and database users. As part of the instructions, you are also asked to supply the JDBC user and JDBC password that Usage Engine Private Edition will use to connect to its system database.

The main entry point script mentioned at the end of the instructions is the ONLY file that should be executed by the user. The rest of the files are referenced by the main entry point script to complete the database creation.
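The generated scripts are packaged in a tar file. To unpack and inspect them, you can for example run:

tar -xvf uepe-sys-db-scripts.tar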

Example - SAP HANA

For SAP HANA, the generated instructions would look like this:

Script generation successfully completed!
Please find your system database creation scripts inside the uepe-sys-db-scripts.tar file.
The saphana_create_instance.sh script is the main entry point.

The main entry point script saphana_create_instance.sh should be executed by the user like this:

./saphana_create_instance.sh <SAP HANA administrator user> <SAP HANA administrator password>

Example - PostgreSQL

For PostgreSQL, the generated instructions would look like this:

Script generation successfully completed!
Please find your system database creation scripts inside the uepe-sys-db-scripts.tar file.
The postgre_create_instance.sh script is the main entry point.

The main entry point script postgre_create_instance.sh should be executed like this, passing the <PostgreSQL administrator user> as the argument:

./postgre_create_instance.sh <postgreSQL administrator user>

Note!

If the PostgreSQL administrator user argument is not passed, the script assumes the postgres user as the default administrator user.

Example - Oracle

For Oracle, the generated instructions would look like this:

Script generation successfully completed!
Please find your system database creation scripts inside the uepe-sys-db-scripts.tar file.
The oracle_create_instance.sh script is the main entry point.
For AWS environment, please execute aws_rds_oracle_create_ts_user.sql with sql client as the main entry point.

The main entry point script oracle_create_instance.sh should be executed like this:

./oracle_create_instance.sh

For AWS RDS Oracle, the main entry point script is aws_rds_oracle_create_ts_user.sql, and it should be executed in an sqlplus terminal like this:

SQL>@aws_rds_oracle_create_ts_user.sql

At the end of the execution, you can connect to the database instance via a database client tool. Once connected, verify that the system database and database users have been created successfully.

However, you will not see any database tables in the system database yet. This is deliberate, as the database tables are only created during the installation of Usage Engine Private Edition.
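For example, assuming a PostgreSQL system database, the connection and the (intentional) absence of tables could be checked like this, where host, user and database name are the values you supplied to the tool:

psql -h <database host> -U <JDBC user> -d <database name> -c '\dt'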

TLS

It is strongly recommended to install Usage Engine Private Edition with TLS enabled, and there are two different ways of providing the required certificate:

  • cert-manager

  • Secret

Here follows an explanation of the preparations required for each of the two.

cert-manager

The most automated and secure way to provide the certificate is to use cert-manager (https://cert-manager.io/).

If it is not already installed in your Kubernetes cluster, follow the instructions at https://cert-manager.io/docs/installation/helm/ to install the cert-manager helm chart. Make sure to install a version that is listed in the Compatibility Matrix (4.3).

Cert-manager must be backed by a certificate authority (CA) to sign the certificates. Once configured with a CA, cert-manager will automatically sign and renew certificates for the system as needed. Configuring cert-manager with a CA is done by creating an Issuer or ClusterIssuer resource (this resource will be referenced later when installing Usage Engine Private Edition).

Refer to https://cert-manager.io/docs/configuration/ for all the details.
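For example, an issuer backed by your own CA could be specified like this, assuming that the CA certificate and key have been stored in a Secret named ca-key-pair (for a ClusterIssuer, this Secret must reside in the cert-manager namespace):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: example-ca-issuer
spec:
  ca:
    secretName: ca-key-pair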

It is also possible to use an issuer specification that issues a self-signed certificate:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: example-issuer
spec:
  selfSigned: {}

Regardless of the chosen issuer specification, to create the issuer, simply put the specification in a yaml file (here we call it example-issuer.yaml), and then execute a command like this:

kubectl apply -f example-issuer.yaml

Based on the example above, the created ClusterIssuer can be inspected like this:

kubectl get clusterissuers example-issuer -o yaml
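The status conditions in the output show whether the issuer is ready. A quicker overview is also available; the READY column should show True before the issuer is referenced during installation:

kubectl get clusterissuers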

Secret

If you do not want to automate the certificate provisioning with cert-manager, you can instead manually install a public certificate in a Kubernetes Secret and then refer to that when installing Usage Engine Private Edition.

The Secret must include a keystore file (keystore.jks) in JKS format as well as separate files for key (tls.key) and certificate (tls.crt).

This is an example script that can generate a Secret like that (make sure to set the parameters at the beginning of the script before executing it):

#!/bin/sh
# Set these parameters before running the script
KEY_PASSWORD=<your chosen key password>
STORE_PASSWORD=<your chosen keystore password>
DNAME=CN=exampledomain.com,O=Example
NAMESPACE=uepe
# Generate a self-signed key pair in a JKS keystore
keytool -genkeypair -keystore keystore.jks -storepass $STORE_PASSWORD -keypass $KEY_PASSWORD -alias certificate -keyalg RSA -keysize 4096 -dname $DNAME
# Convert the JKS keystore to PKCS12 format so that openssl can read it
keytool -importkeystore -srckeystore keystore.jks -srcstorepass $STORE_PASSWORD -srckeypass $KEY_PASSWORD -destkeystore keystore.p12 -deststoretype PKCS12 -srcalias certificate -deststorepass $STORE_PASSWORD -destkeypass $KEY_PASSWORD
# Extract the certificate and private key (the PKCS12 file is protected by the store password)
openssl pkcs12 -in keystore.p12 -nokeys -out tls.crt -password pass:$STORE_PASSWORD
openssl pkcs12 -in keystore.p12 -nodes -nocerts -out tls.key -password pass:$STORE_PASSWORD
# Create the Kubernetes Secret containing the keystore, key and certificate
kubectl create secret generic uepe-cert -n $NAMESPACE --from-file=keystore.jks --from-file=tls.key --from-file=tls.crt

Note that this will generate a self-signed certificate, which is not suitable for use in publicly exposed interfaces.

Once the Secret has been generated, its content can be inspected like this:

kubectl -n uepe get secrets uepe-cert -o yaml
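To inspect the actual certificate stored in the Secret, you can for example extract it and print it with openssl:

kubectl -n uepe get secret uepe-cert -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -text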
