
Having completed the preparations, it is now time to install Usage Engine Private Edition.

Main Installation Example

In this main installation example, it is assumed that the following optional resources have been added while preparing for the installation (see Kubernetes Cluster Add-ons - OCI (4.2)):

  • ingress-nginx-controller

  • cert-manager

Example Certificate

Since cert-manager is being used to provide TLS to the Usage Engine Private Edition installation in this example, you need to create an issuer in order to generate the required certificate.

Here we are going to use an ACME issuer type that is configured to match the Kubernetes cluster that was set up previously in the Preparations - OCI (4.2) chapter:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: example-issuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: <your valid email address>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: example-issuer-account-key
    solvers:
      - dns01:
          webhook:
            groupName: acme.d-n.be
            solverName: oci
            config:
              ociProfileSecretName: oci-profile

A few things that should be noted:

  • Set email to your email address of choice.

  • The oci-profile secret holds the credentials used to access the Oracle Cloud Infrastructure API. If you choose a different name for the secret than oci-profile, make sure to modify the value of ociProfileSecretName in the ClusterIssuer accordingly.

Create a yaml file named oci-profile.yaml. The secret oci-profile should look like this:

apiVersion: v1
kind: Secret
metadata:
  name: oci-profile
type: Opaque
stringData:
  tenancy: "your tenancy ocid"
  user: "your user ocid"
  region: "your region"
  fingerprint: "your key fingerprint"
  privateKey: |
    -----BEGIN RSA PRIVATE KEY-----
    ...KEY DATA HERE...
    -----END RSA PRIVATE KEY-----
  privateKeyPassphrase: "private key passphrase or empty string if none"
 

Create the secret prior to creating the ClusterIssuer. To install the oci-profile secret in the cert-manager namespace:

kubectl apply -f oci-profile.yaml -n cert-manager
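To verify that the secret has been created as expected, you can for example run:

kubectl get secret oci-profile -n cert-manager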

Assuming that the issuer spec above has been saved into a file called example-issuer.yaml, it can be created like this:

kubectl apply -f example-issuer.yaml
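To confirm that the issuer has registered successfully with the ACME server, you can for example check that its Ready condition becomes True:

kubectl get clusterissuer example-issuer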

Load Balancer TLS Certificate

With the ClusterIssuer set up properly, we can proceed to generate the TLS certificate and import it into the OCI Certificates Service.

To generate the certificate, create a yaml file named certificate.yaml with the following contents:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: lb-cert
spec:
  commonName: <cluster_dns_zone_name listed in the terraform output>
  dnsNames:
    - <cluster_dns_zone_name listed in the terraform output>
    - desktop-online.<cluster_dns_zone_name listed in the terraform output>
    - platform.<cluster_dns_zone_name listed in the terraform output>
    - ingress.<cluster_dns_zone_name listed in the terraform output>
    - grafana.<cluster_dns_zone_name listed in the terraform output>
  issuerRef:
    kind: ClusterIssuer
    name: example-issuer
  secretName: lb-cert

  1. Apply the yaml file:

kubectl apply -f certificate.yaml -n uepe

  2. Wait for a while and confirm that the certificate has been generated successfully:

kubectl get certificate -n uepe 

  3. The output shows that the status of the certificate named lb-cert is ready:

NAME                        READY   SECRET                              AGE
lb-cert                     True    lb-cert                             46h

  4. Extract the server certificate and CA certificate from the secret lb-cert:

kubectl get secrets lb-cert -n uepe -o yaml | yq '.data' | grep "tls.crt" | awk -F : '{print $2}'| tr -d " "|base64 -d > tls.crt

  5. Separate the server certificate and the CA certificate into two files:

csplit tls.crt '/^-----END CERTIFICATE-----$/+1'

  6. Rename the first generated file as the server certificate file:

mv xx00 tls.crt

  7. Rename the second generated file as the CA certificate file:

mv xx01 ca.crt

  8. Extract the private key from the secret lb-cert:

kubectl get secrets lb-cert -n uepe -o yaml | yq '.data' | grep "tls.key" | awk -F : '{print $2}'| tr -d " "|base64 -d > tls.key

By now, the server certificate, CA certificate and private key are stored in tls.crt, ca.crt and tls.key respectively. The next step is to import them into the OCI Certificates Service.

Note: If OCI Native Ingress Controller version 1.3.8 or above is installed, you do not need to import the server certificate, CA certificate and private key into the OCI Certificates Service. In that case, the load balancer TLS certificate is obtained from the Ingress secret internally.

The helm chart property oci.certificates.enabled must then be set to false in the Install Helm Chart section, and the helm chart property oci.certificates.id can be omitted.

Skip the next section and proceed to the TLS Backendset Secret section.

Import into OCI Certificates Service

Go to the OCI console and search for the Certificates service. On the Certificates service page, click Create Certificate and follow these steps:

  1. Select the Certificate Type Imported and enter a unique name.

  2. Click Next to go to Certificate Configuration page.

  3. Upload tls.crt, ca.crt and tls.key according to the table below.

OCI Certificates Configuration    file to upload
Certificate                       tls.crt
Certificate Chain                 ca.crt
Private Key                       tls.key

  4. Click Next and proceed to Create Certificate.

  5. Wait for the certificate to be created.

  6. Copy and save the certificate's OCID. This OCID will be set to the oci.certificates.id property in the helm chart values file in the next section.

TLS Backendset Secret

The SSL configuration between the load balancer and the backend servers (worker nodes) in the backend set is known as backend SSL. In this case, the backend set refers to the Platform pod on the worker nodes. To implement backend SSL, you store the SSL certificates and private key in the form of a Kubernetes secret.

You already have the server certificate, CA certificate and private key generated in the previous section. These can be reused to create the Kubernetes secret needed by the backend set.

To store the certificates and the private key as a secret in Kubernetes, run:

kubectl create secret generic ca-ser-secret -n uepe --from-file=tls.crt=tls.crt --from-file=tls.key=tls.key --from-file=ca.crt=ca.crt

Now, the backend set secret named ca-ser-secret has been created in the namespace uepe.
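You can verify that the secret contains the expected keys, for example with:

kubectl describe secret ca-ser-secret -n uepe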

The secret names ca-ser-secret and lb-cert are the default secret names used internally by the Usage Engine Private Edition helm chart. If you intend to use different secret names, the following helm chart properties MUST be set in uepe-values.yaml. For example:

oci.loadbalancer.secret=lb-cert-<cluster-name>

oci.loadbalancer.backendsetSecret=ca-ser-secret-<cluster-name>
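Expressed in YAML in uepe-values.yaml, and assuming the standard dot-to-nesting mapping of helm values, this would for example look like the following (the <cluster-name> suffix is just an illustration):

oci:
  loadbalancer:
    secret: lb-cert-<cluster-name>
    backendsetSecret: ca-ser-secret-<cluster-name>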

Install Helm Chart

Although the number of helm value combinations to set is virtually endless, some values should more or less always be set.

So let’s start by creating a file called uepe-values.yaml, and in that file, specify a minimal set of values that will serve as a good starting point:

The example below assumes that you have configured the Postgres admin password through a secret. If you have not done so, please refer to https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/211091666/Usage+Engine+Private+Edition+Preparations+-+OCI+4.2#Bootstrapping-System-Credentials-%5BinlineExtension%5D for guidance.

oci:
  certificates:
    enabled: false
  backendNSG: <backend_nsg from terraform output>
  healthcheck:
    desktoponline:
      port: 9001
    ingressnginx:
      port: 443  
environment: oci
global:
  domain: <cluster_dns_zone_name from terraform output>
  ingressController:
    serviceName: ingress-nginx-controller
  imagePullSecrets:
  - name: ecr-cred  
licenseKey: <insert-your-license-key-string-here>
log:
  format: json
platform:
  db:
    type: postgresql
  tls:
    cert:
      public: certManager
    certManager:
      public:
        issuer:
          kind: ClusterIssuer
          name: example-issuer
    enabled: true    
postgres:
  adminUsername: postgres
  host: <db_endpoint from terraform output>
  port: <db_port from terraform output>
persistence:
  enabled: true
  existingClaim: fss-pvc

Here follows information on how you can determine the values to set in your particular installation:

Value

Comment

oci.certificates.enabled

This value determines whether to use an OCI SSL certificate or a Kubernetes secret for load balancer SSL termination. The default value is false if it is not set, i.e., the SSL certificate is obtained from a Kubernetes secret internally.

Set it to true to use an OCI SSL certificate.

oci.certificates.id

This value should be set to the OCID of the certificate created in the previous section, Import-into-OCI-Certificates-Service. Not used if oci.certificates.enabled is false.

oci.backendNSG

Value is taken from the backend_nsg listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

oci.healthcheck.desktoponline.port

desktop-online backend set health check port, i.e., 9001

oci.healthcheck.ingressnginx.port

ingress nginx backend set health check port, i.e., 443

global.ingressController.serviceName

This is the name of the Kubernetes Service that was created when adding the Kubernetes Add-ons | ingress-nginx-controller.

global.domain

Value is taken from the cluster_dns_zone_name listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

global.imagePullSecrets

This is referencing an image pull secret containing the credentials required in order to pull container images from the Digital Route AWS ECR registry. If you are hosting the container images in your own container registry, depending on how that is configured, another image pull secret is probably needed. See https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/161481567/Common+Usage+Engine+Private+Edition+Preparations#Container-Images for additional information.

licenseKey

The license key that can be found in the licenseKey file that you have previously received (see the https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/161481605/General+Pre-requisites#License section). 

log.format

If you need to use dedicated log collection and monitoring tools like Fluent-bit, Elasticsearch, Kibana or AWS CloudWatch for Usage Engine Private Edition, make sure the log format is configured to json. See Configure Log Collection, Target, and Visualization - OCI for additional information.

platform.tls.*

These values are set to use the example issuer created at the beginning of this chapter. This should only be seen as an example and the values should be adjusted according to the real world situation.

postgres.adminUsername

Value is taken from the db_admin_user listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

postgres.host

Value is taken from the db_endpoint listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

postgres.port

Value is taken from the db_port listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

persistence.existingClaim

The name of the persistent volume claim created in the previous section OCI-Add-ons | oci-file-service-storage | Static Provisioning.

Ignored if persistence.enabled is false.

General documentation of the values above is provided in the values.yaml file in the usage-engine-private-edition helm chart.

In this example, the following assumptions have been made:

  1. PostgreSQL is used as the system database.

  2. It is assumed that you have previously bootstrapped the postgresqlPassword secret key with a value equal to the db_password configured in the terraform.tfvars file. For instructions on how to do this, please refer to the https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/211091666/Usage+Engine+Private+Edition+Preparations+-+OCI+4.2#Bootstrapping-System-Credentials-%5BinlineExtension%5D section.

  3. The system database is automatically created during installation.

  4. jdbcPassword and mzownerPassword are randomly generated.

  5. postgresqlPassword / oraclePassword / saphanaPassword are not randomly generated and must therefore be created as a secret as described in point 2.

  6. If you are using the database tool uepe-sys-db-tool.jar to create the system database manually, ensure that the credentials mentioned in points 4 and 5 are included in the secret. For more details, refer to the https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/211091666/Usage+Engine+Private+Edition+Preparations+-+OCI+4.2#Bootstrapping-System-Credentials-%5BinlineExtension%5D section.

The command below can be used to install Usage Engine Private Edition:

helm install uepe digitalroute/usage-engine-private-edition --version <version> -f uepe-values.yaml -n uepe

Where <version> is the version of Usage Engine Private Edition to install. For example 4.0.0.
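Once the command has completed, you can for example verify the release status using helm:

helm list -n uepe

helm status uepe -n uepe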

Check that all pods are running and that all pod containers become ready (this may take a little while):

kubectl get pods -w -n uepe                  
NAME                                                READY   STATUS    RESTARTS   AGE
desktop-online-5fdd4df85b-5hc6z                     1/1     Running   0          97m
external-dns-54fb5cb46b-4lfld                       1/1     Running   0          27h
ingress-nginx-controller-7477648b4c-sz2nw           1/1     Running   0          27h
oci-native-ingress-controller-6cd8cf8d79-dz8zp      1/1     Running   0          29h
platform-0                                          1/1     Running   0          97m
uepe-operator-controller-manager-69c4b499c8-h9l8w   2/2     Running   0          97m
uepe-operator-controller-manager-69c4b499c8-hxdcb   2/2     Running   0          97m

To get the Desktop Online web user interface hostname:

kubectl get ingress -n uepe

The output shows the FQDN hostname, IP address and port used to access the Desktop Online web user interface.

NAME                       CLASS                     HOSTS                                                             ADDRESS           PORTS   AGE
desktop-online             native-ic-ingress-class   desktop-online.example-cluster.stratus.oci.digitalroute.net   130.162.252.220   80      99m
ingress-nginx-controller   native-ic-ingress-class   ingress.example-cluster.stratus.oci.digitalroute.net          130.162.252.220   80      99m

The Desktop Online user interface should now be accessible at:
https://desktop-online.example-cluster.stratus.oci.digitalroute.net/
Note that it may take a little while before the DNS record gets registered.

To get the Platform web interface hostname, run the following command:

kubectl get service platform -n uepe -o jsonpath="{.metadata.annotations.external-dns\.alpha\.kubernetes\.io/hostname}"
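The output is the Platform FQDN hostname. Using the example domain from above, it would look something like this:

platform.example-cluster.stratus.oci.digitalroute.net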

Note!

If you want to connect Usage Engine Private Edition to the Desktop Client, you need to modify the pico.rcp.platform.port value to 6790 in the Configuration | Properties tab after having added the instance via the Platform FQDN hostname in Desktop Launcher.

The Usage Engine Private Edition installation is now complete.

Other Common Installation Configurations

Here follow a few common installation configurations for the Usage Engine Private Edition helm chart.

They should be seen as variations to the main installation example outlined above.

Persistent File Storage

If you have chosen to prepare for persistent file storage, there are two different ways of configuring your Usage Engine Private Edition installation to use it.

Use Bundled OCI Specific PVC

Specifically for OCI, the Usage Engine Private Edition helm chart contains a bundled persistent volume claim. This persistent volume claim is using the fss-dyn-storage storage class. To enable it, simply set the following helm values:

persistence:
  enabled: true
  bundledClaim:
    storageRequest: "10Gi"

Where the persistence.bundledClaim.storageRequest value is used to control the size of the requested storage (default is 1Gi).

Use a command like this to inspect the persistent volume claim that gets created as a result of setting the above helm values:

kubectl get persistentvolumeclaims mz-bundled-pvc -o yaml

Reference Arbitrary PVC

Usage Engine Private Edition can be configured to reference an arbitrary persistent volume claim by setting the following helm values:

persistence:
  enabled: true
  existingClaim: my-pvc

In this example, my-pvc is an arbitrary persistent volume claim that you have created beforehand.
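As an illustration only, a minimal claim like the following could be used, here assuming the fss-dyn-storage storage class mentioned above and an arbitrary 10Gi size (adjust the name, access mode, storage class and size to your environment):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: uepe
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fss-dyn-storage
  resources:
    requests:
      storage: 10Gi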

Container Images Hosted in Your Own Container Registry

If you are hosting the Usage Engine Private Edition container images in your own container registry (see https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/277676052/General+Usage+Engine+Private+Edition+Preparations+4.2#Hosting-Container-Images-in-Your-Own-Container-Registry), then the following helm values are required:

platform:
  repository: <the repository where the platform image is hosted>
operator:
  repository: <the repository where the operator image is hosted>
desktopOnline:
  repository: <the repository where the ui image is hosted>
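For example, if the images were pushed to a hypothetical registry registry.example.com, the values could look like this (the registry host and paths below are purely illustrative):

platform:
  repository: registry.example.com/uepe/platform
operator:
  repository: registry.example.com/uepe/operator
desktopOnline:
  repository: registry.example.com/uepe/desktop-online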

System Database in Oracle

If you have opted for placing the system database in Oracle rather than PostgreSQL, it is assumed that the system database has already been created using the system database tool (see https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/277676052/General+Usage+Engine+Private+Edition+Preparations+4.2#System-Database-%5BinlineExtension%5D).

Then, when installing the Usage Engine Private Edition helm chart, make sure to set the following helm values:

extensions:
  enabled: true
  image: my-uepe-extensions:1.0.0 # see additional information below on how to set this value
oracle:
  host: oracle # see additional information below on how to set this value
  port: 1521 # see additional information below on how to set this value
  db: UEPE # see additional information below on how to set this value
  expressEdition: false
platform:
  db:
    type: oracle

Additional information on how you can determine the values to set in your particular installation:

Value

Comment

extensions.image

The name of a custom container image containing the following Oracle software:

https://download.oracle.com/otn-pub/otn_software/jdbc/1923/ojdbc8.jar

https://download.oracle.com/otn_software/linux/instantclient/199000/oracle-instantclient19.9-basiclite-19.9.0.0.0-1.x86_64.rpm

https://download.oracle.com/otn_software/linux/instantclient/199000/oracle-instantclient19.9-sqlplus-19.9.0.0.0-1.x86_64.rpm

The Usage Engine Private Edition helm values file contains instructions on how to build this custom container image. Look for the extensions value to locate the instructions.

oracle.host

The domain name of the Oracle database service.

oracle.port

The port of the Oracle database service.

oracle.db

The name of the Usage Engine Private Edition database that was created in Oracle.

Note!

None of the postgres.* values outlined in the main installation example are required when opting for placing the system database in Oracle.
