
Having completed the preparations, it is now time to install Usage Engine Private Edition.

Main Installation Example

In this main installation example, it is assumed that the following optional resources have been added while preparing for the installation (see Kubernetes Cluster Add-ons - OCI):

  • ingress-nginx-controller

  • cert-manager

Example Certificate

Since cert-manager is being used to provide TLS to the Usage Engine Private Edition installation in this example, you need to create an issuer in order to generate the required certificate.

Here we are going to use an ACME issuer type that is configured to match the Kubernetes cluster that was set up previously in the Preparations - OCI chapter:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: example-issuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your email address of choice>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: example-issuer-account-key
    solvers:
      - dns01:
          webhook:
            groupName: acme.d-n.be
            solverName: oci
            config:
              ociProfileSecretName: oci-profile

A few things that should be noted:

  • Set email to your email address of choice.

  • The oci-profile is the credential to access Oracle Cloud Infrastructure API. If you choose another name for the secret than oci-profile, ensure you modify the value of ociProfileSecretName in the ClusterIssuer.

Create a yaml file named oci-profile.yaml. The secret oci-profile should look like this:

apiVersion: v1
kind: Secret
metadata:
  name: oci-profile
type: Opaque
stringData:
  tenancy: "your tenancy ocid"
  user: "your user ocid"
  region: "your region"
  fingerprint: "your key fingerprint"
  privateKey: |
    -----BEGIN RSA PRIVATE KEY-----
    ...KEY DATA HERE...
    -----END RSA PRIVATE KEY-----
  privateKeyPassphrase: "private key's passphrase, or an empty string if none"
 

Create the secret prior to creating the ClusterIssuer. To install the secret oci-profile into the cert-manager namespace:

kubectl apply -f oci-profile.yaml -n cert-manager

Assuming that the issuer spec above has been saved into a file called example-issuer.yaml, it can be created like this:

kubectl apply -f example-issuer.yaml

Load Balancer TLS Certificate

With the ClusterIssuer set up properly, we can proceed to generate the TLS certificate and import it into the OCI Certificates Service.

To generate the certificate, create a yaml file named certificate.yaml with the following contents:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: lb-cert
spec:
  commonName: <cluster_dns_zone_name listed in the terraform output>
  dnsNames:
    - <cluster_dns_zone_name listed in the terraform output>
    - desktop-online.<cluster_dns_zone_name listed in the terraform output>
    - platform.<cluster_dns_zone_name listed in the terraform output>
    - ingress.<cluster_dns_zone_name listed in the terraform output>
    - grafana.<cluster_dns_zone_name listed in the terraform output>
  issuerRef:
    kind: ClusterIssuer
    name: example-issuer
  secretName: lb-cert
  1. Execute the yaml file:

kubectl apply -f certificate.yaml -n uepe
  2. Wait for a while and confirm that the certificate has been generated successfully:

kubectl get certificate -n uepe 
  3. The output shows that the certificate named lb-cert is ready:

NAME                        READY   SECRET                              AGE
lb-cert                     True    lb-cert                             46h
  4. Extract the server certificate and CA certificate from the secret lb-cert:

kubectl get secrets lb-cert -n uepe -o yaml | yq '.data' | grep "tls.crt" | awk -F : '{print $2}'| tr -d " "|base64 -d > tls.crt
  5. Separate the server certificate and CA certificate into two files:

csplit tls.crt '/^-----END CERTIFICATE-----$/+1'
  6. Rename the first generated file as the server certificate file:

mv xx00 tls.crt
  7. Rename the second generated file as the CA certificate file:

mv xx01 ca.crt
  8. Extract the private key from the secret lb-cert:

kubectl get secrets lb-cert -n uepe -o yaml | yq '.data' | grep "tls.key" | awk -F : '{print $2}'| tr -d " "|base64 -d > tls.key
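The certificate-splitting step above can be tried out locally on a stand-in chained PEM file before running it against the real tls.crt. The file names and placeholder payloads below are made up for illustration; note that anchoring the split on the END marker with a +1 offset ensures the first output file holds the complete server certificate:

```shell
# Stand-in chained PEM: server certificate first, CA certificate second.
# The payload lines are placeholders, not real certificate data.
cat > chain-sample.crt <<'EOF'
-----BEGIN CERTIFICATE-----
SERVER-CERT-PLACEHOLDER
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
CA-CERT-PLACEHOLDER
-----END CERTIFICATE-----
EOF

# Split after the first END marker: xx00 receives the complete first
# (server) certificate, xx01 the remaining (CA) certificate.
# -s suppresses the byte-count output.
csplit -s chain-sample.crt '/^-----END CERTIFICATE-----$/+1'
mv xx00 server-sample.crt
mv xx01 ca-sample.crt
```

Once the split behaves as expected on the sample, the same csplit invocation can be applied to the real tls.crt extracted from the lb-cert secret.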

By now, the server certificate, CA certificate and private key are stored in tls.crt, ca.crt and tls.key respectively. The next step is to import them into the OCI Certificates Service.

Import into OCI Certificates Service

Go to the OCI console and search for the Certificates service. On the Certificates service page, click Create Certificate and follow these steps:

  1. Select the Certificate Type Imported and enter a unique name.

  2. Click Next to go to Certificate Configuration page.

  3. Upload tls.crt, ca.crt and tls.key according to the table below:

OCI Certificates Configuration

  Field                File to upload
  Certificate          tls.crt
  Certificate Chain    ca.crt
  Private Key          tls.key

  4. Click Next and proceed to Create Certificate.

  5. Wait for the certificate to be created.

  6. Copy and save the certificate's ocid. This ocid will be set as the oci.certificates.id property in the helm chart values file in the next section.

TLS Backendset Secret

The SSL configuration between the load balancer and the backend servers (worker nodes) in the backend set is known as backend SSL. In this case, the backend set refers to the Platform pod on the worker nodes. To implement backend SSL, you store the SSL certificates and private key in the form of a Kubernetes secret.

You already have the server certificate, CA certificate and private key generated in the previous section. These certificates and the private key can be reused to generate the Kubernetes secret needed by the backend set.

To store the certificates and the private key as a secret in Kubernetes:

kubectl create secret generic ca-ser-secret -n uepe --from-file=tls.crt=tls.crt --from-file=tls.key=tls.key --from-file=ca.crt=ca.crt

Now, the backend set secret named ca-ser-secret has been created in the namespace uepe.

The secret name ca-ser-secret shouldn't be changed, as it is used internally in the Usage Engine Private Edition helm chart.
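For reference, the create command above produces an Opaque secret shaped roughly like this (a sketch; the base64-encoded data values are elided):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ca-ser-secret
  namespace: uepe
type: Opaque
data:
  tls.crt: <base64-encoded server certificate>
  tls.key: <base64-encoded private key>
  ca.crt: <base64-encoded CA certificate>
```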

Install Helm Chart

Although the number of helm value combinations to set is virtually endless, some values should more or less always be set.

So let’s start by creating a file called uepe-values.yaml, and in that file, specify a minimal set of values that will serve as a good starting point:

oci:
  certificates:
    id: ocid1.certificate.oc1.eu-frankfurt-1.amaaaaaaqpnxi2aaftofigjmkytoomv2u2ycjenhvqsbarhfhpycfujihyyq
  backendNSG: ocid1.networksecuritygroup.oc1.eu-frankfurt-1.aaaaaaaaephkmmm3hsyqw57wvkfssqlc56ddj7yknhgz7cgajxijvhqkzflq
  healthcheck:
    desktoponline:
      port: 9001
    ingressnginx:
      port: 443  
environment: oci
global:
  domain: example-cluster.stratus.oci.digitalroute.net
  ingressController:
    serviceName: ingress-nginx-controller
  imagePullSecrets:
  - name: ecr-cred  
licenseKey: VGhpcyBpcyBhIGZha2UgVXNhZ2UgRW5naW5lIFByaXZhdGUgRWRpdGlvbiBsaWNlbnNlIGtleSE=
log:
  format: json
platform:
  db:
    type: postgresql
  tls:
    cert:
      public: certManager
    certManager:
      public:
        issuer:
          kind: ClusterIssuer
          name: example-issuer
    enabled: true    
postgres:
  adminUsername: postgres
  host: example-cluster-db-primary.postgresql.eu-frankfurt-1.oc1.oraclecloud.com
  port: 5432
persistence:
  enabled: true

Here follows information on how you can determine the values to set in your particular installation:

Value

Comment

oci.certificates.id

This value should be set to match the ocid of the certificate created in the previous section, Import into OCI Certificates Service.

oci.backendNSG

Value is taken from the backend_nsg listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

oci.healthcheck.desktoponline.port

desktop-online backend set health check port, i.e., 9001

oci.healthcheck.ingressnginx.port

ingress nginx backend set health check port, i.e., 443

global.ingressController.serviceName

This is the name of the Kubernetes Service that was created when adding the Kubernetes Cluster Add-ons | ingress-nginx-controller.

global.domain

Value is taken from the cluster_dns_zone_name listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

global.imagePullSecrets

This is referencing an image pull secret containing the credentials required in order to pull container images from the Digital Route AWS ECR registry. If you are hosting the container images in your own container registry, depending on how that is configured, another image pull secret is probably needed. See https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/161481567/Common+Usage+Engine+Private+Edition+Preparations#Container-Images for additional information.

licenseKey

The license key that can be found in the licenseKey file that you have previously received (see the https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/161481605/General+Pre-requisites#License section). 

log.format

If you need to use dedicated log collection and monitoring tools like Fluent-bit, Elasticsearch, Kibana or AWS CloudWatch for Usage Engine Private Edition, make sure the log format is configured to json. See Configure Log Collection, Target, and Visualization - OCI for additional information.

platform.tls.*

These values are set to use the example issuer created at the beginning of this chapter. This should only be seen as an example, and the values should be adjusted to your real-world situation.

postgres.adminUsername

Value is taken from the db_admin_user listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

postgres.host

Value is taken from the db_endpoint listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

postgres.port

Value is taken from the db_port listed in the terraform output produced in the Set Up Kubernetes Cluster - OCI | Create-Basic-Cluster-and-additional-infrastructure section.

General documentation of the values above is provided in the values.yaml file in the usage-engine-private-edition helm chart.

In this example, the system database is automatically created at install time. For this to happen, you need to provide the database administrator credentials. Hence, the postgres.adminUsername value is set to the default OCI PostgreSQL administrator username. Since setting passwords through helm values is a significant security risk, it is assumed that you have previously bootstrapped the postgresqlPassword secret key with a value equal to super_SeCrEt_db_pAsSwOrD_457! (see the https://infozone.atlassian.net/wiki/spaces/UEPE4D/pages/161481567/General+Usage+Engine+Private+Edition+Preparations#Bootstrapping-System-Credentials-%5BinlineExtension%5D section for an explanation of how to do this).

The command below can be used to install Usage Engine Private Edition:

helm install uepe digitalroute/usage-engine-private-edition --version <version> -f uepe-values.yaml -n uepe

Where <version> is the version of Usage Engine Private Edition to install, for example 4.0.0.

Check that all pods are running and that all pod containers become ready (this may take a little while):

kubectl get pods -w -n uepe                  
NAME                                                READY   STATUS    RESTARTS   AGE
desktop-online-5fdd4df85b-5hc6z                     1/1     Running   0          97m
external-dns-54fb5cb46b-4lfld                       1/1     Running   0          27h
ingress-nginx-controller-7477648b4c-sz2nw           1/1     Running   0          27h
oci-native-ingress-controller-6cd8cf8d79-dz8zp      1/1     Running   0          29h
platform-0                                          1/1     Running   0          97m
uepe-operator-controller-manager-69c4b499c8-h9l8w   2/2     Running   0          97m
uepe-operator-controller-manager-69c4b499c8-hxdcb   2/2     Running   0          97m

To get the Desktop Online web user interface hostname:

kubectl get ingress -n uepe

The output shows the FQDN hostname, IP address, and port used to access the Desktop Online web user interface:

NAME                       CLASS                     HOSTS                                                             ADDRESS           PORTS   AGE
desktop-online             native-ic-ingress-class   desktop-online.example-cluster.stratus.oci.digitalroute.net   130.162.252.220   80      99m
ingress-nginx-controller   native-ic-ingress-class   ingress.example-cluster.stratus.oci.digitalroute.net          130.162.252.220   80      99m

The Desktop Online user interface should now be accessible at:
https://desktop-online.example-cluster.stratus.oci.digitalroute.net/
Note that it may take a little while before the DNS record gets registered.

The Usage Engine Private Edition installation is now complete.

Other Common Installation Configurations

Here follows a few common installation configurations for the Usage Engine Private Edition helm chart.

They should be seen as variations to the main installation example outlined above.

Persistent File Storage

If you have chosen to prepare for persistent file storage, there are two different ways of configuring your Usage Engine Private Edition installation to use it.

Use Bundled OCI Specific PVC

Specifically for OCI, the Usage Engine Private Edition helm chart contains a bundled persistent volume claim, which uses the fss-dyn-storage storage class. To enable it, simply set the following helm values:

persistence:
  enabled: true
  bundledClaim:
    storageRequest: "10Gi"

Where the persistence.bundledClaim.storageRequest value is used to control the size of the requested storage (default is 1Gi).

Use a command like this to inspect the persistent volume claim that gets created as a result of setting the above helm values:

kubectl get persistentvolumeclaims mz-bundled-pvc -o yaml

Reference Arbitrary PVC

Usage Engine Private Edition can be configured to reference an arbitrary persistent volume claim by setting the following helm values:

persistence:
  enabled: true
  existingClaim: my-pvc

In this example, my-pvc is an arbitrary persistent volume claim that you have created beforehand.
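As a hedged sketch, such a pre-created claim could look like the following. The storage class, access mode and size below are assumptions for illustration; adjust them to what is available in your cluster (the fss-dyn-storage class is the one used by the bundled claim above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: uepe
spec:
  accessModes:
    - ReadWriteMany
  # Assumed storage class; replace with one available in your cluster.
  storageClassName: fss-dyn-storage
  resources:
    requests:
      storage: 10Gi
```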
