Before installing Usage Engine Private Edition, you need to set up a Kubernetes cluster on AWS EKS (Amazon's managed Kubernetes service for EC2) or OCI OKE (Oracle's managed Kubernetes service).

First, you need to create a basic Kubernetes cluster. You can do this in two different ways:

  • Using the eksctl CLI or the terraform tool.

  • Using the AWS or OCI management console.

In this guide, eksctl and terraform will be used, mainly because they enable you to create the basic Kubernetes cluster in minutes with just a few commands.

Once the basic Kubernetes cluster has been created, additional infrastructure needs to be added. You can use terraform for this as well.

Before proceeding, go to Release Information and download the aws.tar.gz or oci.tar.gz file for the Usage Engine Private Edition version that you want to install. Once downloaded, extract its content to a suitable location.
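As a concrete sketch of the extract step, the commands below unpack the archive into a working directory. The archive created inline here is only a stand-in so that the commands are runnable as-is; in practice, point tar at the real file you downloaded:

```shell
set -e
# Stand-in for the downloaded archive (use the real aws.tar.gz or oci.tar.gz
# from the Release Information page instead).
mkdir -p demo/oci/terraform
printf 'cluster_name = "example"\n' > demo/oci/terraform/terraform.tfvars.example
tar -czf oci.tar.gz -C demo oci

# Extract the archive to a suitable location:
mkdir -p uepe-install
tar -xzf oci.tar.gz -C uepe-install
ls uepe-install/oci/terraform
```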

Assumptions

A few assumptions are made when using terraform to create cluster resources:

Example:

Code Block
#  ____  _____ _____   _____ _   _ _____ ____  _____
# / ___|| ____|_   _| |_   _| | | | ____/ ___|| ____|_
# \___ \|  _|   | |     | | | |_| |  _| \___ \|  _| (_)
#  ___) | |___  | |     | | |  _  | |___ ___) | |___ _
# |____/|_____| |_|     |_| |_| |_|_____|____/|_____(_)

# The below values must be set explicitly in order for the setup to work correctly.

vpc_id = "vpc-04ff16421e3ccdd94"
aws_region = "eu-west-1"
aws_account_id = "058264429588"

# Name of the cluster, it must be unique in the account.
cluster_name = "example-cluster"

# Domain DNS name
# The DNS zone must already exist in Route53 or in other cloud provider DNS zone.
# We'll create a subdomain zone from parent domain, the final domain will be in format "<cluster_name>.<domain>".
# Please note that if this domain is hosted on another AWS account or other cloud provider, then you must
# set auto_create_ns_record = false and manually add the subdomain NS record to the parent domain.
domain = "stratus.digitalroute.net"

# Admin user password to the database.
db_password = "super_SeCrEt_db_pAsSwOrD_457!"
Info

Important notes if your parent domain zone is not under the same account:

  • You need to set auto_create_ns_record = false to disable subdomain NS record auto creation in the parent domain.

  • Terraform apply will fail due to a certificate validation timeout error: Error: waiting for ACM Certificate (arn:aws:acm:ap-southeast-1:027763730008:certificate/84ae1022-15bd-430a-ab3e-278f01b0edb6) to be issued: timeout while waiting for state to become 'ISSUED' (last state: 'PENDING_VALIDATION', timeout: 2m0s)

  • When the error above happens, you need to manually retrieve the name server values from the created subdomain and add them to the parent domain as an NS record. If you are not using Route53 as the parent domain, refer to your domain registrar's documentation on how to add an NS record.

  • Once the NS record has been added to the parent domain, go to AWS Console | AWS Certificate Manager (ACM) and wait for the certificate status to become verified. This will take 10-20 minutes.

  • After the certificate is verified, run the terraform apply again to continue provisioning.

  1. Run the following commands:

Code Block
languagebash
terraform init
terraform plan
terraform apply
  2. Wait for the terraform commands to finish.

Code Block
languagebash
Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

Outputs:

certificate_arn = "arn:aws:acm:eu-west-1:058264429588:certificate/526ed179-afa7-4778-b1b8-bfbcb95e4534"
db_endpoint = "example-cluster-db.c70g0ggo8m66.eu-west-1.rds.amazonaws.com:5432"
db_password = <sensitive>
db_user = "dbadmin"
efs_id = "fs-0f0bb5c0ef98f5b6f"
eks_domain_zone_id = "Z076760737OMHF392P9P7"
eks_domain_zone_name = "example-cluster.stratus.digitalroute.net"
name_servers = tolist([
  "ns-1344.awsdns-40.org",
  "ns-2018.awsdns-60.co.uk",
  "ns-55.awsdns-06.com",
  "ns-664.awsdns-19.net",
])
private_subnets = [
  "subnet-0956aa9898f78900d",
  "subnet-0b6d1364dfb4090d6",
  "subnet-0da06b6a88f9f45e7",
]
public_subnets = [
  "subnet-01174b6e86367827b",
  "subnet-0d0b14a68fe42ba09",
  "subnet-0eed6adde0748e1f6",
]
Info

Make sure to save the output from terraform above, since it is used as input throughout the remainder of this installation guide.

A basic Kubernetes cluster has now been created.

An RDS PostgreSQL database instance is now up and running on a private subnet in the VPC, with default listening port 5432. The default database PlatformDatabase is accessible within the cluster at the endpoint example-cluster-db.c70g0ggo8m66.eu-west-1.rds.amazonaws.com with admin username dbadmin.
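If a later step needs the database host and port separately, note that the db_endpoint output is on the form <host>:<port>, so it can be split with plain shell parameter expansion (shown here with the example endpoint above):

```shell
db_endpoint="example-cluster-db.c70g0ggo8m66.eu-west-1.rds.amazonaws.com:5432"
db_host="${db_endpoint%:*}"    # everything before the last colon
db_port="${db_endpoint##*:}"   # everything after the last colon
echo "$db_host"
echo "$db_port"    # -> 5432
```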

...

  1. We assume that you have an existing parent domain, in the example below example.com, hosted on the same account as the cluster that we are going to create in the following section, and that you want to access the cluster environment via the hostname. Terraform will create a subdomain in the following format: <cluster_name>.<domain>.

    1. cluster name: uepe-eks

    2. domain: example.com

    3. final domain: uepe-eks.example.com

  2. We also assume that terraform is allowed to add an NS (NameServer) record to the parent domain, which is needed to allow DNS delegation from the parent domain to the subdomain.

  3. Please note that if your parent domain is not under the same account, or is hosted in another cloud provider, you must set auto_create_ns_record to false in the terraform template to disable automatic creation of the subdomain NS record in the parent domain.

  4. The service hostname created by Usage Engine Private Edition will be accessible in the format <service_name>.<cluster_name>.<domain>, for example desktop-online.uepe-eks.example.com.

  5. Terraform needs to persist the state of your provisioned infrastructure. By default, the state file is stored locally on the computer from which terraform is executed. However, if multiple persons are working on the infrastructure, it is recommended to store the state file in remote persistent storage such as an S3 bucket, see https://developer.hashicorp.com/terraform/language/settings/backends/s3 for more information.

  6. We use EFS (NFS) as the default persistent storage for data that needs to be persisted.

  7. We use RDS for the Usage Engine Private Edition database; the default engine type is PostgreSQL.
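The hostname composition described in point 4 can be sketched in shell, using the example names from the assumptions above:

```shell
# <service_name>.<cluster_name>.<domain>, using the example values above.
service_name="desktop-online"
cluster_name="uepe-eks"
domain="example.com"
hostname="${service_name}.${cluster_name}.${domain}"
echo "$hostname"   # -> desktop-online.uepe-eks.example.com
```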

Create Basic Cluster

The following steps explain how to create a basic Kubernetes cluster using a configuration file named uepe-eks.yaml:

  1. Go to <the location where you extracted the aws.tar.gz file>/aws/eksctl and edit the uepe-eks.yaml file.

  2. In the metadata section, specify the desired cluster name, AWS region and Kubernetes version (please refer to the Compatibility Matrix at https://infozone.atlassian.net/wiki/x/owDKCg to find out which Kubernetes versions are compatible with this release of Usage Engine Private Edition).

  3. In the nodeGroups section, specify the desired node count for the cluster. Set minSize and maxSize to limit the minimum and maximum number of nodes, and set desiredCapacity to specify the exact number of nodes running within the cluster. In this example, we are creating a 3-node cluster with public and private VPC endpoints.

The uepe-eks.yaml configuration file looks like this:

Code Block
languageyaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: example-cluster
  region: eu-west-1
  version: "1.29"
  tags:
    deployment: aws-template

vpc:
  clusterEndpoints:
    publicAccess:  true
    privateAccess: true
    
iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: aws-load-balancer-controller
      namespace: uepe
      labels: {aws-usage: "aws-load-balancer-controller"}
    wellKnownPolicies:
      awsLoadBalancerController: true
  - metadata:
      name: external-dns
      namespace: uepe
      labels: {aws-usage: "external-dns"}
    wellKnownPolicies:
      externalDNS: true
  - metadata:
      name: cert-manager
      namespace: cert-manager
    wellKnownPolicies:
      certManager: true
  - metadata:
      name: cluster-autoscaler
      namespace: uepe
      labels: {aws-usage: "cluster-ops"}
    wellKnownPolicies:
      autoScaler: true
  - metadata:
      name: efs-csi-controller-sa
      namespace: uepe
      labels: {aws-usage: "aws-efs-csi-driver"}
    wellKnownPolicies:
      efsCSIController: true
  - metadata:
      name: ebs-csi-controller-sa
      namespace: uepe
      labels: {aws-usage: "aws-ebs-csi-driver"}
    wellKnownPolicies:
      ebsCSIController: true

nodeGroups:
  - name: public-nodes
    instanceType: m5.large
    minSize: 3
    maxSize: 3
    desiredCapacity: 3
    volumeSize: 80
    labels: {role: worker}
    volumeEncrypted: true
    tags:
      nodegroup-role: worker

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
Info

IAM roles for service accounts (https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) have been configured for each cluster add-on under the iam.serviceAccounts section in the above uepe-eks.yaml file. Hence, a service account for each cluster add-on will be created in the specified namespace.

Please make sure to use the same namespace when installing the respective add-on in the Kubernetes Cluster Add-ons - AWS section.

For instance, using the namespaces specified in the uepe-eks.yaml file above, means that:

  • external-dns must be installed in namespace uepe.

  • cert-manager must be installed in namespace cert-manager.

Execute the following command to create the cluster based on your desired settings:

Code Block
languagebash
eksctl create cluster -f uepe-eks.yaml --kubeconfig=./kubeconfig.yaml

A Kubernetes cluster with the desired number of nodes should be created within 15 minutes.

Also, the above eksctl command will generate a ./kubeconfig.yaml file containing information on how to connect to your newly created cluster. Make sure to set the KUBECONFIG environment variable to point to that file:

Code Block
languagebash
export KUBECONFIG=<full path to ./kubeconfig.yaml>

This will ensure that tools like kubectl and helm will connect to your newly created cluster.

You can check the status of the cluster nodes like this:

Code Block
languagebash
eksctl get nodegroup --cluster example-cluster

For this example cluster the output will look something like this:

Code Block
CLUSTER         NODEGROUP       STATUS          CREATED                 MIN SIZE    MAX SIZE    DESIRED CAPACITY    INSTANCE TYPE   IMAGE ID                ASG NAME                                                                TYPE
example-cluster public-nodes	CREATE_COMPLETE 2024-03-11T13:59:28Z    3           3           3                   m5.large        ami-02e2de73058d55743   eksctl-example-cluster-nodegroup-public-nodes-NodeGroup-eb5aNADEiibs    unmanaged

Setup Additional Infrastructure Resources on AWS

At this stage, a basic Kubernetes cluster has been created. However, some additional infrastructure resources remain to be set up, namely the following:

  • Hosted Zone (subdomain) for domain name.

  • ACM Certificate for the domain name (to be used with any load balancers).

  • KMS CMK used for encryption at rest for EFS, RDS and SSM.

  • EFS with security group in place.

  • RDS PostgreSQL with security group in place.

Follow these steps to set up the remaining infrastructure resources:

  1. Go to <the location where you extracted the aws.tar.gz file>/terraform

  2. Copy terraform.tfvars.example to terraform.tfvars.

  3. Retrieve the following values from the AWS Console and fill in the parameters in terraform.tfvars.

terraform.tfvars

Where to get the value from?

vpc_id

In the AWS management console, you can find this information by searching for “Your VPCs”. Pick the VPC ID of the cluster that you created in the previous section.

aws_region

From metadata.region in your uepe-eks.yaml file.

aws_account_id

In the AWS management console, this is the Account ID that is listed on your Account page.

cluster_name

From metadata.name in your uepe-eks.yaml file.

domain

In the AWS management console, on the Route 53 service page, this is the Hosted zone name of your existing Hosted zone.

domain_zone_id

In the AWS management console, on the Route 53 service page, this is the Hosted zone ID of your existing Hosted zone.

db_password

Choose a secure password for the system database administrator. Minimum 10 characters.
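One way (of many) to generate a password that satisfies the 10-character minimum is to draw random characters from /dev/urandom. This is just a sketch; your organisation's password policy takes precedence:

```shell
# Generate 16 random letters/digits (comfortably above the 10-char minimum).
db_password="$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)"
echo "${#db_password}"   # -> 16
```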

  1. Terraform needs to persist the state of your provisioned infrastructure. By default, the state file is stored locally on the computer from which terraform is executed. However, if multiple persons are working on the infrastructure, it is recommended to store the state file in remote persistent storage such as Object Storage, see https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformUsingObjectStore.htm for more information.

  2. The OCI File System service (NFS) is used as the default persistent storage for data that needs to be persisted.

  3. The OCI Managed PostgreSQL service is used as the Usage Engine Private Edition database.

  4. User Principal authentication is used throughout the entire installation. You must prepare the private key file locally; you can create and download the private key via the OCI console by selecting Profile | My Profile | API keys | Add API key.
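As a sketch of how the API key fingerprint relates to the key file: the fingerprint OCI displays is the colon-separated MD5 digest of the public key in DER form. The throwaway key generated below exists only so the commands run as-is; with your real downloaded .pem you would skip the genrsa step:

```shell
# Generate a throwaway key (stand-in for your downloaded private key file).
openssl genrsa -out demo_api_key.pem 2048 2>/dev/null

# Colon-separated MD5 of the DER-encoded public key -- this should match the
# fingerprint shown next to the API key in the OCI console.
fingerprint="$(openssl rsa -in demo_api_key.pem -pubout -outform DER 2>/dev/null \
  | openssl md5 -c | awk '{print $NF}')"
echo "$fingerprint"
```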

Create Basic Cluster and Additional Infrastructure

To create a basic Kubernetes cluster with public and private VPC:

  1. Go to <folder where you extracted the oci.tar.gz file>/oci/terraform and copy the terraform.tfvars.example file to terraform.tfvars.

  2. Edit the terraform.tfvars file.

  3. Specify the desired cluster name, OCI region and kubernetes_version (see Compatibility Matrix (4.2) to find out which Kubernetes versions are compatible with this release of Usage Engine Private Edition). Specify your OCI tenancy_ocid, user_ocid, fingerprint, compartment_ocid and private_key_path (which can be found on the OCI dashboard’s Profile page), as well as the desired number of nodes per cluster (oke_num_nodes).

  4. If you are going to use a database other than Derby, specify db_password, db_version and db_username.

terraform.tfvars

Where to get the value from?

tenancy_ocid

In the OCI management console, the tenancy_ocid is listed on Profile | Tenancy: <tenant-name> | Tenancy Details.

fingerprint

Fingerprint is only available when the user has created API keys, see private_key_path below.

In the OCI management console, the fingerprint is listed on Profile | My Profile | Resources | API keys when the API keys have been created.

user_ocid

In the OCI management console, user_ocid is listed on Profile | My Profile.

private_key_path

The full path to your private key file.

To create and download your private key, go to Profile | My Profile | Resources | API keys, create your API key and click Download.

region

The region in which you will install your cluster, for example "eu-frankfurt-1".

cluster_name

A name for your cluster. Cluster names must start with a lowercase letter followed by up to 39 lowercase letters, numbers or hyphens. They cannot end with a hyphen. The cluster name must be unique in the project.

domain

Your existing domain name. In the OCI management console, this is the DNS name that is listed on Networking | DNS management | Zones.

The service hostname created by Usage Engine Private Edition will be accessible in the following format: <service_name>.<cluster_name>.<domain>, for example desktop-online.uepe-oke.example.com.

kubernetes_version

The Kubernetes version as an alphanumeric string, for example "v1.29.1".

oke_num_nodes

The number of cluster nodes in numeric format, for example "3".

oke_availability_domain

The availability domain name for the cluster, for example "Vafx:EU-FRANKFURT-1-AD-1".

db_password

Choose a secure password for the system database administrator, minimum 10 characters.

db_version

The database version in numeric format, for example "14".

oke_image_id

The OCID of the image to be used for worker node instance creation.

To see the available image under your compartment, use the command:

Code Block
oci ce node-pool-options get --node-pool-option-id all --compartment-id <your compartment ocid>

db_enabled

db_enabled is a boolean flag for enabling cloud SQL database resource creation.

fss_enabled

fss_enabled is a boolean flag for enabling file storage resource creation. It is set to false by default. Set it to true if you need persistent file storage.

auto_create_ns_record

auto_create_ns_record is a boolean flag for enabling subdomain NS record to be automatically created in the parent domain. If your parent domain is not under the same compartment, or if your parent domain is hosted in another cloud provider, then you must set it to false.
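The cluster_name constraint described above (a lowercase letter, then up to 39 lowercase letters, digits or hyphens, not ending with a hyphen) can be pre-checked with a small shell sketch before running terraform. The check function and pattern here are our own illustration, not part of the OCI tooling:

```shell
pattern='^[a-z]([a-z0-9-]{0,38}[a-z0-9])?$'
check() {
  # Print "valid" if the candidate name matches the constraint, else "invalid".
  printf '%s\n' "$1" | grep -Eq "$pattern" && echo "valid" || echo "invalid"
}
check "test-uepe-cluster-1"   # valid
check "Bad-Name"              # invalid: uppercase letters
check "ends-with-hyphen-"     # invalid: trailing hyphen
```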

Info

Example

Code Block
languagetext
#  ____  _____ _____   _____ _   _ _____ ____  _____
# / ___|| ____|_   _| |_   _| | | | ____/ ___|| ____|_
# \___ \|  _|   | |     | | | |_| |  _| \___ \|  _| (_)
#  ___) | |___  | |     | | |  _  | |___ ___) | |___ _
# |____/|_____| |_|     |_| |_| |_|_____|____/|_____(_)

# The below values must be set explicitly in order for the setup to work correctly.

tenancy_ocid     = "ocid1.tenancy.oc1..aaaaaaaamnl7f7t2yrlas2si7b5hpo6t23dqi6mjo3eot6ijl2nqcog5h6ha"
fingerprint      = "7d:67:b3:9d:a3:8f:6d:37:f3:e9:7d:e5:45:ec:df:56"
user_ocid        = "ocid1.user.oc1..aaaaaaaauhk3uhiryg7sw2xjmvf45zasduqwr2cium53gmdxwipe4iqdrfuq"
private_key_path = "/Users/kamheng.choy/Downloads/kamheng.choy@digitalroute.com_2024-04-07T10_07_56.490Z.pem"

# Deployment compartment
compartment_ocid = "ocid1.compartment.oc1..aaaaaaaa56wmblidgvvicamsqkf7sqcqu5yxdhvu3wlvomzgonhflcrv6kcq"

# region
region = "eu-frankfurt-1"

# Name of the cluster, it must be unique in the project.
cluster_name = "test-uepe-cluster-1"

# Domain DNS name
# We'll create a subdomain zone from parent domain, the final domain will be in format "<cluster_name>.<domain>".
# Please note that if this domain is hosted on another OCI project or other cloud provider, then you must
# set auto_create_ns_record = false and manually add the subdomain NS record to the parent domain.
# auto_create_ns_record = false
domain = "stratus.oci.digitalroute.net"

# Admin user password to the database
db_password = "super_SeCrEt_db_pAsSwOrD_457!"

#  _______        _______    _    _  __    _    ____  _     _____
# |_   _\ \      / / ____|  / \  | |/ /   / \  | __ )| |   | ____|_
#   | |  \ \ /\ / /|  _|   / _ \ | ' /   / _ \ |  _ \| |   |  _| (_)
#   | |   \ V  V / | |___ / ___ \| . \  / ___ \| |_) | |___| |___ _
#   |_|    \_/\_/  |_____/_/   \_\_|\_\/_/   \_\____/|_____|_____(_)

# The below sections are the default values, tweak them to your needs.

# Kubernetes version
kubernetes_version = "v1.29.1"

# Number of nodes per cluster
oke_num_nodes = 3
# Worker node machine type
node_pool_shape = "VM.Standard.E4.Flex"
oke_availability_domain = "Vafx:EU-FRANKFURT-1-AD-1"

oke_image_id = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaapwbqurbd2hpmj2at354r3dkok4o4644am4hwgdagoekpcaon7shq"

# IP CIDR range allocate to the control plane
vcn_cidr_blocks = "10.0.0.0/16"

# Network file system (NFS) persistent storage
fss_enabled = true
fss_availability_domain = "Vafx:EU-FRANKFURT-1-AD-1"

# Cloud SQL database
db_enabled = true
# DB instance type
db_instance_shape = "PostgreSQL.VM.Standard.E4.Flex.4.64GB"
# DB version
db_version = "14"

Note

Note!

If your parent domain zone is not under the same project:

  • Set auto_create_ns_record = false to disable subdomain NS record auto creation in the parent domain.

  • Perform terraform apply.

  • When terraform has been applied, copy the name server values from the terraform output and manually add them to the parent domain as an NS record. If you are not using OCI DNS as the parent domain, see your domain registrar's documentation for information on how to add an NS record.

  1. Run the following commands:

Code Block
terraform init
terraform plan
terraform apply
  2. Wait until the terraform commands have completed and you see output like the following:

Code Block
languagetext
Apply complete! Resources: 35 added, 0 changed, 0 destroyed.

Outputs:

backend_nsg = "ocid1.networksecuritygroup.oc1.eu-frankfurt-1.aaaaaaaacreo4kf5kd2n7nk4fn2kcsuv6kye2noowhpjypcmrqmms32gpg3a"
cluster_dns_zone_name = "test-uepe-cluster-1.stratus.oci.digitalroute.net"
cluster_dns_zone_name_servers = [
  "ns1.p201.dns.oraclecloud.net.",
  "ns2.p201.dns.oraclecloud.net.",
  "ns3.p201.dns.oraclecloud.net.",
  "ns4.p201.dns.oraclecloud.net.",
]
cluster_dns_zone_ocid = "ocid1.dns-zone.oc1..aaaaaaaacd5nsfzmir3efo5e2pcuga4t622vcxcqkc3ezizl64e5gofo7dza"
cluster_name = "test-uepe-cluster-1"
cluster_ocid = "ocid1.cluster.oc1.eu-frankfurt-1.aaaaaaaaerg6ctgepnuaipifispmuweqi5nvfhswxpu3luuctcvitslu3fea"
compartment_ocid = "ocid1.compartment.oc1..aaaaaaaa56wmblidgvvicamsqkf7sqcqu5yxdhvu3wlvomzgonhflcrv6kcq"
db_admin_user = "postgres"
db_endpoint = "db5j5pt3qwjqmmjgfremgugr7cxtsq-dbinstance-70c946d1330e.postgresql.eu-frankfurt-1.oc1.oraclecloud.com"
db_port = 5432
filesystem_mount_path = "/uepe"
filesystem_ocid = "ocid1.filesystem.oc1.eu_frankfurt_1.aaaaaaaaaais2zcnmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtgaaa"
kms_key_ocid = ""
loadbalancer_ocid = "ocid1.loadbalancer.oc1.eu-frankfurt-1.aaaaaaaanmx4u2yllufrjetacqt5bsgiyznkg7fif3bjfl36xoduyngesvra"
loadbalancer_subnet_ocid = "ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaapyqsowgik7gak3wkihsm3jtronnc5klbf46jerjnudrqsnlbco5q"
mount_target_IP_address = "10.0.4.212"
mount_target_subnet_ocid = "ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaaoh36ywx4rki7qtre33f53amjy2zylm6mnqeix6cydn5ul4shfqja"
region = "eu-frankfurt-1"
tenancy_ocid = "ocid1.tenancy.oc1..aaaaaaaamnl7f7t2yrlas2si7b5hpo6t23dqi6mjo3eot6ijl2nqcog5h6ha"
Info

Make sure to save the output from terraform above, since it will be used as input throughout the remainder of this installation guide.

A basic Kubernetes cluster has now been set up successfully.


A PostgreSQL database instance is up and running on a private subnet in the VCN with default listening port 5432. The default database postgres is accessible within the cluster at the endpoint db5j5pt3qwjqmmjgfremgugr7cxtsq-dbinstance-70c946d1330e.postgresql.eu-frankfurt-1.oc1.oraclecloud.com with admin username postgres.
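If you prefer a single connection string instead of separate host, port and user values, the terraform outputs above can be assembled into a standard PostgreSQL URI (the form accepted by psql and most client drivers). The values below are copied from the example output; substitute your own:

```shell
db_endpoint="db5j5pt3qwjqmmjgfremgugr7cxtsq-dbinstance-70c946d1330e.postgresql.eu-frankfurt-1.oc1.oraclecloud.com"
db_port=5432
db_admin_user="postgres"
uri="postgresql://${db_admin_user}@${db_endpoint}:${db_port}/postgres"
echo "$uri"
```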

You can see the status of the cluster, the database and the other resources in the OCI dashboard.

Configure Cluster Access

To configure cluster access, run the following command:

Code Block
languagebash
oci ce cluster create-kubeconfig --cluster-id <cluster ocid> --file ./kubeconfig.yaml --region eu-frankfurt-1 --token-version 2.0.0  --kube-endpoint PUBLIC_ENDPOINT

A ./kubeconfig.yaml file containing information on how to connect to your newly created cluster will be generated. Set the KUBECONFIG environment variable to point to that file by running the following command:

Code Block
export KUBECONFIG=<full path to ./kubeconfig.yaml>

This will ensure that tools like kubectl and helm will connect to your newly created cluster.

You can check the status of the cluster nodes by running the following command:

Code Block
kubectl get nodes

In this example cluster, the output will look something like this:

Code Block
NAME         STATUS   ROLES   AGE   VERSION
10.0.2.111   Ready    node    27h   v1.29.1
10.0.2.158   Ready    node    27h   v1.29.1
10.0.2.230   Ready    node    27h   v1.29.1


This section is now complete and you can proceed to the Kubernetes Cluster Add-ons - OCI (4.2) section.