
Before installing Usage Engine Private Edition, you need to set up a Kubernetes cluster on OCI OKE (Oracle’s managed Kubernetes service).

First a basic Kubernetes cluster needs to be created. This can be done in two different ways:

  • Using the terraform tool.

  • Using the OCI management console.

In this guide, terraform is used, mainly because it enables you to create the basic Kubernetes cluster in minutes with a single command.

Once the basic Kubernetes cluster has been created, additional infrastructure needs to be added. Terraform is used for this as well.

Before proceeding, go to Release Information, and download the oci.tar.gz file for the Usage Engine Private Edition version that is being installed. Once downloaded, extract its content to a suitable location.

Assumptions

A few assumptions are made when using terraform to create cluster resources:

  1. We assume that you have an existing parent domain, e.g. example.com, hosted on the same account as the cluster that will be created in the coming section, and that you wish to access the cluster environment through that hostname. Terraform will create a subdomain in the format <cluster_name>.<domain>.

    1. cluster name: uepe-eks

    2. domain: example.com

    3. final domain: uepe-eks.example.com

  2. In addition, we also assume that terraform is allowed to add an NS (name server) record to the parent domain. This is to allow DNS delegation from the parent domain to the subdomain.

  3. Please note that if your parent domain is not under the same account, or is hosted with another cloud provider, you must set auto_create_ns_record to false in the terraform template to disable automatic creation of the subdomain NS record in the parent domain.

  4. Service hostnames created by Usage Engine Private Edition will be accessible in the format <service_name>.<cluster_name>.<domain>, e.g. desktop-online.uepe-eks.example.com.

  5. Terraform needs to persist the state of your provisioned infrastructure. By default, the state file is stored locally on the computer from which terraform is executed. However, if multiple people are working on the infrastructure, it is recommended to store the state file in remote persistent storage, such as an S3 bucket. See https://developer.hashicorp.com/terraform/language/settings/backends/s3 for more information.

  6. We use the OCI File System service (NFS) as the default persistent storage for data that needs to be persisted.

  7. We use the OCI Managed PostgreSQL service for Usage Engine Private Edition database.
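The remote state backend mentioned in assumption 5 can be sketched as follows. This is a minimal example, not part of the shipped templates; the bucket name is a hypothetical pre-created bucket, and the linked HashiCorp documentation describes the full set of options:

```hcl
# Minimal remote state sketch; the bucket must exist before `terraform init`.
terraform {
  backend "s3" {
    bucket = "my-uepe-terraform-state"   # assumption: a pre-created state bucket
    key    = "uepe/terraform.tfstate"    # path of the state file within the bucket
    region = "eu-west-1"                 # region where the bucket resides
  }
}
```

After adding such a block, terraform init -migrate-state moves an existing local state file to the remote backend.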

Create Basic Cluster and Additional Infrastructure

The following steps explain how to create a basic Kubernetes cluster with a public and private VPC:

  1. Go to <the location where you extracted the gcp.tar.gz file>/gcp/terraform and copy the terraform.tfvars.example file to terraform.tfvars.

  2. Edit the terraform.tfvars file.

  3. Specify the desired cluster name, GCP region and kubernetes_version prefix (please refer to the Compatibility Matrix (4.1) to find out which Kubernetes versions are compatible with this release of Usage Engine Private Edition). Also specify your GCP project id (which can be found on the GCP dashboard), as well as the desired number of nodes per region (gke_num_nodes).

  4. If you will be running with a database other than Derby, also specify db_password, db_version and db_allocated_storage.

terraform.tfvars

Where to get the value from?

project_id

In the GCP management console, this is the Project ID that is listed on Cloud overview | Dashboard | Project info. Or use command gcloud projects list to retrieve project info.

project_number

In the GCP management console, this is the Project Number that is listed on Cloud overview | Dashboard | Project info. Or use command gcloud projects list to retrieve project info.

region

The region in which you will install your cluster, refer to https://cloud.google.com/compute/docs/regions-zones for possible values. Or use command gcloud compute regions list to get the values.

cluster_name

A name for your cluster. Cluster names must start with a lowercase letter followed by up to 39 lowercase letters, numbers or hyphens. They can't end with a hyphen. The cluster name must be unique in the project.

domain

Your existing domain name. In the GCP management console, this is the DNS name that is listed on page Cloud DNS | Zones. Or use command gcloud dns managed-zones list to get the dns name.

kubernetes_version_prefix

Prefix version for kubernetes (default "1.27.").

gke_num_nodes

Number of cluster nodes per zone.

db_password

Choose a secure password for the system database administrator.

Minimum 10 characters.

db_version

Database version, check https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/sql_database_instance#database_version for possible values. Default is POSTGRES_15 (PostgreSQL version 15).

db_allocated_storage

Allocated amount of storage for the database. Default is “10” (10GB).

filestore_location

To find out available zones of your region, use command gcloud compute zones list --filter="region:<region>".

Replace <region> with the region value configured above, i.e., the region in which you will install your cluster.

Example:

#  ____  _____ _____   _____ _   _ _____ ____  _____
# / ___|| ____|_   _| |_   _| | | | ____/ ___|| ____|_
# \___ \|  _|   | |     | | | |_| |  _| \___ \|  _| (_)
#  ___) | |___  | |     | | |  _  | |___ ___) | |___ _
# |____/|_____| |_|     |_| |_| |_|_____|____/|_____(_)

# The below values must be set explicitly in order for the setup to work correctly.

# Project settings, use command `gcloud projects list` to retrieve project info.
project_id = "pt-dev-stratus-bliz"
project_number = "413241157368"

# Region to deploy, use command `gcloud compute regions list` to get available regions.
region = "europe-north1"

# Name of the cluster, it must be unique in the project.
cluster_name = "my-uepe-gke-1"

# Domain DNS name
# The DNS zone must already exist in Cloud DNS or in other cloud provider DNS zone.
# We'll create a subdomain zone from parent domain, the final domain will be in format "<cluster_name>.<domain>".
# Please note that if this domain is hosted on another GCP project or other cloud provider, then you must
# set auto_create_ns_record = false and manually add the subdomain NS record to the parent domain.
domain = "pe-mz.gcp.digitalroute.net"

# Admin user password to the database
db_password = "super_SeCrEt_db_pAsSwOrD_457"

.........

#  _____ _ _           _
# |  ___(_) | ___  ___| |_ ___  _ __ ___
# | |_  | | |/ _ \/ __| __/ _ \| '__/ _ \
# |  _| | | |  __/\__ \ || (_) | | |  __/
# |_|   |_|_|\___||___/\__\___/|_|  \___|

# Network file system (NFS) persistent storage
# For testing purpose, you could use block storage as alternative cheaper option.
# However do note that block storage has its limitation where it only works for single node cluster setup (ReadWriteOnce access mode).
# See https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview for explanation.
filestore_enabled = true
# Service tier of the instance
# See https://cloud.google.com/filestore/docs/reference/rest/v1/Tier for available service tier.
filestore_service_tier = "STANDARD"
# Location of the instance, you MUST set a zone if the service tier is not ENTERPRISE. For ENTERPRISE tier, this can be a region.
# To find out available zones of your region, use command `gcloud compute zones list --filter="region:europe-north1"`.
filestore_location = "europe-north1-a"
# Storage capacity in GB, must be at least 1024
filestore_capacity = 1024
# The name of the fileshare (16 characters or less)
fileshare_name = "share1"

Important notes if your parent domain zone is not under the same project:

  • You need to set auto_create_ns_record = false to disable subdomain NS record auto creation in the parent domain.

  • Perform terraform apply.

  • After terraform apply is finished, copy the name servers value from the terraform output and manually add them to the parent domain as an NS record. If the parent domain is not hosted in Cloud DNS, please refer to your domain registrar's documentation on how to add an NS record.
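If the parent zone is in Cloud DNS under another project, the manual NS record step above can be sketched with gcloud as follows. The subdomain, zone name and name server values are illustrative; substitute the name servers from your own terraform output:

```shell
# Add an NS record for the subdomain to the parent zone
# (run against the project hosting the parent zone).
gcloud dns record-sets create my-uepe-gke-1.example.com. \
  --zone="parent-zone-name" \
  --type="NS" \
  --ttl="300" \
  --rrdatas="ns-cloud-b1.googledomains.com.,ns-cloud-b2.googledomains.com."
```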

  5. Authenticate your computer with GCP:

gcloud auth application-default login

  6. Run the following commands:

terraform init
terraform plan
terraform apply

  7. Wait for the terraform commands to finish.

Apply complete! Resources: 20 added, 0 changed, 0 destroyed.

Outputs:

cert_manager_namespace = "cert-manager"
cert_manager_service_account = "cert-manager-my-uepe-gke-1@pt-dev-stratus-bliz.iam.gserviceaccount.com"
db_endpoint = "db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net"
external_dns_namespace = "uepe"
external_dns_service_account = "external-dns-my-uepe-gke-1@pt-dev-stratus-bliz.iam.gserviceaccount.com"
filestore_capacity_gb = 1024
filestore_csi_volume_handle = "modeInstance/europe-north1-a/my-uepe-gke-1-filestore/share1"
filestore_ip_address = "10.143.245.42"
filestore_persistence_yaml = "./manifests/filestore_persistence.yaml"
filestore_share_name = "share1"
gke_domain_dns_name = "my-uepe-gke-1.pe-mz.gcp.digitalroute.net"
gke_domain_zone_name = "my-uepe-gke-1-pe-mz-gcp-digitalroute-net"
kubernetes_cluster_host = "34.124.151.111"
kubernetes_cluster_location = "europe-north1"
kubernetes_cluster_name = "my-uepe-gke-1"
name_servers = tolist([
  "ns-cloud-b1.googledomains.com.",
  "ns-cloud-b2.googledomains.com.",
  "ns-cloud-b3.googledomains.com.",
  "ns-cloud-b4.googledomains.com.",
])
project_id = "pt-dev-stratus-bliz"
project_number = "413241157368"
region = "europe-north1"

Make sure to save the terraform output above, as it is used as input throughout the remainder of this installation guide.
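One way to save the output, assuming terraform is still run from the same directory, is to write it to a file in JSON form. The file name here is only a suggestion:

```shell
# Save all terraform outputs to a file for later reference.
terraform output -json > cluster-outputs.json

# Individual values can also be read back at any time, e.g.:
terraform output db_endpoint
```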

A yaml file containing the persistent volume and persistent volume claim is generated at the end of the terraform apply. It is located at manifests/filestore_persistence.yaml and will be applied in a later section.

Please note that persistent volume setup is an optional step. Ignore this yaml file if you do not intend to have persistent file storage.
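When you reach that later section, applying the generated manifest amounts to something like the following sketch (the authoritative step is described there):

```shell
# Create the persistent volume and persistent volume claim from the generated manifest.
kubectl apply -f manifests/filestore_persistence.yaml
```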

A fully functional Kubernetes cluster has now been set up successfully.

A PostgreSQL database instance is now up and running on the private VPC subnet with the default listening port 5432. The default database postgres is accessible within the cluster at the endpoint db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net with admin username postgres.

You can check the status of the cluster, db and the other resources in the GCP dashboard.
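To verify connectivity to the database from inside the cluster, a throwaway PostgreSQL client pod can be used. This is a sketch, assuming the endpoint and admin username from the terraform output above; you will be prompted for the db_password value:

```shell
# Start a temporary pod with the psql client and connect to the database endpoint.
kubectl run psql-client --rm -it --image=postgres:15 --restart=Never -- \
  psql -h db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net -U postgres -d postgres
```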

Set Up Additional Infrastructure Resources on AWS

At this stage, a basic Kubernetes cluster has been created. However, some additional infrastructure resources remain to be set up, namely the following:

  • Hosted Zone (subdomain) for domain name.

  • ACM Certificate for the domain name (to be used with any load balancers).

  • KMS CMK key which is used for encryption at-rest for EFS, RDS and SSM.

  • EFS with security group in place.

  • RDS PostgreSQL with security group in place.

Follow these steps to set up the remaining infrastructure resources:

  1. Go to <the location where you extracted the aws.tar.gz file>/terraform

  2. Copy terraform.tfvars.example to terraform.tfvars.

  3. Retrieve the following values from the AWS management console and fill in the parameters in terraform.tfvars.

terraform.tfvars

Where to get the value from?

vpc_id

In the AWS management console, you can find this information by searching for “Your VPCs”. Pick the VPC ID of the cluster that you created in the previous section.

aws_region

From metadata.region in your uepe-eks.yaml file.

aws_account_id

In the AWS management console, this is the Account ID that is listed on your Account page.

cluster_name

From metadata.name in your uepe-eks.yaml file.

domain

In the AWS management console, on the Route 53 service page, this is the Hosted zone name of your existing Hosted zone.

domain_zone_id

In the AWS management console, on the Route 53 service page, this is the Hosted zone ID of your existing Hosted zone.

db_password

Choose a secure password for the system database administrator.

Minimum 10 characters.

Example:

#  ____  _____ _____   _____ _   _ _____ ____  _____
# / ___|| ____|_   _| |_   _| | | | ____/ ___|| ____|_
# \___ \|  _|   | |     | | | |_| |  _| \___ \|  _| (_)
#  ___) | |___  | |     | | |  _  | |___ ___) | |___ _
# |____/|_____| |_|     |_| |_| |_|_____|____/|_____(_)

# The below values must be set explicitly in order for the setup to work correctly.

vpc_id = "vpc-04ff16421e3ccdd94"
aws_region = "eu-west-1"
aws_account_id = "058264429588"

# Name of the cluster, it must be unique in the account.
cluster_name = "example-cluster"

# Domain DNS name
# The DNS zone must already exist in Route53 or in other cloud provider DNS zone.
# We'll create a subdomain zone from parent domain, the final domain will be in format "<cluster_name>.<domain>".
# Please note that if this domain is hosted on another AWS account or other cloud provider, then you must
# set auto_create_ns_record = false and manually add the subdomain NS record to the parent domain.
domain = "stratus.digitalroute.net"

# Admin user password to the database.
db_password = "super_SeCrEt_db_pAsSwOrD_457!"

Important notes if your parent domain zone is not under the same account:

  • You need to set auto_create_ns_record = false to disable subdomain NS record auto creation in the parent domain.

  • Terraform apply will fail with a certificate validation timeout error: Error: waiting for ACM Certificate (arn:aws:acm:ap-southeast-1:027763730008:certificate/84ae1022-15bd-430a-ab3e-278f01b0edb6) to be issued: timeout while waiting for state to become 'ISSUED' (last state: 'PENDING_VALIDATION', timeout: 2m0s)

  • When the error above happens, manually retrieve the name servers value from the created subdomain and add them to the parent domain as an NS record. If you are not using Route53 for the parent domain, please refer to your domain registrar's documentation on how to add an NS record.

  • Once the NS record is added to the parent domain, go to AWS Console | AWS Certificate Manager (ACM) and wait for the certificate status to become verified. This will take 10-20 minutes.

  • After the certificate is verified, run the terraform apply again to continue provisioning.
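The manual NS record step above can be sketched with the AWS CLI as follows, assuming the parent hosted zone lives in another account. The zone ID and name server values are illustrative; use the values from your own setup:

```shell
# Add an NS record for the subdomain to the parent hosted zone
# (run with credentials for the account hosting the parent zone).
aws route53 change-resource-record-sets \
  --hosted-zone-id ZPARENTZONEID \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "example-cluster.stratus.digitalroute.net",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "ns-1344.awsdns-40.org"},
          {"Value": "ns-2018.awsdns-60.co.uk"}
        ]
      }
    }]
  }'
```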

  4. Run the following commands:

terraform init
terraform plan
terraform apply

  5. Wait for the terraform commands to finish.

Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

Outputs:

certificate_arn = "arn:aws:acm:eu-west-1:058264429588:certificate/526ed179-afa7-4778-b1b8-bfbcb95e4534"
db_endpoint = "example-cluster-db.c70g0ggo8m66.eu-west-1.rds.amazonaws.com:5432"
db_password = <sensitive>
db_user = "dbadmin"
efs_id = "fs-0f0bb5c0ef98f5b6f"
eks_domain_zone_id = "Z076760737OMHF392P9P7"
eks_domain_zone_name = "example-cluster.stratus.digitalroute.net"
name_servers = tolist([
  "ns-1344.awsdns-40.org",
  "ns-2018.awsdns-60.co.uk",
  "ns-55.awsdns-06.com",
  "ns-664.awsdns-19.net",
])
private_subnets = [
  "subnet-0956aa9898f78900d",
  "subnet-0b6d1364dfb4090d6",
  "subnet-0da06b6a88f9f45e7",
]
public_subnets = [
  "subnet-01174b6e86367827b",
  "subnet-0d0b14a68fe42ba09",
  "subnet-0eed6adde0748e1f6",
]

Make sure to save the terraform output above, as it is used as input throughout the remainder of this installation guide.

A basic Kubernetes cluster has now been created.

An RDS PostgreSQL database instance is now up and running on the private VPC subnet with the default listening port 5432. The default database PlatformDatabase is accessible within the cluster at the endpoint example-cluster-db.c70g0ggo8m66.eu-west-1.rds.amazonaws.com with admin username dbadmin.

Now proceed to the Kubernetes Cluster Add-ons - OCI section.
