Before installing Usage Engine Private Edition, you need to set up a Kubernetes cluster on OCI OKE (Oracle’s managed Kubernetes service).
First, a basic Kubernetes cluster needs to be created. You can do this in two different ways:

- Using the terraform tool.
- Using the OCI management console.

In this guide, terraform will be used, mainly because it enables you to create the basic Kubernetes cluster in minutes with just a single command.

Once the basic Kubernetes cluster has been created, additional infrastructure needs to be added. You can use terraform for this as well.

Before proceeding, go to Release Information and download the oci.tar.gz file for the Usage Engine Private Edition version that you want to install. Once downloaded, extract its content to a suitable location.
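For example, assuming the archive was downloaded to the current directory (the target path ~/uepe is an arbitrary example location, not a requirement):

```shell
# Create a working directory and unpack the release archive into it
mkdir -p ~/uepe
tar -xzf oci.tar.gz -C ~/uepe

# The terraform templates used later in this guide end up under <extract dir>/oci/terraform
ls ~/uepe/oci/terraform
```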
Assumptions
There are a few assumptions made when using terraform to create the cluster resources:

- You have an existing parent domain, in the example below example.com, hosted on the same account as the cluster that we are going to create in the following section, and you want to access the cluster environment via hostnames under that domain. Terraform will create a subdomain in the format <cluster_name>.<domain>. For example:
  - cluster name: uepe-oke
  - domain: example.com
  - final domain: uepe-oke.example.com
- Terraform is allowed to add a NS (name server) record to the parent domain. This is needed to allow DNS delegation from the parent domain to the subdomain. If your parent domain is not under the same account, or is hosted by another cloud provider, you must set auto_create_ns_record to false in the terraform template to disable automatic creation of the subdomain NS record in the parent domain.
- The service hostnames created by Usage Engine Private Edition will be accessible in the format <service_name>.<cluster_name>.<domain>, for example desktop-online.uepe-oke.example.com.
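The naming scheme described above can be illustrated with a short shell sketch (uepe-oke and example.com are just the example values from this section):

```shell
cluster_name="uepe-oke"
domain="example.com"

# Terraform creates the subdomain <cluster_name>.<domain>
final_domain="${cluster_name}.${domain}"
echo "${final_domain}"                  # uepe-oke.example.com

# Services become reachable as <service_name>.<cluster_name>.<domain>
echo "desktop-online.${final_domain}"   # desktop-online.uepe-oke.example.com
```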
Terraform needs to persist the state of your provisioned infrastructure. By default, the state file is stored locally on the computer that terraform is executed from. However, if multiple persons are working on the infrastructure, then it is recommended to store the state file using a remote persistence such as Object Storage, see https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformUsingObjectStore.htm for more information.
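If you opt for remote state, Object Storage can be used through its S3-compatible API. The sketch below is only an illustration: the bucket name, key, namespace and region are placeholders, and the exact backend settings depend on your terraform version.

```hcl
terraform {
  backend "s3" {
    bucket = "uepe-terraform-state"
    key    = "uepe/terraform.tfstate"
    region = "eu-frankfurt-1"
    # OCI Object Storage S3-compatibility endpoint for the chosen region
    endpoint = "https://<namespace>.compat.objectstorage.eu-frankfurt-1.oraclecloud.com"
    # Required because the backend talks to a non-AWS, path-style endpoint
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
  }
}
```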
The OCI File System service (NFS) is used as the default persistent storage for data that needs to be persisted.
The OCI Managed PostgreSQL service is used as the Usage Engine Private Edition database.
User principal authentication is used throughout the entire installation. The user must prepare the private key file locally. The user can create and download the private key via the OCI console by selecting Profile | My Profile | API keys | Add API key.
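If you prefer to generate the API signing key pair locally instead of downloading it from the console, it can be created with openssl as sketched below (the ~/.oci/ path is just a common convention; afterwards, upload the public key under Profile | My Profile | API keys):

```shell
mkdir -p ~/.oci

# Generate a 2048-bit RSA private key in PEM format
openssl genrsa -out ~/.oci/oci_api_key.pem 2048
chmod 600 ~/.oci/oci_api_key.pem

# Derive the public key; this is what you upload in the OCI console
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
```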
Create Basic Cluster and Additional Infrastructure
To create a basic Kubernetes cluster with public and private VPC:
1. Go to <folder where you extracted the oci.tar.gz file>/oci/terraform and copy terraform.tfvars.example to terraform.tfvars.
2. Edit the terraform.tfvars file. Specify the desired cluster name, OCI region and kubernetes_version (see the Compatibility Matrix (4.2) to find out which Kubernetes versions are compatible with this release of Usage Engine Private Edition). Specify your OCI tenancy_ocid, user_ocid, fingerprint, compartment_ocid and private_key_path (which can be found on the OCI dashboard's Profile page), as well as the desired number of nodes per cluster (oke_num_nodes).
3. If you are going to use a database other than Derby, also specify db_password, db_version and db_username.
| terraform.tfvars | Where to get the value from? |
|---|---|
| tenancy_ocid | In the OCI management console, the tenancy OCID is shown on the tenancy details page (Profile > Tenancy). |
| user_ocid | In the OCI management console, the user OCID is shown on your user details page (Profile > My Profile). |
| fingerprint | The fingerprint is only available once the user has created an API key, see Profile > My Profile > API keys. |
| compartment_ocid | In the OCI management console, compartment OCIDs are listed under Identity & Security > Compartments. |
| private_key_path | The full path to your private key file. To create and download your private key, go to Profile > My Profile > API keys > Add API key. |
| region | The region in which you will install your cluster, for example eu-frankfurt-1. |
| cluster_name | A name for your cluster. Cluster names must start with a lowercase letter followed by up to 39 lowercase letters, numbers or hyphens. They cannot end with a hyphen. The cluster name must be unique in the project. |
| domain | Your existing domain name. In the OCI management console, this is the DNS zone name. The service hostname created by Usage Engine Private Edition will be accessible in the format <service_name>.<cluster_name>.<domain>. |
| kubernetes_version | The Kubernetes version as an alphanumeric string, for example "v1.29.1". |
| oke_num_nodes | The number of cluster nodes in numeric format, for example "3". |
| availability domain | The availability domain name for the cluster. |
| db_password | Choose a secure password for the system database administrator, minimum 10 characters. |
| db_version | The database version in numeric format. |
| node image OCID | The OCID of the image to be used for worker node instance creation. To see the available images under your compartment, use the command oci compute image list --compartment-id <compartment_ocid>. |
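As an illustration, a filled-in terraform.tfvars could look as follows. All values below are placeholders, not working credentials; the variable names follow the ones listed in the steps above, but check terraform.tfvars.example for the authoritative set.

```hcl
# Example values only - replace every OCID, path and password with your own.
tenancy_ocid     = "ocid1.tenancy.oc1..aaaa..."
user_ocid        = "ocid1.user.oc1..aaaa..."
fingerprint      = "12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0"
compartment_ocid = "ocid1.compartment.oc1..aaaa..."
private_key_path = "/home/youruser/.oci/oci_api_key.pem"

region             = "eu-frankfurt-1"
cluster_name       = "uepe-oke"
domain             = "example.com"
kubernetes_version = "v1.29.1"
oke_num_nodes      = 3

# Only needed when running with a database other than Derby
db_username = "postgres"
db_password = "super_SeCrEt_db_pAsSwOrD_457!"
```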
Run the following commands:
```
terraform init
terraform plan
terraform apply
```
Wait until the terraform commands have completed and you see the following kind of information:
```
Apply complete! Resources: 35 added, 0 changed, 0 destroyed.

Outputs:

backend_nsg = "ocid1.networksecuritygroup.oc1.eu-frankfurt-1.aaaaaaaacreo4kf5kd2n7nk4fn2kcsuv6kye2noowhpjypcmrqmms32gpg3a"
cluster_dns_zone_name = "test-uepe-cluster-1.stratus.oci.digitalroute.net"
cluster_dns_zone_name_servers = [
  "ns1.p201.dns.oraclecloud.net.",
  "ns2.p201.dns.oraclecloud.net.",
  "ns3.p201.dns.oraclecloud.net.",
  "ns4.p201.dns.oraclecloud.net.",
]
cluster_dns_zone_ocid = "ocid1.dns-zone.oc1..aaaaaaaacd5nsfzmir3efo5e2pcuga4t622vcxcqkc3ezizl64e5gofo7dza"
cluster_name = "test-uepe-cluster-1"
cluster_ocid = "ocid1.cluster.oc1.eu-frankfurt-1.aaaaaaaaerg6ctgepnuaipifispmuweqi5nvfhswxpu3luuctcvitslu3fea"
compartment_ocid = "ocid1.compartment.oc1..aaaaaaaa56wmblidgvvicamsqkf7sqcqu5yxdhvu3wlvomzgonhflcrv6kcq"
db_admin_user = "postgres"
db_endpoint = "db5j5pt3qwjqmmjgfremgugr7cxtsq-dbinstance-70c946d1330e.postgresql.eu-frankfurt-1.oc1.oraclecloud.com"
db_port = 5432
filesystem_mount_path = "/uepe"
filesystem_ocid = "ocid1.filesystem.oc1.eu_frankfurt_1.aaaaaaaaaais2zcnmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtgaaa"
kms_key_ocid = ""
loadbalancer_ocid = "ocid1.loadbalancer.oc1.eu-frankfurt-1.aaaaaaaanmx4u2yllufrjetacqt5bsgiyznkg7fif3bjfl36xoduyngesvra"
loadbalancer_subnet_ocid = "ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaapyqsowgik7gak3wkihsm3jtronnc5klbf46jerjnudrqsnlbco5q"
mount_target_IP_address = "10.0.4.212"
mount_target_subnet_ocid = "ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaaoh36ywx4rki7qtre33f53amjy2zylm6mnqeix6cydn5ul4shfqja"
region = "eu-frankfurt-1"
tenancy_ocid = "ocid1.tenancy.oc1..aaaaaaaamnl7f7t2yrlas2si7b5hpo6t23dqi6mjo3eot6ijl2nqcog5h6ha"
```
Note: Make sure to save the output from terraform above, since it is used as input throughout the remainder of this installation guide.
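One convenient way to keep the output around is to export it with terraform's own output command (the file name tf-outputs.json is an arbitrary choice):

```shell
# Re-print a single output value later without rerunning apply
terraform output db_endpoint

# Or save all outputs as JSON for use in scripts
terraform output -json > tf-outputs.json
```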
A basic Kubernetes cluster has now been set up successfully.
A PostgreSQL database instance is up and running on a private subnet with default listening port 5432. The default database postgres is accessible within the cluster at endpoint db5j5pt3qwjqmmjgfremgugr7cxtsq-dbinstance-70c946d1330e.postgresql.eu-frankfurt-1.oc1.oraclecloud.com with admin username postgres.

You can check the status of the cluster, database and the other resources in the OCI dashboard.
Configure Cluster Access
To configure cluster access, run the following command:
```
oci ce cluster create-kubeconfig --cluster-id <cluster ocid> --file ./kubeconfig.yaml --region eu-frankfurt-1 --token-version 2.0.0 --kube-endpoint PUBLIC_ENDPOINT
```
A ./kubeconfig.yaml
file containing information on how to connect to your newly created cluster will be generated. Set the KUBECONFIG
environment variable to point to that file by running the following command:
```
export KUBECONFIG=<full path to ./kubeconfig.yaml>
```
This will ensure that tools like kubectl
and helm
will connect to your newly created cluster.
You can check the status of the cluster nodes by running the following command:
```
kubectl get nodes
```
In this example cluster, the output will look something like this:
```
NAME         STATUS   ROLES   AGE   VERSION
10.0.2.111   Ready    node    27h   v1.29.1
10.0.2.158   Ready    node    27h   v1.29.1
10.0.2.230   Ready    node    27h   v1.29.1
```
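If the nodes are still starting up, you can block until all of them report Ready before proceeding (the timeout value is an arbitrary example):

```shell
# Wait up to 5 minutes for every node to reach the Ready condition
kubectl wait --for=condition=Ready nodes --all --timeout=300s
```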
This section is now complete and you can proceed to the Kubernetes Cluster Add-ons - OCI (4.2) section.