Before installing Usage Engine Private Edition, you need to set up a Kubernetes cluster on Google Kubernetes Engine (GKE).
First, a basic Kubernetes cluster needs to be created. This can be done in two different ways:

- Using the `terraform` CLI tool.
- Using the GCP management console.

In this guide, `terraform` will be used, mainly because it enables you to create the basic Kubernetes cluster in minutes with a single command.

Once the basic Kubernetes cluster has been created, additional infrastructure needs to be added. For this, `terraform` is also used.
The templates used to set up the cluster can be found in the `gcp.tar.gz` file that is downloadable from the Release Information page.

Before proceeding, go to Release Information and download the `gcp.tar.gz` file for the Usage Engine Private Edition version that is being installed. Once downloaded, extract its contents to a suitable location, for example as shown below.
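As an illustration, assuming the archive was downloaded to the current directory, it could be extracted like this (the target path is only an example, use any location you prefer):

```
# Extract the gcp.tar.gz archive to a working directory (path is an example).
mkdir -p ~/uepe-install
tar -xzf gcp.tar.gz -C ~/uepe-install

# The terraform templates end up under <extract location>/gcp/terraform.
ls ~/uepe-install/gcp/terraform
```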
Assumptions
A few assumptions are made when using terraform to create the cluster resources:
- We assume that you have an existing parent domain, e.g. example.com, hosted in the same project as the cluster that is created in the coming section, and that you wish to access the cluster environment through that hostname. Terraform will create a subdomain in the format `<cluster_name>.<domain>`. For example:
  - cluster name: uepe-gke
  - domain: example.com
  - final domain: uepe-gke.example.com
- In addition, we assume that terraform is allowed to add an NS (name server) record to the parent domain. This is to allow DNS delegation from the parent domain to the subdomain.
- Please note that if your parent domain is not under the same project, or is hosted by another cloud provider, you must set `auto_create_ns_record` to false in the terraform template to disable automatic creation of the subdomain NS record in the parent domain.
- The service hostnames created by Usage Engine Private Edition will be accessible in the format `<service_name>.<cluster_name>.<domain>`, e.g. desktop-online.uepe-gke.example.com.
- Terraform needs to persist the state of your provisioned infrastructure. By default, the state file is stored locally on the computer from which terraform is executed. However, if several people work on the infrastructure, it is recommended to store the state file in remote persistent storage, such as a Cloud Storage bucket; see https://cloud.google.com/docs/terraform/resource-management/store-state for more information. A minimal example follows after this list.
- We use Filestore (NFS) as the default persistent storage for data that needs to be persisted.
- We use Cloud SQL for the Usage Engine Private Edition database. The default instance type is PostgreSQL.
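As an illustration of the remote state point above, a Cloud Storage backend can be configured with a few lines of terraform before running `terraform init`. This is a minimal sketch, not part of the delivered templates; the bucket name and prefix are placeholders, and the bucket must be created beforehand:

```
# Sketch: store terraform state in a Cloud Storage bucket instead of locally.
# "my-uepe-tfstate" is a placeholder; the bucket must already exist.
cat > backend.tf <<'EOF'
terraform {
  backend "gcs" {
    bucket = "my-uepe-tfstate"   # pre-created Cloud Storage bucket
    prefix = "uepe-gke/state"    # path within the bucket for this cluster
  }
}
EOF

# Re-initialize terraform and migrate any existing local state to the bucket.
terraform init -migrate-state
```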
Create Basic Cluster and Additional Infrastructure
The following steps explain how to create a basic Kubernetes cluster with public and private VPC:

1. Go to `<the location where you extracted the gcp.tar.gz file>/gcp/terraform` and copy `terraform.tfvars.example` to `terraform.tfvars`.
2. Edit the `terraform.tfvars` file. Specify the desired cluster `name`, GCP `region` and `kubernetes_version` prefix (please refer to the Compatibility Matrix to find out which Kubernetes versions are compatible with this release of Usage Engine Private Edition). Also specify your GCP `project id` (which can be found on the GCP dashboard), as well as the desired number of nodes per region (`gke_num_nodes`). If you will be running with a database other than Derby, also specify `db_password`, `db_version` and `db_allocated_storage`.
| terraform.tfvars | Where to get the value from? |
|---|---|
| `project_id` | In the GCP management console, this is the Project ID that is listed on the project dashboard. |
| `project_number` | In the GCP management console, this is the Project Number that is listed on the project dashboard. |
| `region` | The region in which you will install your cluster, refer to https://cloud.google.com/compute/docs/regions-zones for possible values. Or use the command `gcloud compute regions list`. |
| `cluster_name` | A name for your cluster. Cluster names must start with a lowercase letter followed by up to 39 lowercase letters, numbers or hyphens. They can't end with a hyphen. The cluster name must be unique in the project. |
| `domain` | Your existing domain name. In the GCP management console, this is the DNS name that is listed on the Cloud DNS page. |
| `kubernetes_version` | Prefix version for Kubernetes. Refer to the Compatibility Matrix for compatible versions. |
| `gke_num_nodes` | Number of cluster nodes per zone. |
| `db_password` | Choose a secure password for the system database administrator. Minimum 10 characters. |
| `db_version` | Database version, check https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/sql_database_instance#database_version for possible values. The default is a PostgreSQL version. |
| `db_allocated_storage` | Allocated amount of storage for the database. Default is "10" (10 GB). |
| `filestore_location` | The zone of the Filestore instance. To find out the available zones of your region, use the command `gcloud compute zones list --filter="region:<region>"`, replacing `<region>` with your region. |
Example:
```
# SET THESE:
# The below values must be set explicitly in order for the setup to work correctly.

# Project settings, use command `gcloud projects list` to retrieve project info.
project_id     = "pt-dev-stratus-bliz"
project_number = "413241157368"

# Region to deploy, use command `gcloud compute regions list` to get available regions.
region = "europe-north1"

# Name of the cluster, it must be unique in the project.
cluster_name = "my-uepe-gke-1"

# Domain DNS name
# The DNS zone must already exist in Cloud DNS or in another cloud provider's DNS zone.
# We'll create a subdomain zone from the parent domain, the final domain will be in the format "<cluster_name>.<domain>".
# Please note that if this domain is hosted on another GCP project or other cloud provider, then you must
# set auto_create_ns_record = false and manually add the subdomain NS record to the parent domain.
domain = "pe-mz.gcp.digitalroute.net"

# Admin user password to the database
db_password = "super_SeCrEt_db_pAsSwOrD_457"

.........

# Filestore:
# Network file system (NFS) persistent storage.
# For testing purposes, you could use block storage as a cheaper alternative.
# However, do note that block storage has its limitations: it only works for a single node cluster setup (ReadWriteOnce access mode).
# See https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview for an explanation.
filestore_enabled = true

# Service tier of the instance.
# See https://cloud.google.com/filestore/docs/reference/rest/v1/Tier for available service tiers.
filestore_service_tier = "STANDARD"

# Location of the instance, you MUST set a zone if the service tier is not ENTERPRISE. For ENTERPRISE tier, this can be a region.
# To find out available zones of your region, use command `gcloud compute zones list --filter="region:europe-north1"`.
filestore_location = "europe-north1-a"

# Storage capacity in GB, must be at least 1024.
filestore_capacity = 1024

# The name of the fileshare (16 characters or less).
fileshare_name = "share1"
```
Important notes if your parent domain zone is not under the same project:

1. You need to set `auto_create_ns_record = false` to disable automatic creation of the subdomain NS record in the parent domain.
2. Perform `terraform apply`.
3. After `terraform apply` has finished, copy the name servers value from the terraform output and manually add them to the parent domain as an NS record, for example as shown below. If you are not using Cloud DNS for the parent domain, please refer to your domain registrar's documentation on how to add an NS record.
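If the parent zone is hosted in Cloud DNS but in another GCP project, the NS record can be added with `gcloud`. This is a sketch using the example domain and name servers from the sample terraform output further down; `<parent-project-id>` and `<parent-zone-name>` are placeholders for your own setup:

```
# Add the subdomain NS record to a parent zone hosted in another project.
# The record name and name servers come from the terraform output.
gcloud dns record-sets create my-uepe-gke-1.pe-mz.gcp.digitalroute.net. \
  --project=<parent-project-id> \
  --zone=<parent-zone-name> \
  --type=NS \
  --ttl=300 \
  --rrdatas="ns-cloud-b1.googledomains.com.,ns-cloud-b2.googledomains.com.,ns-cloud-b3.googledomains.com.,ns-cloud-b4.googledomains.com."
```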
Authenticate your computer with GCP
gcloud auth application-default login
Run the following commands:

```
terraform init
terraform plan
terraform apply
```
Wait for the terraform commands to finish.
```
Apply complete! Resources: 20 added, 0 changed, 0 destroyed.

Outputs:

cert_manager_namespace = "cert-manager"
cert_manager_service_account = "cert-manager-my-uepe-gke-1@pt-dev-stratus-bliz.iam.gserviceaccount.com"
db_endpoint = "db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net"
external_dns_namespace = "uepe"
external_dns_service_account = "external-dns-my-uepe-gke-1@pt-dev-stratus-bliz.iam.gserviceaccount.com"
filestore_capacity_gb = 1024
filestore_csi_volume_handle = "modeInstance/europe-north1-a/my-uepe-gke-1-filestore/share1"
filestore_ip_address = "10.143.245.42"
filestore_persistence_yaml = "./manifests/filestore_persistence.yaml"
filestore_share_name = "share1"
gke_domain_dns_name = "my-uepe-gke-1.pe-mz.gcp.digitalroute.net"
gke_domain_zone_name = "my-uepe-gke-1-pe-mz-gcp-digitalroute-net"
kubernetes_cluster_host = "34.124.151.111"
kubernetes_cluster_location = "europe-north1"
kubernetes_cluster_name = "my-uepe-gke-1"
name_servers = tolist([
  "ns-cloud-b1.googledomains.com.",
  "ns-cloud-b2.googledomains.com.",
  "ns-cloud-b3.googledomains.com.",
  "ns-cloud-b4.googledomains.com.",
])
project_id = "pt-dev-stratus-bliz"
project_number = "413241157368"
region = "europe-north1"
```
Make sure to save the terraform output above, as it is used as input throughout the remainder of this installation guide.
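One convenient way to keep the output around is to write it to a file; individual values can also be read back from the state at any time with `terraform output` (the file name below is just a suggestion):

```
# Save the full terraform output for later reference.
terraform output > terraform-output.txt

# Individual values can be retrieved at any time, e.g.:
terraform output -raw db_endpoint
terraform output -raw gke_domain_dns_name
```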
A persistent volume and persistent volume claim yaml file is generated at the end of the terraform apply. This yaml file is located at manifests/filestore_persistence.yaml and will be applied in a later section.

Please note that the persistent volume setup is an optional step. Ignore this yaml file if you do not intend to use persistent file storage.
A fully functional Kubernetes cluster has now been set up successfully.

A Cloud SQL PostgreSQL database instance is up and running on the private subnet VPC, with default listening port 5432. The default database `postgres` is accessible within the cluster at the endpoint `db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net` with the admin username `postgres`.
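Once cluster access has been configured (see Configure Cluster Access below), the database endpoint can be verified from inside the cluster. This is a sketch, assuming the example endpoint above and a PostgreSQL client image; you will be prompted for the `db_password` set in terraform.tfvars:

```
# Verify connectivity to the Cloud SQL instance from a temporary pod.
# The host matches the db_endpoint value in the terraform output above.
kubectl run psql-check --rm -it --restart=Never --image=postgres:15 -- \
  psql "host=db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net user=postgres dbname=postgres" \
  -c 'SELECT version();'
```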
You can check the status of the cluster, the database and the other resources in the GCP dashboard.
Configure Cluster Access
`kubectl` and other Kubernetes clients require an authentication plugin, `gke-gcloud-auth-plugin`, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters.

You must install this plugin to use `kubectl` and other clients to interact with GKE. Existing clients display an error message if the plugin is not installed.
Check whether the plugin is already installed:
gke-gcloud-auth-plugin --version
If the output displays “command not found: gke-gcloud-auth-plugin”, proceed to install the plugin; otherwise, skip the plugin installation.
Install the `gke-gcloud-auth-plugin` binary:
gcloud components install gke-gcloud-auth-plugin
Check the plugin version again. The output should display the plugin version information.
gke-gcloud-auth-plugin --version
To bind your local kubectl to the created cluster, run the following command.
gcloud container clusters get-credentials <cluster_name> --region=<cluster_region_zone>
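For instance, with the example values used earlier in this guide:

```
gcloud container clusters get-credentials my-uepe-gke-1 --region=europe-north1
```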
By default, credentials are written to `$HOME/.kube/config`. You can provide an alternate path by setting the `KUBECONFIG` environment variable. For instance:
export KUBECONFIG=<full path to ./kubeconfig.yaml>
Verify the configuration:
kubectl get nodes
Create a namespace called `uepe`:

```
kubectl create namespace uepe
```

Unless explicitly stated, this is the namespace that is used throughout the remainder of this installation guide.

Hint! You can also create and use a namespace with another name. This command shows all namespaces that currently exist in your cluster:

```
kubectl get namespaces
```
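Optionally, you can make `uepe` the default namespace for your current kubectl context, so that `-n uepe` can be omitted from subsequent commands:

```
# Set uepe as the default namespace for the current context.
kubectl config set-context --current --namespace=uepe
```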
Persistent Volume and Persistent Volume Claim
Please note that the persistent volume setup is an optional step. Skip this section if you do not intend to use persistent file storage.
Persistent Volume (PV) and Persistent Volume Claim (PVC) must be set up before the Usage Engine Private Edition Helm chart installation. The PV and PVC yaml file has already been generated in `<terraform script directory>/manifests/filestore_persistence.yaml`.
Change directory to <terraform script directory>/manifests.
To set up the Persistent Volume and Persistent Volume Claim:
kubectl apply -f filestore_persistence.yaml -n uepe
To verify that the PVC is bound to the allocated Persistent Volume:
kubectl get pvc -n uepe
The output should show that the PVC status is `Bound`:

```
NAME                          STATUS   VOLUME                       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-uepe-gke-1-filestore-pvc   Bound    my-uepe-gke-1-filestore-pv   1024       RWX                           14h
```
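You can also inspect the corresponding Persistent Volume directly; the PV name below is taken from the sample output above:

```
# Check that the persistent volume backing the claim is bound as well.
kubectl get pv my-uepe-gke-1-filestore-pv
```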
This section is now complete. Now proceed to the Kubernetes Cluster Add-ons - GCP section.