Set Up Kubernetes Cluster - GCP (4.3)
Before installing Usage Engine Private Edition, you need to set up a Kubernetes cluster on GCP Kubernetes Engine.
First, a basic Kubernetes cluster needs to be created. This can be done in two different ways:
- Using the terraform CLI tool.
- Using the GCP management console.
In this guide, terraform will be used, mainly because it enables you to create the basic Kubernetes cluster in minutes with a single command.
Once the basic Kubernetes cluster has been created, additional infrastructure needs to be added. Terraform is used for this as well.
The templates used to set up the cluster can be found in the gcp.tar.gz file that is downloadable from Release Information.
Before proceeding, go to Release Information and download the gcp.tar.gz file for the Usage Engine Private Edition version that is being installed. Once downloaded, extract its contents to a suitable location.
Assumptions
A few assumptions are made when using terraform to create cluster resources:
We assume you have an existing parent domain, e.g. example.com, hosted in the same project as the cluster that will be created in the coming section, and that you wish to access the cluster environment through a hostname. Terraform will create a subdomain in the format <cluster_name>.<domain>. For example:
  - cluster name: uepe-gke
  - domain: example.com
  - final domain: uepe-gke.example.com
In addition, we assume terraform is allowed to add an NS (name server) record to the parent domain. This allows DNS delegation from the parent domain to the subdomain.
Terraform needs to persist the state of your provisioned infrastructure. By default, the state file is stored locally on the computer that terraform is executed from. However, if multiple people work on the infrastructure, it is recommended to store the state file on remote persistent storage, such as a Cloud Storage bucket. See Store Terraform state in a Cloud Storage bucket | Google Cloud for more information.
We use Filestore (NFS) as the default persistent storage for data that needs to be persisted.
We use Cloud SQL for the Usage Engine Private Edition database; the default instance type is PostgreSQL.
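The remote state recommendation above can be implemented with Terraform's gcs backend. A minimal sketch, assuming a pre-created bucket (the bucket name and prefix below are hypothetical):

```hcl
# Store terraform state in a Cloud Storage bucket instead of on the local disk.
# The bucket must exist before `terraform init` is run.
terraform {
  backend "gcs" {
    bucket = "my-terraform-state"  # hypothetical bucket name
    prefix = "uepe-gke/state"      # path within the bucket
  }
}
```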
Create Basic Cluster and Additional Infrastructure
The following steps explain how to create a basic Kubernetes cluster with public and private VPC:
1. Go to <the location where you extracted the gcp.tar.gz file>/gcp/terraform and copy terraform.tfvars.example to terraform.tfvars.
2. Edit the terraform.tfvars file. Specify the desired cluster name, GCP region and kubernetes_version prefix (refer to the Compatibility Matrix (4.3) to find out which Kubernetes versions are compatible with this release of Usage Engine Private Edition). Also specify your GCP project id (which can be found on the GCP dashboard), as well as the desired number of nodes per region (gke_num_nodes). If you will be running with a database other than Derby, also specify db_password, db_version and db_allocated_storage.
| terraform.tfvars | Where to get the value from? |
|---|---|
| project_id | In the GCP management console, this is the Project ID listed for your project. |
| project_number | In the GCP management console, this is the Project Number listed for your project. |
| region | The region in which you will install your cluster. Refer to the Regions and zones page in the Compute Engine documentation for possible values, or use the command `gcloud compute regions list`. |
| cluster_name | A name for your cluster. Cluster names must start with a lowercase letter followed by up to 39 lowercase letters, numbers or hyphens, and cannot end with a hyphen. The cluster name must be unique in the project. |
| domain | Your existing domain name. In the GCP management console, this is the DNS name listed in Cloud DNS. The service hostnames created by Usage Engine Private Edition will be accessible in the format `<service>.<cluster_name>.<domain>`. |
| kubernetes_version | Prefix version for Kubernetes. Refer to the Compatibility Matrix (4.3) for the versions that are compatible with this release. |
| gke_num_nodes | Number of cluster nodes per zone. |
| db_password | Choose a secure password for the system database administrator. Minimum 10 characters. |
| db_version | Database version; check the Terraform Registry for possible values. |
| db_allocated_storage | Allocated amount of storage for the database. Default is "10" (10 GB). |
| filestore_location | To find out available zones of your region, use the command `gcloud compute zones list --filter="region:<region>"`, replacing `<region>` with your region. |
| auto_create_ns_record | Boolean flag to enable automatic creation of the subdomain NS record in the parent domain. If your parent domain is not under the same project, or is hosted by another cloud provider, you must set this to false. |
Example:
# ____ _____ _____ _____ _ _ _____ ____ _____
# / ___|| ____|_ _| |_ _| | | | ____/ ___|| ____|_
# \___ \| _| | | | | | |_| | _| \___ \| _| (_)
# ___) | |___ | | | | | _ | |___ ___) | |___ _
# |____/|_____| |_| |_| |_| |_|_____|____/|_____(_)
# The below values must be set explicitly in order for the setup to work correctly.
# Project settings, use command `gcloud projects list` to retrieve project info.
project_id = "pt-dev-stratus-bliz"
project_number = "413241157368"
# Region to deploy, use command `gcloud compute regions list` to get available regions.
region = "europe-north1"
# Name of the cluster, it must be unique in the project.
cluster_name = "my-uepe-gke-1"
# Domain DNS name
# The DNS zone must already exist in Cloud DNS or in other cloud provider DNS zone.
# We'll create a subdomain zone from parent domain, the final domain will be in format "<cluster_name>.<domain>".
# Please note that if this domain is hosted on another GCP project or other cloud provider, then you must
# set auto_create_ns_record = false and manually add the subdomain NS record to the parent domain.
domain = "pe-mz.gcp.digitalroute.net"
# Admin user password to the database
db_password = "super_SeCrEt_db_pAsSwOrD_457"
.........
# _____ _ _ _
# | ___(_) | ___ ___| |_ ___ _ __ ___
# | |_ | | |/ _ \/ __| __/ _ \| '__/ _ \
# | _| | | | __/\__ \ || (_) | | | __/
# |_| |_|_|\___||___/\__\___/|_| \___|
# Network file system (NFS) persistent storage
# For testing purpose, you could use block storage as alternative cheaper option.
# However do note that block storage has its limitation where it only works for single node cluster setup (ReadWriteOnce access mode).
# See https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview for explanation.
filestore_enabled = true
# Service tier of the instance
# See https://cloud.google.com/filestore/docs/reference/rest/v1/Tier for available service tier.
filestore_service_tier = "STANDARD"
# Location of the instance, you MUST set a zone if the service tier is not ENTERPRISE. For ENTERPRISE tier, this can be a region.
# To find out available zones of your region, use command `gcloud compute zones list --filter="region:europe-north1"`.
filestore_location = "europe-north1-a"
# Storage capacity in GB, must be at least 1024
filestore_capacity = 1024
# The name of the fileshare (16 characters or less)
fileshare_name = "share1"
Important notes if your parent domain zone is not under the same project:
You need to set auto_create_ns_record = false to disable subdomain NS record auto creation in the parent domain. Then perform terraform apply.
After terraform apply has finished, copy the name servers value from the terraform output and manually add them to the parent domain as an NS record. If you are not using Cloud DNS for the parent domain, please refer to your domain registrar's documentation on how to add an NS record.
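If the parent zone is hosted in Cloud DNS under another project, the NS record could be added with gcloud. A sketch with hypothetical project, zone, domain and name-server values (take the real ones from the terraform output):

```shell
# Add an NS record for the subdomain to the parent zone (all values below are placeholders).
gcloud dns record-sets create my-uepe-gke-1.example.com. \
  --project=parent-project-id \
  --zone=parent-zone-name \
  --type=NS --ttl=300 \
  --rrdatas=ns-cloud-a1.googledomains.com.,ns-cloud-a2.googledomains.com.
```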
Authenticate your computer with GCP:
gcloud auth application-default login
Run the following commands:
terraform init
terraform plan
terraform apply
Wait for the terraform commands to finish.
Make sure to save the output from terraform above, as it is used as input throughout the remainder of this installation guide.
A persistent volume and persistent volume claim yaml file is generated at the end of terraform apply. This yaml file is located at manifests/filestore_persistence.yaml and will be applied in a later section.
Please note that the persistent volume setup is an optional step. Ignore this yaml file if you do not intend to have persistent file storage.
A fully functional Kubernetes cluster has now been set up successfully.
A Cloud SQL PostgreSQL database instance is up and running on a private subnet VPC with default listening port 5432. The default database postgres is accessible within the cluster at the endpoint db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net with admin username postgres.
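One way to verify database connectivity from inside the cluster is to start a temporary pod with the psql client. A sketch, assuming the example endpoint above and the public postgres image (replace the endpoint with your own):

```shell
# Launch a throwaway pod with the psql client and connect to the Cloud SQL endpoint.
# The endpoint below is the example from this guide; you will be prompted for the db_password.
kubectl run psql-check --rm -it --restart=Never --image=postgres:16 -- \
  psql -h db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net -U postgres -c "SELECT version();"
```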
You can check the status of the cluster, db and the other resources in the GCP dashboard.
Configure Cluster Access
kubectl
and other Kubernetes clients require an authentication plugin, gke-gcloud-auth-plugin
, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters.
You must install this plugin to use kubectl and other clients to interact with GKE. Existing clients display an error message if the plugin is not installed.
Check whether the plugin is already installed:
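A quick way to check is to run the plugin's version flag (assuming the gcloud SDK bin directory is on your PATH):

```shell
gke-gcloud-auth-plugin --version
```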
Proceed to install the plugin if the output displays “command not found: gke-gcloud-auth-plugin”; otherwise, skip the plugin installation.
Install the gke-gcloud-auth-plugin binary:
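A sketch of the installation, assuming gcloud was installed via the Google Cloud SDK installer (if you installed gcloud through an OS package manager, the plugin is typically shipped as a separate package such as google-cloud-sdk-gke-gcloud-auth-plugin):

```shell
gcloud components install gke-gcloud-auth-plugin
```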
Check the plugin version again. The output should display the plugin version information.
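Re-running the same check should now print version details:

```shell
gke-gcloud-auth-plugin --version
```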
Authorize gcloud to access the Cloud Platform with your Google user credentials:
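Assuming the standard gcloud login flow:

```shell
gcloud auth login
```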
To bind your local kubectl to the created cluster, run the following command.
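A sketch, assuming the example cluster name, region and project used earlier in this guide (replace them with your own values):

```shell
gcloud container clusters get-credentials my-uepe-gke-1 \
  --region europe-north1 \
  --project pt-dev-stratus-bliz
```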
Verify the configuration:
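For example, by listing the current context and the cluster nodes:

```shell
kubectl config current-context
kubectl get nodes
```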
Persistent Volume and Persistent Volume Claim
Persistent Volume (PV) and Persistent Volume Claim (PVC) must be set up before the Usage Engine Private Edition Helm chart installation. The PV and PVC yaml file has already been generated in <terraform script directory>/manifests/filestore_persistence.yaml.
Change directory to <terraform script directory>/manifests.
To set up the Persistent Volume and Persistent Volume Claim:
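A sketch, assuming you are in the manifests directory as described above:

```shell
kubectl apply -f filestore_persistence.yaml
```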
To ensure the PVC is bound to the allocated Persistent Volume:
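For example:

```shell
kubectl get pvc
```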
The output should display the PVC status as Bound.
This section is now complete and you can proceed to the Kubernetes Cluster Add-ons - GCP (4.3) section.