...

  1. We assume you have an existing parent domain, e.g. example.com, hosted in the same project as the cluster that we are going to create in the coming section, and that you wish to access the cluster environment through the hostname. Terraform will create a subdomain in the format <cluster_name>.<domain>.

    1. cluster name: uepe-gke

    2. domain: example.com

    3. final domain: uepe-gke.example.com

  2. In addition, we also assume that Terraform is allowed to add an NS (name server) record to the parent domain. This allows DNS delegation from the parent domain to the subdomain.

  3. Please note that if your parent domain is not in the same project, or is hosted by another cloud provider, you must set auto_create_ns_record to false in the Terraform template to disable automatic creation of the subdomain NS record in the parent domain, and add the NS record to the parent domain yourself (see the sketch after this list).

  4. The service hostnames created by Usage Engine Private Edition will be accessible in the format <service_name>.<cluster_name>.<domain>, e.g. desktop-online.uepe-gke.example.com.

  5. Terraform needs to persist the state of your provisioned infrastructure. By default, the state file is stored locally on the computer from which Terraform is executed. However, if several people work on the infrastructure, it is recommended to store the state file in remote persistent storage such as a Cloud Storage bucket, see https://cloud.google.com/docs/terraform/resource-management/store-state for more information.

  6. We use Filestore (NFS) as the default persistent storage for data that needs to be persisted.

  7. We use Cloud SQL for the Usage Engine Private Edition database; the default instance type is PostgreSQL.
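
If auto_create_ns_record is set to false, the NS delegation must be added to the parent domain manually. The following is a minimal sketch of doing this with gcloud, assuming the subdomain zone has already been created by Terraform; the zone names uepe-gke-example-com and example-com, and the project my-parent-project, are placeholders and not values used by the Terraform template:

Code Block
languagebash
# Look up the name servers assigned to the subdomain zone created by Terraform.
gcloud dns managed-zones describe uepe-gke-example-com \
    --format="value(nameServers)"

# Add an NS record for the subdomain to the parent zone, delegating it to the
# name servers returned above (replace the --rrdatas list accordingly).
gcloud dns record-sets create uepe-gke.example.com. \
    --zone=example-com \
    --project=my-parent-project \
    --type=NS \
    --ttl=300 \
    --rrdatas="ns-cloud-a1.googledomains.com.,ns-cloud-a2.googledomains.com."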

...

  1. Go to <the location where you extracted the gcp.tar.gz file>/gcp/terraform and copy the terraform.tfvars.example to terraform.tfvars.

  2. Edit the terraform.tfvars file.

  3. Specify the desired cluster name, GCP region and kubernetes_version_prefix (please refer to the Compatibility Matrix (4.2) to find out which Kubernetes versions are compatible with this release of Usage Engine Private Edition). Also specify your GCP project id (which can be found on the GCP dashboard), as well as the desired number of nodes per region (gke_num_nodes).

  4. If you will be running with a database other than Derby, also specify db_password, db_version and db_allocated_storage.

The following describes each parameter in terraform.tfvars and where to get its value from:

project_id

In the GCP management console, this is the Project ID that is listed on Cloud overview | Dashboard | Project info. Or use command gcloud projects list to retrieve project info.

project_number

In the GCP management console, this is the Project Number that is listed on Cloud overview | Dashboard | Project info. Or use command gcloud projects list to retrieve project info.

region

The region in which you will install your cluster; refer to https://cloud.google.com/compute/docs/regions-zones for possible values, or use the command gcloud compute regions list to get the values.

cluster_name

A name for your cluster. Cluster names must start with a lowercase letter followed by up to 39 lowercase letters, numbers or hyphens. They can't end with a hyphen. The cluster name must be unique in the project.

domain

Your existing domain name. In the GCP management console, this is the DNS name that is listed on the page Cloud DNS | Zones. Or use the command gcloud dns managed-zones list to get the DNS name.

The service hostnames created by Usage Engine Private Edition will be accessible in the format <service_name>.<cluster_name>.<domain>, e.g. desktop-online.uepe-gke.example.com.

kubernetes_version_prefix

Prefix of the Kubernetes version (default "1.27.").

gke_num_nodes

Number of cluster nodes per zone.

db_password

Choose a secure password for the system database administrator.

Minimum 10 characters.

db_version

Database version, check https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/sql_database_instance#database_version for possible values. Default is POSTGRES_15 (PostgreSQL version 15).

db_allocated_storage

Allocated amount of storage for the database. Default is “10” (10GB).

filestore_location

The zone in which the Filestore instance is created (a region may only be used with the ENTERPRISE service tier). To find out the available zones of your region, use the command gcloud compute zones list --filter="region:<region>".

Replace <region> with the region value configured above, i.e. the region in which you will install your cluster.

auto_create_ns_record

Boolean flag to enable automatic creation of the subdomain NS record in the parent domain. If your parent domain is not in the same project, or is hosted by another cloud provider, you must set this to false and create the NS record manually, as described above.

Example:

Code Block
languagetext
#  ____  _____ _____   _____ _   _ _____ ____  _____
# / ___|| ____|_   _| |_   _| | | | ____/ ___|| ____|_
# \___ \|  _|   | |     | | | |_| |  _| \___ \|  _| (_)
#  ___) | |___  | |     | | |  _  | |___ ___) | |___ _
# |____/|_____| |_|     |_| |_| |_|_____|____/|_____(_)

# The below values must be set explicitly in order for the setup to work correctly.

# Project settings, use command `gcloud projects list` to retrieve project info.
project_id = "pt-dev-stratus-bliz"
project_number = "413241157368"

# Region to deploy, use command `gcloud compute regions list` to get available regions.
region = "europe-north1"

# Name of the cluster, it must be unique in the project.
cluster_name = "my-uepe-gke-1"

# Domain DNS name
# The DNS zone must already exist in Cloud DNS or in other cloud provider DNS zone.
# We'll create a subdomain zone from parent domain, the final domain will be in format "<cluster_name>.<domain>".
# Please note that if this domain is hosted on another GCP project or other cloud provider, then you must
# set auto_create_ns_record = false and manually add the subdomain NS record to the parent domain.
domain = "pe-mz.gcp.digitalroute.net"

# Admin user password to the database
db_password = "super_SeCrEt_db_pAsSwOrD_457"

.........

#  _____ _ _           _
# |  ___(_) | ___  ___| |_ ___  _ __ ___
# | |_  | | |/ _ \/ __| __/ _ \| '__/ _ \
# |  _| | | |  __/\__ \ || (_) | | |  __/
# |_|   |_|_|\___||___/\__\___/|_|  \___|

# Network file system (NFS) persistent storage
# For testing purposes, you could use block storage as a cheaper alternative.
# However, do note that block storage only works for single node cluster setups (ReadWriteOnce access mode).
# See https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview for explanation.
filestore_enabled = true
# Service tier of the instance
# See https://cloud.google.com/filestore/docs/reference/rest/v1/Tier for available service tier.
filestore_service_tier = "STANDARD"
# Location of the instance, you MUST set a zone if the service tier is not ENTERPRISE. For ENTERPRISE tier, this can be a region.
# To find out available zones of your region, use command `gcloud compute zones list --filter="region:europe-north1"`.
filestore_location = "europe-north1-a"
# Storage capacity in GB, must be at least 1024
filestore_capacity = 1024
# The name of the fileshare (16 characters or less)
fileshare_name = "share1"
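
Provisioning then follows the standard Terraform workflow. A minimal sketch, run from the <the location where you extracted the gcp.tar.gz file>/gcp/terraform directory, with no project-specific flags assumed:

Code Block
languagebash
# Initialize the working directory (downloads providers and modules).
terraform init

# Preview the resources that will be created.
terraform plan

# Create the cluster and surrounding infrastructure.
terraform apply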

...

A fully functional Kubernetes cluster has now been set up successfully.

Insert excerpt
General Kubernetes Preparations
nameterraform state message
nopaneltrue

A Cloud SQL PostgreSQL database instance is up and running in a private subnet of the VPC, with the default listening port 5432. The default database postgres is accessible within the cluster at the endpoint db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net with the admin username postgres.
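
To verify database connectivity from inside the cluster, a temporary PostgreSQL client pod can be used. This is a minimal sketch, assuming the endpoint and admin username above and that the postgres:15 image can be pulled in your environment; the pod name psql-client is just a placeholder:

Code Block
languagebash
# Start a throwaway pod with the psql client and connect to the default database.
# You will be prompted for the db_password configured in terraform.tfvars.
kubectl run psql-client --rm -it --restart=Never --image=postgres:15 -- \
    psql -h db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net -U postgres -d postgres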

...

Code Block
languagebash
gke-gcloud-auth-plugin --version
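
If the plugin is not installed, it can typically be added as a gcloud component, as sketched below. If gcloud was installed through an OS package manager, install the distribution's plugin package instead, e.g. google-cloud-cli-gke-gcloud-auth-plugin on apt-based systems.

Code Block
languagebash
# Install the GKE auth plugin used by kubectl to authenticate against the cluster.
gcloud components install gke-gcloud-auth-plugin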

Run the following command to authorize gcloud access to Google Cloud with your Google user credentials.

Code Block
gcloud auth login

To bind your local kubectl to the created cluster, run the following command.

...
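
As a general sketch, kubectl credentials for a GKE cluster are fetched with gcloud container clusters get-credentials. The values below reuse the example values from terraform.tfvars above; substitute your own cluster name, region and project id:

Code Block
languagebash
# Fetch cluster credentials and add a context for the cluster to ~/.kube/config.
gcloud container clusters get-credentials my-uepe-gke-1 \
    --region europe-north1 \
    --project pt-dev-stratus-bliz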

Code Block
kubectl get nodes

Insert excerpt
UEPE4D:General Kubernetes Preparations
namecommon-namespace
nopaneltrue

...

Persistent Volume (PV) and Persistent Volume Claim (PVC) must be set up before installing the Usage Engine Private Edition Helm chart. The PV and PVC yaml manifests have already been generated in

<terraform script directory>/manifests/filestore_persistence.yaml.

Change directory to <terraform script directory>/manifests.

To set up the Persistent Volume and Persistent Volume Claim:

...
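
As a minimal sketch, applying the generated manifest could look like the following, assuming kubectl is bound to the cluster and that the namespace used for the installation (here uepe, a placeholder) already exists:

Code Block
languagebash
# Create the Persistent Volume and Persistent Volume Claim from the generated manifest.
kubectl apply -f filestore_persistence.yaml -n uepe

# Verify that the PVC is bound to the PV.
kubectl get pv,pvc -n uepe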

This section is now complete and you can proceed to the Kubernetes Cluster Add-ons - GCP (4.2) section.