...

The following describes each variable in terraform.tfvars and where to get its value from:

project_id

In the GCP management console, this is the Project ID listed under Cloud overview | Dashboard | Project info. Alternatively, use the command gcloud projects list to retrieve project info.

project_number

In the GCP management console, this is the Project Number listed under Cloud overview | Dashboard | Project info. Alternatively, use the command gcloud projects list to retrieve project info.

region

The region in which you will install your cluster. Refer to https://cloud.google.com/compute/docs/regions-zones for possible values, or use the command gcloud compute regions list to get them.

cluster_name

A name for your cluster. Cluster names must start with a lowercase letter followed by up to 39 lowercase letters, numbers or hyphens. They can't end with a hyphen. The cluster name must be unique in the project.

domain

Your existing domain name. In the GCP management console, this is the DNS name listed on the Cloud DNS | Zones page. Alternatively, use the command gcloud dns managed-zones list to get the DNS name.

kubernetes_version_prefix

Version prefix for Kubernetes (default "1.27.").

gke_num_nodes

Number of cluster nodes per zone.

db_password

Choose a secure password for the system database administrator.

Minimum 10 characters.

db_version

Database version. Check https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/sql_database_instance#database_version for possible values. The default is POSTGRES_15 (PostgreSQL version 15).

db_allocated_storage

Allocated amount of storage for the database. The default is "10" (10 GB).

filestore_location

To find the available zones in your region, use the command gcloud compute zones list --filter="region:<region>".

Replace <region> with the region value configured above, i.e. the region in which you will install your cluster.
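For example, the following commands can be used to look up most of these values (a quick sketch; the region in the zone filter is just an illustration):

Code Block
languagebash
# Project ID and project number
gcloud projects list
# Available regions
gcloud compute regions list
# DNS name of your existing Cloud DNS zone (the domain value)
gcloud dns managed-zones list
# Zones available in a region, e.g. for filestore_location
gcloud compute zones list --filter="region:europe-north1"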

Example:

Code Block
languageyaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: example-cluster
  region: eu-west-1
  version: "1.29"
  tags:
    deployment: aws-template

vpc:
  clusterEndpoints:
    publicAccess:  true
    privateAccess: true
    
iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: aws-load-balancer-controller
      namespace: uepe
      labels: {aws-usage: "aws-load-balancer-contoller"}
    wellKnownPolicies:
      awsLoadBalancerController: true
  - metadata:
      name: external-dns
      namespace: uepe
      labels: {aws-usage: "external-dns"}
    wellKnownPolicies:
      externalDNS: true
  - metadata:
      name: cert-manager
      namespace: cert-manager
    wellKnownPolicies:
      certManager: true
  - metadata:
      name: cluster-autoscaler
      namespace: uepe
      labels: {aws-usage: "cluster-ops"}
    wellKnownPolicies:
      autoScaler: true
  - metadata:
      name: efs-csi-controller-sa
      namespace: uepe
      labels: {aws-usage: "aws-efs-csi-driver"}
    wellKnownPolicies:
      efsCSIController: true
  - metadata:
      name: ebs-csi-controller-sa
      namespace: uepe
      labels: {aws-usage: "aws-ebs-csi-driver"}
    wellKnownPolicies:
      ebsCSIController: true

nodeGroups:
  - name: public-nodes
    instanceType: m5.large
    minSize: 3
    maxSize: 3
    desiredCapacity: 3
    volumeSize: 80
    labels: {role: worker}
    volumeEncrypted: true
    tags:
      nodegroup-role: worker

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
Info

IAM roles for service accounts (https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) have been configured for each cluster add-on under the iam.serviceAccounts section in the uepe-eks.yaml file above. Hence, a service account for each cluster add-on will be created in its specified namespace.

Please make sure to use the same namespace when installing the respective add-on in the Kubernetes Cluster Add-ons - OCI section.

For instance, using the namespaces specified in the uepe-eks.yaml file above means that:

  • external-dns must be installed in namespace uepe.

  • cert-manager must be installed in namespace cert-manager.
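As an illustration, cert-manager could then be installed into its designated namespace like this (a minimal sketch; the Jetstack chart repository and the release options are assumptions here, and the authoritative installation commands are found in the add-ons section):

Code Block
languagebash
# Add the Jetstack chart repository (assumed source for the cert-manager chart)
helm repo add jetstack https://charts.jetstack.io
helm repo update
# Install cert-manager into the cert-manager namespace, matching uepe-eks.yaml above
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true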

Execute the following command to create the cluster based on your desired settings:

Code Block
languagebash
eksctl create cluster -f uepe-eks.yaml --kubeconfig=./kubeconfig.yaml

A Kubernetes cluster with the desired number of nodes should be created within 15 minutes.

Also, the above eksctl command will generate a ./kubeconfig.yaml file containing information on how to connect to your newly created cluster. Make sure to set the KUBECONFIG environment variable to point to that file:

Code Block
languagebash
export KUBECONFIG=<full path to ./kubeconfig.yaml>

This ensures that tools like kubectl and helm connect to your newly created cluster.
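For example, with KUBECONFIG set as above, you can verify the connection like this:

Code Block
languagebash
# Shows the endpoint of the cluster kubectl is talking to
kubectl cluster-info
# Should list the worker nodes created by eksctl
kubectl get nodes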

You can check the status of the cluster nodes like this:

Code Block
languagebash
eksctl get nodegroup --cluster example-cluster

For this example cluster, the output will look something like this:

Code Block
CLUSTER         NODEGROUP       STATUS          CREATED                 MIN SIZE    MAX SIZE    DESIRED CAPACITY    INSTANCE TYPE   IMAGE ID                ASG NAME                                                                TYPE
example-cluster public-nodes    CREATE_COMPLETE 2024-03-11T13:59:28Z    3           3           3                   m5.large        ami-02e2de73058d55743   eksctl-example-cluster-nodegroup-public-nodes-NodeGroup-eb5aNADEiibs    unmanaged

Example of a terraform.tfvars file:

Code Block
languagetext
#  ____  _____ _____   _____ _   _ _____ ____  _____
# / ___|| ____|_   _| |_   _| | | | ____/ ___|| ____|_
# \___ \|  _|   | |     | | | |_| |  _| \___ \|  _| (_)
#  ___) | |___  | |     | | |  _  | |___ ___) | |___ _
# |____/|_____| |_|     |_| |_| |_|_____|____/|_____(_)

# The below values must be set explicitly in order for the setup to work correctly.

# Project settings, use command `gcloud projects list` to retrieve project info.
project_id = "pt-dev-stratus-bliz"
project_number = "413241157368"

# Region to deploy, use command `gcloud compute regions list` to get available regions.
region = "europe-north1"

# Name of the cluster, it must be unique in the project.
cluster_name = "my-uepe-gke-1"

# Domain DNS name
# The DNS zone must already exist in Cloud DNS or in another cloud provider's DNS zone.
# We'll create a subdomain zone from the parent domain; the final domain will be in the format "<cluster_name>.<domain>".
# Please note that if this domain is hosted in another GCP project or with another cloud provider, you must
# set auto_create_ns_record = false and manually add the subdomain NS record to the parent domain.
domain = "pe-mz.gcp.digitalroute.net"

# Admin user password to the database
db_password = "super_SeCrEt_db_pAsSwOrD_457"

.........

#  _____ _ _           _
# |  ___(_) | ___  ___| |_ ___  _ __ ___
# | |_  | | |/ _ \/ __| __/ _ \| '__/ _ \
# |  _| | | |  __/\__ \ || (_) | | |  __/
# |_|   |_|_|\___||___/\__\___/|_|  \___|

# Network file system (NFS) persistent storage
# For testing purposes, you could use block storage as a cheaper alternative.
# However, note that block storage only works for single-node cluster setups (ReadWriteOnce access mode).
# See https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview for an explanation.
filestore_enabled = true
# Service tier of the instance
# See https://cloud.google.com/filestore/docs/reference/rest/v1/Tier for available service tier.
filestore_service_tier = "STANDARD"
# Location of the instance. You MUST set a zone if the service tier is not ENTERPRISE. For the ENTERPRISE tier, this can be a region.
# To find out available zones of your region, use command `gcloud compute zones list --filter="region:europe-north1"`.
filestore_location = "europe-north1-a"
# Storage capacity in GB, must be at least 1024
filestore_capacity = 1024
# The name of the fileshare (16 characters or less)
fileshare_name = "share1"

Important notes if your parent domain zone is not under the same project:

  • You need to set auto_create_ns_record = false to disable subdomain NS record auto creation in the parent domain.

  • Perform terraform apply.

  • After terraform apply is finished, copy the name servers value from the terraform output and manually add it to the parent domain as an NS record, as shown in the example below. If you are not using Cloud DNS for the parent domain, please refer to your domain registrar's documentation on how to add NS records.
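As an illustration, if the parent zone is hosted in Cloud DNS in another project, the NS record could be added like this (a sketch only; the zone name, project ID, and name servers are placeholders taken from the example output further down):

Code Block
languagebash
# Add an NS record for the subdomain zone to the parent zone (all values are placeholders)
gcloud dns record-sets create my-uepe-gke-1.pe-mz.gcp.digitalroute.net. \
  --zone=parent-zone-name \
  --type=NS \
  --ttl=300 \
  --rrdatas=ns-cloud-b1.googledomains.com.,ns-cloud-b2.googledomains.com.,ns-cloud-b3.googledomains.com.,ns-cloud-b4.googledomains.com. \
  --project=parent-project-id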

  1. Authenticate your computer with GCP:

Code Block
gcloud auth application-default login
  2. Run the following commands:

Code Block
terraform init
terraform plan
terraform apply
  3. Wait for the terraform commands to finish. The final output should look similar to this:

Code Block
languagetext
Apply complete! Resources: 20 added, 0 changed, 0 destroyed.

Outputs:

cert_manager_namespace = "cert-manager"
cert_manager_service_account = "cert-manager-my-uepe-gke-1@pt-dev-stratus-bliz.iam.gserviceaccount.com"
db_endpoint = "db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net"
external_dns_namespace = "uepe"
external_dns_service_account = "external-dns-my-uepe-gke-1@pt-dev-stratus-bliz.iam.gserviceaccount.com"
filestore_capacity_gb = 1024
filestore_csi_volume_handle = "modeInstance/europe-north1-a/my-uepe-gke-1-filestore/share1"
filestore_ip_address = "10.143.245.42"
filestore_persistence_yaml = "./manifests/filestore_persistence.yaml"
filestore_share_name = "share1"
gke_domain_dns_name = "my-uepe-gke-1.pe-mz.gcp.digitalroute.net"
gke_domain_zone_name = "my-uepe-gke-1-pe-mz-gcp-digitalroute-net"
kubernetes_cluster_host = "34.124.151.111"
kubernetes_cluster_location = "europe-north1"
kubernetes_cluster_name = "my-uepe-gke-1"
name_servers = tolist([
  "ns-cloud-b1.googledomains.com.",
  "ns-cloud-b2.googledomains.com.",
  "ns-cloud-b3.googledomains.com.",
  "ns-cloud-b4.googledomains.com.",
])
project_id = "pt-dev-stratus-bliz"
project_number = "413241157368"
region = "europe-north1"
Info

Make sure to save the terraform output above, since it is used as input throughout the remainder of this installation guide.
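You can re-display the output at any time from the Terraform directory, for example:

Code Block
languagebash
# Print all outputs again
terraform output
# Or save them to a file for later reference
terraform output -json > terraform-output.json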

Info

A persistent volume and persistent volume claim yaml file is generated at the end of terraform apply. This yaml file is located at manifests/filestore_persistence.yaml and shall be applied in a later section.

Please note that the persistent volume setup is an optional step. Ignore this yaml file if you do not intend to have persistent file storage.
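For reference, applying the file in that later section amounts to something like this (optional step):

Code Block
languagebash
# Creates the persistent volume and persistent volume claim generated by terraform
kubectl apply -f manifests/filestore_persistence.yaml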

A fully functional Kubernetes cluster has now been set up successfully.

A Cloud SQL PostgreSQL database instance is up and running in a private VPC subnet with the default listening port 5432. The default database postgres is accessible within the cluster at the endpoint db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net with the admin username postgres.
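If you want to verify connectivity from within the cluster, a temporary PostgreSQL client pod can be used, for example (an illustration only, not a required step; the image tag matches the POSTGRES_15 default):

Code Block
languagebash
# Start a throwaway pod with psql and connect to the database endpoint
kubectl run psql-client --rm -it --image=postgres:15 -- \
  psql -h db.my-uepe-gke-1.pe-mz.gcp.digitalroute.net -U postgres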

You can check the status of the cluster, the database, and the other resources in the GCP dashboard.
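The same information is also available from the command line, for example:

Code Block
languagebash
# GKE cluster status
gcloud container clusters list
# Cloud SQL instance status
gcloud sql instances list
# Filestore instance status
gcloud filestore instances list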

Setup Additional Infrastructure Resources on AWS

...