Before installing Usage Engine Private Edition, you need to set up a Kubernetes cluster on AWS EKS (Amazon Elastic Kubernetes Service, Amazon's managed Kubernetes service).
First, a basic Kubernetes cluster needs to be created. This can be done in two different ways:
Using the eksctl CLI tool.
Using the AWS management console.
In this guide, eksctl will be used, mainly because it enables you to create the basic Kubernetes cluster in minutes with a single command.
Once the basic Kubernetes cluster has been created, additional infrastructure needs to be added. For this, terraform is used.
Before proceeding, go to Release Information and download the aws.tar.gz file for the Usage Engine Private Edition version that is being installed. Once downloaded, extract its contents to a suitable location.
Assumptions
A few assumptions are made when using terraform to create cluster resources:
We assume you have an existing parent domain, e.g. example.com, hosted in the same account as the cluster to be created in the coming section, and that you wish to access the cluster environment through that hostname. Terraform will create a subdomain in the format <cluster_name>.<domain>. For example:
cluster name: uepe-eks
domain: example.com
final domain: uepe-eks.example.com
In addition, we also assume terraform is allowed to add an NS (name server) record to the parent domain. This allows DNS delegation from the parent domain to the subdomain.
Please note that if your parent domain is not under the same account, or is hosted with another cloud provider, you must set auto_create_ns_record to false in the terraform template to disable automatic creation of the subdomain NS record in the parent domain.
The service hostnames created by Usage Engine Private Edition will be accessible in the format <service_name>.<cluster_name>.<domain>, e.g. desktop-online.uepe-eks.example.com.
Terraform needs to persist the state of your provisioned infrastructure. By default, the state file is stored locally on the computer from which terraform is executed. However, if several people work on the infrastructure, it is recommended to store the state file in remote persistent storage such as an S3 bucket. See https://developer.hashicorp.com/terraform/language/settings/backends/s3 for more information.
We use EFS (NFS) as the default persistent storage for data that needs to be persisted.
We use RDS for the Usage Engine Private Edition database; the default engine type is PostgreSQL.
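Following the remote state recommendation above, a minimal sketch of an S3 backend configuration is shown below. The bucket, key, and lock table names are hypothetical placeholders; the bucket (and optional DynamoDB table) must already exist in your account:

```shell
# Sketch only: configure terraform to keep its state in S3 instead of locally.
# All names here are hypothetical placeholders, not resources created by this guide.
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "uepe-eks/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks"   # optional: enables state locking
  }
}
EOF
```

With a file like this in the terraform working directory, terraform init will initialize the S3 backend (or offer to migrate existing local state to it).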
Create Basic Cluster
The following steps explain how to create a basic Kubernetes cluster using a configuration file named uepe-eks.yaml:
Go to <the location where you extracted the aws.tar.gz file>/aws/eksctl and edit the uepe-eks.yaml file.
In the metadata section, specify the desired cluster name, AWS region and Kubernetes version. Please refer to https://infozone.atlassian.net/wiki/x/owDKCg to find out which Kubernetes versions are compatible with this release of Usage Engine Private Edition.
In the nodeGroups section, specify the desired node sizing for the cluster. Set minSize and maxSize to limit the minimum and maximum number of nodes, and set desiredCapacity to specify the exact number of nodes running in the cluster. In this example, we are creating a 3-node cluster with public and private VPC access.
The uepe-eks.yaml configuration file looks like this:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: example-cluster
  region: eu-west-1
  version: "1.29"
  tags:
    deployment: aws-template

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: aws-load-balancer-controller
        namespace: uepe
        labels: {aws-usage: "aws-load-balancer-controller"}
      wellKnownPolicies:
        awsLoadBalancerController: true
    - metadata:
        name: external-dns
        namespace: uepe
        labels: {aws-usage: "external-dns"}
      wellKnownPolicies:
        externalDNS: true
    - metadata:
        name: cert-manager
        namespace: cert-manager
      wellKnownPolicies:
        certManager: true
    - metadata:
        name: cluster-autoscaler
        namespace: uepe
        labels: {aws-usage: "cluster-ops"}
      wellKnownPolicies:
        autoScaler: true
    - metadata:
        name: efs-csi-controller-sa
        namespace: uepe
        labels: {aws-usage: "aws-efs-csi-driver"}
      wellKnownPolicies:
        efsCSIController: true
    - metadata:
        name: ebs-csi-controller-sa
        namespace: uepe
        labels: {aws-usage: "aws-ebs-csi-driver"}
      wellKnownPolicies:
        ebsCSIController: true

nodeGroups:
  - name: public-nodes
    instanceType: m5.large
    minSize: 3
    maxSize: 3
    desiredCapacity: 3
    volumeSize: 80
    labels: {role: worker}
    volumeEncrypted: true
    tags:
      nodegroup-role: worker

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
IAM roles for service accounts (see https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) have been configured for each cluster add-on under the iam.serviceAccounts section in the above uepe-eks.yaml file. Hence, a service account for each cluster add-on will be created in the specified namespace.
Please make sure to use the same namespace when installing the respective add-on in the Kubernetes Cluster Add-ons - AWS section.
For instance, using the namespaces specified in the uepe-eks.yaml file above means that:
external-dns must be installed in namespace uepe.
cert-manager must be installed in namespace cert-manager.
Execute the following command to create the cluster based on your desired settings:
eksctl create cluster -f uepe-eks.yaml --kubeconfig=./kubeconfig.yaml
A Kubernetes cluster with the desired number of nodes should be created within 15 minutes.
The above eksctl command will also generate a ./kubeconfig.yaml file containing information on how to connect to your newly created cluster. Make sure to set the KUBECONFIG environment variable to point to that file:
export KUBECONFIG=<full path to ./kubeconfig.yaml>
This will ensure that tools like kubectl
and helm
will connect to your newly created cluster.
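As a quick sanity check, you can verify that the variable points at the generated file before running any cluster commands (the path below is an example; adjust it to wherever eksctl wrote the file):

```shell
# Point client tools at the newly created cluster.
# The exact path is an example; use the location of your kubeconfig.yaml.
export KUBECONFIG="$PWD/kubeconfig.yaml"
echo "Using kubeconfig: $KUBECONFIG"

# With KUBECONFIG set, kubectl and helm use the new cluster context, e.g.:
#   kubectl get nodes
#   helm list --all-namespaces
```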
You can check the status of the cluster nodes like this:
eksctl get nodegroup --cluster example-cluster
For this example cluster, the output will look something like this:
CLUSTER          NODEGROUP     STATUS           CREATED               MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID               ASG NAME                                                              TYPE
example-cluster  public-nodes  CREATE_COMPLETE  2024-03-11T13:59:28Z  3         3         3                 m5.large       ami-02e2de73058d55743  eksctl-example-cluster-nodegroup-public-nodes-NodeGroup-eb5aNADEiibs  unmanaged
Setup Additional Infrastructure Resources on AWS
At this stage, a basic Kubernetes cluster has been created. However, some additional infrastructure resources remain to be set up. Namely the following:
Hosted Zone (subdomain) for domain name.
ACM Certificate for the domain name (to be used with any load balancers).
KMS CMK key which is used for encryption at-rest for EFS, RDS and SSM.
EFS with security group in place.
RDS PostgreSQL with security group in place.
Follow these steps to set up the remaining infrastructure resources:
Go to <the location where you extracted the aws.tar.gz file>/terraform.
Copy terraform.tf.vars.example to terraform.tfvars.
Retrieve the following values from the AWS Console and fill in the parameters in terraform.tfvars:
| terraform.tfvars parameter | Where to get the value from? |
|---|---|
| vpc_id | In the AWS management console, you can find this information by searching for "Your VPCs". Pick the VPC ID of the cluster that you created in the previous section. |
| | From |
| aws_account_id | In the AWS management console, this is the Account ID that is listed on your Account page. |
| | From |
| domain | In the AWS management console, on the Route 53 service page, this is the Hosted zone name of your existing Hosted zone. |
| | In the AWS management console, on the Route 53 service page, this is the Hosted zone ID of your existing Hosted zone. |
| db_password | Choose a secure password for the system database administrator. Minimum 10 characters. |
Example:
#  ____  _____ _____   _____ _   _ _____ ____  _____
# / ___|| ____|_   _| |_   _| | | | ____/ ___|| ____| _
# \___ \|  _|   | |     | | | |_| |  _| \___ \|  _|  (_)
#  ___) | |___  | |     | | |  _  | |___ ___) | |___  _
# |____/|_____| |_|     |_| |_| |_|_____|____/|_____|(_)

# The below values must be set explicitly in order for the setup to work correctly.
vpc_id         = "vpc-04ff16421e3ccdd94"
aws_region     = "eu-west-1"
aws_account_id = "058264429588"

# Name of the cluster, it must be unique in the account.
cluster_name = "example-cluster"

# Domain DNS name
# The DNS zone must already exist in Route53 or in other cloud provider DNS zone.
# We'll create a subdomain zone from parent domain, the final domain will be in format "<cluster_name>.<domain>".
# Please note that if this domain is hosted on another AWS account or other cloud provider, then you must
# set auto_create_ns_record = false and manually add the subdomain NS record to the parent domain.
domain = "stratus.digitalroute.net"

# Admin user password to the database.
db_password = "super_SeCrEt_db_pAsSwOrD_457!"
Important notes if your parent domain zone is not under the same account:
You need to set auto_create_ns_record = false to disable automatic creation of the subdomain NS record in the parent domain.
terraform apply will fail with a certificate validation timeout error:
│ Error: waiting for ACM Certificate (arn:aws:acm:ap-southeast-1:027763730008:certificate/84ae1022-15bd-430a-ab3e-278f01b0edb6) to be issued: timeout while waiting for state to become 'ISSUED' (last state: 'PENDING_VALIDATION', timeout: 2m0s)
When the error above happens, you need to manually retrieve the name server values from the created subdomain and add them to the parent domain as an NS record. If you are not using Route 53 for the parent domain, please refer to your domain registrar's documentation on how to add an NS record.
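For illustration only, the delegation record can be expressed as a Route 53 change batch like the sketch below. The subdomain name and name server values are placeholders; substitute the ones reported for your created subdomain zone:

```shell
# Hypothetical sketch: an NS record delegating the subdomain from the parent zone.
# Replace the record name and name server values with your own.
cat > ns-delegation.json <<'EOF'
{
  "Comment": "Delegate cluster subdomain to its hosted zone",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "uepe-eks.example.com",
      "Type": "NS",
      "TTL": 300,
      "ResourceRecords": [
        { "Value": "ns-1344.awsdns-40.org" },
        { "Value": "ns-664.awsdns-19.net" }
      ]
    }
  }]
}
EOF
# Applying it requires AWS credentials and the parent hosted zone ID, e.g.:
#   aws route53 change-resource-record-sets \
#     --hosted-zone-id <parent_zone_id> --change-batch file://ns-delegation.json
```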
Once the NS record has been added to the parent domain, go to AWS Console | AWS Certificate Manager (ACM) and wait for the certificate status to become Issued. This takes 10-20 minutes.
After the certificate has been issued, run terraform apply again to continue provisioning.
Run the following commands:
terraform init
terraform plan
terraform apply
Wait for the terraform commands to finish. The output should end with something like this:
Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

Outputs:

certificate_arn = "arn:aws:acm:eu-west-1:058264429588:certificate/526ed179-afa7-4778-b1b8-bfbcb95e4534"
db_endpoint = "example-cluster-db.c70g0ggo8m66.eu-west-1.rds.amazonaws.com:5432"
db_password = <sensitive>
db_user = "dbadmin"
efs_id = "fs-0f0bb5c0ef98f5b6f"
eks_domain_zone_id = "Z076760737OMHF392P9P7"
eks_domain_zone_name = "example-cluster.stratus.digitalroute.net"
name_servers = tolist([
  "ns-1344.awsdns-40.org",
  "ns-2018.awsdns-60.co.uk",
  "ns-55.awsdns-06.com",
  "ns-664.awsdns-19.net",
])
private_subnets = [
  "subnet-0956aa9898f78900d",
  "subnet-0b6d1364dfb4090d6",
  "subnet-0da06b6a88f9f45e7",
]
public_subnets = [
  "subnet-01174b6e86367827b",
  "subnet-0d0b14a68fe42ba09",
  "subnet-0eed6adde0748e1f6",
]
Make sure to save the output from terraform above, as it is used as input throughout the remainder of this installation guide.
A Kubernetes cluster has now been created.
Now proceed to the Kubernetes Cluster Add-ons - AWS section.