Tip |
---|
A sample template for eksctl and terraform is available from the Release Information page, and you can use these 'as is' or modify them to create a system according to your own requirements. |
You can use these as a reference or modify them in line with your infrastructure standards. Please refer to the Pre-installation (3.0) page for the list of AWS services required for MZ installation in AWS.
Working With the Infrastructure Template
For EKS Cluster
Set up the VPC and EKS Cluster
Note |
---|
| If you are using an Openshift cluster, refer to the section For Openshift Cluster below. |
The default cluster name is mz-eks and the default region is eu-west-1. Go to the eksctl folder in the unzipped contents.
Update the following fields in the mz-eks.yaml file. Refer to https://github.com/weaveworks/eksctl for more parameters that can be used in the eksctl template:
Code Block |
---|
Metadata
name (cluster name, defaults to mz-eks)
region (AWS region, defaults to eu-west-1)
Worker Nodes
instanceType (change the instance type to match your application load)
minSize, maxSize and desiredSize (if you wish to have more nodes running) |
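For orientation, the fields above map onto an eksctl cluster config roughly as in the sketch below. This assumes the standard eksctl ClusterConfig schema; the bundled mz-eks.yaml may use slightly different node group keys, so edit that file rather than replacing it with this snippet. Code Block |
---|
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mz-eks              # cluster name
  region: eu-west-1         # AWS region
nodeGroups:
  - name: workers           # placeholder node group name
    instanceType: m5.xlarge # match your application load
    minSize: 3
    maxSize: 6
    desiredCapacity: 3      # desired number of running worker nodes |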
Execute the following commands to start the creation of the EKS cluster: Code Block |
---|
$ eksctl create cluster -f mz-eks.yaml --kubeconfig=./kubeconfig
$ export KUBECONFIG=`pwd`/kubeconfig
You should now be able to run kubectl commands against the new cluster. |
Note |
---|
| The eksctl create command can take a considerable amount of time to complete. |
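Once the command has finished, you can verify that kubectl is pointing at the new cluster, for example: Code Block |
---|
$ kubectl get nodes
$ kubectl cluster-info |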
Create Resources Required with EKS
Go to the terraform folder and copy the terraform.tfvars.example to terraform.tfvars
Code Block |
---|
$ cp terraform.tfvars.example terraform.tfvars |
Get the following values from the AWS Console and fill in the parameters in terraform.tfvars.
terraform.tfvars | Where to get the values |
---|
aws_account_id | From the AWS Console: go to the My Account page and you will see the Account Id under Account Settings. |
aws_region | Use the value configured in mz-eks.yaml. Default is eu-west-1. |
cluster_name | Use the value configured in mz-eks.yaml. Default is mz-eks. |
db_password | Use a secure password for the platform database. Minimum 10 characters. |
domain, domain_zone_id | From the AWS Console: on the Route 53 page, find your existing Hosted Zone and copy the Hosted Zone ID and Domain Name. |
vpc_id | From the AWS Console: on the VPC Service page, find the VPC named eksctl-mz-eks-cluster/VPC and copy the VPC ID. |
Code Block |
---|
# ____ _____ _____ _____ _ _ _____ ____ _____
# / ___|| ____|_ _| |_ _| | | | ____/ ___|| ____|_
# \___ \| _| | | | | | |_| | _| \___ \| _| (_)
# ___) | |___ | | | | | _ | |___ ___) | |___ _
# |____/|_____| |_| |_| |_| |_|_____|____/|_____(_)
# The below values must be set explicitly in order for the setup to work correctly.
vpc_id = "vpc-xxxxxxxxxxxxxxxxx"
aws_region = "eu-west-1"
aws_account_id = ""
# cluster_name.domain will be the final domain name
cluster_name = "mz-eks"
domain = "example.com"
# Route 53 Hosted Zone ID
# This should be the Zone ID of the Domain above. Ie. that domain must already exist in Route 53.
# We'll insert the nameservers of the new domain name "cluster-name.domain" as a NS record in domain's hosted zone.
domain_zone_id = ""
# Password to the database.
db_password = "" |
Run the following commands: Code Block |
---|
$ terraform init
$ terraform plan
$ terraform apply |
Save the output from terraform for the next step.
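The output can also be re-printed from the terraform state at any time, for example: Code Block |
---|
$ terraform output |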
Installing AWS Helpers
Run the following commands, replacing the placeholders with the values from the terraform output.
Place Holder | Value from terraform output |
---|
<region> | Use the value configured in mz-eks.yaml. Default is eu-west-1. |
<eks_domain_zone_name> | eks_domain_zone_name |
<eks_domain_zone_id> | eks_domain_zone_id |
<efs id> | efs_id |
<cluster_name> | Use the value configured in mz-eks.yaml. Default is mz-eks. |
Follow the guide from https://github.com/kubernetes-sigs/aws-efs-csi-driver to install the Amazon EFS CSI Driver: Code Block |
---|
$ helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
$ helm repo update
$ helm upgrade --install aws-efs-csi-driver --namespace <namespace> aws-efs-csi-driver/aws-efs-csi-driver |
If you need to dynamically provision persistent volume claims (PVCs) through Amazon EFS access points, follow the guide from https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/dynamic_provisioning to install a storage class. Below is an example storage class yaml:
Code Block |
---|
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: aws-efs
provisioner: efs.csi.aws.com
parameters:
provisioningMode: efs-ap
fileSystemId: <efs id>
directoryPerms: "700" |
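For reference, a PVC that provisions a volume through this storage class could look like the sketch below; the claim name and requested size are placeholders to adapt to your installation. Code Block |
---|
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim           # placeholder claim name
spec:
  accessModes:
    - ReadWriteMany         # EFS supports shared read-write access
  storageClassName: aws-efs
  resources:
    requests:
      storage: 5Gi          # nominal size; EFS itself is elastic |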
Follow the guide from https://github.com/bitnami/charts/tree/master/bitnami/external-dns to install External DNS: Code Block |
---|
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm upgrade --install external-dns bitnami/external-dns \
-n <namespace> \
--set provider=aws \
--set aws.zoneType=public \
--set txtOwnerId=<eks_domain_zone_id> \
--set "domainFilters[0]=<eks_domain_zone_name>" \
--set policy=sync |
Follow the guide from https://github.com/aws/eks-charts/tree/master/stable/aws-load-balancer-controller to install the AWS Load Balancer Controller: Code Block |
---|
$ helm repo add eks https://aws.github.io/eks-charts
$ helm repo update
$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n <namespace> \
--set clusterName=<cluster_name> |
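To confirm the controller came up, you can check its deployment (the deployment name follows the Helm release name used above), for example: Code Block |
---|
$ kubectl get deployment -n <namespace> aws-load-balancer-controller |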
Create a custom values yaml and populate it with the information below, which will be used by the NGINX ingress controller installation. Code Block |
---|
title | ingress-nginx-values.yaml |
---|
| controller:
scope:
enabled: true
admissionWebhooks:
enabled: false
metrics:
enabled: false
serviceMonitor:
enabled: false
ingressClassResource:
name: nginx
enabled: true
default: false
controllerValue: "k8s.io/ingress-nginx"
watchIngressWithoutClass: false
service:
targetPorts:
http: 80
https: 443
type: NodePort
extraArgs:
v: 1
containerSecurityContext:
runAsUser: 101
allowPrivilegeEscalation: true
serviceAccount:
create: false |
Install the NGINX chart with the custom values yaml: Note |
---|
See https://github.com/kubernetes/ingress-nginx/releases for the released NGINX helm chart versions. If you are running multiple installations on the cluster and the Nginx IngressClass resource is already installed, add the following to your helm command to avoid a 'resource already exists' error.
--set controller.ingressClassResource.enabled=false
|
Code Block |
---|
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install <release name> ingress-nginx/ingress-nginx --version <NGINX helm chart version> -f ingress-nginx-values.yaml -n <namespace> |
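After the installation completes, you can check that the controller pod and its NodePort service were created in the namespace, for example: Code Block |
---|
$ kubectl get pods -n <namespace>
$ kubectl get svc -n <namespace> |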
For Openshift Cluster
These installation steps apply only when you are using an Openshift cluster on AWS. The procedures and steps below are specific to Openshift clusters.
Set up Openshift Cluster
Set up the Openshift cluster on your AWS. You may refer to https://docs.openshift.com/container-platform/4.7/installing/installing_aws/installing-aws-default.html for the steps on setting up an Openshift cluster.
Note |
---|
| It is important that you explicitly set up the SCC with the following strategies: RunAsAny for RUNASUSER
RunAsAny for FSGROUP
You should also bind your SCC to a Service Account for Openshift. To use your own Service Account, modify the serviceAccountName properties in the values.yaml file accordingly. |
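As an illustration only, an SCC using those strategies could be defined along the lines of the sketch below; the SCC name is a placeholder, the remaining strategies and allowed volume types should follow your own security policies, and the SCC still has to be bound to your Service Account as described. Code Block |
---|
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: mz-scc              # placeholder name
runAsUser:
  type: RunAsAny            # RUNASUSER strategy
fsGroup:
  type: RunAsAny            # FSGROUP strategy
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret |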
Warning |
---|
| You are required to give your Service Account nonroot access to allow the installation to proceed. You may use the following command to grant nonroot access to your Service Account. Code Block |
---|
oc adm policy add-scc-to-user nonroot -z <service account name> -n <namespace> |
|
Create Resources Required with Openshift
With your cluster set up successfully, you may now proceed with the steps below:
Go to the terraform folder and copy the terraform.tfvars.example to terraform.tfvars
Code Block |
---|
$ cp terraform.tfvars.example terraform.tfvars |
Get the following values from the AWS Console and fill in the parameters in terraform.tfvars.
terraform.tfvars | Where to get the values |
---|
aws_account_id | From the AWS Console: go to the My Account page and you will see the Account Id under Account Settings. |
aws_region | Use the value configured in mz-eks.yaml. Default is eu-west-1. |
cluster_name | Use the value configured in mz-eks.yaml. Default is mz-eks. |
db_password | Use a secure password for the platform database. Minimum 10 characters. |
domain, domain_zone_id | From the AWS Console: on the Route 53 page, find your existing Hosted Zone and copy the Hosted Zone ID and Domain Name. |
vpc_id | From the AWS Console: on the VPC Service page, find the VPC where your Openshift cluster is located and copy the VPC ID. |
Run the following commands: Code Block |
---|
$ terraform init
$ terraform plan
$ terraform apply |
Save the output from terraform for the next step.
Installing AWS Helpers
Run the following commands, replacing the placeholders with the values from the terraform output.
Place Holder | Value from terraform output |
---|
<region> | Use the value configured in mz-eks.yaml. Default is eu-west-1. |
<eks_domain_zone_name> | eks_domain_zone_name |
<eks_domain_zone_id> | eks_domain_zone_id |
<efs id> | efs_id |
<cluster_name> | Use the value configured in mz-eks.yaml. Default is mz-eks. |
Follow the guide from https://github.com/kubernetes-sigs/aws-efs-csi-driver to install the Amazon EFS CSI Driver: Code Block |
---|
$ helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
$ helm repo update
$ helm upgrade --install aws-efs-csi-driver --namespace <namespace> aws-efs-csi-driver/aws-efs-csi-driver |
Note |
---|
The efs-csi-controller-sa and efs-csi-node-sa service accounts require the privileged SCC access permission to be granted. |
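For example, assuming the service account names from the note above, the grants could be issued as follows (adjust the namespace to your installation): Code Block |
---|
oc adm policy add-scc-to-user privileged -z efs-csi-controller-sa -n <namespace>
oc adm policy add-scc-to-user privileged -z efs-csi-node-sa -n <namespace> |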
Note |
---|
The driver requires IAM permissions to talk to Amazon EFS and manage the volume on your behalf, hence you must set up the driver permissions mentioned in the installation steps. |
Follow the guide from https://github.com/bitnami/charts/tree/master/bitnami/external-dns to install External DNS: Code Block |
---|
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm upgrade --install external-dns bitnami/external-dns \
-n <namespace> \
--set provider=aws \
--set aws.zoneType=public \
--set txtOwnerId=<eks_domain_zone_id> \
--set "domainFilters[0]=<eks_domain_zone_name>" \
--set policy=sync \
--set aws.region=<region> \
--set aws.credentials.accessKey=<AWS_Access_Key> \
--set aws.credentials.secretKey=<AWS_Secret_Access_Key> |
Note |
---|
The external-dns service account requires the nonroot SCC access permission to be granted. |
Follow the guide from https://github.com/aws/eks-charts/tree/master/stable/aws-load-balancer-controller to install the AWS Load Balancer Controller: Code Block |
---|
$ helm repo add eks https://aws.github.io/eks-charts
$ helm repo update
$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n <namespace> \
--set clusterName=<cluster_name> \
--set region=<region> \
--set vpcId=<vpc_id> \
--set env.AWS_ACCESS_KEY_ID=<AWS_Access_Key> \
--set env.AWS_SECRET_ACCESS_KEY=<AWS_Secret_Access_Key> |
Note |
---|
The aws-load-balancer-controller service account requires the nonroot SCC access permission to be granted. |
Create a custom values yaml and populate it with the information below, which will be used by the NGINX ingress controller installation. Code Block |
---|
title | ingress-nginx-values.yaml |
---|
| controller:
scope:
enabled: true
admissionWebhooks:
enabled: false
metrics:
enabled: false
serviceMonitor:
enabled: false
ingressClassResource:
name: nginx
enabled: true
default: false
controllerValue: "k8s.io/ingress-nginx"
watchIngressWithoutClass: false
service:
targetPorts:
http: 80
https: 443
type: NodePort
extraArgs:
v: 1
containerSecurityContext:
runAsUser: 101
allowPrivilegeEscalation: true
serviceAccount:
create: false |
Install the NGINX chart with the custom values yaml: Note |
---|
See https://github.com/kubernetes/ingress-nginx/releases for the released NGINX helm chart versions. If you are running multiple installations on the cluster and the Nginx IngressClass resource is already installed, add the following to your helm command to avoid a 'resource already exists' error.
--set controller.ingressClassResource.enabled=false
|
Code Block |
---|
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install <release name> ingress-nginx/ingress-nginx --version <NGINX helm chart version> -f ingress-nginx-values.yaml -n <namespace> |
|