
Tip

Sample templates for eksctl and terraform are available from the Release Information page. You can use these 'as is' or modify them to create a system according to your own requirements.

You can use them as a reference or modify them according to your infrastructure standards. See the Pre-installation (4.0) page for the list of AWS services required for MZ installation in AWS.

Working With the Infrastructure Template

For EKS Cluster

Set up the VPC and EKS Cluster

Note!

If you are using an OpenShift cluster, refer to the OpenShift setup section - OpenShift (4.0).

The default value for the cluster name is mz-eks and the default region is eu-west-1.

  1. Go to the eksctl folder in the unzipped contents.
    Update the following fields in the mz-eks.yaml file (see https://github.com/weaveworks/eksctl for information about more parameters that can be used in the eksctl template; an illustrative example is shown after step 2):

    Code Block
    Metadata
        name (cluster name, defaults to mz-eks)
        aws region (defaults to eu-west-1)
    
    Worker Nodes
        instanceType (change the instance type to match your application load)
        minSize, maxSize and desiredSize (if you wish to have more nodes running)
  2. Execute the following commands to start the creation of the EKS cluster:

    Code Block
    $ eksctl create cluster -f mz-eks.yaml --kubeconfig=./kubeconfig
    $ export KUBECONFIG=`pwd`/kubeconfig
    
    You should now be able to run kubectl commands against the cluster.
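
    As an illustration only (this is not the template shipped with the release, which may contain more settings and different values), a minimal mz-eks.yaml covering the fields from step 1 could look like this:

    Code Block
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    
    metadata:
      name: mz-eks            # cluster name
      region: eu-west-1       # AWS region
    
    nodeGroups:
      - name: workers
        instanceType: m5.xlarge    # example only; change to match your application load
        minSize: 2                 # example node counts
        maxSize: 4
        desiredCapacity: 3         # eksctl's name for the desired node count
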
Note!

The eksctl create command can take a considerable amount of time to complete.

Create Resources Required for MZ with EKS

  1. Go to the terraform folder and copy terraform.tfvars.example to terraform.tfvars.

    Code Block
    $ cp terraform.tfvars.example terraform.tfvars
  2. Retrieve the following values from the AWS Console and fill in the parameters in terraform.tfvars.

    terraform.tfvars parameter — Where to get the value

    aws_account_id

    From the AWS Console: go to the My Account page; the Account ID is shown under Account Settings.

    aws_region

    Use the same value as configured in mz-eks.yaml. Default is eu-west-1.

    cluster_name

    Use the same value as configured in mz-eks.yaml. Default is mz-eks.

    db_password

    Use a secure password for the platform database. Minimum 10 characters.

    domain and domain_zone_id

    From the AWS Console: on the Route 53 page, find your existing Hosted Zone and copy the Domain Name (domain) and the Hosted Zone ID (domain_zone_id).

    vpc_id

    From the AWS Console: on the VPC Service page, find the VPC named eksctl-mz-eks-cluster/VPC and copy the VPC ID.

    Code Block
    #  ____  _____ _____   _____ _   _ _____ ____  _____
    # / ___|| ____|_   _| |_   _| | | | ____/ ___|| ____|_
    # \___ \|  _|   | |     | | | |_| |  _| \___ \|  _| (_)
    #  ___) | |___  | |     | | |  _  | |___ ___) | |___ _
    # |____/|_____| |_|     |_| |_| |_|_____|____/|_____(_)
    
    # The below values must be set explicitly in order for the setup to work correctly.
    
    vpc_id = "vpc-xxxxxxxxxxxxxxxxx"
    aws_region = "eu-west-1"
    aws_account_id = ""
    
    # cluster_name.domain will be the final domain name
    cluster_name = "mz-eks"
    domain = "example.com"
    
    # Route 53 Hosted Zone ID
    # This should be the Zone ID of the Domain above. Ie. that domain must already exist in Route 53.
    # We'll insert the nameservers of the new domain name "cluster-name.domain" as a NS record in domain's hosted zone.
    domain_zone_id = ""
    
    # Password to the database.
    db_password = ""
  3. Run the following commands:

    Code Block
    $ terraform init
    $ terraform plan
    $ terraform apply
  4. Save the output from terraform for the next step.
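
    The values needed in the next section can be listed again at any time with terraform output. Output names such as efs_id and eks_domain_zone_id below are assumed to match the outputs defined by the template, as referenced in the placeholder table in the next section:

    Code Block
    # Show all outputs recorded in the terraform state
    $ terraform output
    
    # Show a single output value, e.g. the EFS file system id
    $ terraform output efs_id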

Installing AWS Helpers

  1. Run the commands in the following steps, replacing the placeholders with the values from the terraform output.

    Placeholder — Value from terraform output

    <region>

    Use the same value as configured in mz-eks.yaml. Default is eu-west-1.

    <eks_domain_zone_name>

    eks_domain_zone_name

    <eks_domain_zone_id>

    eks_domain_zone_id

    <efs id>

    efs_id

    <cluster_name>

    Use the same value as configured in mz-eks.yaml. Default is mz-eks.
  2. See https://github.com/kubernetes-sigs/aws-efs-csi-driver for information on how to install the Amazon EFS CSI Driver, then install it as follows:

    Code Block
    $ helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
    $ helm repo update
    $ helm upgrade --install aws-efs-csi-driver --namespace <namespace> aws-efs-csi-driver/aws-efs-csi-driver
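
    As an optional check (not part of the release procedure), you can verify that the driver registered correctly and that its pods are running:

    Code Block
    # The chart registers a CSIDriver object named efs.csi.aws.com
    $ kubectl get csidriver efs.csi.aws.com
    
    # The controller and node pods of the driver should be Running
    $ kubectl get pods -n <namespace>
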
  3. If you need to dynamically provision persistent volume claims (PVCs) through Amazon EFS access points, see https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/dynamic_provisioning for information on how to install the storage class. Below is an example of a storage class yaml setup:

    Code Block
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: aws-efs
    provisioner: efs.csi.aws.com
    parameters:
      provisioningMode: efs-ap
      fileSystemId: <efs id>
      directoryPerms: "700"
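
    As a minimal sketch only (the claim name and size are placeholders and not part of the release templates), a PVC that dynamically provisions a volume through this storage class could look like this:

    Code Block
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-efs-claim        # placeholder name
    spec:
      accessModes:
        - ReadWriteMany              # EFS volumes can be shared across nodes
      storageClassName: aws-efs      # matches the StorageClass defined above
      resources:
        requests:
          storage: 5Gi               # required field; EFS itself is elastic
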
  4. See https://github.com/bitnami/charts/tree/master/bitnami/external-dns for information on how to install External DNS, then install it as follows:

    Code Block
    $ helm repo add bitnami https://charts.bitnami.com/bitnami
    $ helm repo update
    $ helm upgrade --install external-dns bitnami/external-dns \
    -n <namespace> \
    --set provider=aws \
    --set aws.zoneType=public \
    --set txtOwnerId=<eks_domain_zone_id> \
    --set "domainFilters[0]=<eks_domain_zone_name>" \
    --set policy=sync
  5. See https://github.com/aws/eks-charts/tree/master/stable/aws-load-balancer-controller for information on how to install the AWS Load Balancer Controller, then install it as follows:

    Code Block
    $ helm repo add eks https://aws.github.io/eks-charts
    $ helm repo update
    $ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n <namespace> \
    --set clusterName=<cluster_name>
  6. Create a custom values yaml file, ingress-nginx-values.yaml, and populate it with the following information, which will be used in the NGINX ingress controller installation:

Code Block
controller:
  scope:
    enabled: true
  admissionWebhooks:
    enabled: false
  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
    watchIngressWithoutClass: false
  service:
    targetPorts:
      http: 80
      https: 443
    type: NodePort
  extraArgs:
    v: 1
  containerSecurityContext:
    runAsUser: 101
    allowPrivilegeEscalation: true
serviceAccount:
  create: false
  7. Install the NGINX chart with the custom values yaml:

    Info

    See https://github.com/kubernetes/ingress-nginx/releases for the released NGINX helm chart versions.

    If you are running multiple installations on the cluster and the nginx IngressClass resource is already installed, add the following to your helm command to avoid a "resource already exists" error:

    --set controller.ingressClassResource.enabled=false

    Code Block
    $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    $ helm repo update
    $ helm install <release name> ingress-nginx/ingress-nginx --version <NGINX helm chart version> -f ingress-nginx-values.yaml -n <namespace>
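
    Optionally, you can verify that the ingress controller pods are running and that the nginx IngressClass was created (skip the second check if you disabled controller.ingressClassResource):

    Code Block
    $ kubectl get pods -n <namespace>
    $ kubectl get ingressclass nginx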

    For OpenShift Cluster

    These installation steps are applicable only if you are using an OpenShift cluster on AWS. The following procedures and steps are specific to OpenShift clusters.

    Set up the OpenShift Cluster

    To set up the OpenShift cluster on AWS, see https://docs.openshift.com/container-platform/4.7/installing/installing_aws/installing-aws-default.html.

    Note!

    It is important that you explicitly set up the SCC with the following strategies:

    • RunAsAny for RUNASUSER

    • RunAsAny for FSGROUP

    You should also bind your SCC to a Service Account for OpenShift. To make MZ use your defined Service Account, modify the serviceAccountName properties in the values.yaml file. A minimal example SCC is sketched below.
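
    As a minimal sketch only (the SCC name is a placeholder and your security standards may require additional restrictions), an SCC using the strategies above could look like this:

    Code Block
    kind: SecurityContextConstraints
    apiVersion: security.openshift.io/v1
    metadata:
      name: mz-scc                   # placeholder name
    runAsUser:
      type: RunAsAny                 # RUNASUSER strategy
    fsGroup:
      type: RunAsAny                 # FSGROUP strategy
    seLinuxContext:
      type: RunAsAny
    supplementalGroups:
      type: RunAsAny
    volumes:
      - '*'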

    Warning!

    You must grant your Service Account nonroot SCC access to allow the installation of MZ. You can use the following command to grant nonroot access to your Service Account:

    Code Block
    oc adm policy add-scc-to-user nonroot -z <service account name> -n <namespace>

    Create Resources Required for MZ with OpenShift

    Once you have successfully set up the cluster, proceed with the following steps:

    1. Go to the terraform folder and copy terraform.tfvars.example to terraform.tfvars.

      Code Block
      $ cp terraform.tfvars.example terraform.tfvars
    2. Retrieve the following values from the AWS Console and fill in the parameters in terraform.tfvars.

      terraform.tfvars parameter — Where to get the value

      aws_account_id

      From the AWS Console: go to the My Account page; the Account ID is shown under Account Settings.

      aws_region

      Use the AWS region where your OpenShift cluster is deployed. Default is eu-west-1.

      cluster_name

      Use the name of your OpenShift cluster. Default is mz-eks.

      db_password

      Use a secure password for the platform database. Minimum 10 characters.

      domain and domain_zone_id

      From the AWS Console: on the Route 53 page, find your existing Hosted Zone and copy the Domain Name (domain) and the Hosted Zone ID (domain_zone_id).

      vpc_id

      From the AWS Console: on the VPC Service page, find the VPC where your OpenShift cluster is located and copy the VPC ID.

    3. Comment out the following block in the kms.tf file. The referenced role will not exist in this case.

      Code Block
      principals {
        type        = "AWS"
        identifiers = ["arn:aws:iam::${var.aws_account_id}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"]
      }
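
      With the block commented out, that part of kms.tf would look like this:

      Code Block
      #   principals {
      #     type        = "AWS"
      #     identifiers = ["arn:aws:iam::${var.aws_account_id}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"]
      #   }
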
    4. Run the following commands:

      Code Block
      $ terraform init
      $ terraform plan
      $ terraform apply
    5. Save the output from terraform for the next step.

    Installing AWS Helpers

    1. Run the commands in the following steps, replacing the placeholders with the values from the terraform output.

      Placeholder — Value from terraform output

      <region>

      Use the AWS region where your OpenShift cluster is deployed. Default is eu-west-1.

      <eks_domain_zone_name>

      eks_domain_zone_name

      <eks_domain_zone_id>

      eks_domain_zone_id

      <efs id>

      efs_id

      <cluster_name>

      Use the name of your OpenShift cluster. Default is mz-eks.

    2. See https://github.com/kubernetes-sigs/aws-efs-csi-driver for information on how to install the Amazon EFS CSI Driver, then install it as follows:

      Code Block
      $ helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
      $ helm repo update
      $ helm upgrade --install aws-efs-csi-driver --namespace <namespace> aws-efs-csi-driver/aws-efs-csi-driver
      Note

      The efs-csi-controller-sa and efs-csi-node-sa service accounts require the privileged SCC access permission to be granted.

      Note

      The driver requires IAM permissions to talk to Amazon EFS and manage the volume on your behalf, so you must set up the driver permissions mentioned in the installation steps.
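
      For example (a sketch only, assuming the driver was installed into <namespace> as above), the privileged SCC can be granted with:

      Code Block
      $ oc adm policy add-scc-to-user privileged -z efs-csi-controller-sa -n <namespace>
      $ oc adm policy add-scc-to-user privileged -z efs-csi-node-sa -n <namespace>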

    3. See https://github.com/bitnami/charts/tree/master/bitnami/external-dns for information on how to install External DNS, then install it as follows:

      Code Block
      $ helm repo add bitnami https://charts.bitnami.com/bitnami
      $ helm repo update
      $ helm upgrade --install external-dns bitnami/external-dns \
      -n <namespace> \
      --set provider=aws \
      --set aws.zoneType=public \
      --set txtOwnerId=<eks_domain_zone_id> \
      --set "domainFilters[0]=<eks_domain_zone_name>" \
      --set policy=sync \
      --set aws.region=<region> \
      --set aws.credentials.accessKey=<AWS_Access_Key> \
      --set aws.credentials.secretKey=<AWS_Secret_Access_Key>
      Note

      The external-dns service account requires the nonroot SCC access permission to be granted.

    4. See https://github.com/aws/eks-charts/tree/master/stable/aws-load-balancer-controller for information on how to install the AWS Load Balancer Controller, then install it as follows:

      Code Block
      $ helm repo add eks https://aws.github.io/eks-charts
      $ helm repo update
      $ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
      -n <namespace> \
      --set clusterName=<cluster_name> \
      --set region=<region> \
      --set vpcId=<vpc_id> \
      --set env.AWS_ACCESS_KEY_ID=<AWS_Access_Key> \
      --set env.AWS_SECRET_ACCESS_KEY=<AWS_Secret_Access_Key>
      Note

      The aws-load-balancer-controller service account requires the nonroot SCC access permission to be granted.

    5. Create a custom values yaml file, ingress-nginx-values.yaml, and populate it with the following information, which will be used in the NGINX ingress controller installation:

      Code Block
      controller:
        scope:
          enabled: true
        admissionWebhooks:
          enabled: false
        metrics:
          enabled: false
          serviceMonitor:
            enabled: false
        ingressClassResource:
          name: nginx
          enabled: true
          default: false
          controllerValue: "k8s.io/ingress-nginx"
          watchIngressWithoutClass: false
        service:
          targetPorts:
            http: 80
            https: 443
          type: NodePort
        extraArgs:
          v: 1
        containerSecurityContext:
          runAsUser: 101
          allowPrivilegeEscalation: true
      serviceAccount:
        create: false
    6. Install the NGINX chart with the custom values yaml:

    Info

    See https://github.com/kubernetes/ingress-nginx/releases for the released NGINX helm chart versions.

    If you are running multiple installations on the cluster and the nginx IngressClass resource is already installed, add the following to your helm command to avoid a "resource already exists" error:

    --set controller.ingressClassResource.enabled=false

    Code Block
    $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    $ helm repo update
    $ helm install <release name> ingress-nginx/ingress-nginx --version <NGINX helm chart version> -f ingress-nginx-values.yaml -n <namespace>