OCI Add-ons

Add the following OCI-specific resources:

oci-file-service-storage

Info

This is an optional add-on. Refer to the Introduction - OCI (4.2) chapter for additional information.

Note!


Persistent volume setup is an optional step. Skip this section if you do not intend to have persistent file storage.

The OCI File Storage service provides a durable, scalable, distributed, and enterprise-grade network file system.

A persistent volume claim (PVC) is a request for persistent file storage. The OCI File Storage service file systems are mounted inside containers running on clusters created by Container Engine for Kubernetes using a CSI (Container Storage Interface) volume plugin deployed on the clusters.

To enable the CSI volume plugin to create and manage File Storage resources, the appropriate IAM policies must be applied by following these steps:

  1. Apply policy to create and/or manage file systems, mount targets, and export paths:

Code Block
ALLOW any-user to manage file-family in compartment <compartment-name> where request.principal.type = 'cluster'
  2. Apply policy to use VNICs, private IPs, private DNS zones, and subnets:

Code Block
ALLOW any-user to use virtual-network-family in compartment <compartment-name> where request.principal.type = 'cluster'

Update Default CSI Driver

When a pod attempts to access a persistent volume (PV) backed by a file system in the File Storage service, the attempt can fail with a "Permission Denied" message since the volume is only accessible to processes running as root. As a result, a pod that is not running as root receives the "Permission Denied" message when attempting to access a directory or file in the mounted volume.

To avoid getting the "Permission Denied" message, follow these steps:

  1. Obtain the CSIDriver configuration file by running the following command:

Code Block
kubectl get csiDriver fss.csi.oraclecloud.com -o yaml > fss_csi_driver.yaml
  2. Edit the fss_csi_driver.yaml file and change the CSIDriver object's spec.fsGroupPolicy attribute from ReadWriteOnceWithFSType to File, as shown in the example below:

Code Block
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  creationTimestamp: "<timestamp>"
  name: fss.csi.oraclecloud.com
  resourceVersion: "<version>"
  uid: <identifier>
spec:
  attachRequired: false
  fsGroupPolicy: File
  podInfoOnMount: false
  requiresRepublish: false
  storageCapacity: false
  volumeLifecycleModes:
  - Persistent
  3. Delete the existing CSIDriver object by running the following command:

Code Block
kubectl delete csiDriver fss.csi.oraclecloud.com
  4. Create the new CSIDriver object from fss_csi_driver.yaml by running the following command:

Code Block
kubectl apply -f fss_csi_driver.yaml
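
To verify that the change has taken effect, you can, for example, print the fsGroupPolicy attribute of the recreated CSIDriver object:

Code Block
kubectl get csidriver fss.csi.oraclecloud.com -o jsonpath='{.spec.fsGroupPolicy}'

The command should print File.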

For more information, see Troubleshooting File Storage Service Provisioning of PVCs.

Provisioning of PVC

You can use the File Storage service to provision persistent volume claims (PVCs) in two ways:

Static Provisioning

Follow these steps to create a PV backed by a File Storage service file system, and then create a PVC that binds to the PV:

  1. Prepare a pv.yaml file with PersistentVolume manifest for OCI File Storage with the following content:

Code Block
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fss-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: fss.csi.oraclecloud.com
    volumeHandle: <filesystem_ocid from terraform output>:<mount_target_IP_address from terraform output>:<filesystem_mount_path from terraform output>
  2. Deploy the PersistentVolume by running the following command:

Code Block
kubectl apply -f pv.yaml
  3. Prepare a pvc.yaml file with PersistentVolumeClaim manifest for OCI File Storage with the following content:

Code Block
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fss-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  volumeName: fss-pv
  4. Deploy the PersistentVolumeClaim by running the following command:

Code Block
kubectl apply -f pvc.yaml -n uepe
  5. Verify that the PVC is bound to the PV successfully by running the following command:

Code Block
kubectl get pv

The output below shows that a persistent volume claim is successfully bound to a persistent volume.

Code Block
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
fss-pv   1Gi        RWX            Delete           Bound    uepe/fss-pvc                  <unset>                          55s
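
As an illustration only, the hypothetical pod below mounts the bound claim. Setting securityContext.fsGroup together with the fsGroupPolicy: File change described in Update Default CSI Driver allows a non-root pod to read and write the volume:

Code Block
apiVersion: v1
kind: Pod
metadata:
  name: fss-test-pod
  namespace: uepe
spec:
  # fsGroup makes the mounted volume writable for a non-root pod
  securityContext:
    fsGroup: 1000
  containers:
    - name: app
      image: busybox:stable
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: fss-pvc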

oci-native-ingress-controller

Note!

cert-manager needs to be installed prior to installing the oci-native-ingress-controller, since the controller refers to cert-manager internally.

The easiest way to install cert-manager is via the cluster add-ons. From the console, browse to Containers > Clusters > Cluster details, scroll down to the vertical menu, select Resources > Add-ons, and then select Manage add-ons to install and enable cert-manager.
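
Before proceeding, you can confirm that cert-manager is running by listing its pods. The namespace depends on how the add-on was installed, so the example below simply filters on the name:

Code Block
kubectl get pods -A | grep cert-manager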

The OCI native ingress controller implements the rules and configuration options defined in a Kubernetes ingress resource to load balance and route incoming traffic to service pods running on worker nodes in a cluster. The OCI native ingress controller creates an OCI flexible load balancer to handle requests, and configures the OCI load balancer to route requests according to the rules defined in the ingress resource.

The OCI Native Ingress controller creates the following OCI load balancer resources:

  • A load balancer for each IngressClass resource where you have specified the OCI native ingress controller as the controller.

  • A load balancer backend set for each unique Kubernetes service name and port number combination that you include in routing rules in Ingress resources in the cluster.

  • A routing policy that reflects the rules defined in the ingress resource and that is used to route traffic to the backend sets.

  • A load balancer listener for each unique port that you include in routing rules in Ingress resources in the cluster.

To install the OCI native ingress controller, follow these steps:

  1. Create a config file named user-auth-config.yaml, containing credential information, in the following format:

Code Block
auth:
  region: <region from terraform output>
  user: <user_ocid configured in terraform.tfvars>
  fingerprint: <fingerprint configured in terraform.tfvars>
  tenancy: <tenancy_ocid from terraform output>
  2. Create a Kubernetes secret resource named oci-config in the cluster by running the following command:

Code Block
languagebash

kubectl create secret generic oci-config \
--from-file=config=user-auth-config.yaml \
--from-file=private-key=<private-key-file-path>.pem \
--namespace uepe
  3. Grant permission to the OCI native ingress controller to access resources created by other OCI services, such as the Load Balancer service and the Certificates service, by installing these IAM policies:

Code Block
Allow group <group-name> to manage load-balancers in compartment <compartment-name>
Allow group <group-name> to use virtual-network-family in compartment <compartment-name>
Allow group <group-name> to manage cabundles in compartment <compartment-name>
Allow group <group-name> to manage cabundle-associations in compartment <compartment-name>
Allow group <group-name> to manage leaf-certificates in compartment <compartment-name>
Allow group <group-name> to read leaf-certificate-bundles in compartment <compartment-name>
Allow group <group-name> to manage certificate-associations in compartment <compartment-name>
Allow group <group-name> to read certificate-authorities in compartment <compartment-name>
Allow group <group-name> to manage certificate-authority-associations in compartment <compartment-name>
Allow group <group-name> to read certificate-authority-bundles in compartment <compartment-name>
Allow group <group-name> to read cluster-family in compartment <compartment-name>
Code Block
ALLOW any-user to manage network-security-groups in compartment <compartment-name> where request.principal.type = 'cluster'
ALLOW any-user to manage vcns in compartment <compartment-name> where request.principal.type = 'cluster'
ALLOW any-user to manage virtual-network-family in compartment <compartment-name> where request.principal.type = 'cluster'
Code Block
Allow group <group-name> to inspect certificate-authority-family in compartment <compartment-name>
Allow group <group-name> to use certificate-authority-delegate in compartment <compartment-name>
Allow group <group-name> to manage leaf-certificate-family in compartment <compartment-name>
Allow group <group-name> to use leaf-certificate-family in compartment <compartment-name>
Allow group <group-name> to use certificate-authority-delegate in compartment <compartment-name>
Allow group <group-name> to manage certificate-associations in compartment <compartment-name>
Allow group <group-name> to inspect certificate-authority-associations in compartment <compartment-name>
Allow group <group-name> to manage cabundle-associations in compartment <compartment-name>
  4. Clone the OCI native ingress controller repository from GitHub by running the following command:

Code Block
git clone https://github.com/oracle/oci-native-ingress-controller
  5. In the local Git repository, navigate to the oci-native-ingress-controller directory and create a config file named oci-native-ingress-controller-values.yaml with the following content:

Code Block
compartment_id: <compartment_ocid from terraform output>
subnet_id: <loadbalancer_subnet_ocid from terraform output>
cluster_id: <cluster_ocid from terraform output>
authType: user
deploymentNamespace: uepe
  6. Install the OCI native ingress controller helm chart with the oci-native-ingress-controller-values.yaml config file by running the following command:

Code Block
helm install oci-native-ingress-controller helm/oci-native-ingress-controller -f oci-native-ingress-controller-values.yaml -n uepe
  7. Confirm that the OCI native ingress controller has been installed successfully by running the following command (see below for how to find the pod name):

Code Block
kubectl logs <pod-name> -n uepe

The logs should look similar to:

Code Block
I0611 03:24:13.667434       1 leaderelection.go:258] successfully acquired lease uepe/oci-native-ingress-controller
I0611 03:24:13.667480       1 server.go:81] Controller loop...
I0611 03:24:13.672076       1 auth_service.go:94] secret is retrieved from kubernetes api: oci-config
I0611 03:24:13.672463       1 auth_service.go:42] Fetching auth config provider for type: user
I0611 03:24:14.819774       1 server.go:120] CNI Type of given cluster : OCI_VCN_IP_NATIVE
I0611 03:24:14.819999       1 backend.go:374] Starting Backend controller
I0611 03:24:14.819824       1 routingpolicy.go:282] Starting Routing Policy Controller
I0611 03:24:14.819827       1 ingress.go:685] Starting Ingress controller
I0611 03:24:14.819840       1 ingressclass.go:496] Starting Ingress Class controller
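
The <pod-name> used in the command above can be looked up by listing the pods in the namespace, for example:

Code Block
kubectl get pods -n uepe | grep oci-native-ingress-controller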

When you have installed the OCI native ingress controller, you must create the following Kubernetes resources in order to start using it.

  • IngressClassParameters

  • IngressClass

IngressClassParameters resource

Use the custom IngressClassParameters resource to specify the details of the OCI load balancer that is created for the OCI native ingress controller.

Define the resource in a .yaml file named ingress-class-params.yaml as in the example below:

Code Block
apiVersion: "ingress.oraclecloud.com/v1beta1"
kind: IngressClassParameters
metadata:
  name: native-ic-params
  namespace: uepe
spec:
  compartmentId: "<compartment_ocid from terraform output>"
  subnetId: "<loadbalancer_subnet_ocid from terraform output>"
  loadBalancerName: "native-ic-lb-<cluster_name from terraform output>"
  isPrivate: false
  maxBandwidthMbps: 400
  minBandwidthMbps: 100

To create the resource, run the following command:

Code Block
kubectl create -f ingress-class-params.yaml

IngressClass resource

Use the IngressClass resource to associate an Ingress resource with the OCI native ingress controller and the IngressClassParameters resource.

Define the resource in a .yaml file named ingress-class.yaml as in the example below:

Code Block
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: native-ic-ingress-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    oci-native-ingress.oraclecloud.com/id: <loadbalancer_ocid from terraform output>
spec:
  controller: oci.oraclecloud.com/native-ingress-controller
  parameters:
    scope: Namespace
    namespace: uepe
    apiGroup: ingress.oraclecloud.com
    kind: ingressclassparameters
    name: native-ic-params

To create the resource, run the following command:

Code Block
kubectl create -f ingress-class.yaml
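
Once the IngressClass resource has been created, Ingress resources can reference it to be handled by the OCI native ingress controller. The example below is a sketch only; the service name, port, and host are hypothetical placeholders:

Code Block
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: uepe
spec:
  ingressClassName: native-ic-ingress-class
  rules:
    - host: example.<cluster_dns_zone_name from terraform output>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80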

Kubernetes Add-ons

Add the following general Kubernetes resources:

external-dns

ExternalDNS is a Kubernetes add-on that configures public DNS servers with information about exposed Kubernetes services to make them discoverable.

To install ExternalDNS:

  1. Create a Kubernetes secret containing the Oracle Cloud Infrastructure user authentication details that ExternalDNS can use when connecting to the Oracle Cloud Infrastructure API to insert and update DNS records in the DNS zone. Create a credentials file named oci.yaml with the following content:

    Code Block
    languageyaml
    auth:
      region: <region from terraform output>
      tenancy: <tenancy_ocid from terraform output>
      user: <user_ocid configured in terraform.tfvars>
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        <private-key>
        -----END RSA PRIVATE KEY-----
      fingerprint: <fingerprint configured in terraform.tfvars>
      # Omit if there is not a password for the key
      passphrase: <passphrase>
    compartment: <compartment_ocid from terraform output>
  2. Create a Kubernetes secret named external-dns-config from the credentials file you just created by running the following command:

Code Block
kubectl create secret generic external-dns-config --from-file=oci.yaml -n uepe
  3. Create a configuration file (for example, called external-dns-values.yaml), and specify the name of the Kubernetes secret you just created as in the example below:

Code Block
oci:
  secretName: external-dns-config
provider: oci
policy: sync
domainFilters:
- <cluster_dns_zone_name from terraform output>
txtOwnerId: <cluster_dns_zone_ocid from terraform output>
  4. Add the bitnami helm repository by running the following command:

Code Block
helm repo add bitnami https://charts.bitnami.com/bitnami
  5. Update the helm repository to get the latest software by running the following command:

Code Block
helm repo update
  6. Install the ExternalDNS helm chart with the external-dns-values.yaml file to deploy ExternalDNS:

Code Block
helm install external-dns bitnami/external-dns -f external-dns-values.yaml -n uepe
  7. Confirm that external-dns has been installed successfully by running the following command:

Code Block
kubectl logs <pod-name> -n uepe

The logs should look similar to the example below:

Code Block
time="2024-06-11T05:29:19Z" level=info msg="Instantiating new Kubernetes client"
time="2024-06-11T05:29:19Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2024-06-11T05:29:19Z" level=info msg="Created Kubernetes client https://10.96.0.1:443"
time="2024-06-11T05:29:21Z" level=info msg="All records are already up to date"

ingress-nginx-controller

Info

This is an optional add-on. Refer to the Introduction - OCI (4.2) chapter for additional information.

The Ingress NGINX Controller is an ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.

To install the Ingress NGINX Controller, follow these steps:

  1. Add the ingress-nginx helm repository:

    Code Block
    languagebash
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  2. Update the helm repository to get the latest software:

    Code Block
    languagebash
    helm repo update
  3. Create a file called ingress-nginx-values.yaml and populate it with the following helm values:

    Code Block
    languageyaml
    controller:
      scope:
        enabled: true
      admissionWebhooks:
        enabled: false
      metrics:
        enabled: false
        serviceMonitor:
          enabled: false
      ingressClassResource:
        name: nginx
        enabled: true
        default: false
        controllerValue: "k8s.io/ingress-nginx"
      watchIngressWithoutClass: false
      service:
        externalTrafficPolicy: "Local"
        targetPorts:
          http: 80
          https: 443
        type: NodePort
      extraArgs:
        v: 1
    serviceAccount:
      create: false
  4. Install the ingress-nginx-controller helm chart:

    Code Block
    languagebash
    helm install ingress-nginx ingress-nginx/ingress-nginx --version <helm chart version> -f ingress-nginx-values.yaml -n uepe

    Where <helm chart version> is a compatible version listed in the Compatibility Matrix (4.2).

If you run the helm list -A command, you should see all the add-ons added in this section, for example:

Code Block
languagebash
NAME                         NAMESPACE  REVISION  UPDATED                               STATUS    CHART                APP VERSION
external-dns                 uepe       1         2024-02-06 14:06:28.705309 +0800 +08  deployed  external-dns-6.31.5  0.14.0
ingress-nginx-controller     uepe       1         2024-02-22 11:44:54.18561 +0800 +08   deployed  ingress-nginx-4.9.1  1.9.6

This section is now complete and you can proceed to the Usage Engine Private Edition Preparations - OCI (4.2) section.