OCI Add-ons
The following OCI-specific resources should be added:
oci-file-service-storage
This is an optional add-on. Refer to the Introduction - OCI chapter for additional information.
Please note that persistent volume setup is an optional step. Skip this section if you do not intend to use persistent file storage.
The OCI File Storage service provides a durable, scalable, distributed, enterprise-grade network file system.
A persistent volume claim (PVC) is a request for persistent file storage. The OCI File Storage service file systems are mounted inside containers running on clusters created by Container Engine for Kubernetes using a CSI (Container Storage Interface) volume plugin deployed on the clusters.
To enable the CSI volume plugin to create and manage File Storage resources, appropriate IAM policies must be installed:
Policy to create and/or manage file systems, mount targets, and export paths:
ALLOW any-user to manage file-family in compartment <compartment-name> where request.principal.type = 'cluster'
Policy to use VNICs, private IPs, private DNS zones, and subnets:
ALLOW any-user to use virtual-network-family in compartment <compartment-name> where request.principal.type = 'cluster'
You can use the File Storage service to provision persistent volume claims (PVCs) in two ways:
Dynamic Provisioning (Deprecated)
Static Provisioning (preferred way)
Dynamic Provisioning
These steps describe how to create a dynamically provisioned volume using the CSI volume plugin.
Prepare a storageclass.yaml file with a StorageClass manifest for OCI File Storage:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fss-dyn-storage
provisioner: fss.csi.oraclecloud.com
parameters:
  availabilityDomain: <availability_Domain>
  mountTargetSubnetOcid: <mount_target_subnet_ocid from terraform output>
  kmsKeyOcid: <kms_key_ocid from terraform output, omit if terraform output is empty>
The kmsKeyOcid property is optional and can be omitted if data is encrypted at rest using encryption keys managed by Oracle. Only specify it if a user-managed encryption key is used, i.e., if kms_key_ocid from the terraform output is not empty.
Deploy the storage class
kubectl apply -f storageclass.yaml
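For illustration only, a PVC that dynamically provisions a File Storage backed volume through this storage class could look like the following sketch; the claim name and requested size are placeholders and are not part of the required steps:
# Hypothetical example only - claim name and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fss-dyn-pvc
  namespace: uepe
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "fss-dyn-storage"
  resources:
    requests:
      storage: 50Gi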
For more information, please refer to the dynamic provisioning documentation.
Static Provisioning
These steps describe how to create a PV backed by the new file system, and then create a PVC that binds to the PV backed by the File Storage service.
Prepare a pv.yaml file with a PersistentVolume manifest for OCI File Storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fss-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: fss.csi.oraclecloud.com
    volumeHandle: <filesystem_ocid from terraform output>:<mount_target_IP_address from terraform output>:<filesystem_mount_path from terraform output>
Deploy the PersistentVolume
kubectl apply -f pv.yaml
Prepare a pvc.yaml file with a PersistentVolumeClaim manifest for OCI File Storage:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fss-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  volumeName: fss-pv
Deploy the PersistentVolumeClaim
kubectl apply -f pvc.yaml -n uepe
Verify PVC is bound to the PV successfully
kubectl get pvc -n uepe
The output below shows the persistent volume claim successfully bound to the persistent volume:
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
fss-pv   1Gi        RWX            Delete           Available                          <unset>                          9s
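For illustration, a minimal Pod that mounts the claim could look like the sketch below; the pod name, image, and mount path are placeholders and not part of the installation steps:
# Hypothetical example only - pod name, image and mount path are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: fss-test-pod
  namespace: uepe
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: fss-pvc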
Pod cannot access file system due to insufficient permissions
When a pod attempts to access a persistent volume (PV) backed by a file system in the File Storage service, the attempt might fail with a "Permission Denied" message.
To resolve the "Permission Denied”, follow these steps:
Obtain the CSIDriver configuration file
kubectl get csiDriver fss.csi.oraclecloud.com -oyaml > fss_csi_driver.yaml
Edit the fss_csi_driver.yaml file and change the CSIDriver object's spec.fsGroupPolicy attribute from ReadWriteOnceWithFSType to File. For example:
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  creationTimestamp: "<timestamp>"
  name: fss.csi.oraclecloud.com
  resourceVersion: "<version>"
  uid: <identifier>
spec:
  attachRequired: false
  fsGroupPolicy: File
  podInfoOnMount: false
  requiresRepublish: false
  storageCapacity: false
  volumeLifecycleModes:
    - Persistent
Delete the existing CSIDriver object
kubectl delete csiDriver fss.csi.oraclecloud.com
Create the new CSIDriver object from fss_csi_driver.yaml
kubectl apply -f fss_csi_driver.yaml
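To verify that the change has taken effect, you can, for example, print the fsGroupPolicy field of the recreated object; it should return File:
kubectl get csidriver fss.csi.oraclecloud.com -o jsonpath='{.spec.fsGroupPolicy}'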
For more information, please refer to the Troubleshooting File Storage Service Provisioning of PVCs documentation.
oci-native-ingress-controller
cert-manager needs to be installed prior to the oci-native-ingress-controller installation, as the controller refers to cert-manager internally.
The simplest way to install cert-manager is via the cluster add-ons. From the console, browse to Containers > Clusters > Cluster details, scroll down to the vertical menu, select Resources > Add-ons, and then select Manage add-ons to install and enable cert-manager.
The OCI native ingress controller implements the rules and configuration options defined in a Kubernetes ingress resource to load balance and route incoming traffic to service pods running on worker nodes in a cluster. The OCI native ingress controller creates an OCI flexible load balancer to handle requests, and configures the OCI load balancer to route requests according to the rules defined in the ingress resource.
The OCI Native Ingress controller creates the following OCI load balancer resources:
A load balancer for each IngressClass resource where you have specified the OCI native ingress controller as the controller.
A load balancer backend set for each unique Kubernetes service name and port number combination that you include in routing rules in Ingress resources in the cluster.
A routing policy that reflects the rules defined in the ingress resource, which is used to route traffic to the backend sets.
A load balancer listener for each unique port that you include in routing rules in Ingress resources in the cluster.
To install OCI Native Ingress Controller, follow these steps:
Create a config file named user-auth-config.yaml, containing credential information, in the following format:
auth:
  region: <region-identifier>
  user: <user-ocid>
  fingerprint: <fingerprint>
  tenancy: <tenancy-ocid>
Create a Kubernetes secret resource named oci-config in the cluster by entering:
kubectl create secret generic oci-config \
  --from-file=config=user-auth-config.yaml \
  --from-file=private-key=<private-key-file-path>.pem \
  --namespace uepe
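You can confirm that the secret was created, for example with:
kubectl get secret oci-config -n uepe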
Grant the OCI Native Ingress Controller permission to access resources created by other OCI services, such as the Load Balancer service and the Certificates service, by creating the following IAM policies:
Allow group <group-name> to manage load-balancers in compartment <compartment-name>
Allow group <group-name> to use virtual-network-family in compartment <compartment-name>
Allow group <group-name> to manage cabundles in compartment <compartment-name>
Allow group <group-name> to manage cabundle-associations in compartment <compartment-name>
Allow group <group-name> to manage leaf-certificates in compartment <compartment-name>
Allow group <group-name> to read leaf-certificate-bundles in compartment <compartment-name>
Allow group <group-name> to manage certificate-associations in compartment <compartment-name>
Allow group <group-name> to read certificate-authorities in compartment <compartment-name>
Allow group <group-name> to manage certificate-authority-associations in compartment <compartment-name>
Allow group <group-name> to read certificate-authority-bundles in compartment <compartment-name>
Allow group <group-name> to read cluster-family in compartment <compartment-name>
ALLOW any-user to manage network-security-groups in compartment <compartment-name> where request.principal.type = 'cluster'
ALLOW any-user to manage vcns in compartment <compartment-name> where request.principal.type = 'cluster'
ALLOW any-user to manage virtual-network-family in compartment <compartment-name> where request.principal.type = 'cluster'
Allow group <group-name> to inspect certificate-authority-family in compartment <compartment-name>
Allow group <group-name> to use certificate-authority-delegate in compartment <compartment-name>
Allow group <group-name> to manage leaf-certificate-family in compartment <compartment-name>
Allow group <group-name> to use leaf-certificate-family in compartment <compartment-name>
Allow group <group-name> to manage certificate-associations in compartment <compartment-name>
Allow group <group-name> to inspect certificate-authority-associations in compartment <compartment-name>
Allow group <group-name> to manage cabundle-associations in compartment <compartment-name>
Clone the OCI native ingress controller repository from GitHub
git clone https://github.com/oracle/oci-native-ingress-controller
In the local Git repository, navigate to the oci-native-ingress-controller directory and create a config file named oci-native-ingress-controller-values.yaml with the following content:
compartment_id: <compartment_ocid from terraform output>
subnet_id: <loadbalancer_subnet_ocid from terraform output>
cluster_id: <cluster_ocid from terraform output>
authType: user
deploymentNamespace: uepe
Generate the manifest .yaml files for the required resources
helm template --include-crds oci-native-ingress-controller helm/oci-native-ingress-controller -f oci-native-ingress-controller-values.yaml --output-dir deploy/manifests
Comment out the Namespace resource from deploy/manifests/oci-native-ingress-controller/templates/deployment.yaml (otherwise it will try to create it, but it’s already created in a previous step).
#apiVersion: v1
#kind: Namespace
#metadata:
#  name: uepe
Deploy the required resources using the manifest .yaml files
kubectl apply -f deploy/manifests/oci-native-ingress-controller/crds
kubectl apply -f deploy/manifests/oci-native-ingress-controller/templates
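To find the controller pod name needed in the next step, you can, for example, filter the pods in the uepe namespace (this assumes the pod name contains the deployment name):
kubectl get pods -n uepe | grep oci-native-ingress-controller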
Confirm that the OCI native ingress controller has been installed successfully by checking the controller pod logs:
kubectl logs <pod-name> -n uepe
The logs should look like this:
I0611 03:24:13.667434 1 leaderelection.go:258] successfully acquired lease uepe/oci-native-ingress-controller
I0611 03:24:13.667480 1 server.go:81] Controller loop...
I0611 03:24:13.672076 1 auth_service.go:94] secret is retrieved from kubernetes api: oci-config
I0611 03:24:13.672463 1 auth_service.go:42] Fetching auth config provider for type: user
I0611 03:24:14.819774 1 server.go:120] CNI Type of given cluster : OCI_VCN_IP_NATIVE
I0611 03:24:14.819999 1 backend.go:374] Starting Backend controller
I0611 03:24:14.819824 1 routingpolicy.go:282] Starting Routing Policy Controller
I0611 03:24:14.819827 1 ingress.go:685] Starting Ingress controller
I0611 03:24:14.819840 1 ingressclass.go:496] Starting Ingress Class controller
Having installed the OCI native ingress controller, create the following Kubernetes resources in order to start using it:
IngressClassParameters
IngressClass
IngressClassParameters resource
Use the custom IngressClassParameters resource to specify details of the OCI load balancer to create for the OCI native ingress controller.
Define the resource in a .yaml file named ingress-class-params.yaml
apiVersion: "ingress.oraclecloud.com/v1beta1" kind: IngressClassParameters metadata: name: native-ic-params namespace: uepe spec: compartmentId: "<ocid of compartment>" subnetId: "<loadbalancer_subnet_ocid from terraform output>" loadBalancerName: "native-ic-lb-<your cluster name>" isPrivate: false maxBandwidthMbps: 400 minBandwidthMbps: 100
To create the resource, execute
kubectl create -f ingress-class-params.yaml
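To check that the resource exists, you can query its custom resource type (this assumes the CRD's plural name is ingressclassparameters):
kubectl get ingressclassparameters -n uepe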
IngressClass resource
Use the IngressClass resource to associate an Ingress resource with the OCI native ingress controller and the IngressClassParameters resource.
Define the resource in a .yaml file named ingress-class.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: native-ic-ingress-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    oci-native-ingress.oraclecloud.com/id: <loadbalancer_ocid from terraform output>
spec:
  controller: oci.oraclecloud.com/native-ingress-controller
  parameters:
    scope: Namespace
    namespace: uepe
    apiGroup: ingress.oraclecloud.com
    kind: ingressclassparameters
    name: native-ic-params
To create the resource, execute
kubectl create -f ingress-class.yaml
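Once the IngressClass exists, Ingress resources can reference it by name, and the controller will configure the OCI load balancer accordingly. For illustration only, a minimal, hypothetical Ingress routing traffic to a service called my-service on port 80 could look as follows; the host, service name, and port are placeholders:
# Hypothetical example only - host, service name and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: uepe
spec:
  ingressClassName: native-ic-ingress-class
  rules:
    - host: example.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80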
Kubernetes Add-ons
The following general Kubernetes resources should be added:
external-dns
ExternalDNS is a Kubernetes add-on that configures public DNS servers with information about exposed Kubernetes services to make them discoverable.
To install ExternalDNS, follow these steps:
Create a Kubernetes secret containing the Oracle Cloud Infrastructure user authentication details for ExternalDNS to use when connecting to the Oracle Cloud Infrastructure API to insert and update DNS records in the DNS zone. Create a credentials file named oci.yaml and populate with the following content:
auth:
  region: <region-identifier>
  tenancy: <tenancy-ocid>
  user: <user-ocid>
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    <private-key>
    -----END RSA PRIVATE KEY-----
  fingerprint: <fingerprint>
  # Omit if there is not a password for the key
  passphrase: <passphrase>
compartment: <compartment-ocid>
Create a Kubernetes secret named external-dns-config from the credentials file you just created:
kubectl create secret generic external-dns-config --from-file=oci.yaml -n uepe
Create a configuration file (for example, called external-dns-deployment.yaml) to create the ExternalDNS deployment, and specify the name of the Kubernetes secret you just created:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: uepe
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: k8s.gcr.io/external-dns/external-dns:v0.13.4
          args:
            - --source=service
            - --source=ingress
            - --provider=oci
            - --txt-owner-id=<cluster_dns_zone_ocid from terraform output>
          volumeMounts:
            - name: config
              mountPath: /etc/kubernetes/
      volumes:
        - name: config
          secret:
            secretName: external-dns-config
Apply the configuration file to deploy ExternalDNS
kubectl apply -f external-dns-deployment.yaml -n uepe
Confirm that external-dns has been installed successfully
kubectl logs <pod-name> -n uepe
The logs should look like this:
time="2024-06-11T05:29:19Z" level=info msg="Instantiating new Kubernetes client" time="2024-06-11T05:29:19Z" level=info msg="Using inCluster-config based on serviceaccount-token" time="2024-06-11T05:29:19Z" level=info msg="Created Kubernetes client https://10.96.0.1:443" time="2024-06-11T05:29:21Z" level=info msg="All records are already up to date"
ingress-nginx-controller
This is an optional add-on. Refer to the Introduction - OCI chapter for additional information.
The Ingress NGINX Controller is an ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
To install the Ingress NGINX Controller, follow these steps:
Add the ingress-nginx helm repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Update the helm repository to get the latest software:
helm repo update
Create a file called ingress-nginx-values.yaml and populate it with the following helm values:
controller:
  scope:
    enabled: true
  admissionWebhooks:
    enabled: false
  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
  watchIngressWithoutClass: false
  service:
    externalTrafficPolicy: "Local"
    targetPorts:
      http: 80
      https: 443
    type: NodePort
  extraArgs:
    v: 1
serviceAccount:
  create: false
Install the ingress-nginx-controller helm chart:
helm install ingress-nginx ingress-nginx/ingress-nginx --version <helm chart version> -f ingress-nginx-values.yaml -n uepe
Where <helm chart version> is a compatible version listed in the Compatibility Matrix (4.1).
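For illustration, workloads can then select this controller by setting ingressClassName: nginx in an Ingress resource. The following minimal example is hypothetical; the host, service name, and port are placeholders:
# Hypothetical example only - host, service name and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-nginx-ingress
  namespace: uepe
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 8080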
Executing helm list -A should show all add-ons added in this section. Example:
NAME                       NAMESPACE   REVISION   UPDATED                                 STATUS     CHART                 APP VERSION
ingress-nginx-controller   uepe        1          2024-02-22 11:44:54.18561 +0800 +08     deployed   ingress-nginx-4.9.1   1.9.6
This section is now complete. Now proceed to the Usage Engine Private Edition Preparations - OCI section.