OCI Add-ons
Add the following OCI-specific resources:
oci-file-service-storage
Info: This is an optional add-on. Refer to the Introduction - OCI (4.2) chapter for additional information.
Note: Persistent volume setup is an optional step. Skip this section if you do not intend to have persistent file storage.
The OCI File Storage service provides a durable, scalable, distributed, and enterprise-grade network file system.
To enable the CSI volume plugin to create and manage File Storage resources, apply the appropriate IAM policies by following these steps:
Apply a policy to create and/or manage file systems, mount targets, and export paths:

ALLOW any-user to manage file-family in compartment <compartment-name> where request.principal.type = 'cluster'
Apply a policy to use VNICs, private IPs, private DNS zones, and subnets:

ALLOW any-user to use virtual-network-family in compartment <compartment-name> where request.principal.type = 'cluster'
Provisioning of PVC
You can use the File Storage service to provision persistent volume claims (PVCs) in two ways:
Dynamic Provisioning (deprecated)
Static Provisioning (preferred)
Dynamic Provisioning
These steps describe how to create a dynamically provisioned volume using the OCI volume plugin.
Prepare a storageclass.yaml file with a StorageClass manifest for OCI File Storage. The manifest must include the following parameters:

parameters:
  availabilityDomain: <availability_Domain>
  mountTargetSubnetOcid: <mount_target_subnet_ocid from terraform output>
  kmsKeyOcid: <kms_key_ocid from terraform output, omit if terraform output is empty>
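A minimal storageclass.yaml sketch built around the parameters above; the metadata name fss-dyn-storage is an illustrative assumption, not a value mandated by this guide:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fss-dyn-storage   # assumed example name
provisioner: fss.csi.oraclecloud.com
parameters:
  availabilityDomain: <availability_Domain>
  mountTargetSubnetOcid: <mount_target_subnet_ocid from terraform output>
  kmsKeyOcid: <kms_key_ocid from terraform output, omit if terraform output is empty>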
Deploy the storage class by running the following command:

kubectl apply -f storageclass.yaml
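With the storage class deployed, a persistent volume claim can request File Storage dynamically; a minimal sketch, assuming the example class name fss-dyn-storage from above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fss-dyn-pvc   # assumed example name
  namespace: uepe
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "fss-dyn-storage"   # must match the StorageClass name
  resources:
    requests:
      storage: 1Gi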
For more information, please refer to the dynamic provisioning documentation.
Static Provisioning
These steps describe how to create a PV backed by the new file system, and then create a PVC that binds to that PV in the File Storage service.
Prepare a pv.yaml file with a PersistentVolume manifest for OCI File Storage with the following content:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fss-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: fss.csi.oraclecloud.com
    volumeHandle: <filesystem_ocid from terraform output>:<mount_target_IP_address from terraform output>:<filesystem_mount_path from terraform output>
Deploy the PersistentVolume by running the following command:

kubectl apply -f pv.yaml
Prepare a pvc.yaml file with a PersistentVolumeClaim manifest for OCI File Storage with the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fss-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  volumeName: fss-pv
Deploy the PersistentVolumeClaim by running the following command:

kubectl apply -f pvc.yaml -n uepe
Verify that the PVC is bound to the PV successfully by running the following command:

kubectl get pv

The output below shows that the persistent volume claim is successfully bound to the persistent volume:

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
fss-pv   1Gi        RWX            Delete           Bound    uepe/fss-pvc                  <unset>                          55s
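To use the claim, mount it from a pod; a minimal sketch in which the pod name and image are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: fss-test-pod   # assumed example name
  namespace: uepe
spec:
  containers:
    - name: app
      image: busybox   # any image works; busybox is just an example
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: fss-pvc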
Pod cannot access file system due to insufficient permissions
When a pod attempts to access a persistent volume (PV) backed by a file system in the File Storage service, the attempt might fail with a "Permission Denied" message.
This is because the volume is only accessible to processes running as root. As a result, a pod that is not running as root receives the "Permission Denied" message when attempting to access a directory or file in the mounted volume.
To resolve the "Permission Denied" message, follow these steps:
Obtain the CSIDriver configuration file by running the following command:

kubectl get csiDriver fss.csi.oraclecloud.com -o yaml > fss_csi_driver.yaml
Edit the fss_csi_driver.yaml file and change the CSIDriver object's spec.fsGroupPolicy attribute from ReadWriteOnceWithFSType to File. For example:
kind: CSIDriver
metadata:
  creationTimestamp: "<timestamp>"
  name: fss.csi.oraclecloud.com
  resourceVersion: "<version>"
  uid: <identifier>
spec:
  attachRequired: false
  fsGroupPolicy: File
  podInfoOnMount: false
  requiresRepublish: false
  storageCapacity: false
  volumeLifecycleModes:
    - Persistent
Delete the existing CSIDriver object by running the following command:

kubectl delete csiDriver fss.csi.oraclecloud.com
Create the new CSIDriver object from fss_csi_driver.yaml by running the following command:

kubectl apply -f fss_csi_driver.yaml
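With fsGroupPolicy set to File, Kubernetes applies the pod's fsGroup to files on the mounted volume so that non-root pods can access them; a minimal sketch of the relevant pod securityContext, where the group ID 1000 is an illustrative assumption:

spec:
  securityContext:
    fsGroup: 1000   # assumed example group ID; files on the volume become group-accessible to this GID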
For more information, see Troubleshooting File Storage Service Provisioning of PVCs.
oci-native-ingress-controller
Note: The cert-manager needs to be installed prior to the oci-native-ingress-controller installation since it refers to the cert-manager internally. The easiest way to install the cert-manager is via the cluster add-ons in the console.
The OCI native ingress controller creates the following resources:
A load balancer for each IngressClass resource where you have specified the OCI native ingress controller as the controller.
A load balancer backend set for each unique Kubernetes service name and port number combination that you include in routing rules in Ingress resources in the cluster.
A routing policy that reflects the rules defined in the Ingress resource and is used to route traffic to the backend set.
A load balancer listener for each unique port that you include in routing rules in Ingress resources in the cluster.
To install the OCI native ingress controller, follow these steps:
Create a config file named user-auth-config.yaml, containing credential information, in the following format:

auth:
  region: <region_identifier from terraform output>
  user: <user_ocid configured in terraform.tfvars>
  fingerprint: <fingerprint configured in terraform.tfvars>
  tenancy: <tenancy_ocid from terraform output>
Create a Kubernetes secret resource named oci-config in the cluster by running the following command:

kubectl create secret generic oci-config \
  --from-file=config=user-auth-config.yaml \
  --from-file=private-key=<private-key-file-path>.pem \
  --namespace uepe
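To confirm that the secret was created before proceeding, you can optionally run:

kubectl get secret oci-config -n uepe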
Grant the OCI native ingress controller permission to access resources created by other OCI services, such as the Load Balancer service and the Certificates service, by installing the following IAM policies:

Allow group <group-name> to manage load-balancers in compartment <compartment-name>
Allow group <group-name> to use virtual-network-family in compartment <compartment-name>
Allow group <group-name> to manage cabundles in compartment <compartment-name>
Allow group <group-name> to manage cabundle-associations in compartment <compartment-name>
Allow group <group-name> to manage leaf-certificates in compartment <compartment-name>
Allow group <group-name> to read leaf-certificate-bundles in compartment <compartment-name>
Allow group <group-name> to manage certificate-associations in compartment <compartment-name>
Allow group <group-name> to read certificate-authorities in compartment <compartment-name>
Allow group <group-name> to manage certificate-authority-associations in compartment <compartment-name>
Allow group <group-name> to read certificate-authority-bundles in compartment <compartment-name>
Allow group <group-name> to read cluster-family in compartment <compartment-name>
Clone the OCI native ingress controller repository from GitHub by running the following command:

git clone https://github.com/oracle/oci-native-ingress-controller
In the local Git repository, navigate to the oci-native-ingress-controller directory and create a config file named oci-native-ingress-controller-values.yaml with the following content:

compartment_id: <compartment_ocid from terraform output>
subnet_id: <loadbalancer_subnet_ocid from terraform output>
cluster_id: <cluster_ocid from terraform output>
authType: user
deploymentNamespace: uepe
Perform a helm install with the config file oci-native-ingress-controller-values.yaml by running the following command:

helm install oci-native-ingress-controller helm/oci-native-ingress-controller -f oci-native-ingress-controller-values.yaml -n uepe
Confirm that the OCI native ingress controller has been installed successfully by running the following command:

kubectl logs <pod-name> -n uepe
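The controller pod name can be obtained by listing the pods in the namespace, for example:

kubectl get pods -n uepe | grep oci-native-ingress-controller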
The logs should look similar to:

I0611 03:24:13.667434       1 leaderelection.go:258] successfully acquired lease uepe/oci-native-ingress-controller
I0611 03:24:13.667480       1 server.go:81] Controller loop...
I0611 03:24:13.672076       1 auth_service.go:94] secret is retrieved from kubernetes api: oci-config
I0611 03:24:13.672463       1 auth_service.go:42] Fetching auth config provider for type: user
I0611 03:24:14.819774       1 server.go:120] CNI Type of given cluster : OCI_VCN_IP_NATIVE
I0611 03:24:14.819999       1 backend.go:374] Starting Backend controller
I0611 03:24:14.819824       1 routingpolicy.go:282] Starting Routing Policy Controller
I0611 03:24:14.819827       1 ingress.go:685] Starting Ingress controller
I0611 03:24:14.819840       1 ingressclass.go:496] Starting Ingress Class controller
When you have installed the OCI native ingress controller, you must create the following Kubernetes resources in order to start using it:
IngressClassParameters
IngressClass
IngressClassParameters resource
Use the custom IngressClassParameters resource to specify the details of the OCI load balancer that you create for the OCI native ingress controller.
Define the resource in a .yaml file named ingress-class-params.yaml as in the example below:

apiVersion: "ingress.oraclecloud.com/v1beta1"
kind: IngressClassParameters
metadata:
  name: native-ic-params
  namespace: uepe
spec:
  compartmentId: "<compartment_ocid from terraform output>"
  subnetId: "<loadbalancer_subnet_ocid from terraform output>"
  loadBalancerName: "native-ic-lb-<cluster_name from terraform output>"
  isPrivate: false
  maxBandwidthMbps: 400
  minBandwidthMbps: 100
To create the resource, run the following command:

kubectl create -f ingress-class-params.yaml
IngressClass resource
Use the IngressClass resource to associate an Ingress resource with the OCI native ingress controller and the IngressClassParameters resource.
Define the resource in a .yaml file named ingress-class.yaml as in the example below:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: native-ic-ingress-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    oci-native-ingress.oraclecloud.com/id: <loadbalancer_ocid from terraform output>
spec:
  controller: oci.oraclecloud.com/native-ingress-controller
  parameters:
    scope: Namespace
    namespace: uepe
    apiGroup: ingress.oraclecloud.com
    kind: ingressclassparameters
    name: native-ic-params
To create the resource, run the following command:

kubectl create -f ingress-class.yaml
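Once the IngressClass is in place, Ingress resources that reference it are handled by the OCI native ingress controller; a minimal sketch in which the host name, service name, and port are illustrative assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress   # assumed example name
  namespace: uepe
spec:
  ingressClassName: native-ic-ingress-class
  rules:
    - host: example.mydomain.com      # assumed example host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # assumed example service
                port:
                  number: 80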
Kubernetes Add-ons
Add the following general Kubernetes resources:
external-dns
ExternalDNS is a Kubernetes add-on that configures public DNS servers with information about exposed Kubernetes services to make them discoverable.
To install ExternalDNS, follow these steps:
Create a Kubernetes secret containing the Oracle Cloud Infrastructure user authentication details that ExternalDNS can use when connecting to the Oracle Cloud Infrastructure API for inserting and updating DNS records in the DNS zone. Create a credentials file named oci.yaml and populate it with the following content:

auth:
  region: <region_identifier from terraform output>
  tenancy: <tenancy_ocid from terraform output>
  user: <user_ocid configured in terraform.tfvars>
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    <private-key>
    -----END RSA PRIVATE KEY-----
  fingerprint: <fingerprint configured in terraform.tfvars>
  # Omit if there is not a password for the key
  passphrase: <passphrase>
compartment: <compartment_ocid from terraform output>
Create a Kubernetes secret named external-dns-config from the credentials file you just created by running the following command:

kubectl create secret generic external-dns-config --from-file=oci.yaml -n uepe
Create a configuration file (for example, called external-dns-values.yaml), and specify the name of the Kubernetes secret you just created, as in the example below:

oci:
  secretName: external-dns-config
provider: oci
policy: sync
domainFilters:
  - <cluster_dns_zone_name from terraform output>
txtOwnerId: <cluster_dns_zone_ocid from terraform output>
Add the bitnami helm repository by running the following command:

helm repo add bitnami https://charts.bitnami.com/bitnami
Update the helm repository to get the latest software by running the following command:

helm repo update
Perform a helm install with the yaml file external-dns-values.yaml to deploy ExternalDNS:

helm install external-dns bitnami/external-dns -f external-dns-values.yaml -n uepe
Confirm that external-dns has been installed successfully by running the following command:

kubectl logs <pod-name> -n uepe
The logs should look similar to the example below:

time="2024-06-11T05:29:19Z" level=info msg="Instantiating new Kubernetes client"
time="2024-06-11T05:29:19Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2024-06-11T05:29:19Z" level=info msg="Created Kubernetes client https://10.96.0.1:443"
time="2024-06-11T05:29:21Z" level=info msg="All records are already up to date"
ingress-nginx-controller
Info: This is an optional add-on. Refer to the Introduction - OCI (4.2) chapter for additional information.
Add the ingress-nginx helm repository:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Update the helm repository to get the latest software:

helm repo update
Create a file called ingress-nginx-values.yaml and populate it with the following helm values:

controller:
  scope:
    enabled: true
  admissionWebhooks:
    enabled: false
  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
  watchIngressWithoutClass: false
  service:
    externalTrafficPolicy: "Local"
    targetPorts:
      http: 80
      https: 443
    type: NodePort
  extraArgs:
    v: 1
serviceAccount:
  create: false
Install the ingress-nginx-controller helm chart:

helm install ingress-nginx ingress-nginx/ingress-nginx --version <helm chart version> -f ingress-nginx-values.yaml -n uepe
Where <helm chart version> is a compatible version listed in the Compatibility Matrix (4.12).
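To list the chart versions available in the repository and pick one that matches the Compatibility Matrix, you can run:

helm search repo ingress-nginx/ingress-nginx --versions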
Running the helm list -A command should show all add-ons added in this section, for example:

NAME                       NAMESPACE   REVISION   UPDATED                               STATUS     CHART                 APP VERSION
ingress-nginx-controller   uepe        1          2024-02-22 11:44:54.18561 +0800 +08   deployed   ingress-nginx-4.9.1   1.9.6
This section is now complete and you can proceed to the Usage Engine Private Edition Preparations - OCI (4.2) section.