OCI Add-ons
Add the following OCI-specific resources:
oci-file-service-storage
Info
This is an optional add-on. See the Introduction - OCI (4.2) chapter for additional information.
Note!
Persistent volume setup is an optional step. Skip this section if you do not intend to have persistent file storage.
The OCI File Storage service provides a durable, scalable, distributed, and enterprise-grade network file system.
...
To enable the CSI volume plugin to create and manage File Storage resources, the appropriate IAM policies must be applied by following these steps:
Apply a policy to create and/or manage file systems, mount targets, and export paths:
Code Block
ALLOW any-user to manage file-family in compartment <compartment-name> where request.principal.type = 'cluster'
Apply a policy to use VNICs, private IPs, private DNS zones, and subnets:
Code Block
ALLOW any-user to use virtual-network-family in compartment <compartment-name> where request.principal.type = 'cluster'
Apply a policy to enable the CSI volume plugin to access the master encryption key:
Code Block
Allow service FssOc1Prod to use keys in compartment <compartment-name> where target.key.id = '<key_OCID>'
Code Block
Allow any-user to use key-delegates in compartment <compartment-name> where ALL {request.principal.type = 'cluster', target.key.id = '<key_OCID>'}
Here, <compartment-name> and <key_OCID> can be retrieved from the console.
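If you prefer the command line to the console, the compartment OCID can, as an example, be looked up with the OCI CLI (this assumes the CLI is installed and configured with access to the tenancy):
Code Block
# List all compartments in the tenancy and note the OCID of <compartment-name>
oci iam compartment list --compartment-id <tenancy-ocid> --compartment-id-in-subtree true --all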
...
Dynamic Provisioning
These steps describe how to create a dynamically provisioned volume through OCI File Storage access points, together with a corresponding persistent volume claim (PVC).
Prepare a storageclass.yaml file with StorageClass manifest for OCI File Storage:
Code Block
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fss-dyn-storage
provisioner: fss.csi.oraclecloud.com
parameters:
  availabilityDomain: <availability_Domain>
  mountTargetSubnetOcid: <mountTarget_Subnet_Ocid>
  kmsKeyOcid: <key_Ocid>
Deploy the storage class
Code Block
kubectl apply -f storageclass.yaml
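A PVC that references the storage class will then trigger dynamic provisioning of a file system. The following is a minimal sketch; the claim name fss-dyn-pvc and the requested size are illustrative only:
Code Block
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fss-dyn-pvc
  namespace: uepe
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fss-dyn-storage
  resources:
    requests:
      storage: 50Gi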
For more information, please refer to the dynamic provisioning documentation.
...
Update Default CSI Driver
When a pod attempts to access a persistent volume (PV) backed by a file system in the File Storage service, the attempt can fail with a "Permission Denied" message since the volume is only accessible to processes running as root. As a result, a pod that is not running as root receives the "Permission Denied" message when attempting to access a directory or file in the mounted volume.
To avoid getting the "Permission Denied" message, follow these steps:
Obtain the CSIDriver configuration file by running the following command:
Code Block
kubectl get csiDriver fss.csi.oraclecloud.com -o yaml > fss_csi_driver.yaml
Edit the fss_csi_driver.yaml file and change the CSIDriver object's spec.fsGroupPolicy attribute from ReadWriteOnceWithFSType to File, for example as below:
Code Block
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  creationTimestamp: "<timestamp>"
  name: fss.csi.oraclecloud.com
  resourceVersion: "<version>"
  uid: <identifier>
spec:
  attachRequired: false
  fsGroupPolicy: File
  podInfoOnMount: false
  requiresRepublish: false
  storageCapacity: false
  volumeLifecycleModes:
  - Persistent
Delete the existing CSIDriver object by running the following command:
Code Block
kubectl delete csiDriver fss.csi.oraclecloud.com
Create the new CSIDriver object from fss_csi_driver.yaml by running the following command:
Code Block
kubectl apply -f fss_csi_driver.yaml
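To verify that the change has taken effect, you can, for example, print the fsGroupPolicy of the recreated object; it should return File:
Code Block
kubectl get csidriver fss.csi.oraclecloud.com -o jsonpath='{.spec.fsGroupPolicy}'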
For more information, see Troubleshooting File Storage Service Provisioning of PVCs.
Provisioning of PVC
You can use the File Storage service to provision persistent volume claims (PVCs) in two ways:
Dynamic Provisioning (deprecated way)
Static Provisioning (preferred way)
Static Provisioning
Follow these steps to create a PV backed by the new file system, and then create a PVC and bind it to the PV backed by the File Storage service:
Prepare a pv.yaml file with PersistentVolume manifest for OCI File Storage with the following content:
Code Block
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fss-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: fss.csi.oraclecloud.com
    volumeHandle: <filesystem_ocid from terraform output>:<mount_target_IP_address from terraform output>:<filesystem_mount_path from terraform output>
Deploy the PersistentVolume by running the following command:
Code Block
kubectl apply -f pv.yaml
Prepare a pvc.yaml file with PersistentVolumeClaim manifest for OCI File Storage with the following content:
Code Block
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fss-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  volumeName: fss-pv
Deploy the PersistentVolumeClaim by running the following command:
Code Block
kubectl apply -f pvc.yaml -n uepe
Verify that the PVC is bound to the PV successfully by running the following command:
Code Block
kubectl get pv
The output below shows that a persistent volume claim is successfully bound to a persistent volume.
Code Block
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
fss-pv   1Gi        RWX            Delete           Bound    uepe/fss-pvc                  <unset>                          55s
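To consume the claim, reference it from a pod. The sketch below is illustrative only (the pod name, image, and mount path are examples); the fsGroup setting relies on the fsGroupPolicy change described earlier, so that a non-root pod can write to the volume:
Code Block
apiVersion: v1
kind: Pod
metadata:
  name: fss-test-pod
  namespace: uepe
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "touch /data/ok && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: fss-pvc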
oci-native-ingress-controller
Note!
The cert-manager needs to be installed prior to oci-native-ingress-controller installation since it refers to the cert-manager internally. The easiest way to install the cert-manager is via the cluster add-ons. From the console, browse to
The OCI native ingress controller implements the rules and configuration options defined in a Kubernetes ingress resource to load balance and route incoming traffic to service pods running on worker nodes in a cluster. The OCI native ingress controller creates an OCI flexible load balancer to handle requests, and configures the OCI load balancer to route requests according to the rules defined in the ingress resource.
The OCI Native Ingress controller creates the following OCI load balancer resources:
A load balancer for each IngressClass resource where you have specified the OCI native ingress controller as the controller.
A load balancer backend set for each unique Kubernetes service name and port number combination that you include in routing rules in Ingress resources in the cluster.
A routing policy that reflects the rules defined in the ingress resource, used to route traffic to the backend set.
A load balancer listener for each unique port that you include in routing rules in Ingress resources in the cluster.
To install OCI Native Ingress Controller:
Create a config file named user-auth-config.yaml, containing credential information, in the following format:
Code Block
auth:
  region: <region from terraform output>
  user: <user_ocid configured in terraform.tfvars>
  fingerprint: <fingerprint configured in terraform.tfvars>
  tenancy: <tenancy_ocid from terraform output>
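The fingerprint is the fingerprint of the API signing key uploaded for the user. If it is not at hand, it can typically be recomputed from the private key, for example as below (assuming an RSA key in PEM format):
Code Block
openssl rsa -pubout -outform DER -in <private-key-file-path>.pem | openssl md5 -c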
Create a Kubernetes secret resource named oci-config in the cluster by running the following command:
Code Block
kubectl create secret generic oci-config \
--from-file=config=user-auth-config.yaml \
--from-file=private-key=<private-key-file-path>.pem \
  --namespace uepe
Grant the OCI Native Ingress Controller permission to access resources created by other OCI services, such as the Load Balancer service and the Certificates service, by applying these IAM policies:
Code Block
Allow group <group-name> to manage load-balancers in compartment <compartment-name>
Allow group <group-name> to use virtual-network-family in compartment <compartment-name>
Allow group <group-name> to manage cabundles in compartment <compartment-name>
Allow group <group-name> to manage cabundle-associations in compartment <compartment-name>
Allow group <group-name> to manage leaf-certificates in compartment <compartment-name>
Allow group <group-name> to read leaf-certificate-bundles in compartment <compartment-name>
Allow group <group-name> to manage certificate-associations in compartment <compartment-name>
Allow group <group-name> to read certificate-authorities in compartment <compartment-name>
Allow group <group-name> to manage certificate-authority-associations in compartment <compartment-name>
Allow group <group-name> to read certificate-authority-bundles in compartment <compartment-name>
Allow group <group-name> to read cluster-family in compartment <compartment-name>
Code Block
ALLOW any-user to manage network-security-groups in compartment <compartment-name> where request.principal.type = 'cluster'
ALLOW any-user to manage vcns in compartment <compartment-name> where request.principal.type = 'cluster'
ALLOW any-user to manage virtual-network-family in compartment <compartment-name> where request.principal.type = 'cluster'
Code Block
Allow group <group-name> to inspect certificate-authority-family in compartment <compartment-name>
Allow group <group-name> to use certificate-authority-delegate in compartment <compartment-name>
Allow group <group-name> to manage leaf-certificate-family in compartment <compartment-name>
Allow group <group-name> to use leaf-certificate-family in compartment <compartment-name>
Allow group <group-name> to use certificate-authority-delegate in compartment <compartment-name>
Allow group <group-name> to manage certificate-associations in compartment <compartment-name>
Allow group <group-name> to inspect certificate-authority-associations in compartment <compartment-name>
Allow group <group-name> to manage cabundle-associations in compartment <compartment-name>
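The policy statements above can be created in the console, or, for example, with the OCI CLI. The policy name and description below are illustrative, and the statements argument must contain the full list of statements as a JSON array:
Code Block
oci iam policy create --compartment-id <compartment-ocid> \
  --name native-ingress-controller-policies \
  --description "Policies required by the OCI native ingress controller" \
  --statements '["Allow group <group-name> to manage load-balancers in compartment <compartment-name>"]'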
Clone the OCI native ingress controller repository from GitHub by running the following command:
Code Block
git clone https://github.com/oracle/oci-native-ingress-controller |
In the local Git repository, navigate to the oci-native-ingress-controller directory and create a config file named oci-native-ingress-controller-values.yaml with the following content:
Code Block
compartment_id: <compartment_ocid from terraform output>
subnet_id: <loadbalancer_subnet_ocid from terraform output>
cluster_id: <cluster_ocid from terraform output>
authType: user
deploymentNamespace: uepe
Install the helm chart with the config file oci-native-ingress-controller-values.yaml by running the following command:
Code Block
helm install oci-native-ingress-controller helm/oci-native-ingress-controller -f oci-native-ingress-controller-values.yaml -n uepe
Confirm that the OCI native ingress controller has been installed successfully by running the following command:
Code Block
kubectl logs <pod-name> -n uepe
The logs should look similar to:
Code Block
I0611 03:24:13.667434 1 leaderelection.go:258] successfully acquired lease uepe/oci-native-ingress-controller
I0611 03:24:13.667480 1 server.go:81] Controller loop...
I0611 03:24:13.672076 1 auth_service.go:94] secret is retrieved from kubernetes api: oci-config
I0611 03:24:13.672463 1 auth_service.go:42] Fetching auth config provider for type: user
I0611 03:24:14.819774 1 server.go:120] CNI Type of given cluster : OCI_VCN_IP_NATIVE
I0611 03:24:14.819999 1 backend.go:374] Starting Backend controller
I0611 03:24:14.819824 1 routingpolicy.go:282] Starting Routing Policy Controller
I0611 03:24:14.819827 1 ingress.go:685] Starting Ingress controller
I0611 03:24:14.819840 1 ingressclass.go:496] Starting Ingress Class controller
When you have installed the OCI native ingress controller, you must create the following Kubernetes resources in order to start using it.
IngressClassParameters
IngressClass
IngressClassParameters resource
Use the custom IngressClassParameters resource to specify the details of the OCI load balancer you create for the OCI native ingress controller.
Define the resource in a .yaml file named ingress-class-params.yaml as in the example below:
Code Block
apiVersion: "ingress.oraclecloud.com/v1beta1"
kind: IngressClassParameters
metadata:
name: native-ic-params
namespace: uepe
spec:
compartmentId: "<compartment_ocid from terraform output>"
subnetId: "<loadbalancer_subnet_ocid from terraform output>"
loadBalancerName: "native-ic-lb-<cluster_name from terraform output>"
isPrivate: false
maxBandwidthMbps: 400
minBandwidthMbps: 100 |
To create the resource, run the following command:
Code Block
kubectl create -f ingress-class-params.yaml
IngressClass resource
Use the IngressClass resource to associate an Ingress resource with the OCI native ingress controller and the IngressClassParameters resource.
Define the resource in a .yaml file named ingress-class.yaml as in the example below:
Code Block
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: native-ic-ingress-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    oci-native-ingress.oraclecloud.com/id: <loadbalancer_ocid from terraform output>
spec:
  controller: oci.oraclecloud.com/native-ingress-controller
  parameters:
    scope: Namespace
    namespace: uepe
    apiGroup: ingress.oraclecloud.com
    kind: ingressclassparameters
    name: native-ic-params
To create the resource, run the following command:
Code Block
kubectl create -f ingress-class.yaml
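With the IngressClass in place, an Ingress resource that references it will be handled by the OCI native ingress controller and exposed through the OCI load balancer. The example below is a sketch only; the ingress name, service name, and port are placeholders:
Code Block
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: uepe
spec:
  ingressClassName: native-ic-ingress-class
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80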
Kubernetes Add-ons
Add the following general Kubernetes resources:
external-dns
ExternalDNS is a Kubernetes add-on that configures public DNS servers with information about exposed Kubernetes services to make them discoverable.
To install ExternalDNS:
Create a Kubernetes secret containing the Oracle Cloud Infrastructure user authentication details that ExternalDNS can use when connecting to the Oracle Cloud Infrastructure API for inserting and updating DNS records in the DNS zone. Create a credentials file named oci.yaml with the following content:
Code Block
auth:
  region: <region from terraform output>
  tenancy: <tenancy_ocid from terraform output>
  user: <user_ocid configured in terraform.tfvars>
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    <private-key>
    -----END RSA PRIVATE KEY-----
  fingerprint: <fingerprint configured in terraform.tfvars>
  # Omit if there is not a password for the key
  passphrase: <passphrase>
compartment: <compartment_ocid from terraform output>
Create a Kubernetes secret named external-dns-config from the credentials file you just created by running the following command:
Code Block
kubectl create secret generic external-dns-config --from-file=oci.yaml -n uepe
Create a configuration file (for example, called external-dns-values.yaml), and specify the name of the Kubernetes secret you just created as in the example below:
Code Block
oci:
  secretName: external-dns-config
provider: oci
policy: sync
domainFilters:
  - <cluster_dns_zone_name from terraform output>
txtOwnerId: <cluster_dns_zone_ocid from terraform output>
Add the bitnami helm repository by running the following command:
Code Block
helm repo add bitnami https://charts.bitnami.com/bitnami
Update the helm repository to get the latest software by running the following command:
Code Block
helm repo update
Do a helm install with the yaml file external-dns-values.yaml to deploy ExternalDNS:
Code Block
helm install external-dns bitnami/external-dns -f external-dns-values.yaml -n uepe
Confirm that external-dns has been installed successfully by running the following command:
Code Block
kubectl logs <pod-name> -n uepe
The logs should look similar to the example below:
Code Block
time="2024-06-11T05:29:19Z" level=info msg="Instantiating new Kubernetes client"
time="2024-06-11T05:29:19Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2024-06-11T05:29:19Z" level=info msg="Created Kubernetes client https://10.96.0.1:443"
time="2024-06-11T05:29:21Z" level=info msg="All records are already up to date" |
ingress-nginx-controller
Info
This is an optional add-on. Refer to the Introduction - OCI (4.2) chapter for additional information.
...
Add the ingress-nginx helm repository:
Code Block
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Update the helm repository to get the latest software:
Code Block
helm repo update
Create a file called ingress-nginx-values.yaml and populate it with the following helm values:
Code Block
controller:
  scope:
    enabled: true
  admissionWebhooks:
    enabled: false
  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
  watchIngressWithoutClass: false
  service:
    externalTrafficPolicy: "Local"
    targetPorts:
      http: 80
      https: 443
    type: NodePort
  extraArgs:
    v: 1
serviceAccount:
  create: false
Install the ingress-nginx-controller helm chart:
Code Block
helm install ingress-nginx ingress-nginx/ingress-nginx --version <helm chart version> -f ingress-nginx-values.yaml -n uepe
Where <helm chart version> is a compatible version listed in the Compatibility Matrix (4.12).
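To confirm that the ingress-nginx controller pod is running, you can, for example, list its pods; the label selector below assumes the chart's default labels:
Code Block
kubectl get pods -n uepe -l app.kubernetes.io/name=ingress-nginx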
If you run the helm list -A command, you will see all the add-ons added in this section, for example like below:
Code Block
NAME                       NAMESPACE   REVISION   UPDATED                              STATUS     CHART                 APP VERSION
ingress-nginx-controller   uepe        1          2024-02-22 11:44:54.18561 +0800 +08  deployed   ingress-nginx-4.9.1   1.9.6
This section is now complete and you can proceed to the Usage Engine Private Edition Preparations - OCI (4.2) section.