Note!

You need a working OKE cluster in place before you can proceed with these steps. Refer to Set Up Kubernetes Cluster - OCI (4.3) to create the OKE cluster first.

...

These tools give you powerful and flexible log collection, storage, monitoring, and visualization. The Elasticsearch database storage also provides powerful tools to perform analytics on the log data. The OCI Logging Analytics Service is a cloud solution that aggregates, indexes, and analyzes a variety of log data from on-premises and multicloud environments. It enables you to search, explore, and correlate this data, derive operational insights, and make informed decisions. This guide does not describe these tools' functionality in detail, as that is outside its scope.

...

Code Block
log:
  # Format can be "json" or "raw". Default is "raw"
  format: json
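
With format set to json, each log record is written as structured JSON, which allows a collector such as Fluent Bit to parse it into fields (see the Merge_Log setting later in this guide). As a minimal sketch, assuming this setting is part of your application's Helm values file (the release, chart, and namespace names below are placeholders), you could apply it with:

Code Block
# Placeholders: replace <release>, <chart> and <namespace> with your deployment's values
helm upgrade <release> <chart> -n <namespace> -f values.yaml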

Kubernetes Monitoring Solution in Oracle Logging Analytics

Use the Kubernetes Monitoring Solution in Oracle Logging Analytics to monitor and generate insights into your Kubernetes clusters deployed in OCI, third-party public clouds, private clouds, or on-premises, including managed Kubernetes deployments.

To connect your Kubernetes cluster with Logging Analytics:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Solutions, and click Kubernetes. The Kubernetes Monitoring Solution page opens.

    logging-analytics.png
  2. In the Kubernetes Monitoring Solution page, click Connect clusters. The Add Data wizard opens. Here, the Monitor Kubernetes section is already expanded. Click Oracle OKE. The Configure OKE environment monitoring page opens.

    config-oke-clusters.png
  3. Select the OKE cluster that you want to connect with Oracle Logging Analytics and click Next.

  4. Select the compartment for telemetry data and related monitoring resources.

  5. Do not select the required Policies and dynamic groups.

  6. Select the metrics server for the collection of usage metrics. You can clear the check box if you have already installed it.

  7. Select the Solution deployment option to enable manual deployment of the selected cluster.

    configure-oke-logsmetrics.png
  8. Click Configure log collection to proceed.

  9. Wait for the log collection configuration to complete.

  10. Click Complete and proceed to the Log Explorer.
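
Once the wizard reports that log collection is configured, the monitoring agents should be running in the cluster. As a quick sanity check (a sketch only; the namespace is an assumption, as the OCI Kubernetes monitoring Helm chart commonly deploys into oci-onm), you can list their pods:

Code Block
# Namespace is an assumption (commonly oci-onm); use the namespace shown in the wizard if it differs
kubectl get pods -n oci-onm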

Stream Container Logs to Elasticsearch and Visualize with Kibana

...

  1. Get the service name of the Elasticsearch pods. This service name is the value to set for Host in the [OUTPUT] directive.

    Code Block
    kubectl get svc -n logging
  2. Get the username and password credentials for Elastic X-Pack access. The decoded username and password are the values to set for HTTP_User and HTTP_Passwd in the [OUTPUT] directive.

    Code Block
    kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
    kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
  3. Create a custom values YAML file, for example fluent-bit-values.yaml, with the following content.

    Code Block
    config:
      inputs: |
        [INPUT]
            Name                tail
            Tag                 application.*
            Exclude_Path        /var/log/containers/kube-proxy*
            Path                /var/log/containers/*.log
            multiline.parser    docker, cri
            Mem_Buf_Limit       50MB
            Skip_Long_Lines     On
            Refresh_Interval    10
            Read_from_Head      True
        
      filters: |
        [FILTER]
            Name                kubernetes
            Match               application.*
            Kube_URL            https://kubernetes.default.svc:443
            Kube_Tag_Prefix     application.var.log.containers.
            Merge_Log           On
            Merge_Log_Key       log_processed
            K8S-Logging.Parser  On
            K8S-Logging.Exclude Off
            Labels              Off
            Annotations         Off
            Buffer_Size         0
      outputs: |
        [OUTPUT]
            Name                es
            Match               application.*
            Host                elasticsearch-master
            tls                 On
            tls.verify          Off
            HTTP_User           elastic
            # Replace with the password decoded in step 2
            HTTP_Passwd         SbeSsXiuWbAnbxUT
            Suppress_Type_Name  On
            Index               fluentbit
            Trace_Error         On
    
    
  4. To add the fluent helm repo, run:

    Code Block
    helm repo add fluent https://fluent.github.io/helm-charts
    helm repo update
  5. Deploy the Fluent Bit DaemonSet to the cluster.

    Code Block
    helm install fluent-bit fluent/fluent-bit -n logging -f fluent-bit-values.yaml
  6. Verify each Fluent Bit pod's log. You should not see any errors or exceptions if the connection to Elasticsearch is established successfully. An additional check against the Elasticsearch API is sketched after this list.

    Code Block
    kubectl logs <fluent-bit pod name> -n logging
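
In addition to checking the Fluent Bit pod logs, you can query Elasticsearch directly to confirm that the index is being written. The sketch below assumes the elasticsearch-master service from step 1, the default Elasticsearch port 9200, and the credentials decoded in step 2; the -k flag skips certificate verification, matching tls.verify Off in the [OUTPUT] section above.

Code Block
# Forward the Elasticsearch service to your workstation
kubectl port-forward -n logging svc/elasticsearch-master 9200:9200

# In another terminal: list the indices; expect a "fluentbit" index once logs are flowing
curl -k -u elastic:<password> "https://localhost:9200/_cat/indices?v"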

...

  1. Retrieve the external IP address of the Kibana dashboard service.

    Code Block
    kubectl get service -n logging kibana-kibana -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  2. Log in to the Kibana dashboard web UI with the same username and password as the HTTP_User and HTTP_Passwd configured in the previous section.

  3. Go to Management > Stack Management > Index Management.

  4. If the Fluent Bit connection to Elasticsearch is established successfully, the indices are created automatically.

    indices.png
  5. Go to Management > Stack Management > Kibana. Create a Data view matching the index pattern.

    data-views.png
  6. Go to Analytics > Discover to search for the logs belonging to each index pattern.

    discover.png
  7. You can filter logs using KQL syntax. For instance, enter "kubernetes.pod_name : oci-native-ingress" in the KQL filter input field. An equivalent query through the Elasticsearch API is sketched after this list.

  8. A log record in JSON format is parsed into fields, for example:

    Code Block
    {
      "_p": [
        "F"
      ],
      "_p.keyword": [
        "F"
      ],
      "@timestamp": [
        "2024-06-20T06:43:59.178Z"
      ],
      "kubernetes.container_image": [
        "ghcr.io/oracle/oci-native-ingress-controller:v1.3.5"
      ],
      "kubernetes.container_image.keyword": [
        "ghcr.io/oracle/oci-native-ingress-controller:v1.3.5"
      ],
      "kubernetes.container_name": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.container_name.keyword": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.docker_id": [
        "e927b9990c66822ea136b87867626d79fb22bc7cb67700b2b07b643bf53a5a01"
      ],
      "kubernetes.docker_id.keyword": [
        "e927b9990c66822ea136b87867626d79fb22bc7cb67700b2b07b643bf53a5a01"
      ],
      "kubernetes.host": [
        "10.0.10.177"
      ],
      "kubernetes.host.keyword": [
        "10.0.10.177"
      ],
      "kubernetes.labels.app.kubernetes.io/instance": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.labels.app.kubernetes.io/instance.keyword": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.labels.app.kubernetes.io/name": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.labels.app.kubernetes.io/name.keyword": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.labels.pod-template-hash": [
        "67bb8d5f4d"
      ],
      "kubernetes.labels.pod-template-hash.keyword": [
        "67bb8d5f4d"
      ],
      "kubernetes.namespace_name": [
        "native-ingress-controller-system"
      ],
      "kubernetes.namespace_name.keyword": [
        "native-ingress-controller-system"
      ],
      "kubernetes.pod_id": [
        "d3a618b4-c726-4fcd-8ba3-062ddac33716"
      ],
      "kubernetes.pod_id.keyword": [
        "d3a618b4-c726-4fcd-8ba3-062ddac33716"
      ],
      "kubernetes.pod_name": [
        "oci-native-ingress-controller-67bb8d5f4d-strw9"
      ],
      "kubernetes.pod_name.keyword": [
        "oci-native-ingress-controller-67bb8d5f4d-strw9"
      ],
      "log": [
        "I0620 06:43:59.178703       1 backend.go:272] \"validating pod readiness gate status\" pod=\"uepe/ingress-nginx-controller-7477648b4c-pt92s\" gate=podreadiness.ingress.oraclecloud.com/k8s_e4e294007c"
      ],
      "log.keyword": [
        "I0620 06:43:59.178703       1 backend.go:272] \"validating pod readiness gate status\" pod=\"uepe/ingress-nginx-controller-7477648b4c-pt92s\" gate=podreadiness.ingress.oraclecloud.com/k8s_e4e294007c"
      ],
      "stream": [
        "stderr"
      ],
      "stream.keyword": [
        "stderr"
      ],
      "time": [
        "2024-06-20T06:43:59.178Z"
      ],
      "_id": "8HVjNJABKtF0FswcRymT",
      "_index": "fluentbit",
      "_score": null
    }
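
The KQL filter from step 7 can also be expressed directly against the Elasticsearch search API. The sketch below assumes the port-forward and credentials from the previous section; the index name fluentbit matches the Index setting in the Fluent Bit [OUTPUT] section.

Code Block
# Hypothetical example: search the fluentbit index for records from the oci-native-ingress pods
curl -k -u elastic:<password> -H 'Content-Type: application/json' \
  "https://localhost:9200/fluentbit/_search?pretty" -d '{
    "query": { "match_phrase": { "kubernetes.pod_name": "oci-native-ingress" } },
    "size": 1
  }'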

...