
Note
titleNote!

You need a proper OKE cluster set up in order to proceed with these steps. Refer to /wiki/spaces/UEPE4D/pages/211091598 to create the OKE cluster first.

By default, Usage Engine deployed in Kubernetes outputs logging to disk and console. If persistent disk storage is enabled, the logs end up on the mounted shared disk. However, persistent disk is not always the desired log target, especially in a cloud environment where persistent data is typically accessed through services and APIs rather than as files. The console logs can be accessed through the "kubectl logs" command or from a Kubernetes dashboard. Note that the buffer storing the Kubernetes console logs is kept in memory only, and is therefore lost when a Pod terminates.
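For example, the console logs of a single pod can be inspected as follows (pod name and namespace are placeholders for your own deployment):

```shell
# Print the console logs of a pod
kubectl logs <pod-name> -n <namespace>

# Follow the log stream, or read logs from the previously terminated container instance
kubectl logs <pod-name> -n <namespace> -f
kubectl logs <pod-name> -n <namespace> --previous
```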

To get a production-ready log configuration, you can use tools from the Kubernetes ecosystem together with the OCI Logging Analytics Service. In this guide we show you how to set up:

  • Fluent-bit for log collection and log forwarding
  • Elasticsearch for log storage
  • Kibana for log visualization
  • OCI Logging Analytics Service for Analytics

These tools give you powerful and flexible log collection, storage, monitoring, and visualization. The Elasticsearch database also provides powerful tools for performing analytics on the log data. The OCI Logging Analytics Service is a cloud solution that aggregates, indexes, and analyzes a variety of log data from on-premises and multicloud environments, enabling you to search, explore, and correlate this data to derive operational insights and make informed decisions. Describing these tools' functionality in detail is outside the scope of this guide.

Prerequisite

Before setting up log collection, make sure your Usage Engine Private Edition was installed with JSON formatted logging enabled.


Code Block
linenumberstrue
log:
  # Format can be "json" or "raw". Default is "raw"
  format: json
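If JSON logging was not enabled at install time, it can be turned on afterwards with a Helm upgrade. This is a sketch only, where the release name, chart reference, and namespace are placeholders for your own installation:

```shell
# Sketch: enable JSON-formatted logging on an existing release
# (<release-name>, <usage-engine-chart> and <namespace> are placeholders)
helm upgrade <release-name> <usage-engine-chart> -n <namespace> \
  --reuse-values \
  --set log.format=json
```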


Kubernetes Monitoring Solution in Oracle Logging Analytics

Use the Kubernetes Monitoring Solution in Oracle Logging Analytics to monitor and generate insights into your Kubernetes deployments in OCI, third-party public clouds, private clouds, or on-premises environments, including managed Kubernetes deployments.

To connect your Kubernetes cluster with Logging Analytics:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Solutions, and click Kubernetes. The Kubernetes Monitoring Solution page opens.

  2. In the Kubernetes Monitoring Solution page, click Connect clusters. The Add Data wizard opens. Here, the Monitor Kubernetes section is already expanded. Click Oracle OKE. The Configure OKE environment monitoring page opens.

  3. Select the OKE cluster that you want to connect with Oracle Logging Analytics and click Next.
  4. Select the compartment for telemetry data and related monitoring resources.
  5. Do not select the required Policies and dynamic groups.
  6. Select the metrics server for the collection of usage metrics. You can clear the check box if you have already installed it.
  7. Select the Solution deployment option to enable manual deployment of the selected cluster.
  8. Click Configure log collection to proceed.
  9. Wait for the log collection configuration to complete.
  10. Complete and proceed to the Log Explorer.

Stream container logs to Elasticsearch and visualize with Kibana

...

Elasticsearch will be installed to the namespace logging.

  1. Create namespace logging

    Code Block
    linenumberstrue
    kubectl create namespace logging


  2. Add the Elasticsearch repository to Helm and update it to retrieve the latest version


    Code Block
    linenumberstrue
    helm repo add elastic https://helm.elastic.co
    helm repo update


  3. Install Elasticsearch.

    Note
    titleNote!

    For simplicity, this example installs Elasticsearch without persistent storage. Refer to the Elasticsearch Helm chart documentation for help with enabling persistent storage:

    https://github.com/elastic/helm-charts/tree/master/elasticsearch


    Code Block
    linenumberstrue
    helm install elasticsearch elastic/elasticsearch -n logging --set=persistence.enabled=false
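Before continuing, it can be worth verifying that Elasticsearch came up successfully. The label selector and service name below assume the defaults of the elastic/elasticsearch chart:

```shell
# Check that the Elasticsearch pods become ready
kubectl get pods -n logging -l app=elasticsearch-master

# Optionally verify cluster health through a temporary port-forward
kubectl port-forward -n logging svc/elasticsearch-master 9200:9200 &
curl -k -u elastic:<password> https://localhost:9200/_cluster/health
```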


Install Fluent-bit

Fluent-bit will be installed to the same namespace as Elasticsearch, i.e., logging.

  1. Get the service name of the Elasticsearch pods. This service name is the value set for Host in the [OUTPUT] directive.


    Code Block
    linenumberstrue
    kubectl get svc -n logging


  2. Get the username and password credentials for Elastic X-Pack access. The decoded username and password are the values set for HTTP_User and HTTP_Passwd in the [OUTPUT] directive.

    Code Block
    linenumberstrue
    kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
    kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
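If you prefer, the decoded credentials can be captured in shell variables for reuse when filling in the Fluent-bit configuration (variable names are illustrative):

```shell
# Capture the decoded Elasticsearch credentials for later use
ES_USER=$(kubectl get secrets --namespace=logging elasticsearch-master-credentials \
  -o jsonpath='{.data.username}' | base64 -d)
ES_PASS=$(kubectl get secrets --namespace=logging elasticsearch-master-credentials \
  -o jsonpath='{.data.password}' | base64 -d)
echo "HTTP_User=$ES_USER HTTP_Passwd=$ES_PASS"
```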


  3. Create a custom values YAML file, for example fluent-bit-values.yaml, with the following content


    Code Block
    linenumberstrue
    config:
      inputs: |
        [INPUT]
            Name                tail
            Tag                 application.*
            Exclude_Path        /var/log/containers/kube-proxy*
            Path                /var/log/containers/*.log
            multiline.parser    docker, cri
            Mem_Buf_Limit       50MB
            Skip_Long_Lines     On
            Refresh_Interval    10
            Read_from_Head      True
        
      filters: |
        [FILTER]
            Name                kubernetes
            Match               application.*
            Kube_URL            https://kubernetes.default.svc:443
            Kube_Tag_Prefix     application.var.log.containers.
            Merge_Log           On
            Merge_Log_Key       log_processed
            K8S-Logging.Parser  On
            K8S-Logging.Exclude Off
            Labels              Off
            Annotations         Off
            Buffer_Size         0
      outputs: |
        [OUTPUT]
            Name                es
            Match               application.*
            Host                elasticsearch-master
            tls                 On
            tls.verify          Off
            HTTP_User           elastic
            # Replace with the password retrieved in step 2
            HTTP_Passwd         SbeSsXiuWbAnbxUT
            Suppress_Type_Name  On
            Index               fluentbit
            Trace_Error         On
    
    


  4. To add the fluent helm repo, run:


    Code Block
    linenumberstrue
    helm repo add fluent https://fluent.github.io/helm-charts
    helm repo update


  5. Deploy the Fluent Bit DaemonSet to the cluster.


    Code Block
    linenumberstrue
    helm install fluent-bit fluent/fluent-bit -n logging -f fluent-bit-values.yaml


  6. Verify each Fluent-bit pod's log. You should not see any errors or exceptions if the connection to Elasticsearch is established successfully.


    Code Block
    linenumberstrue
    kubectl logs <fluent-bit pod name> -n logging
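To scan all Fluent-bit pods at once instead of one by one, the chart's label selector can be used (assuming the default labels set by the fluent/fluent-bit chart):

```shell
# Grep the logs of all Fluent-bit pods for errors or exceptions
kubectl logs -n logging -l app.kubernetes.io/name=fluent-bit --tail=200 \
  | grep -iE 'error|exception' || echo "No errors found"
```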


Install Kibana

Kibana will be installed to the same namespace as Fluent-bit, i.e., logging.

  1. Install Kibana. Note that the service type is set to LoadBalancer to allow public access.


    Code Block
    linenumberstrue
    helm install kibana elastic/kibana -n logging --set=service.type=LoadBalancer --set=service.port=80
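Provisioning the load balancer can take a few minutes; you can watch the service until an external address is assigned:

```shell
# Wait until EXTERNAL-IP changes from <pending> to an address
kubectl get service kibana-kibana -n logging --watch
```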


...

  1. Retrieve the public IP address of the Kibana dashboard.


    Code Block
    linenumberstrue
    kubectl get service -n logging kibana-kibana -o jsonpath='{.status.loadBalancer.ingress[0].ip}'


  2. Log in to the Kibana dashboard web UI with the same username and password as the HTTP_User and HTTP_Passwd configured in the previous section.
  3. Go to Management > Stack Management > Index Management.
  4. If the Fluent-bit connection to Elasticsearch is established successfully, the index is created automatically.
  5. Go to Management > Stack Management > Kibana. Create a Data view matching the index pattern.
  6. Go to Analytics > Discover to search for logs belonging to each index pattern.
  7. You can filter logs using KQL syntax. For instance, enter "kubernetes.pod_name : oci-native-ingress" in the KQL filter input field.
  8. A log record in JSON format is parsed into separate fields:


    Code Block
    linenumberstrue
    {
      "_p": [
        "F"
      ],
      "_p.keyword": [
        "F"
      ],
      "@timestamp": [
        "2024-06-20T06:43:59.178Z"
      ],
      "kubernetes.container_image": [
        "ghcr.io/oracle/oci-native-ingress-controller:v1.3.5"
      ],
      "kubernetes.container_image.keyword": [
        "ghcr.io/oracle/oci-native-ingress-controller:v1.3.5"
      ],
      "kubernetes.container_name": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.container_name.keyword": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.docker_id": [
        "e927b9990c66822ea136b87867626d79fb22bc7cb67700b2b07b643bf53a5a01"
      ],
      "kubernetes.docker_id.keyword": [
        "e927b9990c66822ea136b87867626d79fb22bc7cb67700b2b07b643bf53a5a01"
      ],
      "kubernetes.host": [
        "10.0.10.177"
      ],
      "kubernetes.host.keyword": [
        "10.0.10.177"
      ],
      "kubernetes.labels.app.kubernetes.io/instance": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.labels.app.kubernetes.io/instance.keyword": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.labels.app.kubernetes.io/name": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.labels.app.kubernetes.io/name.keyword": [
        "oci-native-ingress-controller"
      ],
      "kubernetes.labels.pod-template-hash": [
        "67bb8d5f4d"
      ],
      "kubernetes.labels.pod-template-hash.keyword": [
        "67bb8d5f4d"
      ],
      "kubernetes.namespace_name": [
        "native-ingress-controller-system"
      ],
      "kubernetes.namespace_name.keyword": [
        "native-ingress-controller-system"
      ],
      "kubernetes.pod_id": [
        "d3a618b4-c726-4fcd-8ba3-062ddac33716"
      ],
      "kubernetes.pod_id.keyword": [
        "d3a618b4-c726-4fcd-8ba3-062ddac33716"
      ],
      "kubernetes.pod_name": [
        "oci-native-ingress-controller-67bb8d5f4d-strw9"
      ],
      "kubernetes.pod_name.keyword": [
        "oci-native-ingress-controller-67bb8d5f4d-strw9"
      ],
      "log": [
        "I0620 06:43:59.178703       1 backend.go:272] \"validating pod readiness gate status\" pod=\"uepe/ingress-nginx-controller-7477648b4c-pt92s\" gate=podreadiness.ingress.oraclecloud.com/k8s_e4e294007c"
      ],
      "log.keyword": [
        "I0620 06:43:59.178703       1 backend.go:272] \"validating pod readiness gate status\" pod=\"uepe/ingress-nginx-controller-7477648b4c-pt92s\" gate=podreadiness.ingress.oraclecloud.com/k8s_e4e294007c"
      ],
      "stream": [
        "stderr"
      ],
      "stream.keyword": [
        "stderr"
      ],
      "time": [
        "2024-06-20T06:43:59.178Z"
      ],
      "_id": "8HVjNJABKtF0FswcRymT",
      "_index": "fluentbit",
      "_score": null
    }
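As a sanity check outside Kibana, the same records can be queried directly from Elasticsearch. This sketch reuses the port-forward and credentials from the earlier steps; the index name fluentbit matches the [OUTPUT] configuration above:

```shell
# Fetch the most recent document from the fluentbit index
kubectl port-forward -n logging svc/elasticsearch-master 9200:9200 &
curl -k -u elastic:<password> \
  'https://localhost:9200/fluentbit/_search?size=1&sort=@timestamp:desc&pretty'
```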


...