Configure Log Collection, Target, and Visualization - OCI (4.2)

Note!

You need to have a proper OKE cluster setup in order to proceed with these steps. Refer to Set Up Kubernetes Cluster - OCI (4.2) to create the OKE cluster first.

By default, Usage Engine deployed in Kubernetes writes its logs to disk and to the console output. If persistent disk storage is enabled, the logs end up on the mounted shared disk. However, a persistent disk is not always the desired log target, especially in a cloud environment where persistent data is typically accessed through services and APIs rather than as files. The console logs can be accessed through the "kubectl logs" command or from a Kubernetes dashboard, but the buffer holding these logs is kept in memory only and is lost when a Pod terminates.
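For example, the console logs of a pod can be inspected as follows (the pod and namespace names are placeholders):

```shell
# Tail the console output of a pod (names are examples)
kubectl logs -f <pod-name> -n <namespace>

# Show logs from the previous, terminated container instance, if any;
# this only works while the Pod object itself still exists
kubectl logs --previous <pod-name> -n <namespace>
```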

To get a production-ready log configuration you can use tools from the Kubernetes ecosystem together with the OCI Logging Analytics Service. In this guide we show you how to set up:

  • Fluent-bit for log collection and log forwarding

  • Elasticsearch for log storage

  • Kibana for log visualization

  • OCI Logging Analytics Service for Analytics

These tools give you powerful and flexible log collection, storage, monitoring, and visualization. The Elasticsearch database also provides powerful tools for performing analytics on the log data. The OCI Logging Analytics Service is a cloud solution that aggregates, indexes, and analyzes a variety of log data from on-premises and multicloud environments. It enables you to search, explore, and correlate this data to derive operational insights and make informed decisions. Describing these tools' functionality in detail is outside the scope of this guide.

Prerequisite

Before setting up log collection, make sure your Usage Engine Private Edition was installed with JSON formatted logging enabled.

log:
  # Format can be "json" or "raw". Default is "raw"
  format: json

Kubernetes Monitoring Solution in Oracle Logging Analytics

Use the Kubernetes Monitoring Solution in Oracle Logging Analytics to monitor and generate insights into your Kubernetes clusters deployed in OCI, third-party public clouds, private clouds, or on-premises, including managed Kubernetes deployments.

To connect your Kubernetes cluster with Logging Analytics:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Solutions, and click Kubernetes. The Kubernetes Monitoring Solution page opens.

    logging-analytics.png
    Logging Analytics
  2. In the Kubernetes Monitoring Solution page, click Connect clusters. The Add Data wizard opens, with the Monitor Kubernetes section already expanded. Click Oracle OKE. The Configure OKE environment monitoring page opens.

    config-oke-clusters.png
    Configure OKE Monitoring - Select Clusters
  3. Select the OKE cluster that you want to connect with Oracle Logging Analytics and click Next.

  4. Select the compartment for telemetry data and related monitoring resources.

  5. Select the required Policies and dynamic groups.

  6. Select the metrics server for the collection of usage metrics. You can clear the check box if you have already installed it.

  7. Select the Solution deployment option to enable manual deployment on the selected cluster.

  8. Click Configure log collection to proceed.

  9. Wait for the log collection configuration to complete.

  10. Click Complete and proceed to the Log Explorer.

Stream Container Logs to Elasticsearch and Visualize with Kibana

Important!

You must install Elasticsearch, Fluent-bit, and Kibana in the same namespace for them to work together properly. Some of the reasons:

  • The Elasticsearch service needs to be reachable by Fluent-bit and Kibana to establish a connection.

  • Kibana requires the Elasticsearch master certificate secret to be present in the namespace.

Hence, in this guide we use the namespace 'logging' for the installations.

Install Elasticsearch

Elasticsearch will be installed in the namespace logging.

  1. Create namespace logging

    kubectl create namespace logging
  2. Add the Elastic Helm repository and update it to retrieve the latest charts

    helm repo add elastic https://helm.elastic.co
    helm repo update
  3. Install Elasticsearch.
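The original page does not include the install command here; the following is a minimal sketch based on the official Elastic Helm chart, where the release name and settings are assumptions to adapt to your environment:

```shell
# Install Elasticsearch from the Elastic Helm chart into the 'logging'
# namespace (release name and replica count are examples)
helm install elasticsearch elastic/elasticsearch \
  --namespace logging \
  --set replicas=1

# Watch the Elasticsearch pods until they are ready
kubectl get pods -n logging -l app=elasticsearch-master -w
```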

Install Fluent-bit

Fluent-bit will be installed in the same namespace as Elasticsearch, i.e., logging.

  1. Get the service name of the Elasticsearch pods. This service name is the value to set as Host in the [OUTPUT] directive.

  2. Get the username and password credentials for Elastic X-Pack access. The decoded username and password are the values to set as HTTP_User and HTTP_Passwd in the [OUTPUT] directive.

  3. Create a custom values yaml file, for example fluent-bit-values.yaml, with the following content.

  4. To add the fluent Helm repo, run:

  5. Deploy the Fluent Bit DaemonSet to the cluster.

  6. Verify every Fluent-bit pod's log. You should not see any errors or exceptions if the connection to Elasticsearch was established successfully.
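The steps above can be sketched as shell commands. The secret name, labels, and chart values below are assumptions based on the official Elastic and Fluent Bit Helm charts; verify them against your installation:

```shell
# 1. Service name of Elasticsearch (the value for Host in [OUTPUT])
kubectl get service -n logging

# 2. X-Pack credentials (the values for HTTP_User / HTTP_Passwd);
#    the secret name is the Elastic chart's default and may differ
kubectl get secret elasticsearch-master-credentials -n logging \
  -o jsonpath='{.data.username}' | base64 -d
kubectl get secret elasticsearch-master-credentials -n logging \
  -o jsonpath='{.data.password}' | base64 -d

# 4. Add the fluent Helm repo
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

# 5. Deploy the Fluent Bit DaemonSet with the custom values file
helm install fluent-bit fluent/fluent-bit \
  --namespace logging \
  -f fluent-bit-values.yaml

# 6. Check each Fluent Bit pod's log for errors or exceptions
kubectl logs -n logging -l app.kubernetes.io/name=fluent-bit
```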

Install Kibana

Kibana will be installed in the same namespace as Fluent-bit, i.e., logging.

  1. Install Kibana. Note that the service type is set to LoadBalancer to allow public access.
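A minimal sketch of the install, assuming the official Elastic Helm chart (release name and settings are examples):

```shell
# Install Kibana from the Elastic Helm chart into the 'logging'
# namespace; service.type=LoadBalancer exposes the dashboard publicly
helm install kibana elastic/kibana \
  --namespace logging \
  --set service.type=LoadBalancer
```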

Configure Kibana

Kibana is a visual interface tool that allows you to explore, visualize, and build dashboards over the log data amassed in the Elasticsearch cluster.

Up to this stage, all pods in the namespace logging should be up and running.

If all looks good, you can proceed to log in to the Kibana dashboard web UI.

  1. Retrieve the public access hostname of the Kibana dashboard.

  2. Log in to the Kibana dashboard web UI with the same username and password as the HTTP_User and HTTP_Passwd configured in the previous section.

  3. Go to Management > Stack Management > Index Management.

  4. If the Fluent-bit connection to Elasticsearch was established successfully, the indices are created automatically.

  5. Go to Management > Stack Management > Kibana. Create a Data view matching the index pattern.

  6. Go to Analytics > Discover to search for the logs belonging to each index pattern.

  7. You can filter logs using KQL syntax. For instance, enter "kubernetes.pod_name : oci-native-ingress" in the KQL filter input field.

  8. Log records in JSON format are parsed into fields.
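Step 1 above can be done with kubectl; the service name 'kibana-kibana' is the Elastic chart's default release naming and is an assumption here:

```shell
# Public address of the Kibana LoadBalancer service (on OCI this is
# typically an IP; the service name may differ in your release)
kubectl get service kibana-kibana -n logging \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```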