Note! You need to have a proper OKE cluster set up in order to proceed with these steps. Refer to /wiki/spaces/UEPE4D/pages/211091598 to create the OKE cluster first.
By default, Usage Engine deployed in Kubernetes writes its logs to disk and to the console. If persistent disk storage is enabled, the logs end up on the mounted shared disk. But persistent disk is not always the desired log target, especially in a cloud environment where persistent data is typically accessed through services and APIs rather than as files. The console logs can be accessed through the "kubectl logs" command or from a Kubernetes dashboard. However, the buffer storing the Kubernetes console logs is kept in memory only and is therefore lost when a Pod terminates.
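For example, console logs can be inspected with kubectl like this (pod and namespace names are placeholders):

```
# Print the console log of a running container
kubectl logs <pod-name> -n <namespace>

# Print the log of the previous container instance, if one exists
kubectl logs <pod-name> -n <namespace> --previous
```

Note that the --previous flag only helps across container restarts within a live Pod; once the Pod itself is deleted, its console logs are gone, which is why a proper log collection pipeline is needed.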
To get a production-ready log configuration, you can use tools from the Kubernetes ecosystem together with the OCI Logging Analytics Service. In this guide we show you how to set up:
- Fluent-bit for log collection and log forwarding
- Elasticsearch for log storage
- Kibana for log visualization
- OCI Logging Analytics Service for analytics
These tools give you powerful and flexible log collection, storage, monitoring, and visualization. The Elasticsearch database also provides powerful tools for performing analytics on the log data. The OCI Logging Analytics Service is a cloud solution that aggregates, indexes, and analyzes a variety of log data from on-premises and multicloud environments. It enables you to search, explore, and correlate this data to derive operational insights and make informed decisions. This guide does not describe these tools' functionality in detail, as that is outside its scope.
Prerequisite
Before setting up log collection, make sure your Usage Engine Private Edition was installed with JSON formatted logging enabled.
```
log:
  # Format can be "json" or "raw". Default is "raw"
  format: json
```
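If JSON logging was not enabled at install time, it can typically be switched on by upgrading the Helm release with the setting above; a minimal sketch, where <release>, <chart>, and <namespace> are placeholders for your actual installation:

```
# Hypothetical upgrade command - substitute your actual release, chart and namespace
helm upgrade <release> <chart> -n <namespace> --set log.format=json
```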
Kubernetes Monitoring Solution in Oracle Logging Analytics
Use the Kubernetes Monitoring Solution in Oracle Logging Analytics to monitor and generate insights into your Kubernetes clusters deployed in OCI, third-party public clouds, private clouds, or on-premises, including managed Kubernetes deployments.
To connect your Kubernetes cluster with Logging Analytics:
- Open the navigation menu and click Observability & Management. Under Logging Analytics, click Solutions, and then click Kubernetes. The Kubernetes Monitoring Solution page opens.
- On the Kubernetes Monitoring Solution page, click Connect clusters. The Add Data wizard opens with the Monitor Kubernetes section already expanded. Click Oracle OKE. The Configure OKE environment monitoring page opens.
- Select the OKE cluster that you want to connect with Oracle Logging Analytics and click Next.
- Select the compartment for telemetry data and related monitoring resources.
- Do not select the required policies and dynamic groups option.
- Select the metrics server for the collection of usage metrics. You can clear the check box if the metrics server is already installed.
- Select the Solution deployment option to enable manual deployment of the selected cluster.
- Click Configure log collection to proceed.
- Wait for the log collection configuration to complete.
- Complete the wizard and proceed to the Log Explorer.
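Once the wizard has finished, you can verify from the command line that the monitoring components were deployed to the cluster; a quick check, assuming the solution's components land in the oci-onm namespace (the default for the OCI Kubernetes monitoring Helm chart):

```
# List the monitoring pods deployed by the solution (namespace is an assumption)
kubectl get pods -n oci-onm
```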
Stream container logs to Elasticsearch and visualize with Kibana
...
Elasticsearch will be installed in the logging namespace.

- Create the logging namespace.

```
kubectl create namespace logging
```
- Add the Elasticsearch repository to Helm and update the repository to retrieve the latest version.

```
helm repo add elastic https://helm.elastic.co
helm repo update
```
- Install Elasticsearch.

Note! For simplicity this example installs Elasticsearch without persistent storage. Refer to the Elasticsearch Helm chart documentation for help with enabling persistent storage: https://github.com/elastic/helm-charts/tree/master/elasticsearch

```
helm install elasticsearch elastic/elasticsearch -n logging --set=persistence.enabled=false
```
Install Fluent-bit
Fluent-bit will be installed in the same namespace as Elasticsearch, i.e., logging.
- Get the service name of the Elasticsearch pods. This service name is the value to set for Host in the [OUTPUT] directive.

```
kubectl get svc -n logging
```
- Get the username and password credentials for Elastic X-Pack access. The decoded username and password are the values to set for HTTP_User and HTTP_Passwd in the [OUTPUT] directive.

```
kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
```
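For convenience, you can capture the decoded credentials in shell variables when working interactively; a small sketch (the variable names are arbitrary):

```
# Capture the decoded Elasticsearch credentials for later use
ES_USER=$(kubectl get secrets --namespace=logging elasticsearch-master-credentials -o jsonpath='{.data.username}' | base64 -d)
ES_PASS=$(kubectl get secrets --namespace=logging elasticsearch-master-credentials -o jsonpath='{.data.password}' | base64 -d)
echo "$ES_USER"
```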
- Create a custom values yaml file, for example fluent-bit-values.yaml, with the following content. Replace the HTTP_Passwd value with the password retrieved in the previous step.

```
config:
  inputs: |
    [INPUT]
        Name tail
        Tag application.*
        Exclude_Path /var/log/containers/kube-proxy*
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Mem_Buf_Limit 50MB
        Skip_Long_Lines On
        Refresh_Interval 10
        Read_from_Head True
  filters: |
    [FILTER]
        Name kubernetes
        Match application.*
        Kube_URL https://kubernetes.default.svc:443
        Kube_Tag_Prefix application.var.log.containers.
        Merge_Log On
        Merge_Log_Key log_processed
        K8S-Logging.Parser On
        K8S-Logging.Exclude Off
        Labels Off
        Annotations Off
        Buffer_Size 0
  outputs: |
    [OUTPUT]
        Name es
        Match application.*
        Host elasticsearch-master
        tls On
        tls.verify Off
        HTTP_User elastic
        HTTP_Passwd SbeSsXiuWbAnbxUT
        Suppress_Type_Name On
        Index fluentbit
        Trace_Error On
```
- To add the fluent helm repo, run:

```
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```
- Deploy the Fluent Bit DaemonSet to the cluster.

```
helm install fluent-bit fluent/fluent-bit -n logging -f fluent-bit-values.yaml
```
- Verify each Fluent-bit pod's log. You should not see any errors or exceptions if the connection to Elasticsearch was established successfully.

```
kubectl logs <fluent-bit pod name> -n logging
```
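To find the pod names to pass to the command above, you can list the Fluent-bit pods by label; this assumes the chart's default app.kubernetes.io/name label:

```
# List the Fluent-bit DaemonSet pods (label selector is an assumption based on chart defaults)
kubectl get pods -n logging -l app.kubernetes.io/name=fluent-bit
```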
Install Kibana
Kibana will be installed in the same namespace as Fluent-bit, i.e., logging.
- Install Kibana. Note that the service type is set to LoadBalancer to allow public access.

```
helm install kibana elastic/kibana -n logging --set=service.type=LoadBalancer --set=service.port=80
```
...
- Retrieve the public IP address of the Kibana dashboard.

```
kubectl get service -n logging kibana-kibana -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
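Before logging in, you can quickly confirm that Kibana is reachable on the load balancer address; a simple check, with <kibana-ip> standing in for the address returned above:

```
# Expect an HTTP response (e.g., a redirect to the login page) if Kibana is up
curl -I http://<kibana-ip>/
```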
- Log in to the Kibana dashboard web UI with the same username and password as the HTTP_User and HTTP_Passwd configured in the previous section.
- Go to Management > Stack Management > Index Management.
- If the Fluent-bit connection to Elasticsearch was established successfully, the indices are created automatically.
- Go to Management > Stack Management > Kibana. Create a data view matching the index pattern.
- Go to Analytics > Discover to search for the logs belonging to each index pattern.
- You can filter logs using KQL syntax. For instance, enter "kubernetes.pod_name : oci-native-ingress" in the KQL filter input field. More KQL examples are shown after the sample record below.
- Log records in JSON format are parsed into fields.

```
{
  "_p": ["F"],
  "_p.keyword": ["F"],
  "@timestamp": ["2024-06-20T06:43:59.178Z"],
  "kubernetes.container_image": ["ghcr.io/oracle/oci-native-ingress-controller:v1.3.5"],
  "kubernetes.container_image.keyword": ["ghcr.io/oracle/oci-native-ingress-controller:v1.3.5"],
  "kubernetes.container_name": ["oci-native-ingress-controller"],
  "kubernetes.container_name.keyword": ["oci-native-ingress-controller"],
  "kubernetes.docker_id": ["e927b9990c66822ea136b87867626d79fb22bc7cb67700b2b07b643bf53a5a01"],
  "kubernetes.docker_id.keyword": ["e927b9990c66822ea136b87867626d79fb22bc7cb67700b2b07b643bf53a5a01"],
  "kubernetes.host": ["10.0.10.177"],
  "kubernetes.host.keyword": ["10.0.10.177"],
  "kubernetes.labels.app.kubernetes.io/instance": ["oci-native-ingress-controller"],
  "kubernetes.labels.app.kubernetes.io/instance.keyword": ["oci-native-ingress-controller"],
  "kubernetes.labels.app.kubernetes.io/name": ["oci-native-ingress-controller"],
  "kubernetes.labels.app.kubernetes.io/name.keyword": ["oci-native-ingress-controller"],
  "kubernetes.labels.pod-template-hash": ["67bb8d5f4d"],
  "kubernetes.labels.pod-template-hash.keyword": ["67bb8d5f4d"],
  "kubernetes.namespace_name": ["native-ingress-controller-system"],
  "kubernetes.namespace_name.keyword": ["native-ingress-controller-system"],
  "kubernetes.pod_id": ["d3a618b4-c726-4fcd-8ba3-062ddac33716"],
  "kubernetes.pod_id.keyword": ["d3a618b4-c726-4fcd-8ba3-062ddac33716"],
  "kubernetes.pod_name": ["oci-native-ingress-controller-67bb8d5f4d-strw9"],
  "kubernetes.pod_name.keyword": ["oci-native-ingress-controller-67bb8d5f4d-strw9"],
  "log": ["I0620 06:43:59.178703 1 backend.go:272] \"validating pod readiness gate status\" pod=\"uepe/ingress-nginx-controller-7477648b4c-pt92s\" gate=podreadiness.ingress.oraclecloud.com/k8s_e4e294007c"],
  "log.keyword": ["I0620 06:43:59.178703 1 backend.go:272] \"validating pod readiness gate status\" pod=\"uepe/ingress-nginx-controller-7477648b4c-pt92s\" gate=podreadiness.ingress.oraclecloud.com/k8s_e4e294007c"],
  "stream": ["stderr"],
  "stream.keyword": ["stderr"],
  "time": ["2024-06-20T06:43:59.178Z"],
  "_id": "8HVjNJABKtF0FswcRymT",
  "_index": "fluentbit",
  "_score": null
}
```
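A few more KQL filter examples built from fields in the parsed record above (the values are illustrative):

```
kubernetes.namespace_name : "native-ingress-controller-system" and stream : "stderr"
kubernetes.container_name : "oci-native-ingress-controller" and log : *readiness*
```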
...