Note: You need to have a proper GKE cluster set up in order to proceed with these steps. Refer to Set Up Kubernetes Cluster - GCP to create the GKE cluster first.
By default, Usage Engine deployed in Kubernetes writes its logs to disk and to the console. If persistent disk storage is enabled, the logs end up on the mounted shared disk. However, a persistent disk is not always the desired log target, especially in a cloud environment where persistent data is typically accessed through services and APIs rather than as files. The console logs can be accessed through the "kubectl logs" command or from a Kubernetes dashboard, but the buffer storing the Kubernetes console logs is kept in memory only and is therefore lost when a Pod terminates.
To get a production-ready log configuration you can use tools from the Kubernetes ecosystem together with GCP Cloud Logging. This guide shows you how to set up:
- GCP Cloud Logging for log storage and monitoring
- Fluent-bit for log collection and log forwarding
- Elasticsearch for log storage
- Kibana for log visualization
These tools give you powerful and flexible log collection, storage, monitoring and visualization. The Elasticsearch storage also provides powerful tools for performing analytics on the log data. The GCP Logs Explorer is a monitoring service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners. Describing these tools' functionality in detail is outside the scope of this guide.
Prerequisite
Before setting up log collection, make sure your Usage Engine Private Edition was installed with JSON formatted logging enabled.
```yaml
log:
  # Format can be "json" or "raw". Default is "raw"
  format: json
```
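JSON-formatted logs are what make the downstream pipeline useful: each log line can be parsed into named fields instead of being treated as opaque text. A minimal Python sketch of the difference (the log line and its field names are illustrative, not actual Usage Engine output):

```python
import json

# An illustrative JSON-formatted log line; the exact field names
# Usage Engine emits may differ.
line = '{"timestamp": "2024-01-01T12:00:00Z", "level": "INFO", "message": "EC started"}'

record = json.loads(line)

# Structured fields can be addressed directly, e.g. for filtering by level,
# which is what tools like Fluent-bit and Kibana rely on.
print(record["level"])
```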
Stream container logs to GCP Cloud Logging
Before using GCP Cloud Logging, ensure that the Cloud Logging API is enabled on your Google Cloud project. Refer to https://cloud.google.com/kubernetes-engine/docs/troubleshooting/logging to verify that logging is enabled in your project.
Fluent-bit is a log processor used to send container logs to GCP Cloud Logging. By default, a managed Fluent-bit instance is installed by GKE during cluster creation.
After the Cloud Logging API is enabled, all container logs are automatically sent to Cloud Logging. To verify the logging, go to the GCP console page Logging > Logs Explorer and check that container logs are populated.
Note: You must install Elasticsearch, Fluent-bit, and Kibana in the same namespace for them to work together properly. Hence, in this guide we use the namespace 'logging' for all three installations.
Install Elasticsearch
Add the Elastic repository to Helm and update the repository to retrieve the latest version.
```shell
helm repo add elastic https://helm.elastic.co
helm repo update
```
Install Elasticsearch.
Note! For simplicity this example installs Elasticsearch without persistent storage. Refer to the Elasticsearch Helm chart documentation for help with enabling persistent storage: https://github.com/elastic/helm-charts/tree/master/elasticsearch

```shell
helm install elasticsearch elastic/elasticsearch -n logging --set=persistence.enabled=false
```
Install custom Fluent-bit
Add the Fluent Helm repository and update the repository to retrieve the latest version.
```shell
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```
Retrieve the Elasticsearch access credentials using the commands below. Save the output; you will need it in the next step.
```shell
kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
```
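The jsonpath output is base64 encoded, as all Kubernetes Secret data is, which is why the commands pipe through `base64 -d`. A quick sketch of that decoding step in Python (the sample values below are made up, not real credentials):

```python
import base64

# Secret data as stored in a Kubernetes Secret: values are base64 encoded.
# These sample values are illustrative, not real credentials.
secret_data = {
    "username": "ZWxhc3RpYw==",   # base64 for "elastic"
    "password": "Y2hhbmdlbWU=",   # base64 for "changeme"
}

# Decoding recovers the plain-text credentials, mirroring `base64 -d`.
username = base64.b64decode(secret_data["username"]).decode()
password = base64.b64decode(secret_data["password"]).decode()
print(username)
```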
Create a custom values yaml file, for example fluent-bit-values.yaml, with the content below. Then replace the values of HTTP_User and HTTP_Passwd with the output from the previous step.
```yaml
config:
  inputs: |
    [INPUT]
        Name tail
        Alias kube_containers
        Tag kube_<namespace_name>_<pod_name>_<container_name>
        Exclude_Path /var/log/containers/*_kube-system_*.log,/var/log/containers/*_istio-system_*.log,/var/log/containers/*_knative-serving_*.log,/var/log/containers/*_gke-system_*.log,/var/log/containers/*_config-management-system_*.log,/var/log/containers/*_gmp-system_*.log,/var/log/containers/*_gke-managed-cim_*.log
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Mem_Buf_Limit 50MB
        Skip_Long_Lines On
        Refresh_Interval 1
        Read_from_Head True

  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Kube_URL https://kubernetes.default.svc:443
        Kube_Tag_Prefix application.var.log.containers.
        Merge_Log On
        K8S-Logging.Parser On
        K8S-Logging.Exclude Off
        Labels Off
        Annotations Off
        Use_Kubelet On
        Kubelet_Port 10250
        Buffer_Size 0

  outputs: |
    [OUTPUT]
        Name es
        Match *
        Host elasticsearch-master
        tls on
        tls.verify off
        HTTP_User elastic
        HTTP_Passwd zUqEBtrE4H9bfO8K
        Suppress_Type_Name On
        Index fluentbit
        Trace_Error on
```
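The Exclude_Path patterns drop logs from system namespaces so that only application containers are forwarded. Kubernetes names container log files `<pod>_<namespace>_<container>-<id>.log`, so a whole namespace can be excluded with a glob. A small Python sketch of that matching logic (the file names are illustrative):

```python
from fnmatch import fnmatch

# Illustrative container log file paths as found under /var/log/containers/.
paths = [
    "/var/log/containers/ec1-abc_uepe_ec-0123456789abcdef.log",
    "/var/log/containers/kube-dns-xyz_kube-system_dns-0123456789abcdef.log",
]

# One of the Exclude_Path globs from the values file above.
exclude = "/var/log/containers/*_kube-system_*.log"

# Fluent-bit tails only the paths that match no exclude pattern.
kept = [p for p in paths if not fnmatch(p, exclude)]
print(kept)
```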
Install Fluent-bit with the custom values yaml.
```shell
helm install fluent-bit fluent/fluent-bit -n logging -f fluent-bit-values.yaml
```
Verify the Fluent-bit pod's log. You should not see any errors or exceptions if the connection to Elasticsearch was established successfully.
```shell
kubectl logs <fluent-bit pod name> -n logging
```
Install Kibana
Install Kibana. Note that the service type is set to LoadBalancer to allow public access.
```shell
helm install kibana elastic/kibana -n logging --set=service.type=LoadBalancer --set=service.port=80
```
Retrieve the public access IP of the Kibana dashboard.
```shell
kubectl get service -n logging kibana-kibana -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
- Log in to the Kibana dashboard web UI with the same username and password as the HTTP_User and HTTP_Passwd configured in the previous section.
- Go to Management > Stack Management > Index Management. Create an Index Template with an Index Pattern matching the indexes configured in the previous section.
- If the Fluent-bit connection to Elasticsearch was established successfully, the indices are created automatically.
- Go to Management > Stack Management > Kibana. Create a Data view matching the index pattern.
- Go to Analytics > Discover to view the logs.
- You can filter logs using KQL syntax. For instance, enter "ECDeployment" in the KQL filter input field.
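For more targeted filtering, the kubernetes filter in the Fluent-bit configuration enriches each record with metadata fields that can be combined in the KQL bar. A sketch of such a query (the namespace value is illustrative, and the exact field names depend on your Data view, so verify them in Discover first):

```
kubernetes.namespace_name : "uepe" and log : "*ECDeployment*"
```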