Configure Log Collection, Target, and Visualization - GCP
Note!
You need a properly set up GKE cluster to proceed with these steps. Refer to Set Up Kubernetes Cluster - GCP (4.3) to create the GKE cluster first.
By default, Usage Engine deployed in Kubernetes outputs logging to disk and console. If persistent disk storage is enabled, the logs end up on the mounted shared disk. However, persistent disk is not always the desired log target, especially in a cloud environment where persistent data is typically accessed through services and APIs rather than as files. The console logs can be accessed through the kubectl logs command or from a Kubernetes dashboard. Note, however, that the buffer storing the Kubernetes console logs is kept in memory only and is therefore lost when a Pod terminates.
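For example, to follow the console log of a specific pod (the pod name and namespace below are placeholders to replace with your own values):
# Stream the console log of a pod until interrupted
kubectl logs <pod-name> -n <namespace> --follow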
To get a production-ready log configuration you can use tools from the Kubernetes ecosystem and GCP Cloud Logging. In this guide we show you how to set up:
GCP Cloud Logging for storage and monitoring
Fluent-bit for log collection and log forwarding
Elasticsearch for log storage
Kibana for log visualization
These tools give you powerful and flexible log collection, storage, monitoring, and visualization. The Elasticsearch database storage also provides powerful tools to perform analytics on the log data. The GCP Logs Explorer is a monitoring service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners. Describing these tools' functionality in detail is outside the scope of this guide.
Prerequisite
Before setting up log collection, make sure your Usage Engine Private Edition was installed with JSON formatted logging enabled.
log:
# Format can be "json" or "raw". Default is "raw"
format: json
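How the values file is applied depends on your installation; as a minimal sketch, assuming a Helm-based install where <release>, <chart>, and <namespace> are placeholders and log-values.yaml is a hypothetical file containing the snippet above:
# Apply the JSON logging setting through a custom values file (hypothetical file name)
helm upgrade --install <release> <chart> -n <namespace> -f log-values.yaml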
Stream Container Logs to GCP Cloud Logging
Before using GCP Cloud Logging, you need to ensure that the Cloud Logging API is enabled on your Google Cloud project. Refer to the guide https://cloud.google.com/kubernetes-engine/docs/troubleshooting/logging to verify that logging is enabled.
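You can also verify and enable the API from the command line using standard gcloud commands:
# Check whether the Cloud Logging API is enabled on the current project
gcloud services list --enabled --filter="logging.googleapis.com"
# Enable the API if it is not listed
gcloud services enable logging.googleapis.com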
Fluent-bit is a log processor that is used to send container logs to GCP Cloud Logging. By default, a managed Fluent-bit is installed by GKE during cluster creation.
After the Cloud Logging API is enabled, all container logs should automatically be sent to Cloud Logging. To verify logging, go to the GCP console page Logging > Logs Explorer and check that container logs are populated.
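As an alternative to the console, you can query container log entries from the command line:
# Read the five most recent container log entries from the project
gcloud logging read 'resource.type="k8s_container"' --limit=5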
Stream Container Logs to Elasticsearch and Visualize with Kibana
Important!
You must install Elasticsearch, Fluent-bit, and Kibana in the same namespace for them to work properly. Some of the reasons:
The Elasticsearch service needs to be accessible by Fluent-bit and Kibana to establish connections.
Kibana requires the Elasticsearch master certificate secret to be present in the namespace.
Hence, in this guide we use the namespace 'logging' for all installations (create it with kubectl create namespace logging if it does not already exist).
Install Elasticsearch
Add the Elasticsearch repository to Helm and update it to retrieve the latest version.
helm repo add elastic https://helm.elastic.co
helm repo update
Install Elasticsearch.
Note!
For simplicity this example installs Elasticsearch without persistent storage. Refer to the Elasticsearch Helm chart documentation for help with enabling persistent storage:
helm install elasticsearch elastic/elasticsearch -n logging --set=persistence.enabled=false
Install custom Fluent-bit
Add the Fluent Helm repository and update it to retrieve the latest version.
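Assuming the upstream Fluent Helm chart repository:
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update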
Retrieve the Elasticsearch access credentials using the commands below. Save the output; you will need it in the next step.
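The exact commands depend on your chart version; a sketch assuming a recent elastic/elasticsearch chart, which stores the credentials in a secret named elasticsearch-master-credentials:
# Username (typically "elastic")
kubectl get secret elasticsearch-master-credentials -n logging -o jsonpath='{.data.username}' | base64 -d
# Password
kubectl get secret elasticsearch-master-credentials -n logging -o jsonpath='{.data.password}' | base64 -d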
Create a custom values yaml file, for example fluent-bit-values.yaml, and set the content below. After that, replace the values of HTTP_User and HTTP_Passwd with the output from the previous step.
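The exact content depends on your chart version and environment; a minimal sketch of fluent-bit-values.yaml for the fluent/fluent-bit chart, where the host name, index prefix, and TLS settings are assumptions to adapt:
config:
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        HTTP_User elastic
        HTTP_Passwd <output-from-previous-step>
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix kube
        Suppress_Type_Name On
        Retry_Limit False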
Install Fluent-bit with the custom values yaml file.
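Assuming the release name fluent-bit:
helm install fluent-bit fluent/fluent-bit -n logging -f fluent-bit-values.yaml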
Verify the Fluent-bit pod's log. You should not see any errors or exceptions if the connection to Elasticsearch is established successfully.
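A sketch using the chart's default labels:
# Check the Fluent-bit pod logs for errors or exceptions
kubectl logs -n logging -l app.kubernetes.io/name=fluent-bit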
Install Kibana
Install Kibana. The service type is set to LoadBalancer in order to allow public access.
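A sketch assuming the elastic/kibana chart:
helm install kibana elastic/kibana -n logging --set service.type=LoadBalancer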
Configure Kibana
Kibana is a visual interface tool that allows you to explore, visualize, and build dashboards over the log data amassed in the Elasticsearch cluster.
Up to this stage, all pods in the logging namespace should be up and running.
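You can verify this with:
kubectl get pods -n logging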
If all looks good, you can proceed to log in to the Kibana dashboard web UI.
Retrieve the public access IP of the Kibana dashboard.
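A sketch assuming the default service name kibana-kibana created by the elastic/kibana chart:
kubectl get service kibana-kibana -n logging -o jsonpath='{.status.loadBalancer.ingress[0].ip}'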
Log in to the Kibana dashboard web UI with the same username and password as the HTTP_User and HTTP_Passwd configured in the previous section.
Go to Management > Stack Management > Index Management. Create an Index Template with an index pattern matching the indices configured in the previous section.
If the Fluent-bit connection to Elasticsearch is established successfully, the indices are created automatically.
Go to Management > Stack Management > Kibana. Create a Data View matching the index pattern.
Go to Analytics > Discover to view logs.
You can filter logs using KQL syntax. For instance, enter "ECDeployment" in the KQL filter input field.