Configure Log Collection, Target, and Visualization - Azure
Note!
You need to have a working AKS cluster set up in order to proceed with these steps. Refer to Set Up Kubernetes Cluster - Azure (4.3) to create the AKS cluster first.
To get a production-ready log configuration, you can use tools from the Kubernetes ecosystem together with the Azure Log Analytics service. In this guide we show you how to set up:
Fluent Bit for log collection and log forwarding
Elasticsearch for log storage
Kibana for log visualization
Azure Log Analytics for querying monitoring logs
These tools give you powerful, flexible log collection, storage, monitoring, and visualization. The Elasticsearch database also provides powerful tools for performing analytics on the log data. Azure Log Analytics is a tool for querying monitoring logs, built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners. See the official user documentation for detailed information about these tools.
Prerequisites
AKS Container Insights
You can collect, store, and analyze logs and event data for container debugging purposes. To enable Container insights, go to Home > your-cluster-name > Monitoring > Logs and click Configure monitoring to proceed.
Azure Log Analytics
You can store log data from Azure Monitor in a Log Analytics workspace. Azure provides an analysis engine and a rich query language. The logs show the context of any problems and are useful for identifying root causes.
For more information, refer to Azure Log Analytics tutorial.
Stream Container Logs to Elasticsearch and Visualize with Kibana
Important
You must install Elasticsearch, Fluent Bit, and Kibana in the same namespace for them to work properly, for these reasons:
The Elasticsearch service must be reachable by Fluent Bit and Kibana to establish a connection.
Kibana requires the Elasticsearch master certificate secret to be present in its namespace.
Hence, this guide uses the namespace 'logging' for all installations.
Install Elasticsearch
Elasticsearch will be installed in the namespace logging.
Create namespace logging
kubectl create namespace logging
Add the Elasticsearch repository to Helm and update the repository to retrieve the latest version with the following commands:
helm repo add elastic https://helm.elastic.co
helm repo update
Install Elasticsearch.
Example - Installing Elasticsearch without Persistent storage
This example installs Elasticsearch without persistent storage. Refer to the Elasticsearch Helm chart documentation for help enabling persistent storage:
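A minimal installation without persistence might look like the following sketch; the `persistence.enabled` and `replicas` values are assumptions based on recent versions of the elastic/elasticsearch chart, so verify them against your chart version's values:

```shell
# Install Elasticsearch into the logging namespace without persistent storage.
# NOTE: persistence.enabled and replicas may differ between chart versions --
# check the chart's values.yaml before relying on them.
helm install elasticsearch elastic/elasticsearch \
  --namespace logging \
  --set persistence.enabled=false \
  --set replicas=1
```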
Install Fluent Bit
Fluent Bit will be installed in the same namespace as Elasticsearch, that is, logging.
Get the service name of the Elasticsearch pods with the following command:
This service name is the value to set for Host in the [OUTPUT] directive.
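For example, you can list the services in the namespace; the Elasticsearch service is typically named elasticsearch-master, but the exact name depends on your Helm release:

```shell
# List services in the logging namespace; look for the Elasticsearch
# service (typically elasticsearch-master).
kubectl get service --namespace logging
```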
Get the username and password credentials for Elastic X-Pack access with the following commands:
The decoded username and password are the values to set for HTTP_User and HTTP_Passwd in the [OUTPUT] directive.
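A sketch of retrieving the credentials; the secret name elasticsearch-master-credentials is an assumption based on the default created by recent elastic/elasticsearch charts, so adjust it if your release differs:

```shell
# Decode the username and password from the Elasticsearch credentials secret.
# NOTE: the secret name is chart-version dependent.
kubectl get secret elasticsearch-master-credentials --namespace logging \
  -o jsonpath='{.data.username}' | base64 -d; echo
kubectl get secret elasticsearch-master-credentials --namespace logging \
  -o jsonpath='{.data.password}' | base64 -d; echo
```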
Create a custom values YAML file, for example fluent-bit-values.yaml, with the following content:
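A sketch of such a values file, assuming the fluent/fluent-bit chart's `config.outputs` key; the Host, HTTP_User, and HTTP_Passwd values are placeholders for the values obtained in the previous steps:

```yaml
# fluent-bit-values.yaml -- illustrative only; the config section follows
# the fluent/fluent-bit Helm chart layout.
config:
  outputs: |
    [OUTPUT]
        Name            es
        Match           kube.*
        Host            elasticsearch-master
        Port            9200
        HTTP_User       elastic
        HTTP_Passwd     <password-from-previous-step>
        tls             On
        tls.verify      Off
        Retry_Limit     False
        Suppress_Type_Name On
```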
Add the fluent Helm repo and update the repo with the following commands:
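For example:

```shell
# Add the official Fluent Helm chart repository and refresh the index.
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```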
Deploy the Fluent Bit DaemonSet to the cluster with the following command:
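A sketch of the install command; the release name fluent-bit is an example, and the values file name matches the one created in the previous step:

```shell
# Install the Fluent Bit chart, which deploys a DaemonSet so that one
# collector pod runs on every node.
helm install fluent-bit fluent/fluent-bit \
  --namespace logging \
  --values fluent-bit-values.yaml
```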
Verify every Fluent Bit pod's log with the following command. You should not see any errors or exceptions if the connection to Elasticsearch was established successfully:
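One way to do this; the label selector assumes the default labels applied by the fluent/fluent-bit chart:

```shell
# Show recent log lines from all Fluent Bit pods in the namespace.
kubectl logs --namespace logging \
  -l app.kubernetes.io/name=fluent-bit --tail=50
```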
Install Kibana
Kibana will be installed in the same namespace as Fluent Bit, i.e., logging.
Install Kibana. Note that the service type is set to LoadBalancer to allow public access:
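A sketch of the install command; `service.type` is a value in the elastic/kibana chart, and exposing Kibana via LoadBalancer makes it publicly reachable, so restrict access appropriately in production:

```shell
# Install Kibana alongside Elasticsearch and expose it publicly.
helm install kibana elastic/kibana \
  --namespace logging \
  --set service.type=LoadBalancer
```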
Configure Kibana
Kibana is a visual interface tool that allows you to explore, visualize, and build dashboards over the log data amassed in the Elasticsearch cluster.
At this stage, all pods under the namespace logging should be up and running.
If all looks good, you can proceed to log in to the Kibana dashboard web UI.
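You can check the pod status with:

```shell
# All pods in the logging namespace should report Running.
kubectl get pods --namespace logging
```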
Retrieve the public access IP Address of the Kibana dashboard with the following command:
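For example, assuming the default service name kibana-kibana created by the elastic/kibana chart (adjust if your release name differs):

```shell
# Print the external IP assigned to the Kibana LoadBalancer service.
kubectl get service kibana-kibana --namespace logging \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```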
Log in to the Kibana dashboard web interface using the HTTP_User and HTTP_Passwd configured in the previous section.
Go to Management > Stack Management > Index Management.
If the Fluent Bit connection to Elasticsearch was established successfully, the indices are created automatically.
Go to Management > Stack Management > Kibana and create a Data view matching the index pattern.
Go to Analytics > Discover to search for the logs belonging to each index pattern.
You can filter logs using KQL syntax. For instance, enter "ECDeployment" in the KQL filter input field.
A log record in JSON format is parsed into fields, as shown below: