Note!
You need a running EKS cluster in order to proceed with these steps. Refer to Set Up Kubernetes Cluster - AWS (4.3) to create the EKS cluster first.
By default, Usage Engine deployed in Kubernetes writes its logs to disk and to console output. If persistent disk storage is enabled, the logs end up on the mounted shared disk. However, a persistent disk is not always the desired log target, especially in a cloud environment where persistent data is typically accessed through services and APIs rather than as files. The console logs can be accessed through the "kubectl logs" command or from a Kubernetes dashboard. The buffer holding the Kubernetes console logs is kept in memory only, though, and is therefore lost when a Pod terminates.
To get a production-ready log configuration, you can use tools from the Kubernetes ecosystem together with AWS CloudWatch. In this guide we show you how to set up:
Fluent-bit for log collection and log forwarding
Elasticsearch for log storage
Kibana for log visualization
AWS CloudWatch for monitoring
These tools give you powerful and flexible log collection, storage, monitoring and visualization. The Elasticsearch storage also provides powerful tools for performing analytics on the log data. AWS CloudWatch is a monitoring service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners. See the official user documentation for each tool for detailed information.
Prerequisite
Before setting up log collection, ensure that Usage Engine was installed with JSON formatted logging enabled, as below:
log:
  # Format can be "json" or "raw". Default is "raw"
  format: json
Set Up AWS IAM OIDC Provider
To use AWS Identity and Access Management (IAM) roles for service accounts, an IAM OIDC provider must exist for your cluster's OIDC issuer URL. Before creating the AWS policy and role, you need to set up an Identity Provider using the EKS cluster's OpenID Connect Provider URL.
Log in to the AWS Management Console and go to EKS > Clusters > Your Cluster Name.
On the Overview tab, in the Details section, click the copy button under OpenID Connect Provider URL to copy the URL to the clipboard.
Go to IAM > Identity Providers.
Add an Identity Provider and select OpenID Connect.
Paste the copied URL as Provider URL.
Enter "sts.amazonaws.com" as Audience.
Click Add Provider and proceed to complete the Identity Provider creation.
Set Up AWS IAM Policy and Role
In order for Fluent-bit to send logs to AWS CloudWatch, you need to set up an AWS access policy granting access to AWS CloudWatch, and attach this policy to an AWS role.
Log in to the AWS Management Console and go to IAM > Policies.
Create a new policy using the JSON tab in the Policy Editor. Enter the permission statement in JSON format, as below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
Click Next and proceed to create the policy.
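If you prefer the command line, the same policy can be created with the AWS CLI. A minimal sketch, assuming the permission statement above is saved locally and that fluent-bit-cloudwatch-policy is a policy name of your own choosing:

```shell
# Save the permission statement from the step above to a local file.
cat > fluent-bit-cloudwatch-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
EOF

# Check that the document is well-formed JSON before uploading it.
python3 -m json.tool fluent-bit-cloudwatch-policy.json > /dev/null && echo "policy JSON is valid"

# Then create the policy (requires configured AWS credentials):
# aws iam create-policy --policy-name fluent-bit-cloudwatch-policy \
#     --policy-document file://fluent-bit-cloudwatch-policy.json
```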
Back in the IAM Dashboard, go to IAM > Roles.
Create a new role and select Web Identity.
Select the OpenID Connect provider created earlier as the Identity Provider.
Click Next and proceed to create the role.
Once the new role has been created, you need to edit the role's trust relationship to associate it with Fluent-bit's Service Account.
Go to IAM > Roles > Your Role Name.
On the Trust relationships tab, edit the trust policy.
Edit the "StringEquals" field to use Fluent-bit's namespace and Service Account name, as below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::211006581866:oidc-provider/oidc.eks.ap-southeast-2.amazonaws.com/id/360F8C7227656FC5627D5DA70F181583"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.ap-southeast-2.amazonaws.com/id/360F8C7227656FC5627D5DA70F181583:sub": "system:serviceaccount:<Fluent-bit namespace>:<fluent-bit Service Account Name>"
                }
            }
        }
    ]
}
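The "sub" value follows the pattern system:serviceaccount:&lt;namespace&gt;:&lt;service account&gt;. As a quick sanity check, this is what it resolves to for the fluent-bit Service Account installed into the amazon-cloudwatch namespace later in this guide:

```shell
# Namespace and Service Account name used by the Fluent-bit installation below.
NAMESPACE=amazon-cloudwatch
SERVICE_ACCOUNT=fluent-bit

# This is the value that must appear in the trust policy's StringEquals condition.
echo "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT}"
# system:serviceaccount:amazon-cloudwatch:fluent-bit
```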
Install Fluent-bit
To stream container logs to CloudWatch Logs, install AWS for Fluent-bit:
Create a namespace called amazon-cloudwatch with the following command:
kubectl create namespace amazon-cloudwatch
Create a ConfigMap called fluent-bit-cluster-info, replacing my-cluster-name and my-cluster-region with your cluster's name and region, as below:
ClusterName=<my-cluster-name>
RegionName=<my-cluster-region>
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off' || FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
kubectl create configmap fluent-bit-cluster-info \
    --from-literal=cluster.name=${ClusterName} \
    --from-literal=http.server=${FluentBitHttpServer} \
    --from-literal=http.port=${FluentBitHttpPort} \
    --from-literal=read.head=${FluentBitReadFromHead} \
    --from-literal=read.tail=${FluentBitReadFromTail} \
    --from-literal=logs.region=${RegionName} -n amazon-cloudwatch
Deploy the Fluent-bit DaemonSet to the cluster with the following command:
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit.yaml
Associate the IAM role with the fluent-bit service account, replacing ACCOUNT_ID and IAM_ROLE_NAME with your AWS Account ID and the IAM role created earlier, with the following command:
kubectl annotate serviceaccounts fluent-bit -n amazon-cloudwatch "eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT_ID:role/IAM_ROLE_NAME"
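With hypothetical example values for the account ID and role name, the annotation value takes this shape:

```shell
# Hypothetical example values; substitute your own AWS Account ID and role name.
ACCOUNT_ID=111122223333
IAM_ROLE_NAME=fluent-bit-cloudwatch-role

# The annotation tells EKS which IAM role the fluent-bit Service Account may assume.
echo "eks.amazonaws.com/role-arn=arn:aws:iam::${ACCOUNT_ID}:role/${IAM_ROLE_NAME}"
# eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/fluent-bit-cloudwatch-role
```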
Go to CloudWatch > View logs and verify that the following log groups have been created:
/aws/containerinsights/Your Cluster Name/application
/aws/containerinsights/Your Cluster Name/dataplane
/aws/containerinsights/Your Cluster Name/host
For each log group, verify that there are log streams available on the Log streams tab.
Install Elasticsearch
Elasticsearch will be installed in the same namespace as Fluent-bit, i.e., amazon-cloudwatch.
Add the Elastic repository to Helm and update the repository to retrieve the latest version with the following commands:
helm repo add elastic https://helm.elastic.co
helm repo update
Install Elasticsearch.
Example - Installing Elasticsearch without Persistent storage
This example installs Elasticsearch without persistent storage. Refer to the Elasticsearch Helm chart documentation for help with enabling persistent storage:
https://github.com/elastic/helm-charts/tree/master/elasticsearch
helm install elasticsearch elastic/elasticsearch -n amazon-cloudwatch --set=persistence.enabled=false
Install Kibana
Kibana will be installed in the same namespace as Fluent-bit, i.e., amazon-cloudwatch.
Download the Kibana helm chart and unpack it in a local directory with the following command:
helm fetch elastic/kibana --untar
Change directory to kibana and edit the values.yaml file, adding a service annotation to create an Internet-facing, Network type Load Balancer, as below:
service:
  type: ClusterIP
  loadBalancerIP: ""
  port: 5601
  nodePort: ""
  labels: {}
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
Install Kibana from the unpacked local directory with the following command:
helm install kibana kibana -n amazon-cloudwatch --set=service.type=LoadBalancer --set=service.port=80
Configure Fluent-bit to Send Logs to Elasticsearch
The following additional steps configure the Fluent-bit ConfigMap named fluent-bit-config.
Get the service name of the Elastic Search pods with the following command:
kubectl get svc -n amazon-cloudwatch
This service name is the value to set as Host in the [OUTPUT] directive.
Get the username and password credentials for Elastic X-Pack access with the following commands:
kubectl get secrets --namespace=amazon-cloudwatch elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
kubectl get secrets --namespace=amazon-cloudwatch elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
The decoded username and password are the values to set as HTTP_User and HTTP_Passwd in the [OUTPUT] directive.
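Note that the Secret values are base64-encoded rather than encrypted, so base64 -d is enough to recover the plain text. A quick local illustration, using "elastic" as a stand-in value:

```shell
# Encode a stand-in username the same way Kubernetes stores Secret data.
ENCODED=$(printf 'elastic' | base64)
echo "${ENCODED}"
# ZWxhc3RpYw==

# Decode it again, as done with the real Secret above.
printf '%s' "${ENCODED}" | base64 -d
# elastic
```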
Download the fluent-bit daemonset yaml file into a local directory with the following command:
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit.yaml > fluent-bit.yaml
Edit the fluent-bit.yaml file: go to the ConfigMap named fluent-bit-config and, for each config file, add an output directive to send logs to Elasticsearch, as below. The HTTP_Passwd shown is an example value; use the password retrieved in the previous step.
application-log.conf
[OUTPUT]
    Name                es
    Match               application.*
    Host                elasticsearch-master
    tls                 on
    tls.verify          off
    HTTP_User           elastic
    HTTP_Passwd         DbrfdbnzCNYympQZ
    Suppress_Type_Name  On
    Index               fluentbit.app
dataplane-log.conf
[OUTPUT]
    Name                es
    Match               dataplane.*
    Host                elasticsearch-master
    tls                 on
    tls.verify          off
    HTTP_User           elastic
    HTTP_Passwd         DbrfdbnzCNYympQZ
    Suppress_Type_Name  On
    Index               fluentbit.dataplane
host-log.conf
[OUTPUT]
    Name                es
    Match               host.*
    Host                elasticsearch-master
    tls                 on
    tls.verify          off
    HTTP_User           elastic
    HTTP_Passwd         DbrfdbnzCNYympQZ
    Suppress_Type_Name  On
    Index               fluentbit.host
Delete the existing fluent-bit pods and ConfigMap with the following command:
kubectl delete -f fluent-bit.yaml
Apply the new configuration to recreate the fluent-bit pods and ConfigMap with the following command:
kubectl apply -f fluent-bit.yaml
Re-associate the IAM role with the fluent-bit service account, replacing ACCOUNT_ID and IAM_ROLE_NAME with your AWS Account ID and the IAM role used for service accounts, with the following command:
kubectl annotate serviceaccounts fluent-bit -n amazon-cloudwatch "eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT_ID:role/IAM_ROLE_NAME"
Verify every Fluent-bit pod's log with the following command:
kubectl logs <fluent-bit pod name> -n amazon-cloudwatch
You should not see any errors or exceptions if the connection to Elasticsearch was established successfully.
Configure Kibana
Kibana is a visual interface tool that allows you to explore, visualize, and build dashboards over the log data amassed in the Elasticsearch cluster.
At this stage, all pods under the namespace amazon-cloudwatch should be up and running:
NAME                             READY   STATUS    RESTARTS   AGE
elasticsearch-master-0           1/1     Running   0          4d3h
elasticsearch-master-1           1/1     Running   0          4d3h
fluent-bit-2kpgr                 1/1     Running   0          3d
fluent-bit-6wtnr                 1/1     Running   0          3d
fluent-bit-ns42z                 1/1     Running   0          3d
kibana-kibana-658dc749cd-hbc8s   1/1     Running   0          3d4h
If all looks good, you can proceed to log in to the Kibana dashboard web UI.
Retrieve the public access hostname of the Kibana dashboard with the following command:
kubectl get service -n amazon-cloudwatch kibana-kibana -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
Log in to the Kibana dashboard web UI with the same username and password as the HTTP_User and HTTP_Passwd configured in the previous section.
Go to Management > Stack Management > Index Management. Create an Index Template with an Index Pattern matching the indexes configured in the previous section.
If the Fluent-bit connection to Elasticsearch was established successfully, the indices are created automatically.
Go to Management > Stack Management > Kibana. Create a Data View matching the index pattern.
Go to Analytics > Discover to search for the logs belonging to each index pattern.
You can filter logs using KQL syntax. For instance, enter "kubernetes.pod_name:platform-0" in the KQL filter input field.
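A few more example KQL filters, built from fields visible in the parsed record below (the pod, namespace, and container names are examples from this guide's environment):

```
kubernetes.namespace_name : "uepe"
log_processed.level : "info" and kubernetes.container_name : "manager"
kubernetes.pod_name : platform-0 and not stream : "stderr"
```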
Each log record in JSON format is parsed into fields, as in this example:
{
    "_p": [ "F" ],
    "_p.keyword": [ "F" ],
    "@timestamp": [ "2024-02-21T09:14:49.079Z" ],
    "kubernetes.container_hash": [ "ghcr.io/digitalroute-public/usage-engine-private-edition@sha256:fceb32e07cfae86db58d9a83328e4539eb5f42455cd6a0463e9ac955b3642848" ],
    "kubernetes.container_hash.keyword": [ "ghcr.io/digitalroute-public/usage-engine-private-edition@sha256:fceb32e07cfae86db58d9a83328e4539eb5f42455cd6a0463e9ac955b3642848" ],
    "kubernetes.container_image": [ "ghcr.io/digitalroute-public/usage-engine-private-edition:4.0.0-operator" ],
    "kubernetes.container_image.keyword": [ "ghcr.io/digitalroute-public/usage-engine-private-edition:4.0.0-operator" ],
    "kubernetes.container_name": [ "manager" ],
    "kubernetes.container_name.keyword": [ "manager" ],
    "kubernetes.docker_id": [ "9af8ba62db2aacbb39435ed8894bc078013ea1126a561a85a1d486ee8e12367d" ],
    "kubernetes.docker_id.keyword": [ "9af8ba62db2aacbb39435ed8894bc078013ea1126a561a85a1d486ee8e12367d" ],
    "kubernetes.host": [ "ip-192-168-34-51.ap-southeast-2.compute.internal" ],
    "kubernetes.host.keyword": [ "ip-192-168-34-51.ap-southeast-2.compute.internal" ],
    "kubernetes.namespace_name": [ "uepe" ],
    "kubernetes.namespace_name.keyword": [ "uepe" ],
    "kubernetes.pod_id": [ "5a911c45-d2b0-4f53-b474-ae8aee304d4a" ],
    "kubernetes.pod_id.keyword": [ "5a911c45-d2b0-4f53-b474-ae8aee304d4a" ],
    "kubernetes.pod_name": [ "uepe-operator-controller-manager-6fdc476cb5-9282q" ],
    "kubernetes.pod_name.keyword": [ "uepe-operator-controller-manager-6fdc476cb5-9282q" ],
    "log": [ "{\"level\":\"info\",\"ts\":\"2024-02-21T09:14:49Z\",\"logger\":\"controllers.ECDeployment\",\"msg\":\"Reconciling\",\"ECDeployment\":\"uepe/http2\"}" ],
    "log_processed.ECDeployment": [ "uepe/http2" ],
    "log_processed.ECDeployment.keyword": [ "uepe/http2" ],
    "log_processed.level": [ "info" ],
    "log_processed.level.keyword": [ "info" ],
    "log_processed.logger": [ "controllers.ECDeployment" ],
    "log_processed.logger.keyword": [ "controllers.ECDeployment" ],
    "log_processed.msg": [ "Reconciling" ],
    "log_processed.msg.keyword": [ "Reconciling" ],
    "log_processed.ts": [ "2024-02-21T09:14:49.000Z" ],
    "log.keyword": [ "{\"level\":\"info\",\"ts\":\"2024-02-21T09:14:49Z\",\"logger\":\"controllers.ECDeployment\",\"msg\":\"Reconciling\",\"ECDeployment\":\"uepe/http2\"}" ],
    "stream": [ "stderr" ],
    "stream.keyword": [ "stderr" ],
    "time": [ "2024-02-21T09:14:49.079Z" ],
    "_id": "ijvyyo0B9xu2H_IDTAqi",
    "_index": "fluentbit.app",
    "_score": null
}