Note

You need a working GKE cluster in order to proceed with these steps. Refer to Set Up Kubernetes Cluster - GCP (4.3) to create the GKE cluster first.
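
Once the cluster exists, you can point kubectl at it with a command along the lines below (the cluster name and region are placeholders for your own values).

Code Block
gcloud container clusters get-credentials <cluster-name> --region <region>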

...

To get a production-ready log configuration, you can use tools from the Kubernetes ecosystem together with GCP Cloud Logging. In this guide we show you how to set up:

...

Stream container logs to GCP Cloud Logging

Before using GCP Cloud Logging, you need to ensure the Cloud Logging API is enabled on your Google Cloud project. Refer to the guide https://cloud.google.com/kubernetes-engine/docs/troubleshooting/logging to verify that logging is enabled.
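
If you prefer the command line, the following gcloud commands can be used to check whether the Cloud Logging API is enabled and to enable it if needed (this assumes gcloud is already authenticated against the correct project).

Code Block
gcloud services list --enabled | grep logging.googleapis.com
gcloud services enable logging.googleapis.com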

Fluent-bit is a log processor that is used to send container logs to GCP Cloud Logging. By default, a managed Fluent-bit instance is installed by GKE during cluster creation.

After the Cloud Logging API is enabled, all container logs should automatically be sent to Cloud Logging. To verify logging, go to the GCP console page Logging > Logs Explorer and check that container logs are populated.
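
As an alternative to the console, you can read a few recent container log entries from the command line (this assumes gcloud targets the project that hosts the cluster).

Code Block
gcloud logging read 'resource.type="k8s_container"' --limit=5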

...

  1. Add the Fluent Helm repository and update it to retrieve the latest chart version.

    Code Block
    helm repo add fluent https://fluent.github.io/helm-charts
    helm repo update
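    # Optional: confirm the chart is now available in the local repository cache
    helm search repo fluent/fluent-bit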

  2. Retrieve the Elasticsearch access credentials using the commands below. Save the output; you will need it in the next step.

    Code Block
    kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
    kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
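    # Optional: capture both values in shell variables for later use
    # (ES_USER and ES_PASS are illustrative names, not required by any tool)
    ES_USER=$(kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d)
    ES_PASS=$(kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d)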

  3. Create a custom values YAML file, for example fluent-bit-values.yaml, with the content below. Then replace the values of HTTP_User and HTTP_Passwd with the output from the previous step.

    Code Block
    config:
      inputs: |
        [INPUT]
            Name                tail
            Alias               kube_containers
            Tag                 kube_<namespace_name>_<pod_name>_<container_name>
            Exclude_Path        /var/log/containers/*_kube-system_*.log,/var/log/containers/*_istio-system_*.log,/var/log/containers/*_knative-serving_*.log,/var/log/containers/*_gke-system_*.log,/var/log/containers/*_config-management-system_*.log,/var/log/containers/*_gmp-system_*.log,/var/log/containers/*_gke-managed-cim_*.log
            Path                /var/log/containers/*.log
            multiline.parser    docker, cri
            Mem_Buf_Limit       50MB
            Skip_Long_Lines     On
            Refresh_Interval    1
            Read_from_Head      True
    
      filters: |
        [FILTER]
            Name                kubernetes
            Match               kube.*
            Kube_URL            https://kubernetes.default.svc:443
            Kube_Tag_Prefix     application.var.log.containers.
            Merge_Log           On
            K8S-Logging.Parser  On
            K8S-Logging.Exclude Off
            Labels              Off
            Annotations         Off
            Use_Kubelet         On
            Kubelet_Port        10250
            Buffer_Size         0
    
      outputs: |
        [OUTPUT]
            Name                es
            Match               *
            Host                elasticsearch-master
            tls                 on
            tls.verify          off
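            # Replace the two values below with the username and password retrieved in step 2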
            HTTP_User           elastic
            HTTP_Passwd         zUqEBtrE4H9bfO8K
            Suppress_Type_Name  On
            Index               fluentbit
            Trace_Error         on

  4. Install Fluent-bit with the custom values YAML file.

    Code Block
    helm install fluent-bit fluent/fluent-bit -n logging -f fluent-bit-values.yaml
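    # Optional: check the release and pod status after installation
    # (assumes the chart's standard app.kubernetes.io/name label)
    helm status fluent-bit -n logging
    kubectl get pods -n logging -l app.kubernetes.io/name=fluent-bit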

  5. Verify the Fluent-bit pod's logs. You should not see any errors or exceptions if the connection to Elasticsearch is established successfully.

    Code Block
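    # List the pods in the logging namespace first to find the Fluent-bit pod name
    kubectl get pods -n logging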
    kubectl logs <fluent-bit pod name> -n logging

...

  1. Retrieve the public access IP of the Kibana dashboard.

    Code Block
    kubectl get service -n logging kibana-kibana -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
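    # If the service has no external IP (for example, when it is not of type LoadBalancer),
    # port-forwarding is an alternative way to reach Kibana on its default port 5601
    kubectl port-forward -n logging svc/kibana-kibana 5601:5601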

  2. Log in to the Kibana dashboard web UI with the same username and password as the HTTP_User and HTTP_Passwd configured in the previous section.

  3. Go to Management > Stack Management > Index Management. Create an Index Template with an index pattern matching the index (fluentbit) configured in the previous section. An API alternative is sketched after this list.

  4. If the Fluent-bit connection to Elasticsearch is established successfully, the indices are created automatically.

  5. Go to Management > Stack Management > Kibana. Create a Data view matching the index pattern.

  6. Go to Analytics > Discover to view the logs.

    image-20241008-072007.png

  7. You can filter logs using KQL syntax. For instance, enter "ECDeployment" in the KQL filter input field.
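
If you prefer an API call over the Index Management UI in step 3, a minimal index template matching the fluentbit index configured earlier can be created from Dev Tools > Console in Kibana. The template name and the single index_patterns entry below are an illustrative sketch, not a required configuration.

Code Block
PUT _index_template/fluentbit
{
  "index_patterns": ["fluentbit*"]
}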

...