Setting up Prometheus (3.0)

This page describes how to install a Prometheus monitoring server in your deployment. The example steps below involve setting up the JMX exporter on your Platform so that Prometheus can scrape the metrics from the Platform.

This is a step-by-step installation guide that uses helm to install Prometheus with or without persistence. As this is just one example, you are free to install the Prometheus server and adapter in any other way you prefer.

Additionally, should you choose to use the metrics data to configure auto scaling for EC Deployments, you need to install a Prometheus adapter along with your Prometheus server.

If you have not installed the Prometheus server yet, proceed with the steps from the prerequisites to the very end.

If you have already installed the Prometheus server, you can skip ahead to the steps for installing the Prometheus adapter below.

For more information about EC Deployments, refer to Execution Context Deployments (ECDs) (3.0).

Info!

Only one instance of Prometheus is required in a Kubernetes cluster. This single Prometheus server will monitor and scrape for metrics from all the different namespaces in your Kubernetes cluster.

Prerequisite

You need to install Helm 3 before installing Prometheus following the examples below. If you already have helm installed from when you installed Usage Engine, you can skip this step.

Configuring JMX exporter for Platform

You can enable the JMX exporter for your Platform. If you want Prometheus to scrape metrics from the Platform, configure the JMX exporter, as it exposes all the metrics in your Platform for the Prometheus server to pick up.

The relevant field in the values.yaml file for your Platform is platform.export.jmx.enabled. Setting this value to true enables the Prometheus server to scrape the JMX metrics from the Platform. You can also configure the port for the JMX exporter with platform.export.jmx.port; the exporter then exposes the metrics on that port.

Note!

If you have applied the changes to values.yaml on an already installed Platform, you need to expose the JMX port on your Platform. You can use this command to forward the port:

kubectl port-forward <pod name> <arbitrary port>:<jmx port defined in debug.jmx.port>

Example

kubectl port-forward platform-0 30103:8888
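With the port forward from the example above active, you can do a quick local check that the exporter responds. This sketch assumes curl is available and uses the com_digitalroute metric prefix referenced by the adapter rule later on this page:

```shell
# Fetch the exposed metrics through the forwarded port and show a few
# of the com_digitalroute metrics.
curl -s http://localhost:30103/metrics | grep "^com_digitalroute" | head
```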


Example: values.yaml file with JMX exporter enabled for Platform

This is an example of a values.yaml file where the JMX exporter is enabled for Platform on port 8888.

usage-engine-private-edition / values.yaml:
    jmx:
      export:
        enabled: true
        port: 8888
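If you changed these values on an already installed Platform, they take effect only after an upgrade of the release. A minimal sketch, where the release name, chart reference, and namespace are placeholders for your own installation:

```shell
# Placeholders: substitute your own release name, chart reference and namespace.
helm upgrade <release name> <chart reference> \
  -n <namespace> \
  -f values.yaml
```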

Installing Prometheus without Persistence

The following steps show how to install the Prometheus server without the use of persistence. Be aware that your metrics data will not be retained should your deployment be brought down. We do not recommend deploying Prometheus without persistence into a production environment.

  1. Add the helm repo for Prometheus.

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
  2. Install Prometheus using the helm install command. For this example, we opted to install the Prometheus server in its own namespace called prometheus. Enter the port that you want the Prometheus server node port to be configured with:

    helm install -n <namespace> prometheus prometheus-community/prometheus \
      --set server.persistentVolume.enabled=false \
      --set server.service.type=NodePort \
      --set server.service.nodePort=<port> \
      --set alertmanager.persistentVolume.enabled=false

    Example: helm install Prometheus

    helm install -n prometheus prometheus prometheus-community/prometheus \
      --set server.persistentVolume.enabled=false \
      --set server.service.type=NodePort \
      --set server.service.nodePort=31010 \
      --set alertmanager.persistentVolume.enabled=false
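Once the install completes, you can confirm that the Prometheus pods are running and that the server service was assigned the requested node port. This assumes the prometheus namespace from the example and the prometheus-server service name created by the chart:

```shell
kubectl get pods -n prometheus
kubectl get svc prometheus-server -n prometheus
```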

Installing Prometheus with Persistence

The following steps show how to install the Prometheus server with the use of persistence volumes on your Kubernetes cluster.

  1. Create a yaml file and describe the Persistent Volume and Persistent Volume Claim for your Prometheus server. The example used here creates the persistent volume on an NFS file server that is mounted onto the cluster. The value set in nfs.path is the directory on the NFS file server that stores the metrics data.

    Example: Persistence for Prometheus

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: prometheus
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 10Gi
      nfs:
        path: /export/snap/metrics/prometheus
        server: 192.154.14.120
      persistentVolumeReclaimPolicy: Retain
      storageClassName: prometheus-persistent
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: prometheus-persistent
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: prometheus-persistent
  2. After creating the yaml file, run this command:

    kubectl apply -f <persistent volume yaml> -n <namespace>
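Before continuing, you can check that the claim has bound to the volume, using the names from the example manifest above:

```shell
# Both should report STATUS Bound once the claim has matched the volume.
kubectl get pv prometheus
kubectl get pvc prometheus-persistent -n <namespace>
```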
  3. Add the helm repo for Prometheus.

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
  4. Install Prometheus using the helm install command. For this example, we opted to install the Prometheus server in its own namespace called prometheus. Enter the port that you want the Prometheus server node port to be configured with, and set the name of the Persistent Volume Claim that you created in the previous steps:

    helm install -n <namespace> prometheus prometheus-community/prometheus \
      --set server.persistentVolume.enabled=true \
      --set server.persistentVolume.accessModes=ReadWriteMany \
      --set server.persistentVolume.existingClaim="prometheus-persistent" \
      --set server.service.type=NodePort \
      --set server.service.nodePort=<port> \
      --set alertmanager.persistentVolume.enabled=false

    Example: helm install Prometheus - with persistence

    helm install -n prometheus prometheus prometheus-community/prometheus \
      --set server.persistentVolume.enabled=true \
      --set server.persistentVolume.accessModes=ReadWriteMany \
      --set server.persistentVolume.existingClaim="prometheus-persistent" \
      --set server.service.type=NodePort \
      --set server.service.nodePort=31010 \
      --set alertmanager.persistentVolume.enabled=false

Verify the Prometheus Installation

These steps let you check that Prometheus is deployed correctly.

  1. After installing the Prometheus server, helm prints export commands that you can use to acquire the URL and the port for the Prometheus server. The commands can look something like this:

    export NODE_PORT=$(kubectl get --namespace <namespace> -o jsonpath="{.spec.ports[0].nodePort}" services prometheus-server)
    export NODE_IP=$(kubectl get nodes --namespace <namespace> -o jsonpath="{.items[0].status.addresses[0].address}")

    Example: Exporting the value for Prometheus Node IP and Node Port

    $ export NODE_PORT=$(kubectl get --namespace prometheus -o jsonpath="{.spec.ports[0].nodePort}" services prometheus-server)
    $ export NODE_IP=$(kubectl get nodes --namespace prometheus -o jsonpath="{.items[0].status.addresses[0].address}")
  2. To generate the URL from the result of the two export commands above, use this echo command, then open the result in your browser.

    echo http://$NODE_IP:$NODE_PORT


    Example: URL for Prometheus GUI

    http://192.168.52.26:31010
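As an alternative to the browser, you can check that the server is up from the command line; Prometheus exposes a readiness endpoint at /-/ready:

```shell
# Requires the NODE_IP and NODE_PORT variables exported in step 1.
curl -s http://$NODE_IP:$NODE_PORT/-/ready
```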

Install Prometheus Adapter

The Prometheus adapter functions as a gatekeeper: it retrieves metrics from a Prometheus server and publishes them to the Kubernetes custom metrics API. The adapter uses a configuration file to set the rules that determine which metrics it publishes. You can also configure your own custom metrics in the configuration file. For examples of how to configure your own custom metrics, refer to Creating Custom Metrics on Prometheus Adapter (3.0).

Refer to https://github.com/kubernetes-sigs/prometheus-adapter for more information about the Prometheus adapter and how to configure the rules for the configuration file.

For simplicity, a sample configuration file called prom-adapter-values.yaml is provided below. It is configured with a rule that makes all com.digitalroute related metrics available in Kubernetes. To find the URL and port, you can use the export commands shown in the Verify the Prometheus Installation steps above.

Note!

If you have installed your Prometheus server in its own namespace, make sure that the Prometheus adapter has access to your Prometheus server.

prom-adapter-values.yaml
prometheus:
  url: http://192.168.52.26
  port: 31010
logLevel: 6
rules:
  custom:
  - seriesQuery: '{__name__=~"^com_digitalroute.*"}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      matches: ^(.*)
      as: ""
    metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
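If the adapter runs in the same cluster as the Prometheus server, you can also point it at the server through the cluster-internal service instead of a node IP. A sketch, assuming the prometheus-server service created by the chart, the prometheus namespace from the examples, and the chart's default service port 80:

```yaml
prometheus:
  url: http://prometheus-server.prometheus.svc
  port: 80
```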


Installing the Prometheus adapter uses the same helm repository that you added when installing the Prometheus server. Use this helm install command for the Prometheus adapter with the prom-adapter-values.yaml configuration file:

helm install -n <namespace> prometheus-adapter prometheus-community/prometheus-adapter -f <Prometheus adapter configuration file>

Example: helm install Prometheus adapter

helm install -n mznamespace prometheus-adapter prometheus-community/prometheus-adapter -f prom-adapter-values.yaml
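After the adapter has started and applied its rules (this can take a minute or so), you can verify that it publishes metrics through the Kubernetes custom metrics API:

```shell
# Lists the metrics that the adapter currently publishes.
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"
```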