Making Metrics Available to Prometheus (4.2)
Metrics
Metrics can be made available for scraping by Prometheus in two ways:
Through the automatic service discovery provided by the Prometheus stack
By using raw scrape configurations
There are three sources of metrics:
Platform pod
Operator pods
EC pod(s)
Making Metrics Available to Prometheus Through the Stack
If the Prometheus stack is deployed in your cluster, this is how the metrics are made available to Prometheus for scraping:
Diagram: Making metrics available to Prometheus
The metrics from the platform and operator pods are automatically discoverable by Prometheus through the PodMonitor and ServiceMonitor resources, which are automatically set up when installing or updating the helm chart.
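For orientation, this is a minimal sketch of what such a PodMonitor resource could look like. The resource name, selector and port name are illustrative only; the actual resources are created by the helm chart and do not need to be written by hand:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: platform                          # illustrative name
  labels:
    prometheus: application-monitoring    # label set via global.metrics.monitor.labels (see below)
spec:
  selector:
    matchLabels:
      app: platform                       # illustrative pod selector
  podMetricsEndpoints:
    - port: metrics                       # illustrative port name
      path: /metrics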
If the Prometheus stack deployment requires matching by labels, the helm value global.metrics.monitor.labels can be used to set the required labels when installing or updating the helm chart.
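As a sketch, assuming the value path maps directly onto a values file, the label could be set like this (the label key and value are the ones used in the example further below):

global:
  metrics:
    monitor:
      labels:
        prometheus: application-monitoring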
Since the metrics from the EC pod(s) are exposed via Prometheus annotations (i.e. prometheus.io/scrape, prometheus.io/port and prometheus.io/path) rather than via PodMonitor/ServiceMonitor resources, Prometheus requires an additional scrape configuration to pick up these metrics.
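For reference, these annotations appear on the EC pod metadata roughly as follows; the port value is illustrative and matches the example further below:

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
    prometheus.io/path: /metrics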
Making Metrics Available to Prometheus by Using Raw Scrape Configurations
If the Prometheus stack is not deployed in your cluster, a raw scrape configuration is used to make Prometheus pick up the platform metrics.
This means that the prometheus.io/scrape, prometheus.io/port and prometheus.io/path annotations need to be applied to the platform pod. These annotations are controlled through the helm chart via the jmx.export.* values and must not be set explicitly. Refer to the helm chart for further details.
Note!
The operator metrics cannot be exposed this way.
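As a sketch, assuming a standalone Prometheus whose prometheus.yml you control directly, the same annotation-based relabelling used in the example below could be placed under scrape_configs to pick up the platform pod:

scrape_configs:
  - job_name: platform
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # use the prometheus.io/path annotation as the metrics path
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # use the prometheus.io/port annotation as the scrape port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__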
Example
This is an example of how metrics from all sources (i.e. the platform pod, the operator pods and all EC pods) can be made available for scraping by Prometheus.
This example is based on the Prometheus stack. Label matching with the label prometheus: application-monitoring is used to match the PodMonitor and ServiceMonitor resources.
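If the prometheus-community chart repository has not been added to your helm client yet, it can be added first (the repository URL is the one published by the prometheus-community project):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update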
Install or upgrade the Prometheus stack with the following command:
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack -f prometheus-values.yaml
Where prometheus-values.yaml contains:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: ec-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # maps to the 'prometheus.io/scrape: "true"' annotation on the EC pod
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          # maps to the 'prometheus.io/path: /metrics' annotation on the EC pod
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          # maps to the 'prometheus.io/port: "9090"' annotation on the EC pod
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          # maps the pod name to the "pod" label on the metric
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: pod
          # maps to all the labels of the EC pod - this is optional and not required for the scraping to happen
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
    podMonitorSelector:
      matchLabels:
        prometheus: application-monitoring
    serviceMonitorSelector:
      matchLabels:
        prometheus: application-monitoring
Proceed with the install or upgrade and make sure to set the helm value global.metrics.monitor.labels.prometheus=application-monitoring.
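For illustration, the value can be passed on the command line during the install or upgrade; the release and chart names below are placeholders, not the actual ones:

helm upgrade --install <release-name> <chart> \
  --set global.metrics.monitor.labels.prometheus=application-monitoring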
When looking in Prometheus under Status → Service Discovery, you will now see something like this:
Prometheus has discovered the EC pods through the ec-pods scrape configuration, the platform pod through the PodMonitor, and the operator pods through the ServiceMonitor.
Note!
You may have to wait for up to 30 seconds for the service discovery to refresh.