Description
When using the Prometheus receiver and the Prometheus exporter, I no longer get the job or instance labels that Prometheus adds by default. Is this expected behaviour?
When Prometheus scrapes the exporter, these labels are missing even with honor_labels: true. I have manually checked the metrics the exporter exposes and they lack these labels, so it's not an issue on the Prometheus side. Is there a way to manually add them back with some relabel_configs magic? I have tried, but the relabeling seems to be ignored whenever I touch the job or instance labels (a sketch of what I tried is below).
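For reference, this is roughly the kind of relabeling I tried inside the receiver's scrape config (the source label and the job value here are just placeholders, not what I actually need); the resulting job/instance values still don't show up on the exported metrics:

```yaml
relabel_configs:
  # placeholder: try to force an instance label from service-discovery metadata
  - source_labels: [__meta_kubernetes_pod_name]
    target_label: instance
  # placeholder: try to set a fixed job label
  - target_label: job
    replacement: oc-agent
```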
Eventually I want two exporters here: one that can be scraped locally by Prometheus, and another that forwards on to a second collector (see the sketch after this paragraph). Our default use case is without Prometheus, so scraping Prometheus with the collector isn't an option.
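Roughly what I'm aiming for is something like the following; the second exporter's name and endpoint are illustrative only, not part of my current deployment:

```yaml
exporters:
  prometheus:
    endpoint: "0.0.0.0:9090"            # scraped locally by Prometheus
  opencensus:
    endpoint: "other-collector:55678"   # illustrative: forwards to another collector
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheus, opencensus]
```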
Version: v0.2.3
```yaml
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheus]
exporters:
  prometheus:
    endpoint: "0.0.0.0:9090"
receivers:
  prometheus:
    config:
      global:
        scrape_interval: {{ .Values.scrapeInterval }}
      scrape_configs:
        - job_name: 'pilot'
          kubernetes_sd_configs:
            - role: endpoints
              namespaces:
                names:
                  - {{ .Release.Namespace }}
          relabel_configs:
            - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
              action: keep
              regex: istio-pilot;http-monitoring
        - job_name: 'oc-agent'
          scrape_interval: 5s
          honor_labels: true
          kubernetes_sd_configs:
            - role: endpoints
              namespaces:
                names:
                  - {{ .Release.Namespace }}
          relabel_configs:
            - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
              action: keep
              regex: otel-collector;prometheus
```