
Collector sending otelcol_exporter_queue_size metric on single exporter #10444

@corradomulas

Description


Hi all,
I have the following issue while working with the OTel Collector, and I can't find anything in the docs, or a useful config parameter, to avoid it. Feel free to ask for more details if needed, provided that sharing them is compatible with the reason some information below is redacted. Thanks in advance for your help.

Describe the bug
The otelcol_exporter_queue_size metric is sent to Prometheus for only one exporter instead of for each configured one.

What did you expect to see?
I expect to see a queue-size metric for each configured exporter.

What did you see instead?
I see the aforementioned metric for only the first exporter component initialised by the Collector at startup. Looking at the timeline in Grafana, each time the container restarts, the metric exposes the queue of a different exporter. There are no related errors in the logs, and every exporter is configured the same way, including the one initialised at Collector startup.
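For illustration, with several exporters configured I would expect the self-telemetry endpoint to expose one series per exporter, but only a single one shows up at a time. A sketch of expected vs. actual (exporter names here are hypothetical; the real ones are redacted):

```text
# expected: one series per configured exporter
otelcol_exporter_queue_size{exporter="prometheusremotewrite/one"} 0
otelcol_exporter_queue_size{exporter="prometheusremotewrite/two"} 0

# actual: only the exporter that happened to be initialised first
otelcol_exporter_queue_size{exporter="prometheusremotewrite/one"} 0
```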

What version did you use?
ADOT v0.39.1

What config did you use?

Prometheus receiver config (receiver name and job name redacted):

```yaml
prometheus/:
  config:
    scrape_configs:
      - job_name:
        scrape_interval: 1m
        static_configs:
          - targets:
              - '127.0.0.1:8888'
```

Service config (unrelated parts elided):

```yaml
service:
  [...]
  pipelines:
    metrics/:
      receivers:
        - prometheus/
      exporters:
        - prometheusremotewrite
```
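The pipeline above is shown with a single exporter for brevity; the redacted parts include further exporters configured the same way. For reference, a hypothetical sketch (names and endpoints invented here) of how several named instances of the same exporter type would be declared and wired in:

```yaml
exporters:
  prometheusremotewrite/one:    # hypothetical names; real ones redacted
    endpoint: https://one.example.com/api/v1/write
  prometheusremotewrite/two:
    endpoint: https://two.example.com/api/v1/write

service:
  pipelines:
    metrics/:
      receivers:
        - prometheus/
      exporters:
        - prometheusremotewrite/one
        - prometheusremotewrite/two
```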

Prometheus Remote Write exporter config (endpoint and authenticator redacted):

```yaml
exporters:
  prometheusremotewrite:
    endpoint:
    resource_to_telemetry_conversion:
      enabled: true
    add_metric_suffixes: false
    auth:
      authenticator:
```
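Nothing in the config above touches the exporter's queue, so it runs with defaults. For the prometheusremotewrite exporter the queue is governed by remote_write_queue; spelled out explicitly it would look roughly like this (values are the documented defaults, not taken from the redacted config):

```yaml
exporters:
  prometheusremotewrite:
    remote_write_queue:
      enabled: true      # queueing is on by default
      queue_size: 10000  # max number of items held in the queue
      num_consumers: 5   # workers draining the queue concurrently
```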

Metrics and logs telemetry levels are already set to maximum verbosity; other parts of the config are omitted on purpose.
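Concretely, that verbosity corresponds to a telemetry block along these lines (a sketch of the standard service.telemetry settings, assuming the default :8888 address that the Prometheus receiver above scrapes):

```yaml
service:
  telemetry:
    logs:
      level: debug      # maximum log verbosity
    metrics:
      level: detailed   # maximum metrics verbosity
      address: ':8888'  # self-telemetry endpoint scraped by the receiver above
```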

Environment
OTel Collector Docker container, latest tag

Labels

bug (Something isn't working)
