forked from open-telemetry/opentelemetry-collector-contrib
Split by dimensions extension #1
Merged: celian-garcia merged 1 commit into feature/azuremonitorreceiver/use-batch-api-merge-and-revert from feat/split-extension on Feb 14, 2025.
Conversation
Force-pushed: 9408d1c to 8fc66fd, then 489f033 to edf2096, then edf2096 to 7bea5ad.

…mension (Signed-off-by: Célian Garcia <[email protected]>)
Merged commit 50ac164 into feature/azuremonitorreceiver/use-batch-api-merge-and-revert (137 of 153 checks passed).
celian-garcia pushed a commit that referenced this pull request on Mar 8, 2025:
…ry#38146)

#### Description
This PR adds support for configuring metric temporality (delta or cumulative) in the telemetrygen tool. This allows users to generate metrics with different temporality types, which is particularly useful for testing different metric collection scenarios.

##### Changes
- Added a `temporalityType` flag for the metrics pipeline that accepts the values `delta` or `cumulative`

#### Link to tracking issue
Fixes open-telemetry#38073

#### Testing
Testing was performed by setting up a simple collector with an `otlp` receiver and a `debug` exporter.

##### Sum Datapoint
```
Resource SchemaURL: https://opentelemetry.io/schemas/1.13.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope
Metric #0
Descriptor:
     -> Name: gen
     -> Description:
     -> Unit:
     -> DataType: Histogram
     -> AggregationTemporality: Delta
```

##### Histogram Datapoint
```
Count: 0
Sum: 3940.000000
ExplicitBounds #0: 0.000000
ExplicitBounds #1: 5.000000
ExplicitBounds #2: 10.000000
ExplicitBounds #3: 25.000000
ExplicitBounds #4: 50.000000
ExplicitBounds #5: 75.000000
ExplicitBounds #6: 100.000000
ExplicitBounds #7: 250.000000
ExplicitBounds #8: 500.000000
ExplicitBounds #9: 750.000000
ExplicitBounds #10: 1000.000000
ExplicitBounds #11: 2500.000000
ExplicitBounds #12: 5000.000000
ExplicitBounds #13: 7500.000000
ExplicitBounds #14: 10000.000000
Buckets #0, Count: 0
Buckets #1, Count: 0
Buckets #2, Count: 1
Buckets #3, Count: 0
Buckets #4, Count: 0
Buckets #5, Count: 0
Buckets #6, Count: 3
Buckets #7, Count: 4
Buckets #8, Count: 1
Buckets #9, Count: 1
Buckets #10, Count: 0
Buckets #11, Count: 0
Buckets #12, Count: 0
Buckets #13, Count: 0
Buckets #14, Count: 0
```

- [x] Need to add tests for the config

---------

Signed-off-by: Nikos Angelopoulos <[email protected]>
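The flag handling described in the commit above can be sketched in a few lines. This is an illustrative sketch, not the telemetrygen source: the helper name `parse_temporality` is invented; the flag name `temporalityType` and its accepted values come from the commit message, and the enum values mirror the OTLP `AggregationTemporality` proto (DELTA = 1, CUMULATIVE = 2).

```python
# Hypothetical sketch of validating a temporality flag value and mapping
# it to the OTLP AggregationTemporality enum (DELTA = 1, CUMULATIVE = 2).
OTLP_AGGREGATION_TEMPORALITY = {"delta": 1, "cumulative": 2}

def parse_temporality(value: str) -> int:
    """Return the OTLP enum value for a temporality flag; raise on bad input."""
    try:
        return OTLP_AGGREGATION_TEMPORALITY[value.strip().lower()]
    except KeyError:
        raise ValueError(
            f"temporalityType must be 'delta' or 'cumulative', got {value!r}"
        )
```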
celian-garcia pushed a commit that referenced this pull request on Mar 18, 2025:
…-telemetry#38500)

#### Description
Breaking change. Overhaul of document routing. The new document routing logic:

```
Documents are statically or dynamically routed to the target index / data stream in the following order. The first routing mode that applies will be used.

1. "Static mode": Route to `logs_index` for log records, `metrics_index` for data points and `traces_index` for spans, if these configs are not empty respectively. [^3]
2. "Dynamic - Index attribute mode": Route to the index name specified in the `elasticsearch.index` attribute (precedence: log record / data point / span attribute > scope attribute > resource attribute) if the attribute exists. [^3]
3. "Dynamic - Data stream routing mode": Route to a data stream constructed from `${data_stream.type}-${data_stream.dataset}-${data_stream.namespace}`, where `data_stream.type` is `logs` for log records, `metrics` for data points, and `traces` for spans, and is static. [^3] In a special case with `mapping::mode: bodymap`, the `data_stream.type` field (valid values: `logs`, `metrics`) can be dynamically set from attributes. The resulting documents will contain the corresponding `data_stream.*` fields; see restrictions applied to [Data Stream Fields](https://www.elastic.co/guide/en/ecs/current/ecs-data_stream.html).
   1. `data_stream.dataset` or `data_stream.namespace` in attributes (precedence: log record / data point / span attribute > scope attribute > resource attribute)
   2. Otherwise, if the scope name matches the regex `/receiver/(\w*receiver)`, `data_stream.dataset` will be capture group #1
   3. Otherwise, `data_stream.dataset` falls back to `generic` and `data_stream.namespace` falls back to `default`.
```

```
In OTel mapping mode (`mapping::mode: otel`), there is special handling in addition to the above document routing rules in [Elasticsearch document routing](#elasticsearch-document-routing). The order to determine the routing mode is the same as [Elasticsearch document routing](#elasticsearch-document-routing).

1. "Static mode": Span events are separate documents routed to `logs_index` if non-empty.
2. "Dynamic - Index attribute mode": Span events are separate documents routed using the attribute `elasticsearch.index` (precedence: span event attribute > scope attribute > resource attribute) if the attribute exists.
3. "Dynamic - Data stream routing mode":
   - For all documents, `data_stream.dataset` will always be appended with `.otel`.
   - As a special case of (3)(1) in [Elasticsearch document routing](#elasticsearch-document-routing), span events are separate documents that have `data_stream.type: logs` and are routed using data stream attributes (precedence: span event attribute > scope attribute > resource attribute)
```

Effective changes:
- Deprecate the `{logs,metrics,traces}_dynamic_index` config and make it a no-op
- Raise a config validation error when `{logs,metrics,traces}_dynamic_index::enabled` and `{logs,metrics,traces}_index` are set at the same time, as users who rely on dynamic index should not set `{logs,metrics,traces}_index`.
- Remove `elasticsearch.index.{prefix,suffix}` handling. Replace it with `elasticsearch.index` handling that uses the attribute value as the index directly. Users relying on the previously supported `elasticsearch.index.prefix` and `elasticsearch.index.suffix` should migrate to a transform processor that sets `elasticsearch.index`.
- Fix a bug where receiver-based routing overwrites `data_stream.dataset`.

Should be released together with open-telemetry#38458

#### Link to tracking issue
Fixes open-telemetry#38361

#### Testing

#### Documentation

---------

Co-authored-by: Andrzej Stencel <[email protected]>
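For users migrating off the removed `elasticsearch.index.{prefix,suffix}` handling, a transform-processor replacement could look roughly like the sketch below. This is a hedged example, not taken from the PR: the processor key, pipeline wiring, and the index value `my-prefix-logs-my-suffix` are invented placeholders; only the target attribute `elasticsearch.index` comes from the commit message.

```yaml
# Hypothetical migration sketch: set elasticsearch.index directly via the
# transform processor instead of relying on the removed prefix/suffix config.
processors:
  transform:
    log_statements:
      - context: log
        statements:
          # Placeholder index name; previously produced by
          # elasticsearch.index.prefix + <index> + elasticsearch.index.suffix.
          - set(attributes["elasticsearch.index"], "my-prefix-logs-my-suffix")
```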
celian-garcia pushed a commit that referenced this pull request on Jul 2, 2025:
…elemetry#40559)

#### Description
Prometheus remote write exporter: add extra logs for the RW2 code path.

#### Link to tracking issue
Partially implements open-telemetry#33661 (when merging this PR please don't close the tracking issue)

#### Testing
* [x] e2e ran with Prometheus locally

Sample of the logs running against both Prometheus `v3.4.1` and `v2.53.1` (which doesn't support RW2):

```
2025-06-09T17:15:44.390+0200 info [email protected]/server.go:886 [core] [Server #1 ListenSocket #2]ListenSocket created {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "grpc_log": true}
2025-06-09T17:15:52.173+0200 warn prometheusremotewriteexporter/exporter.go:420 X-Prometheus-Remote-Write-Samples-Written header is missing from the response, suggesting that the endpoint doesn't support RW2. {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "otelcol.component.id": "prometheusremotewrite/prometheus", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "url": "http://localhost:9091/api/v1/write"}
2025-06-09T17:15:53.968+0200 warn prometheusremotewriteexporter/exporter.go:420 X-Prometheus-Remote-Write-Samples-Written header is missing from the response, suggesting that the endpoint doesn't support RW2. {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "otelcol.component.id": "prometheusremotewrite/prometheus", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "url": "http://localhost:9091/api/v1/write"}
2025-06-09T17:15:57.164+0200 warn prometheusremotewriteexporter/exporter.go:420 X-Prometheus-Remote-Write-Samples-Written header is missing from the response, suggesting that the endpoint doesn't support RW2. {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "otelcol.component.id": "prometheusremotewrite/prometheus", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "url": "http://localhost:9091/api/v1/write"}
2025-06-09T17:15:58.966+0200 warn prometheusremotewriteexporter/exporter.go:420 X-Prometheus-Remote-Write-Samples-Written header is missing from the response, suggesting that the endpoint doesn't support RW2. {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "otelcol.component.id": "prometheusremotewrite/prometheus", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "url": "http://localhost:9091/api/v1/write"}
2025-06-09T17:16:02.164+0200 debug prometheusremotewriteexporter/exporter.go:425 X-Prometheus-Remote-Write-Samples-Written {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "otelcol.component.id": "prometheusremotewrite/prometheus", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "samples_written": "22"}
2025-06-09T17:16:02.164+0200 debug prometheusremotewriteexporter/exporter.go:428 X-Prometheus-Remote-Write-Histograms-Written {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "otelcol.component.id": "prometheusremotewrite/prometheus", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "histograms_written": "0"}
2025-06-09T17:16:02.164+0200 debug prometheusremotewriteexporter/exporter.go:431 X-Prometheus-Remote-Write-Exemplars-Written {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "otelcol.component.id": "prometheusremotewrite/prometheus", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "exemplars_written": "0"}
2025-06-09T17:16:03.967+0200 debug prometheusremotewriteexporter/exporter.go:425 X-Prometheus-Remote-Write-Samples-Written {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "otelcol.component.id": "prometheusremotewrite/prometheus", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "samples_written": "290"}
2025-06-09T17:16:03.967+0200 debug prometheusremotewriteexporter/exporter.go:428 X-Prometheus-Remote-Write-Histograms-Written {"resource": {"service.instance.id": "383f7fdd-08e5-4e4c-85fe-c1fe3002d1ba", "service.name": "otelcontribcol", "service.version": "0.127.0-dev"}, "otelcol.component.id": "prometheusremotewrite/prometheus", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "histograms_written": "0"}
```

cc @krajorama @ywwg
celian-garcia pushed a commit that referenced this pull request on Sep 29, 2025:
…b.uid (open-telemetry#42641)

#### Description
This PR adds support to expose `k8s.cronjob.uid` as resource metadata when a `Job` is owned by a `CronJob`.

#### Link to tracking issue
Fixes open-telemetry#42557

#### Testing
Tested locally with `telemetrygen` and it is working as expected.

```
[pod/k8sevents-receiver-opentelemetry-collector-6fd9966559-brlb6/opentelemetry-collector] {"level":"debug","ts":"2025-09-11T16:29:11.588Z","caller":"[email protected]/processor.go:159","msg":"getting the pod","resource":{"service.instance.id":"9631e38b-aec3-439f-8178-d96fc8368e1e","service.name":"otelcontribcol","service.version":"0.135.0-dev"},"otelcol.component.id":"k8sattributes","otelcol.component.kind":"processor","otelcol.pipeline.id":"traces","otelcol.signal":"traces","pod":{"Name":"otel-log-cronjob-29293469-lw97x","Address":"10.244.0.70","PodUID":"7960681c-5a24-4287-8bea-e2cf506500ee","Attributes":{"k8s.cronjob.name":"otel-log-cronjob","k8s.cronjob.uid":"082b1c42-e393-46bc-9d51-b20a3700d1ab","k8s.job.name":"otel-log-cronjob-29293469","k8s.job.uid":"fbd853b8-7f63-44d8-ace1-8b48c89e3041"},"StartTime":"2025-09-11T16:29:00Z","Ignore":false,"Namespace":"default","NodeName":"","DeploymentUID":"","StatefulSetUID":"","DaemonSetUID":"","JobUID":"fbd853b8-7f63-44d8-ace1-8b48c89e3041","HostNetwork":false,"Containers":{"ByID":null,"ByName":null},"DeletedAt":"0001-01-01T00:00:00Z"}}
[pod/k8sevents-receiver-opentelemetry-collector-6fd9966559-brlb6/opentelemetry-collector] {"level":"info","ts":"2025-09-11T16:29:11.588Z","msg":"Traces","resource":{"service.instance.id":"9631e38b-aec3-439f-8178-d96fc8368e1e","service.name":"otelcontribcol","service.version":"0.135.0-dev"},"otelcol.component.id":"debug","otelcol.component.kind":"exporter","otelcol.signal":"traces","resource spans":1,"spans":2}
[pod/k8sevents-receiver-opentelemetry-collector-6fd9966559-brlb6/opentelemetry-collector] {"level":"info","ts":"2025-09-11T16:29:11.588Z","msg":"ResourceSpans #0\nResource SchemaURL: https://opentelemetry.io/schemas/1.4.0\nResource attributes:\n -> k8s.container.name: Str(telemetrygen)\n -> service.name: Str(telemetrygen)\n -> k8s.pod.ip: Str(10.244.0.70)\n -> k8s.cronjob.name: Str(otel-log-cronjob)\n -> k8s.cronjob.uid: Str(082b1c42-e393-46bc-9d51-b20a3700d1ab)\n -> k8s.job.uid: Str(fbd853b8-7f63-44d8-ace1-8b48c89e3041)\n -> k8s.job.name: Str(otel-log-cronjob-29293469)\nScopeSpans #0\nScopeSpans SchemaURL: \nInstrumentationScope telemetrygen \nSpan #0\n Trace ID : 3c7381c14a37814676b00a7d961cb219\n Parent ID : 4f8780d5148a9c1c\n ID : 17e9da9533dc93ca\n Name : okey-dokey-0\n Kind : Server\n Start time : 2025-09-11 16:29:09.583785469 +0000 UTC\n End time : 2025-09-11 16:29:09.583908469 +0000 UTC\n Status code : Unset\n Status message : \nAttributes:\n -> net.peer.ip: Str(1.2.3.4)\n -> peer.service: Str(telemetrygen-client)\nSpan #1\n Trace ID : 3c7381c14a37814676b00a7d961cb219\n Parent ID : \n ID : 4f8780d5148a9c1c\n Name : lets-go\n Kind : Client\n Start time : 2025-09-11 16:29:09.583785469 +0000 UTC\n End time : 2025-09-11 16:29:09.583908469 +0000 UTC\n Status code : Unset\n Status message : \nAttributes:\n -> net.peer.ip: Str(1.2.3.4)\n -> peer.service: Str(telemetrygen-server)\n","resource":{"service.instance.id":"9631e38b-aec3-439f-8178-d96fc8368e1e","service.name":"otelcontribcol","service.version":"0.135.0-dev"},"otelcol.component.id":"debug","otelcol.component.kind":"exporter","otelcol.signal":"traces"}
```

Also added tests to guarantee the proper functionality.

---------

Signed-off-by: Paulo Dias <[email protected]>
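The ownership lookup that the feature above relies on can be sketched in plain Python. This is an illustrative sketch, not the k8sattributes processor source: the function `cronjob_attributes` and the plain-dict owner-reference shape are invented for the example; the attribute keys `k8s.cronjob.name` and `k8s.cronjob.uid` come from the commit message.

```python
# Hypothetical sketch: derive k8s.cronjob.* resource attributes from a
# Job's ownerReferences when the Job is owned by a CronJob.
def cronjob_attributes(owner_references: list) -> dict:
    """Return k8s.cronjob.name/uid attributes if a CronJob owner exists,
    otherwise an empty dict."""
    for ref in owner_references:
        if ref.get("kind") == "CronJob":
            return {
                "k8s.cronjob.name": ref.get("name", ""),
                "k8s.cronjob.uid": ref.get("uid", ""),
            }
    return {}
```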
#### Description
This PR is a cherry-pick of the upstream's original split-by-dimension PR, loaded into our fork so that we can benefit from it.

Currently we only have a global opt-in/opt-out, but the new dimensions config allows fine-grained tuning per resource type. This is useful for the problem mentioned in the README.
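To make the per-resource-type tuning concrete, a hedged sketch of what such a config could look like is below. The field names (`dimensions`, `enabled`, `overrides`) are assumptions based on the upstream PR, and the resource type and metric name are placeholders, not tested values.

```yaml
receivers:
  azuremonitor:
    dimensions:
      # Global opt-in/opt-out, as before.
      enabled: true
      # Assumed shape: per resource type, restrict the dimensions used to
      # split a given metric (an empty list = do not split that metric).
      overrides:
        "Microsoft.Network/azureFirewalls":
          "Throughput": []
```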
#### Link to tracking issue
Fixes
#### Testing
I'd like to test the with/without split locally across all the resources, so that I can isolate the resource types causing issues. However, getting metrics for all resource types from all subscriptions is too demanding for my laptop, and the results are inconsistent across runs. Instead, I will activate the feature resource type by resource type in deployed test phases and progressively build the list of impacted metrics.

#### Documentation