[WIP] Add custom data type support #817
Conversation
Force-pushed from 9d2a055 to 072b8ed.
| return factory.CreateExporter(ctx, creationParams, dataType, cfg) |
| } |
| return nil, fmt.Errorf("exporter %q does not support data type %q", factoryBase.Type(), dataType) |
I don't think this function is really needed. For metrics and traces it is supposed to be a temporary function to support the old-to-new-style data model migration. What if we don't have the `if factory, ok := factoryBase.(component.DataExporterFactory); ok {` condition? It looks like we would get a compile-time error instead of this runtime one, which seems more useful.
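To make the trade-off concrete, here is a minimal sketch of the pattern under discussion. The interface and function names below are simplified stand-ins, not the collector's real API: a runtime type assertion turns a missing capability into an error at pipeline-build time, whereas requiring the capability interface in the signature would surface it at compile time.

```go
package main

import "fmt"

// Factory is a hypothetical base factory interface, simplified for illustration.
type Factory interface {
	Type() string
}

// DataExporterFactory is the optional capability being type-asserted in the PR.
type DataExporterFactory interface {
	Factory
	CreateExporter(dataType string) (string, error)
}

// legacyFactory implements only the base Factory, so the capability
// check below fails at runtime rather than at compile time.
type legacyFactory struct{}

func (legacyFactory) Type() string { return "legacy" }

// createExporter mirrors the pattern under review: the assertion fails only
// when the pipeline is actually built. Changing the parameter type to
// DataExporterFactory would instead reject legacyFactory at compile time.
func createExporter(f Factory, dataType string) (string, error) {
	if def, ok := f.(DataExporterFactory); ok {
		return def.CreateExporter(dataType)
	}
	return "", fmt.Errorf("exporter %q does not support data type %q", f.Type(), dataType)
}

func main() {
	_, err := createExporter(legacyFactory{}, "logs")
	fmt.Println(err) // exporter "legacy" does not support data type "logs"
}
```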
| } |
| return nil, fmt.Errorf("processor %q does support data type %q", |
| 	cfg.Name(), dataType) |
The same question applies here.
This makes sense and would allow people to support a lot of other use cases. +1 for adding something like this.

I was thinking of doing this a bit differently. What if the core repo did not deal with building exporters, processors, and receivers at all and only built pipeline objects? The service would then just interact with pipeline objects, eliminating the need for the service to know and deal with the data types handled by each pipeline. It can basically just call into the pipeline objects.

Once we have this, anyone should be able to implement a different pipeline that deals with a completely different data type and use it with OTel, just like people build other custom components now. Implementing a custom pipeline would mean duplicating/copying a bit of code, but the flexibility this would bring to the system would be worth it IMO. Also, it should be rare for people to implement custom pipelines, so copying a bit of code (if required) shouldn't make that much of a difference. We can also share a lot of code by using common/base interfaces to pass around components. A pipeline might do some type assertion on its components, but it would happen only once as part of the startup process. This would also allow people to customize other pipeline behavior, like replacing the fanout connector with something else or handling errors in a different way without a processor, etc.

In case it becomes tedious to implement a pipeline, it should be trivial to automate it with code gen. With code gen, I'm sure we can make writing a custom pipeline as simple as the following:

// data_type.go
package myCustomPipeline

type MyCustomDataType struct {
	Name string
}

// gen.go
package myCustomPipeline

//go:generate otel-pipeline-generator -type MyCustomDataType -Name MyCustomPipeline

I'm sure this should be all that is required to build a pipeline with default behaviour that processes custom data types.

TL;DR: Add only one new type/component called Pipeline, re-work the builder package into a default Pipeline component, let people register custom pipelines, and add a code generator to trivialize building custom pipelines.

I just started a basic prototype to see how easily this can be done. Will report back once I have something more tangible.
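To make the "one new Pipeline component plus a registry" idea concrete, here is a minimal sketch. The `Pipeline` interface, `Register` helper, and `myCustomPipeline` type are all hypothetical names invented for illustration; the proposed code generator would presumably emit something like the struct and methods below.

```go
package main

import "fmt"

// Pipeline is the single new component type proposed above: everything the
// service needs sits behind this interface, so the service never has to
// inspect the data types a pipeline handles.
type Pipeline interface {
	Name() string
	Start() error
	Shutdown() error
}

// registry lets custom pipelines be plugged in much like other custom
// components are registered today.
var registry = map[string]Pipeline{}

// Register adds a pipeline implementation under its own name.
func Register(p Pipeline) { registry[p.Name()] = p }

// myCustomPipeline is a stand-in for what the proposed generator would emit
// for MyCustomDataType, with default pipeline behaviour.
type myCustomPipeline struct{}

func (myCustomPipeline) Name() string    { return "my-custom-pipeline" }
func (myCustomPipeline) Start() error    { return nil }
func (myCustomPipeline) Shutdown() error { return nil }

func main() {
	Register(myCustomPipeline{})
	// The service only iterates pipelines; it never sees MyCustomDataType.
	for name := range registry {
		fmt.Println("starting", name)
	}
}
```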
I think this proposal originally came out of the question of how to send metadata from receivers. Later we also discussed having metadata be its own formal data type alongside metrics, traces, and logs. I think that will be important in collector mode, when users want to take metadata from one collector instance inside the firewall, forward it to one outside the firewall, and then send it to the cloud. If it's just part of the OT protocol, then it can use existing receivers/exporters.
pjanotti left a comment:
The approach LGTM @tigrannajaryan
| return componenterror.CombineErrors(errs) |
| } |
| // NewFanOutConnector wraps multiple new type consumers in a single one. |
Suggested change:
- // NewFanOutConnector wraps multiple new type consumers in a single one.
+ // NewFanOutConnector wraps multiple new consumers in a single one.
| var _ consumer.DataConsumer = (*FanOutConnector)(nil) |
| // Consume exports the span data to all consumers wrapped by the current one. |
Suggested change:
- // Consume exports the span data to all consumers wrapped by the current one.
+ // Consume exports the received data to all consumers wrapped by the current one.
| return nil, typeMismatchErr(config, requirement.requiredBy, configmodels.TracesDataType) |
| for dataType, requirement := range inputDataTypes { |
| if dataType == configmodels.TracesDataType { |
nit: a switch looks better in this case
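For illustration, here is roughly what the nit suggests: the if/else chain over the data type collapses into a switch with a default arm for the mismatch error. The constants and the `buildExporter` function below are simplified stand-ins for the `configmodels` names in the diff, not the actual code.

```go
package main

import "fmt"

// DataType stands in for configmodels.DataType in the diff under review.
type DataType string

const (
	TracesDataType  DataType = "traces"
	MetricsDataType DataType = "metrics"
)

// buildExporter shows the reviewer's nit: a switch over the data type reads
// better than a chain of if/else comparisons, and the default arm naturally
// holds the type-mismatch error.
func buildExporter(dt DataType) (string, error) {
	switch dt {
	case TracesDataType:
		return "trace exporter", nil
	case MetricsDataType:
		return "metrics exporter", nil
	default:
		return "", fmt.Errorf("unsupported data type %q", dt)
	}
}

func main() {
	e, _ := buildExporter(MetricsDataType)
	fmt.Println(e) // metrics exporter
}
```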
| } |
| exporter.te = te |
| } else if dataType == configmodels.MetricsDataType { |
| // Metrics data type is required. Create a trace exporter based on config. |
Suggested change:
- // Metrics data type is required. Create a trace exporter based on config.
+ // Metrics data type is required. Create a metrics exporter based on config.
This will allow pipelines of data types other than the built-in ones, traces and metrics. See design.md for a detailed description of how custom data types work.
Force-pushed from 072b8ed to 92e86e9.
Codecov Report
@@ Coverage Diff @@
## master #817 +/- ##
==========================================
- Coverage 85.68% 85.51% -0.18%
==========================================
Files 184 184
Lines 13061 13176 +115
==========================================
+ Hits 11191 11267 +76
- Misses 1419 1458 +39
Partials 451 451
Continue to review full report at Codecov.
Decided to go a different route: #959