
Conversation

@cyrille-leclerc
Member

@cyrille-leclerc cyrille-leclerc commented Oct 2, 2025

Use the OpenTelemetry Kubernetes Operator (aka OTel Operator) through the OpenTelemetry Kube Stack Helm Chart to manage OTel Collectors.

Note that we don't use the OTel Operator and its Instrumentation CRD to inject the OTel SDK configuration, because the Helm Chart cannot wait for the OTel Operator to be up before creating the OTel Demo components that need to be configured.

Noteworthy:

  • The OTel Demo components are inconsistent in their use of OTLP gRPC and http/protobuf, so both the gRPC and http/protobuf endpoint configurations must be injected (see the sketch after this list).
  • Important limitation on Docker Desktop Kubernetes: file system mount limitations require disabling pod log and host metrics scraping.
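
For illustration, a minimal sketch of the two endpoint configurations expressed as standard OTel SDK environment variables; the otel-collector service name and the 4317/4318 ports are assumptions, not necessarily what this chart renders:

# Illustrative only: OTLP exporter env vars for a gRPC-speaking component.
- name: OTEL_EXPORTER_OTLP_PROTOCOL
  value: grpc
- name: OTEL_EXPORTER_OTLP_ENDPOINT
  value: http://otel-collector:4317        # assumed collector service, gRPC port
# An http/protobuf-speaking component would instead get:
# - name: OTEL_EXPORTER_OTLP_PROTOCOL
#   value: http/protobuf
# - name: OTEL_EXPORTER_OTLP_ENDPOINT
#   value: http://otel-collector:4318      # assumed HTTP port; the SDK appends /v1/<signal>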

Pending work:

  • Fix the problem with the OTel Collector receiver creator httpcheck/frontend-proxy.
  • Decide whether to enable the hostmetrics and filelog receivers by default, as they don't work on Docker Desktop Kubernetes.

Successfully tested on

  • Docker Desktop Mac Kubernetes: cpu_limit=8, memory_limit=8GB, swap=1GB
  • DigitalOcean Kubernetes: 2 nodes with 4 vCPUs and 8GB each

FYI @rogercoll

APM dashboard

# Conflicts:
#	charts/opentelemetry-demo/Chart.lock
#	charts/opentelemetry-demo/Chart.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/component.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/component.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/grafana/configmap-dashboard-provider.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/grafana/deployment.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/opentelemetry-collector/clusterrole.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/opentelemetry-collector/clusterrolebinding.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/opentelemetry-collector/configmap-agent.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/opentelemetry-collector/daemonset.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/opentelemetry-collector/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/component.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/grafana/configmap-dashboard-provider.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/grafana/deployment.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/opentelemetry-collector/clusterrole.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/opentelemetry-collector/clusterrolebinding.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/opentelemetry-collector/configmap.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/opentelemetry-collector/deployment.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/opentelemetry-collector/service.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/opentelemetry-collector/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/default/rendered/component.yaml
#	charts/opentelemetry-demo/examples/default/rendered/grafana/configmap-dashboard-provider.yaml
#	charts/opentelemetry-demo/examples/default/rendered/grafana/deployment.yaml
#	charts/opentelemetry-demo/examples/default/rendered/opentelemetry-collector/clusterrole.yaml
#	charts/opentelemetry-demo/examples/default/rendered/opentelemetry-collector/clusterrolebinding.yaml
#	charts/opentelemetry-demo/examples/default/rendered/opentelemetry-collector/configmap.yaml
#	charts/opentelemetry-demo/examples/default/rendered/opentelemetry-collector/deployment.yaml
#	charts/opentelemetry-demo/examples/default/rendered/opentelemetry-collector/service.yaml
#	charts/opentelemetry-demo/examples/default/rendered/opentelemetry-collector/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/component.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/grafana/configmap-dashboard-provider.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/grafana/deployment.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/opentelemetry-collector/clusterrole.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/opentelemetry-collector/clusterrolebinding.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/opentelemetry-collector/configmap-agent.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/opentelemetry-collector/daemonset.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/opentelemetry-collector/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/component.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/grafana/configmap-dashboard-provider.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/grafana/deployment.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/opentelemetry-collector/clusterrole.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/opentelemetry-collector/clusterrolebinding.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/opentelemetry-collector/configmap.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/opentelemetry-collector/deployment.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/opentelemetry-collector/ingress.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/opentelemetry-collector/service.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/opentelemetry-collector/serviceaccount.yaml
#	charts/opentelemetry-demo/values.yaml
# Conflicts:
#	charts/opentelemetry-demo/Chart.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/component.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/component.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/component.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/default/rendered/component.yaml
#	charts/opentelemetry-demo/examples/default/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/default/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/default/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/default/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/component.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/component.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/serviceaccount.yaml
Contributor

@rogercoll rogercoll left a comment

Could you share more context on why we should use auto-instrumentation for the otel-demo services? The otel-demo services are already configured with their corresponding OTel SDKs; what would be the use case of dynamically injecting another SDK?

An alternative would be using the kube-stack Helm Chart to deploy the collectors and creating a new k8s-only uninstrumented (no SDK) service in the otel-demo for the auto-instrumentation use case.

@cyrille-leclerc
Member Author

cyrille-leclerc commented Oct 8, 2025

Could you share more context on why we should use auto-instrumentation for the otel-demo services? The otel-demo services are already configured with their corresponding OTel SDKs; what would be the use case of dynamically injecting another SDK?

+1, the otel-demo services are instrumented with OTel SDKs today. However, I think it's valuable to demo the OTel Operator Instrumentation CRD with the inject-sdk pod annotation (a minimal sketch follows the list below):

  • Show the best practices and, hopefully soon, stop bundling the OTel SDK in the container images of the otel-demo services.
  • Get the benefit of the OTel Operator injecting the SDK config through the OTEL_* env vars, particularly resource attributes, with the following benefits:
    • For OTel practitioners, get their telemetry fully enriched and compliant with OTel specs such as "Specify resource attributes using Kubernetes annotations". Manually setting resource attributes on K8s is very error-prone.
    • For the OTel project, to verify that the OTel Operator Instrumentation CRD doesn't forget anything. For example, I discovered through this PR that most services of the demo set a wrong host.name value, using the k8s.pod.name value instead. I guess it's because the OTel SDK HostNameProvider is used, which doesn't return the desired value in containers.
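
For context, a minimal sketch (with hypothetical names; this is not what the PR ships) of an Instrumentation resource plus the pod annotations that would drive SDK injection and annotation-based resource attributes:

# Illustrative sketch only.
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: otel-demo                             # hypothetical name
spec:
  exporter:
    endpoint: http://otel-collector:4317      # assumed collector service
---
# Pod template metadata on a demo workload:
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-sdk: "true"    # ask the operator to inject the SDK config
    resource.opentelemetry.io/team.name: "shop"             # hypothetical resource attribute via annotation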

An alternative would be using the kube-stack Helm Chart to deploy the collectors and creating a new k8s-only uninstrumented (no SDK) service in the otel-demo for the auto-instrumentation use case.

+1 I see it as a subsequent milestone.

@cyrille-leclerc cyrille-leclerc changed the title [otel-demo] Use otel-operator to inject OTel configuration and manage OTel Collectors [otel-demo] Use otel-kube-stack helm chart and otel-operator to inject OTel configuration and manage OTel Collectors Oct 21, 2025
… config because Helm can't ensure the OTel Operator is up before instantiating demo components.
@cyrille-leclerc
Member Author

cyrille-leclerc commented Oct 22, 2025

FYI, I stopped using the OTel Operator to inject the OTel SDK config because the Helm Chart cannot wait for the OTel Operator to be up before creating the OTel Demo components that need to be configured.
I reverted to the principle of configuring OTel SDKs through environment variables passed by the Helm Chart.
I kept instrumentation.opentelemetry.io/inject-sdk: "false" for documentation purposes (see the sketch below).
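
For illustration, a sketch of the resulting pod template, assuming a hypothetical component; the annotation is kept purely as documentation while the env vars come from the chart:

# Illustrative sketch only.
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-sdk: "false"   # documents that the operator does not inject the SDK config
spec:
  containers:
    - name: frontend                                       # hypothetical component
      env:
        - name: OTEL_SERVICE_NAME
          value: frontend
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otel-collector:4317                # assumed collector service and port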

@@ -1,1994 +0,0 @@
---
Member Author

@puckpuck you are the author of this example, can you confirm it's ok to remove it now that we use the daemonset deployment by default?
ref:

@@ -2,7 +2,7 @@ components:
   frontend:
     envOverrides:
       - name: PUBLIC_OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
-        value: https://otel-demo-collector.example.com/v1/traces
+        value: https://otel-demo.example.com/otlp-http/v1/traces
Member Author

@puckpuck you are the author of this example. I couldn't find a way with the otel-kube-stack helm chart to define an ingress for a collector as you did in the past; I guess it would be a gateway collector.

Can you please check whether the solution in this PR to channel public OTLP/HTTP traffic through the frontend-proxy would work?
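
For reference, one way the public OTLP/HTTP path could be exposed is an Ingress rule that routes the /otlp-http prefix to the frontend-proxy; this is an illustrative sketch, and the host, service name, and port are assumptions:

# Illustrative only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: otel-demo                       # hypothetical
spec:
  rules:
    - host: otel-demo.example.com
      http:
        paths:
          - path: /otlp-http
            pathType: Prefix
            backend:
              service:
                name: frontend-proxy    # the demo's Envoy-based proxy
                port:
                  number: 8080          # assumed port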

ref:

@@ -0,0 +1,92 @@
components:
Member Author

@puckpuck I replaced the "custom-environment-variables" example you created, and that I like very much, with "custom-resource-attributes", injecting the team name through a pod annotation that gets handled by the K8s attributes processor and, soon, by the OTel Operator generated config. Is that ok for you? (Sketch below.)
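
For illustration, a sketch of how the k8sattributes processor can promote such a pod annotation to a resource attribute; the annotation key and attribute name are assumptions:

# Illustrative sketch: map a pod annotation to a resource attribute.
processors:
  k8sattributes:
    extract:
      annotations:
        - tag_name: team.name                         # hypothetical resource attribute
          key: resource.opentelemetry.io/team.name    # hypothetical pod annotation key
          from: pod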

daemon:
  config:
    exporters:
      otlphttp/bring-your-own-o11y:
Member Author

@julianocosta89 @puckpuck can you please check that this "bring your own o11y" solution works for the use cases that come to mind? A fuller sketch follows.
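
To make the snippet above concrete, a minimal sketch of the exporter and pipeline wiring; the backend endpoint and header are placeholders, not values shipped by this PR:

# Illustrative sketch of the "bring your own o11y" exporter wiring.
daemon:
  config:
    exporters:
      otlphttp/bring-your-own-o11y:
        endpoint: https://otlp.my-backend.example.com      # placeholder backend
        # headers:
        #   Authorization: "Bearer ${env:MY_BACKEND_TOKEN}" # if the backend requires auth
    service:
      pipelines:
        traces:
          exporters: [otlphttp/bring-your-own-o11y]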

# Conflicts:
#	charts/opentelemetry-demo/Chart.lock
#	charts/opentelemetry-demo/Chart.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/component.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/grafana/clusterrole.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/grafana/configmap-dashboard-provider.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/grafana/role.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/grafana/service.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/component.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/grafana/clusterrolebinding.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/grafana/configmap.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/grafana/deployment.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/grafana/rolebinding.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/grafana/secret.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/grafana/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attribute/rendered/component.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attribute/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attribute/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attribute/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attribute/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/default/rendered/component.yaml
#	charts/opentelemetry-demo/examples/default/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/default/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/default/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/default/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/component.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/component.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/serviceaccount.yaml
@cyrille-leclerc cyrille-leclerc marked this pull request as ready for review October 24, 2025 09:15
@TylerHelmuth
Member

Introducing the Operator to the demo chart adds significant complexity. The Operator solves 3 primary use cases:

  1. inject auto-instrumentation
  2. inject sidecar collectors
  3. manage prometheus endpoints via the target allocator

I don't believe the Demo has any of these needs. Additionally, including the Operator would misalign the k8s version of the Demo from the Docker version of the demo. Using the Operator to manage Collectors is not given as a best practice, so managing the Demo's Collectors via Helm is acceptable.

@julianocosta89 is using the Operator in the k8s version of the Demo aligned with the Demo SIG's goals?

@julianocosta89
Member

Introducing the Operator to the demo chart adds significant complexity. The Operator solves 3 primary use cases:

  1. inject auto-instrumentation
  2. inject sidecar collectors
  3. manage prometheus endpoints via the target allocator

I don't believe the Demo has any of these needs. Additionally, including the Operator would misalign the k8s version of the Demo from the Docker version of the demo. Using the Operator to manage Collectors is not given as a best practice, so managing the Demo's Collectors via Helm is acceptable.

@julianocosta89 is using the Operator in the k8s version of the Demo aligned with the Demo SIG's goals?

@TylerHelmuth you are correct, none of the 3 primary cases are required for the Demo.

During the SIG meeting we discussed this, and @cyrille-leclerc brought up the following:

The otel-kube-stack is great, soon to be the obvious default to install otel on K8s
* Turnkey setup on K8s
* Auto injection on K8s
* OTel based infra monitoring

Other benefits
* Simpler config of OTel SDKs and collectors

Identified challenges:
* Self-signed certs can be fragile
* Host file access doesn’t work (Host Metrics & pod logs) on Docker Desktop Mac

IMO it does bring an extra layer of complexity to the demo, but if that's the recommended approach for K8s deployments, I'm in.
My main concern (and I also brought that up during the meeting) is that all services in the Demo are already instrumented; I'm not sure how the auto-injection will work out.

@TylerHelmuth
Member

but if that's the recommended approach for K8s deployments, I'm in.

The opentelemetry-kube-stack chart is an approach, but not the recommended approach. We have purposefully never recommended an approach, since using the Operator or Helm to manage Collectors in k8s are both valid. The opentelemetry-kube-stack chart is an excellent option if you're already using Prometheus and want to switch to OTel, but we do not recommend it over the opentelemetry-collector chart.

My main concern (and I also brought that up during the meeting) is that all services in the Demo are already instrumented; I'm not sure how the auto-injection will work out.

This can definitely cause issues. I know that the agents of certain languages, like Java, can handle injecting specific instrumentation into a manually instrumented app, but I do not believe that is a universal experience. I typically tell customers of Honeycomb that if they want to manually instrument their code, they should stop injecting auto-instrumentation via the Operator and set up manual instrumentation and any auto-instrumentation libraries via their code directly.

@cyrille-leclerc
Member Author

Thanks for your detailed feedback.

IMO an underrated benefit of the OTel Operator is that it ensures that the resource attributes emitted by SDKs are the same as those of pod logs & metrics, or of external telemetry like eBPF profiles and eBPF instrumentation, aligning them all on the "Specify resource attributes using Kubernetes annotations" spec.

To illustrate, if we adopt semconv resource attributes for pod logs & metrics (see PR #1918), the current examples/kubernetes-infra-monitoring/values.yaml chart will have broken correlation:

  • service.instance.id misaligned:
    • Generated by the SDK in SDK-emitted telemetry
    • Built using concat([k8s.namespace.name, k8s.pod.name, k8s.container.name], '.') in pod logs & metrics (see the sketch after this list)
  • service.name risking misalignment because it comes from different K8s pod labels:
    • Using app.kubernetes.io/component for SDKs (code here)
    • Using app.kubernetes.io/name for pod logs & metrics (code here)
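
To make the misalignment concrete, a sketch (using the transform processor, as one way to express the concat described above) of how pod logs could end up with a service.instance.id that the SDK never emits:

# Illustrative only: equivalent OTTL statement for the concat above.
processors:
  transform/logs:
    log_statements:
      - context: resource
        statements:
          - set(attributes["service.instance.id"], Concat([attributes["k8s.namespace.name"], attributes["k8s.pod.name"], attributes["k8s.container.name"]], "."))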

My main concern (and I also brought that up during the meeting) is that all services in the Demo are already instrumented; I'm not sure how the auto-injection will work out.

I'm with you, we should not use the OTel Operator for injecting SDKs as they are already bundled, even though I would love the demo to showcase injection of the SDK at deployment time for some workloads, as it's a great decoupling of instrumentation in the SDLC.

We have purposefully never recommended an approach, since using the Operator or Helm to manage Collectors in k8s are both valid.

I see OTel Collector features that are available in the kube-stack chart but not in the collector chart.
I'll be happy to list the misalignments and help fix the gaps if the project is interested.

I already proposed the fix:

I think the Collector chart doesn't support presets/clusterMetrics on daemonset deployments using leader election as the kube-stack chart does. I'll be happy to dig in more if there is interest.

@TylerHelmuth
Member

@cyrille-leclerc it's true that the kube-stack chart and collector chart aren't 100% aligned, and I'm up for fixing that. But I meant that, in general, we don't recommend the Operator for managing collectors over the collector chart, because both are valid ways to install collectors in Kubernetes. Using an Operator is not always required.

I'm not sure how installing an Operator fixes resource attributes. Can you go into more detail about that?

@cyrille-leclerc
Member Author

I'm not sure how installing an Operator fixes resource attributes. Can you go into more detail about that?

Please be patient, I'll get back to you as soon as I wrap up an ongoing time-consuming PR.

# Conflicts:
#	charts/opentelemetry-demo/Chart.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/component.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/bring-your-own-observability/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/collector-as-daemonset/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/component.yaml
#	charts/opentelemetry-demo/examples/custom-environment-variables/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attributes/rendered/component.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attributes/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attributes/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attributes/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/custom-resource-attributes/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/default/rendered/component.yaml
#	charts/opentelemetry-demo/examples/default/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/default/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/default/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/default/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/component.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/kubernetes-infra-monitoring/rendered/serviceaccount.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/component.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/flagd-config.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/grafana-config.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/product-catalog-products.yaml
#	charts/opentelemetry-demo/examples/public-hosted-ingress/rendered/serviceaccount.yaml