
Update requirements-hpu.txt for open telemetry tracing support #857

Closed
louie-tsai wants to merge 7 commits into HabanaAI:habana_main from intel-ai-tce:open_telemetry_support

Conversation


@louie-tsai louie-tsai commented Feb 21, 2025

The OPEA project is adding an OpenTelemetry tracing feature in the PR below:
opea-project/GenAIExamples#1488
We have TGI/TEI enabled for OpenTelemetry tracing and would like to enable it for vLLM as well.
The current runtime issue is a missing opentelemetry-api package, so this PR fixes it.

louie-tsai force-pushed the open_telemetry_support branch from 44a0d2e to df648ad on February 22, 2025 07:36

requirements-hpu.txt (reviewed lines):
setuptools>=61
setuptools-scm>=8
vllm-hpu-extension @ git+https://github.com/HabanaAI/vllm-hpu-extension.git@8087a98
opentelemetry-api

michalkuligowski commented Feb 27, 2025


I am not sure we want to add it here; those packages are not needed for other workloads. Also, looking into the vllm project repo, they don't add it to the requirements file either and leave it to the user. Also, please see:

@kzawora-intel can you comment?

louie-tsai (Author)

@michalkuligowski
Both TGI Gaudi and TEI Gaudi have those packages in their requirements.
Here are the related lines for installing those packages in TGI:
https://github.com/huggingface/tgi-gaudi/blob/habana-main/server/requirements.txt#L40

We also saw tracing enabled in OPEA once we set the related OTLP endpoint URL correctly.
opea-project/GenAIExamples#1316

However, vLLM doesn't have those packages, so this is a gap for vLLM Gaudi compared to TGI Gaudi.
It would be good to have that enabled, like TGI Gaudi.

Thanks

michalkuligowski

I understand it's in TGI, but it seems that vLLM decided to do it this way. Also, did you check the otel.md I linked in my previous comment? It shows the requirements for OpenTelemetry, so I think that should suffice.

louie-tsai (Author)

@michalkuligowski
All the other instructions in otel.md can be covered by launch scripts such as a Docker Compose YAML file, but the package installation has to be handled inside the Dockerfile, so otel.md won't help when users deploy vLLM directly into their cluster. Installing those packages won't impact performance: OpenTelemetry won't be enabled without setting the related arguments for the vLLM server.


eero-t commented Mar 24, 2025

IMHO, doing a non-Gaudi change like this in the Gaudi fork seems wrong; it should belong either in the upstream vLLM project or in the OPEA vLLM build.

(The OPEA vLLM build script can easily concatenate the needed modules into the vllm-fork requirements file.)

louie-tsai (Author)

> IMHO, doing a non-Gaudi change like this in the Gaudi fork seems wrong; it should belong either in the upstream vLLM project or in the OPEA vLLM build.
>
> (The OPEA vLLM build script can easily concatenate the needed modules into the vllm-fork requirements file.)

OPEA vLLM uses Dockerfile.hpu directly; there is no other wrapper. Also, this fork maintains its own requirements.txt, so upstream won't help...


eero-t commented Apr 1, 2025

> OPEA vLLM uses Dockerfile.hpu directly; there is no other wrapper. Also, this fork maintains its own requirements.txt, so upstream won't help...

@louie-tsai Adding something like cat ../requirements-otel.txt >> requirements-hpu.txt just before the docker build -f Dockerfile.hpu ... line here should suffice:
https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/vllm/src/build_docker_vllm.sh
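Eero's suggestion can be sketched as below. The requirements-otel.txt file and its location are assumptions for illustration; only opentelemetry-api appears in this PR's diff:

```shell
#!/bin/sh
# Sketch of the suggested build-script change: append the OpenTelemetry
# requirements to the fork's requirements-hpu.txt just before the docker
# build step, instead of changing the fork itself.
set -e

# Hypothetical extra requirements file kept in the OPEA build context
cat > requirements-otel.txt <<'EOF'
opentelemetry-api
EOF

# Stand-in for the vllm-fork's own requirements-hpu.txt
printf 'setuptools>=61\nsetuptools-scm>=8\n' > requirements-hpu.txt

# The one-line change suggested above, run just before:
#   docker build -f Dockerfile.hpu ...
cat requirements-otel.txt >> requirements-hpu.txt

grep -c '^opentelemetry' requirements-hpu.txt   # prints: 1
```

This keeps the OpenTelemetry dependency out of the fork's checked-in requirements while still baking it into the OPEA image.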

@adobrzyn

Hi @louie-tsai, is this PR still needed? Can we close it?

@michalkuligowski

Please re-open if still needed; closing as stale.

4 participants