[Bug] lvm-llama-vision and lvm-llama-vision-guard failed processing requests #1137
Closed
Priority
Undecided
OS type
Ubuntu
Hardware type
Xeon-GNR
Installation method
- Pull docker images from hub.docker.com
- Build docker images from source
Deploy method
- Docker compose
- Docker
- Kubernetes
- Helm
Running nodes
Single Node
What's the version?
git commit 1cc4d21
Description
Following the lvm-llama-vision README, testing the lvm-llama-vision or lvm-llama-vision-guard service fails, with the docker logs as listed below.
Reproduce steps
Follow the lvm-llama-vision README.
Raw log
/home/user/.local/lib/python3.10/site-packages/pydantic/_internal/_fields.py:132: UserWarning: Field "model_name_or_path" in Audio2TextDoc has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
[2025-01-13 12:36:42,740] [ INFO] - Base service - CORS is enabled.
[2025-01-13 12:36:42,741] [ INFO] - Base service - Setting up HTTP server
[2025-01-13 12:36:42,741] [ INFO] - Base service - Uvicorn server setup on port 9399
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:9399 (Press CTRL+C to quit)
[2025-01-13 12:36:42,752] [ INFO] - Base service - HTTP server setup successful
INFO: 192.168.103.224:58408 - "POST /v1/lvm HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/user/.local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/user/.local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/home/user/.local/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/user/.local/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/user/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/home/user/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
... ...
... ...
File "/home/user/.local/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/home/user/.local/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
File "/home/user/comps/lvms/src/integrations/dependency/llama-vision/lvm.py", line 74, in lvm
initialize()
File "/home/user/comps/lvms/src/integrations/dependency/llama-vision/lvm.py", line 40, in initialize
model = AutoModelForVision2Seq.from_pretrained(
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3372, in from_pretrained
raise ImportError(
ImportError: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install 'accelerate>=0.26.0'`
Attachments
No response
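For context, the ImportError above comes from a dependency guard in transformers: calling `from_pretrained` with `low_cpu_mem_usage=True` or a `device_map` requires the `accelerate` package, which appears to be missing from the container image. A minimal sketch of that guard (`check_accelerate_for` is a hypothetical helper, not the actual transformers code):

```python
import importlib.util


def check_accelerate_for(kwargs):
    """Hypothetical sketch: raise ImportError, as transformers does, when
    from_pretrained kwargs require the `accelerate` package but it is
    not installed."""
    needs_accelerate = (
        kwargs.get("low_cpu_mem_usage") or kwargs.get("device_map") is not None
    )
    if needs_accelerate and importlib.util.find_spec("accelerate") is None:
        raise ImportError(
            "Using `low_cpu_mem_usage=True` or a `device_map` requires "
            "Accelerate: `pip install 'accelerate>=0.26.0'`"
        )


# Neither option set: no accelerate needed, so this never raises.
check_accelerate_for({})
```

Assuming this is the cause, installing `accelerate>=0.26.0` in the image (or adding it to the service's requirements) should let `AutoModelForVision2Seq.from_pretrained` proceed.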