Anyone having issues with ROCM or VULKAN? #7071
NexGen-3D-Printing started this conversation in General
Not sure if this is a known issue, but I can't seem to get anything to hit the GPU. I have a 7900 GRE and ROCm is installed properly, but no matter how small the model, it always loads into system RAM.
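For context, the way I'm judging whether a model "hits the GPU" is by watching VRAM on the host while it loads (rocm-smi ships with ROCm; the flag below is from memory, so double-check it):

watch -n 1 rocm-smi --showmeminfo vram   # VRAM stays flat while system RAM climbs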
I tried the following:
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan
Both of these fail to detect a GPU at all; they only detect the CPU.
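To rule out the container simply not seeing the card, I've been checking device visibility from inside the running container (assuming rocminfo and the device nodes are actually present in the hipblas image):

docker exec -ti local-ai ls -l /dev/kfd /dev/dri   # were the device nodes passed through?
docker exec -ti local-ai rocminfo                  # does ROCm inside the container list gfx1100?

I also notice the vulkan command above passes no --device flags at all; my understanding is that Vulkan still needs the render nodes, so a variant worth trying (my assumption, not something I found in the docs) would be:

docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri --group-add=video localai/localai:latest-gpu-vulkan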
Next I tried this one:
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
This one does detect the GPU, and it's detected as AMD, but it doesn't know what type it is.
Under rocminfo in the CLI everything looks correct; I have gfx1100 and gfx1036.
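Since gfx1036 is the Raphael iGPU, I'm wondering whether having two agents is confusing device selection. A variant I plan to try pins the runtime to the discrete card via the standard ROCm environment variables (the device index 0 and the 11.0.0 override are my guesses for a gfx1100 card, not values I've confirmed anywhere):

docker run -ti --name local-ai -p 8080:8080 \
  --device=/dev/kfd --device=/dev/dri --group-add=video \
  -e HIP_VISIBLE_DEVICES=0 \
  -e HSA_OVERRIDE_GFX_VERSION=11.0.0 \
  localai/localai:latest-aio-gpu-hipblas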
Trying to run anything under the AIO image always results in: localai failed to load model with internal loader: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
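That EOF just means the backend process died, so to surface the actual crash reason I'm re-running with verbose logging (DEBUG=true is how I understand LocalAI enables it; happy to be corrected) and tailing the container logs:

docker run -ti --name local-ai -p 8080:8080 -e DEBUG=true \
  --device=/dev/kfd --device=/dev/dri --group-add=video \
  localai/localai:latest-aio-gpu-hipblas
docker logs -f local-ai   # backend stderr should show why the loader exited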
Running LocalAI works perfectly fine under CPU only, but I can't seem to get it to hit the GPU at all. Not sure if I'm doing something wrong here, or if I'm missing something.