fix(intel): Set GPU vendor on Intel images and cleanup #5945
Changes from all commits
File filter
Filter by extension
Conversations
Jump to
Diff view
Diff view
There are no files selected for viewing
````diff
@@ -140,11 +140,7 @@ docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri
 ### Intel GPU Images (oneAPI):
 
 ```bash
-# Intel GPU with FP16 support
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri/card1 --device=/dev/dri/renderD128 localai/localai:latest-gpu-intel-f16
-
-# Intel GPU with FP32 support
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri/card1 --device=/dev/dri/renderD128 localai/localai:latest-gpu-intel-f32
+docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri/card1 --device=/dev/dri/renderD128 localai/localai:latest-gpu-intel
 ```
 
 ### Vulkan GPU Images:
````
```diff
@@ -166,7 +162,7 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-ai
 docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-11
 
 # Intel GPU version
-docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel-f16
```
**Owner:** I think we need to update the https://github.com/mudler/LocalAI/blob/master/docs/static/install.sh script as well, and I guess the documentation too. Maybe better to grep the codebase.

**Author (Collaborator):** I did, but not for the right thing, I think. I'm a bit confused about the AIO image and how it works now. Does it include the backends, or are they downloaded? I can't see how that would happen. If they are included, then it would perhaps make sense to keep the f16 and f32 versions, but presently I have removed them.

**Owner:** All downloaded during start; there aren't any backends in any image anymore 👍
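The Owner's grep suggestion can be turned into a quick sweep; a minimal sketch, assuming it is run from the repository root (the file globs and tag names here are just the ones this PR removes, not an exhaustive list):

```shell
# Search shell scripts and docs for references to the old Intel image
# tags (latest-gpu-intel-f16 / latest-gpu-intel-f32) that this PR
# collapses into latest-gpu-intel.
grep -rn --include='*.sh' --include='*.md' \
  -e 'gpu-intel-f16' -e 'gpu-intel-f32' . \
  || echo "no stale tags found"
```

Any hit points at a script or document, such as docs/static/install.sh, that still needs updating.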
```diff
+docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel
 
 # AMD GPU version
 docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
```
**Owner:** Shall we simply put `intel` here? This is not used anywhere else now; backends have their own `build_type`.

**Author:** I dunno, Intel seems to have multiple options and I'm not sure if oneAPI would cover all of them.

**Owner:** But it's what Intel suggests, and it's the basic dependency to get going with an Intel GPU (common enough for all the other backends). I'd really go simple here and name it `intel`. Users don't have to learn internals until they want to.
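The naming discussed above would end up as a single vendor label baked into the Intel images. A minimal sketch of the idea only; `GPU_VENDOR` and `LOCALAI_GPU_VENDOR` are hypothetical names chosen for illustration, and the actual variable this PR sets is not shown in this excerpt:

```dockerfile
# Hypothetical illustration: one plain vendor label on the Intel images,
# rather than per-precision f16/f32 image variants. The real variable
# name used by LocalAI may differ.
ARG GPU_VENDOR=intel
ENV LOCALAI_GPU_VENDOR=${GPU_VENDOR}
```

Keeping the label to a bare `intel` matches the thread's conclusion: precision and toolkit details belong to the individual backends, which are downloaded at start and carry their own `build_type`.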