Is your feature request related to a problem? Please describe.
This feature request concerns AI runtimes such as llama.cpp, which ship different compiled builds for different devices (CPU, CUDA, Vulkan, etc.). Installing the correct dependencies requires detecting which compute backends the host machine supports.
Describe the solution you'd like
https://github.com/withcatai/node-llama-cpp/blob/master/src/bindings/utils/detectAvailableComputeLayers.ts
This repo has a method for detecting CUDA and Vulkan support. Could something like it be integrated here?
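To illustrate the idea, here is a minimal sketch of backend detection by probing for the platform's driver libraries (the approach node-llama-cpp's `detectAvailableComputeLayers.ts` takes, among other checks). The library names (`nvcuda.dll`, `libcuda.so`, `vulkan-1.dll`, `libvulkan.so`) are the real driver/loader names, but the search paths are simplified assumptions and the function names are hypothetical:

```typescript
// Hypothetical sketch: detect CUDA/Vulkan by checking for their driver
// libraries in a few well-known locations. A production implementation
// would scan more paths (LD_LIBRARY_PATH, CUDA_PATH, etc.).
import { existsSync } from "fs";
import { join } from "path";

function findLibrary(names: string[], dirs: string[]): boolean {
  return dirs.some((dir) => names.some((name) => existsSync(join(dir, name))));
}

function detectComputeLayers(): { cuda: boolean; vulkan: boolean } {
  if (process.platform === "win32") {
    const system32 = join(process.env.SystemRoot ?? "C:\\Windows", "System32");
    return {
      cuda: findLibrary(["nvcuda.dll"], [system32]),
      vulkan: findLibrary(["vulkan-1.dll"], [system32]),
    };
  }
  // Linux/macOS: check common loader directories for the driver libraries.
  const libDirs = ["/usr/lib", "/usr/lib64", "/usr/lib/x86_64-linux-gnu"];
  return {
    cuda: findLibrary(["libcuda.so", "libcuda.so.1"], libDirs),
    vulkan: findLibrary(["libvulkan.so", "libvulkan.so.1"], libDirs),
  };
}

console.log(detectComputeLayers());
```

The result varies by machine, so the sketch only reports what it finds rather than asserting a particular backend.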
Describe alternatives you've considered
I haven't found a solution for detecting DX12 support, and DirectML depends on DX12.
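A partial workaround, as a hedged sketch: checking for `d3d12.dll` in System32 confirms the DirectX 12 runtime is present, but not that the GPU driver actually supports a usable DX12 feature level — a complete check would need a native call such as `D3D12CreateDevice`. The function name below is hypothetical:

```typescript
// Hypothetical sketch: probe for the DirectX 12 runtime by checking for
// d3d12.dll. This is a necessary but not sufficient condition for DirectML:
// it does not verify the GPU/driver's DX12 feature-level support.
import { existsSync } from "fs";
import { join } from "path";

function detectDX12Runtime(): boolean {
  if (process.platform !== "win32") return false; // DX12 is Windows-only
  const system32 = join(process.env.SystemRoot ?? "C:\\Windows", "System32");
  return existsSync(join(system32, "d3d12.dll"));
}

console.log(detectDX12Runtime());
```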