Is there a way to detect Vulkan, DirectML, and DX12? #980

@wszgrcy

Description
Is your feature request related to a problem? Please describe.
This relates to AI tooling such as llama.cpp, which ships different compiled builds for different devices. To install the right dependencies, it is necessary to detect which compute devices the machine supports.
Describe the solution you'd like
https://github.com/withcatai/node-llama-cpp/blob/master/src/bindings/utils/detectAvailableComputeLayers.ts

This repo has a method to detect CUDA and Vulkan. Could it be integrated here?
Describe alternatives you've considered
However, I haven't found a solution for detecting DX12.
DirectML depends on DX12.
Additional context
