[DOCS] Llama-CPP Linux NVIDIA GPU support and Windows-WSL #2148

@hamzahassan66

Description

What:

Replace

```
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python numpy==1.26.0
```

with

```
CMAKE_ARGS='-DGGML_CUDA=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python==0.2.90 numpy==1.26.4 markupsafe==2.1.5
```

Why:

- `-DLLAMA_CUBLAS` is deprecated; newer llama.cpp builds enable CUDA with `-DGGML_CUDA` instead.
- The pinned dependency versions (`llama-cpp-python==0.2.90`, `numpy==1.26.4`, `markupsafe==2.1.5`) are known to be compatible with each other.
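After reinstalling, it is worth confirming that the wheel was actually compiled with CUDA support. A minimal sketch, assuming the install command above succeeded inside the same Poetry environment and an NVIDIA driver is visible (e.g. under WSL); `llama_supports_gpu_offload` is part of llama-cpp-python's low-level bindings:

```shell
# Sketch: verify the CUDA-enabled build (assumes the install above succeeded).
# llama_supports_gpu_offload() returns True when llama.cpp was compiled with GPU support.
poetry run python -c "from llama_cpp import llama_supports_gpu_offload; print(llama_supports_gpu_offload())"

# The NVIDIA driver and GPU should also be visible from inside WSL:
nvidia-smi
```

If the first command prints `False`, the build fell back to CPU-only, usually because `CMAKE_ARGS` was not set or the CUDA toolkit was not found at compile time.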

Metadata

Labels: documentation (Improvements or additions to documentation)
