Add back my llama-cpp-python wheels, bump to 0.2.65#5964

Merged
oobabooga merged 3 commits into dev from add-back-llama-cpp-wheels on Apr 30, 2024

Conversation


@oobabooga oobabooga commented Apr 30, 2024

This adds back support for:

  • AMD GPUs
  • AVX2 CPUs
  • The --tensorcores option for better speed on NVIDIA GPUs

I was previously unable to upload my own wheels because of a GitHub storage quota. Removing those wheels was a significant loss of functionality for the project, so I have decided to simply start paying for the necessary storage.
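For context, the restored `--tensorcores` option is passed when launching the web UI with a llama.cpp-loaded GGUF model. A minimal sketch (the model filename here is a placeholder, not from this PR):

```shell
# Hypothetical invocation: start text-generation-webui and select the
# tensor-cores llama-cpp-python wheel for NVIDIA GPUs.
# "example-model.gguf" is a placeholder model name.
python server.py --model example-model.gguf --tensorcores
```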

@oobabooga oobabooga changed the title Add back my llama-cpp-python wheels Add back my llama-cpp-python wheels, bump to 0.2.65 Apr 30, 2024
@oobabooga oobabooga mentioned this pull request Apr 30, 2024
@oobabooga oobabooga merged commit 51fb766 into dev Apr 30, 2024
@oobabooga oobabooga deleted the add-back-llama-cpp-wheels branch May 19, 2024 23:29
PoetOnTheRun pushed a commit to PoetOnTheRun/text-generation-webui that referenced this pull request Oct 22, 2024