Home

Georgi Gerganov edited this page Jul 31, 2025
    Welcome to the llama.cpp wiki!
Install via the AUR on Arch Linux (pick the variant matching your hardware):

```sh
yay -S llama.cpp         # CPU build
yay -S llama.cpp-cuda    # NVIDIA GPUs (CUDA)
yay -S llama.cpp-hip     # AMD GPUs (HIP/ROCm)
yay -S llama.cpp-vulkan  # Vulkan backend
```

Run directly via Nix flakes:

```sh
nix run github:ggerganov/llama.cpp
```
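Which AUR package to pick depends on the GPU backend; a minimal sketch of that mapping as a shell helper (the function name and vendor strings are hypothetical, for illustration only):

```sh
# Map a GPU vendor string to the matching AUR package name.
# pick_aur_pkg and its vendor strings are illustrative; on a real
# system the vendor could be derived from `lspci`, for example.
pick_aur_pkg() {
  case "$1" in
    nvidia) echo "llama.cpp-cuda" ;;
    amd)    echo "llama.cpp-hip" ;;
    vulkan) echo "llama.cpp-vulkan" ;;
    *)      echo "llama.cpp" ;;   # CPU-only fallback
  esac
}

pick_aur_pkg nvidia   # prints "llama.cpp-cuda"
```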
For the OpenCL variant of the flake:

```sh
nix run 'github:ggerganov/llama.cpp#opencl'
```

On NixOS, the flake's package can be installed system-wide:

```nix
{ config, pkgs, ... }:
{
  nixpkgs.config.packageOverrides = pkgs: {
    llama-cpp = (
      builtins.getFlake "github:ggerganov/llama.cpp"
    ).packages.${builtins.currentSystem}.default;
  };
  environment.systemPackages = with pkgs; [ llama-cpp ];
}
```

On Termux, wait for https://github.com/termux/termux-packages/pull/17457, then install with:
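The NixOS snippet resolves the flake for `builtins.currentSystem`; a rough shell equivalent of that system string, assuming the common `uname`-to-Nix mappings (the helper name is made up for illustration):

```sh
# Approximate Nix's builtins.currentSystem from uname output.
# The arch/os mappings below cover only the common platforms and are
# an assumption, not an exhaustive list.
nix_system() {
  arch="$(uname -m)"
  os="$(uname -s)"
  case "$arch" in
    arm64) arch=aarch64 ;;   # macOS reports arm64, Nix uses aarch64
  esac
  case "$os" in
    Linux)  os=linux ;;
    Darwin) os=darwin ;;
  esac
  echo "${arch}-${os}"
}

nix_system   # e.g. "x86_64-linux" on a typical x86-64 Linux host
```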
```sh
apt install llama-cpp
```

On Arch Linux:

```sh
pacman -S llama-cpp
```

To build and install a Debian package from source:

```sh
git clone --depth=1 https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -Bbuild -D...
cmake --build build
cd build
cpack -G DEB
dpkg -i *.deb
```

To build and install an RPM package from source:

```sh
git clone --depth=1 https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -Bbuild -D...
cmake --build build
cd build
cpack -G RPM
rpm -i *.rpm
```

Useful information for users that doesn't fit into the README:
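The Debian and RPM recipes differ only in the CPack generator and install command; a small sketch of selecting the generator by distro family (the helper and distro names are assumptions, for illustration):

```sh
# Choose the CPack generator for a given distro family.
# The mapping is illustrative; adjust for your target distribution.
pkg_generator() {
  case "$1" in
    debian|ubuntu)    echo "DEB" ;;
    fedora|rhel|suse) echo "RPM" ;;
    *) echo "unknown distro: $1" >&2; return 1 ;;
  esac
}

pkg_generator ubuntu   # prints "DEB"
pkg_generator fedora   # prints "RPM"
```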
- Home
- Feature Matrix
- GGML Tips & Tricks
- Chat Templating
- Metadata Override
- HuggingFace Model Card Metadata Interoperability Consideration
The following is information useful for maintainers and developers that does not fit into code comments.
Click on a badge to jump to its workflow. This page serves as a general view of all the CI actions, so that we can notice more quickly if, and where, main-branch automation is broken.