Conversation
|
I didn't know about this feature in git; it seems to be very useful |
|
You can also just do EDIT: sorry, that doesn't work:

    master [?1] ~/development/github/llama.cpp
    19:50:59 [1] git pr checkout 18644 --worktree
    checkout doesn't match the pr id pattern.
    18644 doesn't match the pr id pattern.
    --worktree doesn't match the pr id pattern.

    master [?1] ~/development/github/llama.cpp
    19:51:23 [1] git pr checkout 18644
    checkout doesn't match the pr id pattern.
    18644 doesn't match the pr id pattern.

    master [?1] ~/development/github/llama.cpp
    19:51:24 [1] git pr 18644 --worktree
    error: unknown option `worktree'

Edit: was using

Still, it does not work even with:

    $ gh pr checkout 18644 --worktree
    unknown flag: --worktree

    Usage:  gh pr checkout [<number> | <url> | <branch>] [flags]

    Flags:
      -b, --branch string        Local branch name to use (default [the name of the head branch])
          --detach               Checkout PR with a detached HEAD
      -f, --force                Reset the existing local branch to the latest state of the pull request
          --recurse-submodules   Update all submodules after checkout

    $ gh --version
    gh version 2.83.2 (2025-12-10)
    https://github.com/cli/cli/releases/tag/v2.83.2 |
|
to test a PR I usually do: cd into the llama.cpp repo dir and: It will create a new branch named pr-branch; then build as usual. |
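The command elided above is presumably the standard GitHub trick of fetching the PR's head ref into a local branch; a sketch (the PR number 18644 and the branch name pr-branch are reused from elsewhere in this thread):

```shell
# GitHub exposes every pull request on the upstream repository under
# refs/pull/<id>/head, so the PR can be fetched without knowing the
# contributor's fork. This creates a local branch pr-branch from PR 18644
# and switches to it.
git fetch origin pull/18644/head:pr-branch
git switch pr-branch
```

After that, build as usual from the pr-branch checkout.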
Yup, this almost works. I want to also have the git remote added so I can potentially push stuff to the PR branch. |
|
Let's see if this other idea helps you, if I understood your requirements correctly (HTTPS with a token, or use SSH instead): .. do your changes .. then as usual:
Co-authored-by: Sigbjørn Skjæret <[email protected]>
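If I read the suggestion right, the idea is roughly the following (a sketch; the remote name, fork URL, and branch name are placeholders, not taken from the thread):

```shell
# Add the PR author's fork as an extra remote (SSH here; an HTTPS URL
# with a token works the same way), then push local commits back to the
# PR's source branch. "contributor", SOME_USER and their-pr-branch are
# placeholders for the actual fork and branch of the PR.
git remote add contributor [email protected]:SOME_USER/llama.cpp.git
git fetch contributor

# ... do your changes, commit ...

git push contributor HEAD:their-pr-branch
```

Pushing to the PR's source branch only works if you have write access to the fork, or if the PR author enabled "Allow edits by maintainers".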
|
sorry, I couldn't help testing this before. I found an error. I am running the script with PR #18641 - it is a draft PR, but still, there is work there to be tested. Running it results in an error: the curl alone at LOC 40 fails with HTTP 404. The error occurs because at LOC 36 the result is 'ggml-org/llama.cpp.git', but it should be 'ggml-org/llama.cpp' instead. Also, before continuing to the next steps, it is worth checking the error status of the curl output, just in case. Running the curl alone like this works:

Command Output:

    {
      "url": "https://api.github.com/repos/ggml-org/llama.cpp/pulls/18641",
      "id": 3149612767,
      "node_id": "PR_kwDOJH_K4M67u0bf",
      "number": 18641,
      "state": "open",
      "locked": false,
      "title": "[Do Not Merge] model : LFM2.5-Audio-1.5B",
      "user": {
        "login": "tdakhran",
        "id": 20753751,
        ...
      },
      "body": "Liquid AI released LFM2.5-Audio-1.5B. ...",
      "created_at": "2026-01-06T14:25:07Z",
      "updated_at": "2026-01-07T13:56:01Z",
      "closed_at": null,
      "merged_at": null,
      "merge_commit_sha": "44159cd83d4631b098d384cb569d83b1fe0e66af",
      "assignee": null,
      "assignees": [],
      ...
    }

If you like, I can help fix this. Thank you so much for all your work. |
|
to get rid of the trailing part (.git) you can do this: and to check the error status of the curl output:
Helper script for quickly creating a git worktree from a PR.
Sample usage: