examples/infill/infill.cpp: 5 additions & 0 deletions
@@ -235,7 +235,12 @@ int main(int argc, char ** argv) {

     std::vector<llama_token> embd_inp;
     std::vector<llama_token> inp_pfx = ::llama_tokenize(ctx, params.input_prefix, false);
+    params.input_suffix.erase(0, params.input_suffix.find_first_not_of(" "));
     std::vector<llama_token> inp_sfx = ::llama_tokenize(ctx, params.input_suffix, false);
+    const int space_token = 29871;
+    if (params.escape && inp_sfx.size() > 1 && inp_sfx[0] == space_token) {
Member commented:
I think we always want to remove the leading space, regardless of whether params.escape is set or not:

Suggested change:
-    if (params.escape && inp_sfx.size() > 1 && inp_sfx[0] == space_token) {
+    if (inp_sfx.size() > 1 && inp_sfx[0] == space_token) {

Contributor (PR author) replied:
I think we need the params.escape check, since only then do we add an extra space to the suffix.

+        inp_sfx.erase(inp_sfx.begin());
+    }
     inp_pfx.insert(inp_pfx.begin(), llama_token_prefix(ctx));
     if (add_bos) {
         inp_pfx.insert(inp_pfx.begin(), llama_token_bos(ctx));
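The same check is duplicated in the server example below. As the author notes, the extra space is only prepended to the suffix when escaping is enabled, which is why the check is gated on params.escape. A minimal sketch of the shared logic, written as a hypothetical helper that is not part of the PR (token 29871 is assumed to be the bare "▁" space piece in the LLaMA SentencePiece vocabulary; <vector> and llama.h are assumed to be included, as they are in both examples):

// Hypothetical helper (not in the PR) sketching the logic both call sites add:
// drop the artificial leading space token from the tokenized suffix, but only
// when escape processing prepended that space in the first place.
static void strip_leading_space_token(std::vector<llama_token> & toks, bool escape) {
    const llama_token space_token = 29871; // assumed: bare "▁" piece in the LLaMA vocab
    if (escape && toks.size() > 1 && toks[0] == space_token) {
        toks.erase(toks.begin());
    }
}

With such a helper, both infill.cpp and server.cpp could call strip_leading_space_token(inp_sfx, params.escape) instead of repeating the check inline.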
examples/server/server.cpp: 5 additions & 0 deletions
@@ -344,8 +344,13 @@ struct llama_server_context

     void loadInfill()
     {
+        params.input_suffix.erase(0, params.input_suffix.find_first_not_of(" "));
         auto prefix_tokens = tokenize(params.input_prefix, false);
         auto suffix_tokens = tokenize(params.input_suffix, false);
+        const int space_token = 29871;
+        if (params.escape && suffix_tokens.size() > 1 && suffix_tokens[0] == space_token) {
+            suffix_tokens.erase(suffix_tokens.begin());
+        }
         prefix_tokens.insert(prefix_tokens.begin(), llama_token_prefix(ctx));
         prefix_tokens.insert(prefix_tokens.begin(), llama_token_bos(ctx)); // always add BOS
         prefix_tokens.insert(prefix_tokens.end(), llama_token_suffix(ctx));
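For completeness, here is a rough sketch of how the rest of loadInfill() presumably assembles the full fill-in-the-middle prompt from these pieces. This is an assumption based on the calls visible above and the common [BOS][PRE]prefix[SUF]suffix[MID] layout; llama_token_middle is not shown in the diff and is assumed to exist alongside the prefix/suffix accessors used here, and build_infill_prompt is a hypothetical name:

// Sketch only: assumed continuation of the prompt assembly, not taken from the diff.
// Assumes llama.h and <vector> are included, as in server.cpp.
static std::vector<llama_token> build_infill_prompt(llama_context * ctx,
                                                    std::vector<llama_token> prefix_tokens,
                                                    const std::vector<llama_token> & suffix_tokens) {
    prefix_tokens.insert(prefix_tokens.begin(), llama_token_prefix(ctx)); // <PRE>
    prefix_tokens.insert(prefix_tokens.begin(), llama_token_bos(ctx));    // BOS goes first
    prefix_tokens.insert(prefix_tokens.end(), llama_token_suffix(ctx));   // <SUF>
    prefix_tokens.insert(prefix_tokens.end(), suffix_tokens.begin(), suffix_tokens.end());
    prefix_tokens.push_back(llama_token_middle(ctx));                     // <MID>: the model fills in from here
    return prefix_tokens;
}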