Commit 6f6c785

Authored by imartinez, sgresham, and eltociear.

feat(llm): Ollama timeout setting (#1773)

* added request_timeout to ollama, default set to 30.0 in settings.yaml and settings-ollama.yaml
* Update settings-ollama.yaml
* Update settings.yaml
* updated settings.py and tidied up settings-ollama.yaml
* feat(UI): Faster startup and document listing (#1763)
* fix(ingest): update script label (#1770): huggingface -> Hugging Face
* Fix lint errors

Co-authored-by: Stephen Gresham <steve@gresham.id.au>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>

1 parent c2d6948 commit 6f6c785

4 files changed: 12 additions & 5 deletions

private_gpt/components/llm/llm_component.py

Lines changed: 1 addition & 0 deletions
@@ -131,6 +131,7 @@ def __init__(self, settings: Settings) -> None:
                     temperature=settings.llm.temperature,
                     context_window=settings.llm.context_window,
                     additional_kwargs=settings_kwargs,
+                    request_timeout=ollama_settings.request_timeout,
                 )
             case "azopenai":
                 try:

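The component change simply forwards the new setting into the LLM client's constructor alongside the existing keyword arguments. A minimal sketch of that pattern (the `OllamaClient` class below is a hypothetical stand-in, not the real llama-index class used by the project):

```python
class OllamaClient:
    """Hypothetical stand-in for the real Ollama LLM client."""

    def __init__(self, temperature: float, context_window: int,
                 additional_kwargs: dict, request_timeout: float = 120.0):
        self.temperature = temperature
        self.context_window = context_window
        self.additional_kwargs = additional_kwargs
        # Seconds to wait for the Ollama server before aborting the request.
        self.request_timeout = request_timeout


# As in the diff, the timeout is read from the parsed Ollama settings
# and passed next to the existing keyword arguments.
client = OllamaClient(temperature=0.1, context_window=3900,
                      additional_kwargs={}, request_timeout=120.0)
print(client.request_timeout)  # 120.0
```

Before this commit the client fell back to whatever default the underlying library used, which is why slow local models could fail with a timeout that was not configurable from the settings files.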
private_gpt/settings/settings.py

Lines changed: 4 additions & 0 deletions
@@ -241,6 +241,10 @@ class OllamaSettings(BaseModel):
         1.1,
         description="Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)",
     )
+    request_timeout: float = Field(
+        120.0,
+        description="Time elapsed until ollama times out the request. Default is 120s. Format is float. ",
+    )


 class AzureOpenAISettings(BaseModel):

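Because the field carries a `Field` default, the timeout applies even when a settings file omits the key. A self-contained sketch of just the new field (assuming pydantic is installed; the other `OllamaSettings` fields are elided here):

```python
from pydantic import BaseModel, Field


class OllamaSettings(BaseModel):
    # Mirrors only the field added in this commit; other settings elided.
    request_timeout: float = Field(
        120.0,
        description="Time elapsed until ollama times out the request.",
    )


# The default applies when the key is absent from the settings file...
print(OllamaSettings().request_timeout)  # 120.0
# ...and an explicit value (e.g. parsed from YAML) overrides it.
print(OllamaSettings(request_timeout=30.0).request_timeout)  # 30.0
```

This behaves the same under pydantic v1 and v2, since both accept a positional default to `Field`.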
settings-ollama.yaml

Lines changed: 6 additions & 5 deletions
@@ -14,11 +14,12 @@ ollama:
   llm_model: mistral
   embedding_model: nomic-embed-text
   api_base: http://localhost:11434
-  tfs_z: 1.0 # Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.
-  top_k: 40 # Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)
-  top_p: 0.9 # Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)
-  repeat_last_n: 64 # Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)
-  repeat_penalty: 1.2 # Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)
+  tfs_z: 1.0 # Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.
+  top_k: 40 # Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)
+  top_p: 0.9 # Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)
+  repeat_last_n: 64 # Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)
+  repeat_penalty: 1.2 # Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)
+  request_timeout: 120.0 # Time elapsed until ollama times out the request. Default is 120s. Format is float.

 vectorstore:
   database: qdrant

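How the YAML value reaches the settings model can be sketched with PyYAML (assuming pyyaml and pydantic are installed; this loader is a simplification of the project's actual settings machinery, and only two fields are modeled):

```python
import yaml
from pydantic import BaseModel, Field


class OllamaSettings(BaseModel):
    api_base: str = "http://localhost:11434"
    request_timeout: float = Field(120.0)  # default when the YAML omits the key


# A fragment of settings-ollama.yaml; unrelated keys elided.
raw = yaml.safe_load("""
ollama:
  api_base: http://localhost:11434
  request_timeout: 120.0
""")

settings = OllamaSettings(**raw["ollama"])
print(settings.request_timeout)  # 120.0
```

Lowering the value in the YAML file makes slow Ollama responses fail fast; raising it gives large local models more time to produce a first token.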
settings.yaml

Lines changed: 1 addition & 0 deletions
@@ -89,6 +89,7 @@ ollama:
   llm_model: llama2
   embedding_model: nomic-embed-text
   api_base: http://localhost:11434
+  request_timeout: 120.0

 azopenai:
   api_key: ${AZ_OPENAI_API_KEY:}
