Ollama configuration in .env: correct base URL and model name (/api, /v1, llama3 vs llama3:latest) #1155

@sebastien-libbrecht

Description

Hello,

I am trying to configure Ollama as the LLM backend using environment variables, but I am having trouble understanding the correct values to put in the .env file.

In particular, I am confused about:

Which base URL should be used:

  • With or without /api?
  • With or without /v1?

Which model name is expected:

  • llama3 vs llama3:latest

How to avoid common errors such as:

  • 400 Bad Request
  • 404 Not Found

What I have verified so far:

  • Ollama is running correctly
  • The model is present (checked with ollama list)
  • The OpenAI-compatible endpoint responds at /v1/models
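Concretely, the checks look like this (all against Ollama's default port 11434; the /api/tags call is just another way to confirm the native API is reachable):

```sh
# Confirm the daemon is up and the model is pulled
ollama list

# Native Ollama API: lists local models
curl http://localhost:11434/api/tags

# OpenAI-compatible endpoint: also lists models
curl http://localhost:11434/v1/models
```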

Despite this, depending on how the .env variables are configured, the application still returns Bad Request or Not Found errors on chat completions (see the configuration sketch below).
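For concreteness, here is the kind of .env I have been experimenting with. The variable names (OPENAI_BASE_URL, OPENAI_API_KEY, OPENAI_MODEL) are guesses based on similar projects and may well be wrong; that is exactly what I would like to have confirmed:

```sh
# Hypothetical variable names -- please correct if the application expects others.

# Base URL: which of these is expected?
OPENAI_BASE_URL=http://localhost:11434        # root only
# OPENAI_BASE_URL=http://localhost:11434/v1   # OpenAI-compatible endpoint
# OPENAI_BASE_URL=http://localhost:11434/api  # native Ollama API

# Ollama ignores the API key, but many OpenAI clients refuse an empty one
OPENAI_API_KEY=ollama

# Model name: short form or full tag?
OPENAI_MODEL=llama3
# OPENAI_MODEL=llama3:latest
```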

My questions:

  • What are the recommended environment variables to configure Ollama correctly?
  • Should the base URL include /api, /v1, or only the root URL?
  • Is llama3 a valid model name, or should llama3:latest always be used?
  • Is there an official example of a working .env configuration?
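For reference, this is the kind of request I would expect the application to send under the hood when using the OpenAI-compatible endpoint; run directly with curl, it lets me check whether the model name itself is accepted:

```sh
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```

If this direct call succeeds with both llama3 and llama3:latest, then the 400/404 presumably comes from how the application builds the URL rather than from the model name.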

The Ollama documentation is not very clear on these points, and a concise working example would really help.

Thank you in advance for your help,
and happy end-of-year holidays.
