Copilot with Local Models via Ollama #2193

@shouse-lab

Description

marimo appears to support both GitHub Copilot and Codeium Copilot for code completion; I am hoping to use locally hosted models served by Ollama as a third option.

Suggested solution

The [completion] section in ~/.marimo.toml would look something like this:

[completion]
copilot = "ollama"
api_key = "ollama"
model = "codeqwen:7b-code-v1.5-q5_1"
base_url = "http://localhost:11434/v1/chat/completions"
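
As a point of reference, a minimal sketch of how a local Ollama server can be queried through its OpenAI-compatible API, using the model name and localhost URL from the config above. The use of the openai Python client here is only an illustration of the endpoint, not marimo's actual integration:

# Sketch: query a local Ollama server through its OpenAI-compatible API.
# Assumes Ollama is running on localhost:11434 and the model below has been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible base URL
    api_key="ollama",                      # Ollama does not check the key; any placeholder works
)

response = client.chat.completions.create(
    model="codeqwen:7b-code-v1.5-q5_1",
    messages=[{"role": "user", "content": "Complete this function:\ndef fibonacci(n):"}],
)
print(response.choices[0].message.content)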

Alternative

No response

Additional context

No response
