# or any model from https://ai.google.dev/gemini-api/docs/models/gemini

[ai.google]
api_key = "AI..."
```

#### Using Google Vertex AI (no API key required)

1. Ensure you have access to a Google Cloud project with Vertex AI enabled.
2. Set the following environment variables before starting marimo:

```bash
export GOOGLE_GENAI_USE_VERTEXAI=true
export GOOGLE_CLOUD_PROJECT='your-project-id'
export GOOGLE_CLOUD_LOCATION='us-central1'
```

* `GOOGLE_GENAI_USE_VERTEXAI=true` tells the client to use Vertex AI.
* `GOOGLE_CLOUD_PROJECT` is your GCP project ID.
* `GOOGLE_CLOUD_LOCATION` is your region (e.g., `us-central1`).

3. No API key is needed in your `marimo.toml` for Vertex AI.

For details and advanced configuration, see the [`google-genai` Python client docs](https://googleapis.github.io/python-genai/#create-a-client).
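
To sanity-check the Vertex AI setup outside marimo, here is a minimal sketch using the `google-genai` client (it assumes `pip install google-genai` and the environment variables above; the model name is only an example):

```python
from google import genai

# Client() reads GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_CLOUD_PROJECT,
# and GOOGLE_CLOUD_LOCATION from the environment.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.0-flash-001",  # example model name; substitute your own
    contents="Say hello.",
)
print(response.text)
```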

### GitHub Copilot

Use Copilot for code refactoring or the chat panel (Copilot subscription required).

### OpenRouter

Route to many providers through OpenRouter with a single API.

**Requirements**

* Create an API key: [OpenRouter Dashboard](https://openrouter.ai/)
* `pip install openai` or `uv add openai` (OpenRouter is OpenAI‑compatible)
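
Before configuring marimo, you can confirm the key works. A minimal sketch with the `openai` client (the model id is an example; pick any model listed on openrouter.ai):

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-or-...",                      # your OpenRouter API key
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
)
resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # example model id
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```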

**Configuration**

### Ollama

Run open-source LLMs locally and connect via an OpenAI‑compatible API.

**Requirements**

* Install [Ollama](https://ollama.com/)
* `pip install openai` or `uv add openai`

**Setup**

1. Pull a model:

```bash
# View available models at https://ollama.com/library
ollama pull codellama

# List the models you have pulled
ollama ls
```

2. Start the Ollama server:

```bash
ollama serve
# In another terminal, run a model (optional)
ollama run codellama
```

3. Visit <http://127.0.0.1:11434> to confirm that the server is running.

!!! note "Port already in use"

    If you get a "port already in use" error, you may need to close an existing Ollama instance. On Windows, click the up arrow in the taskbar, find the Ollama icon, and select "Quit". This is a known issue (see [Ollama Issue #3575](https://github.com/ollama/ollama/issues/3575)). Once you've closed the existing Ollama instance, you should be able to run `ollama serve` successfully.

**Configuration**

```toml title="marimo.toml"
[ai.models]
chat_model = "ollama/llama3.1:latest"
edit_model = "ollama/codellama"
autocomplete_model = "ollama/codellama" # or another model from `ollama ls`
```
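
You can also exercise the same endpoint directly before relying on marimo. A minimal sketch with the `openai` client (it assumes `ollama serve` is running and `codellama` was pulled above; Ollama ignores the API key, but the client requires a non-empty string):

```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="ollama")
resp = client.chat.completions.create(
    model="codellama",
    messages=[{"role": "user", "content": "Write a one-line hello world in Python."}],
)
print(resp.choices[0].message.content)
```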

??? warning "Important: Use the `/v1` endpoint"

    ```bash
    curl http://127.0.0.1:11434/v1/models
    ```
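
The same check in Python, if you prefer (a sketch; any non-empty API key works with Ollama):

```python
from openai import OpenAI

# Equivalent to `curl http://127.0.0.1:11434/v1/models`.
client = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="ollama")
for model in client.models.list():
    print(model.id)
```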

### OpenAI-compatible providers

Many providers expose OpenAI-compatible endpoints. Point `base_url` at the provider and use their models.

Common examples include [GROQ](https://console.groq.com/docs/openai), DeepSeek, and others.

* Provider API key
* Provider OpenAI-compatible `base_url`
* `pip install openai` or `uv add openai`
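
As an illustration of what "OpenAI-compatible" means here, a sketch in which every value is a placeholder to be replaced with your provider's documented key, URL, and model name:

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-provider-api-key",        # placeholder
    base_url="https://api.example.com/v1",  # placeholder: the provider's OpenAI-compatible URL
)
resp = client.chat.completions.create(
    model="provider-model-name",  # placeholder
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```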

**Configuration**

### DeepSeek

Use DeepSeek via its OpenAI‑compatible API.

**Requirements**

* DeepSeek API key
* `pip install openai` or `uv add openai`

**Configuration**

### xAI (Grok)

Use Grok models via xAI's OpenAI‑compatible API.

**Requirements**

* xAI API key
* `pip install openai` or `uv add openai`

**Configuration**

### LM Studio

Connect to a local model served by LM Studio's OpenAI‑compatible endpoint.

**Requirements**

* Install LM Studio and start its server
* `pip install openai` or `uv add openai`

**Configuration**

### Mistral

Use Mistral via its OpenAI‑compatible API.

**Requirements**

* Mistral API key
* `pip install openai` or `uv add openai`

**Configuration**

### Together AI

Access multiple hosted models via Together AI's OpenAI‑compatible API.

**Requirements**

* Together AI API key
* `pip install openai` or `uv add openai`

**Configuration**

### Vercel v0

Use Vercel's v0 OpenAI‑compatible models for app-oriented generation.