Commit eb3ec4a

fix: use supported image model (#567)
## Description <!-- What does this PR do? -->

## PR Type <!-- Delete the types that don't apply -->

🆕 New Feature
🐛 Bug Fix
💅 Refactor
📚 Documentation
🚦 Infrastructure

## Relevant issues <!-- e.g. "Fixes #123" -->

## Checklist

- [ ] I have added unit tests that prove my fix/feature works
- [ ] New and existing tests pass locally
- [ ] Documentation was updated where necessary
- [ ] I have read and followed the [contribution guidelines](https://github.com/mozilla-ai/any-llm/blob/main/CONTRIBUTING.md)
1 parent 2550bee commit eb3ec4a

File tree

1 file changed: +1 addition, −1 deletion


tests/conftest.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -79,7 +79,7 @@ def provider_model_map() -> dict[LLMProvider, str]:
 def provider_image_model_map(provider_model_map: dict[LLMProvider, str]) -> dict[LLMProvider, str]:
     return {
         **provider_model_map,
-        LLMProvider.WATSONX: "mistralai/pixtral-12b",
+        LLMProvider.WATSONX: "meta-llama/llama-guard-3-11b-vision",
         LLMProvider.SAMBANOVA: "Llama-4-Maverick-17B-128E-Instruct",
         LLMProvider.NEBIUS: "openai/gpt-oss-20b",
         LLMProvider.OPENROUTER: "mistralai/mistral-small-3.2-24b-instruct:free",
```
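For context, the patched fixture merges a base provider→model map with per-provider overrides for vision-capable models, so image tests only diverge from the defaults where they must. A minimal self-contained sketch of that pattern (the `LLMProvider` enum and the base-map values here are illustrative stand-ins; the real conftest imports the enum from any-llm and defines these as pytest fixtures):

```python
from enum import Enum


class LLMProvider(Enum):
    # Hypothetical stand-in for any-llm's LLMProvider enum.
    WATSONX = "watsonx"
    SAMBANOVA = "sambanova"
    NEBIUS = "nebius"
    OPENROUTER = "openrouter"


def provider_model_map() -> dict[LLMProvider, str]:
    # Base text-model mapping; values are illustrative only.
    return {
        LLMProvider.WATSONX: "ibm/granite-3-8b-instruct",
        LLMProvider.NEBIUS: "openai/gpt-oss-20b",
    }


def provider_image_model_map(
    provider_model_map: dict[LLMProvider, str],
) -> dict[LLMProvider, str]:
    # Start from the base map, then override providers whose default
    # model lacks image support with a vision-capable model. Later
    # keys in a dict literal win, so the overrides replace base entries.
    return {
        **provider_model_map,
        LLMProvider.WATSONX: "meta-llama/llama-guard-3-11b-vision",
        LLMProvider.SAMBANOVA: "Llama-4-Maverick-17B-128E-Instruct",
        LLMProvider.OPENROUTER: "mistralai/mistral-small-3.2-24b-instruct:free",
    }


base = provider_model_map()
image_map = provider_image_model_map(base)
print(image_map[LLMProvider.WATSONX])  # overridden for image tests
print(image_map[LLMProvider.NEBIUS])   # inherited unchanged from the base map
```

The `**provider_model_map` spread keeps the two fixtures from drifting apart: adding a provider to the base map automatically propagates to the image map unless an override is added.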
