Conversation

@henchaves
Member

Description

Sometimes the llama3.1 model fails to generate properly formatted JSON responses. After some manual testing, we found that qwen2.5 performs better at this task. This PR therefore updates the Ollama docs to showcase qwen2.5 as the example model instead of llama3.1.
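For context, the docs change matters because downstream code typically parses the model's reply with a strict JSON parser. A minimal sketch of the failure mode (a hypothetical helper, not part of this PR's diff): a model that wraps its JSON in prose or markdown fences breaks `json.loads`, while one that emits bare JSON parses cleanly.

```python
import json

def parse_model_json(raw: str):
    """Try to parse a model reply as JSON; return None if malformed.

    Replies wrapped in prose or markdown code fences (the failure
    mode observed with llama3.1 in manual tests) fail the strict
    parse; a fence-stripping retry recovers some of them.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Fallback: strip common ```json ... ``` fences and retry.
        cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
        try:
            return json.loads(cleaned)
        except json.JSONDecodeError:
            return None

# A well-formed reply parses; a prose-only reply does not.
ok = parse_model_json('{"answer": 42}')          # -> {"answer": 42}
bad = parse_model_json('Sure! Here you go.')     # -> None
```

Note that the Ollama chat API also accepts a `format="json"` option to constrain output, but even then the model's inherent JSON reliability matters, which is the motivation for preferring qwen2.5 here.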

Related Issue

Type of Change

  • 📚 Examples / docs / tutorials / dependencies update
  • 🔧 Bug fix (non-breaking change which fixes an issue)
  • 🥂 Improvement (non-breaking change which improves an existing feature)
  • 🚀 New feature (non-breaking change which adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to change)
  • 🔐 Security fix

@henchaves henchaves self-assigned this Mar 10, 2025
linear bot commented Mar 10, 2025

@henchaves henchaves requested a review from kevinmessiaen March 10, 2025 11:11

@henchaves henchaves merged commit 1ba030b into main Mar 19, 2025
19 checks passed
@henchaves henchaves deleted the feature/gsk-4192-replace-llama-with-qwen-on-ollama-docs branch March 19, 2025 18:11