Add reasoning_effort parameter support for reasoning models (#3651) #3686
+9
−8
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
**## DescriptionThis PR adds support for the reasoning_effort parameter for reasoning models in LLMBase. The change makes it possible for advanced models (including future GPT-5 and similar) to receive and handle a reasoning_effort config parameter properly.
Updated the constructor to inject a default value for reasoning_effort if not present.
Modified _get_supported_params to include reasoning_effort for reasoning models only.
No new dependencies introduced.
Fixes #3651
**## Typef change
New feature (non-breaking change which adds functionality)
**## How Has This Beenested?
Tested using Google Colab by modifying mem0/llms/base.py, committing and pushing the update.
Confirmed that reasoning_effort support works as intended and code is recognized by git.
Test Script (please provide if your project requires scripts)
**## Checklist
My code follows the style guidelines of this project
I have performed a self-review of my own code
I have commented my code, particularly in hard-to-understand areas
I have made corresponding changes to the documentation (N/A - implementation only)
My changes generate no new warnings
I have added tests that prove my fix is effective or that my feature works (N/A - small feature)
New and existing unit tests pass locally with my changes (basic testing in Colab)
Any dependent changes have been merged and published in downstream modules (N/A)
I have checked my code and corrected any misspellings
For Hacktoberfest:
This PR is submitted as part of Hacktoberfest.
**## Maintainer Checklist [ ] closes #3651
Made sure Checks passed
This pull request is submitted for Hacktoberfest 2025.