
Conversation

@agam1092005

Description

Added support for the reasoning_effort parameter in the AzureOpenAIConfig and OpenAIConfig classes, enabling users to test and compare the reasoning effort levels ("low", "medium", "high") supported by OpenAI's reasoning models (o1, o3, gpt-5).

The parameter was recently added to the OpenAI SDK but was not implemented in Mem0's configuration classes, so passing reasoning_effort in a config raised a TypeError.

This change enables users to evaluate performance and latency trade-offs across reasoning models directly within Mem0.
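The model-gating behavior this PR tests can be sketched roughly as follows. This is an illustration only, with hypothetical names (`build_api_params`, `REASONING_MODEL_PREFIXES`), not Mem0's actual implementation:

```python
# Sketch of the gating logic: reasoning_effort is forwarded only for models
# that support it, and silently dropped otherwise. Hypothetical names.
from typing import Optional

REASONING_MODEL_PREFIXES = ("o1", "o3", "gpt-5")

def build_api_params(model: str, reasoning_effort: Optional[str] = None, **extra) -> dict:
    """Build chat-completion kwargs, including reasoning_effort only for reasoning models."""
    params = {"model": model, **extra}
    if reasoning_effort is not None and model.startswith(REASONING_MODEL_PREFIXES):
        params["reasoning_effort"] = reasoning_effort
    return params

# Included for a reasoning model:
assert build_api_params("o1-mini", reasoning_effort="low")["reasoning_effort"] == "low"
# Excluded for a non-reasoning model:
assert "reasoning_effort" not in build_api_params("gpt-4o", reasoning_effort="low")
```

This mirrors points 2 and 3 of the test checklist below: the parameter is present in the API params for o1/o3/gpt-5 models and absent for gpt-4-style models.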

Fixes #3651

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Refactor (does not change functionality, e.g. code style improvements, linting)
  • Documentation update

How Has This Been Tested?

Tested by initializing both config classes with the reasoning_effort parameter and verifying:

  1. Parameter is accepted and stored correctly
  2. Parameter is included in API params for reasoning models (o1, o3, gpt-5)
  3. Parameter is excluded for non-reasoning models (gpt-4, etc.)
  4. All existing unit tests pass
  • Unit Test
  • Test Script (please provide)
```python
# Test script used
from mem0.configs.llms.azure import AzureOpenAIConfig
from mem0.configs.llms.openai import OpenAIConfig

# Test AzureOpenAIConfig
azure_config = AzureOpenAIConfig(
    model="o1-preview",
    reasoning_effort="medium"
)
assert azure_config.reasoning_effort == "medium"

# Test OpenAIConfig
openai_config = OpenAIConfig(
    model="o1-mini",
    reasoning_effort="low"
)
assert openai_config.reasoning_effort == "low"
```



@CLAassistant

CLAassistant commented Nov 10, 2025

CLA assistant check
All committers have signed the CLA.

@parthshr370
Contributor

Hey @agam1092005, thanks for sending this PR! Could you update the title to [enhancement]? Then I will start the discussion and the review. Could you also point me to this new change in the OpenAI SDK, maybe a release note?

@agam1092005 agam1092005 changed the title Add reasoning_effort parameter to Azure/OpenAI configs [enhancement] Add reasoning_effort parameter support for Azure/OpenAI configs Nov 14, 2025
@agam1092005
Author

Hi @parthshr370,

Thanks for reviewing! I've updated the PR title to include [enhancement].

Regarding the reasoning_effort parameter in OpenAI SDK, here are the references:

OpenAI Documentation & Release Notes:

  1. Official API Reference: Chat Completions - reasoning_effort parameter

    • Supported values: "low", "medium", "high"
    • Available for reasoning models: o1, o3, gpt-5 series
  2. OpenAI Python SDK (v1.54.0+): The parameter was added to support reasoning models

  3. Azure OpenAI Documentation: Azure OpenAI reasoning models

    • Confirms Azure OpenAI also supports this parameter for reasoning models

Context:

The parameter controls the inference-time compute budget for reasoning models. Higher effort levels use more tokens and time but potentially provide better reasoning quality. This is particularly useful for evaluating performance/latency trade-offs.
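As a rough usage sketch of the trade-off described above (a hypothetical comparison harness; the actual API call is commented out, so no credentials are needed), the effort level is passed straight through to the Chat Completions request:

```python
# from openai import OpenAI  # OpenAI Python SDK; uncomment to run live calls

# Build the same request at a given effort level; higher effort levels
# spend more reasoning tokens and time for potentially better answers.
def make_request_kwargs(effort: str) -> dict:
    return {
        "model": "o3-mini",                 # example reasoning model
        "reasoning_effort": effort,         # "low" | "medium" | "high"
        "messages": [
            {"role": "user", "content": "Prove that sqrt(2) is irrational."}
        ],
    }

kwargs = make_request_kwargs("high")
assert kwargs["reasoning_effort"] == "high"
# client = OpenAI()
# resp = client.chat.completions.create(**kwargs)
```

Running the same prompt at "low", "medium", and "high" and measuring latency and token usage is exactly the kind of evaluation this config change enables from within Mem0.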

Let me know if you need any additional information or changes! 🙂



Development

Successfully merging this pull request may close these issues.

Add support for reasoning_effort parameter for reasoning models in AzureOpenAIConfig
