
Conversation

@vedant381 (Contributor) commented Oct 14, 2025

Description

This change addresses an issue where using huggingface_base_url would incorrectly restrict the embedding model to a hardcoded "tei" value and prevent the use of custom model parameters.

With this fix, users can now specify any model name and pass additional parameters (e.g., truncate=True) when connecting to a custom Hugging Face Text Embeddings Inference (TEI) endpoint or any other OpenAI-compatible API. This provides greater flexibility for users hosting their own embedding models.
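
As a hedged illustration of the usage described above (not code taken from the PR), the configuration might look like the sketch below. It assumes the standard mem0 config layout with an "embedder" section; the huggingface_base_url and model_kwargs field names follow the description, while the endpoint URL and model name are placeholders.

```python
# Hedged sketch, not from the PR: the endpoint URL and embedding model below are
# placeholders, and the config layout assumes mem0's usual Memory.from_config shape.
from mem0 import Memory

config = {
    "embedder": {
        "provider": "huggingface",
        "config": {
            # Custom TEI / OpenAI-compatible endpoint hosting the embedding model
            "huggingface_base_url": "http://localhost:8080/v1",
            # Previously forced to "tei"; any model name can now be supplied
            "model": "BAAI/bge-small-en-v1.5",
            # Extra parameters forwarded with each embedding request
            "model_kwargs": {"truncate": True},
        },
    }
}

memory = Memory.from_config(config)
```

Embeddings generated through this Memory instance would then be requested from the custom endpoint with the chosen model and the extra parameters, rather than a hardcoded "tei" model with no options.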

Fixes #3553

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)

This pull request enhances the flexibility and test coverage of the HuggingFace embedding integration. The main improvements are support for custom models and model parameters when using a HuggingFace endpoint, and the addition of a dedicated test for this functionality.

HuggingFace embedding configuration and usage

  • Ensured that when a custom HuggingFace base URL is provided, the model defaults to "tei" if not specified, and otherwise uses the provided model name. (mem0/embeddings/huggingface.py)
  • Updated the embedding creation logic to use the configured model name and pass through any additional model keyword arguments from model_kwargs, allowing for greater customization; see the sketch after this list. (mem0/embeddings/huggingface.py)
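
To make the two bullets above concrete, here is a rough, hypothetical sketch of the updated code path. The class and config attribute names mirror the description, but the wiring (an OpenAI-compatible client pointed at the TEI endpoint, extras forwarded via extra_body) is an assumption, not the literal contents of mem0/embeddings/huggingface.py, and only the custom-endpoint path is shown.

```python
# Hypothetical sketch of the behavior described above; not the actual mem0 source.
# Assumes a config object with huggingface_base_url, model, and model_kwargs attributes.
from openai import OpenAI


class HuggingFaceEmbedding:
    def __init__(self, config):
        self.config = config
        if config.huggingface_base_url:
            # Default to "tei" only when no model was explicitly configured.
            self.config.model = config.model or "tei"
            # TEI endpoints typically ignore the API key, so a placeholder is used here.
            self.client = OpenAI(base_url=config.huggingface_base_url, api_key="unused")

    def embed(self, text):
        # Use the configured model name instead of a hardcoded "tei", and forward any
        # extra options (e.g. truncate=True). The OpenAI client lets arbitrary extra
        # JSON fields be added to the request body via extra_body.
        response = self.client.embeddings.create(
            input=text,
            model=self.config.model,
            extra_body=self.config.model_kwargs or None,
        )
        return response.data[0].embedding
```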

Testing improvements

  • Added a new test, test_embed_with_huggingface_base_url, to verify that custom HuggingFace endpoints, models, and model parameters are correctly handled and passed to the embedding client; a hedged sketch of such a test appears under "How Has This Been Tested?" below. (tests/embeddings/test_huggingface_embeddings.py)

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce them. Please also list any relevant details for your test configuration.

Please delete options that are not relevant.

  • Unit Test
  • Test Script (please provide)
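
A hedged sketch of a unit test along these lines is shown below. The import paths, config fields, and patch target are assumptions inferred from the summary above; the merged test lives in tests/embeddings/test_huggingface_embeddings.py and may differ.

```python
# Illustrative mock-based test, not the merged test itself; import paths and the
# patch target for the OpenAI-compatible client are assumptions.
from unittest.mock import MagicMock, patch

from mem0.configs.embeddings.base import BaseEmbedderConfig
from mem0.embeddings.huggingface import HuggingFaceEmbedding


@patch("mem0.embeddings.huggingface.OpenAI")
def test_embed_with_huggingface_base_url(mock_openai):
    mock_client = MagicMock()
    mock_openai.return_value = mock_client
    mock_client.embeddings.create.return_value.data = [MagicMock(embedding=[0.1, 0.2, 0.3])]

    config = BaseEmbedderConfig(
        huggingface_base_url="http://localhost:8080/v1",
        model="BAAI/bge-small-en-v1.5",
        model_kwargs={"truncate": True},
    )
    embedder = HuggingFaceEmbedding(config)

    assert embedder.embed("Hello world") == [0.1, 0.2, 0.3]

    # The custom model name (rather than a hardcoded "tei") should reach the client call.
    _, kwargs = mock_client.embeddings.create.call_args
    assert kwargs["model"] == "BAAI/bge-small-en-v1.5"
```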

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules
  • I have checked my code and corrected any misspellings

Maintainer Checklist

  • closes #xxxx (Replace xxxx with the GitHub issue number)
  • Made sure Checks passed

This pull request improves the flexibility and configurability of the HuggingFace embedding integration by allowing custom model names and model keyword arguments to be passed through when using a custom HuggingFace endpoint. It also adds a new test to ensure this functionality works as expected.

Enhancements to HuggingFace embedding configuration:

  • The HuggingFaceEmbedding class now sets the model name to "tei" by default only if a custom HuggingFace base URL is provided and no model is explicitly specified, allowing users to override the model name as needed.
  • The embed method now uses the model name and additional keyword arguments from the config when calling the HuggingFace endpoint, making it possible to customize embedding requests (e.g., to set truncation or other model-specific options).

Testing improvements:

  • Added a new test, test_embed_with_huggingface_base_url, to verify that the embedding logic correctly passes the custom model name and keyword arguments to the HuggingFace endpoint when a base URL is provided.

@CLAassistant commented Oct 14, 2025

CLA assistant check: All committers have signed the CLA.

@parshvadaftari (Contributor) left a comment

Looks good to me!

@parshvadaftari merged commit ea22e8d into mem0ai:main Oct 14, 2025
6 of 7 checks passed
@parshvadaftari (Contributor) commented

Thank you @vedant381 for contributing to mem0!

ron-42 pushed a commit to ron-42/mem0 that referenced this pull request Oct 19, 2025
…gpt-4o-mini

- Fixed hardcoded model issue in openmemory categorization
- Categorization now reads LLM config from database
- Supports custom base_url for OpenAI-compatible providers (SiliconFlow, etc.)
- Added SiliconFlowConfig class for proper configuration
- Updated factory to use SiliconFlowConfig
- Added SiliconFlow documentation with usage examples

Fixes issue where SiliconFlow users got 'Model does not exist' error
because categorization was hardcoded to use gpt-4o-mini instead of
the configured model like deepseek-ai/DeepSeek-R1.

Resolves: mem0ai#3574
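
For context on the commit above, a hedged example of pointing mem0 at an OpenAI-compatible provider such as SiliconFlow might look like the sketch below. The openai_base_url field follows mem0's documented LLM config, while the endpoint URL, model name, and environment variable are illustrative assumptions, not code from the commit's SiliconFlowConfig.

```python
# Hedged illustration only: configuring an OpenAI-compatible provider with a
# custom base URL so the configured model is used instead of a hardcoded gpt-4o-mini.
import os

from mem0 import Memory

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "deepseek-ai/DeepSeek-R1",
            "openai_base_url": "https://api.siliconflow.cn/v1",
            "api_key": os.environ.get("SILICONFLOW_API_KEY"),
        },
    }
}

memory = Memory.from_config(config)
```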


Development

Successfully merging this pull request may close these issues:

  • Hugging Face embeddings cannot set model
