Conversation

@seaofawareness (Contributor)

Enables use of Valkey to cache LLM responses in LangGraph

@3coins (Collaborator) commented Oct 22, 2025

@seaofawareness
I have merged #697; can you rebase and include only the code relevant to the Valkey cache here? The workflow should kick off once the conflicts are resolved.

@michaelnchin michaelnchin self-requested a review October 24, 2025 04:19
Apply pytest.importorskip() pattern to cache tests:
- test_valkey_cache_unit.py (66 tests)
- test_valkey_cache_integration.py (22 tests)

Fixes NameError with @patch decorator on Python 3.10.
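The `pytest.importorskip()` pattern mentioned above can be sketched as follows. `pytest.importorskip()` skips the whole test module at collection time if the optional dependency is missing, so `@patch` decorators are never evaluated against names that failed to import. In this sketch, the stdlib module `json` stands in for the optional `valkey` dependency purely so it runs anywhere; the real tests would pass `"valkey"`:

```python
import pytest

# Skip the entire module at collection time if the optional dependency is
# missing. Because @patch decorators run at function-definition time, a plain
# `import valkey` guarded by try/except can leave names undefined and trigger
# a NameError on Python 3.10; importorskip avoids that by skipping first.
# NOTE: "json" stands in for "valkey" here so this sketch is runnable.
valkey = pytest.importorskip("json")

def test_uses_optional_dependency():
    # Tests below the importorskip call only run when the import succeeded.
    assert valkey.loads(valkey.dumps({"cached": True})) == {"cached": True}
```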
@michaelnchin (Collaborator) left a comment

@seaofawareness looks great overall, a couple of minor comments.

Changes:
- Removed JsonPlusSerializer import (obsolete in 3.0)
- Updated test assertions to check for serde presence instead of type
- Used default cache instance to get serializer for custom serde test
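The assertion change described above can be sketched as below. `DummyCache` and `DummySerde` are hypothetical stand-ins for the real cache and serializer classes, which are not shown in this thread; the point is only the shape of the assertion:

```python
import json

class DummySerde:
    """Hypothetical stand-in for the cache's serializer."""
    def dumps(self, obj):
        return json.dumps(obj).encode()

class DummyCache:
    """Hypothetical stand-in for the Valkey cache class."""
    def __init__(self, serde=None):
        # Fall back to a default serializer when none is supplied, mirroring
        # "used default cache instance to get serializer" above.
        self.serde = serde or DummySerde()

cache = DummyCache()
# Before: asserting a concrete type (isinstance(cache.serde, JsonPlusSerializer))
# tied the tests to a class that is obsolete in 3.0.
# After: only assert that a serde is present.
assert cache.serde is not None
```

Checking for presence rather than a concrete type keeps the tests valid across serializer implementations.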

docs(samples): update valkey_cache.ipynb to use Claude 3.7 Sonnet

Removed references to the Haiku model and updated the notebook to reflect the model actually in use (Claude 3.7 Sonnet).
@michaelnchin (Collaborator) left a comment

LGTM - thanks @seaofawareness !

@michaelnchin michaelnchin merged commit 9dce3c8 into langchain-ai:main Oct 30, 2025
12 checks passed
@seaofawareness seaofawareness deleted the cache branch October 30, 2025 19:19