feat: Integrate Valkey with LangGraph LLM Cache #717
Conversation
@seaofawareness:
Apply pytest.importorskip() pattern to cache tests:

- test_valkey_cache_unit.py (66 tests)
- test_valkey_cache_integration.py (22 tests)

Fixes NameError with @patch decorator on Python 3.10.
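For context, here is a minimal sketch of the importorskip pattern this commit applies. The module path comes from the file reviewed below; the `ValkeyCache` class name, patch target, and test body are illustrative assumptions, not the PR's actual test code.

```python
import pytest

# Placed at the very top of the test module: if the optional valkey
# dependency is missing, pytest skips the whole file at collection time,
# before any @patch decorator below is evaluated. This avoids the
# NameError seen on Python 3.10.
pytest.importorskip("valkey")

from unittest.mock import MagicMock, patch

# Module path as reviewed in this PR; the class name is an assumption.
from langgraph_checkpoint_aws.cache.valkey.cache import ValkeyCache


@patch("langgraph_checkpoint_aws.cache.valkey.cache.Valkey")  # hypothetical target
def test_cache_constructs_with_client(mock_valkey: MagicMock) -> None:
    cache = ValkeyCache(client=mock_valkey.return_value)  # hypothetical constructor
    assert cache is not None
```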
michaelnchin left a comment:
@seaofawareness looks great overall; a couple of minor comments.
Review thread on libs/langgraph-checkpoint-aws/langgraph_checkpoint_aws/cache/valkey/cache.py (outdated; resolved)
Changes:

- Removed JsonPlusSerializer import (obsolete in 3.0)
- Updated test assertions to check for serde presence instead of type
- Used default cache instance to get serializer for custom serde test

docs(samples): update valkey_cache.ipynb to use Claude 3.7 Sonnet

Removed references to the Haiku model and updated the notebook to reflect the actual model being used (Claude 3.7 Sonnet).
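As a hedged illustration of the assertion change above: the constructor argument and the `serde` attribute name are assumptions based on the commit message, not confirmed API.

```python
from unittest.mock import MagicMock

from langgraph_checkpoint_aws.cache.valkey.cache import ValkeyCache


def test_serde_is_present() -> None:
    cache = ValkeyCache(client=MagicMock())  # hypothetical constructor
    # The old assertion relied on JsonPlusSerializer, which is no longer
    # importable in 3.0:
    #     assert isinstance(cache.serde, JsonPlusSerializer)
    # The new assertion only checks that a serializer is attached:
    assert cache.serde is not None
```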
michaelnchin left a comment:
LGTM - thanks @seaofawareness!
Enables the use of Valkey to cache LLM responses in LangGraph.
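To make the feature concrete, below is a hedged usage sketch. It assumes the PR's `ValkeyCache` implements LangGraph's `BaseCache` interface at the module path reviewed above, and the constructor signature (`url=...`) is an assumption for illustration; `CachePolicy`, `add_node(..., cache_policy=...)`, and `compile(cache=...)` are standard LangGraph APIs.

```python
from typing_extensions import TypedDict

from langgraph.graph import END, START, StateGraph
from langgraph.types import CachePolicy

# Import path taken from the file reviewed in this PR; the class name and
# constructor are assumptions.
from langgraph_checkpoint_aws.cache.valkey.cache import ValkeyCache


class State(TypedDict):
    question: str
    answer: str


def call_model(state: State) -> dict:
    # Stand-in for a real Bedrock call (e.g. Claude 3.7 Sonnet, as in the
    # updated valkey_cache.ipynb sample).
    return {"answer": f"echo: {state['question']}"}


builder = StateGraph(State)
# Cache this node's output for two minutes, keyed on its input.
builder.add_node("call_model", call_model, cache_policy=CachePolicy(ttl=120))
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)

# Hypothetical constructor; the real class may accept a client or URL.
graph = builder.compile(cache=ValkeyCache(url="valkey://localhost:6379"))

# A second invocation with the same input should be served from Valkey
# rather than re-running the node.
print(graph.invoke({"question": "hi"}))
print(graph.invoke({"question": "hi"}))
```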