
Conversation

@Adebesin-Cell (Member) commented Nov 7, 2025

Closes #371

Description

This PR introduces context caching support for Gemini models, improving performance and reducing API costs when the same context is sent repeatedly.

Key Implementation Details

  • Implemented GeminiContextCacheManager with cache fingerprinting, TTL management, and automatic cache lifecycle handling.
  • Cache fingerprinting ensures cache reuse when context data remains unchanged.
  • Automatic cleanup and refresh mechanisms maintain cache consistency and efficiency.
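The fingerprinting and TTL logic described above could be sketched roughly as follows. This is a minimal illustration, not the actual implementation: apart from the `GeminiContextCacheManager` name, all field and method names here are assumptions.

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch of cache fingerprinting + TTL handling.
interface CacheEntry {
  cacheName: string;   // identifier for the cached context on the Gemini side
  fingerprint: string; // hash of the cached context contents
  expiresAt: number;   // epoch millis when the TTL elapses
}

class GeminiContextCacheManager {
  private entries = new Map<string, CacheEntry>();

  constructor(private ttlSeconds: number = 600) {}

  // Stable fingerprint so an unchanged context maps to the same cache entry.
  fingerprint(contents: unknown): string {
    return createHash("sha256").update(JSON.stringify(contents)).digest("hex");
  }

  // Return a live entry for this context, or undefined if absent/expired.
  lookup(contents: unknown): CacheEntry | undefined {
    const fp = this.fingerprint(contents);
    const entry = this.entries.get(fp);
    if (entry && entry.expiresAt > Date.now()) return entry;
    this.entries.delete(fp); // automatic cleanup of stale entries
    return undefined;
  }

  // Record a freshly created cache under this context's fingerprint.
  store(contents: unknown, cacheName: string): CacheEntry {
    const fp = this.fingerprint(contents);
    const entry: CacheEntry = {
      cacheName,
      fingerprint: fp,
      expiresAt: Date.now() + this.ttlSeconds * 1000,
    };
    this.entries.set(fp, entry);
    return entry;
  }
}
```

With this shape, a request path first calls `lookup`; only on a miss does it create a new remote cache and `store` the result, which is how unchanged contexts avoid redundant API calls.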

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change that adds functionality)
  • Performance improvement
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Code refactoring (no functional changes)
  • Tests
  • Other (please describe):

How Has This Been Tested?

  • Manual testing of context caching with Gemini models
  • Verification of cache lifecycle handling and TTL expiration
  • Performance validation on repeated context interactions

Checklist

  • My code follows the code style of this project
  • I have updated the documentation accordingly
  • I have added tests to cover my changes
  • All new and existing tests passed
  • My changes generate no new warnings
  • I have checked for potential breaking changes and addressed them

Additional Notes

  • The GeminiContextCacheManager reduces redundant API calls by reusing existing context data.
  • Cache lifecycle and TTL configurations are customizable for different workloads.
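The customizable configuration mentioned above might look something like this sketch; the field names are illustrative assumptions, not the actual ADK API surface.

```typescript
// Hypothetical cache configuration shape (illustrative field names).
interface ContextCacheConfig {
  ttlSeconds: number;      // how long a cached context stays valid
  minTokens?: number;      // skip caching contexts below this size
  refreshOnHit?: boolean;  // extend the TTL whenever the cache is reused
}

// Example: a long-running agent that reuses a large system context.
const longSessionCache: ContextCacheConfig = {
  ttlSeconds: 3600,  // keep the context alive for an hour
  minTokens: 2048,   // small contexts are cheaper to resend than to cache
  refreshOnHit: true,
};
```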

@changeset-bot (bot) commented Nov 7, 2025

⚠️ No Changeset found

Latest commit: 111206b

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

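If a version bump is wanted here, a changeset is a small markdown file under `.changeset/`; a minimal sketch might look like the following (the package name and bump level are assumptions, not taken from this PR):

```markdown
---
"@iqai/adk": minor
---

Add context caching support for Gemini models
```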

@vercel (bot) commented Nov 7, 2025

The latest updates on your projects.

| Project | Deployment | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| adk-typescript-docs | Ready | Preview | Dec 3, 2025 8:01pm |
| adk-web | Ready | Preview | Dec 3, 2025 8:01pm |

@MarvelNwachukwu (Contributor) left a comment:

NB: PR isn't merging to main

@Adebesin-Cell (Member, Author) replied:

> NB: PR isn't merging to main

It's synced with tracing, thanks for the note 🙏. Good catch.

```diff
 const tools = this.convertToAiSdkTools(request);

-const requestParams = {
+const requestParams: any = {
```
Member commented:

👀

Member Author replied:

Good catch!

@Royal-lobster (Member) commented:

> @Adebesin-Cell can we also make this work with other models? openai / models from ai sdk?
>
> Hmm, context caching is only on gemini models, but I'll see how we can handle the cache logic for ai adk as well for gemini models

Any progress on this? So far I can only see this implemented for just Gemini models... also, for the AI SDK, does it accept a cache config for other models?

@Adebesin-Cell (Member, Author) replied:

> @Adebesin-Cell can we also make this work with other models? openai / models from ai sdk?
>
> Hmm, context caching is only on gemini models, but I'll see how we can handle the cache logic for ai adk as well for gemini models
>
> Any progress on this? So far I can only see this implemented for just Gemini models... also, for the AI SDK, does it accept a cache config for other models?

No, I think context caching is limited to Gemini models. I added support in the AI SDK so that it accepts a cache configuration name: https://ai-sdk.dev/providers/ai-sdk-providers/google-generative-ai#explicit-caching
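Following the explicit-caching pattern in the linked AI SDK docs, usage might look roughly like the sketch below. The model id and cache name are placeholders, and the exact setting shape may differ across AI SDK versions, so treat this as an assumption-laden illustration rather than the PR's actual code.

```typescript
import { google } from "@ai-sdk/google";
import { generateText } from "ai";

// `cachedContent` is the name returned when the context cache was created;
// the caller supplies it, so no real cache name is hard-coded here.
async function askWithCachedContext(cachedContent: string, prompt: string) {
  const { text } = await generateText({
    // Pass the cache name as a model setting so the request reuses the
    // stored context instead of resending it with every call.
    model: google("gemini-1.5-flash-001", { cachedContent }),
    prompt,
  });
  return text;
}
```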


7 participants