feat: local cache with LRU eviction for client-side caching #3748
ofekshenawa wants to merge 5 commits into feature/client-side-caching
Conversation
🛡️ Jit Security Scan Results: ✅ No security findings were detected in this PR.
Security scan by Jit
bugbot run verbose=true
Bugbot request id: serverGenReqId_bd5f3ded-f844-41d8-9e56-69f1e29dc1c3
…onflates nil and empty byte slices
Force-pushed from d1be22f to 462843a (Compare)
Force-pushed from f380471 to d1be22f (Compare)
ndyakov left a comment
I would prefer to see this in an internal package and for us to expose only an interface to the client. Other than that, great work @ofekshenawa!
CacheKey  string
RedisKeys []string
Value     []byte
State     CacheEntryState
Do we need those to be public?
In general, do we need the whole CacheEntry to be public? What if we provide an interface to the client and put this as a default implementation in internal? Later on we could even support other cache implementations, if a user would like to provide their own.
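For illustration, a minimal sketch of what that split could look like (the names below are hypothetical, not taken from this PR):

```go
// Hypothetical sketch only: illustrative names, not the PR's actual API.
package cache

import "time"

// Entry is the read-only view a caller would get back from the cache.
type Entry interface {
	CacheKey() string
	RedisKeys() []string
	Value() []byte
}

// LocalCache is the behaviour the client package could depend on, keeping the
// concrete LRU implementation (and its entry struct) inside internal/cache
// and leaving room for alternative implementations later.
type LocalCache interface {
	Get(cacheKey string) (Entry, bool)
	Set(cacheKey string, redisKeys []string, value []byte, ttl time.Duration)
	DeleteByRedisKey(redisKey string)
	Flush()
}
```

The client would only see the interface; the concrete LRU entry type and its bookkeeping fields would stay unexported in internal/cache.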
c.mu.Lock()
// Touch under write lock keeps LRU metadata consistent with concurrent deletes/updates.
if current, exists := c.entries[cacheKey]; exists && current == entry && current.State == CacheEntryValid {
	c.touchEntryLocked(current)
}
c.mu.Unlock()
Can this counter be atomic? It may be approximate (if the entry changes concurrently), but it would be significantly better not to acquire the mutex on the read path.
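One possible shape for that, sketched with hypothetical names rather than the PR's actual types: record an approximate last-access tick atomically on the read path and only consult it under the write lock during eviction.

```go
// Hypothetical sketch: approximate LRU recency without taking the mutex on reads.
package cache

import (
	"sync"
	"sync/atomic"
)

// entry keeps its recency marker as an atomic so readers never need the lock.
type entry struct {
	value      []byte
	lastAccess atomic.Int64 // logical clock tick of the most recent Get
}

type cacheSketch struct {
	mu      sync.RWMutex
	entries map[string]*entry
	clock   atomic.Int64 // shared logical clock, incremented per access
}

// Get touches the entry without acquiring the write lock; the recency value
// may be slightly stale under races, which is acceptable for approximate LRU.
func (c *cacheSketch) Get(key string) ([]byte, bool) {
	c.mu.RLock()
	e, ok := c.entries[key]
	c.mu.RUnlock()
	if !ok {
		return nil, false
	}
	e.lastAccess.Store(c.clock.Add(1))
	return e.value, true
}

// evictOneLocked removes the entry with the smallest lastAccess tick; it is
// called with mu held for writing, so the map cannot change underneath it.
func (c *cacheSketch) evictOneLocked() {
	var victimKey string
	victimTick := int64(-1)
	for k, e := range c.entries {
		if t := e.lastAccess.Load(); victimTick < 0 || t < victimTick {
			victimKey, victimTick = k, t
		}
	}
	if victimTick >= 0 {
		delete(c.entries, victimKey)
	}
}
```

The trade-off is that recency becomes approximate and eviction has to scan (or periodically sort) entries instead of popping the head of a strict LRU list.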
const defaultCacheEntryOverhead int64 = 96

func defaultCacheSizer(cacheKey string, redisKeys []string, value []byte) int64 {
	size := defaultCacheEntryOverhead + int64(len(cacheKey)+len(value))
	for _, key := range redisKeys {
		size += int64(len(key)) + 16
	}
	return size
}
The offsets were an attempt to account for Go's runtime memory overhead beyond the raw byte lengths of the key strings. I was trying to make MemoryUsage() a closer estimate of actual heap usage rather than just the size of the user payload. I changed it in the last commit because I realized it could be confusing.
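For context on where such constants tend to come from: on 64-bit Go a string header is 16 bytes (pointer plus length) and a slice header is 24 bytes, which is easy to confirm. The snippet below is only an illustration of that overhead, not code from this PR.

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	var s string
	var b []byte
	// On 64-bit platforms a string header is 16 bytes (data pointer + length)
	// and a slice header is 24 bytes (pointer + length + capacity); these are
	// the kinds of fixed overheads a sizer estimate has to add on top of the
	// raw byte lengths of the cached payload.
	fmt.Println(unsafe.Sizeof(s)) // 16
	fmt.Println(unsafe.Sizeof(b)) // 24
}
```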
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Reviewed by Cursor Bugbot for commit 6e56d83.

Adds a thread-safe, in-memory local cache with LRU eviction as the foundation for client-side caching.
Note
Medium Risk
Introduces new concurrent cache/eviction logic (LRU + memory accounting + waiters) that could cause subtle races, deadlocks, or unexpected eviction behavior if integrated incorrectly.
Overview

- Introduces a new `internal/cache` package implementing a thread-safe local cache with strict LRU eviction and optional entry-count/memory limits via a configurable `Sizer`.
- Adds an in-flight coordination API (`Reserve`/`Fulfill`/`Cancel`) so concurrent callers can wait on a placeholder entry, including stale placeholder takeover after `StaleTimeout`, plus invalidation by Redis key (`DeleteByRedisKey`) and full `Flush` support.
- Includes a comprehensive test suite covering basic operations, LRU/memory eviction, concurrent access behavior, context-cancelled waits, and stale-takeover semantics.
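As a rough illustration of how the in-flight coordination is meant to be used on the read path, here is a sketch; the method signatures are assumptions made for the example, not the package's actual API.

```go
package example

import "context"

// reserver is a stand-in for the cache's in-flight coordination surface;
// the real method signatures in internal/cache may differ.
type reserver interface {
	Reserve(ctx context.Context, cacheKey string) (value []byte, owned bool, err error)
	Fulfill(cacheKey string, redisKeys []string, value []byte)
	Cancel(cacheKey string)
}

// getOrFetch shows the intended shape of the read path: exactly one caller
// owns the placeholder and performs the Redis fetch, everyone else waits on it.
func getOrFetch(ctx context.Context, c reserver, key string,
	fetch func(context.Context, string) ([]byte, error)) ([]byte, error) {
	val, owned, err := c.Reserve(ctx, key)
	if err != nil {
		return nil, err // e.g. context cancelled while waiting on a placeholder
	}
	if !owned {
		return val, nil // another goroutine fulfilled the entry for us
	}
	fetched, err := fetch(ctx, key)
	if err != nil {
		c.Cancel(key) // release the placeholder so the next caller can take over
		return nil, err
	}
	c.Fulfill(key, []string{key}, fetched)
	return fetched, nil
}
```

The point of the placeholder is that exactly one caller performs the Redis fetch while the others wait, and `Cancel` (or the `StaleTimeout` takeover) keeps a failed or stuck owner from blocking waiters indefinitely.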
Reviewed by Cursor Bugbot for commit 3aa5502. Bugbot is set up for automated code reviews on this repo.