
feat: local cache with LRU eviction for client-side caching#3748

Open
ofekshenawa wants to merge 5 commits into feature/client-side-caching from add-local-cache

Conversation

@ofekshenawa
Collaborator

@ofekshenawa ofekshenawa commented Mar 23, 2026

Adds a thread-safe, in-memory local cache with LRU eviction as the foundation for client-side caching.


Note

Medium Risk
Introduces new concurrent cache/eviction logic (LRU + memory accounting + waiters) that could cause subtle races, deadlocks, or unexpected eviction behavior if integrated incorrectly.

Overview
Introduces a new internal/cache package implementing a thread-safe local cache with strict LRU eviction and optional entry-count/memory limits via a configurable Sizer.

Adds an in-flight coordination API (Reserve/Fulfill/Cancel) so concurrent callers can wait on a placeholder entry, including stale placeholder takeover after StaleTimeout, plus invalidation by Redis key (DeleteByRedisKey) and full Flush support.

Includes a comprehensive test suite covering basic operations, LRU/memory eviction, concurrent access behavior, context-cancelled waits, and stale-takeover semantics.

Reviewed by Cursor Bugbot for commit 3aa5502. Bugbot is set up for automated code reviews on this repo. Configure here.

@ofekshenawa ofekshenawa marked this pull request as ready for review March 23, 2026 19:10
@jit-ci

jit-ci Bot commented Mar 23, 2026

🛡️ Jit Security Scan Results


✅ No security findings were detected in this PR


Security scan by Jit

@ofekshenawa
Collaborator Author

bugbot run verbose=true

@cursor

cursor Bot commented Mar 25, 2026

Bugbot request id: serverGenReqId_bd5f3ded-f844-41d8-9e56-69f1e29dc1c3

Comment thread internal/cache/cache.go
Comment thread local_cache.go
@ofekshenawa ofekshenawa force-pushed the feature/client-side-caching branch from d1be22f to 462843a on April 11, 2026 07:19
Member

@ndyakov ndyakov left a comment


I would prefer to see this in an internal package and for us to expose only an interface to the client. Other than that, great work @ofekshenawa!

Comment thread local_cache.go Outdated
Comment on lines +22 to +25
CacheKey string
RedisKeys []string
Value []byte
State CacheEntryState
Member


Do we need those to be public?
In general, do we need the whole CacheEntry to be public? What if we provide an interface to the client and put this as a default implementation in internal? Later on we can even support other cache implementations if a user would like to provide their own.

Comment thread internal/cache/cache.go Outdated
Comment on lines +161 to +166
c.mu.Lock()
// Touch under write lock keeps LRU metadata consistent with concurrent deletes/updates.
if current, exists := c.entries[cacheKey]; exists && current == entry && current.State == CacheEntryValid {
c.touchEntryLocked(current)
}
c.mu.Unlock()
Member


can this counter be atomic? It may be approximate (if the entry changes), but would be significantly better to not try to acquire the mutex on read.

Comment thread local_cache.go Outdated
Comment on lines +110 to +115
const defaultCacheEntryOverhead int64 = 96

func defaultCacheSizer(cacheKey string, redisKeys []string, value []byte) int64 {
size := defaultCacheEntryOverhead + int64(len(cacheKey)+len(value))
for _, key := range redisKeys {
size += int64(len(key)) + 16
Member


why do we have those offsets?

Collaborator Author


The offsets were an attempt to account for Go's runtime memory overhead beyond the raw byte lengths of the key strings. I was trying to make MemoryUsage() a closer estimate of actual heap usage rather than just the size of the user payload. I changed it in the last commit because I realized it could be confusing.


@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Reviewed by Cursor Bugbot for commit 6e56d83.

Comment thread internal/cache/cache.go Outdated
