
Conversation

@zzstoatzz (Collaborator) commented Oct 21, 2025

Integrates fakeredis as a dev dependency to enable fast, in-memory testing without requiring a real Redis server. Use memory:// URLs to enable the memory backend.

Changes

Dev dependency (pyproject.toml):

  • Added fakeredis[lua] as a dev dependency (not a standard runtime dependency)
  • Uses forked version with xpending_range fix until upstream PR #427 is merged
  • Added allow-direct-references = true to hatch metadata config

Backend detection (src/docket/docket.py):

  • Check for memory:// URL scheme to enable memory backend
  • Create class-level _memory_servers dict keyed by URL for isolation
  • Multiple in-memory dockets supported via different URLs (e.g., memory://test1, memory://test2)
  • Provide a helpful error message when fakeredis is not installed (sketched after this list)
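
To make the flow concrete, here is a minimal sketch of the detection described above. The class and method names (MemoryAwareDocket, _connect) are illustrative assumptions rather than the actual code in src/docket/docket.py, and note that a later commit in this thread consolidates the per-URL servers into a single shared FakeServer.

```python
from typing import Any, ClassVar, Dict

import redis.asyncio


class MemoryAwareDocket:
    """Sketch only: shows the memory:// detection, not the real Docket."""

    # One FakeServer per distinct memory:// URL keeps in-memory dockets isolated.
    _memory_servers: ClassVar[Dict[str, Any]] = {}

    def __init__(self, name: str, url: str) -> None:
        self.name = name
        self.url = url

    def _connect(self) -> Any:
        if self.url.startswith("memory://"):
            try:
                import fakeredis
                import fakeredis.aioredis
            except ImportError as error:
                raise ImportError(
                    "memory:// URLs require fakeredis. Install fakeredis[lua]."
                ) from error
            if self.url not in self._memory_servers:
                self._memory_servers[self.url] = fakeredis.FakeServer()
            return fakeredis.aioredis.FakeRedis(server=self._memory_servers[self.url])
        # Any other scheme goes to a real Redis connection.
        return redis.asyncio.Redis.from_url(self.url)
```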

Worker workaround (src/docket/worker.py):

  • Use non-blocking xreadgroup (block=0) with manual sleep for memory backend
  • Detect memory backend via self.docket.url.startswith("memory://")
  • Necessary because fakeredis's async blocking operations don't properly integrate with asyncio's event loop

Tests (tests/test_memory_backend.py):

  • Test memory backend usage with memory:// URLs
  • Test multiple isolated in-memory dockets (a sketch of this kind of test follows the list)
  • Test server reuse for same URL
  • Test missing dependency error handling
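
For flavor, a hedged sketch of the isolation test described above, built only from the public pieces shown elsewhere in this PR (the async context manager and the url attribute) and assuming pytest-asyncio is configured; the real tests in tests/test_memory_backend.py are more thorough.

```python
import pytest

from docket import Docket


@pytest.mark.asyncio
async def test_memory_dockets_are_isolated() -> None:
    # Different memory:// URLs are documented above to get separate in-memory
    # servers, so two dockets opened this way share no data.
    async with Docket(name="docket-1", url="memory://one") as one:
        async with Docket(name="docket-2", url="memory://two") as two:
            assert one is not two
            assert one.url != two.url
```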

CI Integration (.github/workflows/ci.yml):

  • Added full test matrix leg for memory backend (REDIS_VERSION=memory)
  • Runs complete test suite with memory backend for confidence

Fixtures (tests/conftest.py):

  • Support REDIS_VERSION=memory to skip Docker container setup
  • Generate unique memory:// URLs per test worker for isolation (see the fixture sketch below)
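
A sketch of what such a fixture could look like; the fixture name, the PYTEST_XDIST_WORKER lookup, and the fallback branch are assumptions, not a copy of tests/conftest.py.

```python
import os

import pytest


@pytest.fixture
def redis_url() -> str:
    if os.environ.get("REDIS_VERSION") == "memory":
        # No Docker container needed: hand back a memory:// URL that is
        # unique per pytest-xdist worker so parallel runs stay isolated.
        worker = os.environ.get("PYTEST_XDIST_WORKER", "main")
        return f"memory://{worker}"
    # The real fixture starts a Redis container and derives the URL from its
    # mapped port; that part is omitted from this sketch.
    raise NotImplementedError("container-backed path elided in this sketch")
```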

Usage

# Use memory:// URL scheme
async with Docket(name="test-docket", url="memory://test") as docket:
    # Use docket normally - backed by in-memory fakeredis
    ...

# Multiple isolated dockets
async with Docket(name="docket-1", url="memory://one") as docket1:
    async with Docket(name="docket-2", url="memory://two") as docket2:
        # Each has separate in-memory data
        ...

Or from command line:

# No special flags needed - memory:// URL is sufficient
python script.py  # if script uses memory:// URL

Technical Details

Non-blocking workaround

The non-blocking approach (block=0 with manual asyncio.sleep()) is necessary because fakeredis's async blocking operations don't properly yield control to the asyncio event loop, preventing concurrent tasks (like the scheduler loop) from running.
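
A simplified sketch of that workaround (not the exact code in src/docket/worker.py): the attribute names on worker, the stream key handling, and the 100 ms sleep are illustrative assumptions, and the memory branch simply issues the read without a server-side block, which is the effect described above.

```python
import asyncio
from typing import Any


async def read_once(worker: Any, redis: Any, count: int = 10) -> Any:
    streams = {worker.stream_key: ">"}
    if worker.docket.url.startswith("memory://"):
        # Memory backend: never ask the fake server to block. Read whatever is
        # pending, then sleep briefly so concurrent tasks (like the scheduler
        # loop) actually get a turn on the event loop.
        messages = await redis.xreadgroup(
            groupname=worker.consumer_group,
            consumername=worker.name,
            streams=streams,
            count=count,
        )
        if not messages:
            await asyncio.sleep(0.1)
        return messages
    # Real Redis: a short server-side blocking read is cheaper than polling.
    return await redis.xreadgroup(
        groupname=worker.consumer_group,
        consumername=worker.name,
        streams=streams,
        count=count,
        block=100,
    )
```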

fakeredis xpending_range fix

We use a forked version of fakeredis that fixes the xpending_range command to return all 4 required fields (message_id, consumer, time_since_delivered, times_delivered) instead of just 2. This matches real Redis behavior and redis-py's expectations. The fix is in our fork at zzstoatzz/fakeredis-py@fix-xpending-range-fields and has been submitted upstream in PR #427.
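
For reference, this is the shape of a single entry as redis-py's xpending_range() returns it against real Redis (values are illustrative); before the fix, fakeredis populated only the first two keys, which is what broke docket's snapshot().

```python
# One parsed XPENDING entry from redis-py's xpending_range() (illustrative values).
entry = {
    "message_id": b"1729600000000-0",
    "consumer": b"worker-1",
    "time_since_delivered": 1500,  # milliseconds since the last delivery
    "times_delivered": 1,          # how many times the message was delivered
}
```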

URL-based isolation

Different memory:// URLs create separate FakeServer instances, allowing multiple independent in-memory dockets in the same process. The implementation uses a class-level dictionary keyed by the full URL.
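
The mechanism is easy to see with fakeredis directly, independent of docket: clients attached to different FakeServer instances share no data. A small, self-contained demo (not project code):

```python
import asyncio

import fakeredis.aioredis
from fakeredis import FakeServer


async def main() -> None:
    one, two = FakeServer(), FakeServer()
    client_one = fakeredis.aioredis.FakeRedis(server=one)
    client_two = fakeredis.aioredis.FakeRedis(server=two)

    await client_one.set("greeting", "hello")
    assert await client_two.get("greeting") is None  # separate servers, separate data

    await client_one.aclose()
    await client_two.aclose()


asyncio.run(main())
```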

Test Coverage

  • 295 tests passing on Python 3.12 and 3.13
  • 100% coverage on all production code with memory backend
  • All pre-commit hooks passing
  • Full CI matrix leg for memory backend

Closes #162


@github-actions bot commented Oct 21, 2025

📚 Documentation has been built for this PR!

You can download the documentation directly here:
https://github.com/chrisguidry/docket/actions/runs/18747620180/artifacts/4351274730

Integrates fakeredis as an optional backend to enable fast, in-memory testing
without requiring a real Redis server. Set DOCKET_BACKEND=fake to enable.

Key changes:
- Add optional fakeredis[lua] dependency in pyproject.toml
- Detect DOCKET_BACKEND=fake env var in Docket.__aenter__
- Create shared FakeServer instance so connections share data
- Use non-blocking xreadgroup (block=0) with manual sleep for fakeredis
- Add tests for fakeredis backend integration

The non-blocking approach is necessary because fakeredis's async blocking
operations don't properly integrate with asyncio's event loop scheduling.
Using block=0 with manual sleeps ensures the scheduler loop can run.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

@zzstoatzz force-pushed the nate/backend-abstraction branch from 95d2c74 to 0045c12 on October 22, 2025 at 06:39

- Remove unnecessary workspace member from pyproject.toml
- Replace manual env var handling with pytest monkeypatch fixtures
- Revert unnecessary inline style change in worker.py

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

@zzstoatzz changed the title from "Add backend abstraction layer" to "Add optional fakeredis backend for in-memory testing" on Oct 22, 2025

zzstoatzz and others added 2 commits October 22, 2025 01:55
- Update error message to not reference pip, just say "Install fakeredis[lua]"
- Add pytest.importorskip to skip fakeredis tests in CI when dependency not installed
- Tests now pass in CI without fakeredis installed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
- Add --extra fake to CI workflow to install fakeredis during tests
- Enhance test_docket_backend_fake to actually run a worker and execute tasks
- This exercises the fakeredis-specific polling path (non-blocking xreadgroup)
- Mark unreachable code with pragma: no cover
- Achieves 100% test coverage on all code

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

@codecov-commenter commented Oct 22, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00%. Comparing base (93370c7) to head (385f9ed).

Additional details and impacted files


@@            Coverage Diff            @@
##              main      #165   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           33        34    +1     
  Lines         4820      4834   +14     
  Branches       267       265    -2     
=========================================
+ Hits          4820      4834   +14     
| Flag | Coverage Δ |
|------|------------|
| python-3.12 | 100.00% <100.00%> (ø) |
| python-3.13 | 100.00% <100.00%> (ø) |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
|--------------------------|------------|
| src/docket/docket.py | 100.00% <100.00%> (ø) |
| src/docket/worker.py | 100.00% <100.00%> (ø) |
| tests/conftest.py | 100.00% <100.00%> (ø) |
| tests/test_memory_backend.py | 100.00% <100.00%> (ø) |

- Add comprehensive documentation in docs/testing.md for in-memory backend
- Include pytest fixture examples and usage patterns
- Document limitations (single-process, ephemeral data)
- Add local_development.py example demonstrating real use cases:
  - Local development without Redis
  - CI/CD environments
  - Educational/tutorial contexts
  - Desktop applications
- Update README.md with installation and link to docs
- Demonstrates immediate tasks, scheduled tasks, and periodic tasks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

@chrisguidry (Owner) left a comment

Dude what an awesome find!!!

Some notes:

Rather than referring to this as "fake" let's refer to it as "memory".

Also, rather than the new envvar DOCKET_BACKEND, let's use URLs like memory://{docket-name} to differentiate between real and fake redis. That way we can support multiple dockets in-memory based on the different URLs (this will be important when trying to implement several services in one process).

I think we can also run a full leg of the CI matrix for the memory broker so that we're just running the full test suite against it. This will give us a lot more confidence.

I'm torn on whether to just include fakeredis all the time, rather than as an extra? The package itself is pretty light (~170Kb), but I'm not sure how much the lua bits add. At any rate, it's probably simpler to just include it as a standard dependency since it complements unit testing so well.

zzstoatzz and others added 2 commits October 22, 2025 12:26
Per guidry's feedback on the PR:
- Rename 'fake' to 'memory' throughout codebase
- Replace DOCKET_BACKEND environment variable with memory:// URL scheme
- Support multiple independent in-memory dockets via different URLs
- Move fakeredis from optional extra to standard dependency
- Add full CI matrix leg for memory backend testing

Changes:
- Use memory:// URL scheme instead of environment variable for backend selection
- Implement class-level server dictionary to isolate multiple in-memory dockets
- Update all documentation and examples to use memory:// URLs
- Add memory backend to CI test matrix as a full test leg
- Move fakeredis[lua] to standard dependencies (no longer optional)
- Add test coverage for server reuse with same URL

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Fixed xpending_range compatibility issue with fakeredis by using our
fork that returns all 4 required fields (message_id, consumer,
time_since_delivered, times_delivered) instead of just 2 fields.

Changes:
- Updated pyproject.toml to use fakeredis fork with xpending_range fix
- Added allow-direct-references = true to hatch metadata config
- Updated test_memory_backend_reuses_server to properly cover all lines
- Achieved 100% test coverage with memory backend

fakeredis fix details (in zzstoatzz/fakeredis-py@fix-xpending-range-fields):
- Changed pel dict from Tuple[bytes, int] to Tuple[bytes, int, int]
- Initialize times_delivered=1 when messages are first read
- Increment times_delivered when messages are claimed
- Return all 4 fields in pending() method matching real Redis behavior

```python
from docket import Docket

async with Docket(name="my-docket", url="memory://my-docket") as docket:
    ...
```

@chrisguidry (Owner) commented on the snippet above:

I think I might have misled you, sorry! Now that I see this, we probably do just want memory:// as the URL for all of these (not memory://<docket-name>) because the docket name is specified separately, I forgot about that part. It works either way, honestly. I'll leave it to you!

@chrisguidry (Owner) added:

All keys are prefixed with the docket name, which is how we can have many dockets on the same redis. But again, it works either way

Python 3.13 was missing coverage on memory backend paths in conftest.py
fixtures. Added pragma: no cover comments to the memory backend paths
that are only hit when REDIS_VERSION=memory.

Lines fixed:
- redis_server fixture: yield None and return statements
- redis_port fixture: memory backend return path
- redis_url fixture: memory backend URL generation

@zzstoatzz changed the title from "Add optional fakeredis backend for in-memory testing" to "Add memory backend for in-memory testing" on Oct 22, 2025
@zzstoatzz marked this pull request as ready for review on October 22, 2025 at 18:10

Created upstream PR cunla/fakeredis-py#427 to fix
xpending_range. Using fork temporarily until PR is merged and released.

The fix is required because fakeredis was only returning 2 fields per pending
message (message_id, consumer) when Redis spec requires 4 fields (message_id,
consumer, time_since_delivered, times_delivered). This was causing KeyError
in docket's snapshot() method which accesses these fields.

@chrisguidry (Owner) left a comment

love it even more now

The fakeredis fork with xpending_range fix is only needed for testing with the
memory backend, not for production use of docket. Users who install docket don't
need fakeredis as a standard dependency - it's only used when running tests or
development workflows with memory:// URLs.

Once cunla/fakeredis-py#427 is merged upstream, we can
switch back to the regular fakeredis package in dev dependencies.

All memory:// URLs now share a single FakeServer instance, regardless of the
URL path. Dockets are automatically isolated by their name-based Redis key
prefixes (e.g., docket1:stream vs docket2:stream), so there's no need to
create separate FakeServer instances per URL.

This simplifies the implementation and follows guidry's feedback that "all keys
are prefixed with the docket name, which is how we can have many dockets on the
same redis."

Changes:
- Changed _memory_servers dict to single _memory_server class variable
- Simplified memory:// URL usage to just "memory://" everywhere
- Updated tests to use "memory://" instead of unique URLs
- Updated docstring to show "memory://" as the canonical example

Updated to latest commit from fakeredis fork which fixes ResourceWarning
on Python 3.13 by properly clearing _reader and _writer on disconnect.

The issue was that redis-py's AbstractConnection.__del__ checks for _writer
and raises ResourceWarning if connections aren't properly closed. The fix
ensures FakeConnection.disconnect() sets these to None.

Updated fakeredis fork dependency to commit 067a4ad which adds __del__ method
to FakeConnection. This prevents ResourceWarning during garbage collection on
Python 3.13 when connection objects are collected without disconnect() being
called.

The fix ensures _writer, _reader, and _sock are cleared even during GC,
satisfying Python 3.13's stricter resource warning handling.

@zzstoatzz (Collaborator, Author) commented:

Updated fakeredis fork dependency to address Python 3.13 ResourceWarning. Added __del__ method to FakeConnection to ensure connection cleanup during garbage collection. See zzstoatzz/fakeredis-py@067a4ad

@chrisguidry (Owner) left a comment

Still approve! :D

@zzstoatzz merged commit 0ebb6c3 into main on Oct 23, 2025 with 14 checks passed.
@zzstoatzz deleted the nate/backend-abstraction branch on October 23, 2025 at 14:58.