
Conversation

Graeme22 (Collaborator) commented on Aug 5, 2025

Description

Begins adding structured concurrency via anyio, a new dependency which replaces async_timeout.
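
For context on what that swap means in practice, here is a purely illustrative sketch (not code from this PR): async_timeout's timeout context manager maps onto anyio's cancel scopes, which work on both asyncio and Trio.

import anyio

# Illustrative only -- where an operation was previously bounded with
#
#     async with async_timeout.timeout(5):
#         response = await read_response()
#
# anyio expresses the same deadline as a cancel scope:
async def bounded(read_response, seconds: float = 5.0):
    with anyio.fail_after(seconds):  # raises TimeoutError once the deadline passes
        return await read_response()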

Related issue(s)

Fixes #292

API Changes

  • Most objects (Redis clients, connection pools, pubsub, pipelines) are now streamlined for an async-first design: they must be used as async context managers and don't provide any other way to use them.
  • Pipelines auto-execute when the async context manager exits, which keeps the syntax clean and makes type safety straightforward. If you need to abort a pipeline you can still call clear (see the sketch after this list).
  • Connection pools now use two separate pools: one for multiplexing normal commands, the other for dedicated connections. Pools are also blocking by default (in fact, non-blocking pools no longer exist).
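
A minimal sketch of the intended usage, pieced together from the example later in this thread; whether clear() is awaitable isn't stated here, so treat the details as assumptions rather than the final API:

from coredis import Redis

redis = Redis.from_url("redis://localhost:6379", decode_responses=True)

async def example(abort: bool = False) -> None:
    # Clients are used strictly as async context managers now.
    async with redis:
        # Commands queued on the pipeline are sent automatically when the
        # context manager exits.
        async with redis.pipeline(transaction=False) as pipe:
            pipe.incr("counter")
            value = pipe.get("counter")
            if abort:
                pipe.clear()  # assumed sync; drops the queued commands instead of executing them
        if not abort:
            print(await value)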

Pre-merge checklist

  • Add cluster support
  • Add sentinel support
  • Passing tests
  • Passing lints
  • Docs update

Graeme22 (Collaborator, Author) commented on Aug 15, 2025

Leaving this PR here for reference as well: redis/redis-py#3647.

This looks like a really nice implementation. It could potentially let us avoid breaking the API at all, since that implementation doesn't require the async context manager, as far as I can tell.

Graeme22 (Collaborator, Author) commented on Sep 12, 2025

Small update on this! This code is working for me with this PR:

from trio import run
from coredis import Redis

redis = Redis.from_url("redis://localhost:6379", decode_responses=True)

async def main():
    async with redis:
        print(await redis.ping())
        async with redis.pubsub(channels=["mychannel"]) as ps:
            await redis.publish("mychannel", "test message!")
            async for msg in ps:
                print(msg)
                if msg["type"] == "message":
                    break
        async with redis.pipeline(transaction=False) as pipe:
            pipe.incr("tmpkey")
            val = pipe.get("tmpkey")
            pipe.delete(["tmpkey"])
        print(await val)

run(main)

"""
PONG
{'type': 'subscribe', 'pattern': None, 'channel': 'mychannel', 'data': 1}
{'type': 'message', 'pattern': None, 'channel': 'mychannel', 'data': 'test message!'}
1
"""

So base functionality is working, and notice I'm using Trio here!
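
(Since anyio abstracts over the event loop, the same main() should also run on the default asyncio backend; that's an assumption based on anyio's design, not something verified in this comment.)

import anyio

# Assumes the `main` coroutine defined in the snippet above.
anyio.run(main, backend="asyncio")  # backend="trio" is the equivalent of trio.run(main)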

Graeme22 (Collaborator, Author) commented
I've also updated the connection pool logic. It works like this: when a client requests a connection from the pool, it gets an existing connection that has fewer than MAX_REQUESTS_PER_CONNECTION pending requests (currently 32; I'll have to experiment to find the optimal value). Otherwise, we create a new connection if we're below the maximum number of connections. If all connections are maxed out, we either throw an error or wait, depending on whether it's a blocking pool. Also, requests like XREAD that can block mark the whole connection as blocked regardless of whether it's full. So new connections are only created when there are a lot of pending requests. This should provide better performance, but of course it will have to be tested.
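
A rough, hypothetical sketch of that allocation strategy (not the code in this PR; the class and attribute names are made up for illustration, and only MAX_REQUESTS_PER_CONNECTION and the value 32 come from the comment above):

import anyio
from dataclasses import dataclass

MAX_REQUESTS_PER_CONNECTION = 32  # value still being tuned, per the comment above


@dataclass
class _Conn:
    pending: int = 0       # requests currently multiplexed on this connection
    blocked: bool = False  # True while a blocking command (e.g. XREAD) owns it


class PoolSketch:
    def __init__(self, max_connections: int, blocking: bool = True) -> None:
        self.max_connections = max_connections
        self.blocking = blocking
        self._connections: list[_Conn] = []
        self._cond = anyio.Condition()

    async def acquire(self, blocking_command: bool = False) -> _Conn:
        async with self._cond:
            while True:
                # 1. Prefer an existing connection with spare capacity.
                for conn in self._connections:
                    if not conn.blocked and conn.pending < MAX_REQUESTS_PER_CONNECTION:
                        conn.pending += 1
                        # A blocking command takes the connection out of the
                        # multiplexing rotation regardless of how full it is.
                        conn.blocked = blocking_command
                        return conn
                # 2. Otherwise open a new connection if we're under the limit.
                if len(self._connections) < self.max_connections:
                    conn = _Conn(pending=1, blocked=blocking_command)
                    self._connections.append(conn)
                    return conn
                # 3. Everything is saturated: wait (blocking pool) or fail.
                if not self.blocking:
                    raise ConnectionError("connection pool exhausted")
                await self._cond.wait()

    async def release(self, conn: _Conn) -> None:
        async with self._cond:
            conn.pending -= 1
            conn.blocked = False
            self._cond.notify()  # wake one waiter in a blocking pool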

@Graeme22 Graeme22 marked this pull request as ready for review September 13, 2025 16:22

Commits:

  • replace sleeps
  • clean up error handling
  • remove futures from basic client
  • update connections to use anyio
  • lazy processing of responses
  • fix edge cases
  • pubsub now working
  • add max idle time
  • small tweaks
  • revert lazy processing, use context managers everywhere
  • pubsub uses strict async context manager
  • update pubsub tests
  • blocking pool working
  • add pipelining and scripting
  • clean up pubsub a bit
  • handle blocking connections for pubsub/pipelines/blocking commands
  • restructure notifications for blocking pool
  • more reliable transactions (from redis-py)
  • tweak connection allocation logic
  • fix race condition
  • remove monitor, small fixes
  • guard connection after close
  • fix on_connect
  • log connection bug
  • add diagnostics for git
  • fix bug
  • catch error
  • add logger
  • idle connections cleanup gracefully, update more tests
  • update more tests, work on sentinel
  • fix sentinel bugs
  • small optimizations
  • Bump sphinxext-opengraph from 0.10.0 to 0.12.0 (alisaifee#293)
  • Ensure ssl_context from kwargs is respected when using from_url factory method
  • Bump sphinxext-opengraph from 0.12.0 to 0.13.0 (alisaifee#297)
  • Bump sphinx-sitemap from 2.7.2 to 2.8.0 (alisaifee#296)
  • Update changelog for 5.1.0
  • Bump mypy from 1.17.1 to 1.18.1 (alisaifee#299)
  • Switch to bitnamilegacy for redis-sentinel
  • Gracefully handle MODULE LIST error (alisaifee#301)
  • PEP-621 compliant project metadata & build configuration (alisaifee#302): move all project metadata to pyproject.toml; use the uv build system
  • Fix error in linting step in compatibility workflow
  • Add verbose to pypi upload step
  • Fix pure python build step
  • fix pyproject
  • finish merging

hyperlint-ai bot commented on Sep 26, 2025

PR Change Summary

Introduced structured concurrency using anyio, enhancing the async capabilities of the library.

  • Added structured concurrency via anyio, replacing async_timeout.
  • Streamlined Redis clients and connection pools for async-first design.
  • Pipelines now auto-execute upon exiting the async context manager.
  • By default, each connection now multiplexes up to 32 concurrent requests.

Modified Files

  • HISTORY.rst
  • docs/source/handbook/development.rst


alisaifee (Owner) commented
@Graeme22 perhaps this will fix the merge issues: 8f63204

Commits:

  • Fix merge issues
  • handle errors like EOF
  • improve connection pool
  • Revert "improve connection pool" (reverts commit 99766fd)
  • more robust cxn pool
  • fix txn bug, use bitwise mode instead of flags

alisaifee and others added 30 commits November 4, 2025 10:48
  • implementation
  • Ensure connection reuse (and restore previous behavior for the blocking connection pool)
  • Use the same ConnectionQueue (LIFO async queue) used by the cluster connection pool for the basic connection pool; this also collapses multiplexed and blocking connections into the same pool, allowing a single definition of max_connections
  • If a connection is terminated after being established, EndOfStream or ClosedResource errors should mark the connection as unusable so the pool discards it (see the sketch just below)
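
The last point maps onto anyio's stream exceptions. A hypothetical sketch (conn.stream, conn.unusable and the helper name are made up; anyio.EndOfStream and anyio.ClosedResourceError are the actual anyio exception types):

import anyio

async def read_response(conn):
    """Illustrative only: read from a pooled connection's stream and flag the
    connection as dead if the peer went away after the connection was made."""
    try:
        return await conn.stream.receive()
    except (anyio.EndOfStream, anyio.ClosedResourceError):
        # The connection died after being established; mark it unusable so the
        # pool discards it instead of handing it out again.
        conn.unusable = True
        raise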

Development

Successfully merging this pull request may close this issue: Feature proposal: structured concurrency.