
Conversation

@uriyage uriyage commented Jun 11, 2025

Issue: #2022
Joint work with @touitou-dan, @akashkgit, and @nadav-levanoni

Overview

Valkey 8.0 introduced worker threads for I/O operations, but a critical bottleneck remains: all command execution still occurs sequentially on the main thread. This architectural limitation becomes increasingly problematic as systems scale to more cores and memory, and it yields minimal gains for CPU-intensive workloads where processing, not I/O, is the primary constraint.

This PR extends worker thread capabilities to execute read commands in parallel, removing the main thread bottleneck while maintaining data consistency. It uses the same centralized coordination model as Valkey 8.0, where the main thread acts as a scheduler, eliminating the need for complex synchronization mechanisms such as locks or atomic operations.

Key Benefits:

  • 2.3x throughput improvement: 1.3M → 3M requests per second for read commands like GET
  • Up to 17x acceleration for CPU-intensive operations like HGETALL
  • Limited code changes required

Main Architecture Changes

Extended Job Types

Worker threads now handle both I/O operations and read command execution, expanding beyond the I/O-only limitation of version 8.0.

  • Valkey 8.0: Main Thread (All Commands) + Worker Threads (I/O Only)
  • This PR: Main Thread (Write Commands + Coordination) + Worker Threads (Read Commands + I/O)

Unified Response Handling

All worker responses flow through a single multi-producer, single-consumer queue, eliminating the current need to constantly scan client lists.
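The produce/drain contract can be sketched as below. This is a minimal mutex-based illustration of a multi-producer, single-consumer queue; the PR's actual implementation and all names here (`mpscQueue`, `mpscPush`, `mpscDrain`) are assumptions, not the real code.

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct responseNode {
    void *client;               /* client whose offloaded work completed */
    struct responseNode *next;
} responseNode;

typedef struct {
    responseNode *head, *tail;
    pthread_mutex_t lock;
} mpscQueue;

void mpscInit(mpscQueue *q) {
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
}

/* Producer side: any worker thread may call this. */
void mpscPush(mpscQueue *q, void *client) {
    responseNode *n = malloc(sizeof(*n));
    n->client = client;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n;
    else q->head = n;
    q->tail = n;
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: only the main thread drains, taking the whole batch
 * in one pass instead of scanning every client list. */
responseNode *mpscDrain(mpscQueue *q) {
    pthread_mutex_lock(&q->lock);
    responseNode *batch = q->head;
    q->head = q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return batch;
}
```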

Continuation Tasks

When workers encounter operations requiring global data access (like expired key deletion), they create continuation tasks for the main thread to execute, ensuring data consistency.

Slot Access Control Mechanism

To prevent race conditions and ensure correct command execution order, the main thread uses deferred queues that manage task scheduling and synchronization.

Structure and Components
Each deferred queue contains:

  • Blocked clients list: Clients waiting for command execution
  • Jobs list: Background tasks that need execution on specific slots or the entire database
  • Reference counter: Tracks active operations to determine slot availability

The system maintains:

  • Per-slot deferred queues: One queue for each of the 16,384 hash slots to synchronize read/write commands accessing the same slot.
  • Global exclusive queue: A single queue (deferredCmdExclusive) for operations requiring exclusive database access
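The structures above could look roughly like the following sketch. Field names follow the description, but the types and the `slotAvailable` helper are illustrative, not the PR's actual definitions.

```c
#include <stddef.h>

typedef struct listStub list;   /* stand-in for the server's list type */

/* Illustrative layout of one deferred queue. */
typedef struct deferredQueue {
    list *blocked_clients;  /* clients waiting for this slot / the DB */
    list *jobs;             /* background tasks bound to this slot or DB */
    int refcount;           /* active operations; 0 means available */
} deferredQueue;

#define CLUSTER_SLOTS 16384

static deferredQueue slot_queues[CLUSTER_SLOTS]; /* per-slot queues */
static deferredQueue deferredCmdExclusive;       /* global exclusive queue */

static int slotAvailable(int slot) {
    return slot_queues[slot].refcount == 0;
}
```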

How It Works
Example 1: Exclusive Command Handling
When an EVAL command arrives:

  1. If the exclusive queue is busy (refcount > 0), the client gets blocked and added to the pending clients list
  2. All subsequent commands are also queued until the exclusive operation completes
  3. Once the EVAL finishes, the main thread processes the blocked commands in order
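The three steps above can be condensed into a gate sketch. Everything here (`exclusiveQueue`, the fixed-size pending array, both function names) is hypothetical and only mirrors the described behavior.

```c
typedef struct { int id; } client;

typedef struct {
    int refcount;           /* >0 while an exclusive op (e.g. EVAL) runs */
    client *pending[64];    /* blocked clients, in arrival order */
    int npending;
} exclusiveQueue;

/* Steps 1-2: returns 1 if the command may run now; otherwise the client
 * is parked on the pending list and 0 is returned. */
int tryAcquireExclusive(exclusiveQueue *q, client *c) {
    if (q->refcount > 0) {
        q->pending[q->npending++] = c;
        return 0;
    }
    q->refcount++;
    return 1;
}

/* Step 3: when the exclusive op finishes, the main thread would replay
 * the pending clients in order; here we just report how many unblocked. */
int releaseExclusive(exclusiveQueue *q) {
    q->refcount--;
    if (q->refcount == 0) {
        int n = q->npending;
        q->npending = 0;
        return n;
    }
    return 0;
}
```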

Example 2: Slot-level Synchronization
Consider a GET and SET command targeting the same slot:

  1. The GET command gets offloaded to a worker thread, incrementing the slot's refcount
  2. The SET command (requiring exclusive slot access) gets blocked on the slot queue
  3. When the worker completes the GET and responds, the refcount decreases
  4. The SET command then executes once the slot becomes available
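The refcount dance from steps 1-4 reduces to three tiny operations; this is a standalone illustration with assumed names, not the PR's code.

```c
#define NUM_SLOTS 16384

static int slot_refcount[NUM_SLOTS]; /* active offloaded reads per slot */

/* Step 1: a read (e.g. GET) is offloaded; bump the slot's refcount. */
void slotReadOffloaded(int slot) { slot_refcount[slot]++; }

/* Step 2: a write (e.g. SET) may only run when no offloaded reads are
 * in flight on its slot; otherwise it blocks on the slot queue. */
int slotWriteMayRun(int slot) { return slot_refcount[slot] == 0; }

/* Steps 3-4: the worker's reply drops the refcount; at zero the blocked
 * write is released by the main thread. Returns 1 if slot became free. */
int slotReadDone(int slot) { return --slot_refcount[slot] == 0; }
```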

Worker Thread Deferred Jobs
Worker threads sometimes need to execute operations that access global data structures. Instead of doing this immediately, they:

  1. Create deferred jobs and add them to their thread-local list (thread_delayed_jobs)
  2. At command-execution completion, the thread posts the entire list to the main thread's response queue
  3. The main thread then executes these deferred operations safely
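The thread-local batching in steps 1-2 can be sketched as follows. The `thread_delayed_jobs` name comes from the PR text; the `deferredJob` type and helper names are assumptions for illustration.

```c
#include <stddef.h>

typedef struct deferredJob {
    void (*run)(void *arg);     /* executed later, on the main thread */
    void *arg;
    struct deferredJob *next;
} deferredJob;

/* Each worker keeps its own list; no synchronization needed to append. */
static __thread deferredJob *thread_delayed_jobs = NULL;

/* Step 1: instead of touching global state, queue the work locally. */
void deferJob(deferredJob *j) {
    j->next = thread_delayed_jobs;
    thread_delayed_jobs = j;
}

/* Step 2: at command completion, detach the whole list in O(1); the
 * caller posts it to the main thread's response queue (step 3). */
deferredJob *takeDelayedJobs(void) {
    deferredJob *all = thread_delayed_jobs;
    thread_delayed_jobs = NULL;
    return all;
}
```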

Current deferred job types include:

  • Expired key processing: Deleting expired keys and propagating DEL commands to replicas
  • Rehashing completion: Finalizing incremental rehashing by updating global dictionary structures
  • Error statistics: Updating global error counters after command failures

Special Case: ServerCron

The ServerCron function requires exclusive database access. When other commands are running in parallel, ServerCron gets enqueued as a deferred job on the exclusive queue, ensuring it runs only when no other operations are active.

Commands Offloading

Commands that can be offloaded to I/O threads are those marked with the READONLY flag in their JSON files.

However, different types of commands have different exclusivity requirements:
Database-exclusive commands require complete isolation and cannot run in parallel with any other offloaded command. These include:

  • Write commands that don't target specific slots
  • The EXEC command (which may contain commands affecting the entire database)
  • Administrative commands (marked with CMD_ADMIN flag)
  • Commands with the CMD_NO_MANDATORY_KEYS flag (like EVAL), where the affected keys cannot be determined in advance

Slot-exclusive commands have more limited restrictions - they can run in parallel with other read commands as long as those commands target different slots. They are only blocked by commands targeting the same slot.
This tiered approach to command exclusivity allows the system to maximize parallelism while maintaining data consistency and avoiding conflicts between concurrent operations.
A new configuration, io-threads-do-commands-offloading, was added to allow disabling command offloading.
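The tiered exclusivity rules above can be sketched as one classification helper. The flag names match the discussion, but the bit values, the `has_slot` parameter, and the `commandExclusivity` function itself are hypothetical stand-ins.

```c
/* Hypothetical flag bits standing in for the server's command flags. */
#define CMD_READONLY          (1 << 0)
#define CMD_WRITE             (1 << 1)
#define CMD_ADMIN             (1 << 2)
#define CMD_NO_MANDATORY_KEYS (1 << 3)

typedef enum {
    OFFLOAD_PARALLEL,  /* read-only, slot-scoped: runs on a worker */
    SLOT_EXCLUSIVE,    /* blocks only commands on the same slot */
    DB_EXCLUSIVE       /* must run alone against the whole database */
} exclusivity;

exclusivity commandExclusivity(int flags, int is_exec, int has_slot) {
    if (is_exec) return DB_EXCLUSIVE;                       /* EXEC: unknown scope */
    if (flags & CMD_ADMIN) return DB_EXCLUSIVE;             /* system-wide impact */
    if (flags & CMD_NO_MANDATORY_KEYS) return DB_EXCLUSIVE; /* e.g. EVAL */
    if ((flags & CMD_WRITE) && !has_slot) return DB_EXCLUSIVE; /* no target slot */
    if (flags & CMD_WRITE) return SLOT_EXCLUSIVE;           /* write to one slot */
    return OFFLOAD_PARALLEL;
}
```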

Event Processing (epoll)

Three key improvements were made to the event polling process:

a) Client Structure Prefetching: When epoll_wait returns a batch of file descriptors, the system now prefetches the associated client structures to improve cache performance during event processing. This is implemented through a new prefetch callback function added to the aeEventLoop structure, which proactively loads client data into memory before it is accessed.
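A prefetch callback of this kind might look like the sketch below. The real aePrefetchProc signature and wiring into aeEventLoop may differ; this only shows the cache-warming idea with a compiler prefetch hint.

```c
#include <stddef.h>

/* Assumed callback shape: given the clients resolved from a batch of
 * ready file descriptors, warm each structure in cache before the
 * event loop processes them. */
void clientPrefetchCallback(void **clients, int count) {
    for (int i = 0; i < count; i++) {
        if (clients[i] != NULL)
            __builtin_prefetch(clients[i], 0 /* read */, 3 /* keep cached */);
    }
}
```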

b) Epoll Round-robin offloading to Worker Threads: Similar to version 8.0, epoll_wait operations can be offloaded to worker threads. The implementation uses round-robin scheduling across available worker threads, with epoll jobs receiving higher priority than regular I/O and command processing jobs within each worker thread.

c) Batch Size Optimization: The maxevents parameter for epoll_wait was changed from using eventLoop->setsize (which could be quite large) to a fixed value of 200. This change addresses a performance regression that occurred between Linux kernel versions 5.x and 6.x, where larger batch sizes negatively impacted epoll performance.
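The maxevents change amounts to capping the third argument of epoll_wait. The wrapper below is an illustration of the call shape, not the PR's ae_epoll.c code (Linux-only, since epoll is Linux-specific).

```c
#include <sys/epoll.h>

#define EPOLL_BATCH_SIZE 200  /* fixed cap, replacing eventLoop->setsize */

/* Poll at most EPOLL_BATCH_SIZE ready events per call; a very large
 * maxevents (the event loop's full size) regressed on 6.x kernels. */
int pollBatch(int epfd, struct epoll_event *events, int timeout_ms) {
    return epoll_wait(epfd, events, EPOLL_BATCH_SIZE, timeout_ms);
}
```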

Client Management

server.current_client converted to thread variable
Converted global server.current_client and server.executing_client to thread-local variables (_current_client, _executing_client) with accessor macros. This enables thread-safe client context management for concurrent command execution on IO threads.

New block type BLOCKED_SLOT for clients blocking on slots
Added the BLOCKED_SLOT blocking type for clients waiting on busy slots during command offloading. Includes new blockingState fields (slot_pending_list, pending_client_node). Allows proper slot contention handling by queuing clients until slots become available.

Async free of clients handled by IO threads
Modified client freeing to eliminate busy waiting for IO thread operations. Previously, freeClient() would busy-wait in waitForClientIO() when clients had ongoing IO operations, creating complexity in edge cases. Now freeing is deferred until IO operations complete, preventing blocking and simplifying the freeing logic.
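The deferred-free idea reduces to a flag plus a completion hook; the sketch below uses hypothetical field and function names to mirror the described behavior, not the PR's freeClient() code.

```c
#include <stdlib.h>

typedef struct client {
    int pending_io;   /* outstanding IO-thread operations */
    int closing;      /* set once a free was requested */
} client;

/* Returns 1 if freed immediately, 0 if freeing was deferred because IO
 * is still in flight (the old path busy-waited here instead). */
int freeClientDeferred(client *c) {
    if (c->pending_io > 0) {
        c->closing = 1;   /* the last IO completion will free it */
        return 0;
    }
    free(c);
    return 1;
}

/* Called when one IO-thread operation on this client completes. */
void clientIODone(client *c) {
    if (--c->pending_io == 0 && c->closing) free(c);
}
```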

Limitations

Modules not supported by default

  • Command offloading to IO threads is disabled by default when modules are loaded to ensure compatibility and prevent potential conflicts. This can be overridden by setting io-threads-do-command-offloading-with-modules to yes, but should be used with caution as module behavior with offloaded commands is not guaranteed

Keyspace miss notifications not supported

  • Commands cannot be offloaded to IO threads when keyspace miss notifications (NOTIFY_KEY_MISS) are enabled. This limitation exists because miss notifications require synchronous execution in the main thread.

Pipelined commands

  • Since pipeline commands may access different slots, which could result in sending each command to a different thread, the performance gain would be marginal or even negative. This is due to multiple write calls and numerous round trips between the main thread and I/O threads. Therefore, the main thread will process all commands except the last one, which is handled by a separate thread. This same thread also writes all command replies in a single write call.

Performance Evaluation

We implemented the proposed update in a prototype to evaluate the performance gain in read-only and read/write scenarios.

Test Environment

  • Server: c7gn.16xlarge instance
  • Clients: 3 or 4 c7g.16xlarge instances
  • Configuration: All instances in same placement group (IAD region)
  • Valkey 9.0 unstable as of 6/9/2025: 8 io-threads
  • 3M: 19 threads (1 main thread, 18 worker threads)
  • Mode: All tests in cluster mode
  • NIC IRQs pinned to cores 57-63

Dataset

  • Strings: 3M keys with 512-byte values - GET and SET as read and write command respectively
  • Hashes: 1M hashes, each with ~50 fields (70 bytes/field) - HGETALL and HSET as read and write command respectively
  • Sorted Sets: 1M sorted sets, each with 50 members (70 bytes/member) - ZRANK and ZADD as read and write command respectively
  • Lists: 1M lists, each with ~50 elements (70 bytes/element) - LINDEX and LSET as read and write command respectively

Benchmark Scenarios

  1. 100% read operations
  2. 80% read / 20% write clients

String Operations

| Workload     | Valkey 9.0 | 3M     |
| ------------ | ---------- | ------ |
| 100% Read    | 1,299K     | 3,003K |
| 80% R, 20% W | 1,242K     | 1,931K |

Hash Operations

| Workload     | Valkey 9.0 | 3M     | 3M, 40 threads |
| ------------ | ---------- | ------ | -------------- |
| 100% Read    | 146K       | 1,415K | 2,626K         |
| 80% R, 20% W | 172K       | 1,584K | 1,526K         |

Sorted Set Operations

| Workload     | Valkey 9.0 | 3M     |
| ------------ | ---------- | ------ |
| 100% Read    | 365K       | 2,537K |
| 80% R, 20% W | 330K       | 780K   |

List Operations

| Workload     | Valkey 9.0 | 3M     |
| ------------ | ---------- | ------ |
| 100% Read    | 659K       | 2,956K |
| 80% R, 20% W | 831K       | 1,778K |

@madolson madolson added the run-extra-tests Run extra tests on this PR (Runs all tests from daily except valgrind and RESP) label Jun 12, 2025
@madolson
Member

@uriyage Can you prioritize doing the merge so that we can properly run the tests?

@madolson madolson requested a review from Copilot June 12, 2025 00:23

Copilot AI left a comment


Pull Request Overview

This PR enables offloading of read commands to I/O worker threads and refactors client-context handling to use thread-local accessors, while also introducing an event prefetch mechanism and enhancing slot-based client blocking.

  • Added CAN_BE_OFFLOADED flags to JSON command definitions and command table entries for read/fast commands
  • Replaced direct uses of server.current_client/server.executing_client with getCurrentClient/setCurrentClient wrappers
  • Extended event loop with configurable epoll_batch_size and an AE_PREFETCH callback, and added slot-based client blocking logic

Reviewed Changes

Copilot reviewed 65 out of 65 changed files in this pull request and generated 2 comments.

Show a summary per file
File Description
src/commands/*.json Added "CAN_BE_OFFLOADED" to command_flags for read/fast commands
src/commands.def Added CMD_CAN_BE_OFFLOADED to the server command table entries for matching commands
src/cluster_slot_stats.c Replaced server.current_client with getCurrentClient()
src/cluster.c Replaced server.current_client with getCurrentClient() and added slot-check logic
src/blocked.c Initialized new slot-pending fields and updated unblock logic for BLOCKED_SLOT
src/aof.c Swapped direct server.current_client/executing_client usage for accessor calls
src/ae_epoll.c Introduced epoll_batch_size and folded it into epoll_wait
src/ae.h Added AE_PREFETCH, aePrefetchProc, and epoll_batch_size definitions
src/ae.c Initialized epoll_batch_size, removed stale read events on prefetch registration, hooked prefetch callback
src/acl.c Replaced client checks and references with getCurrentClient()/isCurrentClient()
Comments suppressed due to low confidence (3)

src/commands/hvals.json:11

  • Commands marked with CAN_BE_OFFLOADED should have corresponding unit or integration tests verifying correct offload behavior.
"CAN_BE_OFFLOADED"

src/cluster.c:809

  • Update the comment to reference getCurrentClient() instead of current_client for accuracy.
* current_client here to get the real client if available. And if it is not

src/commands/get.json:12

  • [nitpick] Indentation here mixes tabs and spaces for the new flag; align with the existing two-space indentation for consistency.
"CAN_BE_OFFLOADED"

@uriyage uriyage force-pushed the offload-read-commands branch from d47011e to 7ca6101 Compare June 12, 2025 05:46
Collaborator

@hpatro hpatro left a comment


Commands that can be offloaded to I/O threads are marked with the CAN_BE_OFFLOADED flag in their json files. However, different types of commands have different exclusivity requirements:
Database-exclusive commands require complete isolation and cannot run in parallel with any other offloaded command.

Can't this be done implicitly instead of defining it via a flag explicitly? With that we would be aware all read commands will be offloaded and a dev doesn't need to check if it's enabled. In the future, we also don't risk of missing new read command being not offloaded.

@hpatro hpatro requested review from hpatro and madolson June 12, 2025 17:57
madolson
madolson previously approved these changes Jun 12, 2025
Member

@madolson madolson left a comment


Nice work!

I'm primarily concerned about pipelining, multi-exec, and other operational commands that might give inconsistent performance. Commands queueing up periodically because we require the exclusive lock can cause P99 latency spikes. It would be nice to see other realistic workloads for testing performance. It would also be nice to understand how large the cluster can scale while still getting these improvements, can 500 primary clusters (~32 slots per primary) still achieve high enough concurrency?

createIntConfig("events-per-io-thread", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, 0, INT_MAX, server.events_per_io_thread, 2, INTEGER_CONFIG, NULL, NULL),
createIntConfig("prefetch-batch-max-size", NULL, MODIFIABLE_CONFIG, 0, 128, server.prefetch_batch_max_size, 16, INTEGER_CONFIG, NULL, NULL),
createBoolConfig("io-threads-do-commands-offloading", NULL, MODIFIABLE_CONFIG, server.io_threads_do_commands_offloading, 1, NULL, NULL), /* Command offloading enabled by default */
createBoolConfig("io-threads-do-commands-offloading-with-modules", NULL, MODIFIABLE_CONFIG, server.io_threads_do_commands_offloading_with_modules, 0, NULL, NULL), /* Module command offloading disabled by default */
Member


This shouldn't be server configuration. Either the module should be the one deciding if it can offload work "per command" or it should be a module wide configuration.

Contributor Author


Currently, module commands are not offloaded in any case. However, we also need to ensure that modules only access the keys declared in their commands. If this is not guaranteed, we cannot offload commands for other slots. Once a single module fails to guarantee this behavior, we cannot offload any commands.

Member


I don't really disagree with anything that you are saying, but I don't see how that conflicts with my comment that this shouldn't be a config. Administration shouldn't have to worry about if a module do work outside the context. The module should be declaring that it's safe to offload commands. If any module doesn't declare that, commands shouldn't be able to get offloaded.

/* If no mandatory keys are specified, we can't determine which slot will be accessed */
if (cmd->flags & CMD_NO_MANDATORY_KEYS) return 1;
/* Any Admin level command needs full exclusivity as it impacts system-wide behaviour */
if (cmd->flags & CMD_ADMIN) return 1;
Member


This isn't really a great assumption. Some stuff like slowlog and acl log shouldn't really impact the wide system, we shouldn't need an exclusive lock to execute those. Many systems might have automation hitting this (like ours) which would cause random dips in performance.

Contributor Author


We expect admin commands to be infrequent, meaning not in the thousands per second. As long as
this is the case, it won't affect performance, as we have observed.
Exclusivity for admin commands simplifies the code and makes it more secure, as we don't need to worry about scenarios like server configurations being changed while a thread is executing a command, or client's
ACL permissions being modified during execution.

Member


this is the case, it won't affect performance, as we have observed.

You haven't given any data on P99 performance, just throughput, so I don't think you have observed this.

Exclusivity for admin commands simplifies the code and makes it more secure, as we don't need to worry about scenarios like server configurations being changed while a thread is executing a command, or client's
ACL permissions being modified during execution.

I agree about changing ACL permissions, I was mentioning ACL log.

IoToMTQueueProduce((uint64_t)c | (uint64_t)r, 0);
}

static void processClientIOCommandDone(client *c) {
Member


Can we deduplicate this code with the code in server.c, I don't like having two places we need to update. Ideally let's have a function in server.c that covers the overlap, and we can call that function from here.

Contributor Author


This will require some refactoring of the call function. Do you think we should include it in this PR, or should we create a separate follow-up task?

Copy link
Member

@madolson madolson Jun 30, 2025


Yes, include it in this PR ideally.

@madolson madolson self-requested a review June 12, 2025 18:59
@madolson
Member

(Oops, didn't mean to approve, but I can't dismiss it while there are merge conflicts, apparently?)

Signed-off-by: Uri Yagelnik <[email protected]>

# Conflicts:
#	src/blocked.c
#	src/cluster.c
#	src/cluster_slot_stats.c
#	src/commands.def
#	src/io_threads.c
#	src/io_threads.h
#	src/memory_prefetch.c
#	src/networking.c
#	src/server.c
#	src/server.h
@uriyage uriyage force-pushed the offload-read-commands branch from 7ca6101 to aa83602 Compare June 16, 2025 16:14
@madolson madolson dismissed their stale review June 16, 2025 16:17

Accidental approval

@codecov

codecov bot commented Jun 16, 2025

Codecov Report

Attention: Patch coverage is 25.57823% with 547 lines in your changes missing coverage. Please review.

Project coverage is 70.89%. Comparing base (2287261) to head (5b258fa).
Report is 8 commits behind head on unstable.

Files with missing lines Patch % Lines
src/io_threads.c 4.17% 436 Missing ⚠️
src/networking.c 48.07% 54 Missing ⚠️
src/server.c 53.44% 27 Missing ⚠️
src/memory_prefetch.c 65.38% 9 Missing ⚠️
src/module.c 0.00% 9 Missing ⚠️
src/rdb.c 0.00% 4 Missing ⚠️
src/blocked.c 66.66% 3 Missing ⚠️
src/db.c 88.88% 2 Missing ⚠️
src/debug.c 0.00% 2 Missing ⚠️
src/kvstore.c 91.66% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #2208      +/-   ##
============================================
- Coverage     71.54%   70.89%   -0.65%     
============================================
  Files           122      123       +1     
  Lines         66491    67404     +913     
============================================
+ Hits          47570    47787     +217     
- Misses        18921    19617     +696     
Files with missing lines Coverage Δ
src/acl.c 90.72% <100.00%> (ø)
src/ae.c 78.46% <100.00%> (+0.81%) ⬆️
src/ae_epoll.c 85.50% <100.00%> (+0.21%) ⬆️
src/aof.c 80.34% <100.00%> (-0.06%) ⬇️
src/cluster.c 90.39% <100.00%> (+0.01%) ⬆️
src/cluster_slot_stats.c 94.21% <100.00%> (ø)
src/config.c 78.47% <ø> (ø)
src/lazyfree.c 86.20% <100.00%> (+0.09%) ⬆️
src/notify.c 97.22% <100.00%> (ø)
src/object.c 81.43% <100.00%> (+0.01%) ⬆️
... and 14 more

... and 17 files with indirect coverage changes


@uriyage
Contributor Author

uriyage commented Jun 17, 2025

Commands that can be offloaded to I/O threads are marked with the CAN_BE_OFFLOADED flag in their json files. However, different types of commands have different exclusivity requirements: Database-exclusive commands require complete isolation and cannot run in parallel with any other offloaded command.

Can't this be done implicitly instead of defining it via a flag explicitly? With that we would be aware all read commands will be offloaded and a dev doesn't need to check if it's enabled. In the future, we also don't risk of missing new read command being not offloaded.

@hpatro
Each additional command must be verified to ensure the code doesn't access any global variables that could be modified by other threads.
This is an initial list of the most common commands, we will expand it.

- name: unit tests
run: ./src/valkey-unit-tests

test-ubuntu-io-threads-sanitizer:
Member


This should probably be in daily and not in the CI.

@hpatro
Collaborator

hpatro commented Jun 17, 2025

Commands that can be offloaded to I/O threads are marked with the CAN_BE_OFFLOADED flag in their json files. However, different types of commands have different exclusivity requirements: Database-exclusive commands require complete isolation and cannot run in parallel with any other offloaded command.

Can't this be done implicitly instead of defining it via a flag explicitly? With that we would be aware all read commands will be offloaded and a dev doesn't need to check if it's enabled. In the future, we also don't risk of missing new read command being not offloaded.

@hpatro Each additional command must be verified to ensure the code doesn't access any global variables that could be modified by other threads. This is an initial list of the most common commands, we will expand it.

Might become difficult to maintain in the future. I would like to keep the life of a developer implementing a new read command as easy as it is currently. Could we possibly introduce broad guard rails to avoid such mistakes? Thinking out loud here.

Could you share any command which is currently not isolated and what field do they access? For multi-commands, maybe we mark them with MULTI_DB_ACCESS and avoid this code flow.

@uriyage
Contributor Author

uriyage commented Jun 18, 2025

@uriyage Can you prioritize doing the merge so that we can properly run the tests?

@madolson, Done

@uriyage uriyage force-pushed the offload-read-commands branch from d728920 to 5b258fa Compare June 18, 2025 16:19
@uriyage
Contributor Author

uriyage commented Jun 18, 2025

Commands that can be offloaded to I/O threads are marked with the CAN_BE_OFFLOADED flag in their json files. However, different types of commands have different exclusivity requirements:
Database-exclusive commands require complete isolation and cannot run in parallel with any other offloaded command.

Can't this be done implicitly instead of defining it via a flag explicitly? With that we would be aware all read commands will be offloaded and a dev doesn't need to check if it's enabled. In the future, we also don't risk of missing new read command being not offloaded.

@hpatro, Thanks. I have reviewed all the missing read-only commands, and it can indeed be done with a few minor changes.

I removed the flag, and now we offload all read-only commands with the following restrictions:

#define CMD_CAN_BE_OFFLOADED(cmd) \
    ((cmd->flags & CMD_READONLY) && \
     !(cmd->flags & CMD_NO_MANDATORY_KEYS) && \
     !(cmd->flags & CMD_MAY_REPLICATE) && \
     !(cmd->flags & CMD_BLOCKING))

In addition, we won't offload commands if c->slot is not set (for example, in the KEYS command). This covers the case of multi-slot access.

@madolson
Member

madolson commented Jun 30, 2025

In addition, we won't offload commands if c->slot is not set (for example, in the KEYS command). This covers the case of multi-slot access.

It would still be good to understand how we can offload the SCAN command or at the very last have it not take a server level lock.

}

/* Check if modules are loaded and module offloading is disabled */
if (moduleCount() > 0 && !server.io_threads_do_commands_offloading_with_modules) {
Contributor


I'd imagine that a property access would be cheaper than a function call...

Suggested change
if (moduleCount() > 0 && !server.io_threads_do_commands_offloading_with_modules) {
if (!server.io_threads_do_commands_offloading_with_modules && moduleCount() > 0) {

rehashing_completion_ctx ctx = {.rehashing_node = metadata->rehashing_node, .kvs = metadata->kvs, .from = from};
metadata->rehashing_node = NULL;

/* If not in main-thread postpone the update of kvs rehashing info to be done later by the main-thread -*/
Contributor


Suggested change
/* If not in main-thread postpone the update of kvs rehashing info to be done later by the main-thread -*/
/* If not in main-thread, postpone the update of kvs rehashing info to be done later by the main-thread */

return;
}

/* Postpone error updates if its io-thread */
Contributor


Suggested change
/* Postpone error updates if its io-thread */
/* Postpone error updates if it's io-thread */

Comment on lines +933 to +935
void *async_rm_call_handle; /* ValkeyModuleAsyncRMCallPromise structure.
which is opaque for the Redis core, only
handled in module.c. */
Contributor


I think style is:

Suggested change
void *async_rm_call_handle; /* ValkeyModuleAsyncRMCallPromise structure.
which is opaque for the Redis core, only
handled in module.c. */
void *async_rm_call_handle; /* ValkeyModuleAsyncRMCallPromise structure.
* which is opaque for the Redis core, only
* handled in module.c. */

volatile uint8_t io_command_state; /* Indicate the IO command state of the client */
ustime_t duration; /* Current command duration. Used for measuring latency of blocking/non-blocking cmds */
robj **original_argv; /* Arguments of original command if arguments were rewritten. */
unsigned long long net_input_bytes_curr_cmd; /* Total network input bytes read for the* execution of this client's current command. */
Contributor


Suggested change
unsigned long long net_input_bytes_curr_cmd; /* Total network input bytes read for the* execution of this client's current command. */
unsigned long long net_input_bytes_curr_cmd; /* Total network input bytes read for the execution of this client's current command. */

Comment on lines 1797 to +1799
long long
stat_unexpected_error_replies; /* Number of unexpected (aof-loading, replica to primary, etc.) error replies */
long long stat_total_error_replies; /* Total number of issued error replies ( command + rejected errors ) */
long long stat_dump_payload_sanitizations; /* Number deep dump payloads integrity validations. */
long long stat_io_reads_processed; /* Number of read events processed by IO threads */
long long stat_io_writes_processed; /* Number of write events processed by IO threads */
long long stat_io_freed_objects; /* Number of objects freed by IO threads */
long long stat_io_accept_offloaded; /* Number of offloaded accepts */
long long stat_poll_processed_by_io_threads; /* Total number of poll jobs processed by IO */
long long stat_total_reads_processed; /* Total number of read events processed */
long long stat_total_writes_processed; /* Total number of write events processed */
long long stat_client_qbuf_limit_disconnections; /* Total number of clients reached query buf length limit */
long long stat_client_outbuf_limit_disconnections; /* Total number of clients reached output buf length limit */
long long stat_total_prefetch_entries; /* Total number of prefetched dict entries */
long long stat_total_prefetch_batches; /* Total number of prefetched batches */
stat_unexpected_error_replies; /* Number of unexpected (aof-loading, replica to primary, etc.) error replies */
long long stat_total_error_replies; /* Total number of issued error replies ( command + rejected errors ) */
Contributor


Does this item now fit?

Suggested change
long long
stat_unexpected_error_replies; /* Number of unexpected (aof-loading, replica to primary, etc.) error replies */
long long stat_total_error_replies; /* Total number of issued error replies ( command + rejected errors ) */
long long stat_dump_payload_sanitizations; /* Number deep dump payloads integrity validations. */
long long stat_io_reads_processed; /* Number of read events processed by IO threads */
long long stat_io_writes_processed; /* Number of write events processed by IO threads */
long long stat_io_freed_objects; /* Number of objects freed by IO threads */
long long stat_io_accept_offloaded; /* Number of offloaded accepts */
long long stat_poll_processed_by_io_threads; /* Total number of poll jobs processed by IO */
long long stat_total_reads_processed; /* Total number of read events processed */
long long stat_total_writes_processed; /* Total number of write events processed */
long long stat_client_qbuf_limit_disconnections; /* Total number of clients reached query buf length limit */
long long stat_client_outbuf_limit_disconnections; /* Total number of clients reached output buf length limit */
long long stat_total_prefetch_entries; /* Total number of prefetched dict entries */
long long stat_total_prefetch_batches; /* Total number of prefetched batches */
stat_unexpected_error_replies; /* Number of unexpected (aof-loading, replica to primary, etc.) error replies */
long long stat_total_error_replies; /* Total number of issued error replies ( command + rejected errors ) */
long long stat_unexpected_error_replies; /* Number of unexpected (aof-loading, replica to primary, etc.) error replies */
long long stat_total_error_replies; /* Total number of issued error replies ( command + rejected errors ) */

@yahorsi

yahorsi commented Aug 12, 2025

Guys, it is SO big and SO game-changing!
I beg you to release it in the 9.1, please please please :)

@yahorsi

yahorsi commented Oct 21, 2025

Just checking how is it going for the 10th release? )
