Conversation

@enjoy-binbin (Member) commented Sep 14, 2024

In clusterSaveConfigOrDie, we exit directly when saving
fails. In the case of a disk failure, the cluster node will exit
immediately whenever there is any change to the cluster.

Passive exit may bring unexpected effects, such as the cluster
going down. We think the risk of the metadata becoming persistently
out of date is minimal. On the one hand, we have the CLUSTER_WRITABLE_DELAY
logic, which prevents a primary node from rejoining the
cluster in an unsafe state within 2 seconds of restarting:

void clusterUpdateState(void) {
    /* If this is a primary node, wait some time before turning the state
     * into OK, since it is not a good idea to rejoin the cluster as a writable
     * primary, after a reboot, without giving the cluster a chance to
     * reconfigure this node. Note that the delay is calculated starting from
     * the first call to this function and not since the server start, in order
     * to not count the DB loading time. */
    if (first_call_time == 0) first_call_time = mstime();
    if (clusterNodeIsPrimary(myself) && server.cluster->state == CLUSTER_FAIL &&
        mstime() - first_call_time < CLUSTER_WRITABLE_DELAY)
        return;

The remaining, potentially worse case is that the node votes twice
in the same epoch. Say we failed to save nodes.conf after voting
for replica X. We reboot, and during this time X wins the failover.
After the reboot, node Y requests a vote for the same epoch and we vote
for Y. Y wins the failover with the same epoch, so we have two primaries
with the same epoch and get an epoch collision. It is resolved and some
writes are lost, just like in a failover. This should be very rare and is
not catastrophic. We can use the CLUSTER_WRITABLE_DELAY logic to make an
optimistic judgment and prevent the node from voting right after restarting.
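
A minimal sketch of that guard (an assumption for illustration, not code from this PR): inside a vote-granting function like clusterSendFailoverAuthIfNeeded, a restart timestamp (first_restart_time here is hypothetical) could gate votes for the first CLUSTER_WRITABLE_DELAY milliseconds after boot.

void clusterSendFailoverAuthIfNeeded(clusterNode *node, clusterMsg *request) {
    /* Hypothetical guard: refuse to grant a failover vote right after a
     * restart, since a vote cast before the restart may never have been
     * persisted to nodes.conf. */
    if (mstime() - first_restart_time < CLUSTER_WRITABLE_DELAY) {
        serverLog(LL_WARNING, "Failover auth denied to %.40s: node restarted recently.",
                  node->name);
        return;
    }
    /* ... the existing epoch and slot checks follow ... */
}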

…ss from existing when saving fails

In clusterSaveConfigOrDie, we exit directly when saving
fails. In the case of a disk failure, the cluster node will exit
immediately whenever there is any change to the cluster.

Passive exit may bring unexpected effects, such as the cluster
going down. Provide an exit-on-cluster-config-file-save-error
configuration item to control this behavior. Users who do not rely
on nodes.conf can actively migrate the node later (or during
low-traffic periods).

Signed-off-by: Binbin <[email protected]>
@enjoy-binbin (Member, Author)

@PingXie This is a change I wanted to make a while ago, we'll discuss it in a later version.

codecov bot commented Sep 14, 2024

Codecov Report

❌ Patch coverage is 83.33333% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 72.23%. Comparing base (6c1bb73) to head (3b609eb).

Files with missing lines Patch % Lines
src/cluster_legacy.c 83.33% 2 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #1032      +/-   ##
============================================
+ Coverage     72.10%   72.23%   +0.13%     
============================================
  Files           127      127              
  Lines         70935    70946      +11     
============================================
+ Hits          51146    51249     +103     
+ Misses        19789    19697      -92     
Files with missing lines Coverage Δ
src/cluster_legacy.c 87.67% <83.33%> (+0.25%) ⬆️

... and 13 files with indirect coverage changes


@madolson (Member) commented Oct 1, 2024

In the case of a disk failure, the cluster node will exit
immediately whenever there is any change to the cluster.

This adds more states we have to reason about. What exactly is the motivation behind being able to continue without saving to disk? It seems like we should maybe support a mode where we don't have to commit to disk at all. I've also seen issues with stuck IO causing the fsync to stall, which can also cause an outage.

@enjoy-binbin (Member, Author)

The motivation is to avoid exiting immediately, because the exit is passive (if it is a primary, it may cause the cluster to go down and be unwritable for a certain period of time; if it is a replica, it may cause some read requests to fail in a read-write separation setup). Not exiting immediately allows us to do a manual exit later, so that we have enough time to handle it.

@enjoy-binbin (Member, Author)

I've also seen issues with stuck IO causing the fsync to stall, which can also cause an outage.

This is also a problem. I think we should figure out a way to get rid of this latency; we don't rely on nodes.conf very much, and the latency is killing us.

@enjoy-binbin (Member, Author)

I'd really appreciate it if you have some input on this one @zuiderkwast @hpatro @PingXie @madolson

@zuiderkwast (Contributor)

If saving nodes.conf succeeds once and later it fails, then next time we reboot we can end up with an outdated cluster config. I think this is a problem?

Isn't it better to have a mode that doesn't even try to read or write nodes.conf at all, diskless cluster mode?

With some external control plane like a Kubernetes operator, you don't really need a persistent nodes.conf. If a node disappears or reboots, it's just replaced by a new node. I understand the wish to run a node without this storage bottleneck.

@enjoy-binbin (Member, Author)

If saving nodes.conf succeeds once and later it fails, then next time we reboot we can end up with an outdated cluster config. I think this is a problem?

Yes, that would be a problem, so people who enable this config should not restart nodes that way and should not rely on nodes.conf.

Isn't it better to have a mode that doesn't even try to read or write nodes.conf at all, diskless cluster mode?

This seems like a good idea to me.

With some external control plane like a Kubernetes operator, you don't really need a persistent nodes.conf. If a node disappears or reboots, it's just replaced by a new node. I understand the wish to run a node without this storage bottleneck.

Yes, this is right. Some cases are difficult to handle: sometimes a disk error is discovered and the node needs to be migrated, but the node may update nodes.conf at any time before the migration (or during the migration, since a migration will certainly trigger a nodes.conf update). If the update fails, the node will exit, the primary goes down, the cluster goes into the fail state, and we need a node-timeout to pass to trigger the failover.

e231406. On the other hand, I first want to add latency tracking around nodes.conf (WDYT?). We have indeed encountered this save function blocking for 100ms+, or the open call blocking for a few seconds (in the case of logging).
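
For reference, a minimal sketch of what such instrumentation could look like, using the existing latency-monitor macros (the event name "cluster-config-save" is an assumption, not necessarily what the actual change uses):

    mstime_t latency;
    latencyStartMonitor(latency);
    if (clusterSaveConfig(do_fsync) == -1) {
        /* Handle the failure according to the configured policy. */
    }
    latencyEndMonitor(latency);
    latencyAddSampleIfNeeded("cluster-config-save", latency);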

@PingXie (Member) commented Jan 5, 2025

The "diskless cluster mode," in my view, takes a different approach by delegating configuration management to an external component. However, I don’t think it would work for users who prefer a "self-contained" deployment and servers automatically restarting after a crash, which is the use case I'm considering below.

Allowing a node to continue operating despite failing to save its nodes.conf updates feels similar to a FAIL node rejoining the cluster after an indeterminate amount of time, something the current design already permits. That said, there is a key difference to consider, "unbounded staleness." If we crash the server when writes to nodes.conf fail, we can reasonably expect nodes.conf to remain relatively fresh. However, if we allow the server to continue operating, the eventual crash could occur much later, creating "unbounded staleness." I think this shouldn't be a problem for our epoch mechanism, though, since it only accounts for "binary staleness."

I agree stuck IOs are a bit worse. The failover will still happen, which is good, but the old primary that gets stuck in the nodes.conf disk IO won't rejoin the shard as a replica. We could consider making these writes async so we can time out nodes.conf updates.
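
A minimal sketch of the timeout idea (names like bg_save_start_ms and CONFIG_SAVE_TIMEOUT_MS are hypothetical, not code from this PR): the write is handed to a background thread, and the main thread enforces a deadline, keeping the update semantics sync at a high level.

static _Atomic mstime_t bg_save_start_ms = 0; /* 0 = no save in flight */

void checkClusterConfigSaveDeadline(void) {
    /* Called periodically (e.g., from clusterCron): if a background
     * nodes.conf write has been in flight longer than the deadline,
     * treat it as a failure instead of letting the main thread stall. */
    mstime_t started = bg_save_start_ms;
    if (started != 0 && mstime() - started > CONFIG_SAVE_TIMEOUT_MS) {
        serverLog(LL_WARNING, "nodes.conf update stuck for more than %lld ms, exiting.",
                  (long long)CONFIG_SAVE_TIMEOUT_MS);
        exit(1);
    }
}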

I also find it problematic that the current design mixes configurations (e.g., IP/port, slots) with states (e.g., FAIL/PFAIL) in the same file. I wonder if separating these concerns might reduce the likelihood of server crashes caused by disk failures.

@enjoy-binbin (Member, Author)

The "diskless cluster mode," in my view, takes a different approach by delegating configuration management to an external component. However, I don’t think it would work for users who prefer a "self-contained" deployment and servers automatically restarting after a crash, which is the use case I'm considering below.

Yes, it does delegate the conf to an external component, and it doesn't benefit those users. Offering a switch seems OK to me, unless there is a better solution.

Allowing a node to continue operating despite failing to save its nodes.conf updates feels similar to a FAIL node rejoining the cluster after an indeterminate amount of time, something the current design already permits. That said, there is a key difference to consider, "unbounded staleness." If we crash the server when writes to nodes.conf fail, we can reasonably expect nodes.conf to remain relatively fresh. However, if we allow the server to continue operating, the eventual crash could occur much later, creating "unbounded staleness." I think this shouldn't be a problem for our epoch mechanism, though, since it only accounts for "binary staleness."

This is a good point; in such a mode we should not read or write the file at all.

I agree stuck IOs are a bit worse. The failover will still happen, which is good, but the old primary that gets stuck in the nodes.conf disk IO won't rejoin the shard as a replica. We could consider making these writes async so we can time out nodes.conf updates.

So what happens if an async write fails? Should we still exit when the async write gets stuck and eventually fails? Also, if the cluster conf changes multiple times during the async write, nodes.conf will still hold stale data (though we can argue the window is short).

I also find it problematic that the current design mixes configurations (e.g., IP/port, slots) with states (e.g., FAIL/PFAIL) in the same file. I wonder if separating these concerns might reduce the likelihood of server crashes caused by disk failures.

Yes, I think it can reduce the likelihood, but we are still facing the same problem.

@PingXie (Member) commented Jan 6, 2025

So what happens if an async write fails? Should we still exit when the async write gets stuck and eventually fails?

The async write is just a means for the server to be able to time out stuck nodes.conf file updates. The update semantics should still be sync on a high level.

Yes, I think it can reduce the likelihood, but we are still facing the same problem.

Right, it is a mitigation, but it might be good enough in practice.

@enjoy-binbin (Member, Author)

The async write is just a means for the server to be able to time out stuck nodes.conf file updates. The update semantics should still be sync on a high level.

OK, I see: we set a timeout and then exit before it gets stuck any longer, right?

Right, it is a mitigation, but it might be good enough in practice.

The reason is that even if we split it into two files, whenever something happens (the conf or the state changes) we still need to update a file, which means we still face the exit problem. Unless we treat some hot updates as non-exit updates, we will still hit the crash caused by disk failures, since we need to save the files either way (one big file or two small files).

@PingXie (Member) commented Jan 6, 2025

we set a timeout and then exit before it gets stuck any longer, right?

correct.

we still need to update a file, which means we still face the exit problem. Unless we treat some hot updates as non-exit updates, we will still hit the crash caused by disk failures, since we need to save the files either way

Yeah, it can still fail, but my point is that 100% uptime is neither a goal nor attainable. At the end of the day, anything we depend on can and will fail. With the reduced update frequency, I wonder if we could argue that the probability of the server crashing due to disk IO failures is on par with the probability of the (bare-metal or virtual) machine failing?

@enjoy-binbin (Member, Author) commented Jan 6, 2025

Another thought I want to mention: a lot of normal users (not the self-contained ones) think of Redis/Valkey as an in-memory database. When they disable persistence (RDB and AOF), they find it difficult to understand and accept that a cluster failure can be caused by a conf file failing to update, especially since the conf file is mostly maintained by us (or their admins).

and for a "self-contained" deployment, they will eventually find out that if there is a disk failure, the server will still crash after the reboot, and they will find out the solution is migrate the node (with its nodes.conf and data perhaps, or just add a new replica node in a new machine after the failover)

The disks we use are not reliable over time, which is a pain for me, so I want a way to solve or mitigate this problem, to avoid the stall (latency) or the exit.

I would also like to point out that, at least for us, people may deploy multiple servers on a single machine, so when a disk failure occurs, multiple nodes (or multiple instances) may be affected.

@PingXie (Member) commented Jan 6, 2025

and for a "self-contained" deployment, they will eventually find out that if there is a disk failure, the server will still crash after the reboot, and they will find out the solution is migrate the node (with its nodes.conf and data perhaps, or just add a new replica node in a new machine after the failover)

Yes, that is a disconnect.

As I mentioned earlier, I don't think the current cluster design requires nodes.conf updates to be synchronous, and synchronous nodes.conf updates don't prevent staleness either, since a down node can rejoin the cluster at any time. The epoch mechanism is there to prevent nodes with stale cluster views from messing up the global cluster state. I'm open to the idea of introducing this configuration, provided it defaults to off.

@enjoy-binbin (Member, Author)

I’m open to the idea of introducing this configuration, provided it defaults to off.

I'm glad we agree. The default of this configuration is the same as before; that is, if the update fails, the node will exit.

@enjoy-binbin (Member, Author)

We could probably rename it to something like cluster-ignore-disk-write-error, just like replica-ignore-disk-write-errors, and align with it:

# Replica ignore disk write errors controls the behavior of a replica when it is
# unable to persist a write command received from its primary to disk. By default,
# this configuration is set to 'no' and will crash the replica in this condition.
# It is not recommended to change this default.
#
# replica-ignore-disk-write-errors no
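
A hypothetical entry mirroring the above (the wording and default here are assumptions, not final documentation):

# Cluster ignore disk write errors controls the behavior of a cluster node
# when it is unable to persist cluster configuration changes to nodes.conf.
# By default, this configuration is set to 'no' and the node will exit in
# this condition.
#
# cluster-ignore-disk-write-error no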

@zuiderkwast (Contributor)

If we crash the server when writes to nodes.conf fail, we can reasonably expect nodes.conf to remain relatively fresh. However, if we allow the server to continue operating, the eventual crash could occur much later, creating "unbounded staleness."

Interesting! New words like this are always fun. Now you've given me another idea: bounded staleness.

Let's say we can tolerate write failures for, say, 5 minutes (or configurable). If the last successful write is older than that, we crash. Then we can guarantee the file is at least relatively fresh. How about that?
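
A minimal sketch of that bounded-staleness policy (hypothetical names, not code from this PR): remember the last successful save and only exit once the failure window is exceeded.

static mstime_t last_successful_save_ms = 0;

void clusterSaveConfigBoundedStaleness(int do_fsync, mstime_t max_stale_ms) {
    if (clusterSaveConfig(do_fsync) == 0) {
        last_successful_save_ms = mstime();
        return;
    }
    /* First failure ever: start the staleness window now. */
    if (last_successful_save_ms == 0) last_successful_save_ms = mstime();
    if (mstime() - last_successful_save_ms > max_stale_ms) {
        serverLog(LL_WARNING, "nodes.conf has been stale for more than %lld ms, exiting.",
                  (long long)max_stale_ms);
        exit(1);
    }
    serverLog(LL_WARNING, "Failed to save nodes.conf, tolerating the error within the window.");
}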

@hpatro (Collaborator) commented Jan 6, 2025

Isn't it better to have a mode that doesn't even try to read or write nodes.conf at all, diskless cluster mode?

I was having similar thoughts after dealing with an issue where an incorrect shard-id was temporarily persisted to disk and the server crashed; on restart the server read the nodes.conf file and failed to start up due to a failing validation check. Even though that was a bug in the code, the dependency on nodes.conf is what led to the issue. I would also prefer to have a config to disable both reading and writing nodes.conf. However, the risk is that all the information is lost, and the node after restart would have a new node id, so it might need to go through the MEET process with the rest of the cluster again.

@enjoy-binbin / @zuiderkwast

@enjoy-binbin (Member, Author)

Let's say we can tolerate write failures for, say, 5 minutes (or configurable). If the last successful write is older than that, we crash. Then we can guarantee the file is at least relatively fresh. How about that?

I don't think it will help much. When a disk error happens, it is usually not recoverable, except when the disk is merely full, in which case we can delete some files. But I guess in most cases it won't recover unless we migrate the disk or the machine, so the n-minute window doesn't look that useful.

I would also prefer to have a config to disable both reading and writing nodes.conf. However, the risk is that all the information is lost, and the node after restart would have a new node id, so it might need to go through the MEET process with the rest of the cluster again.

Yes, I also like that idea. We should try to find a way to drop nodes.conf in this mode; people who have their own control plane might like this.

@zuiderkwast (Contributor)

I'm OK with cluster-ignore-disk-write-error (aligned with replica-ignore-disk-write-errors).

For completely disabling nodes.conf, yes the node needs CLUSTER MEET again to join the cluster and it gets a new node-id. We use this with Kubernetes already. Our (internal) Kubernetes operator creates a new node to replace any node that disappears.

Persistent storage in Kubernetes can be problematic, especially in the case of netsplit when the operator can't reach the failing node. In this case, the operator can't be sure if the storage used by the node can safely be reused by another new node or if the node is still alive and is using the storage. Avoiding persistent storage in Kubernetes avoids this problem.

(FWIW, I think we should aim for an official Kubernetes operator as an open source control plane in the long term. It's a very non-trivial job though, because some users want persistent storage and others don't, cluster vs standalone, etc. There's a discussion here: https://github.com/orgs/valkey-io/discussions/746.)

@enjoy-binbin (Member, Author)

I would like to start by adding some latency tracking around it; see #1534.

@enjoy-binbin enjoy-binbin changed the title Introduce exit-on-cluster-config-file-save-error to prevent the process from existing when saving fails Introduce cluster-ignore-disk-write-errors to prevent the process from existing when saving fails Jan 11, 2025
@enjoy-binbin enjoy-binbin added the release-notes This issue should get a line item in the release notes label Jan 11, 2025
        exit(1);
    } else {
        static mstime_t last_log_time_ms = 0;
        const mstime_t log_interval_ms = 10000;
@enjoy-binbin (Member, Author)
We may need a configuration item to control this in the future; currently this logic is copied from replica-ignore-disk-write-errors.

R 0 cluster saveconfig
R 1 cluster saveconfig
R 1 cluster failover force
R 1 cluster failover takeover
@enjoy-binbin (Member, Author)
The reason for changing it is that in the previous test case, force does not have enough votes to finish the election.

@enjoy-binbin enjoy-binbin added the major-decision-pending Major decision pending by TSC team label Jan 17, 2025
@madolson madolson changed the title Introduce cluster-ignore-disk-write-errors to prevent the process from existing when saving fails Introduce cluster-ignore-disk-write-errors to prevent the process from exiting when saving fails Mar 10, 2025
@madolson (Member)

We'll move this to 9.0; we didn't quite implement this for 8.1, and this PR just needs a little bit more refinement.

@madolson madolson removed this from Roadmap Mar 10, 2025
@madolson madolson removed this from Valkey 8.1 Mar 10, 2025
@madolson madolson moved this to Optional for next release candidate in Valkey 9.0 Jul 21, 2025
@madolson madolson moved this from Optional for next release candidate to In Progress in Valkey 9.0 Aug 18, 2025
@hpatro (Collaborator) left a comment

I'm fine with the proposal but want to understand if we can change the behavior altogether: one less config to maintain.

@enjoy-binbin Is there any risk in not introducing a config and letting the in-memory view and the on-disk file get out of sync? The persisted metadata being outdated doesn't seem risky to me. Thoughts?

@zuiderkwast (Contributor)

Worst case, a primary turns into a replica and fails to persist this. Then it restarts and thinks it's a primary.

The same can happen if a primary goes down and comes back up again.

In this case, I hope the node doesn't start serving clients until it has contacted other cluster nodes to find out it's no longer the primary. Does it?

@enjoy-binbin (Member, Author)

It does indeed carry some risks, as Viktor mentioned. In fact, this can always happen: a primary can crash suddenly, a failover brings a new primary online during that time, and if we restart the old primary, this will happen. The conf of the old primary will differ from the new cluster conf.

It might make the situation somewhat worse, but I think it is acceptable. Adding a new config item lets users control it: some admins prefer to restart the node (depending on nodes.conf), while others prefer to add a new empty node (not depending on nodes.conf).

@zuiderkwast (Contributor)

if we restart the old primary, this will happen

I think we need another feature: When a primary comes up again after restart and reading nodes.conf, it should contact other nodes before it starts serving clients. It could return -LOADING or something during this time. If we add this behavior, then I think it's safe to let nodes.conf be stale. Then we can avoid adding this new config.

@enjoy-binbin (Member, Author)

We have a similar one, but it only works on FAIL. Perhaps we can extend this condition to work on more than just FAIL.

void clusterUpdateState(void) {
    int j, new_state, new_reason;
    int reachable_primaries = 0;
    static mstime_t among_minority_time;
    static mstime_t first_call_time = 0;

    server.cluster->todo_before_sleep &= ~CLUSTER_TODO_UPDATE_STATE;

    /* If this is a primary node, wait some time before turning the state
     * into OK, since it is not a good idea to rejoin the cluster as a writable
     * primary, after a reboot, without giving the cluster a chance to
     * reconfigure this node. Note that the delay is calculated starting from
     * the first call to this function and not since the server start, in order
     * to not count the DB loading time. */
    if (first_call_time == 0) first_call_time = mstime();
    if (clusterNodeIsPrimary(myself) && server.cluster->state == CLUSTER_FAIL &&
        mstime() - first_call_time < CLUSTER_WRITABLE_DELAY)
        return;
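
A minimal sketch of that extension (an assumption, not a change in this PR): dropping the CLUSTER_FAIL requirement would apply the delay after any restart, so a node with a stale nodes.conf cannot immediately come back as a writable primary.

    /* Hypothetical: gate writability after any restart, not only when the
     * persisted state was FAIL. */
    if (clusterNodeIsPrimary(myself) &&
        mstime() - first_call_time < CLUSTER_WRITABLE_DELAY)
        return;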

@enjoy-binbin enjoy-binbin changed the title Introduce cluster-ignore-disk-write-errors to prevent the process from exiting when saving fails Fail to save the cluster config file will not exit the process Sep 26, 2025
        server.cluster->safe_to_join = 0;
        return;
    } else {
        server.cluster->safe_to_join = 1;


Can we document and codify the exact criteria for setting safe_to_join=1 (e.g., "after nodes.conf is successfully written and fsync'ed, clusterState=ok, links stable for N ms")? Also specify when it returns to 0 (e.g., config write failure, epoch mismatch, handshake reset).
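
One way the criteria from this comment could be codified (a sketch restating the reviewer's proposed rules as a comment; the actual conditions in the PR may differ):

/* safe_to_join state machine (proposed, not final):
 *   -> 1 after nodes.conf has been successfully written and fsync'ed,
 *        the cluster state is OK, and links have been stable for N ms.
 *   -> 0 on a config write failure, an epoch mismatch, or a
 *        handshake reset. */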


Labels

cluster major-decision-pending Major decision pending by TSC team release-notes This issue should get a line item in the release notes

Projects

Status: In Progress
