Trigger manual failover on SIGTERM / shutdown to cluster primary #1091
When a primary disappears, its slots are not served until an automatic failover happens. That takes about n seconds (the node timeout plus some seconds), which is too much time for us to not accept writes.

If the host machine is about to shut down for any reason, the processes typically get a SIGTERM and have some time to shut down gracefully. In Kubernetes, this is 30 seconds by default.

When a primary receives a SIGTERM or a SHUTDOWN, let it trigger a failover to one of the replicas as part of the graceful shutdown. This can reduce some of the unavailability time: normally the replica needs to sense the primary failure within the node timeout before initiating an election, but now it can initiate an election quickly, win it, and gossip it.

This closes valkey-io#939.

Signed-off-by: Binbin <[email protected]>
Codecov Report

@@ Coverage Diff @@
##           unstable    #1091      +/-   ##
============================================
+ Coverage     71.02%   71.09%   +0.06%
============================================
  Files           123      123
  Lines         65683    65766      +83
============================================
+ Hits          46653    46754     +101
+ Misses        19030    19012      -18
zuiderkwast left a comment

Nice! Thanks for doing this.
The PR description can be updated to explain the solution. Now it is just copy-pasted from the issue. :)
I'm thinking that doing failover in finishShutdown() is maybe too late. finishShutdown is only called when all replicas already have replication offset equal to the primary (checked by isReadyToShutdown()), or after timeout (10 seconds). If one replica is very slow, it will delay the failover. I think we can do the manual failover earlier.
This is the sequence:

- SHUTDOWN or SIGTERM calls prepareForShutdown(). Here, we pause clients for writing and start waiting for the replicas' offsets.
- In serverCron(), we check isReadyToShutdown(), which checks if all replicas have repl_ack_off == primary_repl_offset. If yes, finishShutdown() is called; otherwise we wait more.
- finishShutdown().

I think we can send CLUSTER FAILOVER FORCE to the first replica which has repl_ack_off == primary_repl_offset. We can do it in isReadyToShutdown(), I think. (We can rename it to indicate it does more than check if ready.) Then, we also wait for it to send the failover auth request, and for the primary to vote, before isReadyToShutdown() returns true.
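The target selection suggested here can be sketched as follows. This is a toy model, not Valkey source; the names (`Replica`, `pick_failover_target`) are illustrative, and only the condition `repl_ack_off == primary_repl_offset` is taken from the discussion.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Replica:
    node_id: str
    repl_ack_off: int  # last replication offset this replica has acknowledged

def pick_failover_target(replicas: list[Replica],
                         primary_repl_offset: int) -> Optional[Replica]:
    """Return the first fully caught-up replica, or None to keep waiting
    (a serverCron-style loop would then re-check on the next tick)."""
    for r in replicas:
        if r.repl_ack_off == primary_repl_offset:
            return r  # candidate to receive CLUSTER FAILOVER FORCE
    return None

replicas = [Replica("node-a", 90), Replica("node-b", 100), Replica("node-c", 100)]
target = pick_failover_target(replicas, 100)          # node-b is first caught up
nobody = pick_failover_target([Replica("node-a", 90)], 100)  # None: wait more
```

In this sketch the primary would keep polling until some replica catches up or the shutdown timeout expires.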
What do you think?
The issue description is good and very detailed so I copied it; I will update it later.

Yeah, a failover as soon as possible is good, but isn't it true that the primary is down only after it actually exits? So in this case, if a replica is slow and does not get the chance to catch up with the primary, and another replica triggers the failover, the slow replica will need a full sync when it does the reconfiguration.

So let me sort it out again: you are suggesting that if one replica has already caught up with the offset, we should trigger a failover immediately? I guess that also makes sense in this case.
I didn't think about this. The replica can't do a psync to the new primary after failover? If it can't, then maybe you're right that the primary should wait for all replicas, at least for some time, to avoid a full sync. So, wait for all, then trigger the manual failover. If you want, we can add another wait after that (after "finish shutdown"), so the primary can vote for the replica before exit. Wdyt?
Sorry for the late reply, I somehow missed this thread.

Yes, I think this may happen: if the primary does not flush its output buffer to the slow replica, then when doing the reconfiguration, the slow replica may use an old offset to psync with the new primary, which will cause a full sync. This may happen, but the probability should be small, since the primary calls flushReplicasOutputBuffers to write as much as possible before shutdown.
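The partial-sync concern being discussed can be modeled with a toy check. This is not Valkey source; the function and parameter names are illustrative. The underlying rule is the standard one: a partial resync is possible only if the replica's offset still falls inside the new primary's replication backlog window.

```python
def can_partial_sync(replica_offset: int,
                     backlog_start: int,
                     backlog_end: int) -> bool:
    """True if the backlog still covers the replica's offset (partial resync),
    False if the replica would need a full sync with the new primary."""
    return backlog_start <= replica_offset <= backlog_end

# The fast replica was flushed up to date; the slow one missed some writes
# and its offset has fallen behind the start of the backlog window.
fast_ok = can_partial_sync(replica_offset=180, backlog_start=50, backlog_end=200)
slow_ok = can_partial_sync(replica_offset=40, backlog_start=50, backlog_end=200)
```

This is why flushing the output buffers before shutdown lowers (but does not eliminate) the chance of a full sync.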
As for waiting for the vote, I think both ways are OK. Even if we don't wait, I think the replica will have enough votes. If we really want to, we can even wait until the replica successfully becomes primary before exiting... Do you have a final decision? I will do whatever you think is right.
I'm thinking about whether there are any corner cases, like if the cluster is too small to have a quorum without the shutting-down primary... If it is simple, I prefer to let the primary wait and vote; then we can avoid that corner case. But if this implementation of waiting for the vote will be too complex, then let's just skip the vote. I think that's also fine. Without this feature, we wait for automatic failover, which will also not have the vote from the already shut down primary.
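The quorum concern can be illustrated with simple majority arithmetic. This is a toy sketch, not Valkey source, and it assumes the usual cluster rule that a replica wins an election with votes from a majority of the primaries; the function names are illustrative.

```python
def majority(num_primaries: int) -> int:
    """Votes needed to win: a strict majority of all primaries."""
    return num_primaries // 2 + 1

def election_can_succeed(num_primaries: int, voters_available: int) -> bool:
    return voters_available >= majority(num_primaries)

# 3 primaries, the shutting-down one exits without voting:
# 2 voters remain and the majority is 2, so the election still succeeds.
three_ok = election_can_succeed(3, 2)

# 2 primaries, one exits without voting: 1 voter left but the majority is 2,
# so the election fails; here the primary voting before exit would help.
two_ok = election_can_succeed(2, 1)
```

This is the "cluster too small for quorum" case: the smaller the cluster, the more the shutting-down primary's own vote matters.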
I am going to skip the vote for now; I tried a bit and it seemed not easy, and not good looking, to finish it. Maybe I'll have a better idea later; I will keep it in mind.
zuiderkwast left a comment
> i am going to skip the vote for now, i tried a bit which seemed not easy and not good looking to finish it. Maybe I'll have a better idea later, i will keep it in mind.
I understand. Simple is better.
But possible data loss is not good. See comments below.
This is an interesting idea. I like the direction we are going in, but I agree with @zuiderkwast that potential data loss is not appealing. We can do both though, IMO: triggering a (graceful) failover as part of shutdown, and an explicit admin path. Today, we can't forget "myself" nor "my primary" (with the latter being a dynamic state). This adds operational complexity. Imagine that the admin could just send …
@PingXie Yes, it's a good idea, but this PR is about the scenario where the machine is taken down outside the control of the Valkey admin. For example, in Kubernetes, when a worker is shut down, SIGTERM is sent to all processes and it waits for 30 seconds by default. When you shut down your laptop, I believe it's similar: each application gets SIGTERM and has some time to do a graceful shutdown.
Discussed briefly in the core team meeting. No more open questions, so once the low-level details are done, we can merge it for 9.0.
I have a few new open questions. Will also reset my approval next |
PingXie left a comment
LGTM.
madolson left a comment
I never reviewed all the details, but the high level looks good to me now.
…key-io#1091)

When a primary disappears, its slots are not served until an automatic failover happens. It takes about n seconds (node timeout plus some seconds). It's too much time for us to not accept writes.

If the host machine is about to shut down for any reason, the processes typically get a SIGTERM and have some time to shut down gracefully. In Kubernetes, this is 30 seconds by default.

When a primary receives a SIGTERM or a SHUTDOWN, let it trigger a failover to one of the replicas as part of the graceful shutdown. This can reduce some unavailability time: normally the replica needs to sense the primary failure within the node-timeout before initiating an election, and now it can initiate an election quickly, win it, and gossip it.

The primary does this by sending a CLUSTER FAILOVER command to the replica. We added a REPLICAID arg to CLUSTER FAILOVER; after receiving the command, the replica checks whether the node-id is its own, and if not, the command is ignored. The node-id is set by the replica through client setname during the replication handshake.

### New argument for CLUSTER FAILOVER
The format now becomes CLUSTER FAILOVER [FORCE | TAKEOVER] [REPLICAID node-id]. This arg is not intended for user use, so it is not added to the JSON file.

### Replica sends REPLCONF SET-CLUSTER-NODE-ID to inform its node-id
During the replication handshake, the replica now uses REPLCONF SET-CLUSTER-NODE-ID to inform the primary of its node-id.

### Primary issues CLUSTER FAILOVER
The primary sends CLUSTER FAILOVER FORCE REPLICAID node-id to all replicas, because the replication buffer is shared, but only the replica with the matching id will execute it.

### Add a new auto-failover-on-shutdown config
People can disable this feature if they don't like it; the default is 0.

This closes valkey-io#939.

---------

Signed-off-by: Binbin <[email protected]>
Co-authored-by: Viktor Söderqvist <[email protected]>
Co-authored-by: Ping Xie <[email protected]>
Co-authored-by: Harkrishn Patro <[email protected]>
When reviewing #2195, I'm thinking back on this feature again. Here, we added a new config, `auto-failover-on-shutdown`. This is not released yet, so we can still change it.
Right, I totally forgot it. I guess it might be the difference between active and passive? auto-failover-on-shutdown (and failover on shutdown-on-sigterm, for sure) is a passive way, and SHUTDOWN FAILOVER is an active way. And you are right, this anyway is not released and we can change it for good. @valkey-io/core-team thoughts?
In #1091, a new config `auto-failover-on-shutdown` was added. This PR changes the config to make it unified with other shutdown related options. This feature has not yet been released, so it's not a breaking change. The auto-failover-on-shutdown config is replaced by * A new "failover" option to the existing configs `shutdown-on-sigterm` and `shutdown-on-sigint`. * A new FAILOVER option to the SHUTDOWN command. Additionally, a history entry is added to the SHUTDOWN command which was missing in #2195. Follow-up of #1091. Signed-off-by: Viktor Söderqvist <[email protected]>
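Going by the description above, the replacement configuration might look like the following sketch. The option spellings are taken from this comment's own wording and should be treated as illustrative until checked against the released documentation:

```
# Graceful shutdown with failover on signals
# (replaces the dedicated auto-failover-on-shutdown config):
shutdown-on-sigterm failover
shutdown-on-sigint failover
```

The one-off, explicit form would then be the SHUTDOWN command with the new FAILOVER option.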
Update

The `auto-failover-on-shutdown` config was removed in #2292, please see #2292 for more details.

========================================
When a primary disappears, its slots are not served until an automatic
failover happens. It takes about n seconds (node timeout plus some seconds).
It's too much time for us to not accept writes.
If the host machine is about to shutdown for any reason, the processes
typically get a sigterm and have some time to shutdown gracefully. In
Kubernetes, this is 30 seconds by default.
When a primary receives a SIGTERM or a SHUTDOWN, let it trigger a failover
to one of the replicas as part of the graceful shutdown. This can reduce
some unavailability time: normally the replica needs to sense the
primary failure within the node-timeout before initiating an election,
and now it can initiate an election quickly, win it, and gossip it.
The primary does this by sending a CLUSTER FAILOVER command to the replica.
We added a REPLICAID arg to CLUSTER FAILOVER; after receiving the command,
the replica checks whether the node-id is its own, and if not, the command
is ignored. The node-id is set by the replica through client setname
during the replication handshake.
### New argument for CLUSTER FAILOVER

The format now becomes CLUSTER FAILOVER [FORCE | TAKEOVER] [REPLICAID node-id].
This arg is not intended for user use, so it is not added to the JSON file.
### Replica sends REPLCONF SET-CLUSTER-NODE-ID to inform its node-id

During the replication handshake, the replica now uses REPLCONF SET-CLUSTER-NODE-ID
to inform the primary of its node-id.
### Primary issues CLUSTER FAILOVER

The primary sends CLUSTER FAILOVER FORCE REPLICAID node-id to all replicas,
because the replication buffer is shared, but only the replica with the
matching id will execute it.
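The replica-side filtering described here can be sketched as a toy simulation (not Valkey source; function and node-id names are illustrative): one command goes into the shared stream, every replica sees it, and only the addressed replica acts.

```python
def should_execute_failover(my_node_id: str, command: list[str]) -> bool:
    """Replica-side check: act on CLUSTER FAILOVER only if the REPLICAID
    argument matches our own node-id; a command without REPLICAID is the
    plain user-issued form and always applies."""
    if "REPLICAID" in command:
        wanted = command[command.index("REPLICAID") + 1]
        return wanted == my_node_id
    return True

# The primary writes one command into the shared replication buffer;
# all three replicas receive it, only node-b executes it.
cmd = ["CLUSTER", "FAILOVER", "FORCE", "REPLICAID", "node-b"]
decisions = {nid: should_execute_failover(nid, cmd)
             for nid in ("node-a", "node-b", "node-c")}
```

Broadcasting plus a match-and-ignore check avoids maintaining per-replica replication buffers for this one command.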
### Add a new auto-failover-on-shutdown config

People can disable this feature if they don't like it; the default is 0.
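As a sketch, enabling the behavior in the configuration file might look like this (the yes/no value format is an assumption based on typical boolean configs; note the config was later removed in #2292):

```
# Trigger a failover to a replica on SIGTERM / SHUTDOWN (off by default):
auto-failover-on-shutdown yes
```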
This closes #939.