Add support for --detach flag in stack deploy #4258
Conversation
Force-pushed from 7f455cd to 93ef689
Hey @thaJeztah and @neersighted, I understand you're super busy, but wanted to see if there's anything I could do on my part to help move this forward. It will be a great improvement for our workflow, as we'll be able to deploy with confidence! Thanks!

Waiting for this too.

This looks fine to me, matches behavior in
```diff
 	response, err := apiClient.ServiceCreate(ctx, serviceSpec, createOpts)
 	if err != nil {
-		return errors.Wrapf(err, "failed to create service %s", name)
+		return errors.Wrapf(err, "Failed to create service %s", name)
 	}
```
These errors should probably remain lowercase (the Go convention is to have lowercase errors). (If needed, we can still add some prefix when printing.)
Reverted this change in 0dcc6f0!
```go
if !detach {
	for _, serviceID := range serviceIDs {
		if err := servicecli.WaitOnService(ctx, dockerCli, serviceID, quiet); err != nil {
			errs = append(errs, fmt.Sprintf("%s: %v", serviceID, err))
```
haven't tried if it'd look good, and probably fine for a follow-up, but we may be able to use the new errors.Join here, and create a multi-error 🤔
@thaJeztah 👋 I have updated the code according to the suggestion by using errors.Join() in 0dcc6f0.
Given the opportunity, I replaced errors.Errorf and errors.Wrapf with fmt.Errorf in this file, to avoid having to import one of the two errors packages under an alias.
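For illustration, here is a minimal, self-contained sketch of that pattern — collecting per-service errors and combining them with the standard library's errors.Join. The waitOnService helper and service IDs below are hypothetical stand-ins, not the PR's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// waitOnService is a hypothetical stand-in for servicecli.WaitOnService.
func waitOnService(serviceID string) error {
	if serviceID == "svc-b" {
		return errors.New("did not converge")
	}
	return nil
}

func main() {
	var errs []error
	for _, serviceID := range []string{"svc-a", "svc-b", "svc-c"} {
		if err := waitOnService(serviceID); err != nil {
			// Record each failure together with its service ID.
			errs = append(errs, fmt.Errorf("%s: %w", serviceID, err))
		}
	}
	// errors.Join (Go 1.20+) combines all collected errors into one.
	if err := errors.Join(errs...); err != nil {
		fmt.Println(err) // svc-b: did not converge
	}
}
```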
Force-pushed from 93ef689 to 0dcc6f0
Hey @thaJeztah and @neersighted 👋 Is there anything else I can do to make sure that this one moves forward?
```diff
 }

-func deployServices(ctx context.Context, dockerCli command.Cli, services map[string]swarm.ServiceSpec, namespace convert.Namespace, sendAuth bool, resolveImage string) error {
+func deployServices(ctx context.Context, dockerCli command.Cli, services map[string]swarm.ServiceSpec, namespace convert.Namespace, sendAuth bool, resolveImage string, detach bool, quiet bool) error {
```
Not critical (as it's a non-exported function), but the list of arguments is getting really long, and consecutive arguments of the same type make for an awkward API.
Starting to wonder if we should instead pass a struct, or perhaps we could even pass the options.Deploy struct, as we already have that (haven't checked if it's in the proper package for that).
Hey @thaJeztah 👋,
Passing options.Deploy in deployServices was increasing the cyclomatic complexity. I've decided to keep deployServices as it previously was and call waitOnServices one level higher. This way the logic is clearer while keeping the cyclomatic complexity as it was.
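For readers unfamiliar with the pattern being weighed here, a minimal sketch of passing an options struct instead of a long argument list follows. All names are hypothetical, and this is not the shape the PR ultimately adopted:

```go
package main

import "fmt"

// deployOptions is a hypothetical struct bundling the flags that
// deployServices would otherwise take as trailing positional arguments.
type deployOptions struct {
	sendAuth     bool
	resolveImage string
	detach       bool
	quiet        bool
}

// deployServices is a stub standing in for the real function; only the
// shape of the signature matters for this illustration.
func deployServices(opts deployOptions) error {
	fmt.Printf("deploying (detach=%v, quiet=%v)\n", opts.detach, opts.quiet)
	return nil
}

func main() {
	// Call sites name each field explicitly, which avoids the
	// "consecutive arguments of the same type" readability problem.
	_ = deployServices(deployOptions{
		sendAuth:     true,
		resolveImage: "always",
		detach:       false,
	})
}
```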
```go
flags.BoolVarP(&opts.Detach, "detach", "d", true, "Exit immediately instead of waiting for the stack services to converge")
flags.BoolVarP(&opts.Quiet, "quiet", "q", false, "Suppress progress output")
```
It looks like this PR (currently, at least) keeps the old behavior (docker stack deploy being non-interactive/detached by default).
If the plan is to eventually make "interactive" (--detach=false) the default, I also went looking at how we handled the transition for docker service create; moby/moby@21ec12b (moby/moby#31144)
For that we chose to print a warning; wondering if (and if so: when? / which release?) we want to do the same for docker stack, and inform users that we plan to switch to non-detach as the default, with the option to return to the old (--detach[=true]) behavior.
I've added a message just like in moby/moby@21ec12b (moby/moby#31144), as you suggested, indicating that we plan to switch to non-detach as the default.
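As a rough illustration of how such a transition warning can be wired up with cobra/pflag (which docker/cli uses): the key is pflag's Changed, which reports whether the user passed the flag explicitly. The exact message wording and placement in the PR may differ — this is only a sketch:

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var detach bool
	cmd := &cobra.Command{
		Use: "deploy",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Changed reports whether the user set the flag explicitly,
			// so we warn only when the (current) default applies implicitly.
			if !cmd.Flags().Changed("detach") {
				fmt.Fprintln(os.Stderr, "Since --detach=false was not specified, tasks will be created in the background.")
			}
			return nil
		},
	}
	cmd.Flags().BoolVarP(&detach, "detach", "d", true,
		"Exit immediately instead of waiting for the stack services to converge")
	_ = cmd.Execute()
}
```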
Force-pushed from 0dcc6f0 to e22a17d
Force-pushed from e22a17d to 73a60fc
Hey @thaJeztah @neersighted 👋 I've implemented all the suggested fixes and rebased the PR! Is there anything else that I can do to move this one forward?
Force-pushed from 73a60fc to 7f23cad
Codecov Report
Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #4258      +/-   ##
==========================================
+ Coverage   59.58%   61.24%   +1.65%
==========================================
  Files         287      287
  Lines       24722    20063    -4659
==========================================
- Hits        14731    12287    -2444
+ Misses       9106     6884    -2222
- Partials      885      892       +7
```
Force-pushed from 7f23cad to bc35d72
@thaJeztah It seems that we have to re-run the CI, since it failed because of an unrelated error. Let me know if there's anything left to be done here ✌️
I gave it a kick. For posterity; this was the error. Probably flaky, or some connection issues on GitHub Actions (not sure if I've seen that specific one before, so cc @crazy-max - just in case it pops up more frequently)
What's the status of this PR? Are we ok here?
Sorry for the delay (again), jumping between "etoomany" things. I gave this a try, and I see a couple of things happening when trying this PR;

Perhaps it's not a strict blocker for this PR, but it does all feel a bit "unfinished". Perhaps we can't fix all the progress output (some of that may also be related to the daemon response - haven't checked that), but we could consider
Here's what I tried with this PR; a compose file with 3 services, one of which has a health-check that takes some time to become healthy:

```yaml
services:
  a_service:
    image: nginx:alpine
    healthcheck:
      test: ["CMD", "sh", "-c", "if [ ! -f \"/count\" ] ; then ctr=0; else ctr=`cat /count`; fi; ctr=`expr $${ctr} + 1`; echo \"$${ctr}\" > /count; if [ \"$$ctr\" -gt 4 ] ; then exit 0; else exit 1; fi"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 60s
    deploy:
      replicas: 3
  b_service:
    image: nginx:alpine
  c_service:
    image: nginx:alpine
```
When deploying the stack, the order was randomized; the first service ("service_c") deployed immediately, but then was checked for 5 seconds to verify if it was "stable". After this, the second service was waited for ("service_a"). This service has the health-check, so it now had to wait for a minute or so to become healthy;

```console
$ time docker stack deploy -c ./docker-compose.yaml --detach=false mystack
Creating network mystack_default
Creating service mystack_c_service
Creating service mystack_a_service
Creating service mystack_b_service
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service oj3m4chiob0af8t9qrt320gmj converged
overall progress: 0 out of 3 tasks
1/3: starting  [============================================>      ]
2/3: starting  [============================================>      ]
3/3: starting  [============================================>      ]
```

After that, it continued with the next service ("service_b") which (in the meantime) already was deployed successfully, but due to the client-side logic to verify if it's stable, it still waits for 5 seconds to verify that;

```console
...
verify: Service z9xhes4ie4maux02hwskinv2i converged
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service nbzvce7qc0vh7shn9rpdkaxdd converged

________________________________________________________
Executed in   64.18 secs      fish           external
   usr time  440.30 millis    0.18 millis  440.12 millis
   sys time  182.75 millis    1.22 millis  181.53 millis
```
laurazard left a comment:
Overall LGTM, thank you!
```go
if len(errs) > 0 {
	return errors.Join(errs...)
}
```
I think you can just return errors.Join(errs...) here; it'll be nil if len(errs) == 0.
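This works because errors.Join (Go 1.20+) returns nil when given no errors, or only nil errors. A quick self-contained check:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	var errs []error
	fmt.Println(errors.Join(errs...) == nil) // true: an empty slice joins to nil
	errs = append(errs, nil, nil)
	fmt.Println(errors.Join(errs...) == nil) // true: nil errors are discarded
}
```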
Added --detach and --quiet/-q flags to stack deploy. Setting --detach=false waits until all of the stack services have converged. Shows progress bars for each individual task, unless --quiet/-q is specified.

Co-authored-by: Sebastiaan van Stijn <[email protected]>
Signed-off-by: George Margaritis <[email protected]>
Force-pushed from bc35d72 to b086d72
Did a quick rebase and re-generated the markdown as we updated the

```diff
diff --git a/docs/reference/commandline/stack_deploy.md b/docs/reference/commandline/stack_deploy.md
index c8b116a09..82934f6f9 100644
--- a/docs/reference/commandline/stack_deploy.md
+++ b/docs/reference/commandline/stack_deploy.md
@@ -12,7 +12,7 @@ Deploy a new stack or update an existing stack
 | Name                                                      | Type          | Default  | Description                                                                                        |
 |:----------------------------------------------------------|:--------------|:---------|:---------------------------------------------------------------------------------------------------|
 | [`-c`](#compose-file), [`--compose-file`](#compose-file)  | `stringSlice` |          | Path to a Compose file, or `-` to read from stdin                                                  |
-| `-d`, `--detach`                                          |               |          | Exit immediately instead of waiting for the stack services to converge                             |
+| `-d`, `--detach`                                          | `bool`        | `true`   | Exit immediately instead of waiting for the stack services to converge                             |
 | `--prune`                                                 |               |          | Prune services that are no longer referenced                                                       |
 | `-q`, `--quiet`                                           |               |          | Suppress progress output                                                                           |
 | `--resolve-image`                                         | `string`      | `always` | Query the registry to resolve image digest and supported platforms (`always`, `changed`, `never`)  |
```

I also opened a tracking ticket to look at follow-up improvements discussed above; #4907
thaJeztah left a comment:
LGTM, let's bring this one in as a first iteration.
Part of #373 along with #4259
Added `--detach` and `--quiet`/`-q` flags to `stack deploy`. Setting `--detach=false` waits until all of the services have converged. Shows progress bars for each individual task, unless `--quiet`/`-q` is specified.

I have used `WaitOnService` under the hood, so that the behavior is in line with `service update` or `service create` while using the `--detach` flag.
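For completeness, a few usage examples of the new flags (the stack and file names below are placeholders):

```console
# Default behavior (detached): returns as soon as the services are created.
$ docker stack deploy -c docker-compose.yaml mystack

# Wait for all services to converge, with per-task progress bars.
$ docker stack deploy -c docker-compose.yaml --detach=false mystack

# Wait for convergence, but suppress progress output.
$ docker stack deploy -c docker-compose.yaml --detach=false --quiet mystack
```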