add test process_capabilities_fail #3010
Conversation
Force-pushed 52dfb64 to 841a685
YJDoc2 left a comment:
Hey, thanks for the PR! There is some change needed - basically the current test validates that the capabilities call itself fails, i.e. it will error out from line 9. However, we want to check that the container creation itself fails. For this we will need to create a spec with an invalid value, and then assert that the container creation has failed.
For this, what we can do is the following (see the sketch after this list):
- create a valid spec, with a single valid capability
- then use this spec with `test_inside_container` - here you'll need to update the code of `test_inside_container` in `test_utils:204` to return an error instead of calling `unwrap`
- in the function call, we get to provide a callback function which runs right before the container is created. This fn gets the rootfs path as a param. The spec (named `config.json`) sits right outside the rootfs files for our tests, so you can edit the path to get the spec string, replace the cap name with an invalid name and re-write it there. Now we will have an invalid spec
- with this, you can check that `test_inside_container` fails because of the capabilities issue

Please let me know if the above explanation is not clear / you need any help.
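As a rough illustration of the callback step, here is a minimal sketch of a spec-corrupting helper. It assumes the callback receives the rootfs path and that `config.json` sits one directory above it, as described in the comment above; the function name `corrupt_capability` and both capability names are hypothetical, not taken from the PR.

```rust
use std::fs;
use std::path::Path;

// Hypothetical callback body: rewrite the config.json next to the
// rootfs so that container creation sees an invalid capability name.
fn corrupt_capability(rootfs: &Path) -> std::io::Result<()> {
    // config.json sits right outside the rootfs directory in these tests
    let config_path = rootfs
        .parent()
        .expect("rootfs should have a parent directory")
        .join("config.json");

    // Load the spec, replace the single valid capability (CAP_CHOWN is
    // just an example) with a made-up name, and write it back.
    let spec = fs::read_to_string(&config_path)?;
    let broken = spec.replace("CAP_CHOWN", "CAP_DOES_NOT_EXIST");
    fs::write(&config_path, broken)?;
    Ok(())
}
```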
Force-pushed beac1c7 to 2b72e74
Hey @kazmsk, thanks for the change, I'll try to take a look at this by the weekend.

Hi @YJDoc2, what is the status of the review?

Hey @kazmsk, sincere apologies from my side. I got busy with things, and I missed following up on this. May I ask you to sync with current main and resolve the conflicts? Again, apologies for not keeping up with this properly 🙏
Force-pushed 2b72e74 to 9950c8f
Hi @YJDoc2, I apologize for not being able to respond earlier.
YJDoc2 left a comment:
Hey, the test itself looks good, but there is one comment to fix. Also -
- can you add a comment on why we are loading the spec and then re-writing it; basically what I mentioned in my comment before your recent changes
- can you run `cargo fmt` and check that `just lint` is passing? That is why the CI is failing

After these are fixed, I think I'll go ahead and merge. Thanks :)
| "runtimetest".to_string(), | ||
| "process_capabilities_fail".to_string(), |
I don't think we need this here, the default command should be ok. We don't actually intend this to reach the runtimetest stage, but instead to fail in the creation itself. Right now, if we add this and somehow the container is created successfully, then we would still get an error from the runtime test, which would result in `test_inside_container` returning Failed and thus our test incorrectly passing. You can simply set the args to `sleep 1m` or such (check what we set by default in utils).
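For reference, a sleep-only process could be set up roughly as below, using the builder types from the oci-spec crate that youki's tests are built around; the exact builder method names should be verified against the repo, and the function name `sleep_spec` is hypothetical.

```rust
use oci_spec::runtime::{ProcessBuilder, Spec, SpecBuilder};

// Hypothetical sketch: a spec whose process just sleeps, so the test
// never depends on reaching the runtimetest binary.
fn sleep_spec() -> Spec {
    let process = ProcessBuilder::default()
        .args(vec!["sleep".to_string(), "1m".to_string()])
        .build()
        .expect("failed to build process");
    SpecBuilder::default()
        .process(process)
        .build()
        .expect("failed to build spec")
}
```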
I have set `sleep, 1m`, but I'm not sure where "the default" is specifically referring to. It would be helpful if you could provide a link.
Hey, so I checked this and code-wise it seems ok; but I'm getting one issue here - I suspect that this is related to my system not being configured correctly, and also to some changes done in latest Ubuntu-based systems, as seen in #3097 (comment). Can you check if the test is passing and failing correctly on your system, and if so, also add a check on the error we get, to verify it is because the cap is wrong and not because of any other problem? Apart from that, this is good, thanks :)
Hi @YJDoc2, is this the correct way to run tests in the local environment? Do the commands seem correct?
Force-pushed a117926 to 75af8b1
Force-pushed 78f0200 to 2d479a0
Hi @YJDoc2, thank you, I was able to reproduce the issue in my environment as well.
Hey @kazmsk, can you elaborate more on that? Why is it able to start when the spec has an invalid cap? Do you have a way to re-create this locally so I can try it out? Thanks :)
Hi @YJDoc2, when I ran the test, `test_inside_container` passed even though the capability is being overwritten with an invalid one, which led me to conclude that the container is actually starting up. Please let me know if my understanding is incorrect.
Sorry to interrupt - it seems that the […]. Let's take a look at the […].
As a result, the modified […]. To confirm this, it would be helpful to print the contents of […].
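Following that suggestion, a hypothetical debug helper (name and output format assumed, not from the PR) that dumps the spec from inside the callback could look like this:

```rust
use std::fs;
use std::path::Path;

// Hypothetical debug aid: print the config.json that sits next to the
// rootfs, to confirm whether the rewritten spec is the one the runtime
// actually reads.
fn dump_spec(rootfs: &Path) {
    let config_path = rootfs
        .parent()
        .expect("rootfs should have a parent directory")
        .join("config.json");
    match fs::read_to_string(&config_path) {
        Ok(contents) => eprintln!("config.json at {:?}:\n{}", config_path, contents),
        Err(e) => eprintln!("could not read {:?}: {}", config_path, e),
    }
}
```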
Hi @YJDoc2, I've made some additional changes, could you please take a look?
Hey @kazmsk, yes this seems correct. However, this fails on runc because the message is not matching - https://github.com/youki-dev/youki/actions/runs/14965059316/job/42053543183?pr=3010 Can we add a corresponding error string check as well? (Also comment regarding which string check is for which runtime.) Thanks :)
Hi @YJDoc2, thank you for reviewing it multiple times! I've updated the implementation to capture the error from the following message: […] Currently, if only a warning is detected (and no error), the test is treated as Failed. Additionally, I've added comments indicating which runtime each error message corresponds to, for clarity.
Hey @kazmsk, so I suspect that the "unable to apply" error in runc might be because of kernel support, so we should check for the warning only, with the […]
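A per-runtime error check of the kind discussed here could be sketched as below; both match strings are placeholders standing in for the real youki error and runc warning, which should be copied from actual runtime output.

```rust
// Hypothetical sketch: decide whether stderr shows the expected
// capability failure. The string literals are placeholders, not the
// real runtime messages.
fn is_expected_cap_failure(stderr: &str) -> bool {
    // youki: errors out when the capability name cannot be parsed
    let youki_error = stderr.contains("invalid capability"); // placeholder
    // runc: may only warn when it cannot apply the capability,
    // depending on kernel support
    let runc_warning = stderr.contains("ignoring unknown capability"); // placeholder
    youki_error || runc_warning
}
```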
YJDoc2 left a comment:
LGTM!
Thanks for opening the PR and patiently making the requested changes :)
This implements the process_capabilities_fail validation in #361