[pytest] Improve ansible_host #1006
lguohan merged 4 commits into sonic-net:master from dawnbeauty:ansible-host
Conversation
…sible module Signed-off-by: Zhiqian Wu <[email protected]>
Signed-off-by: Zhiqian Wu <[email protected]>
tests/ansible_host.py (outdated)

```python
def _run(self, *module_args, **complex_args):
    ignore_errors = complex_args.pop('ignore_errors', False)
```
`ignore_errors` is not a module argument. I would prefer user code to catch the exception.
@stepanblyschak
two reasons:

1. `ignore_errors` is not a module argument, so it should be popped here instead of being passed into the module. Of course, it could be passed in as a task argument and handled in `play_ds` of `pytest-ansible.module_dispatcher.v<ver>.ModuleDispatcherV<ver>._run`, but that would require more changes to the pytest-ansible module.
2. With a non-zero return code, user code could catch exceptions, but could not get the results if it needs to do something with them.
e.g.

```yaml
- shell: 'cmd return non-zero code'
  register: out
  failed_when: "'expected keywords' in out"
```

With `ignore_errors`, we could easily translate it into pytest:

```python
res = host.shell('cmd', ignore_errors=True)
assert 'expected keywords' in res['stdout']
```
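The `_run` wrapper proposed above can be sketched roughly as follows. This is a hypothetical illustration, not the actual pytest-ansible code: `AnsibleHost`'s constructor and the `runner` callable are stand-ins for the real module dispatcher, and `RuntimeError` stands in for whatever exception type the wrapper raises on failure.

```python
# Hypothetical sketch: `ignore_errors` is popped from the kwargs
# before they are forwarded, so the ansible module never sees it.
class AnsibleHost:
    def __init__(self, runner):
        # `runner` stands in for the pytest-ansible module dispatcher
        self._runner = runner

    def _run(self, *module_args, **complex_args):
        ignore_errors = complex_args.pop('ignore_errors', False)
        res = self._runner(*module_args, **complex_args)
        # With ignore_errors=True the caller gets the results back
        # even when the command returned a non-zero code.
        if res.get('rc', 0) != 0 and not ignore_errors:
            raise RuntimeError('module failed: %s' % res)
        return res
```

This keeps the failure information available to the caller without forcing a try/except around every command that is expected to fail.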
I agree with 2. Maybe we need to add an `AnsibleModuleException` which will include 'res', with 'res' pretty-printed by overriding `__repr__`?
That's good advice. But do you mean user code should extract the results from `AnsibleModuleException` if needed? Sometimes we don't know whether the cmd returns a non-zero code or not. To cover these cases, we might need code like:
```python
try:
    res = host.shell("cmd")
except AnsibleModuleException as exc:
    res = exc.results
assert 'some keywords' in res['stdout']
```
Even worse, we might need to execute a sequence of 'non-zero' cmds, which means too many try/except blocks.
Maybe add an `AnsibleModuleException` that includes the pretty-printed 'res' and keep 'ignore_errors'?
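The exception class being discussed could look roughly like this. A minimal sketch, not the PR's actual implementation: the constructor signature is assumed, and json is used here for pretty-printing, whereas the PR uses an ansible stdout callback plugin.

```python
import json

# Hypothetical sketch of AnsibleModuleException: it carries the raw
# module results and pretty-prints them when the exception is shown.
class AnsibleModuleException(Exception):
    def __init__(self, msg, results):
        super(AnsibleModuleException, self).__init__(msg)
        # Caller can recover the module results from the exception.
        self.results = results

    def __str__(self):
        # Message first, then the results rendered readably.
        return '%s\n%s' % (self.args[0],
                           json.dumps(self.results, indent=4, sort_keys=True))
```

With this, a failing module both raises a readable error and still exposes `exc.results` for the catch-and-inspect pattern shown above.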
I agree that this can lead to many try/except blocks. A shortcut function in `ansible_host` that returns the result even on `AnsibleModuleException` could be an option, e.g.:

```python
res = host.ignore_errors('shell', 'cmd')
```

which is a bit ugly but does not limit modules regarding the 'ignore_errors' key.
If we can guarantee that none of the existing modules use an 'ignore_errors' argument, I am OK with your proposal.
I only found that the module 'cp_network' uses 'ignore_errors' in ansible devel (master branch).
In ansible-2.0.0.1, no existing modules use it as an argument.
3 alternatives:

1. Add a shortcut function: `host.ignore_errors('shell', 'cmd')`
2. Add an argument 'module_ignore_errors' (a module could still use 'ignore_errors' if necessary): `host.shell('cmd', module_ignore_errors=True)`
3. (new) Add an attribute (or function) to enable/disable ignore_errors: `host.ignore_errors = True; host.shell('cmd'); host.ignore_errors = False`

I would prefer 2 > 1 > 3.
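The key property of alternative 2 can be sketched in a few lines. A hypothetical illustration (the helper name `split_args` is invented for this example): the wrapper-level flag is namespaced as `module_ignore_errors`, so a plain `ignore_errors` kwarg still reaches the underlying module untouched.

```python
# Hypothetical sketch of alternative 2: pop only the wrapper's own
# flag; everything else, including a module-level 'ignore_errors',
# is forwarded to the ansible module unchanged.
def split_args(**complex_args):
    wrapper_flag = complex_args.pop('module_ignore_errors', False)
    return wrapper_flag, complex_args
```

This avoids the name collision that made alternative 1's shortcut function necessary in the first place.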
Signed-off-by: Zhiqian Wu <[email protected]>
Add AnsibleModuleException with pretty-printed 'results' Signed-off-by: Zhiqian Wu <[email protected]>
@stepanblyschak please help review.
Also updated the descriptions.
@lguohan please review
Summary: the purpose of `test_container_checker_telemetry` was to manually
enable the telemetry container and ensure that monit did not produce any
errors about it not running. A recent upstream change in featured
means that telemetry is no longer started in response to enabling it;
now you need to manually start the container through `sudo systemctl
start telemetry` for it to actually start. As such, this test failed because
monit produced errors stating `Expected containers not running:
telemetry`.
This test no longer serves its purpose, so I am removing the test case.
Telemetry-related code is starting to be removed, so we shouldn't
worry about maintaining its special behaviour as an optional
container.
This will be upstreamed.
This will be upstreamed
Fixes # (issue)
<!--
- Fill x for your type of change.
- e.g.
- [x] Bug fix
-->
- [x] Bug fix
- [ ] Testbed and Framework(new/improvement)
- [ ] New Test case
- [ ] Skipped for non-supported platforms
- [ ] Test case improvement
- [ ] 202505
- [x] 202511
manually ran the test
<!--
(If it's a new feature, new test case)
Did you update documentation/Wiki relevant to your implementation?
Link to the wiki page?
-->
Co-authored-by: Ronan Mac Fhlannchadha <[email protected]>
Signed-off-by: Ronan Mac Fhlannchadha <[email protected]>
[pytest] adding ability to use stdout callback plugin when running ansible module
[pytest ansible_host] Support 'module_ignore_errors' option
Signed-off-by: Zhiqian Wu <[email protected]>
Description of PR
Summary:
With a stdout callback plugin, we can get pretty-printed output of ansible module results.
It is simple and convenient for reading the outputs or printing dict/list objects.
So I add a function `dump_ansible_results` to do that. 'yaml' is the default stdout callback plugin, backported from ansible 2.5 in PR #1005; you may choose another plugin whenever calling `dump_ansible_results`. We also need to ignore errors when executing some cmds with non-zero return codes.
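The fallback behaviour described above ("if loading fails, nothing changes") can be illustrated with a minimal sketch. This is not pytest-ansible's or the PR's actual code: the `plugin_dump` parameter is invented for the example, and json stands in for the 'yaml' stdout callback plugin.

```python
import json

# Hypothetical sketch of a dump_ansible_results-style helper: try to
# pretty-print the results; on any failure, return them unchanged.
def dump_ansible_results(results, plugin_dump=None):
    try:
        # Default renderer stands in for loading a stdout callback
        # plugin such as 'yaml'.
        dump = plugin_dump or (lambda r: json.dumps(r, indent=4, sort_keys=True))
        return dump(results)
    except Exception:
        # Loading/rendering failed: fall back to the raw results.
        return results
```

The caller can always print the return value: it is either the pretty-printed string or the original results object.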
Type of change
Approach
How did you do it?
Add `dump_ansible_results`: it loads a stdout callback plugin and returns pretty-printed results. If loading fails, nothing changes.
Add class `AnsibleModuleException` with pretty-printed ansible module results by overriding `__str__`. Raise it whenever a module fails.
In `ansible_host`, add a `module_ignore_errors` argument.
How did you verify/test it?
Assume we execute a simple cmd with the shell module: `echo hello && false`.
Without the stdout callback plugin, the output is like:
With the stdout callback plugin, the output is like: