
[pytest] Improve ansible_host#1006

Merged
lguohan merged 4 commits into sonic-net:master from dawnbeauty:ansible-host
Jul 28, 2019

Conversation

@dawnbeauty (Contributor) commented Jul 12, 2019

[pytest] adding ability to use stdout callback plugin when running ansible module
[pytest ansible_host] Support 'module_ignore_errors' option

Signed-off-by: Zhiqian Wu [email protected]

Description of PR

Summary:

  1. stdout callback
    With a stdout callback plugin, we get pretty-printed output of ansible module results.
    This makes it simple and convenient to read the output or print dict/list objects.

So I added a function dump_ansible_results to do that. 'yaml' is the default stdout callback plugin, backported from ansible 2.5 in PR #1005; you may choose another plugin whenever calling dump_ansible_results.
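The idea can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: the plugin-loader import path and the JSON fallback are assumptions.

```python
import json


def dump_ansible_results(results, plugin_name="yaml"):
    """Pretty-print ansible module results via a stdout callback plugin.

    Hypothetical sketch of the behaviour described above: try to load an
    Ansible stdout callback plugin; if anything fails (e.g. Ansible is
    not installed or the plugin is missing), fall back to a plain JSON
    dump so the results stay readable either way.
    """
    try:
        # Ansible's plugin loader (the import path is an assumption here).
        from ansible.plugins.loader import callback_loader
        plugin = callback_loader.get(plugin_name)
        return plugin._dump_results(results)
    except Exception:
        # Fallback: pretty-print with the standard library instead.
        return json.dumps(results, indent=2, sort_keys=True, default=str)
```

Either branch turns a raw result dict into something readable inside an error message or a pytest traceback.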

  2. module_ignore_errors
    We need to ignore errors when executing some commands that return non-zero exit codes.

Type of change

  • [ ] Bug fix
  • [x] Testbed and Framework (new/improvement)
  • [ ] Test case (new/improvement)

Approach

How did you do it?

Added dump_ansible_results: it loads a stdout callback plugin and returns pretty-printed results; if loading fails, the results are returned unchanged.
Added class AnsibleModuleException, which pretty-prints the ansible module results by overriding __str__; it is raised whenever a module fails.

In ansible_host, added a module_ignore_errors argument.
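The exception class might look like the following minimal sketch; the names mirror the description above, but the body is illustrative (json pretty-printing stands in for the callback-plugin formatting):

```python
import json


class AnsibleModuleException(Exception):
    """Raised when an ansible module fails; carries the module results."""

    def __init__(self, msg, results):
        super(AnsibleModuleException, self).__init__(msg)
        self.msg = msg
        self.results = results

    def __str__(self):
        # Pretty-print the results so the pytest traceback stays readable.
        pretty = json.dumps(self.results, indent=2, sort_keys=True, default=str)
        return "ERROR! {}\nAnsible Results => {}".format(self.msg, pretty)
```

Callers that still need the data can catch the exception and read `exc.results`.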

How did you verify/test it?

Assume we execute a simple command via the shell module: echo hello && false

Without the stdout callback plugin, the output looks like:

# py.test  --inventory veos --host-pattern all --user admin   --testbed vms-t0 --testbed_file testbed.csv   --show-capture=stdout test_fixtures.py::test2
...<omit some outputs>

    def test2(duthost):
>       duthost.shell("echo hello && false")

test_fixtures.py:20: 

...<omit some outputs>

    def _run(self, *module_args, **complex_args):
    
        ignore_errors = complex_args.pop('ignore_errors', False)
        res = self.module(*module_args, **complex_args)[self.hostname]
        if res.is_failed and not ignore_errors:
>           raise Exception("run module {} failed, errmsg {}".format(self.module_name, dump_ansible_results(res)))
E           Exception: run module shell failed, errmsg {u'changed': True, u'end': u'2019-07-12 09:30:43.029062', '_ansible_no_log': False, u'stdout': u'hello', u'cmd': u'echo hello && false', u'rc': 1, 'failed': True, u'stderr': u'', u'delta': u'0:00:00.002947', 'invocation': {'module_name': u'command', u'module_args': {u'creates': None, u'executable': None, u'chdir': None, u'_raw_params': u'echo hello && false', u'removes': None, u'warn': True, u'_uses_shell': True}}, 'stdout_lines': [u'hello'], u'start': u'2019-07-12 09:30:43.026115', u'warnings': []}

ansible_host.py:27: Exception

With the stdout callback plugin, the output looks like:

py.test  --inventory veos --host-pattern all --user admin -vvvv  --testbed vms-t0 --testbed_file testbed.csv   --show-capture=stdout test_fixtures.py::test2
...<omit some outputs>

    def test2(duthost):
>       duthost.shell("echo hello && false")

test_fixtures.py:20: 

...<omit some outputs>

    def _run(self, *module_args, **complex_args):
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
    
        res = self.module(*module_args, **complex_args)[self.hostname]
        if res.is_failed and not module_ignore_errors:
>           raise AnsibleModuleException("run module {} failed".format(self.module_name), res)
E           AnsibleModuleException: ERROR! run module shell failed
E           Ansible Results => changed=true 
E             cmd: echo hello && false
E             delta: '0:00:00.003753'
E             end: '2019-07-25 03:37:11.906237'
E             failed: true
E             rc: 1
E             start: '2019-07-25 03:37:11.902484'
E             stderr: ''
E             stdout: hello
E             stdout_lines: <omitted>
E             warnings: []

@dawnbeauty dawnbeauty changed the title [pytest] adding ability to use stdout callback plugin [pytest] Improve ansible_host Jul 12, 2019

def _run(self, *module_args, **complex_args):

ignore_errors = complex_args.pop('ignore_errors', False)
Contributor comment:
ignore_errors is not a module argument. I would prefer user code to catch the exception.

@dawnbeauty (Contributor, Author) replied Jul 15, 2019:

@stepanblyschak
two reasons:

  1. ignore_errors is not a module argument, so it should be popped here instead of being passed into the module.
    Of course, it could be passed as a task argument and handled in play_ds of pytest-ansible.module_dispatcher.v<ver>.ModuleDispatcherV<ver>._run, but that would require more changes to the pytest-ansible module.

  2. With a non-zero exit code, the user could catch the exception, but could not get the results if they need to do something with them.
    e.g.

- shell: 'cmd return non-zero code'
  register: out
  failed_when: 'expected keywords' in out

With ignore_errors, we could easily translate it into pytest:

res  = host.shell('cmd', ignore_errors=True)
assert 'expected keywords' in res['stdout']

Contributor reply:
I agree with 2. Maybe we need to add an AnsibleModuleException which includes 'res', with 'res' pretty-printed by overriding __repr__?

@dawnbeauty (Contributor, Author) replied:

It's good advice. But do you mean the user extracts results from the AnsibleModuleException when needed? Sometimes we don't know whether the command will return a non-zero code or not. To cover these cases, we might need code like:

try:
   res = host.shell("cmd")
except AnsibleModuleException as exc:
   res = exc.results
assert 'some keywords' in res['stdout']

Even worse, we might need to execute a sequence of 'non-zero' commands, which means too many try/except blocks.

Maybe add AnsibleModuleException with the pretty-printed 'res' and keep 'ignore_errors'?

Contributor reply:
I agree that this can lead to many try/excepts. A shortcut function in ansible_host that returns the result even on AnsibleModuleException could be an option, e.g.:

res = host.ignore_errors('shell', 'cmd')

This is a bit ugly, but it does not constrain modules regarding an 'ignore_errors' key.
If we can guarantee that none of the existing modules uses an 'ignore_errors' argument, I am OK with your proposal.

@dawnbeauty (Contributor, Author) replied:
I only found that the module 'cp_network' uses 'ignore_errors' in ansible-devel (master branch).
In ansible-2.0.0.1, no existing module uses it as an argument.

3 alternatives:

  1. Add a shortcut function: host.ignore_errors('shell', 'cmd')
  2. Add an argument 'module_ignore_errors' (so a module could still use ignore_errors if necessary): host.shell('cmd', module_ignore_errors=True)
  3. (new) Add an attribute (or function) to enable/disable ignore_errors: host.ignore_errors = True; host.shell('cmd'); host.ignore_errors = False

I would prefer 2 > 1 > 3.
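The preferred alternative (2) can be sketched with a stubbed host object. FakeHost and its canned result are illustrative stand-ins, not the PR's actual AnsibleHost:

```python
class FakeHost(object):
    """Illustrative stand-in for ansible_host with option 2 applied."""

    def __init__(self, hostname, canned_results):
        self.hostname = hostname
        self.module_name = "shell"
        self._canned = canned_results  # {hostname: module result dict}

    def shell(self, *module_args, **complex_args):
        # Pop the framework-level flag so it is never forwarded to the
        # module, leaving any module-level 'ignore_errors' untouched.
        module_ignore_errors = complex_args.pop("module_ignore_errors", False)
        res = self._canned[self.hostname]
        if res.get("failed") and not module_ignore_errors:
            raise Exception("run module {} failed".format(self.module_name))
        return res


host = FakeHost("dut", {"dut": {"failed": True, "rc": 1, "stdout": "hello"}})

# With the flag set, the failed result comes back instead of raising:
res = host.shell("echo hello && false", module_ignore_errors=True)
assert "hello" in res["stdout"]
```

Without the flag, the same call raises, which matches the default behaviour described in the PR.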

Zhiqian.Wu added 2 commits July 15, 2019 09:04
Add AnsibleModuleException with pretty-printed 'results'

Signed-off-by: Zhiqian Wu <[email protected]>
@dawnbeauty (Contributor, Author) commented:

@stepanblyschak please help review.
changes:

  1. Added AnsibleModuleException with pretty-printed 'res' by overriding __str__ (not __repr__).
  2. Added 'module_ignore_errors' instead of 'ignore_errors'.

Also updated descriptions.

@stepanblyschak stepanblyschak requested a review from lguohan July 25, 2019 15:37
@stepanblyschak
Copy link
Contributor

@lguohan please review

@lguohan lguohan merged commit 519b114 into sonic-net:master Jul 28, 2019
@dawnbeauty dawnbeauty deleted the ansible-host branch July 29, 2019 09:52
ronan-nexthop added a commit to nexthop-ai/sonic-mgmt that referenced this pull request Feb 13, 2026
Summary: the purpose of `test_container_checker_telemetry` was to manually
enable the telemetry container and ensure that monit did not produce any
errors regarding it not running. A recent upstream change in featured
means that telemetry is no longer started in response to enabling it;
now you need to manually start the container through `sudo systemctl
start telemetry` for it to actually start. As such, this test failed as
monit produced errors stating `Expected containers not running:
telemetry`.

This test no longer serves its purpose, so I am removing the test case.
Telemetry-related code is starting to be removed, so we shouldn't
worry about maintaining its special behaviour as an optional
container.

This will be upstreamed.

Fixes # (issue)


- [x] Bug fix
- [ ] Testbed and Framework(new/improvement)
- [ ] New Test case
    - [ ] Skipped for non-supported platforms
- [ ] Test case improvement

- [ ] 202505
- [x] 202511

Manually ran the test.


Co-authored-by: Ronan Mac Fhlannchadha <[email protected]>
ronan-nexthop added two further commits to nexthop-ai/sonic-mgmt referencing this pull request on Feb 13, 2026, with the same commit message.