Workaround for no Exception issue for some failed ansible modules#1786
Merged
wangxin merged 1 commit into sonic-net:master from wangxin:workaround-for-failed-ansible-modules on Jun 28, 2020
Conversation
Currently pytest-ansible depends on the 'failed' or 'rc' field in the module result to determine whether the run failed. Some ansible modules only have the 'failed' field in their results. Due to an issue in pytest-ansible (ansible/pytest-ansible#47), the module results returned by pytest-ansible do not have the 'failed' field even when the ansible module failed. In this case, the `is_failed` property will always be `False`. Consequently, no exception is raised when such ansible modules fail.

This commit is a workaround for this issue. Under current ansible behavior, the 'exception' field is present in a failed ansible module result most of the time. When that field is observed in the result, we raise the `RunAnsibleModuleFail` exception as well.

Signed-off-by: Xin Wang <[email protected]>
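The workaround can be sketched as follows. This is a minimal illustration, not the actual sonic-mgmt code: only the `RunAnsibleModuleFail` name comes from the PR, and the helper name `raise_if_failed` is hypothetical.

```python
# Sketch of the workaround described above. Only RunAnsibleModuleFail is
# named in the PR; raise_if_failed is an illustrative helper.

class RunAnsibleModuleFail(Exception):
    """Raised when an ansible module run is considered failed."""


def raise_if_failed(result):
    """Treat a module result dict as failed if any failure indicator is set.

    pytest-ansible may drop the 'failed' field from failed results
    (ansible/pytest-ansible#47), so in addition to 'failed' and a non-zero
    'rc', also look for the 'exception' field that ansible adds to most
    failed module results.
    """
    failed = (result.get('failed', False)
              or result.get('rc', 0) != 0
              or 'exception' in result)
    if failed:
        raise RunAnsibleModuleFail(result.get('msg', 'ansible module failed'))
    return result
```

With this check, a result that carries only an 'exception' field (the case pytest-ansible mishandles) raises the same exception as an ordinary failure.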
neethajohn
approved these changes
Jun 18, 2020
Contributor
|
retest this please |
tahmed-dev
approved these changes
Jun 19, 2020
Collaborator
Author
The vsimage test failure is fixed in #1799.
jleveque
approved these changes
Jun 22, 2020
wangxin
added a commit
that referenced
this pull request
Jun 24, 2020
PR #1786 fixed an issue where no exception was raised for some failed ansible modules. With that fix, the control plane ACL testing will fail. The reason is that calling some of the ansible modules in this script is expected to fail. However, because of the issue fixed by #1786, no exception was raised, and the test script explicitly checked the error message to determine whether the ansible module call failed. With the fix of #1786, an exception will be raised when calling ansible modules that are expected to fail. We need to add `module_ignore_errors=True` for these modules, then check the 'msg' field of the ansible module results to determine whether the test behaved as expected.

Signed-off-by: Xin Wang <[email protected]>
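The pattern that commit describes, suppress the exception with `module_ignore_errors=True` and inspect the 'msg' field, can be sketched like this. The `run_module` stub below is hypothetical and stands in for a real ansible host method; only `module_ignore_errors` and the 'msg' field are from the PR.

```python
# Hypothetical sketch of the "expected failure" pattern. run_module is a
# stub standing in for a real ansible module call; the result values are
# illustrative.

def run_module(cmd, module_ignore_errors=False):
    """Simulate an ansible module call that fails on the device."""
    result = {'failed': True, 'rc': 1, 'msg': 'Rule table_name not found'}
    if result['failed'] and not module_ignore_errors:
        # After PR #1786, failed module calls raise instead of passing
        # silently; RuntimeError stands in for RunAnsibleModuleFail here.
        raise RuntimeError(result['msg'])
    return result


def check_expected_failure():
    # The call is expected to fail, so suppress the exception and verify
    # the failure happened for the expected reason via the 'msg' field.
    res = run_module("config acl ...", module_ignore_errors=True)
    return 'not found' in res.get('msg', '')
```

Without `module_ignore_errors=True`, the same call would raise, which is exactly the behavior change that required updating the control plane ACL test.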
Collaborator
Author
retest this please
This was referenced Aug 7, 2020
kazinator-arista
pushed a commit
to kazinator-arista/sonic-mgmt
that referenced
this pull request
Mar 4, 2026
…nic-net#8627)
7041400 [config reload] Call systemctl reset-failed for snmp,telemetry,mgmt-framework services (sonic-net#1773) (sonic-net#1786)
399d370 Fix logic in RIF counters print (sonic-net#1732)
8329544 [vnet_route_check] don't hardcode prefix length of /24 (sonic-net#1756)
193b028 [neighbor-advertiser] delete the tunnel maps appropriately (sonic-net#1663)
2c82bcf [neighbor_advertiser] Use existing tunnel if present for creating tunnel mappings (sonic-net#1589)
8e22960 [202012][Config] Update config command of Kdump. (sonic-net#1778)
be3e5c6 [show][config] cli refactor for muxcable with abstract class implementation from vendors (sonic-net#1722) (sonic-net#1782)
Description of PR
Summary:
Fixes # (issue)
Currently pytest-ansible depends on the 'failed' or 'rc' field in the module result to determine whether the run failed. Some ansible modules only have the 'failed' field in their results. Due to an issue in pytest-ansible (ansible/pytest-ansible#47), the module results returned by pytest-ansible do not have the 'failed' field even when the ansible module failed. In this case, the `is_failed` property will always be `False`. Consequently, no exception is raised when such ansible modules fail.

This commit is a workaround for this pytest-ansible issue. Under current ansible behavior, the 'exception' field is present in a failed ansible module result most of the time. I added a check for the 'exception' field in ansible module results. When that field is observed in the result, we raise the `RunAnsibleModuleFail` exception as well. Ideally I hope this could be fixed in pytest-ansible itself.
Type of change
Approach
What is the motivation for this PR?
When some ansible built-in modules fail, no exception is raised and the test case may be considered passed. This does not make sense.
How did you do it?
Do not just check the `res.is_failed` property when checking an ansible module execution result. Also check for the presence of the 'exception' field in the result dictionary. If the 'exception' field is found, raise the `RunAnsibleModuleFail` exception as well.
How did you verify/test it?
I prepared a test script like below:
Without this fix, test_case1 failed because it has a non-zero 'rc' in the result, but both test_case2 and test_case3 passed.
With this fix, all three test cases failed as expected.
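The actual test script is not included in the PR text. As an illustration only, the three cases can be characterized by the kind of result dict each produces; the dicts and values below are hypothetical.

```python
# Hypothetical result dicts illustrating the three verification cases.
# The real test script from the PR is not shown; these values are
# illustrative, not taken from it.

def is_failed_result(result):
    # Same failure test as the workaround: 'failed', a non-zero 'rc',
    # or the 'exception' field that ansible adds to failed results.
    return (result.get('failed', False)
            or result.get('rc', 0) != 0
            or 'exception' in result)


# test_case1: the module returned a non-zero rc -- detected even
# before the fix.
case1 = {'rc': 2, 'stderr': 'command not found'}

# test_case2 / test_case3: pytest-ansible dropped the 'failed' field,
# so only 'exception' remains -- detected only with the fix.
case2 = {'exception': 'Traceback (most recent call last): ...'}
case3 = {'exception': 'Traceback (most recent call last): ...',
         'msg': 'module error'}
```

All three dicts are treated as failures once the 'exception' field is part of the check, matching the "all test cases failed as expected" outcome above.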
Any platform specific information?
Supported testbed topology if it's a new test case?
Documentation