Conversation
```python
    finally:
        delete_checkpoint(duthost)


def test_monitor_config_tc1_add_config(duthost):
    logger.info("tmpfile {}".format(tmpfile))

    try:
        output = apply_patch(duthost, json_data=json_patch, dest_file=tmpfile)
```
There are dependencies among the ACL, mirror, and policer configs, so the order of applying them must respect those dependencies. The correct order is supposed to be:
POLICER -> mirror_session -> ACL_TABLE -> ACL_RULE
The YANG schema will make sure the configs are added in the correct order.
Let's assume the order of applying the different configs is wrong. What happens to orchagent/syncd? Will they tolerate it, complain, or crash?
The config has dependencies. For example, ACL_RULE depends on ACL_TABLE, which means ACL_TABLE has to be applied before ACL_RULE. The dependency is defined in YANG (leafref, specifically).
So if the order is wrong, the apply-patch operation will fail, with no further impact on orchagent/syncd.
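To illustrate the ordering rule described above (this is only a sketch, not the actual YANG leafref enforcement — the `DEPENDS_ON` map and the table/path names here are assumptions based on the comment):

```python
# Hypothetical sketch: check that the "add" operations in a JSON patch
# respect the table dependency order POLICER -> MIRROR_SESSION ->
# ACL_TABLE -> ACL_RULE. In SONiC the real enforcement comes from the
# YANG schema (leafref constraints), not from code like this.
DEPENDS_ON = {
    "ACL_RULE": {"ACL_TABLE"},
    "ACL_TABLE": {"MIRROR_SESSION"},   # assumed: the ACL action references the session
    "MIRROR_SESSION": {"POLICER"},
    "POLICER": set(),
}


def table_of(op):
    """Extract the top-level table name from a patch path like /ACL_TABLE/EVERFLOW."""
    return op["path"].lstrip("/").split("/")[0]


def order_is_valid(patch):
    """Return True if every table's dependencies appear earlier in the patch."""
    seen = set()
    for op in patch:
        if op.get("op") != "add":
            continue
        table = table_of(op)
        if not DEPENDS_ON.get(table, set()) <= seen:
            return False
        seen.add(table)
    return True


# Hypothetical patch contents, in the correct order:
good_patch = [
    {"op": "add", "path": "/POLICER/policer_1", "value": {}},
    {"op": "add", "path": "/MIRROR_SESSION/mirror_1", "value": {}},
    {"op": "add", "path": "/ACL_TABLE/EVERFLOW", "value": {}},
    {"op": "add", "path": "/ACL_RULE/EVERFLOW|RULE_1", "value": {}},
]
bad_patch = list(reversed(good_patch))
```

With this sketch, `order_is_valid(good_patch)` passes while the reversed patch fails, mirroring how a wrongly ordered patch would be rejected at apply-patch time.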
@bingwang-ms Could you check my question above?
orchagent will log some ERROR or WARNING messages if a referenced object is not found. It will not crash.
```python
        mirror_session = duthost.shell("show mirror_session {}".format(MONITOR_CONFIG_MIRROR_SESSION))
        expect_res_success(duthost, mirror_session, [
            MONITOR_CONFIG_MIRROR_SESSION, MONITOR_CONFIG_POLICER], [])
```
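For context, `expect_res_success` here appears to assert on the CLI output. A minimal sketch of such a helper, with the signature assumed from the call sites above (the real sonic-mgmt implementation may differ):

```python
def expect_res_success(duthost, output, expected_content_list, unexpected_content_list):
    """Assumed semantics: the shell command succeeded, every expected
    string is present in stdout, and no unexpected string appears."""
    assert output["rc"] == 0, "command failed: {}".format(output.get("stderr", ""))
    for expected in expected_content_list:
        assert expected in output["stdout"], "missing: {}".format(expected)
    for unexpected in unexpected_content_list:
        assert unexpected not in output["stdout"], "unexpected: {}".format(unexpected)
```

The same helper covers both directions: after applying the patch the session/policer names go in `expected_content_list`, and after removal they go in `unexpected_content_list`.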
I was wondering whether we could do more checking beyond the CLI level, such as checking syslog to ensure that the ACL_RULE is created successfully. #plan for future
Regarding validation, there are multiple things we can check:
- the final status, e.g. that the policers and ACLs are present
- the correct order of application (the log analyzer is useful here)
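A hedged sketch of the second kind of check, verifying apply order from logs. The log line format below is invented purely for illustration; a real test would use the sonic-mgmt loganalyzer fixture against actual orchagent output:

```python
import re

# Hypothetical syslog excerpt; real orchagent log formats differ.
SYSLOG = """\
Mar  1 10:00:01 sonic swss#orchagent: Created POLICER policer_1
Mar  1 10:00:02 sonic swss#orchagent: Created MIRROR_SESSION mirror_1
Mar  1 10:00:03 sonic swss#orchagent: Created ACL_TABLE EVERFLOW
Mar  1 10:00:04 sonic swss#orchagent: Created ACL_RULE RULE_1
"""

EXPECTED_ORDER = ["POLICER", "MIRROR_SESSION", "ACL_TABLE", "ACL_RULE"]


def creation_order(log_text):
    """Return the table names in the order their 'Created' lines appear."""
    return re.findall(r"Created (\w+)", log_text)


def order_respected(log_text, expected=EXPECTED_ORDER):
    """True if the tables of interest were created in the expected order."""
    found = [t for t in creation_order(log_text) if t in expected]
    return found == expected
```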
Marking the extra level of checking as a plan for the future.
```python
        mirror_session = duthost.shell("show mirror_session {}".format(MONITOR_CONFIG_MIRROR_SESSION))
        expect_res_success(duthost, mirror_session, [], [
            MONITOR_CONFIG_MIRROR_SESSION, MONITOR_CONFIG_POLICER])
```
Same as above. It would be better if we could do more checking beyond the CLI level.
Description of PR
Note:
- [Done] sonic-net/sonic-utilities#2068 needs to be fixed first
- [Done] sonic-net/sonic-buildimage#9948 needs to be merged
- [Done] sonic-net/sonic-buildimage#9929
Summary: Test case of monitor config for Generic Updater apply-patch
Fixes # (issue)
Type of change
Back port request
Approach
What is the motivation for this PR?
End to End test support for Generic Updater apply-patch
This PR verifies that 'config apply-patch' works on monitor config
This PR also removes the ignored YANG tables, as they have been fixed
How did you do it?
Add the monitor config to the DUT and check that the config changes as expected
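The overall pattern can be sketched as a checkpoint/apply/verify/cleanup flow. The helper names below mirror the fragments shown in the diff (`apply_patch`, `delete_checkpoint`), but their bodies here are stand-in stubs that only record the call order for illustration:

```python
# Stand-in stubs recording the call sequence; the real helpers live in
# the sonic-mgmt generic_config_updater utilities.
calls = []


def create_checkpoint(duthost):
    calls.append("create_checkpoint")


def apply_patch(duthost, json_data, dest_file):
    calls.append("apply_patch")
    return {"rc": 0, "stdout": "Patch applied successfully."}


def verify_monitor_config(duthost):
    # In the real test this is the `show mirror_session ...` CLI check.
    calls.append("verify")


def rollback_or_reload(duthost):
    calls.append("rollback_or_reload")


def delete_checkpoint(duthost):
    calls.append("delete_checkpoint")


def run_monitor_config_test(duthost, json_patch, tmpfile):
    """Sketch of the flow: checkpoint, apply the patch, verify the
    resulting config, then always restore and clean up."""
    create_checkpoint(duthost)
    try:
        output = apply_patch(duthost, json_data=json_patch, dest_file=tmpfile)
        assert output["rc"] == 0
        verify_monitor_config(duthost)
    finally:
        # Cleanup always runs, mirroring the `finally: delete_checkpoint(...)`
        # fragment in the diff above.
        rollback_or_reload(duthost)
        delete_checkpoint(duthost)
```

The `finally` block guarantees the DUT is restored to its checkpointed state even when verification fails, which is what makes the test safe to run repeatedly.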
How did you verify/test it?
Run test of sonic-mgmt/tests/generic_config_updater/test_monitor_config.py on KVM
Any platform specific information?
Supported testbed topology if it's a new test case?
Documentation