
Snappi changes work#9

Merged
kamalsahu0001 merged 117 commits into snappi-changes-pr from snappi-changes-work
Apr 22, 2025

Conversation

@kamalsahu0001
Owner

Description of PR

Summary:
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework(new/improvement)
  • New Test case
    • Skipped for non-supported platforms
  • Test case improvement

Back port request

  • 202012
  • 202205
  • 202305
  • 202311
  • 202405
  • 202411

Approach

What is the motivation for this PR?

How did you do it?

How did you verify/test it?

Any platform specific information?

Supported testbed topology if it's a new test case?

Documentation

kamalsahu0001 and others added 30 commits April 4, 2025 20:46
updated to incorporate new snappi build changes
updated new snappi build changes
updated capture code
Approach
What is the motivation for this PR?
There's a test gap: we don't have a test to verify db_migrator.

How did you do it?
This test will modify CONFIG_DB and run db_migrator, and verify that DNS_NAMESERVER is from minigraph or golden config.

test_migrate_dns_02: there's minigraph.xml and dns.j2, and there's no golden config. After migration, there's DNS_NAMESERVER in CONFIG_DB, because db_migrator can migrate from minigraph.
test_migrate_dns_03 is used to reproduce the SonicQosProfile issue: there's minigraph.xml and dns.j2, I added SonicQosProfile in minigraph.xml, and there's no golden config. After migration, there's no DNS_NAMESERVER in CONFIG_DB, because db_migrator can't migrate from minigraph.
How did you verify/test it?
Run end to end test
)
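The expectation that the two cases above verify can be captured in a small sketch (the helper name and the boolean simplification are mine, not the test's actual code):

```python
# Hypothetical sketch of the test's expectation: DNS_NAMESERVER ends up in
# CONFIG_DB after db_migrator runs only if golden config supplies it, or if
# db_migrator can successfully migrate DNS entries out of minigraph.
def expect_dns_after_migration(has_golden_config, minigraph_migratable):
    return has_golden_config or minigraph_migratable

# test_migrate_dns_02: minigraph only, migratable      -> DNS present
# test_migrate_dns_03: SonicQosProfile breaks parsing  -> DNS absent
```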

What is the motivation for this PR?
pfcwd/test_pfcwd_function.py::TestPfcwdFunc::test_pfcwd_actions is flaky and fails with the following signature.

======================================================================
FAIL: pfc_wd.PfcWdTest
----------------------------------------------------------------------
Traceback (most recent call last):
  File "ptftests/py3/pfc_wd.py", line 148, in runTest
    return verify_packet_any_port(self, masked_exp_pkt, dst_port_list)
  File "/root/env-python3/lib/python3.7/site-packages/ptf/testutils.py", line 3437, in verify_packet_any_port
    % (result.port, device_number, ports, result.format())
AssertionError: Received expected packet on port 1 for device 0, but it should have arrived on one of these ports: [23].
========== RECEIVED ==========
0000  82 FD E1 7F 90 01 00 AA BB CC DD EE 08 00 45 0D  ..............E.
0010  00 56 00 01 00 00 3F 06 1B DF 64 5B 3A B0 C0 A8  .V....?...d[:...
0020  00 02 EA F5 27 6F 00 00 00 00 00 00 00 00 50 02  ....'o........P.
0030  20 00 21 87 00 00 00 01 02 03 04 05 06 07 08 09   .!.............
0040  0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19  ................
0050  1A 1B 1C 1D 1E 1F 20 21 22 23 24 25 26 27 28 29  ...... !"#$%&'()
0060  2A 2B 2C 2D                                      *+,-
==============================
How did you do it?
The test randomly selects a dst_port but always assigns the IP 192.168.0.2 to it. In dualtor topologies there is a notion of static/fixed IP addresses on the ToR's side

admin@ld301:~$ show mux config
SWITCH_NAME    PEER_TOR
-------------  ----------
ld302          10.1.0.33
port        state    ipv4             ipv6
----------  -------  ---------------  -----------------
Ethernet4   auto     192.168.0.2/32   fc02:1000::2/128
Ethernet8   auto     192.168.0.3/32   fc02:1000::3/128
Ethernet12  auto     192.168.0.4/32   fc02:1000::4/128
Ethernet16  auto     192.168.0.5/32   fc02:1000::5/128
Ethernet20  auto     192.168.0.6/32   fc02:1000::6/128
Ethernet24  auto     192.168.0.7/32   fc02:1000::7/128
Ethernet28  auto     192.168.0.8/32   fc02:1000::8/128
Ethernet32  auto     192.168.0.9/32   fc02:1000::9/128
Ethernet36  auto     192.168.0.10/32  fc02:1000::a/128
Ethernet40  auto     192.168.0.11/32  fc02:1000::b/128
Ethernet44  auto     192.168.0.12/32  fc02:1000::c/128
Ethernet48  auto     192.168.0.13/32  fc02:1000::d/128
Ethernet52  auto     192.168.0.14/32  fc02:1000::e/128
Ethernet56  auto     192.168.0.15/32  fc02:1000::f/128
Ethernet60  auto     192.168.0.16/32  fc02:1000::10/128
Ethernet64  auto     192.168.0.17/32  fc02:1000::11/128
Ethernet68  auto     192.168.0.18/32  fc02:1000::12/128
Ethernet72  auto     192.168.0.19/32  fc02:1000::13/128
Ethernet76  auto     192.168.0.20/32  fc02:1000::14/128
Ethernet80  auto     192.168.0.21/32  fc02:1000::15/128
Ethernet84  auto     192.168.0.22/32  fc02:1000::16/128
Ethernet88  auto     192.168.0.23/32  fc02:1000::17/128
Ethernet92  auto     192.168.0.24/32  fc02:1000::18/128
Ethernet96  auto     192.168.0.25/32  fc02:1000::19/128
Due to this, the packet sometimes ends up being forwarded to Ethernet4 (port 1) instead of the port expected by the test.

The proposed fix: in the dualtor case alone, choose the destination IP according to MUX_CONFIG for the interface chosen as the dst_port.
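The selection logic can be sketched as a small lookup over the `show mux config` data shown above (the dict and helper name here are illustrative, not the actual fixture code):

```python
# Sketch: map each ToR-side interface to its fixed server IP, as seen in
# `show mux config`, and derive the test's destination IP from the port
# it randomly selected as dst_port.
MUX_CONFIG = {
    "Ethernet4": "192.168.0.2/32",
    "Ethernet8": "192.168.0.3/32",
    "Ethernet92": "192.168.0.24/32",
}

def dst_ip_for_port(port, mux_config=MUX_CONFIG):
    # Strip the /32 host mask so the crafted packet targets exactly the
    # server IP pinned to the chosen interface.
    return mux_config[port].split("/")[0]
```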

How did you verify/test it?
Ran all pfcwd tests on Arista-7260CX3 with dualtor-120 topology.
What is the motivation for this PR?
Baseline testplan names are different from PR testing, but it's better to let them share the same naming; that makes Kusto queries easier.
The t0-sonic test didn't pass VM_TYPE to the elastictest template, which caused t0-sonic deploy failure.
The t0-sonic and dpu tests lost their specific params.

How did you do it?
Refine the baseline pipeline yml so the testplan name uses the same build reason as PR tests.
Pass VM_TYPE to the elastictest template.
Add the specific params for the t0-sonic and dpu tests.
…17831)

What is the motivation for this PR?
There are 2 Vlans on the t0-118 topology. We observe that the ptftest launched from upgrade_path tests defaults to using the 192.169.0.0/22 IP for Vlan1000, and the test fails with "DUT is not ready" because packets sent by the PTF get no response from the DUT.

However, by switching to 192.168.0.0/25 for Vlan2000, upgrade_path no longer fails on "DUT is not ready" and is able to pass a normal warm upgrade.

How did you do it?
Call the common helper functions get_vlan_interface_list and get_vlan_interface_info to get the vlan interface and its ipv4 address.
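As an illustration of working with the Vlan subnet (a sketch using the stdlib, not the actual helper implementation), a usable host IP can be derived from the interface's CIDR:

```python
import ipaddress

# Sketch (hypothetical helper): pick the Nth host address from a Vlan
# interface's subnet, e.g. Vlan2000's 192.168.0.0/25 from the PR text.
def vlan_host_ip(vlan_cidr, offset=1):
    net = ipaddress.ip_network(vlan_cidr, strict=False)
    # .hosts() excludes the network and broadcast addresses.
    return str(list(net.hosts())[offset - 1])
```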

How did you verify/test it?
Run platform_tests.test_advanced_reboot on T0 testbeds.

Any platform specific information?
T0 platforms
Description of PR
Optimize the ip/test_mgmt_ipv6_only.py test module with Python multithreading.

Summary:
Fixes # (issue) Microsoft ADO 30056122

Approach
What is the motivation for this PR?
The ip/test_mgmt_ipv6_only.py takes a long time to finish on a multi-DUT device, for example, ~100 min on T2 device, so we wanted to optimize it with Python multithreading to reduce the running time.

How did you do it?
How did you verify/test it?
I ran the updated code on a multi-DUT device and verified that the running time was reduced to ~50 min: Elastictest link

Besides, I also verified the change on T0 and dualtor:

T0: https://elastictest.org/scheduler/testplan/67f05c6787ffab7db692a20b?testcase=ip%2Ftest_mgmt_ipv6_only.py&type=console&leftSideViewMode=detail
dualtor: https://elastictest.org/scheduler/testplan/67f05c8d40a6f1f300f5363e?leftSideViewMode=detail&testcase=ip%2Ftest_mgmt_ipv6_only.py&type=console
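The multithreading approach could look roughly like this (`check_mgmt_ipv6` is a stand-in name for the module's per-DUT checks, not the real function):

```python
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch of the optimization: run the per-DUT verification
# concurrently instead of serially, which is what cuts the wall-clock
# time roughly in half on a multi-DUT testbed.
def check_mgmt_ipv6(dut):
    # Placeholder for the real per-DUT checks in ip/test_mgmt_ipv6_only.py.
    return f"{dut}: ok"

def run_checks(duts):
    with ThreadPoolExecutor(max_workers=len(duts)) as pool:
        # map() preserves input order, so results line up with duts.
        return list(pool.map(check_mgmt_ipv6, duts))
```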

co-authored by: [email protected]
Description of PR
Support trimming the inventory files such as ansible/lab, ansible/t2_lab etc when passing --trim_inv option.

Summary:
Fixes # (issue) Microsoft ADO 30056122

Approach
What is the motivation for this PR?
When we enable inventory trimming by passing the --trim_inv option, the current logic only trims the ansible/veos file. We noticed that the other inventory files (such as ansible/lab) should also be trimmed: each contains the configs of all the devices in that lab, but we only need the configs related to the current test run. Therefore, we decided to support trimming these inventory files as well.

Please note that the PDU & Fanout hosts trimming is not supported in this PR as it's currently blocked by sonic-net#17347
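The trimming idea reduces to a filter over the inventory's host entries (the dict structure here is illustrative, not the real ansible inventory schema):

```python
# Sketch: keep only the inventory hosts referenced by the current test
# run, dropping the configs of every other device in the lab file.
def trim_inventory(all_hosts, needed):
    return {name: cfg for name, cfg in all_hosts.items() if name in needed}
```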

How did you do it?
How did you verify/test it?
I ran the new trimming logic on various lab files and can confirm it's working well:

https://elastictest.org/scheduler/testplan/67c7ad505048655bf9cf8a58
https://elastictest.org/scheduler/testplan/67c78be48dcac0cdc64a3998
https://elastictest.org/scheduler/testplan/67c78cc7f60a7a79ff1ae585
https://elastictest.org/scheduler/testplan/67c78c9c8dcac0cdc64a399c
https://elastictest.org/scheduler/testplan/67c7b419d0bae94c81d8a9d6
https://elastictest.org/scheduler/testplan/67ca846a5048655bf9cf8f7b
Any platform specific information?

co-authored by: [email protected]
Description of PR
Summary: Add multi-ASIC support for test-intf-fec. This is possible with the utility command update in sonic-net/sonic-utilities#3819.
Fixes # (issue) 28838870

Approach
What is the motivation for this PR?
Described

How did you do it?
Update the command from sonic-net/sonic-utilities#3819 and update the code base so that it works with T2. For 202405.

Please note that for a release branch to work internally, the following PR here needs to be included:

sonic-net#17183
sonic-net#14661
sonic-net#16424
sonic-net#15481

How did you verify/test it?
T2 platform verified

Signed-off-by: Austin Pham <[email protected]>

---------

Signed-off-by: Austin Pham <[email protected]>
Approach
What is the motivation for this PR?
We need to verify gnmi feature after container upgrade

How did you do it?
Add gnmi and gnmi_watchdog to container upgrade

How did you verify/test it?
Run container upgrade pipeline
updated to support new snappi model
What is the motivation for this PR?
Need a check for the time spent in swss create switch

How did you do it?
Add a new success criterion to check for the occurrence of swss create switch start and end

How did you verify/test it?
Run test on 7215 devices
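The criterion above amounts to scanning the log for both markers (the exact marker strings below are assumptions for illustration, not the real syslog text):

```python
# Sketch of the success criterion: both a "create switch" start marker
# and an end marker must appear in the collected log lines; the elapsed
# time could then be derived from their timestamps.
def create_switch_markers_present(log_lines):
    start = any("create switch" in ln and "begin" in ln for ln in log_lines)
    end = any("create switch" in ln and "done" in ln for ln in log_lines)
    return start and end
```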
What is the motivation for this PR?
Fix a non-working mcx deployment script.

How did you do it?
Fix the Python 2 iteritems usage.

How did you verify/test it?
Deploy mcx with new script
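For context, the iteritems fix is the standard Python 2 to 3 change (the dict contents here are invented for illustration):

```python
# dict.iteritems() was removed in Python 3; dict.items() is the
# replacement and already returns a lazy view, so no behavior is lost.
config = {"Ethernet0": "up", "Ethernet4": "down"}

# Python 2 (breaks on Python 3):
#   for port, state in config.iteritems(): ...
ports_up = [port for port, state in config.items() if state == "up"]
```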
sonic-net#17877)

What is the motivation for this PR?
Add port alias-to-name mapping for Arista-7050CX3-32S-S128

How did you do it?
Update port_utils.py.

How did you verify/test it?
Verified by deploy testbed.
updated file to accommodate new snappi changes.
updated file to accommodate new snappi changes
updating files to accommodate snappi changes
What is the motivation for this PR?
The old template is not up to date and does not match the changes in vm_topo results. Update it so the generated minigraph works.

How did you do it?
Copy the section from minigraph_dpg.j2

How did you verify/test it?
Run yang validation on generated minigraph.
…onic-net#15715)

What is the motivation for this PR?
The test test_lldp_neighbor_post_orchagent_reboot fails on multi-asic systems. The test tries to disable the autorestart feature for swss by using the namespace container name, e.g., swss0, swss1, etc.

For config feature autorestart disable, it needs to use 'swss' as the global feature name.

How did you do it?
Changed the code to use 'swss' as the feature name without the namespace id
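The renaming can be sketched as stripping the trailing namespace index from the container name (a minimal illustration, not the actual test code):

```python
import re

# Sketch: map a namespaced container name like "swss0" or "swss1" back to
# the global feature name "swss" that `config feature autorestart` expects.
def global_feature_name(container):
    return re.sub(r"\d+$", "", container)
```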

How did you verify/test it?
run sonic-mgmt test_lldp.py

---------

Signed-off-by: Anand Mehra [email protected]
1. Enable nat for dpus on smartswitch
What is the motivation for this PR?
Solve "IndexError: list index out of range" in dut_port = list(mg_facts['minigraph_portchannels'].keys())[0] when minigraph_portchannels is empty.

How did you do it?
Check whether any portchannels exist before attempting to access them, preventing the IndexError.
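The guard in a minimal form (the real test calls pytest.skip where this sketch returns None):

```python
# Sketch: avoid indexing into an empty minigraph_portchannels dict; skip
# the test instead of raising IndexError.
def pick_portchannel(mg_facts):
    portchannels = list(mg_facts.get("minigraph_portchannels", {}).keys())
    if not portchannels:
        # Real test: pytest.skip("No portchannels found in minigraph")
        return None
    return portchannels[0]
```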

How did you verify/test it?
========================================================================================================================================================================================= short test summary info ==========================================================================================================================================================================================
SKIPPED [4] decap/test_subnet_decap.py:207: No portchannels found in minigraph
================================================================================================================================================================================ 4 skipped, 1 warning in 797.40s (0:13:17) =================================================================================================================================================================================
Any platform specific information?
str4-sn5600-1
sonic-net#17873)

What is the motivation for this PR?
After link bringup, it takes some time for the mux status to become consistent in the dualtor-aa topology (i.e. SERVER_STATUS is 'unknown'). It's not a test-specific issue; I can see similar behaviour on a dut where dualtor-aa is deployed.

How did you do it?
Increase the timeout to 300 secs (currently 150 secs) to fix the flakiness.
sonic-net#17904)

What is the motivation for this PR?
The test is giving us a false negative

msg        = 'Did not receive expected packet on any of ports [7, 13, 17, 30, 27, 25, 5, 34, 21, 16, 24, 1, 33, 12, 4, 20, 2, 0, 11... 01  .............0..\n0050  00 AA BB CC DD EE                                ......\n==============================\n'
self       = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

/usr/lib/python3.8/unittest/case.py:753: AssertionError
Although on a closer look we found that the DUT is forwarding the packet within a reasonable duration, for some reason testutils.verify_packet_any_port takes longer to detect it.

There is also another issue which doesn't cause any failure but defeats the purpose of testing. In the case of active-active dualtor we call setup_standby_ports_on_rand_unselected_tor_unconditionally to put the system in active-standby mode. If this is called after background_traffic, then the background traffic flows through the unselected ToR, which is not desired.

How did you do it?
Increase the timeout to 5s from system default for testutils.verify_packet_any_port

Make the order of fixture execution deterministic so that setup_standby_ports_on_rand_unselected_tor_unconditionally is called before background_traffic
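The timeout bump can be pictured as a simple poll-until-deadline loop (a sketch, not the ptf implementation; the real fix just passes a larger timeout to testutils.verify_packet_any_port):

```python
import time

# Sketch: keep polling for a condition until the deadline, instead of
# relying on the framework's shorter default window. A packet that the
# DUT forwarded but the adapter detects slowly still counts as received.
def wait_for(predicate, timeout=5.0, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()
```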

How did you verify/test it?
Verified on Arista-7050CX3 with dualtor-aa topology.
…est cases. (sonic-net#17910)

Disable all bmp tables after the test to avoid potential impact to other test cases

Description of PR
Work item tracking
Microsoft ADO (number only): 32206168

Approach
What is the motivation for this PR?
Disable all bmp tables after the test to avoid potential impact to other test cases

How did you do it?
Disable all relevant bmp tables via the config cli after each test.

How did you verify/test it?
kvm test verified.

Any platform specific information?
What is the motivation for this PR?
Macsec::TestControlPlane::test_rekey_by_period tests fail when EOS is selected as the key-server.
How did you do it?
If the rekey-period is non-zero, we configure the rekey period on the EOS host.
How did you verify/test it?
The sonic-mgmt Macsec::TestControlPlane::test_rekey_by_period tests pass with the above change.
Summary:
Add doc for M1 topology announce routes.
lizhijianrd and others added 29 commits April 17, 2025 18:29
What is the motivation for this PR?
Update everflow testcase to support M1 topo

How did you do it?
Update common functions

How did you verify/test it?
Verified by running everflow testcases on an Arista-7050CX3 M1-48 testbed.
…lacing variables.py file. (sonic-net#15069)

Description of PR
The purpose of the pull-request is dynamic port selection from the available setup rather than relying on variables.py.

Pull-request adds a function snappi_port_selection in snappi_fixtures.py file.

Summary:
Fixes # (issue)

Type of change
sonic-net#13769

  • Bug fix
  • Testbed and Framework(new/improvement)
  • Test case(new/improvement)

Back port request

  • 202012
  • 202205
  • 202305
  • 202311
  • 202405
Approach
What is the motivation for this PR?
Existing variables.py had following drawbacks:

Various line-cards and ports had to be manually added to this file, making it dependent on that particular setup. For a different setup, the user had to re-configure this file. This is not scalable, and it also hindered selecting setups at run-time.
variables.py did not have any provision for interface-speed selection: the user had no way to specify the speeds of the selected interfaces. For example, if the setup had both 100 and 400Gbps ports, the user would have to define two different files or create additional dictionaries to accommodate the 100 and 400Gbps interfaces separately.
If a line-card is added or removed, variables.py requires manual modification.
To counter the above drawbacks, function snappi_port_selection is added in snappi_fixtures.py

How did you do it?
Following are the changes and reasoning behind the changes:

Each testbed has to re-run test_pretest.py to generate a .JSON file in the tests/metadata/snappi_tests/ folder. Metadata file generation happens in the metadata/snappi_tests/ folder. This avoids modifying the current metadata folder, addressing our concern about conflicting with the current code base.
Syntax:

./run_tests.sh -n TESTBED_NAME -c test_pretest.py::test_update_snappi_testbed_metadata -i ../ansible/INVENTORY,../ansible/veos -e "--topology=multidut-tgen,any --skip_sanity --trim_inv --disable_loganalyzer" -u
If the topology is not 'multi-tgen' or 'tgen', then a skip message for non-tgen topology has been added.

Function 'generate_skeleton_port_info' parses the above JSON file and creates a template to fetch port-data from the output of 'snappi_port_selection'. The skeleton parameterization format is <speed>-<category>, for example: 400.0-single_linecard_single_asic. The reason for this change is to follow the pytest standard of using the delimiter "-" for parameterization.

This also skips a speed-category combination if it's not available when it comes to the 'snappi_port_selection' fixture.

The conditions for skip are:

Speed or category is not in snappi_port_selection
Or snappi_port_selection returns None for the combination
Function snappi_port_selection parses through all the available ports used in the testbed and generates a dictionary keyed by speed and line-card category.

The line-card combination has three available modes: single line-card single asic, single line-card multiple asic, and multiple line-card.

The set of ports are determined by fixture number_of_tx_rx ports with scope "module" defined in each test.

We don't need setup_ports_and_dut anymore; we can simply call snappi_testbed_config in the test itself and iterate through the available ports.
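The parameter-id generation and skip rule described above can be sketched as follows (the dict layout is illustrative; the real data comes from the generated metadata JSON):

```python
# Sketch: build pytest parameter ids in the "<speed>-<category>" format,
# skipping any combination for which snappi_port_selection returned
# nothing (None or an empty port list).
def build_param_ids(port_selection):
    ids = []
    for speed, categories in port_selection.items():
        for category, ports in categories.items():
            if ports:  # combination unavailable -> skipped
                ids.append(f"{speed}-{category}")
    return ids
```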

Tagging for relevant reach:
@sdszhang , @vmittal-msft , @rawal01 , @selldinesh, @developfast

How did you verify/test it?
Snapshot of the log:

AzDevOps@68684a43ec9e:/data/tests$ python3 -m pytest --inventory ../ansible/ixia-sonic --host-pattern board71,board72,board73,board74 --testbed ixre-chassis117-t2 --testbed_file ../ansible/testbed.csv --log-cli-level info --log-file-level info --kube_master unset --showlocals -ra --show-capture stdout --junit-xml=/tmp/f.xml --skip_sanity --log-file=/tmp/f.log  --disable_loganalyzer --topology multidut-tgen,any --cache-clear snappi_tests/pfc/test_lossless_response_to_external_pause_storms.py --pdb
====================================================================================================================== test session starts =======================================================================================================================
platform linux -- Python 3.8.10, pytest-7.4.0, pluggy-1.4.0
ansible: 2.13.13
rootdir: /data/tests
configfile: pytest.ini
------------ curtailing irrelevant output ----------------
20:06:33 __init__.store_fixture_values            L0017 INFO   | store memory_utilization test_lossless_response_to_external_pause_storms_test[400.0-multiple_linecard_multiple_asic]
20:06:33 __init__.pytest_runtest_setup            L0024 INFO   | collect memory before test test_lossless_response_to_external_pause_storms_test[400.0-multiple_linecard_multiple_asic]
20:06:33 __init__.pytest_runtest_setup            L0044 INFO   | Before test: collected memory_values {'before_test': {}, 'after_test': {}}
------------------------------------------------------------------------------------------------------------------------- live log call --------------------------------------------------------------------------------------------------------------------------
20:06:33 test_lossless_response_to_external_pause L0070 INFO   | Ports:[{'ip': '100.117.59.187', 'port_id': '1', 'location': '100.117.59.187/1', 'peer_port': 'Ethernet0', 'peer_device': 'board73', 'speed': '400000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board73>, 'snappi_speed_type': 'speed_400_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '2', 'location': '100.117.59.187/2', 'peer_port': 'Ethernet8', 'peer_device': 'board73', 'speed': '400000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board73>, 'snappi_speed_type': 'speed_400_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '4', 'location': '100.117.59.187/4', 'peer_port': 'Ethernet0', 'peer_device': 'board74', 'speed': '400000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board74>, 'snappi_speed_type': 'speed_400_gbps', 'asic_value': 'asic0'}]
20:06:38 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board73 with port Ethernet0 with IP 20.10.1.0/31
20:06:39 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board73 with port Ethernet8 with IP 20.10.1.2/31
20:06:41 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board74 with port Ethernet0 with IP 20.10.1.4/31
--------------- curtailed irrelevant output ----------
20:11:02 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board73 with port Ethernet0 with ip :20.10.1.0/31
20:11:03 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board73 with port Ethernet8 with ip :20.10.1.2/31
20:11:04 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board74 with port Ethernet0 with ip :20.10.1.4/31
PASSED                                                                                                                                                                                                                                                     [ 16%]
----------------------------------------------------------------------------------------------------------------------- live log teardown ------------------------------------------------------------------------------------------------------------------------
20:11:04 __init__.pytest_runtest_teardown         L0049 INFO   | collect memory after test test_lossless_response_to_external_pause_storms_test[400.0-multiple_linecard_multiple_asic]
20:11:04 __init__.pytest_runtest_teardown         L0072 INFO   | After test: collected memory_values {'before_test': {}, 'after_test': {}}

snappi_tests/multidut/pfc/test_lossless_response_to_external_pause_storms.py::test_lossless_response_to_external_pause_storms_test[400.0-single_linecard_single_asic] 
------------------------------------------------------------------------------------------------------------------------- live log setup -------------------------------------------------------------------------------------------------------------------------
20:11:04 __init__.set_default                     L0053 INFO   | Completeness level not set during test execution. Setting to default level: CompletenessLevel.basic
20:11:04 __init__.check_test_completeness         L0151 INFO   | Test has no defined levels. Continue without test completeness checks
20:11:04 __init__.loganalyzer                     L0051 INFO   | Log analyzer is disabled
20:11:04 __init__.store_fixture_values            L0017 INFO   | store memory_utilization test_lossless_response_to_external_pause_storms_test[400.0-single_linecard_single_asic]
20:11:04 __init__.pytest_runtest_setup            L0024 INFO   | collect memory before test test_lossless_response_to_external_pause_storms_test[400.0-single_linecard_single_asic]
20:11:04 __init__.pytest_runtest_setup            L0044 INFO   | Before test: collected memory_values {'before_test': {}, 'after_test': {}}
------------------------------------------------------------------------------------------------------------------------- live log call --------------------------------------------------------------------------------------------------------------------------
20:11:04 test_lossless_response_to_external_pause L0070 INFO   | Ports:[{'ip': '100.117.59.187', 'port_id': '1', 'location': '100.117.59.187/1', 'peer_port': 'Ethernet0', 'peer_device': 'board73', 'speed': '400000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board73>, 'snappi_speed_type': 'speed_400_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '2', 'location': '100.117.59.187/2', 'peer_port': 'Ethernet8', 'peer_device': 'board73', 'speed': '400000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board73>, 'snappi_speed_type': 'speed_400_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '5', 'location': '100.117.59.187/5', 'peer_port': 'Ethernet16', 'peer_device': 'board73', 'speed': '400000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board73>, 'snappi_speed_type': 'speed_400_gbps', 'asic_value': 'asic0'}]
20:11:10 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board73 with port Ethernet0 with IP 20.10.1.0/31
20:11:12 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board73 with port Ethernet8 with IP 20.10.1.2/31
20:11:13 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board73 with port Ethernet16 with IP 20.10.1.4/31
--------------- curtailed irrelevant output ----------
20:14:48 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board73 with port Ethernet0 with ip :20.10.1.0/31
20:14:49 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board73 with port Ethernet8 with ip :20.10.1.2/31
20:14:50 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board73 with port Ethernet16 with ip :20.10.1.4/31
PASSED                                                                                                                                                                                                                                                     [ 33%]
----------------------------------------------------------------------------------------------------------------------- live log teardown ------------------------------------------------------------------------------------------------------------------------
20:14:51 __init__.pytest_runtest_teardown         L0049 INFO   | collect memory after test test_lossless_response_to_external_pause_storms_test[400.0-single_linecard_single_asic]
20:14:51 __init__.pytest_runtest_teardown         L0072 INFO   | After test: collected memory_values {'before_test': {}, 'after_test': {}}

snappi_tests/multidut/pfc/test_lossless_response_to_external_pause_storms.py::test_lossless_response_to_external_pause_storms_test[100.0-single_linecard_multiple_asic] 
------------------------------------------------------------------------------------------------------------------------- live log setup -------------------------------------------------------------------------------------------------------------------------
20:14:51 __init__.set_default                     L0053 INFO   | Completeness level not set during test execution. Setting to default level: CompletenessLevel.basic
20:14:51 __init__.check_test_completeness         L0151 INFO   | Test has no defined levels. Continue without test completeness checks
20:14:51 __init__.loganalyzer                     L0051 INFO   | Log analyzer is disabled
20:14:51 __init__.store_fixture_values            L0017 INFO   | store memory_utilization test_lossless_response_to_external_pause_storms_test[100.0-single_linecard_multiple_asic]
20:14:51 __init__.pytest_runtest_setup            L0024 INFO   | collect memory before test test_lossless_response_to_external_pause_storms_test[100.0-single_linecard_multiple_asic]
20:14:51 __init__.pytest_runtest_setup            L0044 INFO   | Before test: collected memory_values {'before_test': {}, 'after_test': {}}
------------------------------------------------------------------------------------------------------------------------- live log call --------------------------------------------------------------------------------------------------------------------------
20:14:51 test_lossless_response_to_external_pause L0070 INFO   | Ports:[{'ip': '100.117.59.187', 'port_id': '9.1', 'location': '100.117.59.187/9.1', 'peer_port': 'Ethernet0', 'peer_device': 'board71', 'speed': '100000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board71>, 'snappi_speed_type': 'speed_100_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '9.2', 'location': '100.117.59.187/9.2', 'peer_port': 'Ethernet8', 'peer_device': 'board71', 'speed': '100000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board71>, 'snappi_speed_type': 'speed_100_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '9.3', 'location': '100.117.59.187/9.3', 'peer_port': 'Ethernet144', 'peer_device': 'board71', 'speed': '100000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board71>, 'snappi_speed_type': 'speed_100_gbps', 'asic_value': 'asic1'}]
20:14:57 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board71 with port Ethernet0 with IP 20.10.1.0/31
20:14:58 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board71 with port Ethernet8 with IP 20.10.1.2/31
20:14:59 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board71 with port Ethernet144 with IP 20.10.1.4/31
--------------- curtailed irrelevant output ----------
20:18:20 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board71 with port Ethernet0 with ip :20.10.1.0/31
20:18:21 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board71 with port Ethernet8 with ip :20.10.1.2/31
20:18:22 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board71 with port Ethernet144 with ip :20.10.1.4/31
PASSED                                                                                                                                                                                                                                                     [ 50%]
----------------------------------------------------------------------------------------------------------------------- live log teardown ------------------------------------------------------------------------------------------------------------------------
20:18:23 __init__.pytest_runtest_teardown         L0049 INFO   | collect memory after test test_lossless_response_to_external_pause_storms_test[100.0-single_linecard_multiple_asic]
20:18:23 __init__.pytest_runtest_teardown         L0072 INFO   | After test: collected memory_values {'before_test': {}, 'after_test': {}}

snappi_tests/multidut/pfc/test_lossless_response_to_external_pause_storms.py::test_lossless_response_to_external_pause_storms_test[100.0-multiple_linecard_multiple_asic] 
------------------------------------------------------------------------------------------------------------------------- live log setup -------------------------------------------------------------------------------------------------------------------------
20:18:23 __init__.set_default                     L0053 INFO   | Completeness level not set during test execution. Setting to default level: CompletenessLevel.basic
20:18:23 __init__.check_test_completeness         L0151 INFO   | Test has no defined levels. Continue without test completeness checks
20:18:23 __init__.loganalyzer                     L0051 INFO   | Log analyzer is disabled
20:18:23 __init__.store_fixture_values            L0017 INFO   | store memory_utilization test_lossless_response_to_external_pause_storms_test[100.0-multiple_linecard_multiple_asic]
20:18:23 __init__.pytest_runtest_setup            L0024 INFO   | collect memory before test test_lossless_response_to_external_pause_storms_test[100.0-multiple_linecard_multiple_asic]
20:18:23 __init__.pytest_runtest_setup            L0044 INFO   | Before test: collected memory_values {'before_test': {}, 'after_test': {}}
------------------------------------------------------------------------------------------------------------------------- live log call --------------------------------------------------------------------------------------------------------------------------
20:18:23 test_lossless_response_to_external_pause L0070 INFO   | Ports:[{'ip': '100.117.59.187', 'port_id': '9.1', 'location': '100.117.59.187/9.1', 'peer_port': 'Ethernet0', 'peer_device': 'board71', 'speed': '100000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board71>, 'snappi_speed_type': 'speed_100_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '9.2', 'location': '100.117.59.187/9.2', 'peer_port': 'Ethernet8', 'peer_device': 'board71', 'speed': '100000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board71>, 'snappi_speed_type': 'speed_100_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '10.1', 'location': '100.117.59.187/10.1', 'peer_port': 'Ethernet0', 'peer_device': 'board72', 'speed': '100000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board72>, 'snappi_speed_type': 'speed_100_gbps', 'asic_value': 'asic0'}]
20:18:29 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board71 with port Ethernet0 with IP 20.10.1.0/31
20:18:30 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board71 with port Ethernet8 with IP 20.10.1.2/31
20:18:32 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board72 with port Ethernet0 with IP 20.10.1.4/31
--------------- curtailed irrelevant output ----------
20:21:35 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board71 with port Ethernet0 with ip :20.10.1.0/31
20:21:36 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board71 with port Ethernet8 with ip :20.10.1.2/31
20:21:37 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board72 with port Ethernet0 with ip :20.10.1.4/31
PASSED                                                                                                                                                                                                                                                     [ 66%]
----------------------------------------------------------------------------------------------------------------------- live log teardown ------------------------------------------------------------------------------------------------------------------------
20:21:38 __init__.pytest_runtest_teardown         L0049 INFO   | collect memory after test test_lossless_response_to_external_pause_storms_test[100.0-multiple_linecard_multiple_asic]
20:21:38 __init__.pytest_runtest_teardown         L0072 INFO   | After test: collected memory_values {'before_test': {}, 'after_test': {}}

snappi_tests/multidut/pfc/test_lossless_response_to_external_pause_storms.py::test_lossless_response_to_external_pause_storms_test[100.0-single_linecard_single_asic] 
------------------------------------------------------------------------------------------------------------------------- live log setup -------------------------------------------------------------------------------------------------------------------------
20:21:38 __init__.set_default                     L0053 INFO   | Completeness level not set during test execution. Setting to default level: CompletenessLevel.basic
20:21:38 __init__.check_test_completeness         L0151 INFO   | Test has no defined levels. Continue without test completeness checks
20:21:38 __init__.loganalyzer                     L0051 INFO   | Log analyzer is disabled
20:21:38 __init__.store_fixture_values            L0017 INFO   | store memory_utilization test_lossless_response_to_external_pause_storms_test[100.0-single_linecard_single_asic]
20:21:38 __init__.pytest_runtest_setup            L0024 INFO   | collect memory before test test_lossless_response_to_external_pause_storms_test[100.0-single_linecard_single_asic]
20:21:38 __init__.pytest_runtest_setup            L0044 INFO   | Before test: collected memory_values {'before_test': {}, 'after_test': {}}
------------------------------------------------------------------------------------------------------------------------- live log call --------------------------------------------------------------------------------------------------------------------------
20:21:38 test_lossless_response_to_external_pause L0070 INFO   | Ports:[{'ip': '100.117.59.187', 'port_id': '10.1', 'location': '100.117.59.187/10.1', 'peer_port': 'Ethernet0', 'peer_device': 'board72', 'speed': '100000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board72>, 'snappi_speed_type': 'speed_100_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '10.2', 'location': '100.117.59.187/10.2', 'peer_port': 'Ethernet8', 'peer_device': 'board72', 'speed': '100000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board72>, 'snappi_speed_type': 'speed_100_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '10.3', 'location': '100.117.59.187/10.3', 'peer_port': 'Ethernet16', 'peer_device': 'board72', 'speed': '100000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board72>, 'snappi_speed_type': 'speed_100_gbps', 'asic_value': 'asic0'}]
20:21:44 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board72 with port Ethernet0 with IP 20.10.1.0/31
20:21:45 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board72 with port Ethernet8 with IP 20.10.1.2/31
20:21:46 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board72 with port Ethernet16 with IP 20.10.1.4/31
--------------- curtailed irrelevant output ----------
20:25:04 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board72 with port Ethernet0 with ip :20.10.1.0/31
20:25:05 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board72 with port Ethernet8 with ip :20.10.1.2/31
20:25:06 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board72 with port Ethernet16 with ip :20.10.1.4/31
PASSED                                                                                                                                                                                                                                                     [ 83%]
----------------------------------------------------------------------------------------------------------------------- live log teardown ------------------------------------------------------------------------------------------------------------------------
20:25:07 __init__.pytest_runtest_teardown         L0049 INFO   | collect memory after test test_lossless_response_to_external_pause_storms_test[100.0-single_linecard_single_asic]
20:25:07 __init__.pytest_runtest_teardown         L0072 INFO   | After test: collected memory_values {'before_test': {}, 'after_test': {}}

snappi_tests/multidut/pfc/test_lossless_response_to_external_pause_storms.py::test_lossless_response_to_external_pause_storms_test[400.0-single_linecard_multiple_asic] 
------------------------------------------------------------------------------------------------------------------------- live log setup -------------------------------------------------------------------------------------------------------------------------
20:25:07 __init__.set_default                     L0053 INFO   | Completeness level not set during test execution. Setting to default level: CompletenessLevel.basic
20:25:07 __init__.check_test_completeness         L0151 INFO   | Test has no defined levels. Continue without test completeness checks
20:25:07 __init__.loganalyzer                     L0051 INFO   | Log analyzer is disabled
20:25:07 __init__.store_fixture_values            L0017 INFO   | store memory_utilization test_lossless_response_to_external_pause_storms_test[400.0-single_linecard_multiple_asic]
20:25:07 __init__.pytest_runtest_setup            L0024 INFO   | collect memory before test test_lossless_response_to_external_pause_storms_test[400.0-single_linecard_multiple_asic]
20:25:07 __init__.pytest_runtest_setup            L0044 INFO   | Before test: collected memory_values {'before_test': {}, 'after_test': {}}
------------------------------------------------------------------------------------------------------------------------- live log call --------------------------------------------------------------------------------------------------------------------------
20:25:07 test_lossless_response_to_external_pause L0070 INFO   | Ports:[{'ip': '100.117.59.187', 'port_id': '1', 'location': '100.117.59.187/1', 'peer_port': 'Ethernet0', 'peer_device': 'board73', 'speed': '400000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board73>, 'snappi_speed_type': 'speed_400_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '2', 'location': '100.117.59.187/2', 'peer_port': 'Ethernet8', 'peer_device': 'board73', 'speed': '400000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board73>, 'snappi_speed_type': 'speed_400_gbps', 'asic_value': 'asic0'}, {'ip': '100.117.59.187', 'port_id': '3', 'location': '100.117.59.187/3', 'peer_port': 'Ethernet144', 'peer_device': 'board73', 'speed': '400000', 'intf_config_changed': False, 'api_server_ip': '10.251.30.110', 'asic_type': 'broadcom', 'duthost': <MultiAsicSonicHost board73>, 'snappi_speed_type': 'speed_400_gbps', 'asic_value': 'asic1'}]
20:25:13 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board73 with port Ethernet0 with IP 20.10.1.0/31
20:25:14 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board73 with port Ethernet8 with IP 20.10.1.2/31
20:25:16 snappi_fixtures.__intf_config_multidut   L0934 INFO   | Configuring Dut: board73 with port Ethernet144 with IP 20.10.1.4/31
--------------- curtailed irrelevant output ----------
20:29:03 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board73 with port Ethernet0 with ip :20.10.1.0/31
20:29:04 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board73 with port Ethernet8 with ip :20.10.1.2/31
20:29:05 snappi_fixtures.cleanup_config           L1159 INFO   | Removing Configuration on Dut: board73 with port Ethernet144 with ip :20.10.1.4/31
PASSED                                                                                                                                                                                                                                                     [100%]
----------------------------------------------------------------------------------------------------------------------- live log teardown ------------------------------------------------------------------------------------------------------------------------
20:29:06 __init__.pytest_runtest_teardown         L0049 INFO   | collect memory after test test_lossless_response_to_external_pause_storms_test[400.0-single_linecard_multiple_asic]
--------------- curtailed irrelevant output ----------
---------------------------------------------------------------------------------------------------------------- generated xml file: /tmp/f.xml -----------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------- live log sessionfinish ---------------------------------------------------------------------------------------------------------------------
20:29:32 __init__.pytest_terminal_summary         L0067 INFO   | Can not get Allure report URL. Please check logs
========================================================================================================== 6 passed, 12 warnings in 1466.75s (0:24:26) ===========================================================================================================
INFO:root:Can not get Allure report URL. Please check logs
Any platform specific information?
Supported testbed topology if it's a new test case?
Documentation

Signed-off-by: Austin Pham <[email protected]>
Co-authored-by: Austin Pham <[email protected]>
…tion for all to all scenario (sonic-net#17919)

Description of PR
Summary: Fix for "[snappi] test_multidut_pfcwd_all_to_all is not working as expected as in test plan" (sonic-net#17413)
Fixes # (issue)
sonic-net#17413

Approach
What is the motivation for this PR?
How did you do it?
Added an if-else condition to use separate tx/rx port-id lists for the all-to-all pattern scenario
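A minimal sketch of the idea (the function name `build_tx_rx_port_ids` and the pattern strings are illustrative placeholders, not the actual helper in test_pfcwd_a2a_with_snappi.py): for the all-to-all pattern, every port both transmits and receives, so tx and rx need their own independent port-id lists instead of sharing one.

```python
def build_tx_rx_port_ids(port_ids, traffic_pattern):
    """Return (tx_ids, rx_ids) for a given traffic pattern.

    Hypothetical helper sketch; names and patterns are assumptions,
    not the real implementation in this PR.
    """
    if traffic_pattern == "all_to_all":
        # Every port sends to every other port, so each side needs
        # its own independent copy of the full port-id list.
        tx_ids = list(port_ids)
        rx_ids = list(port_ids)
    else:
        # A many-to-one style default: the last port receives,
        # the remaining ports transmit.
        tx_ids = list(port_ids[:-1])
        rx_ids = [port_ids[-1]]
    return tx_ids, rx_ids

print(build_tx_rx_port_ids([0, 1, 2], "all_to_all"))  # ([0, 1, 2], [0, 1, 2])
print(build_tx_rx_port_ids([0, 1, 2], "many_to_one"))  # ([0, 1], [2])
```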

How did you verify/test it?
Tested on an Edgecore DUT

Any platform specific information?
Supported testbed topology if it's a new test case?
Documentation
Output
snappi_tests/pfcwd/test_pfcwd_a2a_with_snappi.py::test_multidut_pfcwd_all_to_all[multidut_port_info0-False]
--------------------------------------------------------------------------------------------------------------------------------- live log setup ----------------------------------------------------------------------------------------------------------------------------------
18:09:50 init.set_default L0053 INFO | Completeness level not set during test execution. Setting to default level: CompletenessLevel.basic
18:09:50 init.check_test_completeness L0151 INFO | Test has no defined levels. Continue without test completeness checks
18:09:50 conftest.enhance_inventory L0285 INFO | Inventory file: ['../ansible/snappi-sonic']
18:09:52 ptfhost_utils.run_icmp_responder_session L0310 INFO | Skip running icmp_responder at session level, it is only for dualtor testbed with active-active mux ports.
18:09:52 init._sanity_check L0428 INFO | Skip sanity check according to command line argument
18:09:52 conftest.collect_before_test L2457 INFO | Dumping Disk and Memory Space information before test on sonic-s6100-dut1
18:09:53 conftest.collect_before_test L2461 INFO | Collecting core dumps before test on sonic-s6100-dut1
18:09:53 conftest.collect_before_test L2470 INFO | Collecting running config before test on sonic-s6100-dut1
18:09:55 conftest.temporarily_disable_route_check L2736 INFO | Skipping temporarily_disable_route_check fixture
18:09:55 conftest.generate_params_dut_hostname L1395 INFO | Using DUTs ['sonic-s6100-dut1'] in testbed 'vms-snappi-sonic'
18:09:55 conftest.set_rand_one_dut_hostname L0561 INFO | Randomly select dut sonic-s6100-dut1 for testing
18:09:55 init._fixture_generator_decorator L0081 INFO | -------------------- fixture enable_packet_aging_after_test setup starts --------------------
18:09:55 init._fixture_generator_decorator L0085 INFO | -------------------- fixture enable_packet_aging_after_test setup ends --------------------
18:09:55 init._fixture_generator_decorator L0081 INFO | -------------------- fixture rand_lossless_prio setup starts --------------------
18:09:55 init._fixture_generator_decorator L0085 INFO | -------------------- fixture rand_lossless_prio setup ends --------------------
18:09:55 init._fixture_generator_decorator L0081 INFO | -------------------- fixture rand_lossy_prio setup starts --------------------
18:09:55 init._fixture_generator_decorator L0085 INFO | -------------------- fixture rand_lossy_prio setup ends --------------------
18:09:55 init._fixture_generator_decorator L0081 INFO | -------------------- fixture start_pfcwd_after_test setup starts --------------------
18:09:55 init._fixture_generator_decorator L0085 INFO | -------------------- fixture start_pfcwd_after_test setup ends --------------------
18:09:55 init._fixture_func_decorator L0069 INFO | -------------------- fixture snappi_api_serv_ip setup starts --------------------
18:09:55 init._fixture_func_decorator L0076 INFO | -------------------- fixture snappi_api_serv_ip setup ends --------------------
18:09:55 init._fixture_func_decorator L0069 INFO | -------------------- fixture snappi_api_serv_port setup starts --------------------
18:09:55 init._fixture_func_decorator L0076 INFO | -------------------- fixture snappi_api_serv_port setup ends --------------------
18:09:55 init._fixture_generator_decorator L0081 INFO | -------------------- fixture snappi_api setup starts --------------------
18:09:55 init._fixture_generator_decorator L0085 INFO | -------------------- fixture snappi_api setup ends --------------------
18:09:56 conftest.rand_one_dut_front_end_hostname L0597 INFO | Randomly select dut sonic-s6100-dut1 for testing
18:09:56 init._fixture_func_decorator L0069 INFO | -------------------- fixture lossless_prio_list setup starts --------------------
18:09:57 init._fixture_func_decorator L0076 INFO | -------------------- fixture lossless_prio_list setup ends --------------------
18:09:57 init._fixture_func_decorator L0069 INFO | -------------------- fixture get_snappi_ports setup starts --------------------
18:09:57 conftest.generate_port_lists L1464 INFO | Generate dut_port_map: {'sonic-s6100-dut1': ['sonic-s6100-dut1|Ethernet64', 'sonic-s6100-dut1|Ethernet68', 'sonic-s6100-dut1|Ethernet72', 'sonic-s6100-dut1|Ethernet76']}
18:09:57 conftest.generate_port_lists L1487 INFO | Generate port_list: ['sonic-s6100-dut1|Ethernet64', 'sonic-s6100-dut1|Ethernet68', 'sonic-s6100-dut1|Ethernet72', 'sonic-s6100-dut1|Ethernet76']
18:09:57 init._fixture_func_decorator L0069 INFO | -------------------- fixture get_snappi_ports_single_dut setup starts --------------------
18:09:57 init._fixture_func_decorator L0076 INFO | -------------------- fixture get_snappi_ports_single_dut setup ends --------------------
18:09:57 init._fixture_func_decorator L0076 INFO | -------------------- fixture get_snappi_ports setup ends --------------------
18:09:57 init._fixture_func_decorator L0069 INFO | -------------------- fixture prio_dscp_map setup starts --------------------
18:09:58 init._fixture_func_decorator L0076 INFO | -------------------- fixture prio_dscp_map setup ends --------------------
18:09:58 init._fixture_func_decorator L0069 INFO | -------------------- fixture all_prio_list setup starts --------------------
18:09:58 init._fixture_func_decorator L0076 INFO | -------------------- fixture all_prio_list setup ends --------------------
18:09:58 init._fixture_func_decorator L0069 INFO | -------------------- fixture lossy_prio_list setup starts --------------------
18:09:58 init._fixture_func_decorator L0076 INFO | -------------------- fixture lossy_prio_list setup ends --------------------
18:09:58 init.loganalyzer L0074 INFO | Log analyzer is disabled
18:09:58 init.memory_utilization L0108 INFO | Hostname: sonic-s6100-dut1, Hwsku: Accton-AS7726-32X, Platform: x86_64-accton_as7726_32x-r0
18:09:58 init.store_fixture_values L0020 INFO | store memory_utilization test_multidut_pfcwd_all_to_all[multidut_port_info0-False]
18:09:58 init._fixture_generator_decorator L0081 INFO | -------------------- fixture ignore_route_check_for_cisco_8000 setup starts --------------------
18:09:58 init._fixture_generator_decorator L0085 INFO | -------------------- fixture ignore_route_check_for_cisco_8000 setup ends --------------------
18:09:58 init.pytest_runtest_setup L0034 INFO | collect memory before test test_multidut_pfcwd_all_to_all[multidut_port_info0-False]
18:09:58 init.pytest_runtest_setup L0054 INFO | Before test: collected memory_values {'before_test': {'sonic-s6100-dut1': {'monit': {'memory_usage': 22.7}}}, 'after_test': {'sonic-s6100-dut1': {}}}
---------------------------------------------------------------------------------------------------------------------------------- live log call ----------------------------------------------------------------------------------------------------------------------------------
18:09:58 test_pfcwd_a2a_with_snappi.test_multidut L0068 INFO | Running test for testbed subtype: single-dut-single-asic
18:09:58 snappi_fixtures.__intf_config_multidut L0853 INFO | Configuring Dut: sonic-s6100-dut1 with port Ethernet72 with IP 20.1.1.0/31
18:10:00 snappi_fixtures.__intf_config_multidut L0853 INFO | Configuring Dut: sonic-s6100-dut1 with port Ethernet64 with IP 20.1.1.2/31
18:10:01 snappi_fixtures.__intf_config_multidut L0853 INFO | Configuring Dut: sonic-s6100-dut1 with port Ethernet68 with IP 20.1.1.4/31
18:10:02 snappi_fixtures.__intf_config_multidut L0853 INFO | Configuring Dut: sonic-s6100-dut1 with port Ethernet76 with IP 20.1.1.6/31
18:13:52 connection._warn L0246 WARNING| Verification of certificates is disabled
18:13:52 connection._info L0243 INFO | Determining the platform and rest_port using the 10.36.77.53 address...
18:13:52 connection._warn L0246 WARNING| Unable to connect to http://10.36.77.53:11009.
18:13:52 connection._info L0243 INFO | Connection established to https://10.36.77.53:11009 on windows
18:13:52 connection._info L0243 INFO | Using IxNetwork api server version 10.20.2402.29
18:13:52 connection._info L0243 INFO | User info IxNetwork/WIN-11RK5TNKNAN/8010
18:13:52 snappi_api.info L1132 INFO | snappi-0.9.1
18:13:52 snappi_api.info L1132 INFO | snappi_ixnetwork-0.9.1
18:13:52 snappi_api.info L1132 INFO | ixnetwork_restpy-1.0.64
18:13:53 snappi_api.info L1132 INFO | Config validation 0.084s
18:13:53 snappi_api.info L1132 INFO | Ports configuration 0.059s
18:13:53 snappi_api.info L1132 INFO | Captures configuration 0.032s
18:14:03 snappi_api.info L1132 INFO | Location hosts ready [10.36.78.53] 2.057s
18:14:03 snappi_api.info L1132 INFO | Speed conversion is not require for (port.name, speed) : [('Port 0', 'novusHundredGigNonFanOut'), ('Port 1', 'novusHundredGigNonFanOut'), ('Port 2', 'novusHundredGigNonFanOut'), ('Port 3', 'novusHundredGigNonFanOut')]
18:14:03 snappi_api.info L1132 INFO | Aggregation mode speed change 0.286s
18:14:10 snappi_api.info L1132 INFO | Location preemption [10.36.78.53;6;7, 10.36.78.53;6;5, 10.36.78.53;6;6] 0.020s
18:14:56 snappi_api.info L1132 INFO | Location connect [Port 0, Port 1, Port 2] 46.208s
18:14:56 snappi_api.info L1132 INFO | Location state check [Port 0, Port 1, Port 2] 0.040s
18:14:56 snappi_api.info L1132 INFO | Location configuration 63.066s
18:15:06 snappi_api.info L1132 INFO | Layer1 configuration 10.092s
18:15:06 snappi_api.info L1132 INFO | Lag Configuration 0.008s
18:15:06 snappi_api.info L1132 INFO | Convert device config : 0.093s
18:15:06 snappi_api.info L1132 INFO | Create IxNetwork device config : 0.000s
18:15:06 snappi_api.info L1132 INFO | Push IxNetwork device config : 0.360s
18:15:06 snappi_api.info L1132 INFO | Devices configuration 0.462s
18:15:23 snappi_api.info L1132 INFO | Flows configuration 16.625s
18:15:29 snappi_api.info L1132 INFO | Start interfaces 6.336s
18:15:30 snappi_api.info L1132 INFO | IxNet - The Traffic Item was modified. Please perform a Traffic Generate to update the associated traffic Flow Groups
18:15:30 pfcwd_multi_node_helper_new.__run_traffi L0550 INFO | Wait for Arp to Resolve ...
18:15:30 pfcwd_multi_node_helper_new.__run_traffi L0553 INFO | Starting transmit on all flows ...
18:15:32 snappi_api.info L1132 INFO | Flows generate/apply 2.049s
18:15:45 snappi_api.info L1132 INFO | Flows clear statistics 12.532s
18:15:45 snappi_api.info L1132 INFO | Captures start 0.000s
18:15:49 snappi_api.info L1132 INFO | Flows start 4.009s
18:15:59 pfcwd_multi_node_helper_new.__run_traffi L0586 INFO | Stop transmit on all flows ...
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Warm Up Traffic 0 -> 1 Prio 3 , Tx Frames : 1795977, Rx Frames : 1795977
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Warm Up Traffic 0 -> 2 Prio 3 , Tx Frames : 1795977, Rx Frames : 1795977
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Warm Up Traffic 1 -> 0 Prio 3 , Tx Frames : 1795977, Rx Frames : 1795977
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Warm Up Traffic 1 -> 2 Prio 3 , Tx Frames : 1795977, Rx Frames : 1795977
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Warm Up Traffic 2 -> 0 Prio 3 , Tx Frames : 1795977, Rx Frames : 1795977
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Warm Up Traffic 2 -> 1 Prio 3 , Tx Frames : 1795977, Rx Frames : 1795977
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Pause Storm , Tx Frames : 596, Rx Frames : 0
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0642 INFO | PFC pause storm expected to be dropped
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Test Flow 0 -> 1 Prio 3 , Tx Frames : 3591954, Rx Frames : 3591954
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Test Flow 0 -> 2 Prio 3 , Tx Frames : 3591954, Rx Frames : 3591954
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Test Flow 1 -> 0 Prio 3 , Tx Frames : 3591954, Rx Frames : 3591954
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0680 INFO | This test flow is delayed by PFC storm
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0681 INFO | Tx and Rx should not have any dropped packet
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Test Flow 1 -> 2 Prio 3 , Tx Frames : 3591954, Rx Frames : 3591954
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Test Flow 2 -> 0 Prio 3 , Tx Frames : 3591954, Rx Frames : 3591954
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0680 INFO | This test flow is delayed by PFC storm
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0681 INFO | Tx and Rx should not have any dropped packet
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Test Flow 2 -> 1 Prio 3 , Tx Frames : 3591954, Rx Frames : 3591954
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 1 Prio 1 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 1 Prio 5 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 1 Prio 6 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 1 Prio 2 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 1 Prio 0 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 2 Prio 1 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 2 Prio 5 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 2 Prio 6 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 2 Prio 2 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 0 -> 2 Prio 0 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 0 Prio 1 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 0 Prio 5 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 0 Prio 6 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 0 Prio 2 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 0 Prio 0 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 2 Prio 1 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 2 Prio 5 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 2 Prio 6 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 2 Prio 2 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 1 -> 2 Prio 0 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 0 Prio 1 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 0 Prio 5 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 0 Prio 6 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 0 Prio 2 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 0 Prio 0 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 1 Prio 1 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 1 Prio 5 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 1 Prio 6 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 1 Prio 2 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0638 INFO | Flow Name : Background Flow 2 -> 1 Prio 0 , Tx Frames : 718390, Rx Frames : 718390
18:15:59 pfcwd_multi_node_helper_new.__verify_res L0648 INFO | Background flows expected not to have any dropped packets
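The per-flow check that produces the log lines above can be sketched as follows. This is a hedged illustration, not the helper's actual code: the flow-stat dictionary shape (`name`, `frames_tx`, `frames_rx`) is an assumption for the sketch.

```python
def verify_background_flows(flow_stats):
    """Fail if any background flow dropped packets (Tx frames != Rx frames)."""
    for stat in flow_stats:
        # Only background flows must be strictly lossless here.
        if not stat["name"].startswith("Background Flow"):
            continue
        tx, rx = stat["frames_tx"], stat["frames_rx"]
        assert tx == rx, "Background flow {} dropped {} packets".format(
            stat["name"], tx - rx)

# Mirrors the passing case in the log: Tx == Rx, so no assertion fires.
stats = [{"name": "Background Flow 0 -> 2 Prio 0",
          "frames_tx": 718390, "frames_rx": 718390}]
verify_background_flows(stats)
```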
18:15:59 snappi_fixtures.cleanup_config L0943 INFO | Removing Configuration on Dut: sonic-s6100-dut1 with port Ethernet72 with ip :20.1.1.0/31
18:16:00 snappi_fixtures.cleanup_config L0943 INFO | Removing Configuration on Dut: sonic-s6100-dut1 with port Ethernet64 with ip :20.1.1.2/31
18:16:02 snappi_fixtures.cleanup_config L0943 INFO | Removing Configuration on Dut: sonic-s6100-dut1 with port Ethernet68 with ip :20.1.1.4/31
18:16:03 snappi_fixtures.cleanup_config L0943 INFO | Removing Configuration on Dut: sonic-s6100-dut1 with port Ethernet76 with ip :20.1.1.6/31
PASSED [100%]
-------------------------------------------------------------------------------------------------------------------------------- live log teardown --------------------------------------------------------------------------------------------------------------------------------
18:16:04 init.pytest_runtest_teardown L0066 INFO | collect memory after test test_multidut_pfcwd_all_to_all[multidut_port_info0-False]
18:16:04 init.pytest_runtest_teardown L0089 INFO | After test: collected memory_values {'before_test': {'sonic-s6100-dut1': {'monit': {'memory_usage': 22.7}}}, 'after_test': {'sonic-s6100-dut1': {'monit': {'memory_usage': 23.2}}}}
18:16:04 init._fixture_generator_decorator L0093 INFO | -------------------- fixture ignore_route_check_for_cisco_8000 teardown starts --------------------
18:16:04 init._fixture_generator_decorator L0102 INFO | -------------------- fixture ignore_route_check_for_cisco_8000 teardown ends --------------------
18:16:04 init._fixture_generator_decorator L0093 INFO | -------------------- fixture snappi_api teardown starts --------------------
18:16:04 init._fixture_generator_decorator L0102 INFO | -------------------- fixture snappi_api teardown ends --------------------
18:16:04 init._fixture_generator_decorator L0093 INFO | -------------------- fixture start_pfcwd_after_test teardown starts --------------------
18:16:05 init._fixture_generator_decorator L0102 INFO | -------------------- fixture start_pfcwd_after_test teardown ends --------------------
18:16:05 init._fixture_generator_decorator L0093 INFO | -------------------- fixture rand_lossy_prio teardown starts --------------------
18:16:05 init._fixture_generator_decorator L0102 INFO | -------------------- fixture rand_lossy_prio teardown ends --------------------
18:16:05 init._fixture_generator_decorator L0093 INFO | -------------------- fixture rand_lossless_prio teardown starts --------------------
18:16:05 init._fixture_generator_decorator L0102 INFO | -------------------- fixture rand_lossless_prio teardown ends --------------------
18:16:05 init._fixture_generator_decorator L0093 INFO | -------------------- fixture enable_packet_aging_after_test teardown starts --------------------
18:16:05 init._fixture_generator_decorator L0102 INFO | -------------------- fixture enable_packet_aging_after_test teardown ends --------------------
18:16:05 conftest.temporarily_disable_route_check L2738 INFO | Skipping temporarily_disable_route_check fixture
18:16:05 conftest.collect_after_test L2525 INFO | Dumping Disk and Memory Space information after test on sonic-s6100-dut1
18:16:06 conftest.collect_after_test L2529 INFO | Collecting core dumps after test on sonic-s6100-dut1
18:16:06 conftest.collect_after_test L2540 INFO | Collecting running config after test on sonic-s6100-dut1
18:16:07 conftest.core_dump_and_config_check L2681 WARNING| Core dump or config check failed for test_pfcwd_a2a_with_snappi.py, results: {"core_dump_check": {"failed": false, "new_core_dumps": {"sonic-s6100-dut1": []}}, "config_db_check": {"failed": true, "pre_only_config": {"sonic-s6100-dut1": {"null": {"VLAN_MEMBER": {"Vlan2|Ethernet64": {"tagging_mode": "untagged"}, "Vlan2|Ethernet68": {"tagging_mode": "untagged"}, "Vlan2|Ethernet72": {"tagging_mode": "untagged"}, "Vlan2|Ethernet76": {"tagging_mode": "untagged"}}}}}, "cur_only_config": {"sonic-s6100-dut1": {"null": {}}}, "inconsistent_config": {"sonic-s6100-dut1": {"null": {}}}}}
18:16:07 conftest.restore_config_db_and_config_re L2359 INFO | dut reload called on sonic-s6100-dut1
18:16:08 config_reload.config_reload L0147 INFO | reloading config_db

Co-authored-by: [email protected]
Split the original PR Reg: Changes of VOQ Single Dut Multi asic sonic-net#17130 into individual PRs for each test.
This is an enhancement to support VOQ Single DUT Multi-ASIC setup for T2 topo.
…et#17755)

Description of PR
Summary: Enable YANG check in boot health check and config_reload

Approach
What is the motivation for this PR?
Add a YANG validation check over the running config after config reload and load_minigraph

How did you do it?
Add YANG command check

How did you verify/test it?
E2E
warmboot: https://dev.azure.com/mssonic/internal/_build/results?buildId=827045&view=results
09:34:35 device_utils.verify_yang L0360 INFO | Verify yang over running config
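The YANG check logged above can be sketched as below. This is a minimal sketch: the exact validation command and the `duthost.shell()` return shape are assumptions (the `FakeDut` class stands in for a real DUT host object), and the real `device_utils.verify_yang` may differ.

```python
def verify_yang(duthost, validate_cmd):
    """Run a YANG validation command over the running config; fail on non-zero rc."""
    out = duthost.shell(validate_cmd, module_ignore_errors=True)
    assert out["rc"] == 0, "YANG validation failed: {}".format(out.get("stderr", ""))

class FakeDut:
    """Stand-in DUT whose shell() always reports success (for illustration)."""
    def shell(self, cmd, module_ignore_errors=False):
        return {"rc": 0, "stderr": ""}

# Passes: rc == 0, so the running config is treated as YANG-valid.
verify_yang(FakeDut(), "yang-validate")  # hypothetical command name
```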
…et#18030)

* [dualtor-aa] Skip `test_arp_update_for_failed_standby_neighbor`

Signed-off-by: Longxiang Lyu <[email protected]>
What is the motivation for this PR?
The KVM testbed has a chance to fail due to memory exhaustion:

>       pytest.fail(message)
E       Failed: [ALARM]: monit:memory_usage, Previous memory usage 90.2 exceeds high threshold 90.0
How did you do it?
Increase KVM testbed memory from 4G to 6G
…estarting of orchagent (sonic-net#16765)

Description of PR
test_lldp_neighbor_post_orchagent_reboot uses the restart_orchagent() helper to warm restart orchagent. This warm restart (killing and restarting the orchagent process) is not supported on VOQ-type chassis.

Add code to handle the VOQ chassis case by restarting the swss@0(1) service instead. This causes ports to be removed from the lldp table and added back, so we can test the issue found in sonic-net#6560.

Summary:
Fixes # (issue)

Type of change

- Bug fix
- Testbed and Framework (new/improvement)
- New Test case
  - Skipped for non-supported platforms
- Test case improvement

Back port request

- 202012
- 202205
- 202305
- 202311
- 202405
- 202411
Approach
What is the motivation for this PR?
Currently test_lldp_neighbor_post_orchagent_reboot is failing against VOQ-type chassis. It was found that warm restart of orchagent is not supported on VOQ-type chassis.

How did you do it?
For VOQ-type chassis, use "systemctl restart swss@0(1)" instead.

How did you verify/test it?
After the fix the test passed consistently against VOQ type chassis.
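The platform-dependent choice described above can be sketched as follows. This is a hedged sketch: the non-VOQ return value is a placeholder for the existing restart_orchagent() path, and the function name is invented for illustration.

```python
def orchagent_restart_cmd(switch_type, asic_index=0):
    """Pick the restart mechanism: per-ASIC swss service on VOQ chassis."""
    if switch_type == "voq":
        # Warm restart of the orchagent process is unsupported on VOQ,
        # so restart the whole per-ASIC swss service instead.
        return "sudo systemctl restart swss@{}".format(asic_index)
    # Placeholder for the original path: kill and restart the orchagent process.
    return "restart_orchagent"

assert orchagent_restart_cmd("voq", 1) == "sudo systemctl restart swss@1"
```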
What is the motivation for this PR?
New testbed modification of VOQ Single DUT and multi-asic.
How did you do it?
If the switch type is VOQ, decide based on the number of DUTs and on whether the DUT is a modular chassis.
How did you verify/test it?
Any platform specific information?
Supported testbed topology if it's a new test case?
T2 VOQ Single Dut Multi ASIC
What is the motivation for this PR?
New testbed modification of VOQ Single DUT and multi-asic.
How did you do it?
If the switch type is VOQ, decide the supervisor based on the number of DUTs and on whether the DUT is a modular chassis.
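The supervisor decision above can be sketched as a predicate. This is a hedged illustration under the assumption that a VOQ multi-DUT modular chassis has a dedicated supervisor card while a VOQ single-DUT multi-ASIC box does not; the argument names are invented for the sketch.

```python
def has_dedicated_supervisor(switch_type, num_duts, is_modular_chassis):
    """VOQ multi-DUT modular chassis -> dedicated supervisor card exists."""
    return switch_type == "voq" and is_modular_chassis and num_duts > 1

assert has_dedicated_supervisor("voq", 3, True)          # modular VOQ chassis
assert not has_dedicated_supervisor("voq", 1, False)     # single-DUT multi-ASIC
```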
How did you verify/test it?
What is the motivation for this PR?
New testbed modification of VOQ Single DUT and multi-asic.
How did you do it?
Skip internal neighbors for both VOQ single-DUT multi-ASIC and multi-DUT multi-ASIC setups.
How did you verify/test it?
Any platform specific information?
Supported testbed topology if it's a new test case?
T2 VOQ Single Dut Multi ASIC
What is the motivation for this PR?
The eos default cred is all digits. It may be interpreted as an integer and raise the error below during tests:

  File "/var/src/sonic-mgmt/tests/common/devices/eos.py", line 85, in shutdown
    out = self.eos_config(
  File "/var/src/sonic-mgmt/tests/common/devices/base.py", line 131, in _run
    raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
tests.common.errors.RunAnsibleModuleFail: run module eos_config failed, Ansible Results =>
failed = True
module_stdout = 
module_stderr = Expected unicode or bytes, got 123456
msg = MODULE FAILURE
See stdout/stderr for the exact error
_ansible_no_log = None
changed = False
stdout =
stderr =

How did you do it?
Add quotes to the all-digits default cred to force the string type.
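The failure mode can be demonstrated minimally with the stdlib json parser (YAML treats unquoted digit runs the same way): an unquoted all-digit cred parses as an int, and downstream code expecting text then fails with "Expected unicode or bytes".

```python
import json

# Unquoted: parsed as an integer -- what the Ansible error saw.
unquoted = json.loads('{"ansible_password": 123456}')
# Quoted: parsed as a string -- what the fix produces.
quoted = json.loads('{"ansible_password": "123456"}')

assert isinstance(unquoted["ansible_password"], int)
assert isinstance(quoted["ansible_password"], str)
```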
PR sonic-net#15004 enhanced the connection graph csv files to support "AutoNeg".
After this change, deploy fanout switch using connection graph csv files
missing this field would fail.

This change updates the fanout deploy template to get the value of
"autoneg" for links on a best-effort basis. With this change, the "AutoNeg"
field in the csv file becomes optional and defaults to "no".

Signed-off-by: Xin Wang <[email protected]>
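The best-effort lookup described above amounts to a defaulted field access. A minimal sketch (the CSV row is modeled as a dict; the real template is Jinja2, where the equivalent is a default filter):

```python
def link_autoneg(link_row):
    """Return the link's AutoNeg setting, defaulting to 'no' when the CSV
    file predates the column added by sonic-net#15004."""
    return link_row.get("AutoNeg", "no")

assert link_autoneg({"StartPort": "Ethernet0", "AutoNeg": "yes"}) == "yes"
assert link_autoneg({"StartPort": "Ethernet0"}) == "no"  # old csv, no column
```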
)

* Ignore kdump related errors

 * These errors are temporarily ignored until the issue referenced in
   sonic-net/sonic-buildimage#22346 is fixed.

Signed-off-by: Prabhat Aravind <[email protected]>
…onic-net#17527)

* Skip the nvgre hash test on SPC1 t1-lag

* Change the skip for nvgre hash to xfail on SPC1
… single node T2 (sonic-net#17986)

What is the motivation for this PR?
This PR adds a clear handling mechanism for non-chassis T2 by setting dut_index = 0.
It also changes minigraph generation for non-chassis T2 setups.

How did you do it?
How did you verify/test it?
Verified with:
Non-chassis VOQ T2 devices & T2 VOQ Chassis devices
---------

Co-authored-by: rawal <[email protected]>
Co-authored-by: vikshaw-Nokia <[email protected]>
What is the motivation for this PR?
test_default_route_with_bgp_flap fails for msft-specific AZNG route maps on upstream LCs even with the fix provided by [default-route-bgp-flap]: Fix for 9052 - test failure sonic-net#15962.
How did you do it?
In addition to the fix provided by [default-route-bgp-flap]: Fix for 9052 - test failure sonic-net#15962, handle the case for all route maps.
How did you verify/test it?
Ran the test on T2 chassis with upstream LC, having different sets of route maps and made sure the tests are passing as expected.
…sonic-net#18035)

Approach
What is the motivation for this PR?
The upgrade_path test pipeline is broken by this change, since the from-version may not support the bmp feature.

How did you do it?
Add a config command check and skip the CLI if the bmp feature is not supported in the version.

How did you verify/test it?
PR validation will cover the sanity check.
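The guard described above can be sketched as follows. This is a hedged illustration: the probe command, the `duthost.shell()` return shape, and `FakeDut` are assumptions for the sketch, not the test's real code.

```python
def apply_bmp_cli(duthost, cmd="config bmp enable bgp-neighbor-table"):
    """Run the bmp config cli only when the image's CLI supports it."""
    # Probe the CLI: older from-versions without the bmp feature return non-zero.
    probe = duthost.shell("config bmp --help", module_ignore_errors=True)
    if probe["rc"] != 0:
        return False  # feature absent on this version; skip the cli
    duthost.shell(cmd)
    return True

class FakeDut:
    """Stand-in DUT: reports success only when bmp is 'supported'."""
    def __init__(self, supports_bmp):
        self.supports_bmp = supports_bmp
    def shell(self, cmd, module_ignore_errors=False):
        return {"rc": 0 if self.supports_bmp else 1}

assert apply_bmp_cli(FakeDut(True))       # supported: cli is applied
assert not apply_bmp_cli(FakeDut(False))  # unsupported: cli is skipped
```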
sonic-net#17999)

* remove unused arguments when calling config_vchassis playbook.

* remove unused --vault-password-file
What is the motivation for this PR?
Warm reboot is not fully supported on dualtor and might leave the testbed in a corrupted state.
Let's skip the pfc wb testcase for now.

How did you do it?
Skip the pfcwd warm reboot test on dualtor.

Signed-off-by: Longxiang Lyu <[email protected]>
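The skip condition above can be sketched as a small predicate. This is a hedged sketch: the topology-name matching mirrors the description, while the exact testbed-info shape and function name are assumptions (a real test would call pytest.skip when this returns True).

```python
def should_skip_pfcwd_wb(topo_name, reboot_type):
    """Skip pfcwd warm-reboot cases on dualtor testbeds."""
    return reboot_type == "warm" and "dualtor" in topo_name

assert should_skip_pfcwd_wb("dualtor-120", "warm")   # corrupt-prone: skip
assert not should_skip_pfcwd_wb("t0-64", "warm")     # non-dualtor: run
```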
Add Dash meter class, policy, rule config to PL tests (sonic-net#17884)
Summary:
Add meter class, policy, rule config to Dash PrivateLink tests

Type of change
Test case improvement
…f features are enabled (sonic-net#17976)

What is the motivation for this PR?
Sometimes dhcp containers are left in a bad state by previous tests, but the sanity check cannot notice it because they are not critical services.

How did you do it?
Add them to critical services if features are enabled

How did you verify/test it?
Run tests
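The change described above amounts to conditionally extending the critical-service list. A hedged sketch, assuming a simplified FEATURE-table shape (the real CONFIG_DB FEATURE table has more fields):

```python
def critical_services(base, feature_table):
    """Extend the sanity check's critical services with dhcp containers
    only when the corresponding features are enabled."""
    services = list(base)
    for feat in ("dhcp_relay", "dhcp_server"):
        if feature_table.get(feat, {}).get("state") == "enabled":
            services.append(feat)
    return services

features = {"dhcp_relay": {"state": "enabled"},
            "dhcp_server": {"state": "disabled"}}
assert critical_services(["swss", "syncd"], features) == [
    "swss", "syncd", "dhcp_relay"]
```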
Why I did it
The GNMI service will change to map a cname to a role list:
sonic-net/sonic-buildimage#21849

To make sure the GNMI test case in sonic-mgmt passes with or without this change, we need to improve the test case to handle both cases.

Work item tracking
Microsoft ADO: 31561802
How I did it
Ignore the test case bug tracked in github issue sonic-net#17876.
Change the GNMI setup code to handle the role list by checking the yang model.

How to verify it
All test cases pass.
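Handling both schemas comes down to normalizing the role field. A minimal sketch (the function name is invented; the old schema maps a cname to a single role string, the new one, per sonic-buildimage#21849, to a role list):

```python
def normalize_roles(role_field):
    """Accept either the old single-role string or the new role list."""
    return role_field if isinstance(role_field, list) else [role_field]

assert normalize_roles("gnmi_readwrite") == ["gnmi_readwrite"]  # old schema
assert normalize_roles(["r1", "r2"]) == ["r1", "r2"]            # new schema
```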
What is the motivation for this PR?
Fix the following error

- ipv6 address pattern
- 'route-id' typo
- remove template dependencies for nhipv4 and host['interfaces']['Loopback0']['ipv4']
Summary: BMP feature is not needed for chassis supervisors because they don't have BGP service running.
Summary: This is a joint proposal from Microsoft and Nvidia to present a test plan for static SRv6 functionality in SONiC. The HLD doc for static SRv6 can be found in https://github.com/sonic-net/SONiC/blob/master/doc/srv6/srv6_static_config_hld.md.
…tion (sonic-net#16885)

What is the motivation for this PR?
Verify the HeadroomPool size test for broadcom-dnx chassis

How did you verify/test it?
Executed the qos test for HeadroomPool size and verified the results
* DASH PL tests fixes

 * Teardown configurations after the test
 * Add an option to select the dpu to use for testing

Signed-off-by: Prabhat Aravind <[email protected]>
@kamalsahu0001 kamalsahu0001 merged commit 089e30e into snappi-changes-pr Apr 22, 2025
1 check passed