Support for not using all ports on hardware DUT for testing#1744
sanmalho-git wants to merge 4 commits into sonic-net:master
Conversation
…stead of (vlan_id - vlan_base)
Had extra spaces and was indented with 2 spaces instead of 4
yxieca left a comment:
Approved with 2 very cosmetic issues.
The solution proposed works if we have a one-to-one mapping between a leaf fanout switch and a DUT. But if we want to share the leaf fanout among multiple DUTs, this restricts the ports on the DUTs that are connected to the fanout: they have to be all unique and contiguous across the multiple DUTs. I am working on an implementation that would use the connection graph to create the dut_fp_ports dictionary. The connection graph's 'device_port_vlans' has the DUT port and fanout VLAN info. This, along with the device hwsku, can help get the right key mapping for the dut_fp_ports dictionary.
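A rough sketch of the kind of data the connection graph exposes per DUT port. The exact field names below ('vlanids', 'mode') are assumptions for illustration, not the confirmed schema:

```python
# Hypothetical shape of 'device_port_vlans' from the connection graph:
# each DUT port maps to the VLAN(s) carried on its link to the fanout.
# Field names here are illustrative assumptions.
device_port_vlans = {
    "Ethernet12": {"vlanids": "103", "mode": "Access"},
    "Ethernet51": {"vlanids": "151", "mode": "Access"},
}

# Combined with the DUT hwsku's port naming, this gives the per-port
# fanout VLAN needed to derive the keys of the dut_fp_ports dictionary.
for port, attrs in sorted(device_port_vlans.items()):
    print(port, attrs["vlanids"])
```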
retest vsimage please
@sanmalho-git: The newly-added platform API tests (specifically the SFP tests) run under the assumption that transceivers are connected to all ports on the device. With this change, we will also need to update those tests to understand which ports should have transceivers connected. Could you please also provide an example of what a test should do to determine whether a port is expected to be connected?
@jleveque: We only list the ports that are connected in the connection graph (e.g. lab_connection_graph.xml). In our tests, we use the 'conn_graphs' ansible module to get these connections and expect a transceiver to be present only on those ports that are in the connection graph (e.g. in sonic_lab_links.csv). One other challenge for us right now is that we have a SONiC box with 1G copper ports as well. We are trying to figure out how to distinguish those as ports that are present in the connection graph but still have no transceiver on them (e.g. Ethernet3 from the sample sonic_lab_links.csv). Currently, we use the 'bandwidth' to determine this: if it is 1000, then we expect no transceiver. Our SONiC box has 48 copper 1G ports and 4 10G SFP ports. But this won't work for 1G fiber ports.
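The bandwidth heuristic described above could be sketched roughly as follows. The function name and dict shape are illustrative assumptions, not the actual test code:

```python
# Hypothetical sketch: decide which connection-graph ports should have a
# transceiver, using the 1G-copper bandwidth heuristic from the comment.

def ports_expecting_transceiver(device_conn):
    """device_conn maps DUT port -> link attributes from the connection graph."""
    expected = []
    for port, attrs in device_conn.items():
        # Heuristic: 1000 Mb/s links are assumed to be copper RJ45 ports
        # with no pluggable transceiver; everything else (e.g. 10G SFP+)
        # is expected to have one. Known limitation: 1G fiber ports
        # would be misclassified by this rule.
        if attrs.get("bandwidth") != 1000:
            expected.append(port)
    return expected

conns = {
    "Ethernet3":  {"bandwidth": 1000},   # 1G copper: no transceiver expected
    "Ethernet48": {"bandwidth": 10000},  # 10G SFP+: transceiver expected
}
print(ports_expecting_transceiver(conns))  # ['Ethernet48']
```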
In this case, we may want to consider adding a "transceiver type" and/or "cable type" to the connection graph. It would also be beneficial for my SFP tests, in order to know which type of transceiver to expect when testing. |
@sanmalho-git can you address the merge conflict?
This approach doesn't work well for us, so we are closing this pull request. We will open another PR with a better solution.
…#8355) To include the following changes:
* d84a8cc 2021-08-05 | [fast-reboot] revert the change of disabling counter polling before fast-reboot (sonic-net#1744) (HEAD -> 202012, github/202012) [Ying Xie]
* e900bc5 2021-08-04 | Add script null_route_helper (sonic-net#1718) [bingwang-ms]
* 85f14e1 2021-08-02 | disk_check updates: (sonic-net#1736) [Renuka Manavalan]
* d68ac1c 2021-05-27 | [console][show] Force refresh all lines status during show line (sonic-net#1641) [Blueve]
* a0e417f 2021-04-25 | [console] Display success message after line cleared (sonic-net#1579) [Blueve]
* 0c6bb27 2021-04-07 | [console] Include Flow Control status in show line result (sonic-net#1549) [Blueve]
* 8b149a3 Load the database global_db only once for show cli (sonic-net#1712)
* cd0e560 [config][interface][speed] Fixed the config interface speed in multiasic issue (sonic-net#1739)
* b595ba6 [fast-reboot] revert the change of disabling counter polling before fast-reboot (sonic-net#1744)
* 8518820 [minigraph] Donot enable PFC watchdog for MgmtTsToR (sonic-net#1734)
* 2213774 [CLI][show][bgp] Fix the show ip bgp network command (sonic-net#1733)
* 3526507 [configlet] Python3 compatible syntax for extracting a key from the dict (sonic-net#1721)
* 5b56b97 [sonic_installer] don't print errors when installing an image not supporting app ext (sonic-net#1719)
* a581955 [LLDP] Fix lldpshow script to enable display multiple MAC addresses on the same remote physical interface (sonic-net#1657)
Description of PR
Summary:
Support for not using all ports on the hardware DUT for testing, and thus not connecting all the ports to a fanout switch.
Type of change
Approach
What is the motivation for this PR?
Currently, it is required that all ports on the DUT are in use and are connected to a fanout.
However, there is a need to be able to run tests where not all ports are in use. Specifically, when dealing with
Also, the majority of basic functional testing can be done without testing all the ports on the DUT.
How did you do it?
To support the above in orchestration, the following changes were made:
In the current add-topo orchestration, dut_fp_ports is a list of NICs on the testbed server corresponding to the ports on the DUT. In the topology file, the 'host_interfaces' and 'vlans' for VMs are defined as an offset from the 'vlan_base'. When we 'bind' the topology (using the vm_topology module), it uses this offset as the index into dut_fp_ports to get the testbed-server NIC corresponding to the DUT's port. So having fewer elements in dut_fp_ports than the offsets defined in the topology results in an index-out-of-bounds exception.
By changing dut_fp_ports to be a dictionary, with the key being this offset from 'vlan_base' and the value being the actual NIC on the testbed server corresponding to the DUT's port, we avoid the above issue. 'vlan_base' defaults to 0 (for backward compatibility) and can be specified as an extra ansible arg (using the -e option). If it is not specified (has value 0), then the key is the same as the index in the original dut_fp_ports list.
For example, consider a topology with only 2 ports (3 and 51) on the DUT connected to the fanout, with fanout VLANs 103 and 151 respectively,
a 'vlan_base' of 100, and the trunk port on the testbed server being 'eno2'. Then dut_fp_ports would look like:
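A hedged sketch of the resulting dictionary (not the PR's exact code); the "eno2.&lt;vlan&gt;" VLAN sub-interface naming for the server-side NIC is an assumption:

```python
# Illustrative sketch of building dut_fp_ports as a dictionary keyed by
# (fanout_vlan - vlan_base). Interface names are assumptions.

def build_dut_fp_ports(fanout_vlans, trunk_port, vlan_base=0):
    dut_fp_ports = {}
    for idx, vlan in enumerate(fanout_vlans):
        # vlan_base defaults to 0 for backward compatibility; in that
        # case the key is the list index, matching the old list layout.
        key = (vlan - vlan_base) if vlan_base else idx
        dut_fp_ports[key] = "%s.%s" % (trunk_port, vlan)
    return dut_fp_ports

# DUT ports 3 and 51 on fanout VLANs 103 and 151, vlan_base=100:
print(build_dut_fp_ports([103, 151], "eno2", vlan_base=100))
# {3: 'eno2.103', 51: 'eno2.151'}
```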
For a topology where 32 ports are connected to the fanout with VLANs 100 - 132 on trunk port eno2, and vlan_base is not specified, the resulting dut_fp_ports would be:
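A sketch of the unspecified-vlan_base case, where the keys fall back to the list indices. Here 32 consecutive VLANs starting at 100 and the "eno2.&lt;vlan&gt;" naming are illustrative assumptions:

```python
# Illustrative sketch: with vlan_base left at its default of 0, the keys
# are the list indices, so 32 fanout VLANs on trunk port eno2 yield
# index-keyed entries 0..31.
dut_fp_ports = {idx: "eno2.%d" % vlan
                for idx, vlan in enumerate(range(100, 132))}
print(dut_fp_ports[0], dut_fp_ports[31])  # eno2.100 eno2.131
```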
How did you verify/test it?
Deployed a topology defining only 1 VM and 1 'host_interfaces' entry, and validated BGP and PTF connectivity.
Any platform specific information?
Supported testbed topology if it's a new test case?
Documentation