[action] [PR:11008] QoSHdrmPoolsize_tx_disable_enable for all destination ports in lag #12025

Merged
mssonicbld merged 1 commit into sonic-net:202305 from mssonicbld:cherry/202305/11008
Mar 17, 2024

Conversation

@mssonicbld
Collaborator

Description of PR

Summary:
Since the testQosHeadroomPoolsize test takes more than one source port as input, packets could still hash to the other LAG member ports even after tx_disable. Hence tx_disable has to be applied to all ports in the destination LAG to block all traffic while tx is disabled.
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework (new/improvement)
  • Test case (new/improvement)

Back port request

  • 201911
  • 202012
  • 202205
  • 202305

Approach

Since the testQosHeadroomPoolsize test takes more than one source port as input, packets could still hash to the other LAG member ports even after tx_disable. Hence tx_disable has to be applied to all ports in the destination LAG to block all traffic while tx is disabled.
This was achieved by:
- Collecting all the dst_port_ids associated with the src_port_ids
- Passing all the dst_port_ids as a list to tx_disable, instead of a single dst_port_id
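The two steps above can be sketched as follows. This is a minimal, self-contained illustration, not the actual sonic-mgmt code: the function name `collect_lag_dst_ports`, the `lag_members` mapping, and the `tx_disable` stub are hypothetical stand-ins for the test's real helpers.

```python
def collect_lag_dst_ports(dst_port_id, lag_members):
    """Return every port id in the LAG that contains dst_port_id.

    If the port is not a member of any LAG, return it alone.
    lag_members maps a LAG name to the set of its member port ids.
    """
    for members in lag_members.values():
        if dst_port_id in members:
            return sorted(members)
    return [dst_port_id]


def tx_disable(port_ids):
    """Stand-in for the test hook that disables tx on a list of ports."""
    print("tx disabled on ports:", port_ids)


# Example: ports 4, 5 and 6 form one LAG. Traffic aimed at port 4 may
# hash to 5 or 6, so all three must be tx-disabled together.
lag_members = {"PortChannel101": {4, 5, 6}, "PortChannel102": {7, 8}}
ports_to_disable = collect_lag_dst_ports(4, lag_members)
tx_disable(ports_to_disable)  # tx disabled on ports: [4, 5, 6]
```

Disabling the whole member list (rather than the single dst_port_id) is what closes the intermittent-failure window, since no member port is left able to drain hashed traffic.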

What is the motivation for this PR?

Intermittent failure of the testQosHdrmPoolsize test

How did you do it?

See the Approach section above.

How did you verify/test it?

Executed the QoS test suite and verified the results.

Any platform specific information?

Supported testbed topology if it's a new test case?

Documentation

…onic-net#11008)

* QoSHdrmPoolsize_collecting dst ports in lag for tx_disable_enable
@mssonicbld
Collaborator Author

Original PR: #11008

@mssonicbld mssonicbld merged commit 5e50c9e into sonic-net:202305 Mar 17, 2024
