[action] [PR:21939] Fix/nonlinear high nexthop dataplane downtime #975
Merged
mssonicbld merged 1 commit into Azure:202412 on Jan 23, 2026
Conversation
Original PR: sonic-net/sonic-mgmt#21939
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
### Description of PR
Summary:
Fixes # (issue)
This PR fixes **excessively high dataplane downtime attributed to nexthop behavior** in the high‑BGP test scenarios.
Nexthop handling in the test logic caused downtime measurements to stay high and inconsistent. This PR corrects nexthop‑related announcement and verification so that:

- Traffic is always tested towards valid, expected nexthops,
- Stale or mis‑mapped nexthops no longer inflate the observed downtime,
- Downtime better reflects the actual behavior.
The fix put out in [PR #20842](sonic-net/sonic-mgmt#20842) now also fixes the recently found issue where a failed `nexthop_group_member_scale` run pollutes the test environment for future re-runs of the entire testbed.
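To illustrate the bulk‑reannouncement idea, a minimal sketch in Python (the `announce_routes_batch` helper, the exabgp HTTP endpoint, and the prefixes are illustrative assumptions, not the actual sonic-mgmt code):

```python
import requests

# Hypothetical exabgp process endpoint on the PTF host; sonic-mgmt's
# real route announcement helpers wrap this differently.
EXABGP_URL = "http://ptf-host:5000"


def announce_routes_batch(prefixes, nexthop):
    """Reannounce the full starting route state in one request,
    rather than one round trip per prefix."""
    commands = "\n".join(
        "announce route {} next-hop {}".format(p, nexthop) for p in prefixes
    )
    # A single POST carrying all announce commands: at high nexthop
    # scale, per-prefix announcements are slow enough to leave stale
    # state in place and inflate the measured downtime.
    requests.post(EXABGP_URL, data={"commands": commands})


# Example: restore the starting state towards one valid nexthop.
announce_routes_batch(["192.0.2.0/24", "198.51.100.0/24"], "10.0.0.1")
```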
Dependency:

- Depends on the fixes introduced in [PR #21936](sonic-net/sonic-mgmt#21936)
### Type of change

- [x] Bug fix
- [ ] Testbed and Framework(new/improvement)
- [ ] New Test case
- [ ] Skipped for non-supported platforms
- [ ] Test case improvement
### Back port request

- [ ] 202205
- [ ] 202305
- [ ] 202311
- [ ] 202405
- [ ] 202411
- [ ] 202505
### Approach
#### What is the motivation for this PR?

- Measured dataplane downtime remained unexpectedly high when:
  - The number of nexthops increased,
  - The test exercised different nexthop sets or ECMP groups.
- Downtime spikes appeared that did not match the BGP session and route programming timelines.
#### How did you do it?

- A fresh, clean PTF dataplane environment for the nexthop group member scale test, similar to [PR #21936](sonic-net/sonic-mgmt#21936); see the fixture sketch below
- Uses the bulk reannouncement of the starting state, as per the fix introduced by [PR #20842](sonic-net/sonic-mgmt#20842)
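A minimal sketch of the clean‑dataplane idea (sonic-mgmt provides a `ptfadapter` fixture whose PTF dataplane exposes `flush()`; the fixture below and its placement are illustrative assumptions, not the PR's exact code):

```python
import pytest


@pytest.fixture(scope="function")
def clean_ptf_dataplane(ptfadapter):
    """Give the nexthop group member scale test a fresh PTF dataplane
    so packets queued by earlier cases cannot inflate its downtime."""
    # Drop any packets still buffered from previous test cases.
    ptfadapter.dataplane.flush()
    yield ptfadapter
    # Flush again on teardown so a failed run does not pollute the
    # environment for future re-runs on the same testbed.
    ptfadapter.dataplane.flush()
```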
#### How did you verify/test it?

- Ran the high‑BGP convergence, flap, and nexthop group member scale tests end‑to‑end with the nexthop fixes applied on:
  - Topology: `t0-isolated-d2u510s2`
  - Platform: Broadcom Arista-7060X6-64PE-B-C512S2
- Verified that the dataplane downtime does not exceed the expected MAX_DOWNTIME_NEXTHOP_GROUP_MEMBER_CHANGE of 30 seconds (see the assertion sketch after the results below).
Dataplane downtime results before: 63 seconds > MAX_DOWNTIME_NEXTHOP_GROUP_MEMBER_CHANGE

Dataplane downtime results now:

- Shutdown phase: 0.11 seconds, as expected
- Startup phase: 0.14 seconds, as expected
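A minimal sketch of the threshold check itself (using sonic-mgmt's `pytest_assert` helper; the `verify_downtime` wrapper and the sample value are illustrative assumptions):

```python
from tests.common.helpers.assertions import pytest_assert

# Budget from this PR: nexthop group member changes must keep
# dataplane downtime within 30 seconds.
MAX_DOWNTIME_NEXTHOP_GROUP_MEMBER_CHANGE = 30


def verify_downtime(downtime_seconds):
    """Fail the test if measured dataplane downtime exceeds the budget."""
    pytest_assert(
        downtime_seconds <= MAX_DOWNTIME_NEXTHOP_GROUP_MEMBER_CHANGE,
        "Dataplane downtime {:.2f}s exceeds the {}s budget".format(
            downtime_seconds, MAX_DOWNTIME_NEXTHOP_GROUP_MEMBER_CHANGE
        ),
    )


# Before the fix this check saw ~63s; after, ~0.11s (shutdown)
# and ~0.14s (startup) per phase.
verify_downtime(0.14)
```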
This also fixes the recently found issue where a failed nexthop group member scale run pollutes the FIB on the switch for future re-runs of the testbed.
#### Any platform specific information?

#### Supported testbed topology if it's a new test case?

### Documentation