[Dynamic buffer calc] Bug fix: Remove PGs from an administratively down port.#1677

Merged
liat-grozovik merged 1 commit into sonic-net:master from stephenxs:remove-pg-admin-down
Mar 29, 2021
Conversation

@stephenxs (Collaborator) commented Mar 20, 2021

What I did
Bug fix: Remove PGs from an administratively down port.

Signed-off-by: Stephen Sun [email protected]

Why I did it
To fix bugs

How I verified it
Ran the regression and vs tests.

Which release branch to backport (provide reason below if selected)

  • 201811
  • 201911
  • 202006
  • 202012

Details if related

  • Remove PGs from an administratively down port.
    • Introduce a new state, PORT_ADMIN_DOWN, which indicates that the port is administratively down.
    • Remove all PGs when the port is shut down and re-add all configured PGs when the port is brought up.
    • Only record the new value, without touching BUFFER_PG_TABLE, if the following events occur while a port is administratively down:
      • a port's MTU, speed, or cable length is updated
      • a new PG is added to a port or an existing PG is removed from a port
    • Optimize the port event handling flow, since refreshPriorityGroupsForPort should be called only once even when more than one field is updated.
    • Adjust the Lua plugin that calculates the buffer pool size accordingly.
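The admin-down handling described above can be sketched as a small state machine: configured PGs are always recorded, but the applied table (standing in for BUFFER_PG_TABLE) is only touched while the port is up. This is a minimal illustration, not the actual sonic-swss code; the names `BufferManager`, `PortState`, and `applied_pgs_` are hypothetical.

```cpp
#include <cassert>
#include <set>
#include <string>

// Illustrative sketch of the PORT_ADMIN_DOWN handling; names are
// hypothetical, not the real sonic-swss types.
enum class PortState { PORT_READY, PORT_ADMIN_DOWN };

class BufferManager {
public:
    void setAdminStatus(bool up) {
        state_ = up ? PortState::PORT_READY : PortState::PORT_ADMIN_DOWN;
        if (up) {
            // Port started up: re-add all configured PGs.
            applied_pgs_ = configured_pgs_;
        } else {
            // Port shut down: remove all PGs from the applied table.
            applied_pgs_.clear();
        }
    }

    // Config updates (new PG, or an MTU/speed/cable-length-driven change):
    // always record the new value, but only touch the applied table when
    // the port is not administratively down.
    void addPg(const std::string& pg) {
        configured_pgs_.insert(pg);
        if (state_ != PortState::PORT_ADMIN_DOWN) {
            applied_pgs_.insert(pg);
        }
    }

    void removePg(const std::string& pg) {
        configured_pgs_.erase(pg);
        applied_pgs_.erase(pg);
    }

    const std::set<std::string>& applied() const { return applied_pgs_; }

private:
    PortState state_ = PortState::PORT_READY;
    std::set<std::string> configured_pgs_;
    std::set<std::string> applied_pgs_;
};
```

Under this model, a PG added while the port is down appears in the applied table only after the next admin-up event, which mirrors the "record but don't apply" behavior in the bullet list.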

@stephenxs stephenxs marked this pull request as ready for review March 20, 2021 06:23
@neethajohn (Contributor) commented:

@stephenxs, can you move the lua plugin optimizations out of this PR and keep this PR only for the port down bug fix?

@stephenxs (Collaborator, Author) commented:

> @stephenxs, can you move the lua plugin optimizations out of this PR and keep this PR only for the port down bug fix?

Hi @neethajohn
Actually, the lua plugin change is also part of the fix. Perhaps I should change the wording from "optimize" to "adjust"?
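The plugin under discussion computes the shared buffer pool size from per-port reserved headroom. The actual plugin is Lua running inside Redis, but the accounting change it needs can be sketched in C++: once admin-down ports have no applied PGs, they contribute no reserved headroom and therefore stop shrinking the pool. The function name and input shape here are hypothetical.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Illustrative sketch: pool size = total shared memory minus the headroom
// reserved by PGs that are actually applied. Because admin-down ports have
// all their PGs removed, they no longer inflate the reserved sum.
long computePoolSize(
    long total_memory,
    const std::map<std::string, std::vector<long>>& applied_pg_headroom)
{
    long reserved = 0;
    for (const auto& [port, headrooms] : applied_pg_headroom) {
        for (long h : headrooms) {
            reserved += h;  // sum headroom of every applied PG on this port
        }
    }
    return total_memory - reserved;
}
```

For example, shutting a port down removes its entry from the applied map, so the next recalculation returns a larger pool size without any special-casing of admin state inside the sum.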

@liat-grozovik liat-grozovik merged commit 8eb2326 into sonic-net:master Mar 29, 2021
@stephenxs stephenxs deleted the remove-pg-admin-down branch March 29, 2021 11:06
daall pushed a commit that referenced this pull request Apr 1, 2021
raphaelt-nvidia pushed a commit to raphaelt-nvidia/sonic-swss that referenced this pull request Oct 5, 2021
EdenGri pushed a commit to EdenGri/sonic-swss that referenced this pull request Feb 28, 2022
re-enable after fix those tests

Signed-off-by: Guohan Lu <[email protected]>
Janetxxx pushed a commit to Janetxxx/sonic-swss that referenced this pull request Nov 10, 2025