
Define bulk chunk size and bulk chunk size per counter ID#1519

Merged
kcudnik merged 1 commit intosonic-net:masterfrom
stephenxs:bulk-counter-chunk-definition
Feb 5, 2025

Conversation

@stephenxs
Contributor

@stephenxs stephenxs commented Feb 5, 2025

Define bulk chunk size and bulk chunk size per counter ID.
This is to resolve the VS test failure in #1457, which is caused by a circular dependency between the PRs.
In PR #1457, new fields `bulk_chunk_size` and `bulk_chunk_size_per_prefix` were introduced to `sai_redis_flex_counter_group_parameter_t`, whose instances are initialized by orchagent.
However, orchagent is still compiled with the old sairedis header, which leaves both new fields uninitialized and in turn fails the VS test.

We have to split this PR into two:

  1. Define bulk chunk size and bulk chunk size per counter ID #1519, which updates the header sairedis.h only. The motivation is to compile swss (orchagent) with both new fields initialized.
  2. Optimize counter polling interval by making it more accurate #1457, which contains the rest of the code

The order to merge:

  1. Define bulk chunk size and bulk chunk size per counter ID #1519
  2. Optimize counter polling interval by making it more accurate sonic-swss#3391
  3. Optimize counter polling interval by making it more accurate #1457
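
For context, a hedged C++ sketch of what the two new fields might look like in `sai_redis_flex_counter_group_parameter_t`. Only the two field names (`bulk_chunk_size`, `bulk_chunk_size_per_prefix`) come from this PR; the surrounding layout, field types, and neighboring members are illustrative assumptions, not the real header:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch only: the real struct in sairedis.h has more fields
// and may use different types; only the two new member names are from the PR.
typedef struct _sai_redis_flex_counter_group_parameter_t
{
    const char* counter_group_name;          // assumed existing field
    const char* poll_interval;               // assumed existing field
    uint32_t    bulk_chunk_size;             // new: objects per bulk call, 0 = all at once
    const char* bulk_chunk_size_per_prefix;  // new: per-counter-ID override, e.g. "PREFIX:32"
} sai_redis_flex_counter_group_parameter_t;
```

The point of merging the header first is that orchagent can then zero-initialize these members, so instances are well-defined even before the rest of #1457 lands:

```cpp
// Zero-initialization keeps old and new consumers consistent.
sai_redis_flex_counter_group_parameter_t p = {};
```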

Signed-off-by: Stephen Sun <stephens@nvidia.com>
@mssonicbld
Collaborator

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@stephenxs stephenxs requested a review from kcudnik February 5, 2025 08:26
@stephenxs stephenxs changed the title from "Define bulk chunk size and bulk chunk size per counter ID." to "Define bulk chunk size and bulk chunk size per counter ID" Feb 5, 2025
@stephenxs
Contributor Author

Hi @kcudnik
As mentioned in the description, we are hitting a circular dependency in the VS tests of the counter-optimization PRs. I had to split some definitions into this PR to break the loop and pass the VS test.
Would you please approve and merge this PR so that we can move forward with the swss PR sonic-net/sonic-swss#3391 and then the rest of the sairedis PR #1457?
Thank you.

@kcudnik kcudnik merged commit e72e49a into sonic-net:master Feb 5, 2025
14 checks passed
@stephenxs stephenxs deleted the bulk-counter-chunk-definition branch February 5, 2025 10:30
mssonicbld added a commit to mssonicbld/sonic-swss that referenced this pull request Feb 6, 2025

**What I did**

Optimize the counter-polling performance in terms of polling interval accuracy

1. Enable bulk counter-polling to run at a smaller chunk size
   There is one counter-polling thread for each counter group. All such threads can compete for the critical sections at the vendor SAI level, which means a counter-polling thread can wait for a critical section if another thread has been in it, which introduces latency for the waiting counter group.
   An example is the competition between the PFC watchdog and the port counter groups.
   The port counter group contains many counters and is polled in a bulk mode which takes a relatively longer time. The PFC watchdog counter group contains only a few counters but is polled quickly. Sometimes, PFC watchdog counters must wait before polling, which makes the polling interval inaccurate and prevents the PFC storm from being detected in time.
   To resolve this issue, we can reduce the chunk size of the port counter group. By default, the port counter group polls the counters of all ports in a single bulk operation. With a smaller chunk size, it polls the counters in several bulk operations, each polling the counters of a subset (whose size <= `chunk size`) of all ports. Furthermore, we support setting the chunk size on a per-counter-ID basis.
   By doing so, the port counter group stays in the critical section for a shorter time and the PFC watchdog is more likely to be scheduled to poll counters and detect the PFC storm in time.
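
   A minimal sketch of the chunking step described above. The helper name and shapes are hypothetical; in the real code each chunk would feed one vendor SAI bulk-get call, so the critical section is held for a shorter time per call:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical helper: split `oids` into chunks of at most `chunkSize`.
// chunkSize == 0 means "one chunk with everything", matching the default
// behavior of polling all ports in a single bulk operation.
std::vector<std::vector<uint64_t>> splitIntoChunks(
        const std::vector<uint64_t>& oids,
        uint32_t chunkSize)
{
    std::vector<std::vector<uint64_t>> chunks;
    if (chunkSize == 0 || chunkSize >= oids.size())
    {
        chunks.push_back(oids);
        return chunks;
    }
    for (size_t i = 0; i < oids.size(); i += chunkSize)
    {
        size_t end = std::min(i + chunkSize, oids.size());
        chunks.emplace_back(oids.begin() + i, oids.begin() + end);
    }
    return chunks;
}
```

   Each chunk is then one shorter bulk operation, letting the PFC watchdog thread interleave between chunks instead of waiting for one long call.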

2. Collect the time stamp immediately after vendor SAI API returns.
   Currently, many counter groups require a Lua plugin that executes based on the polling interval, to calculate rates, detect certain events, etc.
   E.g. the PFC watchdog counter group uses it to detect a PFC storm. In this case, the polling interval is calculated from the difference of time stamps between the `current` and `last` poll to avoid deviation due to scheduling latency. However, the timestamp is collected in the Lua plugin, which runs several steps after the SAI API returns and in a different context (redis-server). Both introduce even larger deviations. To overcome this, we collect the timestamp immediately after the SAI API returns.
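
   The idea can be sketched as follows; the function and parameter names are hypothetical stand-ins, not the actual syncd code:

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <functional>

// Illustrative only: take the timestamp right after the (simulated) SAI
// bulk-get returns, instead of later inside a Lua plugin running in the
// redis-server context. `saiBulkGet` stands in for the vendor SAI call.
void pollAndTimestamp(const std::function<void()>& saiBulkGet,
                      uint64_t& timestampUs)
{
    saiBulkGet();  // vendor SAI API call (stand-in)

    // Collect the timestamp immediately, before any serialization or
    // pushing to redis, so interval math sees minimal scheduling skew.
    auto now = std::chrono::steady_clock::now().time_since_epoch();
    timestampUs = static_cast<uint64_t>(
        std::chrono::duration_cast<std::chrono::microseconds>(now).count());
}
```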

Depends on
1. sonic-net/sonic-swss-common#950
2. sonic-net/sonic-sairedis#1519

**Why I did it**

**How I verified it**

Run regression test and observe counter-polling performance.

A comparison test shows very good results when applying any or all of the above optimizations.

**Details if related**

For 2, each counter group contains more than one counter context, based on the type of objects; a counter context is keyed by (group, object type). However, the counters fetched by different counter groups are pushed into the same entry for the same objects.
E.g. the PFC_WD group contains counters of ports and queues, the PORT group contains counters of ports, and the QUEUE_STAT group contains counters of queues.
Both the PFC_WD and PORT groups push counter data into the item representing a port, but each counter group has its own polling interval, which means counter IDs polled by different counter groups can carry different time stamps.
We use the name of a counter group to identify the time stamp of that counter group.
E.g. in a port counter entry, PORT_timestamp represents the last time the port counter group polled the counters, and PFC_WD_timestamp represents the last time the PFC watchdog counter group polled the counters.
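
A hypothetical sketch of that naming scheme; the helper and the flat string map standing in for a COUNTERS-table hash are assumptions for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Hypothetical helper: one COUNTERS-table entry per port receives data
// from several counter groups; each group records its own poll time under
// a key derived from the group name, e.g. PORT_timestamp, PFC_WD_timestamp.
void recordGroupTimestamp(std::map<std::string, std::string>& portEntry,
                          const std::string& groupName,
                          uint64_t timestampUs)
{
    portEntry[groupName + "_timestamp"] = std::to_string(timestampUs);
}
```

This keeps per-group timestamps from overwriting each other even though all groups write into the same object entry.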
mssonicbld added a commit to sonic-net/sonic-swss that referenced this pull request Feb 7, 2025
@r12f
Contributor

r12f commented Feb 9, 2025

hi @kperumalbfn, would you mind helping get this PR merged into 202411?

mssonicbld added a commit to mssonicbld/sonic-sairedis that referenced this pull request Feb 10, 2025
Define bulk chunk size and bulk chunk size per counter ID.
This is to resolve the VS test failure in sonic-net#1457, which is caused by a circular dependency between the PRs.
In PR sonic-net#1457, new fields `bulk_chunk_size` and `bulk_chunk_size_per_prefix` were introduced to `sai_redis_flex_counter_group_parameter_t`, whose instances are initialized by orchagent.
However, orchagent is still compiled with the old sairedis header, which leaves both new fields uninitialized and in turn fails the VS test.

We have to split this PR into two:
1. sonic-net#1519, which updates the header sairedis.h only. The motivation is to compile swss (orchagent) with both new fields initialized.
2. sonic-net#1457, which contains the rest of the code

The order to merge:
1. sonic-net#1519
2. sonic-net/sonic-swss#3391
3. sonic-net#1457
@mssonicbld
Collaborator

Cherry-pick PR to 202411: #1528

mssonicbld added a commit that referenced this pull request Feb 10, 2025
@r12f
Contributor

r12f commented Feb 14, 2025

already included in 202411.

qiluo-msft pushed a commit to sonic-net/sonic-buildimage that referenced this pull request Apr 18, 2025
… other (#22019)

Update swss submodule to a07838d : [orchagent] Do not restore port admin if port admin is configured
Update sairedis submodule to 7a7320a : [syncd] Move log set function after api initialize
Why I did it
PR sonic-net/sonic-swss#3391 has a dependency on sonic-net/sonic-sairedis#1519 and therefore the two submodules need to be updated together.

How I did it
By updating both swss and sairedis submodules together.

How to verify it
Ran sanity checks on kvm testbeds
DavidZagury pushed a commit to DavidZagury/sonic-sairedis that referenced this pull request Apr 28, 2025
* [syncd] Support bulk set in INIT_VIEW mode (sonic-net#1517)

Support bulk set in INIT_VIEW mode.

* Use sonictest pool instead of sonic-common and fix arm64 issue. (sonic-net#1516)

1. Use sonictest pool instead of sonic-common
2. Fix arm64 build error.

* [nvidia] Skip SAI discovery on ports (sonic-net#1524)

Given that modern systems have many ports, performing SAI discovery takes a very long time, e.g. 8 seconds for a 256-port system. This has a big impact on fast-boot downtime, and the discovery itself is not required for Nvidia platform fast-boot.

Same applies to Nvidia fastfast-boot (aka warm-boot), yet needs to be tested separately.

* Define bulk chunk size and bulk chunk size per counter ID (sonic-net#1528)


* [syncd] Update log level for bulk api (sonic-net#1532)

[syncd] Update log level for bulk api

* [FC] Support Policer Counter (sonic-net#1533)

Added the implementation for the policer counter:
Support in the POLICER group and sai_serialize functions.
Unit tests: included unit tests to add and remove the policer counter.

* Fix pipeline errors related to rsyslogd and libswsscommon installation (sonic-net#1535)

On arm64 (and maybe sometimes amd64), rsyslogd appears to need a second or two to actually fully exit. The current code expects it to exit practically instantly. Add a sleep of 2 seconds to give it some time. Also enable some logging so that the commands being run can be seen.

Also, fix an error related to libswsscommon not getting installed due to new dependencies being added. Solve this by using apt install to install the package, which brings in any necessary dependencies.

* [syncd] Move logSet logGet under mutex to prevent race condition (sonic-net#1520) (sonic-net#1538)

[syncd] Move logSet logGet under mutex to prevent race condition

* Optimize counter polling interval by making it more accurate (sonic-net#1457) (sonic-net#1534)

What I did

Optimize the counter-polling performance in terms of polling interval accuracy

Enable bulk counter-polling to run at a smaller chunk size
There is one counter-polling thread for each counter group. All such threads can compete for the critical sections at the vendor SAI level, which means a counter-polling thread can wait for a critical section if another thread has been in it, which introduces latency for the waiting counter group.
An example is the competition between the PFC watchdog and the port counter groups.
The port counter group contains many counters and is polled in a bulk mode which takes a relatively longer time. The PFC watchdog counter group contains only a few counters but is polled at a short interval. Sometimes, PFC watchdog counters need to wait before polling, which makes the polling interval inaccurate and prevents the PFC storm from being detected in time.
To resolve this issue, we can reduce the chunk size of the port counter group. The port counter group polls the counters of all ports in a single bulk operation by default. With a smaller chunk size, it polls the counters in several bulk operations, each polling the counters of a subset (whose size <= chunk size) of all ports.
By doing so, the port counter group stays in the critical section for a shorter time and the PFC watchdog is more likely to be scheduled to poll counters and detect the PFC storm in time.

Collect the time stamp immediately after vendor SAI API returns.
Currently, many counter groups require a Lua plugin that executes based on the polling interval, to calculate rates, detect certain events, etc.
E.g. the PFC watchdog counter group uses it to detect a PFC storm. In this case, the polling interval is calculated from the difference of time stamps between the current and last poll to avoid deviation due to scheduling latency. However, the timestamp is collected in the Lua plugin, which runs several steps after the SAI API returns and in a different context (redis-server). Both introduce even larger deviations. To overcome this, we collect the timestamp immediately after the SAI API returns.

* Revert "Do not enter vendor SAI critical section for counter polling/clearing operations (sonic-net#1450)" (sonic-net#1541)

Revert "Do not enter vendor SAI critical section for counter polling/clearing operations (sonic-net#1450)"

This reverts commit 0317b16.

* [vslib] SAI_KEY_VS_OPER_SPEED_IS_CONFIGURED_SPEED, SAI_PORT_ATTR_HOST_TX_READY_STATUS support (sonic-net#1553)

This PR adds two features to `vslib`.

- `SAI_KEY_VS_OPER_SPEED_IS_CONFIGURED_SPEED`: when `true`, `SAI_PORT_ATTR_SPEED` returns the configured speed instead of the value retrieved via [`/sys/class/net/<name>/speed`](https://github.com/sonic-net/sonic-sairedis/blob/master/vslib/SwitchStateBaseHostif.cpp#L892-L893).
 - fixes sonic-net/sonic-buildimage#19735

- `SAI_PORT_ATTR_HOST_TX_READY_STATUS`: always returns `true`. Required to support running `xcvrd` in the VS env.
 - ref: https://github.com/sonic-net/SONiC/pull/1849/files#diff-6f3e95e6c57a3edc2e30e1f13edb9fd9a32a0db44e1035ac1f0b1b9a191762a5R46

* Update build_and_install_module.sh to match newer Linux kernel version (sonic-net#1561)

sonic-sairedis checks out sonic-swss to run the VS tests but uses a local build_and_install_module.sh to set up the test environment, which is out of date with respect to newer Linux kernel versions.
The build_and_install_module.sh in sonic-swss is up to date with the latest Ubuntu 20.04, so we update the build script here with the file from sonic-swss.
In the long term we may need some automatic syncing, but right now there is an Azure agent security issue to fix immediately, so build_and_install_module.sh is just updated manually.

* Revert "Optimize counter polling interval by making it more accurate (sonic-net#1457) …" (sonic-net#1570)

Revert "Optimize counter polling interval by making it more accurate

---------

Co-authored-by: mssonicbld <79238446+mssonicbld@users.noreply.github.com>
Co-authored-by: Jianyue Wu <jianyuew@nvidia.com>
Co-authored-by: Kamil Cudnik <kcudnik@gmail.com>
Co-authored-by: Stephen Sun <5379172+stephenxs@users.noreply.github.com>
Co-authored-by: Kumaresh Perumal <kperumal@microsoft.com>