
[code sync] Merge code from sonic-net/sonic-buildimage:202405 to 202405#559

Merged
mssonicbld merged 3 commits into Azure:202405 from mssonicbld:sonicbld/202405-merge
Jan 28, 2025

Conversation

@mssonicbld
Collaborator

```
* 29b900f6c - (head/202405) [RDMA] correct egress buffer size for Arista-7050CX3-32S-D48C8 DualToR (#21347) (2025-01-28) [mssonicbld]
* c07a12a53 - Update cisco-8000.ini to 202405.1.1.3 release (#21445) (2025-01-27) [anamehra]
```

anamehra and others added 3 commits January 27, 2025 11:52
…R (#21347)

<!--
 Please make sure you've read and understood our contributing guidelines:
 https://github.com/Azure/SONiC/blob/gh-pages/CONTRIBUTING.md

 Make sure all your commits include a signature generated with `git commit -s`

 If this is a bug fix, make sure your description includes "fixes #xxxx", or
 "closes #xxxx" or "resolves #xxxx"

 Please provide the following information:
-->

#### Why I did it

### Symptom:
MSFT ADO 28240256: [SONiC_Nightly][Failed_Case][qos.test_qos_sai.TestQosSai][testQosSaiHeadroomPoolSize][20231110][broadcom][Arista-7050CX3-32S-D48C8]

For Arista-7050CX3-32S-D48C8 (BCM56870_A0 / TD3), the headroom pool size test injects lossless traffic into multiple ingress ports, which exhausts the shared buffer first; an egress drop is then observed before the headroom pool is fully exhausted.
The expected behavior is an ingress drop, so the test failed.

### RCA:
(See BRCM CSP CS00012358392 "Egress lossless pool size update for Arista-7050CX3-32S-D48C8 DualToR" for details.)

```
Pool: egress_lossless_pool
---- --------
mode static
size 32340992
type egress
---- --------
... ...
Pool: ingress_lossless_pool
---- --------
mode dynamic
size 32689152
type ingress
xoff 2058240
---- --------
```
As the `mmuconfig --list` output above shows, the egress buffer for Arista-7050CX3-32S-D48C8 is configured smaller than the ingress buffer.
The egress buffer limit is therefore reached before the headroom pool is exhausted, which triggers the egress drop.
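A minimal sketch of the mismatch, using only the byte values from the `mmuconfig --list` output above, shows that ingress can admit more traffic than the egress pool can hold:

```python
# Byte values taken from the "mmuconfig --list" output above.
egress_lossless_pool = 32340992    # bytes, static
ingress_lossless_pool = 32689152   # bytes, dynamic shared
ingress_xoff = 2058240             # bytes, headroom pool

total_ingress = ingress_lossless_pool + ingress_xoff

# Egress pool is smaller than total ingress capacity, so the egress
# limit is hit before the headroom pool can be exhausted.
print(egress_lossless_pool < total_ingress)  # True
print(total_ingress - egress_lossless_pool)  # 2406400 bytes of ingress
                                             # headroom with no egress cover
```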

### MMU register dump analysis

**Total Ingress buffer limit for Pool 0:**
 Shared: THDI_BUFFER_CELL_LIMIT_SP=**0x1CDC4**
 Headroom: THDI_HDRM_BUFFER_CELL_LIMIT_HP: **0x1F68**
 Min reserved per PG: 0x12 cells. Check THDI_PORT_PG_CONFIG_PIPE0 and THDI_PORT_PG_CONFIG_PIPE1: there are 80 PGs in total with the Min limit configured to 0x12, which takes up 80 × 0x12 = 0x5A0 cells.
 Total ingress for Pool0: 0x1CDC4 + 0x1F68 + 0x5A0 = **0x1F2CC** (127692 cells).
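The hex arithmetic above can be checked directly (all values in cells, straight from the register dump):

```python
# Register values from the MMU dump above (all in cells).
shared = 0x1CDC4     # THDI_BUFFER_CELL_LIMIT_SP
headroom = 0x1F68    # THDI_HDRM_BUFFER_CELL_LIMIT_HP
pg_min = 80 * 0x12   # 80 PGs x 0x12 min cells each = 0x5A0

total = shared + headroom + pg_min
print(hex(total), total)  # 0x1f2cc 127692
```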

**Total Egress buffer limits for Pool 0:**

 Shared: MMU_THDM_DB_POOL_SHARED_LIMIT = **0x1ed7c**
 Reserved: Q_MIN for lossless Queue 3,4 : **0**

In this scenario, the total usage stats are:
Ingress: Total number of Active PGs × PG_MIN + Shared_count + Headroom_count = **0x1ED7E**
Egress: Total egress usage count: **0x1ed7d**

From the allocation above, it is clear that with fewer active ingress ports the ingress cell usage decreases (the per-PG minimum guarantee shrinks), so total ingress stays below total egress.
As the number of active ingress ports grows, ingress usage increases until total ingress exceeds total egress, which results in egress queue drops.
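This dependence on the active-PG count can be sketched from the dump values: the egress shared limit is fixed (with Q_MIN = 0), while total ingress grows by one PG_MIN per active PG. The crossover point below is illustrative, derived only from the register values quoted above:

```python
# Sketch: per-PG minimum scales with active PGs while the egress
# shared limit stays fixed, so enough ingress ports push total
# ingress past total egress. All values in cells, from the dump.
THDI_SHARED = 0x1CDC4
THDI_HEADROOM = 0x1F68
PG_MIN = 0x12
EGRESS_SHARED = 0x1ED7C  # MMU_THDM_DB_POOL_SHARED_LIMIT, Q_MIN = 0

def total_ingress(active_pgs):
    return THDI_SHARED + THDI_HEADROOM + active_pgs * PG_MIN

for n in (4, 5, 80):
    print(n, total_ingress(n), total_ingress(n) > EGRESS_SHARED)
# With only a few active PGs, total ingress stays below the egress
# limit; with all 80 PGs (127692 cells) it exceeds the egress limit
# (126332 cells), which is why the drops appear on egress first.
```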

##### Work item tracking
- Microsoft ADO **28240256**

#### How I did it

In BRCM CSP CS00012358392 "Egress lossless pool size update for Arista-7050CX3-32S-D48C8 DualToR", Broadcom provided an updated MMU configuration:

```
Platform: Arista-7050CX3-32S-D48C8 (none)   Config: DualTOR   Uplinks: 8   Downlinks: 24   Standby: 24

All Ports Up:
 m THDI_BUFFER_CELL_LIMIT_SP(0) LIMIT=117246
 m MMU_THDM_DB_POOL_SHARED_LIMIT(0) SHARED_LIMIT=126726
 m MMU_THDM_DB_POOL_RESUME_LIMIT(0) RESUME_LIMIT=15831
 m MMU_THDM_DB_POOL_SHARED_LIMIT(1) SHARED_LIMIT=92288
 m MMU_THDM_DB_POOL_RESUME_LIMIT(1) RESUME_LIMIT=11527
 m MMU_THDR_DB_CONFIG1_PRIQ SPID=1
 for x=0,639,10 '\
  mod MMU_THDM_DB_QUEUE_CONFIG_PIPE0 $x 10 Q_SPID=1 ;\
  mod MMU_THDM_DB_QUEUE_CONFIG_PIPE1 $x 10 Q_SPID=1'

All Ports Down:
 m THDI_BUFFER_CELL_LIMIT_SP(0) LIMIT=119694
 m MMU_THDM_DB_POOL_SHARED_LIMIT(0) SHARED_LIMIT=127734
 m MMU_THDM_DB_POOL_RESUME_LIMIT(0) RESUME_LIMIT=15957
 m MMU_THDM_DB_POOL_SHARED_LIMIT(1) SHARED_LIMIT=95255
 m MMU_THDM_DB_POOL_RESUME_LIMIT(1) RESUME_LIMIT=11897
 m MMU_THDR_DB_CONFIG1_PRIQ SPID=1
 for x=0,639,10 '\
  mod MMU_THDM_DB_QUEUE_CONFIG_PIPE0 $x 10 Q_SPID=1 ;\
  mod MMU_THDM_DB_QUEUE_CONFIG_PIPE1 $x 10 Q_SPID=1'

Notes:
 ### When there is a linkdown event on an in-use uplink port:
 ###  THDI_BUFFER_CELL_LIMIT_SP(0).LIMIT += 93
 ###  MMU_THDM_DB_POOL_SHARED_LIMIT(0).SHARED_LIMIT += 93
 ###  MMU_THDM_DB_POOL_RESUME_LIMIT(0).RESUME_LIMIT += 11
 ###  MMU_THDM_DB_POOL_SHARED_LIMIT(1).SHARED_LIMIT += 74
 ###  MMU_THDM_DB_POOL_RESUME_LIMIT(0).RESUME_LIMIT += 9
 ### When there is a linkdown event on an in-use downlink port:
 ###  THDI_BUFFER_CELL_LIMIT_SP(0).LIMIT += 71
 ###  MMU_THDM_DB_POOL_SHARED_LIMIT(0).SHARED_LIMIT += 71
 ###  MMU_THDM_DB_POOL_RESUME_LIMIT(0).RESUME_LIMIT += 8
 ###  MMU_THDM_DB_POOL_SHARED_LIMIT(1).SHARED_LIMIT += 56
 ###  MMU_THDM_DB_POOL_RESUME_LIMIT(0).RESUME_LIMIT += 7
```

The buffer pool size portion of that change was then applied to the image repo, as below:
```
m THDI_BUFFER_CELL_LIMIT_SP(0) LIMIT=117246
m MMU_THDM_DB_POOL_SHARED_LIMIT(0) SHARED_LIMIT=126726
```
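As a cross-check (a sketch, assuming the 256-byte MMU cell size of the TD3 / BCM56870), the new cell-based register value lines up with the byte-based pool size shown by `mmuconfig --list` above:

```python
# Sketch: cross-check the new register value (in cells) against the
# byte-based pool size, assuming a 256-byte TD3 (BCM56870) MMU cell.
CELL = 256
old_egress_pool_bytes = 32340992    # from "mmuconfig --list" above
new_egress_shared_cells = 126726    # MMU_THDM_DB_POOL_SHARED_LIMIT(0)

print(old_egress_pool_bytes // CELL)      # 126332 cells (== 0x1ED7C)
print(new_egress_shared_cells - 126332)   # pool grows by 394 cells
print(394 * CELL)                         # i.e. 100864 bytes
```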

#### How to verify it

- Pushed the change to private branch "xuchen3/20231110.24/CS00012358392/Arista-7050CX3-32S-D48C8.dualtor" to build a private image:
```
$ git log -
* c363f5b1c8 (2024-10-30 23:12) - bugfix: CS00012358392 change ingerss/egress buffer size for Arista-7050CX3-32S-D48C8 dualtor, static_th <Xu Chen>
* 9c284f015c (2024-10-29 09:15) - bugfix : CS00012358392 change ingerss/egress buffer size for Arista-7050CX3-32S-D48C8 dualtor <Xu Chen>
* 7f855c8ae8 (2024-10-28 23:52) - CS00012358392 change ingerss/egress buffer size for Arista-7050CX3-32S-D48C8 dualtor <Xu Chen>
```

- And then ran the QoS SAI tests; **all QoS SAI tests passed**, including the headroom pool size test.
https://elastictest.org/scheduler/testplan/673e052ad3c216e9a194b719?testcase=qos%2ftest_qos_sai.py&type=console
![image](https://github.com/user-attachments/assets/2c7dfad6-2160-4012-9f4b-3819e316f8f8)

- And ran the **full nightly test**; no regression issues were observed.
https://dev.azure.com/mssonic/internal/_build/results?buildId=718645&view=results

- P.S. Also ran additional tests to verify that the above changes **apply only to Arista-7050CX3-32S-D48C8 DualToR** and do not impact other platforms.

<!--
If PR needs to be backported, then the PR must be tested against the base branch and the earliest backport release branch and provide tested image version on these two branches. For example, if the PR is requested for master, 202211 and 202012, then the requester needs to provide test results on master and 202012.
-->

#### Which release branch to backport (provide reason below if selected)

<!--
- Note we only backport fixes to a release branch, *not* features!
- Please also provide a reason for the backporting below.
- e.g.
- [x] 202006
-->

- [ ] 201811
- [ ] 201911
- [ ] 202006
- [ ] 202012
- [ ] 202106
- [ ] 202111
- [ ] 202205
- [ ] 202211
- [ ] 202305

#### Tested branch (Please provide the tested image version)

<!--
- Please provide tested image version
- e.g.
- [x] 20201231.100
-->

- [ ] <!-- image version 1 -->
- [ ] <!-- image version 2 -->

#### Description for the changelog
<!--
Write a short (one line) summary that describes the changes in this
pull request for inclusion in the changelog:
-->

<!--
 Ensure to add label/tag for the feature raised. example - PR#2174 under sonic-utilities repo. where, Generic Config and Update feature has been labelled as GCU.
-->

#### Link to config_db schema for YANG module changes
<!--
Provide a link to config_db schema for the table for which YANG model
is defined
Link should point to correct section on https://github.com/Azure/sonic-buildimage/blob/master/src/sonic-yang-models/doc/Configuration.md
-->

#### A picture of a cute animal (not mandatory but encouraged)
mssonicbld merged commit 29105cd into Azure:202405 on Jan 28, 2025
3 checks passed
prabhataravind pushed a commit that referenced this pull request Jul 7, 2025
… automatically (#22244)

#### Why I did it
src/sonic-platform-common
```
* c51c605 - (HEAD -> master, origin/master, origin/HEAD) Update Amphenol package for mock test (#561) (10 hours ago) [mihirpat1]
* dd6cc64 - Add INNOLIGHT OSFP part for special firmware handling (#559) (7 days ago) [Dylan Godwin]
* 5d90044 - Change the QSFP identifier offset to 0x0 (#558) (13 days ago) [Prince George]
* b536f2c - Support for Custom Amphenol 800G Backplane catridge (#555) (3 weeks ago) [Prince George]
```
#### How I did it
#### How to verify it
#### Description for the changelog
liushilongbuaa pushed a commit that referenced this pull request Mar 25, 2026
…lly (#25285)

#### Why I did it
src/sonic-gnmi
```
* 5da416f - (HEAD -> master, origin/master, origin/HEAD) Implements the frontend logic for gNSI Authz (#549) (5 hours ago) [niranjanivivek]
* 153f13f - Skip flaky integration tests to stabilize PR pipeline (#615) (34 hours ago) [Dawei Huang]
* d297045 - Refactor: Add missing defer Close() to prevent Redis connection leak (#608) (5 days ago) [jayaragini-hcl]
* bf5d3e0 - Implements the frontend logic for gNSI Certz (#559) (9 days ago) [jayaragini-hcl]
* 91b14bf - Add support to write gNSI credentials metadata to DB (#596) (9 days ago) [jayaragini-hcl]
* f0d2b7d - Add support for PHY diagnostic parameters (#583) (12 days ago) [Feng-msft]
* 469b400 - Add unit tests for DPU proxy singleton and GetDPUConnection (#595) (12 days ago) [Dawei Huang]
* 8fccb68 - Fix TransferToRemote DPU connection: replace localhost loopback with … (#591) (2 weeks ago) [Dawei Huang]
```
#### How I did it
#### How to verify it
#### Description for the changelog