
[201811][utilities] advance sonic-utilities sub module head#2801

Merged
yxieca merged 1 commit into sonic-net:201811 from yxieca:11-module
Apr 18, 2019

Conversation


@yxieca yxieca commented Apr 18, 2019

- What I did

Submodule src/sonic-utilities 79a0185..6ba6d27:

load_minigraph: restart hostcfgd (#511)

Signed-off-by: Ying Xie <ying.xie@microsoft.com>

@yxieca yxieca merged commit 03dfd3d into sonic-net:201811 Apr 18, 2019
@yxieca yxieca deleted the 11-module branch April 18, 2019 22:10
yxieca pushed a commit that referenced this pull request Apr 20, 2023
…atically (#14752)

src/sonic-utilities

* ece22b7d - (HEAD -> 202205, origin/202205) Revert "[GCU] Add PFC_WD RDMA validator  (#2781)" (4 minutes ago) [Ying Xie]
* 7d16b184 - Remove the no use new line in show version (#2792) (21 hours ago) [xumia]
* 3a880a2b - Support to display the SONiC OS Version in the command show version (#2787) (21 hours ago) [xumia]
* a5199f75 - [voq][chassis][generate_dump] [BCM] Dump only the relevant BCM commands for fabric cards (#2606) (21 hours ago) [saksarav-nokia]
* 2410d364 - Fixed a bug in "show vnet routes all" causing screen overrun. (#2644) (#2801) (
mihirpat1 pushed a commit to mihirpat1/sonic-buildimage that referenced this pull request Jun 14, 2023
…bles handled (sonic-net#2801)

Signed-off-by: Stephen Sun <stephens@nvidia.com>

What I did

Enhance QosOrch::doTask by handling the QUEUE table after all other tables have been handled, to avoid unnecessary retries.

Signed-off-by: Stephen Sun <stephens@nvidia.com>

Details if related
During system initialization, QoS table items are received before gPortsOrch->allPortsReady() becomes true, and they are all handled together once it does.
In most cases, they are handled in the for loop in the following snippet of code in OrchDaemon::start():

        auto *c = (Executor *)s;
        c->execute();

        /* After each iteration, periodically check all m_toSync map to
         * execute all the remaining tasks that need to be retried. */

        /* TODO: Abstract Orch class to have a specific todo list */
        for (Orch *o : m_orchList)
            o->doTask();
The QUEUE table items reference WRED_PROFILE and SCHEDULER_PROFILE table items. If those tables are handled after the QUEUE table, the QUEUE table has to be retried in the next for loop, which is unnecessary.
So, we adjust the order in which the tables are handled to guarantee that all QoS table items are handled in a single call to QosOrch::doTask.
