
[QoS] Enhance QosOrch::doTask by handling QUEUE table after all the rest tables handled to avoid retry #2801

Merged
neethajohn merged 1 commit into sonic-net:master from stephenxs:avoid-retry-qos-queue-table
May 30, 2023

Conversation

@stephenxs
Collaborator

@stephenxs stephenxs commented May 30, 2023

What I did

Enhance QosOrch::doTask by handling the QUEUE table after all the other tables have been handled, to avoid retries.

Signed-off-by: Stephen Sun <stephens@nvidia.com>

Why I did it

How I verified it

Details if related

During system initialization, QoS table items are received before gPortsOrch->allPortsReady() becomes true; they are held and then handled all together once it becomes true.
In most cases they are handled in the for loop in the following snippet from OrchDaemon::start():

        auto *c = (Executor *)s;
        c->execute();

        /* After each iteration, periodically check all m_toSync map to
         * execute all the remaining tasks that need to be retried. */

        /* TODO: Abstract Orch class to have a specific todo list */
        for (Orch *o : m_orchList)
            o->doTask();

The QUEUE table items reference WRED_PROFILE and SCHEDULER_PROFILE table items. If those tables are handled after the QUEUE table, the QUEUE table items must be retried in the next iteration of the for loop, which is unnecessary.
So, we adjust the order in which the tables are handled to guarantee that all QoS table items are handled in a single call to QosOrch::doTask.

…bles handled

Signed-off-by: Stephen Sun <stephens@nvidia.com>
@stephenxs
Collaborator Author

/azpw run

@mssonicbld
Collaborator

/AzurePipelines run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@stephenxs stephenxs marked this pull request as ready for review May 30, 2023 08:54
@neethajohn neethajohn merged commit 98d2b0c into sonic-net:master May 30, 2023
@stephenxs stephenxs deleted the avoid-retry-qos-queue-table branch May 30, 2023 22:47
stephenxs added a commit to Junchao-Mellanox/sonic-swss that referenced this pull request Jun 6, 2023
…bles handled (sonic-net#2801)

Signed-off-by: Stephen Sun <stephens@nvidia.com>

StormLiangMS pushed a commit that referenced this pull request Jun 10, 2023
…bles handled (#2801)

Signed-off-by: Stephen Sun <stephens@nvidia.com>

@stephenxs
Collaborator Author

Added the "Included in 202211 branch" label, as the PR has been cherry-picked and the submodule update has been merged.

theasianpianist pushed a commit to theasianpianist/sonic-swss that referenced this pull request Jul 20, 2023
…bles handled (sonic-net#2801)

Signed-off-by: Stephen Sun <stephens@nvidia.com>

Janetxxx pushed a commit to Janetxxx/sonic-swss that referenced this pull request Nov 10, 2025
…bles handled (sonic-net#2801)

Signed-off-by: Stephen Sun <stephens@nvidia.com>
