
net_buffer_tuner: set high_order_alloc_disable=0 on newer kernels#215

Merged
alan-maguire merged 1 commit into main from highorder
Dec 5, 2025

Conversation

@alan-maguire
Member

The documentation for net.core.high_order_alloc_disable says

"By default the allocator for page frags tries to use high order pages (order-3 on x86). While the default behavior gives good results in most cases, some users might have hit a contention in page allocations/freeing. This was especially true on older kernels (< 5.14) when high-order pages were not stored on per-cpu lists. This allows to opt-in for order-0 allocation instead but is now mostly of historical importance."

So we set high_order_alloc_disable=0 on tuner init for kernels >= 5.14.


Signed-off-by: Alan Maguire <[email protected]>
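
The gating described above can be sketched as follows. This is a minimal illustrative sketch in Python, not the actual tuner (net_buffer_tuner is implemented in C/BPF); the helper names and version parsing are assumptions made for illustration:

```python
import os
import platform

# sysctl file for net.core.high_order_alloc_disable
SYSCTL = "/proc/sys/net/core/high_order_alloc_disable"


def kernel_at_least(major, minor, release=None):
    """Return True if the kernel release is >= major.minor.

    release defaults to the running kernel, e.g. "5.15.0-91-generic".
    """
    if release is None:
        release = platform.release()
    parts = release.split(".")
    # Strip any trailing suffix like "-generic" or "-rc1" from components.
    k_major = int(parts[0])
    k_minor = int(parts[1].split("-")[0])
    return (k_major, k_minor) >= (major, minor)


def tune_high_order_alloc():
    """On kernels >= 5.14, re-enable high-order page-frag allocation
    (high_order_alloc_disable=0), since per-cpu lists for high-order
    pages make the old contention concern mostly historical."""
    if kernel_at_least(5, 14) and os.access(SYSCTL, os.W_OK):
        with open(SYSCTL, "w") as f:
            f.write("0")
```

On kernels older than 5.14 the sketch leaves the setting untouched, matching the PR's rationale that the opt-out only mattered before high-order pages were stored on per-cpu lists.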
@oracle-contributor-agreement oracle-contributor-agreement bot added the OCA Verified All contributors have signed the Oracle Contributor Agreement. label Dec 5, 2025
@alan-maguire alan-maguire merged commit 74f47c4 into main Dec 5, 2025
1 check passed
@alan-maguire alan-maguire deleted the highorder branch December 6, 2025 12:41
