
Conversation

@stockholmux (Member) commented Dec 3, 2025

Description

  • Adds tiles to the what's new section for performance dashboard links
  • fixes inconsistencies in CSS (underscore vs dash, corner radius)
[Screenshot: 2025-12-03 10:12 AM]

Issues Resolved

#421

Check List

  • Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the BSD-3-Clause License.

@roshkhatri (Member)

I have updated the PR with a /performance page for the website and a card on the home page.
The website will look as follows:
[Screenshot: 2025-12-10 3:18:40 PM]

Followed by the /performance/ page:
[Screenshot: 2025-12-10 3:18:01 PM]

Please add your feedback. I will also update the iframe links once we are ready to merge.

@makubo-aws

Super cool. It would be great to link to relevant blogs on performance and hash tables in case readers want to dive deeper.

_data/perf.toml Outdated
title = "Throughput Across Versions"
iframe_url = "https://df0m5orgq2d38.cloudfront.net/public-dashboards/e6e8ec1f88954ddd8a67df1ab4966eb2"
description = "This dashboard visualizes throughput trends across Valkey versions. It helps compare key releases side by side, highlight performance gains from new features. Use it when evaluating whether a new version meets your performance SLOs before rollout."
methodology = "These metrics are generated using the valkey-perf-benchmark tool on a c8g.metal.2xl instance, running a matrix of configurations that vary pipelining (1, 10), I/O threading (1, 9), and data sizes 512 bytes. To further stabilize results, we apply IRQ tuning by pinning network interrupts away from CPUs dedicated to the server process, and we isolate the server and benchmark client on separate NUMA nodes to remove L3 cache contention. This controlled methodology ensures consistent, repeatable comparisons across versions and makes performance gains or regressions easier to attribute to real code changes rather than environmental variability."
Member


Suggested change
methodology = "These metrics are generated using the valkey-perf-benchmark tool on a c8g.metal.2xl instance, running a matrix of configurations that vary pipelining (1, 10), I/O threading (1, 9), and data sizes 512 bytes. To further stabilize results, we apply IRQ tuning by pinning network interrupts away from CPUs dedicated to the server process, and we isolate the server and benchmark client on separate NUMA nodes to remove L3 cache contention. This controlled methodology ensures consistent, repeatable comparisons across versions and makes performance gains or regressions easier to attribute to real code changes rather than environmental variability."
methodology = "These metrics are generated using the valkey-perf-benchmark tool on an AWS c8g.metal.24xl instance, running a matrix of configurations that vary pipelining (1, 10), I/O threading (1, 9), and with a data size of 512 bytes. To further stabilize results, we apply IRQ tuning by pinning network interrupts away from CPUs dedicated to the server process, and we isolate the server and benchmark client on separate NUMA nodes to remove L3 cache contention."

Include a link to the valkey-perf-benchmark tool.

The last sentence adds nothing.

Member


We also don't test the full matrix, since the combination of I/O threads = 9 and pipelining = 1 is not present. Is that intentional?

@roshkhatri (Member) commented Dec 11, 2025


We do have that data, but that's intentional. I can add it.
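For reference, the gap raised above can be made concrete by enumerating the benchmark matrix. The `tested` set below is inferred from this thread, not taken from the dashboard itself, so treat it as an assumption:

```python
from itertools import product

# Dimensions named in the methodology text: pipelining (1, 10) and I/O threads (1, 9).
pipelining = (1, 10)
io_threads = (1, 9)

full_matrix = set(product(pipelining, io_threads))

# Configurations assumed to be on the dashboard, per this thread:
# the pipelining=1 / io-threads=9 cell is the one reported missing.
tested = {(1, 1), (10, 1), (10, 9)}

missing = sorted(full_matrix - tested)
print(missing)  # the untested (pipelining, io_threads) combination(s)
```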

_data/perf.toml Outdated
[[sections]]
title = "Memory Overhead Across Versions"
iframe_url = "https://df0m5orgq2d38.cloudfront.net/public-dashboards/1e3d87a62b794ce9aa81281d8399e4bf"
description = "This dashboard visualizes Memory Efficiency trends across Valkey versions. It helps compare releases side by side, highlight Memory Efficiency gains from new features. Use it when evaluating whether a new version meets your performance SLOs before rollout."
Member


Suggested change
description = "This dashboard visualizes Memory Efficiency trends across Valkey versions. It helps compare releases side by side, highlight Memory Efficiency gains from new features. Use it when evaluating whether a new version meets your performance SLOs before rollout."
description = "This dashboard visualizes Memory Efficiency trends across Valkey versions. It helps compare releases side by side, highlight Memory Efficiency gains from new features."

What Performance SLO?

Member


I meant Service Level Objective, but yeah, it can be removed.

description = "This dashboard visualizes throughput trends across Valkey versions. It helps compare key releases side by side, highlight performance gains from new features. Use it when evaluating whether a new version meets your performance SLOs before rollout."
methodology = "These metrics are generated using the valkey-perf-benchmark tool on a c8g.metal.2xl instance, running a matrix of configurations that vary pipelining (1, 10), I/O threading (1, 9), and data sizes 512 bytes. To further stabilize results, we apply IRQ tuning by pinning network interrupts away from CPUs dedicated to the server process, and we isolate the server and benchmark client on separate NUMA nodes to remove L3 cache contention. This controlled methodology ensures consistent, repeatable comparisons across versions and makes performance gains or regressions easier to attribute to real code changes rather than environmental variability."

[[sections]]
Member


Memory overhead is not performance. I'm not sure why we're including this here.

_data/perf.toml Outdated
title = "Memory Overhead Across Versions"
iframe_url = "https://df0m5orgq2d38.cloudfront.net/public-dashboards/1e3d87a62b794ce9aa81281d8399e4bf"
description = "This dashboard visualizes Memory Efficiency trends across Valkey versions. It helps compare releases side by side, highlight Memory Efficiency gains from new features. Use it when evaluating whether a new version meets your performance SLOs before rollout."
methodology = "These benchmarks are generated by starting an empty instance of Valkey for each test, measuring memory usage, then adding 3 million string items of a certain data size, then measuring Valkey memory usage again. We take the increase in memory use, divide by the number of items, and subtract the size of the user data (key and value) to get the extra overhead bytes Valkey uses to track and organize the data. For this graph, we tested every value size from 8B to 128B inclusive, then averaged the numbers for each range in this chart."
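The arithmetic in the methodology quoted above reduces to a few lines. The numbers in the example call are illustrative placeholders, not values from the dashboard:

```python
def overhead_per_item(mem_before: int, mem_after: int,
                      n_items: int, key_bytes: int, value_bytes: int) -> float:
    """Per-item bookkeeping overhead as described in the methodology:
    total memory growth divided by item count, minus the user payload
    (key + value) size."""
    growth = mem_after - mem_before
    return growth / n_items - (key_bytes + value_bytes)

# Hypothetical run: 3 million items with 16-byte keys and 64-byte values,
# where resident memory grew from 10 MB to 410 MB.
print(round(overhead_per_item(10_000_000, 410_000_000, 3_000_000, 16, 64), 2))  # prints 53.33
```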
Member


I'm still confused why this isn't an integration test. Why is this here? What is this methodology?

Member


This was done with the intention of showing how we have improved Valkey's throughput and memory utilisation.

Member


We can surely add an integration test for it too.

_data/perf.toml Outdated
[[sections]]
title = "Throughput Across Versions"
iframe_url = "https://df0m5orgq2d38.cloudfront.net/public-dashboards/e6e8ec1f88954ddd8a67df1ab4966eb2"
description = "This dashboard visualizes throughput trends across Valkey versions. It helps compare key releases side by side, highlight performance gains from new features. Use it when evaluating whether a new version meets your performance SLOs before rollout."
Member


Suggested change
description = "This dashboard visualizes throughput trends across Valkey versions. It helps compare key releases side by side, highlight performance gains from new features. Use it when evaluating whether a new version meets your performance SLOs before rollout."
description = "This dashboard visualizes throughput trends across Valkey versions. It helps compare key releases side by side, highlight performance gains from new features."

Performance SLOs is a very weird thing to call out.

Signed-off-by: Roshan Khatri <[email protected]>
@roshkhatri (Member)

Applied the suggestions and also added the links to the relevant blogs.
[Screenshot: 2025-12-11 12:26:27 PM]
[Screenshot: 2025-12-11 12:26:42 PM]

