
Conversation

@yihuang (Collaborator) commented Jan 12, 2026

Description

I seem to observe a slight improvement on my laptop with this change, using the built-in benchmarks.

$ go test -v -bench=BenchmarkBlockSTM -run ^$ ./...

The benchmark difference might be accidental though; either my laptop doesn't have enough CPUs, or the assumptions are wrong.

// A similar index for tracking validation.
validationIdx atomic.Uint64

_ cpu.CacheLinePad
@yihuang (Collaborator, Author) commented:

we keep executionIdx and validationIdx in the same cache line, because each executor thread will read both anyway.
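For illustration only, a minimal sketch of the layout this comment argues for; the package name, field ordering, and everything besides the two counters are assumptions, not the actual block-stm scheduler definition:

package layoutsketch

import (
	"sync/atomic"

	"golang.org/x/sys/cpu"
)

// scheduler is an illustrative stand-in for the real block-stm scheduler:
// the two hot counters share one cache line, and a single pad keeps
// whatever fields follow them off that line.
type scheduler struct {
	// Index of the next transaction to attempt to execute.
	executionIdx atomic.Uint64
	// A similar index for tracking validation.
	validationIdx atomic.Uint64
	// Pad so unrelated fields do not share the counters' cache line.
	_ cpu.CacheLinePad

	// ... other scheduler state ...
}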

A Member commented:

How did you determine that this yields any perf improvements? Can you attach your local benchmarks here?

@songgaoye (Contributor) commented Jan 21, 2026:

we keep executionIdx and validationIdx in the same cache line, because each executor thread will read both anyway.

I don't think so. Executors read both executionIdx and validationIdx, but they mostly write to different ones: execution workers increment executionIdx, while validators bump validationIdx.
Keeping them in the same cache line means those writes constantly invalidate each other's copy of that line, even though the fields track unrelated counters.
That false sharing is exactly what the padding was supposed to prevent.
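As an aside (not part of this PR), the effect described above can be demonstrated with a standalone micro-benchmark along these lines; the package, struct, and benchmark names are invented for the sketch:

package fsbench

import (
	"sync"
	"sync/atomic"
	"testing"

	"golang.org/x/sys/cpu"
)

// adjacent puts the two counters on the same cache line.
type adjacent struct {
	a atomic.Uint64
	b atomic.Uint64
}

// padded pushes the second counter onto a different cache line.
type padded struct {
	a atomic.Uint64
	_ cpu.CacheLinePad
	b atomic.Uint64
}

// run hammers the two increment functions from two goroutines in parallel.
func run(b *testing.B, inc1, inc2 func()) {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		for i := 0; i < b.N; i++ {
			inc1()
		}
	}()
	go func() {
		defer wg.Done()
		for i := 0; i < b.N; i++ {
			inc2()
		}
	}()
	wg.Wait()
}

func BenchmarkAdjacent(b *testing.B) {
	var s adjacent
	run(b, func() { s.a.Add(1) }, func() { s.b.Add(1) })
}

func BenchmarkPadded(b *testing.B) {
	var s padded
	run(b, func() { s.a.Add(1) }, func() { s.b.Add(1) })
}

On a machine with at least two free cores, the padded variant typically reports noticeably lower ns/op; that gap is the false-sharing cost the original padding was avoiding.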

@yihuang (Collaborator, Author) replied:

we keep executionIdx and validationIdx in the same cache line, because each executor thread will read both anyway.

I don't think so. Executors read both executionIdx and validationIdx, but they mostly write to different ones: execution workers increment executionIdx, while validators bump validationIdx. Keeping them in the same cache line means those writes constantly invalidate each other's copy of that line, even though the fields track unrelated counters. That false sharing is exactly what the padding was supposed to prevent.

I mean that at the start of each scheduling loop iteration, every executor reads both executionIdx and validationIdx and compares them to decide which kind of task has priority.
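A cut-down sketch of that access pattern; the type and method names here are invented, and this is not the actual scheduler code:

package schedsketch

import "sync/atomic"

// taskKind is a hypothetical enum for this sketch only.
type taskKind int

const (
	taskExecution taskKind = iota
	taskValidation
)

// sched stands in for the real scheduler, holding only the two counters
// discussed in this thread.
type sched struct {
	executionIdx  atomic.Uint64
	validationIdx atomic.Uint64
}

// nextTaskKind loads both counters on every call and compares them,
// here preferring validation when it lags behind execution (the exact
// priority rule is assumed). Every worker does this at the top of its
// loop, so both counters are read on every pass.
func (s *sched) nextTaskKind() taskKind {
	if s.validationIdx.Load() < s.executionIdx.Load() {
		return taskValidation
	}
	return taskExecution
}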

@codecov codecov bot commented Jan 12, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 70.57%. Comparing base (82f1fc2) to head (e95911e).

Additional details and impacted files

Impacted file tree graph

@@           Coverage Diff           @@
##             main   #25766   +/-   ##
=======================================
  Coverage   70.56%   70.57%           
=======================================
  Files         838      838           
  Lines       54570    54570           
=======================================
+ Hits        38508    38513    +5     
+ Misses      16062    16057    -5     
Files with missing lines    Coverage Δ
blockstm/scheduler.go       92.39% <ø> (ø)

... and 3 files with indirect coverage changes



@Eric-Warehime changed the title from "optim: avoid false sharing in block-stm" to "perf: avoid false sharing in block-stm" on Jan 12, 2026
@technicallyty (Contributor) commented Jan 22, 2026

It would be good to share some before/after results and your interpretation here. I ran it on my MacBook and saw improvements on random, iterate, and no-conflict, but small regressions on worst-case worker-1 and worst-case-worker-10.
