Use Stack based SCC instead of recursive #461
Walkthrough

Replaces the adjacency-map SCC and cycle handling with a streaming, Source-based SCC API. EntitlementGraph gains iterator methods and a filteredGraph wrapper; SCC condensation is implemented via a CSR-driven, iterative CondenseFWBW returning groups and metrics. Adds an internal atomic bitset and updates tests, fuzzers, and callers to the new API.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Caller
    participant Cycle as cycle.go
    participant EG as EntitlementGraph
    participant FG as filteredGraph
    participant SCC as scc.CondenseFWBW
    participant CSR as CSR Builder
    Caller->>Cycle: ComputeCyclicComponents(ctx)
    Cycle->>EG: determine reachable set / SourcesToDestinations
    Cycle->>FG: wrap EG with include-predicate (reachable)
    Cycle->>SCC: CondenseFWBW(ctx, FG, opts)
    SCC->>CSR: buildCSRFromSource(FG)
    note right of CSR: CSR and transpose built directly<br/>from Source iterators (no adjacency map)
    CSR-->>SCC: return CSR
    SCC->>SCC: iterative FW-BW condensation (stack-driven)
    SCC-->>Cycle: components [][]int and *Metrics
    Cycle->>EG: hasSelfLoop checks for singletons
    Cycle-->>Caller: return components, metrics
```
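The callback shape the walkthrough describes can be sketched as a minimal adjacency-map Source (ForEachNode / ForEachEdgeFrom with early exit when the callback returns false). This is a stand-in sketch only; the real `scc.Source` interface and `CondenseFWBW` live in `pkg/sync/expand/scc` and are not imported here.

```go
package main

// adjSource is an illustrative adjacency-map implementation of the
// two-iterator contract: enumerate nodes, enumerate out-edges of one
// node, and stop early whenever the callback returns false.
type adjSource struct{ adj map[int]map[int]int }

func (a adjSource) ForEachNode(fn func(id int) bool) {
	for id := range a.adj {
		if !fn(id) {
			return // caller requested early exit
		}
	}
}

func (a adjSource) ForEachEdgeFrom(src int, fn func(dst int) bool) {
	for dst := range a.adj[src] {
		if !fn(dst) {
			return
		}
	}
}
```

Per the sequence above, a condensation call would then look roughly like `scc.CondenseFWBW(ctx, adjSource{adj: adj}, scc.DefaultOptions())`.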
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/sync/expand/cycle.go (1)
136-143: Bug: deleting wrong adjacency map on incoming-edge cleanup

The second delete should remove from DestinationsToSources, not SourcesToDestinations; as written it leaves stale reverse entries behind.

```diff
-	delete(g.SourcesToDestinations, nodeID)
+	delete(g.DestinationsToSources, nodeID)
```
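The fix above is one line, but the full cleanup pattern is easy to get wrong: removing a node from paired forward/reverse maps must delete the reverse entries for each outgoing edge, the forward entries for each incoming edge, and then the node's row in both top-level maps. A minimal sketch, assuming a stand-in graph type with the same two field names (not the real EntitlementGraph):

```go
package main

// graph mirrors the paired adjacency maps discussed in the review;
// the type and removeNode are illustrative, not the PR's actual code.
type graph struct {
	SourcesToDestinations map[int]map[int]int
	DestinationsToSources map[int]map[int]int
}

func (g *graph) removeNode(nodeID int) {
	// Drop reverse entries for every outgoing edge nodeID -> dst.
	for dst := range g.SourcesToDestinations[nodeID] {
		delete(g.DestinationsToSources[dst], nodeID)
	}
	// Drop forward entries for every incoming edge src -> nodeID.
	for src := range g.DestinationsToSources[nodeID] {
		delete(g.SourcesToDestinations[src], nodeID)
	}
	delete(g.SourcesToDestinations, nodeID) // forward row
	delete(g.DestinationsToSources, nodeID) // reverse row, not SourcesToDestinations again
}
```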
🧹 Nitpick comments (22)
pkg/sync/expand/scc/test_source.go (2)
1-6: Keep test-only helper out of production builds

Since adjSource is only used by tests, consider renaming this file to test_source_test.go so it is excluded from non-test builds. This keeps the package surface smaller.
8-14: Deterministic iteration to avoid test flakiness

Map iteration order is randomized. Sort node and edge keys before invoking callbacks to keep SCC construction deterministic across runs.
Apply within the shown ranges:

```diff
 func (a adjSource) ForEachNode(fn func(id int) bool) {
-	for id := range a.adj {
-		if !fn(id) {
-			return
-		}
-	}
+	ids := make([]int, 0, len(a.adj))
+	for id := range a.adj {
+		ids = append(ids, id)
+	}
+	sort.Ints(ids)
+	for _, id := range ids {
+		if !fn(id) {
+			return
+		}
+	}
 }

 func (a adjSource) ForEachEdgeFrom(src int, fn func(dst int) bool) {
 	if row, ok := a.adj[src]; ok {
-		for dst := range row {
-			if !fn(dst) {
-				return
-			}
-		}
+		dsts := make([]int, 0, len(row))
+		for dst := range row {
+			dsts = append(dsts, dst)
+		}
+		sort.Ints(dsts)
+		for _, dst := range dsts {
+			if !fn(dst) {
+				return
+			}
+		}
 	}
 }
```

Add the missing import (outside the selected ranges):

```go
import "sort"
```

Also applies to: 16-24
pkg/sync/expand/scc/bitset_test.go (2)
88-106: Use atomic counter instead of mutex in the concurrency test (small nit)

The current code is correct; an atomic makes the intent clearer and removes the extra mutex.

```diff
-import (
-	"sync"
-	"testing"
-)
+import (
+	"sync"
+	"sync/atomic"
+	"testing"
+)
@@
-	setCount := 0
-	var mu sync.Mutex
+	var setCount int32
 	for i := 0; i < N; i++ {
 		go func() {
 			defer wg.Done()
 			if !b.testAndSetAtomic(idx) {
-				mu.Lock()
-				setCount++
-				mu.Unlock()
+				atomic.AddInt32(&setCount, 1)
 			}
 		}()
 	}
@@
-	if setCount != 1 {
+	if atomic.LoadInt32(&setCount) != 1 {
 		t.Fatalf("expected exactly one set, got %d", setCount)
 	}
```

Also applies to: 107-124
10-11: Style nit: simplify boolean check

Prefer `if !b.isEmpty()` for idiomatic Go.

```diff
-	if b.isEmpty() == false {
+	if !b.isEmpty() {
 		t.Fatalf("new bitset should be empty")
 	}
```

pkg/sync/expand/cycle.go (3)
39-41: Don't drop the caller context in cycleDetectionHelper

Using context.Background() bypasses cancellation and timeouts. Plumb ctx through this helper and pass it to CondenseFWBW.
Example refactor (requires updating callers):
```diff
-func (g *EntitlementGraph) cycleDetectionHelper(
-	nodeID int,
-) ([]int, bool) {
+func (g *EntitlementGraph) cycleDetectionHelper(
+	ctx context.Context,
+	nodeID int,
+) ([]int, bool) {
@@
-	groups := scc.CondenseFWBW(context.Background(), fg, scc.DefaultOptions())
+	groups := scc.CondenseFWBW(ctx, fg, scc.DefaultOptions())
```
78-94: filteredGraph iteration correctness LGTM; consider determinism trade-offs

The implementation is correct. If you ever need deterministic traversal (for reproducible outputs), sort keys in ForEachNode/ForEachEdgeFrom at the cost of extra CPU. Not required for correctness.
Also applies to: 95-109
1-9: Add compile-time interface assertion (optional)

Ensures EntitlementGraph (and filteredGraph) continue to satisfy scc.Source after refactors.

```go
var _ scc.Source = (*EntitlementGraph)(nil)
var _ scc.Source = (filteredGraph{})
```

pkg/sync/expand/scc/bitset.go (2)
78-97: Bitwise ops assume equal lengths; assert or document

and/or/andNot iterate up to len(b.w) and index x.w[i] without checking. If the sizes ever diverge, this panics.
Optionally add a fast guard:
```diff
 func (b *bitset) and(x *bitset) *bitset {
+	if len(b.w) != len(x.w) {
+		// programmatic misuse; choose one: panic, truncate, or pad.
+		panic("bitset.and: mismatched sizes")
+	}
 	for i := range b.w {
 		b.w[i] &= x.w[i]
 	}
 	return b
 }
```

(Repeat for or/andNot.)
23-37: Nit: use uint64(1) shifts consistently

Non-atomic paths use an untyped 1; elsewhere you use uint64(1). Align for readability and to quiet linters.

pkg/sync/expand/scc/scc_fuzz_test.go (2)
pkg/sync/expand/scc/scc_fuzz_test.go (2)
13-13: Lint: end comment with a period

Fix "Helper utilities" to satisfy godot.

```diff
-// Helper utilities
+// Helper utilities.
```
311-313: gosec G404 in tests: annotate intent

math/rand is fine for fuzzing; add a nolint comment to silence CI.

```diff
-	r := rand.New(rand.NewSource(int64(seed)))
+	r := rand.New(rand.NewSource(int64(seed))) //nolint:gosec // fuzzing uses math/rand intentionally
```

pkg/sync/expand/scc/scc_nohang_test.go (4)
10-24: Avoid leaking the test goroutine on timeout

withTimeout fails the test but leaves the worker goroutine running. Prefer a context-aware runner or a done channel the worker observes so it can exit early.
Apply:
```diff
 func withTimeout(t *testing.T, d time.Duration, f func(t *testing.T)) {
 	t.Helper()
-	done := make(chan struct{})
+	done := make(chan struct{})
+	stop := make(chan struct{})
 	go func() {
-		defer close(done)
-		f(t)
+		defer close(done)
+		select {
+		case <-stop:
+			return
+		default:
+			f(t)
+		}
 	}()
 	select {
 	case <-done:
 		return
 	case <-time.After(d):
 		t.Fatalf("function did not complete within %v (possible hang)", d)
+		close(stop)
 	}
 }
```
116-123: Remove unused boolean from funcName usage

The third argument is always false and ignored. Simplify call sites.

```diff
-	t.Run(
-		funcName("general", gi, false),
+	t.Run(
+		funcName("general", gi),
```
129-132: Drop the unused parameter from the funcName signature

Tighten the API to match actual use.

```diff
-func funcName(kind string, idx int, _ bool) string {
+func funcName(kind string, idx int) string {
 	return kind + "_#" + strconv.Itoa(idx)
 }
```
316-339: Cancellation may be delayed by CSR build

This test asserts prompt cancellation, but CondenseFWBW currently doesn't check ctx before building the CSR. Add a fast pre-check in CondenseFWBW (see the scc.go comment) or pass ctx into CSR construction to make this test robust.

pkg/sync/expand/scc/scc.go (7)
pkg/sync/expand/scc/scc.go (7)
76-84: Fast-cancel before CSR build

If ctx is already canceled or expired, return immediately to honor deadlines (this also stabilizes TestCancel_HeavyGraphs).

```diff
 func CondenseFWBW(ctx context.Context, src Source, opts Options) [][]int {
+	if err := ctx.Err(); err != nil {
+		return nil
+	}
 	if opts.MaxWorkers <= 0 {
 		opts.MaxWorkers = runtime.GOMAXPROCS(0)
 	}
```
403-406: Silence gosec G115: bound the shift before casting

Cast after masking to avoid int→uint conversion warnings; semantics unchanged.

```diff
-	mask := uint64(1) << (uint(v) & 63)
+	mask := uint64(1) << uint(v&63)
```
292-357: Avoid a named return to satisfy linters

Drop the named result and return explicitly.

```diff
-func sccFWBWIterative(ctx context.Context, csr *CSR, comp []int, opts Options) (nextID int) {
-	nextID = 0
+func sccFWBWIterative(ctx context.Context, csr *CSR, comp []int, opts Options) int {
+	nextID := 0
 	...
 	return nextID
 }
```
51-57: Fix doc invariant: NodeIDToIdx no longer exists

The comment still references NodeIDToIdx; remove it to prevent confusion.

```diff
-	// - len(IdxToNodeID) == N; NodeIDToIdx[IdxToNodeID[i]] == i for all i
+	// - len(IdxToNodeID) == N
```
519-537: Preserve pooled slice when too small

If a slice from the pool is too small, put it back instead of dropping it.

```diff
 func getIntSlice(n int) []int {
 	p, _ := intSlicePool.Get().(*[]int)
-	if p == nil || cap(*p) < n {
+	if p == nil {
 		return make([]int, n)
 	}
+	if cap(*p) < n {
+		intSlicePool.Put(p)
+		return make([]int, n)
+	}
 	s := (*p)[:n]
 	for i := range s {
 		s[i] = 0
 	}
 	return s
 }
```
215-287: Consider gating validateCSR in production

Panicking on invariant violations is great for tests; consider a build tag or an opt-in assertion on production paths to avoid process-wide crashes.
Would you like a debug build tag patch (e.g., //go:build sccdebug) wrapping validateCSR?
106-214: CSR builder: clarify behavior for edges to unknown nodes

Edges to destinations not yielded by ForEachNode are silently dropped. Document this contract on Source (or optionally count drops in debug mode) to aid integrators.
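The "count drops in debug" idea can be sketched as a Source wrapper that tallies edges whose destination was never yielded by ForEachNode. The `Source` interface shape here matches the two iterator methods quoted in this review; `countingSource` and `mapSource` are illustrative names, not part of the PR.

```go
package main

// Source mirrors the two-iterator contract used by the CSR builder.
type Source interface {
	ForEachNode(fn func(id int) bool)
	ForEachEdgeFrom(src int, fn func(dst int) bool)
}

// mapSource is a tiny adjacency-map Source used to exercise the wrapper.
type mapSource map[int]map[int]bool

func (m mapSource) ForEachNode(fn func(id int) bool) {
	for id := range m {
		if !fn(id) {
			return
		}
	}
}

func (m mapSource) ForEachEdgeFrom(src int, fn func(dst int) bool) {
	for dst := range m[src] {
		if !fn(dst) {
			return
		}
	}
}

// countingSource forwards iteration but counts edges to unknown nodes,
// making the silently-dropped-edge contract observable in debug runs.
type countingSource struct {
	inner   Source
	known   map[int]struct{}
	Dropped int
}

func newCountingSource(inner Source) *countingSource {
	known := make(map[int]struct{})
	inner.ForEachNode(func(id int) bool {
		known[id] = struct{}{}
		return true
	})
	return &countingSource{inner: inner, known: known}
}

func (c *countingSource) ForEachNode(fn func(id int) bool) { c.inner.ForEachNode(fn) }

func (c *countingSource) ForEachEdgeFrom(src int, fn func(dst int) bool) {
	c.inner.ForEachEdgeFrom(src, func(dst int) bool {
		if _, ok := c.known[dst]; !ok {
			c.Dropped++ // edge to a node ForEachNode never yielded
			return true // skip it, mirroring the builder's drop behavior
		}
		return fn(dst)
	})
}
```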
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- pkg/sync/expand/cycle.go (2 hunks)
- pkg/sync/expand/graph.go (2 hunks)
- pkg/sync/expand/scc/bitset.go (1 hunk)
- pkg/sync/expand/scc/bitset_test.go (1 hunk)
- pkg/sync/expand/scc/scc.go (12 hunks)
- pkg/sync/expand/scc/scc_fuzz_test.go (1 hunk)
- pkg/sync/expand/scc/scc_nohang_test.go (1 hunk)
- pkg/sync/expand/scc/scc_test.go (7 hunks)
- pkg/sync/expand/scc/test_source.go (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (5)
pkg/sync/expand/cycle.go (2)
- pkg/sync/expand/scc/scc.go (2): CondenseFWBW (77-104), DefaultOptions (41-46)
- pkg/sync/expand/graph.go (1): EntitlementGraph (42-54)

pkg/sync/expand/scc/scc_fuzz_test.go (1)
- pkg/sync/expand/scc/scc.go (2): DefaultOptions (41-46), CondenseFWBW (77-104)

pkg/sync/expand/scc/scc_test.go (1)
- pkg/sync/expand/scc/scc.go (1): CondenseFWBW (77-104)

pkg/sync/expand/scc/scc_nohang_test.go (1)
- pkg/sync/expand/scc/scc.go (3): DefaultOptions (41-46), CondenseFWBW (77-104), Options (35-38)

pkg/sync/expand/graph.go (1)
- pkg/sync/expand/scc/scc.go (1): Source (71-74)
🪛 GitHub Check: go-lint
pkg/sync/expand/scc/scc_fuzz_test.go
[failure] 364-364:
G115: integer overflow conversion uint64 -> int (gosec)
[failure] 357-357:
G115: integer overflow conversion uint64 -> int (gosec)
[failure] 311-311:
G404: Use of weak random number generator (math/rand or math/rand/v2 instead of crypto/rand) (gosec)
[failure] 13-13:
Comment should end in a period (godot)
pkg/sync/expand/scc/scc.go
[failure] 292-292:
named return "nextID" with type "int" found (nonamedreturns)
[failure] 404-404:
G115: integer overflow conversion int -> uint (gosec)
⏰ Context from checks skipped due to timeout of 90000ms (2). You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms).
- GitHub Check: Cursor Bugbot
- GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (9)
pkg/sync/expand/scc/scc_test.go (1)
62-62: Approve API migration and interface compliance
EntitlementGraph defines both ForEachNode (line 323) and ForEachEdgeFrom (line 333), satisfying the scc.Source interface.

pkg/sync/expand/cycle.go (1)
56-67: Cyclic-component predicate mirrors prior semantics; good

len > 1, or a singleton with a self-loop via hasSelfLoop, is correct.
pkg/sync/expand/graph.go (4)
7-7: Good: explicit scc.Source conformance

The import plus the compile-time assertion keeps the integration tight and catches drift early.

Also applies to: 319-319

317-317: Removal of toAdjacency is the right direction

Leaning on the Source iterator keeps memory bounded and avoids building transient maps.

321-329: Map iteration order: confirm determinism surfaces via the CSR builder

ForEachNode ranges over a map (random order). Please verify that buildCSRFromSource (or equivalent) imposes a stable node ordering when Options.Deterministic is true; otherwise fuzz tests asserting idempotence may flake.

333-341: Edge iteration semantics look good; nil-safe

Enumerating the keys of SourcesToDestinations[src] yields unique dsts, and early exit on false is respected. No action needed.
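The stable-ordering step the determinism comment asks about usually amounts to one extra pass: collect the (randomly ordered) map keys, sort once, and derive the CSR index from the sorted slice. A minimal sketch with illustrative names (not the PR's actual buildCSRFromSource):

```go
package main

import "sort"

// stableOrder produces a deterministic node indexing from a map-backed
// graph: idxToNode is the sorted ID list, nodeToIdx its inverse.
func stableOrder(adj map[int]map[int]int) (idxToNode []int, nodeToIdx map[int]int) {
	idxToNode = make([]int, 0, len(adj))
	for id := range adj {
		idxToNode = append(idxToNode, id)
	}
	sort.Ints(idxToNode) // fixed order regardless of map iteration order
	nodeToIdx = make(map[int]int, len(idxToNode))
	for i, id := range idxToNode {
		nodeToIdx[id] = i
	}
	return idxToNode, nodeToIdx
}
```

With this in place, two runs over the same graph index nodes identically, which is what an idempotence-asserting fuzz test needs.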
pkg/sync/expand/scc/scc_fuzz_test.go (1)
425-433: normalizeGroups must exist or this won't compile

This file references normalizeGroups(...) but doesn't define it. Confirm it's provided in another _test.go within package scc, or inline a simple normalizer here.
If needed, drop-in helper:

```diff
+func normalizeGroups(gs [][]int) [][]int {
+	out := make([][]int, len(gs))
+	for i, g := range gs {
+		gg := append([]int(nil), g...)
+		slices.Sort(gg)
+		out[i] = gg
+	}
+	slices.SortFunc(out, func(a, b []int) int {
+		if len(a) != len(b) {
+			return cmp.Compare(len(a), len(b))
+		}
+		for i := range a {
+			if a[i] != b[i] {
+				return cmp.Compare(a[i], b[i])
+			}
+		}
+		return 0
+	})
+	return out
+}
```

(Requires Go 1.21's slices/cmp imports.)
pkg/sync/expand/scc/scc_nohang_test.go (1)
221-245: LGTM: good adversarial coverage for deep chains

Exercises worker counts and ensures no-hang behavior.
pkg/sync/expand/scc/scc.go (1)
296-300: Fix compile error: range over int

`for i := range csr.N` is invalid. Use an index loop.

```diff
-	root := newBitset(csr.N)
-	for i := range csr.N {
-		root.set(i)
-	}
+	root := newBitset(csr.N)
+	for i := 0; i < csr.N; i++ {
+		root.set(i)
+	}
```

⛔ Skipped due to learnings

Learnt from: ggreer
PR: ConductorOne/baton-sdk#321
File: pkg/tasks/c1api/service_client.go:190-190
Timestamp: 2025-03-10T17:07:23.395Z
Learning: Go 1.22 (released February 2024) added support for range over integers, allowing syntax like `for i := range n` which iterates from 0 to n-1 and is equivalent to `for i := 0; i < n; i++`.
```go
func (b *bitset) testAndSetAtomic(i int) bool {
	if i < 0 {
		return false
	}
	w := i >> 6
	mask := uint64(1) << (uint(i) & 63)
	addr := &b.w[w]
	for {
		old := atomic.LoadUint64(addr)
		if old&mask != 0 {
			return true
		}
		if atomic.CompareAndSwapUint64(addr, old, old|mask) {
			return false
		}
	}
}

func (b *bitset) clearAtomic(i int) {
	if i < 0 {
		return
	}
	w := i >> 6
	mask := ^(uint64(1) << (uint(i) & 63))
	addr := &b.w[w]
	for {
		old := atomic.LoadUint64(addr)
		if atomic.CompareAndSwapUint64(addr, old, old&mask) {
			return
		}
	}
}
```
🛠️ Refactor suggestion
Atomic ops: add index guards; unify mask types.
Same out-of-bounds risk here. Suggest cheap precondition guards; returning early is safer than panicking in production paths. Also use uint64(1) consistently for clarity.
```diff
 func (b *bitset) testAndSetAtomic(i int) bool {
 	if i < 0 {
 		return false
 	}
 	w := i >> 6
-	mask := uint64(1) << (uint(i) & 63)
+	if w >= len(b.w) {
+		// Treat OOB as already-set to avoid enqueueing invalid work.
+		return true
+	}
+	mask := uint64(1) << (uint(i) & 63)
 	addr := &b.w[w]
 	for {
 		old := atomic.LoadUint64(addr)
 		if old&mask != 0 {
 			return true
 		}
 		if atomic.CompareAndSwapUint64(addr, old, old|mask) {
 			return false
 		}
 	}
 }

 func (b *bitset) clearAtomic(i int) {
 	if i < 0 {
 		return
 	}
 	w := i >> 6
-	mask := ^(uint64(1) << (uint(i) & 63))
+	if w >= len(b.w) {
+		return
+	}
+	mask := ^(uint64(1) << (uint(i) & 63))
 	addr := &b.w[w]
 	for {
 		old := atomic.LoadUint64(addr)
 		if atomic.CompareAndSwapUint64(addr, old, old&mask) {
 			return
 		}
 	}
 }
```
```go
if len(data) == 0 {
	return map[int]map[int]int{0: {}}
}
```
Fix invalid composite literal for empty adjacency.
map[int]map[int]int{0: {}} won’t compile; inner type is required.
```diff
-	if len(data) == 0 {
-		return map[int]map[int]int{0: {}}
-	}
+	if len(data) == 0 {
+		return map[int]map[int]int{0: map[int]int{}}
+	}
```
Actionable comments posted: 1
♻️ Duplicate comments (1)
pkg/sync/expand/scc/scc_fuzz_test.go (1)
345-347: Fix invalid composite literal for empty adjacency

This won't compile; the inner map type must be specified.

```diff
-	if len(data) == 0 {
-		return map[int]map[int]int{0: {}}
-	}
+	if len(data) == 0 {
+		return map[int]map[int]int{0: map[int]int{}}
+	}
```
🧹 Nitpick comments (1)
pkg/sync/expand/scc/scc_fuzz_test.go (1)
13-13: End comment with a period to satisfy godot

```diff
-// Helper utilities
+// Helper utilities.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- pkg/sync/expand/scc/scc.go (12 hunks)
- pkg/sync/expand/scc/scc_fuzz_test.go (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/sync/expand/scc/scc.go
🧰 Additional context used
🧬 Code graph analysis (1)
pkg/sync/expand/scc/scc_fuzz_test.go (1)
pkg/sync/expand/scc/scc.go (2)
DefaultOptions (41-46), CondenseFWBW (77-104)
🪛 GitHub Check: go-lint
pkg/sync/expand/scc/scc_fuzz_test.go
[failure] 369-369:
G115: integer overflow conversion uint64 -> int (gosec)
[failure] 366-366:
G115: integer overflow conversion int -> uint64 (gosec)
[failure] 363-363:
G115: integer overflow conversion uint64 -> int (gosec)
[failure] 358-358:
G115: integer overflow conversion int -> uint64 (gosec)
[failure] 13-13:
Comment should end in a period (godot)
⏰ Context from checks skipped due to timeout of 90000ms (2). You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms).
- GitHub Check: Cursor Bugbot
- GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (1)
pkg/sync/expand/scc/scc_fuzz_test.go (1)
357-370: Annotate bounded conversions with nosec
Add a `// #nosec G115` directive above the `toIntBounded` definition to silence gosec on these false-positive bounded conversions.

Likely an incorrect or invalid review comment.
pkg/sync/expand/scc/scc_fuzz_test.go (outdated)
```go
groups := CondenseFWBW(context.Background(), adjSource{adj: adj}, opts)
assertPartition(t, adj, groups)
assertDAGCondensation(t, adj, groups)
// idempotence in deterministic mode
groups2 := CondenseFWBW(context.Background(), adjSource{adj: adj}, opts)
if !equalGroups(normalizeGroups(groups), normalizeGroups(groups2)) {
	t.Fatalf("non-deterministic result with Deterministic=true")
}
```
normalizeGroups is undefined; tests will not compile.
Provide an in-file implementation and import sort.
Add import:

```diff
 import (
 	"context"
 	"encoding/binary"
 	"math"
 	"math/rand"
+	"sort"
 	"reflect"
 	"testing"
 	"time"
 )
```
+// normalizeGroups sorts members within each group and orders groups lexicographically.
+func normalizeGroups(gs [][]int) [][]int {
+ out := make([][]int, len(gs))
+ for i, g := range gs {
+ gg := append([]int(nil), g...)
+ sort.Ints(gg)
+ out[i] = gg
+ }
+ sort.Slice(out, func(i, j int) bool {
+ a, b := out[i], out[j]
+ for k := 0; k < len(a) && k < len(b); k++ {
+ if a[k] != b[k] {
+ return a[k] < b[k]
+ }
+ }
+ return len(a) < len(b)
+ })
+ return out
Actionable comments posted: 0
♻️ Duplicate comments (2)
pkg/sync/expand/scc/scc_fuzz_test.go (2)
345-347: Fix invalid composite literal for empty adjacency

`map[int]map[int]int{0: {}}` doesn't compile; the inner type must be specified.

```diff
-	return map[int]map[int]int{0: {}}
+	return map[int]map[int]int{0: map[int]int{}}
```
418-440: normalizeGroups is undefined; tests won't compile

Add the helper and import sort.

Add import:

```diff
 import (
 	"context"
 	"encoding/binary"
 	"math"
 	"math/rand"
+	"sort"
 	"reflect"
 	"testing"
 	"time"
 )
```

Add helper after equalGroups (after Line 34):

```diff
+// normalizeGroups sorts members within each group and orders groups lexicographically.
+func normalizeGroups(gs [][]int) [][]int {
+	out := make([][]int, len(gs))
+	for i, g := range gs {
+		gg := append([]int(nil), g...)
+		sort.Ints(gg)
+		out[i] = gg
+	}
+	sort.Slice(out, func(i, j int) bool {
+		a, b := out[i], out[j]
+		for k := 0; k < len(a) && k < len(b); k++ {
+			if a[k] != b[k] {
+				return a[k] < b[k]
+			}
+		}
+		return len(a) < len(b)
+	})
+	return out
+}
```
🧹 Nitpick comments (5)
pkg/sync/expand/scc/scc_fuzz_test.go (5)
13-13: Punctuate the top-level comment to satisfy godot

Add a period.

```diff
-// Helper utilities
+// Helper utilities.
```
117-125: Remove the redundant nil-guard in addEdge

All node maps are pre-initialized (Lines 113-115); the nil check is dead code.

```diff
-	if adj[u] == nil {
-		adj[u] = make(map[int]int)
-	}
 	adj[u][v] = 1
```
134-148: Edge budget semantics are inconsistent in mode 0

edgesAdded is incremented for the first edge only; bidirectional and self-loop additions don't count against the budget. Other modes (e.g., 3) do increment for added reverse edges. Consider counting only unique insertions and applying the budget uniformly.
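A uniform budget can be enforced by routing every generator mode through one insertion helper that counts only new edges and refuses once the budget is spent. A minimal sketch; `budgetGraph` and `addEdgeBudgeted` are hypothetical names, not the fuzzer's actual code.

```go
package main

// budgetGraph tracks a shared edge budget across all insertions.
type budgetGraph struct {
	adj        map[int]map[int]int
	edgesAdded int
	maxEdges   int
}

// addEdgeBudgeted inserts u->v only if it is new AND budget remains;
// every successful insertion (forward, reverse, or self-loop) counts
// the same way, so no generator mode can bypass the budget.
func (g *budgetGraph) addEdgeBudgeted(u, v int) bool {
	if g.edgesAdded >= g.maxEdges {
		return false
	}
	row, ok := g.adj[u]
	if !ok {
		row = make(map[int]int)
		g.adj[u] = row
	}
	if _, dup := row[v]; dup {
		return false // duplicates never consume budget
	}
	row[v] = 1
	g.edgesAdded++
	return true
}
```

A bidirectional mode would then simply call `addEdgeBudgeted(u, v)` and `addEdgeBudgeted(v, u)`, each charged (or refused) individually.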
299-319: Avoid flakiness: relax the timeout

30ms can be tight on loaded CI. Suggest 100-250ms to reduce flakes while still exercising cancellation.

```diff
-	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Millisecond)
+	ctx, cancel := context.WithTimeout(context.Background(), 150*time.Millisecond)
```
357-376: Quiet gosec G115 by avoiding int→uint64 casts and using a uint64 max-int constant

Rework the clamps to compare against uint64 values without direct int→uint64 casts; also replace uint64(^uint(0)>>1) with a pure-uint64 constant.

```diff
-	// clamp n64 and m64 within int range and provided bounds without unsafe casts
-	var n int
-	if n64 < 1 {
-		n = 1
-	} else if n64 > uint64(maxN) {
-		n = maxN
-	} else if n64 > uint64(^uint(0)>>1) {
-		// larger than max int; cap to maxN which is <= max int
-		n = maxN
-	} else {
-		n = int(n64)
-	}
-	var m int
-	if m64 > uint64(maxM) {
-		m = maxM
-	} else if m64 > uint64(^uint(0)>>1) {
-		m = maxM
-	} else {
-		m = int(m64)
-	}
+	// clamp n64 and m64 within int range and provided bounds without int→uint64 casts
+	const maxIntUint64 = ^uint64(0) >> 1
+	var n int
+	switch {
+	case n64 < 1:
+		n = 1
+	case n64 > uint64(int64(maxN)):
+		n = maxN
+	case n64 > maxIntUint64:
+		n = maxN
+	default:
+		n = int(n64)
+	}
+	var m int
+	switch {
+	case m64 > uint64(int64(maxM)):
+		m = maxM
+	case m64 > maxIntUint64:
+		m = maxM
+	default:
+		m = int(m64)
+	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- pkg/sync/expand/scc/scc_fuzz_test.go (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (1)
pkg/sync/expand/scc/scc_fuzz_test.go (1)
pkg/sync/expand/scc/scc.go (2)
DefaultOptions (41-46), CondenseFWBW (77-104)
🪛 GitHub Check: go-lint
pkg/sync/expand/scc/scc_fuzz_test.go
[failure] 370-370:
G115: integer overflow conversion int -> uint64 (gosec)
[failure] 361-361:
G115: integer overflow conversion int -> uint64 (gosec)
[failure] 13-13:
Comment should end in a period (godot)
[failure] 359-359:
ifElseChain: rewrite if-else to switch statement (gocritic)
⏰ Context from checks skipped due to timeout of 90000ms (2). You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms).
- GitHub Check: Cursor Bugbot
- GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (3)
pkg/sync/expand/scc/scc_fuzz_test.go (3)
24-34: Group equality helper looks good.
No issues spotted.
36-53: Partition assertion is solid.
Covers duplicates and missing nodes.
55-105: DAG condensation check is correct.
Topo-sort based cycle detection is appropriate here.
Force-pushed from fb31a00 to 7c42bcf (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/sync/expand/scc/scc.go (1)
109-123: Deduplicate nodes from Source to avoid duplicated NodeIDs in CSR/groups

If ForEachNode yields duplicates, components may contain the same original ID multiple times. Dedup at collection.

```diff
-	// 1) Collect nodes
-	nodes := make([]int, 0, 1024)
-	src.ForEachNode(func(id int) bool {
-		nodes = append(nodes, id)
-		return true
-	})
+	// 1) Collect nodes (deduplicated)
+	nodes := make([]int, 0, 1024)
+	seen := make(map[int]struct{}, 1024)
+	src.ForEachNode(func(id int) bool {
+		if _, ok := seen[id]; ok {
+			return true
+		}
+		seen[id] = struct{}{}
+		nodes = append(nodes, id)
+		return true
+	})
```
♻️ Duplicate comments (4)
pkg/sync/expand/scc/scc_fuzz_test.go (3)
344-346: Fix invalid empty map composite literal (compile error)

The inner map type must be specified.

Apply:

```diff
-	if len(data) == 0 {
-		return map[int]map[int]int{0: {}}
-	}
+	if len(data) == 0 {
+		return map[int]map[int]int{0: map[int]int{}}
+	}
```
3-11: Add missing import for normalizeGroups. You call normalizeGroups below; it requires sort.

```diff
 import (
 	"context"
 	"encoding/binary"
 	"math"
 	"math/rand"
+	"sort"
 	"reflect"
 	"testing"
 	"time"
 )
```
34-36: Provide normalizeGroups helper to make the idempotence check compile and be deterministic. The missing function prevents the build; add it right after equalGroups.

```go
// normalizeGroups sorts members within each group and orders groups lexicographically.
func normalizeGroups(gs [][]int) [][]int {
	out := make([][]int, len(gs))
	for i, g := range gs {
		gg := append([]int(nil), g...)
		sort.Ints(gg)
		out[i] = gg
	}
	sort.Slice(out, func(i, j int) bool {
		a, b := out[i], out[j]
		for k := 0; k < len(a) && k < len(b); k++ {
			if a[k] != b[k] {
				return a[k] < b[k]
			}
		}
		return len(a) < len(b)
	})
	return out
}
```

pkg/sync/expand/scc/scc.go (1)
316-325: Remove redundant trim loop; trimSingletons already reaches a fixpoint. This avoids extra O(n+m) passes with no effect.

```diff
-	// Trim loop: peel sources/sinks; each peeled vertex becomes its own SCC.
-	for {
-		if n := trimSingletons(csr, active, comp, &nextID); n == 0 {
-			break
-		}
-		if active.isEmpty() {
-			break
-		}
-	}
+	// Trim sources/sinks to fixpoint; each peeled vertex becomes its own SCC.
+	_ = trimSingletons(csr, active, comp, &nextID)
 	if active.isEmpty() {
 		continue
 	}
```
🧹 Nitpick comments (3)
pkg/sync/expand/scc/scc.go (3)
51-58: Doc invariant mentions NodeIDToIdx but the struct no longer stores it. Update the comment to match the type.

```diff
-// - len(IdxToNodeID) == N; NodeIDToIdx[IdxToNodeID[i]] == i for all i
+// - len(IdxToNodeID) == N
```
129-139: Simplify membership check; remove unused j. Minor cleanup.

```diff
-	src.ForEachEdgeFrom(srcID, func(dst int) bool {
-		j, ok := id2idx[dst]
-		if !ok {
-			return true
-		}
-		_ = j // only used to validate membership
+	src.ForEachEdgeFrom(srcID, func(dst int) bool {
+		if _, ok := id2idx[dst]; !ok {
+			return true
+		}
 		outDeg[i]++
 		m++
 		return true
 	})
```
211-219: Consider gating validateCSR behind a debug build tag or option. validateCSR is O(n+m) and always-on; make it conditional for production.
Happy to wire this under a `//go:build sccdebug` guard or an Options flag if desired.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- pkg/sync/expand/scc/scc.go (12 hunks)
- pkg/sync/expand/scc/scc_fuzz_test.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
pkg/sync/expand/scc/scc_fuzz_test.go (1)
pkg/sync/expand/scc/scc.go (2)
DefaultOptions (41-46), CondenseFWBW (77-104)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Cursor Bugbot
- GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (1)
pkg/sync/expand/scc/scc.go (1)
418-472: BFS work partitioning looks solid. Good clamping of worker count, chunking, and per-bucket aggregation with atomic visited. LGTM.
Code under review:

```go
	root := newBitset(csr.N)
	for i := range csr.N {
		root.set(i)
	}
	return groups
}

func sccFWBW(ctx context.Context, csr *CSR, active *bitset, comp []int, nextID *int, opts Options) {
	// Optional trimming loop.
	if opts.EnableTrim {
		type item struct{ mask *bitset }
		stack := make([]item, 0, 64)
		stack = append(stack, item{mask: root})
```
Compile error: cannot range over int.
Initialize the root mask with a standard for-loop.
```diff
-	root := newBitset(csr.N)
-	for i := range csr.N {
-		root.set(i)
-	}
+	root := newBitset(csr.N)
+	for i := 0; i < csr.N; i++ {
+		root.set(i)
+	}
```
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	root := newBitset(csr.N)
	for i := 0; i < csr.N; i++ {
		root.set(i)
	}

	type item struct{ mask *bitset }
	stack := make([]item, 0, 64)
	stack = append(stack, item{mask: root})
```
🤖 Prompt for AI Agents
In pkg/sync/expand/scc/scc.go around lines 296 to 304, the code attempts to
range over csr.N (an int) which causes a compile error; replace the range with a
standard indexed for-loop: create the bitset with newBitset(csr.N) then iterate
for i := 0; i < csr.N; i++ { root.set(i) } so the root mask is initialized
correctly before pushing it on the stack.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
pkg/sync/expand/cycle.go (2)
137-145: Fix: removeNode double-deletes SourcesToDestinations and leaves a stale DestinationsToSources entry. The final delete targets SourcesToDestinations again instead of DestinationsToSources, leaving a stale row.
Apply:

```diff
-	delete(g.SourcesToDestinations, nodeID)
+	delete(g.DestinationsToSources, nodeID)
```

Add a postcondition test to assert both maps lack nodeID after removal.
I can add a small unit test to validate adjacency invariants after fixCycle/removeNode. Want me to open an issue and draft it?
33-41: Propagate context in `cycleDetectionHelper`.
Add a `ctx context.Context` parameter and pass it to `scc.CondenseFWBW` instead of using `context.Background()`:

```diff
-func (g *EntitlementGraph) cycleDetectionHelper(
-	nodeID int,
-) ([]int, bool) {
+func (g *EntitlementGraph) cycleDetectionHelper(
+	ctx context.Context,
+	nodeID int,
+) ([]int, bool) {
@@
-	groups, _ := scc.CondenseFWBW(context.Background(), fg, scc.DefaultOptions())
+	groups, _ := scc.CondenseFWBW(ctx, fg, scc.DefaultOptions())
```

Update all call sites in tests and benchmarks to pass the existing `ctx` as the first argument:

- pkg/sync/expand/cycle_test.go at lines 62, 84, 93, 119
- pkg/sync/expand/cycle_benchmark_test.go at lines 118, 129, 139, 148
♻️ Duplicate comments (2)
pkg/sync/expand/scc/scc.go (2)
320-323: Fix compile error: cannot range over int. Use a standard indexed loop to set the initial root mask.
Apply:

```diff
-	root := newBitset(csr.N)
-	for i := range csr.N {
-		root.set(i)
-	}
+	root := newBitset(csr.N)
+	for i := 0; i < csr.N; i++ {
+		root.set(i)
+	}
```
343-356: Remove redundant trim loop (unnecessary O(n+m) repeats). trimSingletons already peels to a fixpoint; calling it in a loop just recomputes degrees repeatedly with no effect.
Apply:

```diff
-	// Trim loop: peel sources/sinks; each peeled vertex becomes its own SCC.
-	for {
-		if n := trimSingletons(csr, active, comp, &nextID); n == 0 {
-			break
-		} else if metrics != nil {
-			metrics.Peeled += n
-		}
-		if active.isEmpty() {
-			break
-		}
-	}
-	if active.isEmpty() {
-		continue
-	}
+	// Trim sources/sinks to fixpoint; each peeled vertex becomes its own SCC.
+	n := trimSingletons(csr, active, comp, &nextID)
+	if metrics != nil {
+		metrics.Peeled += n
+	}
+	if active.isEmpty() {
+		continue
+	}
```
🧹 Nitpick comments (9)
pkg/sync/expand/scc/scc.go (4)
56-58: Fix CSR invariant comment (references removed field). The documentation still mentions NodeIDToIdx, which no longer exists.
Apply:

```diff
-// - len(IdxToNodeID) == N; NodeIDToIdx[IdxToNodeID[i]] == i for all i
+// - len(IdxToNodeID) == N
```
235-237: Make CSR validation opt-in to avoid panics in production paths. validateCSR panics on violations; allow callers to disable it in hot/production paths while keeping default-on behavior.
Apply:

```diff
 type Options struct {
-	MaxWorkers    int
-	Deterministic bool
+	MaxWorkers    int
+	Deterministic bool
+	Validate      bool
 }

 // DefaultOptions returns a sensible baseline.
 func DefaultOptions() Options {
 	return Options{
 		MaxWorkers:    runtime.GOMAXPROCS(0),
-		Deterministic: false,
+		Deterministic: false,
+		Validate:      true,
 	}
 }
```

```diff
-	validateCSR(csr)
+	if opts.Validate {
+		validateCSR(csr)
+	}
```

Also applies to: 36-39, 41-47
442-447: Remove dead check in BFS. v comes from CSR columns; it cannot be negative post-validate. This branch adds noise.
Apply:

```diff
-	if v < 0 {
-		continue
-	}
```
149-164: Optional: avoid double edge enumeration in the deterministic path. You enumerate edges twice (count, then fill). If Source callbacks are expensive, cache per-row neighbors once and reuse them for outDeg and fill.
Also applies to: 175-204
pkg/sync/expand/scc/scc_test.go (1)
210-216: Fail fast on unexpected empty groups. Empty groups indicate a packing bug; prefer asserting instead of filtering to surface regressions early.
Apply:

```diff
-	for _, g := range groups {
-		if len(g) == 0 {
-			continue
-		}
-		sizes = append(sizes, len(g))
-	}
+	for i, g := range groups {
+		if len(g) == 0 {
+			t.Fatalf("unexpected empty group at index %d", i)
+		}
+		sizes = append(sizes, len(g))
+	}
```

pkg/sync/expand/cycle.go (4)
29-31: Avoid double SCC work when both HasCycles and GetFirstCycle are called. Both paths compute components; callers that invoke HasCycles then GetFirstCycle in succession will pay twice. Consider caching the last SCC result in-context (or returning a bool alongside the first cycle) to amortize cost within a request.

61-62: Expose SCC options or flip Deterministic in tests for reproducibility. DefaultOptions uses non-deterministic iteration; if tests/logs depend on stable ordering, consider accepting an opts parameter or setting Deterministic=true in test paths.
97-111: Defensive: skip edges pointing to non-existent nodes. If invariants slip (e.g., partial removals), the CSR build could reference missing nodes. Cheap guard:

```diff
 func (fg filteredGraph) ForEachEdgeFrom(src int, fn func(dst int) bool) {
 	if fg.include != nil && !fg.include(src) {
 		return
 	}
 	if dsts, ok := fg.g.SourcesToDestinations[src]; ok {
 		for dst := range dsts {
+			if _, exists := fg.g.Nodes[dst]; !exists {
+				continue
+			}
 			if fg.include != nil && !fg.include(dst) {
 				continue
 			}
 			if !fn(dst) {
 				return
 			}
 		}
 	}
 }
```
57-69: Always return non-nil Metrics and pre-allocate the `cyclic` slice.

- Replace `return nil, nil` with `return nil, &scc.Metrics{}` to avoid forcing callers to nil-check.
- Use `make([][]int, 0, len(groups))` instead of `make([][]int, 0)` to avoid repeated allocations.

```diff
 func (g *EntitlementGraph) ComputeCyclicComponents(ctx context.Context) ([][]int, *scc.Metrics) {
 	if g.HasNoCycles {
-		return nil, nil
+		return nil, &scc.Metrics{}
 	}
 	groups, metrics := scc.CondenseFWBW(ctx, g, scc.DefaultOptions())
-	cyclic := make([][]int, 0)
+	cyclic := make([][]int, 0, len(groups))
 	for _, comp := range groups {
 		if len(comp) > 1 || (len(comp) == 1 && g.hasSelfLoop(comp[0])) {
 			cyclic = append(cyclic, comp)
 		}
 	}
 	return cyclic, metrics
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- pkg/sync/expand/cycle.go (3 hunks)
- pkg/sync/expand/scc/scc.go (12 hunks)
- pkg/sync/expand/scc/scc_fuzz_test.go (1 hunks)
- pkg/sync/expand/scc/scc_nohang_test.go (1 hunks)
- pkg/sync/expand/scc/scc_test.go (10 hunks)
- pkg/sync/syncer.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/sync/expand/scc/scc_fuzz_test.go
- pkg/sync/expand/scc/scc_nohang_test.go
🧰 Additional context used
🧬 Code graph analysis (2)
pkg/sync/expand/scc/scc_test.go (1)
pkg/sync/expand/scc/scc.go (1)
CondenseFWBW (92-128)
pkg/sync/expand/cycle.go (2)
pkg/sync/expand/scc/scc.go (3)
CondenseFWBW (92-128), DefaultOptions (42-47), Metrics (78-89)
pkg/sync/expand/graph.go (1)
EntitlementGraph (42-54)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Cursor Bugbot
- GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (7)
pkg/sync/expand/scc/scc_test.go (2)
62-71: LGTM: updated API usage and metrics assertions. Tests correctly adopt CondenseFWBW(src) and validate core metrics for a simple ring.

245-257: Determinism test looks solid. MaxWorkers=1 plus normalized comparisons makes runs stable across iterations.
pkg/sync/syncer.go (1)
1365-1372: LGTM: surface SCC metrics in logs. Capturing scc_metrics alongside a sample cycle is useful for observability.
pkg/sync/expand/cycle.go (4)
17-21: LGTM: GetFirstCycle now delegates to SCC-driven components. The early-return on HasNoCycles and selection of the first cyclic component look good.

43-45: LGTM: correct cycle predicate including self-loops. Handling singleton components via hasSelfLoop mirrors the SCC definition.

51-53: LGTM: FixCycles uses precomputed cyclic components. Keeps SCC work centralized.

71-78: LGTM: hasSelfLoop is simple and O(1). Directly checks the adjacency row; good.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/sync/expand/cycle.go (1)
145-146: Wrong map deleted; leaks a stale reverse entry. Should delete from DestinationsToSources, not SourcesToDestinations (already deleted above).

```diff
-	delete(g.SourcesToDestinations, nodeID)
+	delete(g.DestinationsToSources, nodeID)
```
♻️ Duplicate comments (3)
pkg/sync/expand/scc/bitset.go (2)
34-40: Guard against OOB and use a uint64 mask in set. Prevent panic on short/zero-capacity bitsets and fix the mask type.

```diff
 func (b *bitset) set(i int) {
 	if i < 0 {
 		return
 	}
 	w := i >> 6
-	b.w[w] |= 1 << (uint(i) & 63)
+	if w >= len(b.w) {
+		return
+	}
+	b.w[w] |= uint64(1) << (uint(i) & 63)
 }
```
23-31: Fix mask type to uint64 to avoid a compile error. Bitwise ops mix uint64 with an int-typed shift result. Use uint64(1) for the mask.

```diff
-	return (b.w[w] & (1 << (uint(i) & 63))) != 0
+	return (b.w[w] & (uint64(1) << (uint(i) & 63))) != 0
```

pkg/sync/expand/scc/scc.go (1)
343-356: Remove redundant trim loop; trimSingletons already peels to a fixpoint. This loop re-runs O(n+m) work unnecessarily and inflates metrics.

```diff
-	// Trim loop: peel sources/sinks; each peeled vertex becomes its own SCC.
-	for {
-		if n := trimSingletons(csr, active, comp, &nextID); n == 0 {
-			break
-		} else if metrics != nil {
-			metrics.Peeled += n
-		}
-		if active.isEmpty() {
-			break
-		}
-	}
-	if active.isEmpty() {
-		continue
-	}
+	// Trim to fixpoint once; trimSingletons already peels transitively.
+	n := trimSingletons(csr, active, comp, &nextID)
+	if metrics != nil {
+		metrics.Peeled += n
+	}
+	if active.isEmpty() {
+		continue
+	}
```
🧹 Nitpick comments (3)
pkg/sync/expand/cycle_benchmark_test.go (1)
110-111: Scope ctx per sub-benchmark to avoid cross-run coupling. Create a fresh ctx inside each b.Run to prevent future cancellations (if added) from leaking across sub-benchmarks.

```diff
-ctx, cancel := context.WithCancel(context.Background())
-defer cancel()
+// Create ctx per sub-benchmark below instead of sharing one across all.
```

And within each b.Run:

```diff
 b.Run(fmt.Sprintf("ring-%d", n), func(b *testing.B) {
+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
 	g := buildRing(b, n)
```
87-106: Defensive length handling for bitwise ops. If x and b differ in size, these will panic. Use the min length; zero the tail for and/andNot.

```diff
 func (b *bitset) and(x *bitset) *bitset {
-	for i := range b.w {
-		b.w[i] &= x.w[i]
-	}
+	n := len(b.w)
+	if len(x.w) < n {
+		n = len(x.w)
+	}
+	for i := 0; i < n; i++ {
+		b.w[i] &= x.w[i]
+	}
+	for i := n; i < len(b.w); i++ {
+		b.w[i] = 0
+	}
 	return b
 }

 func (b *bitset) or(x *bitset) *bitset {
-	for i := range b.w {
-		b.w[i] |= x.w[i]
-	}
+	n := len(b.w)
+	if len(x.w) < n {
+		n = len(x.w)
+	}
+	for i := 0; i < n; i++ {
+		b.w[i] |= x.w[i]
+	}
 	return b
 }

 func (b *bitset) andNot(x *bitset) *bitset {
-	for i := range b.w {
-		b.w[i] &^= x.w[i]
-	}
+	n := len(b.w)
+	if len(x.w) < n {
+		n = len(x.w)
+	}
+	for i := 0; i < n; i++ {
+		b.w[i] &^= x.w[i]
+	}
+	for i := n; i < len(b.w); i++ {
+		b.w[i] = 0
+	}
 	return b
 }
```

pkg/sync/expand/cycle.go (1)
41-43: Enable deterministic SCC for stable results. Map iteration order is random; passing Deterministic=true avoids flaky ordering in GetFirstCycle/metrics.

```diff
-	groups, _ := scc.CondenseFWBW(ctx, fg, scc.DefaultOptions())
+	opts := scc.DefaultOptions()
+	opts.Deterministic = true
+	groups, _ := scc.CondenseFWBW(ctx, fg, opts)
```

```diff
-	groups, metrics := scc.CondenseFWBW(ctx, g, scc.DefaultOptions())
+	opts := scc.DefaultOptions()
+	opts.Deterministic = true
+	groups, metrics := scc.CondenseFWBW(ctx, g, opts)
```

Also applies to: 62-63
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- pkg/sync/expand/cycle.go (2 hunks)
- pkg/sync/expand/cycle_benchmark_test.go (5 hunks)
- pkg/sync/expand/cycle_test.go (4 hunks)
- pkg/sync/expand/graph_test.go (4 hunks)
- pkg/sync/expand/scc/bitset.go (1 hunks)
- pkg/sync/expand/scc/scc.go (10 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
pkg/sync/expand/cycle.go (2)
pkg/sync/expand/graph.go (1)
EntitlementGraph (42-54)
pkg/sync/expand/scc/scc.go (3)
CondenseFWBW (92-128), DefaultOptions (42-47), Metrics (78-89)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Cursor Bugbot
- GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (5)
pkg/sync/expand/cycle_benchmark_test.go (1)
120-120: LGTM: context-aware helper usage. Calls updated to cycleDetectionHelper(ctx, start) match the new API.
Also applies to: 131-131, 141-141, 151-151
pkg/sync/expand/cycle_test.go (1)
63-63: LGTM: tests now pass ctx to cycleDetectionHelper. Context propagation in tests aligns with the updated helper signature.
Also applies to: 85-85, 94-94, 119-119
pkg/sync/expand/graph_test.go (2)
5-5: LGTM: fmt import. Needed for expression construction; all good.

24-25: LGTM: rely on type inference for mapset.NewSet. Generics inference from []int is fine here.
pkg/sync/expand/scc/scc.go (1)
111-124: Replace invalid “range over int” with indexed loops. Go cannot range over an int; the current code does not compile in many places.

```diff
-	for idx := range csr.N {
+	for idx := 0; idx < csr.N; idx++ {
 		cid := comp[idx]
 		...
 	}
@@
-	for i := range csr.N {
+	for i := 0; i < csr.N; i++ {
 		root.set(i)
 	}
@@
-	for i := range n {
+	for i := 0; i < n; i++ {
 		srcID := nodes[i]
 		...
 	}
@@
-	for i := range n {
+	for i := 0; i < n; i++ {
 		row[i+1] = row[i] + outDeg[i]
 	}
@@
 	if opts.Deterministic {
-		for i := range n {
+		for i := 0; i < n; i++ {
 			srcID := nodes[i]
 			...
 		}
 	} else {
-		for i := range n {
+		for i := 0; i < n; i++ {
 			srcID := nodes[i]
 			...
 		}
 	}
@@
-	for i := range n {
+	for i := 0; i < n; i++ {
 		trow[i+1] = trow[i] + inDeg[i]
 	}
@@
-	for u := range n {
+	for u := 0; u < n; u++ {
 		start, end := row[u], row[u+1]
 		...
 	}
@@
-	for i := range len(csr.Row) - 1 {
+	for i := 0; i < len(csr.Row)-1; i++ {
 		if csr.Row[i] > csr.Row[i+1] {
 			...
 		}
 	}
@@
-	for i := range len(csr.TRow) - 1 {
+	for i := 0; i < len(csr.TRow)-1; i++ {
 		if csr.TRow[i] > csr.TRow[i+1] {
 			...
 		}
 	}
@@
-	for p := range len(csr.Col) {
+	for p := 0; p < len(csr.Col); p++ {
 		v := csr.Col[p]
 		...
 	}
@@
-	for p := range len(csr.TCol) {
+	for p := 0; p < len(csr.TCol); p++ {
 		v := csr.TCol[p]
 		...
 	}
@@
-	for v := range n {
+	for v := 0; v < n; v++ {
 		expected := inDeg[v]
 		...
 	}
@@
-	for u := range n {
+	for u := 0; u < n; u++ {
 		if !active.test(u) { ... }
 		...
 	}
@@
-	for v := range n {
+	for v := 0; v < n; v++ {
 		if !active.test(v) { ... }
 		...
 	}
@@
-	for i := range n {
+	for i := 0; i < n; i++ {
 		if !active.test(i) { ... }
 		if inDeg[i] == 0 || outDeg[i] == 0 {
 			queue = append(queue, i)
 		}
 	}
```

Also applies to: 320-323, 152-164, 168-174, 176-191, 192-202, 211-217, 217-225, 263-267, 276-280, 286-291, 292-297, 304-311, 590-604, 605-618, 623-631
⛔ Skipped due to learnings
Learnt from: ggreer
PR: ConductorOne/baton-sdk#321
File: pkg/tasks/c1api/service_client.go:190-190
Timestamp: 2025-03-10T17:07:23.395Z
Learning: Go 1.22 (released February 2024) added support for range over integers, allowing syntax like `for i := range n`, which iterates from 0 to n-1 and is equivalent to `for i := 0; i < n; i++`.
Code under review:

```go
	expressionStr := ""
	for i := range depth {
		expressionStr += fmt.Sprintf("%d>%d", i+1, i+2)
	}
	graph := parseExpression(t, ctx, expressionStr)
```
Fix chain expression construction (currently builds invalid string)
Concatenating "i>i+1" pairs yields "1>22>33..." instead of "1>2>3...". Build a single chain string.
```diff
-	expressionStr := ""
-	for i := range depth {
-		expressionStr += fmt.Sprintf("%d>%d", i+1, i+2)
-	}
-	graph := parseExpression(t, ctx, expressionStr)
+	var sb strings.Builder
+	sb.WriteString("1")
+	for i := 2; i <= depth+1; i++ {
+		fmt.Fprintf(&sb, ">%d", i)
+	}
+	expressionStr := sb.String()
+	graph := parseExpression(t, ctx, expressionStr)
```
+ graph := parseExpression(t, ctx, expressionStr)Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In pkg/sync/expand/graph_test.go around lines 296 to 301, the test builds the
chain expression incorrectly by concatenating "i>i+1" pairs which produces
strings like "1>22>33..."; instead construct a single chain "1>2>3...": iterate
from 1 to depth+1 and append each number separated by ">" (or build a slice
of numbers as strings and join them with ">"), then pass that joined string
to parseExpression so the expression is valid.
Code under review:

```go
	expressionStr := ""
	for i := range depth {
		expressionStr += fmt.Sprintf("%d>%d", i+1, i+2)
	}
	expressionStr += fmt.Sprintf("%d>%d", depth, 1)
	graph := parseExpression(t, ctx, expressionStr)

	require.Equal(t, depth+1, len(graph.Nodes))
	require.Equal(t, depth+1, len(graph.Edges))
```
Close the ring correctly and build the expression safely
Two issues: (1) same pair-concatenation bug as above; (2) closing edge should be (depth+1)->1, not depth->1, to include all nodes in the cycle and satisfy the postconditions.
```diff
-	expressionStr := ""
-	for i := range depth {
-		expressionStr += fmt.Sprintf("%d>%d", i+1, i+2)
-	}
-	expressionStr += fmt.Sprintf("%d>%d", depth, 1)
+	var sb strings.Builder
+	sb.WriteString("1")
+	for i := 2; i <= depth+1; i++ {
+		fmt.Fprintf(&sb, ">%d", i)
+	}
+	// Close the ring: (depth+1) -> 1
+	fmt.Fprintf(&sb, ">%d", 1)
+	expressionStr := sb.String()
```
+ expressionStr := sb.String()📝 Committable suggestion
```go
	// Build a correct cycle expression: 1→2→3→…→(depth+1)→1
	var sb strings.Builder
	sb.WriteString("1")
	for i := 2; i <= depth+1; i++ {
		fmt.Fprintf(&sb, ">%d", i)
	}
	// Close the ring: (depth+1) -> 1
	fmt.Fprintf(&sb, ">%d", 1)
	expressionStr := sb.String()
	graph := parseExpression(t, ctx, expressionStr)

	require.Equal(t, depth+1, len(graph.Nodes))
	require.Equal(t, depth+1, len(graph.Edges))
```
🤖 Prompt for AI Agents
In pkg/sync/expand/graph_test.go around lines 325 to 333, the expression builder
uses an incorrect range and closes the ring with the wrong node, causing the
cycle to miss the last node; replace the loop with an index-based loop (for i :=
0; i < depth; i++) that appends each pair i+1->i+2, separate pairs with the
required delimiter, and then append the final closing edge using (depth+1)->1 so
the cycle includes all nodes; prefer using a strings.Builder to build the
expression safely and efficiently rather than repeated string concatenation.
Code under review:

```go
func (b *bitset) forEachSet(fn func(i int)) {
	for wi, w := range b.w {
		for w != 0 {
			tz := bits.TrailingZeros64(w)
			i := (wi << 6) + tz
			fn(i)
			w &^= 1 << uint(tz) //nolint:gosec // trailing zeros is non-negative
		}
```
Use uint64 mask in forEachSet.
Same mixed-type issue; current code won’t compile.
```diff
-			w &^= 1 << uint(tz) //nolint:gosec // trailing zeros is non-negative
+			w &^= uint64(1) << uint(tz) //nolint:gosec // trailing zeros is non-negative
```
+ w &^= uint64(1) << uint(tz) //nolint:gosec // trailing zeros is non-negative📝 Committable suggestion
```go
func (b *bitset) forEachSet(fn func(i int)) {
	for wi, w := range b.w {
		for w != 0 {
			tz := bits.TrailingZeros64(w)
			i := (wi << 6) + tz
			fn(i)
			w &^= uint64(1) << uint(tz) //nolint:gosec // trailing zeros is non-negative
		}
	}
}
```
🤖 Prompt for AI Agents
In pkg/sync/expand/scc/bitset.go around lines 117 to 124, the bit-clear
expression mixes integer types and won't compile; replace the mask expression w
&^= 1 << uint(tz) with a uint64 mask (e.g. w &^= uint64(1) << uint(tz)) so both
operands are uint64 and the shift uses a uint cast for tz.
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Refactor