What would you like to be added:
We need to establish a set of Go benchmarks to quantify the upper limits of the Flow Control layer, specifically focusing on the overhead of maintaining a large number of active queues (Flows).
Why is this needed:
As we onboard new users, a common question is "How many Fairness IDs (Queues) can we support?" (e.g., per-user fairness in a large MaaS app). We need concrete data to answer this.
Specific Benchmarks needed:
- Memory Overhead: Cost per `Flow` (Queue + Policy + Metrics) in the Registry.
- Dispatch Cycle Latency vs. Flow Count:
  - The `BestHead` inter-flow policy currently iterates over candidates. We need to measure how dispatch cycle latency degrades as $N_{flows}$ scales from 10 to 10,000.
- Goroutine Saturation: Since each buffered request holds a blocking goroutine, we should stress-test the system with high concurrency (e.g., 50k+ buffered requests) to identify bottlenecks in the Go runtime scheduler or memory pressure.
Acceptance Criteria:
- `go test -bench` results added to the repository or documentation.
- Identification of the "Knee of the curve" where performance degrades unacceptably.
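For the goroutine-saturation bullet, a rough per-goroutine memory estimate can be gathered without the real buffering path. This is an illustrative sketch only (it parks plain goroutines on a channel, not actual buffered requests), using the standard `runtime.MemStats` counters:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parkedBytes parks n goroutines on a channel, mimicking n buffered requests
// each holding a blocking goroutine, and returns the approximate
// per-goroutine memory cost (heap + stack). Illustrative only; the real
// Flow Control buffering path will carry additional per-request state.
func parkedBytes(n int) int64 {
	runtime.GC()
	var before runtime.MemStats
	runtime.ReadMemStats(&before)

	release := make(chan struct{})
	var started, done sync.WaitGroup
	started.Add(n)
	done.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer done.Done()
			started.Done()
			<-release // park, like a request waiting for dispatch
		}()
	}
	started.Wait() // all goroutines have been scheduled and are parking

	runtime.GC()
	var after runtime.MemStats
	runtime.ReadMemStats(&after)

	close(release)
	done.Wait()

	heap := int64(after.HeapAlloc) - int64(before.HeapAlloc)
	stack := int64(after.StackInuse) - int64(before.StackInuse)
	return (heap + stack) / int64(n)
}

func main() {
	for _, n := range []int{1000, 10000, 50000} {
		fmt.Printf("parked=%d  ~%d B/goroutine\n", n, parkedBytes(n))
	}
}
```

Each parked goroutine costs at least its initial stack (a few KB), so even before any Flow Control state is attached, 50k buffered requests imply hundreds of MB of baseline memory; the stress test should confirm how much the real request path adds on top.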