PMM-14737 QAN dev env. #4955
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

Coverage Diff (base v3 vs. #4955):
  Coverage   45.83% → 45.16%  (-0.67%)
  Files         367  →   367
  Lines       38531  → 38531
  Hits        17659  → 17402  (-257)
  Misses      19172  → 19458  (+286)
  Partials     1700  →  1671  (-29)
<storage_configuration>
  <disks>
    <backup>
      <type>local</type>
      <path>/root/go/src/github.com/percona/pmm/dev/clickhouse-backups/</path>
    </backup>
  </disks>
</storage_configuration>
<backups>
  <allowed_disk>backup</allowed_disk>
</backups>
These are the real changes in the dev config compared to the default in PMM Server. The other lines are copied from there.
I think it is worth explicitly describing that these make targets should be run outside of the Docker container. Right now we have a mix: some should be run on the host, others inside the pmm-server Docker container (like make run-managed).
| if [ "$num" -gt "$start" ]; then | ||
| echo "Running migration: $file" | ||
| sql=$(sed 's/ALTER TABLE metrics/ALTER TABLE pmm.metrics/g' "$dir/$file") | ||
| docker exec pmm-server clickhouse-client --password=clickhouse --query="$sql" |
I would add the possibility to override the credentials via environment variables.
Does it make sense if this is only for development purposes and we have static credentials in ClickHouse? It is not included in the image, and customers cannot use it anyway.
This is really for development purposes, but we can't guarantee that this script will only be used with our default ClickHouse deployment. What if a developer spins up a separate deployment (another ClickHouse version, a cluster-based one, etc.) and needs to run benchmarks?
An option to pass the user and password via variables has been added.
Co-authored-by: Maxim Kondratenko <[email protected]>
With $(info ) it was always printed to the output in my case.
qan-api2/pmm_demo_benchmark_test.go (outdated)
for i := 0; i < b.N; i++ {
    duration := benchmarkRequest(b, url, payload)
    durations[i] = duration
    b.Logf("iteration %d took %v", i, duration)
These are not quite correct measurements, because:
- they measure request preparation inside benchmarkRequest
- they measure HTTP client initialization
- they measure I/O in the b.Logf() call
I would suggest the following:
- exclude request preparation (setup) from the measurements:
func benchmarkRequest(b *testing.B, ...) time.Duration {
    <init client>
    <init request>
    start := time.Now()
    b.ResetTimer() // <--- start measurement here
    resp, err := client.Do(req)
    b.StopTimer() // <----- in order to exclude the rest of the function from measurement
    duration := time.Since(start)
    if err != nil {
        b.Fatalf("Request failed: %v", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        b.Fatalf("Unexpected status code: %d", resp.StatusCode)
    }
    return duration
}
One more point - use testing.B.Loop() - https://go.dev/blog/testing-b-loop
func BenchmarkGetFilters(b *testing.B) {
    url := baseURL + "qan/metrics:getFilters"
    payload := `{
"labels": [],
"main_metric_name": "load"}`
    f := prepareRequest(...)
    for b.Loop() {
        benchmarkRequest(b, url, payload)
    }
}
In order to print an additional metric you may use https://pkg.go.dev/testing#B.ReportMetric
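Building on the example above, a minimal sketch of how B.ReportMetric could be combined with B.Loop (it assumes benchmarkRequest returns the measured time.Duration as in the snippets above; the ms/req metric name is only an illustration, and B.Loop requires Go 1.24+):

func BenchmarkGetFilters(b *testing.B) {
    url := baseURL + "qan/metrics:getFilters"
    payload := `{"labels": [], "main_metric_name": "load"}`

    var total time.Duration
    for b.Loop() {
        total += benchmarkRequest(b, url, payload)
    }
    // After the loop, b.N holds the total number of iterations, so it can be
    // used to compute an average latency reported next to the built-in ns/op.
    b.ReportMetric(total.Seconds()*1000/float64(b.N), "ms/req")
}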
- Loop is applied.
- ReportMetric is used.
- The suggestion above
func benchmarkRequest(b *testing.B, ...) time.Duration {
    <init client>
    <init request>
    start := time.Now()
    b.ResetTimer() // <--- start measurement here
    resp, err := client.Do(req)
    b.StopTimer() // <----- in order to exclude the rest of the function from measurement
    duration := time.Since(start)
    if err != nil {
        b.Fatalf("Request failed: %v", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        b.Fatalf("Unexpected status code: %d", resp.StatusCode)
    }
    return duration
}
results in the following error:
"benchmark.go:412: B.Loop called with timer stopped"
The timer is left stopped when execution continues with the next iteration, so I added a StartTimer call before the end of the function to fix this. However, resetting and stopping the timer will slightly affect the measured time, which is the main indicator we care about in this benchmark.
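For clarity, a minimal sketch of that fix (the POST method, JSON content type, and strings.NewReader body are assumptions; only the StartTimer call before the return is the change described above, the rest mirrors the quoted suggestion and needs net/http, strings, and time imports):

func benchmarkRequest(b *testing.B, url, payload string) time.Duration {
    // Setup stays outside the measured section.
    client := &http.Client{}
    req, err := http.NewRequest(http.MethodPost, url, strings.NewReader(payload))
    if err != nil {
        b.Fatalf("failed to build request: %v", err)
    }
    req.Header.Set("Content-Type", "application/json")

    b.ResetTimer() // kept from the suggestion above; zeroes the benchmark's own elapsed time
    start := time.Now()
    resp, err := client.Do(req)
    duration := time.Since(start)
    b.StopTimer() // exclude response handling from the measurement
    if err != nil {
        b.Fatalf("Request failed: %v", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        b.Fatalf("Unexpected status code: %d", resp.StatusCode)
    }
    // Restart the timer before returning so that the next b.Loop() call does
    // not fail with "B.Loop called with timer stopped".
    b.StartTimer()
    return duration
}

This keeps the timer running whenever b.Loop() is entered again, which is what the error message requires.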
PMM-14737
FB: Percona-Lab/pmm-submodules#4201
We don't need to update timestamps, because if the data gets deleted we can easily recover it with one command. It is also good to stick with the same data and period range when running benchmarks on different PMM versions.
Repository to store PMM Demo backups: https://github.com/Percona-Lab/pmm-demo-dump
Usage:
1. Download the PMM Demo backup
make download-clickhouse-backup
The default backup is 2026-01-20.
All available backups can be found here: https://github.com/Percona-Lab/pmm-demo-dump/releases/tag/pmm-demo
To download a different backup, set the Makefile variable:
BACKUP_NAME=<backup_name>
2. Restore the PMM Demo backup
After downloading, the backup will be available in "dev/clickhouse-backups/"
Example: dev/clickhouse-backups/20260120
Restore the backup by running:
make restore-clickhouse-backup
You must specify the backup name and the latest migration applied in the backup.
For the default backup (20260120), this is migration 21 (the default).
To restore a different backup, set the Makefile variable BACKUP_NAME=<backup_name> and the variable that holds the last migration applied in that backup.
For other backups, the latest applied migration is documented in the release description:
https://github.com/Percona-Lab/pmm-demo-dump/releases/tag/pmm-demo
3. Run benchmarks
To benchmark all main endpoints:
make bench
To benchmark specific endpoints:
Each benchmark runs 10 iterations and reports AVG, MIN, and MAX timings.