Conversation

@JSCU-CNI (Contributor) commented Apr 30, 2025

Fixes #1126. Inspired by (read: copy/pasted from) the setup in fox-it/dissect.util#64.

@JSCU-CNI JSCU-CNI changed the title dd some benchmarks Add some benchmarks Apr 30, 2025
@codecov codecov bot commented May 1, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 77.90%. Comparing base (5200fc1) to head (a345b0c).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1129      +/-   ##
==========================================
+ Coverage   77.86%   77.90%   +0.03%     
==========================================
  Files         358      358              
  Lines       32622    32622              
==========================================
+ Hits        25401    25413      +12     
+ Misses       7221     7209      -12     
Flag      | Coverage | Δ
--------- | -------- | ------
unittests | 77.90%   | +0.03% ⬆️

Flags with carried forward coverage won't be shown.



@Schamper (Member) commented May 1, 2025

Can we make the journal (and maybe the walkfs) benchmark more targeted? For example, only parse a single entry.

@JSCU-CNI (Contributor, Author) commented May 6, 2025

> Can we make the journal (and maybe the walkfs) benchmark more targeted? For example, only parse a single entry.

What do you propose? Calling next() once on a plugin function generator?

@Schamper (Member) commented May 7, 2025

>> Can we make the journal (and maybe the walkfs) benchmark more targeted? For example, only parse a single entry.
>
> What do you propose? Calling next() once on a plugin function generator?

It's tricky for sure, since just benchmarking next() is not very fruitful (each call changes the generator's state). At least for the journal parser, _parse_entry_object seems to be deterministic.

It's a bit annoying that CodSpeed does not yet support the benchmark.pedantic mode from pytest-benchmark (see CodSpeedHQ/pytest-codspeed#78); otherwise we could put the plugin setup in there and easily benchmark a single iteration.

Perhaps, for the time being, initializing the plugin and calling a single next is the best we can do until CodSpeed supports that. So basically `lambda: next(MyPlugin(target).func())` as the benchmark.
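For illustration, here is a minimal, self-contained sketch of that "call next() once" pattern. `FakeJournalPlugin` and `run_benchmark` are hypothetical stand-ins (not dissect.target or pytest-benchmark APIs); in the real suite the callable would be passed to the `benchmark` fixture instead.

```python
# Sketch of benchmarking only the first entry from a lazy plugin
# generator. Everything here is a stand-in for illustration.

class FakeJournalPlugin:
    """Hypothetical plugin whose journal() yields parsed entries lazily."""

    def __init__(self, target):
        self.target = target

    def journal(self):
        # Parsing happens lazily: one entry is produced per next() call,
        # so a single next() exercises one parse iteration.
        for i in range(3):
            yield {"entry": i, "target": self.target}


def run_benchmark(fn):
    # Stand-in for the pytest-benchmark `benchmark` fixture: it calls
    # the measured callable (repeatedly, in the real fixture) and
    # returns its result.
    return fn()


# Benchmark only the parsing of a single entry, as suggested above.
# The generator is re-created inside the lambda, so plugin/generator
# setup cost is still included in each measured call.
result = run_benchmark(lambda: next(FakeJournalPlugin("my-target").journal()))
print(result)  # → {'entry': 0, 'target': 'my-target'}
```

Note the trade-off mentioned earlier: because the generator is rebuilt on every call, the measurement still includes per-call setup, which is why pedantic-style setup hooks would be preferable once CodSpeed supports them.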

@codspeed-hq codspeed-hq bot commented May 8, 2025

CodSpeed Performance Report

Congrats! CodSpeed is installed 🎉

🆕 3 new benchmarks were detected.

You will start to see performance impacts in the reports once the benchmarks are run from your default branch.

Detected benchmarks

  • test_benchmark_find_needles (58.3 µs)
  • test_benchmark_walkfs (477.2 µs)
  • test_benchmark_journal (4.7 ms)

@Schamper Schamper merged commit dbf97a0 into fox-it:main May 8, 2025
22 of 25 checks passed
@JSCU-CNI JSCU-CNI deleted the add-benchmarks branch May 12, 2025 13:03


Development

Successfully merging this pull request may close these issues.

Add benchmark tests for scrape plugin
