
Conversation

@MrMarble MrMarble commented Nov 27, 2025

Important

Please read the description before looking at the code; I've also added some comments to the code itself.

I've created a test fixture that gathers some analytics to help analyze the performance of the application.
The idea is to no longer need django-silk or similar tools to run manual performance checks: just write a test case that replicates the workflow (or calls the function) you want to measure using this new fixture, then review the results.
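
For illustration, a minimal sketch of what such a test might look like, assuming the fixture only needs to be requested by the test to start collecting metrics; the auth_client fixture and the /flaws path are placeholders, not part of this PR:

```python
import pytest


@pytest.mark.django_db
def test_flaw_list_performance(performance_audit, auth_client):
    # Requesting performance_audit is assumed to be enough for the fixture to
    # record wall time, CPU time, DB time, query counts, N+1 patterns, writes,
    # duplicate queries and slow queries for this test.
    response = auth_client.get("/flaws")  # placeholder client and endpoint
    assert response.status_code == 200
```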

The fixture will print a summary of the results in the CLI after the tests run:

```
===================================================== Performance Regression Report =====================================================
-----------------------------------------------------------------------------------------------------------------------------------------
Test Name                                | Time     | CPU Time | DB Time  | Queries | Tables | N+1  | Writes | Dup Queries | Slow queries
-----------------------------------------------------------------------------------------------------------------------------------------
test_flaw_details_with_client            |    256ms |    235ms |     13ms |      58 |     15 |    3 |      0 |           1 |            0
test_flaw_details_with_factory           |     25ms |     23ms |      1ms |      10 |     12 |    0 |      0 |           0 |            0
test_list_endpoints[/flaws]              |    145ms |    141ms |      1ms |      14 |     13 |    0 |      0 |           0 |            0
test_list_endpoints[/flaws?include_hi... |    171ms |    160ms |      8ms |      14 |     23 |    0 |      0 |           0 |            0
test_list_endpoints[/flaws?exclude_fi... |    129ms |    123ms |      7ms |      13 |     13 |    0 |      0 |           0 |            0
test_list_endpoints[/flaws?include_fi... |     73ms |     65ms |      3ms |      24 |      2 |    2 |      0 |           0 |            0
test_list_endpoints[/affects]            |     62ms |     60ms |      3ms |       5 |      6 |    0 |      0 |           0 |            0
test_list_endpoints[/affects?include_... |     79ms |     74ms |      6ms |       6 |     11 |    0 |      0 |           0 |            0
test_fn_call                             |     57ms |     47ms |     15ms |      18 |      9 |    1 |      0 |           0 |            0
test_create_flaw                         |     33ms |     29ms |      1ms |      23 |      9 |    2 |      2 |           3 |            0
```

It will also generate a full Markdown report, performance_report_YYYY-MM-DD_HH-MM.md, so you can review old runs while iterating. The report is also added to the GitHub summary of the test action; you can check it here: https://github.com/RedHatProductSecurity/osidb/actions/runs/19742723923?pr=1147.
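
As a side note on how a Markdown file typically ends up in a job summary: GitHub Actions exposes a GITHUB_STEP_SUMMARY environment variable pointing to a file, and appending Markdown to it adds the content to the run summary. A hedged sketch of that mechanism (this PR may well do it in the workflow YAML instead of in the fixture):

```python
import os
import shutil


def publish_report(report_path: str) -> None:
    """Append a Markdown report to the GitHub Actions job summary, if available."""
    summary_path = os.environ.get("GITHUB_STEP_SUMMARY")
    if summary_path:  # only set when running inside GitHub Actions
        with open(report_path) as src, open(summary_path, "a") as dst:
            shutil.copyfileobj(src, dst)
```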

The tests I wrote in this pull request are just usage examples and are not intended to be merged. We could keep the fixture for manual use only, or ideally implement some kind of smoke tests that run on release (detectable by branch name, for example) and have those tests generate the report, but we should discuss it.
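
One possible way to gate such smoke tests, sketched here as an assumption rather than anything this PR implements: skip every test that requests performance_audit unless the CI exports a release branch name (CI_BRANCH is a hypothetical variable):

```python
import os

import pytest


def pytest_collection_modifyitems(config, items):
    # Only run performance-audited tests on release branches; CI_BRANCH is a
    # hypothetical env var the pipeline would have to export.
    if os.environ.get("CI_BRANCH", "").startswith("release"):
        return
    skip_perf = pytest.mark.skip(reason="performance tests only run on release branches")
    for item in items:
        if "performance_audit" in getattr(item, "fixturenames", []):
            item.add_marker(skip_perf)
```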

I tried to keep it simple and objective, without assuming or suggesting fixes/improvements that could be wrong; it is just plain data to help understand the performance of the application.

@MrMarble MrMarble self-assigned this Nov 27, 2025
@MrMarble MrMarble added the technical label (for PRs that introduce changes not worthy of a CHANGELOG entry) Nov 27, 2025
@MrMarble MrMarble requested a review from a team November 27, 2025 16:46
@MrMarble MrMarble changed the title Create new perf pytest marker Performance auditor fixture Nov 27, 2025
@roduran-dev roduran-dev (Contributor) left a comment

Very nice change! With this we can introduce some bothersome cases like megaflaws and compare the performance.

Two questions:

  • Does this change the default behaviour of pytest output?
  • Does this only take effect when performance_audit is used?

@MrMarble MrMarble (Member, Author) replied

> Very nice change! With this we can introduce some bothersome cases like megaflaws and compare the performance.
>
> Two questions:
>
> * Does this change the default behaviour of pytest output?
> * Does this only take effect when performance_audit is used?

1. No, it does not change the default behaviour of pytest.
2. Yes, the report is only generated for tests using the performance_audit fixture.
