Conversation

@dylanwh dylanwh commented Jul 27, 2018

In #672 performance regressed a bit, so let's win that back here.
The theme of this change is to load Bugzilla classes earlier; it seems to save about 40ms per API call.

Before:

[screenshot: profiling output before the change, 2018-07-27 14:54:13]

After:

[screenshot: profiling output after the change, 2018-07-27 14:54:02]

@dylanwh dylanwh changed the title perf improvements Bug 1320977 - performance tweaks Jul 27, 2018
@dylanwh dylanwh requested a review from purelogiq July 27, 2018 17:33
dylanwh commented Jul 27, 2018

@purelogiq can you review this?
Maybe verify the profiling numbers?
(see https://www.youtube.com/watch?v=1hrdVxI0uFM for how, or ask me in IRC)

@purelogiq purelogiq left a comment

Nice

use Bugzilla::Util ();
use Bugzilla::RNG ();
use Bugzilla::ModPerl ();
use Mojo::Loader qw(find_modules);

Interesting use of mojo in a not-yet-mojo app :)
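
For reference, a minimal sketch of the preloading pattern that Mojo::Loader enables (illustrative only, not the exact code in this diff; find_modules only walks one level of the namespace, and the error handling here is an assumption):

use Mojo::Loader qw(find_modules load_class);

# Compile every direct child of the Bugzilla:: namespace once at startup, so
# forked mod_perl workers inherit already-compiled code instead of paying the
# compile cost on their first request.
foreach my $class (find_modules 'Bugzilla') {
    my $error = load_class $class;
    warn qq{Preloading "$class" failed: $error} if ref $error;
}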

@purelogiq

@dylanwh Will verify the profiling numbers Monday, missed your comment there.

dylanwh commented Jul 27, 2018

Cool, I'll wait to merge. If it's not fast enough-er, I can find another slow part.

@purelogiq

@dylanwh First off, awesome stuff with the flamegraphs and profiling. When I compared the profile on master vs. this branch, I saw that webservice_user_get is faster in this PR, but the overall request is the same or even slower. Can you verify that I'm doing this right?

Vid on google drive: https://drive.google.com/file/d/194cHcyvlZpyP5JvXUr1eyY42tCmp0DlA/view

dylanwh commented Jul 30, 2018

Are you restarting httpd between tests?

But actually, looking at the video, this is mostly expected: the difference disappears after the first request because the optimization here is just loading some modules earlier.

However, "first requests" are quite common: module loading happens per worker process, there are hundreds of those processes, and they get restarted fairly frequently.

dylanwh commented Jul 30, 2018

Looks like the rest of the time is spent in Safe->reval(). I wanted to fix that in #321, but it's a little tricky.
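
For anyone following along, a hedged illustration of the Safe->reval() pattern in question (not the exact code in DB/Schema.pm): the cached schema is a Data::Dumper string, and Safe compiles and runs it inside a restricted compartment, which is where the time goes.

use Safe;
use Data::Dumper;

# Stand-in for the cached schema blob; the real string comes out of the database.
my $serialized = Dumper({ bugs => { FIELDS => [ bug_id => { TYPE => 'MEDIUMSERIAL' } ] } });

# Compile and evaluate the dump inside a restricted compartment. reval() returns
# the value of the last statement, i.e. the restored hashref.
my $cpt = Safe->new;
my $thawed_hash = $cpt->reval($serialized)
    || die "Unable to restore cached schema: $@";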

dylanwh commented Jul 30, 2018

@purelogiq I don't want to check right now because I really want to get this test case posted, but tomorrow can you try benchmarking $cpt->reval($serialized) against just eval($serialized)?
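
Not part of the PR, just a sketch of how that comparison could be run with the core Benchmark module. The payload below is a made-up stand-in for the cached schema, dumped in Terse mode so the plain eval returns the structure directly:

use strict;
use warnings;
use Safe;
use Benchmark qw(cmpthese);
use Data::Dumper;

# Hypothetical payload; the real $serialized is the schema string in DB/Schema.pm.
my $serialized = Data::Dumper->new([{ bugs => { FIELDS => [ bug_id => { TYPE => 'MEDIUMSERIAL' } ] } }])
    ->Terse(1)->Dump;

my $cpt = Safe->new;

# Run each variant for at least 3 CPU seconds and print a comparison table.
cmpthese(-3, {
    safe_reval => sub { $cpt->reval($serialized) or die $@ },
    plain_eval => sub { eval $serialized or die $@ },
});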

@purelogiq

@dylanwh Retested and still getting the same results :/

Changing $cpt->reval($serialized) in DB/Schema.pm as below and removing the Safe import:
(Preface: I have no clue what I'm doing in this area of the code :). No idea if or why I need a 'my' inside the eval, and no clue about the implications of changing this, so use at your own risk!)

my $stuff = eval("my $serialized") || die "Unable to restore cached schema: " . $@;
$thawed_hash = $stuff;

results in a pretty good speed up:

BEFORE:

[flamegraph screenshot: with Safe ("with-safe-flames")]

AFTER:

[flamegraph screenshot: without Safe ("without-safe-flames")]

@purelogiq

@dylanwh I'm still good on r+ing this, because I've been testing post-warmup, and the pre-warmup speed improvement you mentioned may be worth it.

dylanwh commented Jul 31, 2018

Yup, I'll take it, and prioritize #321 later.

@dylanwh dylanwh merged commit e1b3729 into mozilla-bteam:master Jul 31, 2018