
Conversation

@alt-dima

@alt-dima alt-dima commented Nov 21, 2025

Provide a description of what has been changed

Checklist

Fixes #

Relates to #

@alt-dima alt-dima requested a review from a team as a code owner November 21, 2025 12:04
@keda-automation keda-automation requested a review from a team November 21, 2025 12:04
@github-actions

Thank you for your contribution! 🙏

Please understand that we will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.

While you are waiting, make sure to:

  • Add an entry in our changelog in alphabetical order and link related issue
  • Update the documentation, if needed
  • Add unit & e2e tests for your changes
  • GitHub checks are passing
  • Is the DCO check failing? Here is how you can fix DCO issues

Once the initial tests are successful, a KEDA member will ensure that the e2e tests are run. Once the e2e tests have been successfully completed, the PR may be merged at a later date. Please be patient.

Learn more about our contribution guide.

@snyk-io

snyk-io bot commented Nov 21, 2025

Snyk checks have passed. No issues have been found so far.

Status Scanner Critical High Medium Low Total (0)
Open Source Security 0 0 0 0 0 issues

💻 Catch issues earlier using the plugins for VS Code, JetBrains IDEs, Visual Studio, and Eclipse.

@alt-dima alt-dima force-pushed the feature/dimaal/pod-spec-lazy branch from ff658a7 to 66ffe65 on November 21, 2025 12:11
@JorTurFer
Member

JorTurFer commented Nov 21, 2025

/run-e2e
Update: You can check the progress here

@JorTurFer
Member

I think that avoiding checks of fields that aren't needed is always nice, but why would you like to avoid podSpec? Is there a specific case you're dealing with?

@alt-dima
Author

alt-dima commented Nov 21, 2025

I think that avoiding checks of fields that aren't needed is always nice, but why would you like to avoid podSpec? Is there a specific case you're dealing with?

Yes! After updating from version 2.16.1 to 2.18.1 I noticed a spike in memory usage from 256 MB to 1 GB!
Some details and investigation:
https://kubernetes.slack.com/archives/CKZJ36A5D/p1763483034433569

In our clusters we do not use any fromEnv options, and in the biggest clusters there are thousands of pods, which leads to a lot of calls to the Kubernetes API and high KEDA memory usage.

I would also like to understand why/when this behaviour changed; maybe I missed it in the changelog.

[screenshot: memory usage graph]
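
For context, here is a rough sketch of the kind of lazy resolution described above. All names are hypothetical and this is not the actual PR diff; it only illustrates skipping the pod-spec lookup when no trigger uses fromEnv:

```go
// Hypothetical sketch (not the actual KEDA diff): resolve the pod spec lazily,
// so that ScaledObjects whose triggers never use fromEnv skip the lookup.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// resolveEnv returns the container env map, but only performs the (expensive)
// pod-spec lookup when at least one trigger value is sourced via fromEnv.
func resolveEnv(ctx context.Context, fromEnvRefs []string,
	getPodSpec func(context.Context) (*corev1.PodSpec, error)) (map[string]string, error) {

	env := map[string]string{}
	if len(fromEnvRefs) == 0 {
		return env, nil // no fromEnv anywhere: never touch the workload/pod spec
	}
	spec, err := getPodSpec(ctx)
	if err != nil {
		return nil, err
	}
	for _, c := range spec.Containers {
		for _, e := range c.Env {
			env[e.Name] = e.Value
		}
	}
	return env, nil
}

func main() {
	// With no fromEnv references, getPodSpec is never invoked.
	env, err := resolveEnv(context.Background(), nil,
		func(context.Context) (*corev1.PodSpec, error) {
			panic("should not be called when fromEnv is unused")
		})
	fmt.Println(len(env), err)
}
```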

@JorTurFer
Member

Gotcha! Now it makes sense :)
As we are using a cached client, I'm not sure whether this will solve or even reduce the load, since the manifests are already cached and requests are served from the local cache by the client. Have you seen memory improvements after this fix?
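
A possible explanation for why skipping the read still helps, sketched below: with controller-runtime's default cached client, Gets are indeed served locally, but the first read of a kind lazily starts an informer that then holds every object of that kind in memory. This is a generic controller-runtime sketch, not KEDA's actual code, and it assumes the Pod kind is read through the cache (which depends on how the client is configured):

```go
// Sketch of the cached-client behaviour: individual Gets hit the local cache,
// but the informer backing that cache keeps all objects of the kind in memory.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// readPodSpec fetches a Pod through the (cached) client. With the manager's
// default client, the first call for the Pod kind starts a Pod informer, so
// from then on all Pods in the watched namespaces are cached in memory.
// Never issuing this read (e.g. when fromEnv is unused) also avoids ever
// populating that cache.
func readPodSpec(ctx context.Context, c client.Client, key client.ObjectKey) (*corev1.PodSpec, error) {
	pod := &corev1.Pod{}
	if err := c.Get(ctx, key, pod); err != nil {
		return nil, err
	}
	return &pod.Spec, nil
}

func main() {
	// Illustration only: in the operator this would run with the manager's
	// injected client inside the reconcile loop.
	fmt.Println(readPodSpec != nil)
}
```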

@alt-dima
Author

alt-dima commented Nov 21, 2025

Gotcha! Now it makes sense :) As we are using a cached client, I'm not sure whether this will solve or even reduce the load, since the manifests are already cached and requests are served from the local cache by the client. Have you seen memory improvements after this fix?

I just deployed it on staging and I see a decrease from 256 MB to 145 MB, and the pprof profile also looks better now (need to solve secrets too :) ).
I can't deploy to the biggest production cluster right now to verify the drop from 1 GB of memory usage.

[screenshot: memory usage after the patch]

And to verify again, I deployed the non-patched version:
[screenshot: memory usage with the non-patched version]
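
For anyone wanting to reproduce the pprof comparison mentioned above, a minimal, generic way to expose a heap profile in a Go process is shown below. This is the standard-library net/http/pprof approach, not necessarily how KEDA wires up profiling, and the address is an arbitrary example:

```go
// Minimal generic example of exposing Go's heap profile over HTTP
// (standard library net/http/pprof; the address is an arbitrary choice,
// not KEDA's configuration).
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on DefaultServeMux
)

func main() {
	// Heap snapshots can then be compared before/after the change with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```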
