OpenCL: Fix memory leak / OoM and stack overflow #837
Merged
The OpenCL backend called `chipstar::Event::addDependency()` without ever calling `chipstar::Event::releaseDependencies()` during the HIP application's lifetime, which had two possible outcomes:
1. Crash due to an out-of-memory error caused by the unreleased event objects. This occurred after a HIP program had streamed enough commands - for example, more than 30000 kernels.
2. Crash due to a stack overflow at program exit / chipStar uninitialization. Because the event dependencies were never released, a very long event dependency chain built up. At uninitialization, destroying a queue's last event destroyed its dependent events, which in turn destroyed their dependent events, and so on until the recursion exhausted the stack (see the sketch below).
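A minimal sketch of the failure mode, using a hypothetical simplified `Event` class rather than chipStar's actual one: each event keeps its predecessor alive (the leak), and destroying the newest event recursively destroys the whole chain, one call frame per link (the stack overflow).

```cpp
#include <memory>
#include <vector>

struct Event {
  std::vector<std::shared_ptr<Event>> Deps; // never cleared -> held forever
  void addDependency(std::shared_ptr<Event> E) { Deps.push_back(std::move(E)); }
  // ~Event() implicitly destroys Deps, which destroys their Deps, and so on.
};

int main() {
  auto Head = std::make_shared<Event>();
  for (int i = 0; i < 1000000; ++i) { // stands in for >30000 streamed kernels
    auto Next = std::make_shared<Event>();
    Next->addDependency(Head); // chain keeps every previous event alive
    Head = Next;
  }
  return 0; // Head's destructor recurses down the chain -> stack overflow
}
```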
Both cases are fixed by removing the `addDependency()` call. AFAIK, the event dependency system is meant for timing-safe release of the backend driver objects (`cl_event`s in this case). The OpenCL backend does not need this, because the driver releases the objects once they are no longer needed by the application or by the driver for internal in-progress tasks.
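A sketch of the reference-counting behaviour relied on here, using the plain OpenCL C API (the `Queue` and `Kernel` handles are hypothetical and assumed to be already set up): after the application drops its reference with `clReleaseEvent`, the driver keeps the `cl_event` alive only as long as it still needs it internally, so no extra dependency bookkeeping is required on the chipStar side.

```cpp
#include <CL/cl.h>

void enqueueAndForget(cl_command_queue Queue, cl_kernel Kernel) {
  size_t Global = 1024;
  cl_event Ev = nullptr;
  clEnqueueNDRangeKernel(Queue, Kernel, 1, nullptr, &Global, nullptr,
                         0, nullptr, &Ev);
  // The driver may still use Ev for in-flight work; releasing our reference
  // only drops the application's refcount, it does not destroy the event early.
  clReleaseEvent(Ev);
}
```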