[BUG] fixed memory leak in BaseModel by detach some tensor #1924
Reference Issues/PRs
#1369
#1461
What does this implement/fix? Explain your changes.
1. Detached tensors in the log dictionary before appending them to the `training/validation/testing_step_outputs` lists. This fixes a memory leak caused by retaining the computation graph for every batch across an entire epoch.
2. Detached the loss tensor within the `step()` method before logging.
3. Moved prediction results to the CPU to prevent VRAM growth.
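The pattern behind all three changes can be sketched as below. This is a minimal illustration, not the actual patch: the class and method names (`StepOutputCache`, `append`) are hypothetical, standing in for the model's per-epoch output lists.

```python
import torch

class StepOutputCache:
    """Hypothetical sketch of the fix: detach tensors (and move them to
    the CPU) before caching per-batch outputs, so neither the autograd
    graph nor GPU memory is retained for the whole epoch."""

    def __init__(self):
        self.training_step_outputs = []

    def append(self, log_dict):
        # Detach every tensor and move it to the CPU; non-tensor values
        # pass through unchanged.
        cleaned = {
            k: (v.detach().cpu() if isinstance(v, torch.Tensor) else v)
            for k, v in log_dict.items()
        }
        self.training_step_outputs.append(cleaned)

# Usage: a loss that carries a computation graph is cached without it.
cache = StepOutputCache()
x = torch.ones(2, requires_grad=True)
loss = (x * 2).sum()          # loss.requires_grad is True here
cache.append({"loss": loss})  # cached copy is detached and on the CPU
```

Without `detach()`, each cached loss keeps its entire backward graph alive, so memory grows with every batch until the epoch ends.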
Did you add any tests for the change?
I ran my training code for 5 epochs under a memory profiler. Here are two comparison plots:


[Plots: memory usage before the fix vs. after the fix]