When I recently ran model inference on a low-resource device, I hit memory issues with the largest image stacks. I solved them by adding a gc.collect() call to the run_model() function, a torch.cuda.empty_cache() call to run_single_model(), and a del model command when running the pipeline for an "ensemble" of models. These were fairly ad hoc fixes, but perhaps they can help someone else struggling with a similar problem, so I'm posting them here.
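For anyone who wants to try the same workaround, here is a minimal sketch of the pattern described above. The function names `free_memory`, `run_ensemble`, and `model_loaders` are hypothetical placeholders (the issue's actual functions are `run_model()` and `run_single_model()`); the torch import is guarded so the sketch also runs where no GPU stack is installed.

```python
import gc

# torch is optional here so the sketch runs even without a GPU stack installed
try:
    import torch
except ImportError:
    torch = None


def free_memory():
    """Release Python-level garbage and, if available, cached CUDA blocks."""
    gc.collect()
    if torch is not None and torch.cuda.is_available():
        torch.cuda.empty_cache()


def run_ensemble(model_loaders, image_stack):
    """Run each ensemble member in turn, dropping each model before loading the next.

    `model_loaders` is a hypothetical list of zero-argument callables that each
    build and return one model; `image_stack` is the input batch.
    """
    predictions = []
    for load_model in model_loaders:
        model = load_model()
        predictions.append(model(image_stack))
        del model      # drop the last reference so the weights can be reclaimed
        free_memory()  # gc.collect() + torch.cuda.empty_cache()
    return predictions
```

Note that `torch.cuda.empty_cache()` only returns cached blocks to the driver; tensors that are still referenced stay allocated, which is why the `del model` before the cleanup matters.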