🚀 Feature
It would be useful to configure PyTorch to use an external memory allocator for its allocations.
Motivation
When working on GPUs, memory can be a somewhat limited resource, particularly when multiple libraries each manage their own memory. In that case, libraries can "compete" for memory and exhaust it more quickly as a result. For these use cases, it's helpful to have a single memory allocator that can be shared across libraries, which reduces some of the friction of memory allocation between them.
Pitch
It would be great to have some mechanism for users of PyTorch to specify an external memory allocator, similar to what CuPy and Numba provide.
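To make the request concrete, here is a minimal, CPU-only sketch of what such a pluggable-allocator hook could look like, loosely modeled on CuPy's `cupy.cuda.set_allocator`. All names here (`set_allocator`, `ExternalAllocator`, `allocate`) are hypothetical and do not correspond to any existing PyTorch API; a real implementation would hand out device pointers from a shared pool (e.g. RMM) rather than Python buffers.

```python
# Hypothetical sketch of a pluggable allocator hook (illustrative only).

class ExternalAllocator:
    """Minimal interface an external memory manager could expose."""

    def __init__(self):
        self.live = {}  # track outstanding allocations

    def malloc(self, nbytes):
        # A real allocator would return a device pointer from a shared
        # pool (e.g. RMM); here we simulate one with a bytearray.
        buf = bytearray(nbytes)
        self.live[id(buf)] = buf
        return buf

    def free(self, buf):
        self.live.pop(id(buf), None)


_current_allocator = None


def set_allocator(allocator):
    """Route all subsequent library allocations through `allocator`."""
    global _current_allocator
    _current_allocator = allocator


def allocate(nbytes):
    # Library-internal allocation path that honors the plugged-in
    # allocator, falling back to its own default when none is set.
    if _current_allocator is None:
        return bytearray(nbytes)
    return _current_allocator.malloc(nbytes)
```

With a hook like this, PyTorch and other libraries could all draw from one pool, e.g. `set_allocator(rmm_backed_allocator)` once at startup, so that memory freed by one library is immediately reusable by another.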
Alternatives
Feel free to suggest 😉
Additional context
We frequently receive reports from RAPIDS users that they can run out of memory when using RAPIDS (say, for preprocessing) and then passing data to PyTorch for deep learning. In this context, a more unified memory management story should help users transition smoothly between libraries and reuse memory already freed (though still held within the pool) by the RAPIDS Memory Manager (RMM).
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @ngimel