Using external memory allocator with PyTorch · Issue #43144 · pytorch/pytorch · GitHub

Description

@jakirkham

🚀 Feature

It would be useful to configure PyTorch to use an external memory allocator for its allocations.

Motivation

When working on GPUs, memory can be a somewhat limited resource, particularly when multiple libraries are in use and each handles its own memory. In this case it is possible for libraries to "compete" for memory and exhaust it more quickly as a result. In these use cases, it's helpful to have a single memory allocator that can be shared across libraries, which reduces some of the friction of allocating memory across library boundaries.

Pitch

It would be great to have some mechanism for users of PyTorch to specify an external memory allocator, possibly along the lines of what CuPy, Numba, or similar libraries provide.
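
For reference, CuPy already exposes this kind of hook via `cupy.cuda.set_allocator`, which is roughly the shape of mechanism being pitched here. A minimal sketch using CuPy's existing API (the pool configuration is just illustrative):

```python
import cupy

# CuPy lets users swap in a custom allocator for all of its device
# allocations: any callable taking a byte count and returning a
# MemoryPointer works. Here we route allocations through a
# managed-memory pool as an example.
pool = cupy.cuda.MemoryPool(cupy.cuda.malloc_managed)
cupy.cuda.set_allocator(pool.malloc)

# Subsequent CuPy allocations now come from the shared pool.
x = cupy.arange(10)
```

A comparable hook in PyTorch would let these libraries hand out memory from one allocator instead of each maintaining its own cache.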

Alternatives

Feel free to suggest 😉

Additional context

We frequently receive reports from RAPIDS users who run out of memory when using RAPIDS (say, for preprocessing) and then passing data to PyTorch for deep learning. In this context, having a more unified memory management story should help users transition smoothly between libraries and reuse memory that has already been freed (though is still held within the pool) by the RAPIDS Memory Manager (RMM).
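
To make the RMM case concrete: CuPy and Numba can already be pointed at a single RMM pool today, which is the kind of sharing PyTorch would ideally participate in. A sketch assuming the allocator hooks shipped in recent RMM releases (module paths may differ in older versions):

```python
import cupy
import rmm
from numba import cuda
from rmm.allocators.cupy import rmm_cupy_allocator
from rmm.allocators.numba import RMMNumbaManager

# Create a single RMM-managed pool on the current device.
rmm.reinitialize(pool_allocator=True)

# Route CuPy's device allocations through the RMM pool.
cupy.cuda.set_allocator(rmm_cupy_allocator)

# Route Numba's device allocations through the same pool.
cuda.set_memory_manager(RMMNumbaManager)

# CuPy and Numba now draw from (and free back to) one shared pool;
# the feature requested here would let PyTorch do the same.
```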

cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @ngimel

Labels

feature - A request for a proper, new feature.
high priority
module: cuda - Related to torch.cuda, and CUDA support in general.
triaged - This issue has been looked at by a team member, and triaged and prioritized into an appropriate module.
