Reduce manylinux wheel size for x86_64 arch · Issue #94262 · pytorch/pytorch · GitHub
Closed
aignas opened this issue Feb 7, 2023 · 2 comments

Labels
module: binaries Anything related to official binaries that we release to users

Comments

aignas commented Feb 7, 2023

🚀 The feature, motivation and pitch

Right now the Linux x86_64 wheels are huge (more than 800 MB), whilst the aarch64 counterparts are much smaller.
Supplying manylinux2014 wheels in addition to the manylinux1 wheels might give users access to potential size optimizations, since fewer dependencies would need to be bundled within the wheel (this is an assumption on my part; perhaps someone can clarify why the size is so large).

Source: https://pypi.org/project/torch/#files

Alternatives

Do nothing and keep downloading the existing wheels. Whilst this works, it increases the size of Docker images and the amount of data a user needs to download just to work with the package.

Additional context

No response

cc @ezyang @seemethere @malfet

Skylion007 (Collaborator) commented:

Related to #93955 and partial duplicate of #34058

@Skylion007 Skylion007 added the module: binaries Anything related to official binaries that we release to users label Feb 7, 2023
seemethere (Member) commented:

The difference in the size of wheels has less to do with manylinux2014 vs. manylinux1 and more to do with the fact that the linux x86_64 wheels that are larger than 800MB typically have CUDA dependencies bundled with them.

So unfortunately it's not something that can be drastically reduced, since we want to keep the CUDA experience as simple as possible, but it has been talked about for a long time.
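One way to see for yourself that the bundled CUDA libraries dominate the size: a wheel is just a zip archive, so you can list its largest members without installing anything. A minimal sketch using only the standard library (the wheel filename in the example is illustrative, not a specific release being recommended):

```python
import zipfile


def largest_members(wheel_path, top_n=5):
    """Return (name, uncompressed_size) pairs for the largest files in a wheel.

    Wheels are ordinary zip archives, so zipfile can inspect them
    without extracting anything to disk.
    """
    with zipfile.ZipFile(wheel_path) as wheel:
        members = [(info.filename, info.file_size) for info in wheel.infolist()]
    # Sort by uncompressed size, largest first, and keep the top_n entries.
    return sorted(members, key=lambda m: m[1], reverse=True)[:top_n]


if __name__ == "__main__":
    # Hypothetical filename -- point this at a torch wheel downloaded from PyPI.
    for name, size in largest_members("torch-manylinux1_x86_64.whl"):
        print(f"{size / 2**20:8.1f} MiB  {name}")
```

On a CUDA-enabled x86_64 wheel, the top entries would typically be shared libraries under `torch/lib/` (cuDNN, cuBLAS, and the main `libtorch` binaries), which is where most of the 800 MB goes.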

Past discussions on this:

With the past discussions linked here, I think we can safely close this as a duplicate. We do understand the issues and are continually investigating ways to make the binaries smaller, but unfortunately there is just not a whole lot we can do to drastically change the binary sizes.
