CPU version of PyTorch on PyPI #26340


Open
severinsimmler opened this issue Sep 17, 2019 · 60 comments
Labels
feature A request for a proper, new feature. module: build Build system issues module: cpu CPU specific problem (e.g., perf, algorithm) oncall: releng In support of CI and Release Engineering triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@severinsimmler commented Sep 17, 2019

🚀 Feature

Publish the CPU version of PyTorch on PyPI to make it installable in a more convenient way.

Motivation

Using PyTorch in production does not necessarily require the (~700 MB) GPU version, but installing the much smaller CPU version as suggested on the website:

pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html

makes it hard to use tools like Poetry, which do not invoke pip directly and therefore do not support an argument like -f https://download.pytorch.org/whl/torch_stable.html.

Pitch

Publish the CPU version (e.g. as torch-cpu) on PyPI.

@cpuhrsch cpuhrsch added feature A request for a proper, new feature. module: build Build system issues module: cpu CPU specific problem (e.g., perf, algorithm) oncall: releng In support of CI and Release Engineering triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Sep 17, 2019
@soumith (Member) commented Oct 7, 2019

the gating issues here are:

  1. how do we do dependency tracking (torch-cpu is a change of package name). Unfortunately PyPI doesn't allow us to upload packages with local version identifiers.
  2. time and effort

@severinsimmler (Author) commented Oct 7, 2019

torch-cpu would be a new project on PyPI, but the package name itself, torch, remains the same.

Assuming wheels are built with setuptools, the packages argument defines the importable package names (which have to match local folders or files containing the code), whereas name is the project name used in the metadata.

For example:

setuptools.setup(name="torch-cpu", packages=["torch"])

This will produce a wheel like torch_cpu-0.1.0-py3-none-any.whl which is published on PyPI as torch-cpu (because of the metadata in torch_cpu-0.1.0-py3-none-any.whl/torch_cpu-0.1.0.dist-info/METADATA).

After pip install torch-cpu, one can import PyTorch as usual with import torch.

scikit-learn does basically the same. You install it with pip install scikit-learn, but import it as sklearn.

You still know which version of PyTorch is installed in your environment:

$ pip freeze
torch-cpu==0.1.0

This is currently not possible, because both the GPU and the CPU version have torch as the package name in their METADATA and are listed as such.
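
For illustration, a minimal sketch of what a separate distribution name would enable: tools could tell which variant is installed by querying the standard metadata API, even though both variants install the same importable package, torch (assumes Python 3.8+ for importlib.metadata; torch-cpu is still hypothetical here):

from importlib import metadata  # stdlib since Python 3.8

# Only the distribution name differs between variants; the import name
# stays "torch", so we check which distribution is present.
for dist in ("torch", "torch-cpu"):
    try:
        print(dist, metadata.version(dist))
    except metadata.PackageNotFoundError:
        pass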

Using local version identifiers becomes obsolete when publishing as a separate project, and the time and effort should be minimal (I might be wrong here, I know building PyTorch is quite complex), because you only have to set name to torch for the GPU build and to torch-cpu (or whatever) for the CPU build.

@smiles3983

I am also having issues with this, mainly because we use Nexus internally, and it has issues with the plus sign. If PyPI had the +cpu versions, it would make things simpler for us.

@evanzd commented Jan 3, 2020

For those who rely on requirements.txt for deployment, torch+cpu can be installed with a format like the one below:

numpy==1.17.2
pandas==0.25.2
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.3.1+cpu

@smiles3983 commented Jan 3, 2020 via email

@lig commented May 7, 2020

An alternative way to make things compatible with Poetry could be to support the pip repository format in addition to plain HTML pages with package links. Say, several repos targeting different CUDA versions should do the trick. Poetry allows specifying custom repos and pointing a specific package at the proper one.

Update: I've found this issue #25639 which is about it, I guess.

@kousu commented Aug 15, 2020

I would like to gently request that you take another look at this. We're using torch in https://github.com/neuropoly/spinalcordtoolbox/ but we can't ask our users to download almost a gigabyte of software. Not everyone is running in a big high performance data centre. A lot of our users just have some old Windows computer, or their macbook, and not all of our users even use us for the neural network parts.

kousu added a commit to spinalcordtoolbox/spinalcordtoolbox that referenced this issue Aug 15, 2020
But keep requirements.txt, because we need to use it for pytorch (pytorch/pytorch#26340)
and --find-links (-f) isn't supported by pip's new URL pinning format: pypa/pip#5898 (comment)
@sisp commented Aug 15, 2020

the gating issues here are:

  1. how do we do dependency tracking (torch-cpu is a change of package name). Unfortunately pypi doesn't allow us to upload packages with local version identifiers.
  2. time and effort

Packages/Projects depending on PyTorch could distinguish between pytorch and pytorch-cpu using "extras". A library mylib could be installed by

pip install mylib[cpu]

or

pip install mylib[gpu]

where the cpu extra installs all dependencies with CPU-only capabilities and gpu installs all dependencies with CPU+GPU capabilities. That's how I've been handling it in the past with, e.g., TensorFlow. There's one downside though: you cannot specify a default extra, and you cannot mark the two extras as mutually exclusive.
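
For illustration, a minimal sketch of how mylib could declare such extras with setuptools (torch-cpu is the hypothetical CPU-only distribution proposed above; the version pins are made up):

# setup.py for the hypothetical mylib
import setuptools

setuptools.setup(
    name="mylib",
    packages=["mylib"],
    extras_require={
        "cpu": ["torch-cpu>=1.2"],  # hypothetical CPU-only distribution
        "gpu": ["torch>=1.2"],      # the CUDA-enabled distribution
    },
)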

@kousu commented Aug 15, 2020

That's a great idea @sisp! The mutual-exclusivity problem could be sidestepped if torch meant the CPU version, and torch[gpu] meant installing all the other stuff. That would imply splitting the GPU stuff into plugin packages, because extras only work by declaring extra dependencies, so I imagine the maintainers aren't keen on the labour involved; and it would be a small but breaking change for users who do use the GPU stuff; but it would make consuming pytorch really smooth for a large number of people.

@sisp commented Aug 15, 2020

@kousu Good point! I hadn't thought about applying this pattern to the torch package itself. But this would indeed make consuming PyTorch really smooth.

/cc @soumith

kousu added a commit to spinalcordtoolbox/spinalcordtoolbox that referenced this issue Aug 17, 2020
But keep requirements.txt, because we need to use it for pytorch (pytorch/pytorch#26340)
and --find-links (-f) isn't supported by pip's new URL pinning format: pypa/pip#5898 (comment)
@kohtala commented Aug 17, 2020

Hi. Some good ideas here. Just a note that it might make sense to name the extra cuda, to make it easier to distinguish in case we get a similar extra for AMD or other GPU hardware. Maybe even name the extra by CUDA version?

@kousu commented Aug 19, 2020

For context on how this might be used, here's how an HPC cluster is currently consuming scientific Python:

https://docs.computecanada.ca/wiki/Python#Installing_dependent_packages (archive):

In some cases, such as TensorFlow, Compute Canada provides wheels for a specific host (cpu or gpu), suffixed with _cpu or _gpu. Packages dependent on tensorflow will then fail to install. If my_package depends on numpy and tensorflow, then the following will allow us to install it:

 (ENV) [name@server ~] pip install numpy tensorflow_cpu --no-index
 (ENV) [name@server ~] pip install my_package --no-deps

The --no-deps option tells pip to ignore dependencies.

I think they built their own wheels for tensorflow, compiled and tuned for their particular cluster setup. But it's awkward to have to consume them that way (with --no-deps and the weird non-standard package name). It seems like you could do better than that by providing build scripts that make nice torch[cpu] and torch[gpu] versions.

@sisp commented Aug 19, 2020

It's even awkward to install dependencies ad hoc using pip for anything other than a quick test. All projects should always specify their dependencies in one of the standard ways (setup.py, setup.cfg, requirements.txt, Pipfile, pyproject.toml, ...) and install them accordingly, e.g. pip install <library> or pip install . or pip install -r requirements.txt or pipenv install or poetry install. If someone provides optimized versions of a package, that's fine, but then they should be uploaded to a custom PyPI-compatible index, and users should specify an extra index that takes precedence over PyPI to get the optimized packages.

@dsuess commented Aug 19, 2020

I fully agree with what @sisp said. Unfortunately, it sounds like just uploading an optimized wheel file to a custom PyPI server and pointing to it using --extra-index-url won't give the optimized package precedence. Some tools like Poetry allow you to specify the PyPI server on a per-package basis, but that's not portable to e.g. pip.

@sisp commented Aug 19, 2020

@dsuess Oh, wow, that's unexpected behavior! I've never tested it but expected setting --index-url <custom pypi> --extra-index-url <official pypi> would give the custom PyPI precedence. Python package management is totally broken. Interestingly, Poetry seems to use the same technique, i.e. the pip command executed for a dependency with a specified source is:

$ pip install --no-deps --index-url <source> --extra-index-url https://pypi.org/simple/ <package>


@dsuess commented Aug 19, 2020

Yeah, I've used the same pattern successfully to give precedence to a private PyPi server. The main downside to this approach would be of course that this wouldn't be portable. If someone wanted to install packages from both a private pip server and a pip server with optimized wheels, which one do you put in as index-url and which one as extra-index-url? Maybe that just goes to show that aiming for the lowest common denominator isn't worth it in this case.

The other way that I can see is to have the PyPI torch package be CPU-only by default and have torch[gpu] install a second package, torch-gpu, which only contains the compiled .so file with CUDA. The standard torch install would then look for the .so file in torch_gpu first and fall back to its own .so file if it can't find that (see the sketch below). All in all, it's a mess and extra work for the developers.
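
For illustration only, a rough sketch of that lookup; every name here (torch_gpu, _C_cuda.so, _C.so) is hypothetical:

import ctypes
import importlib.util
import os

def load_backend():
    # If the optional torch_gpu package (installed via torch[gpu]) is
    # present, prefer its CUDA build of the core library.
    spec = importlib.util.find_spec("torch_gpu")
    if spec is not None and spec.origin is not None:
        candidate = os.path.join(os.path.dirname(spec.origin), "_C_cuda.so")
        if os.path.exists(candidate):
            return ctypes.CDLL(candidate)
    # Otherwise fall back to the CPU-only library bundled with torch itself
    # (__file__ here stands for torch's own package directory).
    return ctypes.CDLL(os.path.join(os.path.dirname(__file__), "_C.so"))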

@zackees commented Dec 5, 2022

All the bulk that takes up that 2.4 GB is in the libs folder. You could simply upload your payload to another service. When torch runs, it checks that everything is downloaded; if not, it fetches it from your file service.

For an example of how this could be done, check out my library:
https://github.com/zackees/static_ffmpeg/blob/996aef20893bb8315ac169520561633ee8d54def/static_ffmpeg/run.py#L81

Torch+cuda could actually be very small on PyPI.

@kshpytsya

All the bulk that takes up that 2.4 GB is in the libs folder. You could simply upload your payload to another service. When torch runs, it checks that everything is downloaded; if not, it fetches it from your file service.

@zackees I believe this to be a bad idea. I would expect pip install -r requirements.txt to install everything and reproducibly produce an environment usable in isolated setups. Adding a special-case workaround for this antisocial behavior to my build system is not something I would like to do. Moreover, this would negate the security of having hashes in requirements.txt.

@zackees commented Dec 6, 2022

It's not insecure, because it's trivial to add SHA checking on the archive, and that SHA would live in the repo itself. Are you concerned that someone could hack the file repo and PyPI at the same time?
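
For illustration, a minimal sketch of such a check, assuming the expected digest is pinned in the package at release time (EXPECTED_SHA256 and the helper name are made up):

import hashlib

EXPECTED_SHA256 = "..."  # pinned at release time, shipped inside the wheel

def verify_archive(path):
    # Hash the downloaded archive in chunks and compare against the pin.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() != EXPECTED_SHA256:
        raise RuntimeError(f"checksum mismatch for {path}")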

Is the solution to this problem for app developers to fork torch+cuda, manually separate the lib files into an archive, and then upload the unofficial package ourselves?

@kshpytsya

It's not insecure, because it's trivial to add SHA checking on the archive, and that SHA would live in the repo itself. Are you concerned that someone could hack the file repo and PyPI at the same time?

@zackees Leave package management tasks to package managers. I would have to review the code of such a library, responsible for downloads and checking hashes, with every release, and still give this specific package extra special-case treatment to make it available for offline use.

@zackees commented Jan 1, 2023

In light of the security vulnerability that happened today, I again call on the pytorch team to put the GPU package on PyPI and sideload the large install files. App developers need this, and not having it means we have to use --extra-index-url, which suffers from the security vulnerability exposed today.

@soumith (Member) commented Jan 1, 2023

@zackees I didn't notice your comment from last month, but it leaves me confused. The version of PyTorch on PyPI is the GPU version. We've worked closely with nvidia to even have cudatoolkit now published on PyPI and the PyTorch wheel on PyPI now depends on it (instead of us plugging in a bunch of binary code)

@kousu commented Jan 1, 2023

@soumith, @zackees must have made a typo and meant "CPU version". I had the exact same thought as soon as I read the writeup (thank you for the writeup, it is very clear, honest, to the point, and helpfully includes remediation steps).

pip is susceptible by design to dependency confusion attacks

In concrete words, pip is consistently treating pypi.org as a global namespace registry, while your mental model is treating it as a package repository. This is still an actively debated topic, but at this point neither view is “wrong”. To use this feature as pip designs it, you should name-squat on PyPI to get what you want.

and my sense is most of their team strongly resists anyone using extra repos; for example, they don't have any method to specify alternate indexes within wheels -- it must be done at the command line, with --extra-index-url.

they explained to me once that they think forcing end-users to type out --extra-index-url makes the risk more obvious to them, and doesn't let packagers slip in a million extra repos.

I'm not sure it achieves that. Clearly lots of people copy-pasted the pip line off https://pytorch.org/get-started/locally/ without interrogating it, but that's their choice, so we have to play in their sandbox; and with this attack, it wasn't that the extra index was compromised, but that the extra index was inadvertently pointing to the "safe" PyPI, which had had a malicious package uploaded.

Though what it definitely does protect against, and this is important too, is people building packages that have a URL to a single obscure repo server baked in, all of which then break when funding for that repo runs out.

Given that it's python's sandbox you're playing in, I think you've gotta recognize that pytorch is the one using the wrong design.

I know it's a lot of work to change torch. But it's even more work to change pip when tens of thousands of public packages have been built against its current behaviour, and the need for split-packaging tricks only ever seems to show up with corporate packages, which is why they are extremely hesitant to even broach it:

Most of the use cases are in internal, private environments, and it's very hard from the outside to get any sense of what is practical/workable. The discussions seem in my experience to be plagued by people not quite understanding each other's use cases, and proposals that never quite gain broad support.

torch is in a funny grey-zone: it is open source now, but it was built by Facebook, with Facebook's environment in mind (i.e. lots of cheap Linux VMs on commodity hardware backed by an expensive storage cluster, devs mostly working on macOS), which is very different from the environment the rest of us are in (usually one, maybe two machines, with limited disk space, on any OS). It seems like it was originally packaged for this "private, internal environment", hence why the Linux packages vendor CUDA and the macOS and Windows ones don't, which is the root of this whole thread. When it was open-sourced, instead of re-packaging it to fit PyPI, the existing infrastructure of alternate --extra-index-urls was kept. And now we're stuck. And now it's allowed a supply-chain vulnerability to slip in, and an attack to exploit it.

So please, reconsider rearranging torch. Put separate torch-gpu and torch-cpu both on pypi if you have to, and shut down https://download.pytorch.org/whl/.

@zackees commented Jan 1, 2023

If they had torch[gpu], I think that would solve the issue. The problem is that there's a large 2.4 GB payload that won't fit on PyPI. I recommended putting those assets on another server and downloading them lazily.

I've done this exact same pattern with https://github.com/zackees/static-ffmpeg so I know it works.

The current issue seems to be that the devs are resistant to sideloading assets. It was said that it was open to attacks, but then I pointed out that the SHA fingerprint could be used to verify the integrity. I offered to do the work and was told no.

Not having this creates problems for app developers like myself. For example, see the GPU installation step in my https://github.com/zackees/transcribe-anything app. I was able to get it down to one line, like this:

curl https://raw.githubusercontent.com/zackees/transcribe-anything/main/install_cuda.py | python

But I consider this an ugly hack to work around this issue.

@kousu commented Jan 1, 2023

If they had torch[gpu] I think that would solve the issue.

I couldn't agree more!

The problem is that there's a large 2.4 GB payload that won't fit on pypi. I recommended putting those assets on another server and download lazily.

I think the problem is that 2.4GB number.

PyPA are very aware of this problem, but it's clearly not something they can fix. This is more evidence to me that Torch was designed to be used internally in Facebook's network: 2.4GB is nothing inside a datacentre where they own all the wires, but it's a lot for PyPI to pay to serve to the general public.

Most of that 2.4 GB is not torch, it's CUDA. Torch could help the situation by finding ways to cut down which parts of CUDA need to be included.

curl https://raw.githubusercontent.com/zackees/transcribe-anything/main/install_cuda.py | python

But I consider this an ugly hack to work around this issue.

We basically do the same thing; we have a custom installer, written in bash, which calls pip:

https://github.com/spinalcordtoolbox/spinalcordtoolbox/blob/fed2d7214796e6a87c1c2a9bd0cd702994fae523/install_sct#L645-L647

and our requirements.txt uses --extra-index-url:

https://github.com/spinalcordtoolbox/spinalcordtoolbox/blob/fed2d7214796e6a87c1c2a9bd0cd702994fae523/requirements.txt#L8

(funny that requirements.txt is allowed, sort of half contradicting PyPA's assertion that users should ALWAYS see what repos they are using up front)

I also have a lot of misgivings about this design. But I don't see a way around it, given the lack of a good CPU copy of torch on pypi!

@zackees commented Jan 1, 2023

I don't think you quite understand what I'm saying.

For torch gpu, there is only one large asset, the .so/.dll file. This one file can be moved outside of pip and then downloaded as soon as torch needs it.

If this one file is moved off of pypi, the entire torch gpu package fits.

The pseudocode for this would be something like:

import os
from filelock import FileLock  # third-party: pip install filelock

def ensure_payload(lock_file, large_shared_object):
    # Only one process downloads; the rest wait on the lock, then skip.
    with FileLock(lock_file):
        if not os.path.exists(large_shared_object):
            download_file(large_shared_object)  # fetch from the file host

Here's an example in my repo:
https://github.com/zackees/static_ffmpeg/blob/14e3b8a78f90bffb2c4653f61ba8964f9d2d62dc/static_ffmpeg/run.py#L81

@kousu commented Jan 1, 2023

For torch gpu, there is only one large asset, the .so/.dll file.

I'm hoping the large DLL can be broken up. Surely a DLL that large consists of sublibraries.

Here's an example in my repo:
https://github.com/zackees/static_ffmpeg/blob/14e3b8a78f90bffb2c4653f61ba8964f9d2d62dc/static_ffmpeg/run.py#L81

I understand this pattern. We are using this pattern. But I think it's a bad idea. It means your dependencies are now:

a. pinned to specific URLs, which can easily linkrot
b. downloaded at run time, so it's impossible to package your software safely. Instead, your app is incomplete until a user tries to use the feature that triggers the download. Packagers could try to run all your branches, but tracing out all the features is tedious and impossible to automate, and obviously a little bit concerning for packagers since it means invoking arbitrary code from the internet on their build machines, that might be buggy, malicious, etc. The reason we have package managers is to avoid doing that. It's just too bad that the package manager we have, pip, doesn't get along smoothly with the everything-and-the-kitchen-sink way torch has been designed.

@zackees commented Jan 1, 2023

a. pinned to specific URLs, which can easily linkrot
b. downloaded at run time, so it's impossible to package your software safely

All these problems already exist now. We have to use --extra-index-url, which resolves to a pinned URL. Also, torch GPU can't be packaged in any app.

The proposal reduces friction and allows GPU-accelerated app deployment. All the points you brought up with my proposal already exist with the status quo.

@kousu commented Jan 1, 2023

I guess so, but I'm here to solve the problem, not to replace it with the same problem.

I'm trying to make packaging for my team less crazy, ideally with standard reference docs we can point to (I've been using this page a lot). Sideloading breaks that.

If you're already going to ignore the standards and write a custom installer to work around pip, you can add sideloading in to your installer. I'm sure most of your users will be pleased that it Just Works, but the minority that wants to use things like configuration management and reproducible builds/deployments will be frustrated with you.

@kshpytsya wrote:

 @zackees Leave package management tasks to package managers. I would have to review the code of such a library, responsible for downloads and checking hashes, with every release, and still give this specific package extra special-case treatment to make it available for offline use.

This is exactly my concern with your solution: the offline case and the extra review overhead every time something changes. Please don't ask torch to grow more bad habits.

@soumith (Member) commented Jan 2, 2023

Hey folks, a lot of discussion, so let me try to respond in order.

  1. Side-loading should not be considered. Wheels are used in many settings, including mirrored repositories without internet access. The assumption that there will be internet access while a wheel is being installed is not sound. Side-loading is also a big hack that side-steps how the package manager is intended to work.
  2. There are infrastructure limits to PyPI, and we are already stretching them. The folks who maintain PyPI already grant us a big exception to host wheels larger than the default 80 MB limit. For us to ask them for exceptions over and over again (for our CPU wheels, which also cross this limit, and our separate GPU wheels for different versions of CUDA, for ROCm, etc.) is painful for everyone involved, especially for the folks maintaining PyPI. This is why we try to maintain a single good default on PyPI and everything else on our own index.
  3. We are working on making the PyTorch binary smaller and moving all the dependency bits into proper separate wheels. With the release of PyTorch 1.13.1, we now depend on the following packages from PyPI: nvidia-cuda-runtime-cu11, nvidia-cuda-nvrtc-cu11, nvidia-cudnn-cu11, nvidia-cublas-cu11. This was work that @ptrblck and many other folks have helped make possible over ~2+ years. We'll continue driving work in this direction.

@kousu commented Jan 2, 2023

3. We are working on making the PyTorch binary smaller and moving all the dependency bits into proper separate wheels. With the release of PyTorch 1.13.1, we now depend on the following packages from PyPI: nvidia-cuda-runtime-cu11, nvidia-cuda-nvrtc-cu11, nvidia-cudnn-cu11, nvidia-cublas-cu11. This was work that @ptrblck and many other folks have helped make possible over ~2+ years. We'll continue driving work in this direction.

This is amazing news. Thank you!

@neersighted commented Jan 3, 2023

With regard to pip not propagating index information:

and my sense is most of their team strongly resists anyone using extra repos; for example, they don't have any method to specify alternate indexes within wheels -- it must be done at the command line, with --extra-index-url.

This is not a problem with pip; there is simply no way to express this in the Python ecosystem at present. What you are asking for is a hybrid of standard index/constraint dependencies and direct references, which is not currently on anyone's radar in the Python packaging world.

@ppwwyyxx (Collaborator) commented Jan 3, 2023

By the way, there are discussions about adding support for external hosting to PyPI, e.g. https://discuss.python.org/t/external-hosting-linked-to-via-pypi/8917 and https://discuss.python.org/t/fallback-links-on-pypi-for-the-same-file/14678. I think there is no fundamental blocker to this idea; someone just has to come up with a convincing design.

@mironnn commented Jan 18, 2023

@soumith https://download.pytorch.org/whl/ is not PEP 503 compatible, the spec that describes the PyPI simple index API.

For example, https://download.pytorch.org/whl/cu113/simple/torch-1.11.0%2Bcu113-cp38-cp38-linux_x86_64.whl should work, but it gives a 403 error.

Please take a look at this Nexus issue, so that pytorch can be installed via Nexus:
https://community.sonatype.com/t/not-able-to-proxy-python-link/6745/5

Should I create a separate issue?

@humanzz commented Jan 26, 2023

We are working on making the PyTorch binary smaller and moving all the dependency bits into proper separate wheels. With the release of PyTorch 1.13.1, we now depend on the following packages from PyPI: nvidia-cuda-runtime-cu11, nvidia-cuda-nvrtc-cu11, nvidia-cudnn-cu11, nvidia-cublas-cu11. This was work that @ptrblck and many other folks have helped make possible over ~2+ years. We'll continue driving work in this direction.

@soumith

Coming at this from another angle: licenses, and only needing the torch CPU version.

We have a setup where packages from PyPI are ingested into an AWS CodeArtifact repo, and our teams are only allowed to use packages from there. There are policies regarding which package licenses are approved for ingestion, and these exclude proprietary ones. The nvidia-* packages have a proprietary license and are declared as requirements for the Linux wheels, e.g. PyPI's torch-1.13.1-cp39-cp39-manylinux1_x86_64.whl.

So we're left in a non-ideal situation: wanting to adopt newer torch versions for CPU usage, but unable to, due to the unnecessary requirement on the nvidia-* packages.

@hboutemy

@smiles3983 @mironnn On the Nexus issue: I came across the same problem as you and worked with the Sonatype community to create a mirror of the PyTorch index that works with Nexus Repository.
See https://sonatype-nexus-community.github.io/pytorch-pypi/whl/ for the indexes
and https://github.com/sonatype-nexus-community/pytorch-pypi for documentation.

Feedback appreciated.

@HudsonGraeme

What can we do to make this happen? Does this require community funding, or an appeal to PyPI to grant torch more space?

This would be huge for many that depend on torch.

I also think defaulting new versions published on PyPI to CPU would decrease the overall load on PyPI itself, since the default behavior would avoid pulling the heavy CUDA dependencies from PyPI.
