Comparing v2.5.0...v2.5.1 · pytorch/pytorch · GitHub


Comparing changes

base repository: pytorch/pytorch
base: v2.5.0
head repository: pytorch/pytorch
compare: v2.5.1
  • 9 commits
  • 17 files changed
  • 9 contributors

Commits on Oct 18, 2024

  1. Update getting started XPU docs (#138090)

    Update getting started XPU docs (#137479)
    
    1. Respect community feedback: downgrade "Beta" to "Prototype" for the first XPU release shipped as wheels.
    2. Add nightly wheel installation instructions for torchaudio and torchvision on Windows (a quick verification sketch follows below).
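
    As a hedged aside, a minimal smoke test for an XPU wheel build (assuming a 2.5+ build with XPU support, where the `torch.xpu` module is present):

    ```python
    import torch

    # Minimal smoke test for an XPU wheel build; torch.xpu is present in
    # 2.5+ builds with XPU support.
    print(torch.__version__)
    print(torch.xpu.is_available())
    ```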
    Pull Request resolved: #137479
    Approved by: https://github.com/atalman, https://github.com/malfet
    
    (cherry picked from commit 7ba706c)
    
    Co-authored-by: Zheng, Zhaoqiong <zhaoqiong.zheng@intel.com>
    pytorchbot and ZhaoqiongZ authored Oct 18, 2024
    Commit a97c151

Commits on Oct 22, 2024

  1. [Cherry-Pick] Use cuda 12.4 pytorch_extra_install_requirements as default (#138526)

    Cherry-picks #138458.
    This had to be done manually due to conflicts with generated files.
    atalman authored Oct 22, 2024
    Commit 4076a73
  2. Don't try to load cufile (#138539)

    Don't try to load cufile (#138501)
    
    Trying to load it caused a major issue with the 2.5.0 release: #138324.

    cufile is not currently used by default; see #133489.
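
    As a hedged illustration (a Linux-only sketch, assuming `/proc/self/maps` is readable), one way to confirm that importing torch no longer maps libcufile:

    ```python
    import torch  # noqa: F401  # import first so its shared libraries are loaded

    # Linux-only sketch: scan the process memory map for libcufile.
    with open("/proc/self/maps") as maps:
        cufile_libs = {line.split()[-1] for line in maps if "cufile" in line}
    print(cufile_libs or "libcufile not loaded")
    ```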
    
    Pull Request resolved: #138501
    Approved by: https://github.com/atalman, https://github.com/mikaylagawarecki, https://github.com/malfet
    
    (cherry picked from commit 012ff2a)
    
    Co-authored-by: Sergii Dymchenko <sdym@meta.com>
    pytorchbot and kit1980 authored Oct 22, 2024
    Commit cde6b38
  3. Add link to "torch.compile, the missing manual" in troubleshooting (#137369)

    Add link to "torch.compile, the missing manual" in troubleshooting (#137301)
    
    Pull Request resolved: #137301
    Approved by: https://github.com/svekars
    
    Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
    (cherry picked from commit 22e19bd)
    
    Co-authored-by: Michael Lazos <mlazos@meta.com>
    pytorchbot and mlazos authored Oct 22, 2024
    Commit 70cf2bb
  4. Update cpuinfo submodule (#138600)

    A spiritual cherry-pick of #138351 that brings pytorch/cpuinfo#258 into this branch.
    
    Fixes #138333
    
    
    Test Plan: `python -c "import torch"` completes without printing anything to the screen.
    malfet authored Oct 22, 2024
    Commit 8c3ed97
  5. Update doc copyrights to 2024 (#138650)

    Update copyrights to 2024 (#138638)
    
    A spiritual successor to #119413, plus the corresponding copyright update for the C++ docs.
    Fixes #138630
    
    Pull Request resolved: #138638
    Approved by: https://github.com/atalman
    
    (cherry picked from commit d1be61c)
    
    Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
    pytorchbot and malfet authored Oct 22, 2024
    Commit 885c823
  6. [SDPA-CUDNN] Make CuDNN Attention Opt in (#138587)

    [SDPA-CUDNN] Make CuDNN Attention Opt in (#138522)
    
    # Summary
    Currently we have a `cudnn_order` that says, on H100 with a new enough cuDNN backend (we ship version 9.1 in OSS), to try running cuDNN attention first. We have already encountered a few bugs since the release of 2.5:
    
    1. #138529
    2. huggingface/diffusers#9704
    3. #138354
    
    In light of the above, we are making the cuDNN backend opt-in by default.
    
    Opting in is easy with the context manager for selecting backends, i.e.:
    ```python
    import torch
    import torch.nn.functional as F
    from torch.nn.attention import SDPBackend, sdpa_kernel

    # Illustrative inputs (batch, heads, seq_len, head_dim); shapes assumed.
    q = k = v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

    # Explicitly opt in to the cuDNN attention backend.
    with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
        out = F.scaled_dot_product_attention(q, k, v)
    ```
    
    This PR gives the cuDNN backend the lowest precedence in the backend list, meaning the Math backend will always be chosen ahead of it unless the other backends are disabled via the context manager.
    
    Cc @atalman
    
    Pull Request resolved: #138522
    Approved by: https://github.com/ngimel, https://github.com/eqy, https://github.com/malfet
    
    (cherry picked from commit 9a9a0ab)
    
    Co-authored-by: drisspg <drisspguessous@gmail.com>
    pytorchbot and drisspg authored Oct 22, 2024
    Commit 848e7ac
  7. [MPS] Fix sliced cast (#138535)

    [MPS] Fix sliced cast (#138314)
    
    This fixes an internal crash caused by an invalid buffer size computation when the sliced API is used.
    
    It is not clear what the purpose of the following was:
    ```c++
    IntArrayRef baseShape;
    if (src.is_view()) {
      baseShape = src._base().sizes();
    } else {
      baseShape = getIMPSAllocator()->getBufferShape(src.storage().data());
    }
    int flattenedShaped = 1;
    for (const auto i : c10::irange(baseShape.size())) {
      flattenedShaped *= baseShape[i];
    }
    ```
    The flattened shape can be computed much more simply as `[srcBuf length] / src.element_size()`, and even if `srcBuf` is padded it is a safe thing to do.
    
    When someone allocated a buffer to hold, say, uint8 values and then view-casted it to float16, the attempt to compute `baseShape` returned the sizes of the original tensor in its original dtype, rather than the size in the new dtype.
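
    A minimal sketch of that scenario (hedged; not the exact repro from #137800), bit-casting a sliced uint8 buffer to float16 on MPS:

    ```python
    import torch

    # A uint8 buffer is sliced and then view-cast to float16, so the
    # element counts in the two dtypes differ by a factor of two.
    buf = torch.zeros(16, dtype=torch.uint8, device="mps")
    half = buf[:8].view(torch.float16)  # 8 bytes reinterpreted as 4 float16 values
    print(half.shape)  # torch.Size([4])
    ```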
    
    Fixes #137800
    Pull Request resolved: #138314
    Approved by: https://github.com/albanD, https://github.com/DenisVieriu97
    
    (cherry picked from commit de16159)
    
    Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
    pytorchbot and malfet authored Oct 22, 2024
    Commit f31b8bb

Commits on Oct 23, 2024

  1. Disabling amp context when invoking compiler (#138659)

    Disabling amp context when invoking compiler (#138624)
    
    Fix for #133974
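
    For context, a hedged sketch of the kind of program affected (shapes, device, and dtype are assumed for illustration): `torch.compile` invoked under an autocast (amp) context.

    ```python
    import torch

    @torch.compile
    def matmul(x, y):
        return x @ y

    # Under autocast, eligible ops run in float16; this fix ensures the
    # amp context is disabled while the compiler itself is invoked.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        out = matmul(torch.randn(8, 8, device="cuda"),
                     torch.randn(8, 8, device="cuda"))
    print(out.dtype)  # torch.float16
    ```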
    
    Pull Request resolved: #138624
    Approved by: https://github.com/bdhirsh, https://github.com/drisspg
    
    (cherry picked from commit 5942b29)
    
    Co-authored-by: eellison <elias.ellison@gmail.com>
    pytorchbot and eellison authored Oct 23, 2024
    Commit a8d6afb