onednn 3.8.1 by BrewTestBot · Pull Request #224915 · Homebrew/homebrew-core · GitHub

onednn 3.8.1 #224915

Merged · 2 commits merged into master on May 28, 2025
Conversation

BrewTestBot
Member

Created by `brew bump`


Created with `brew bump-formula-pr`.
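
For context, a bump PR like this one is normally opened with a single `brew bump-formula-pr` invocation. The sketch below is a general illustration of that command, not the exact invocation used by the automation (whose flags are not recorded in this PR):

```sh
# Hypothetical example of opening a version-bump PR for onednn.
# The real automation may pass different flags (e.g. an explicit --url/--sha256).
brew bump-formula-pr --version=3.8.1 onednn
```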

release notes
This is a patch release containing the following changes to v3.8:
* Fixed correctness issue in reorder primitive with non-trivial strides on Intel CPUs (a762d3248ee5e04b2348f3a5aeecfa64da4634d8)
* Fixed runtime error in convolution weight gradient on Xe2 architecture-based Intel GPUs (a8fac73036f67657f51c10b385f967c64607e802, c409ef949ea112e8fc1caf480d55a07247b4a702)
* Fixed performance regression in `bf16` convolution on Intel Datacenter GPU Max Series (98170d0f138458f4b3fcefca773be2ef7e73959f, c6bae4aa45dbe9ff9fe4e51173dc301550832e08, c5edd53195f6b1465f4ab4857d64a704bb38e8e1, bb1a5919fbedd4ce078f2fcf368a3e099f6c3942)
* Improved performance of `fp16` matmul with `fp8` compressed weights on Intel GPUs (58f3ec1510a4b10e51e57227229d2b2cfe23f55a, abff1764af8a93dda5c9c8be11c5a1a5da31daa7, ffd7dd34d837f6ddb50d2b88515c5f45bb18ed4f, 3b1e855f440a13124d33c05e1ab671eba1401bba, 2e140de469d28b3f49d3284dc0e215b9b43b718a, 3429f79274957e4bd9b9c6ec12bcf2a4e8362a5b)
* Fixed runtime error in `fp16` pooling primitive on Xe2 architecture-based Intel GPUs (c0f6b6ded756c35d50b383c8078fdec1b3ad2f09)
* Improved performance of `fp16` matmul with `int4` weights and `32 < m <= 64` on Intel GPUs (2fa7072a4d632e341a10d883243c0b54359da2fc)
* Fixed correctness issues in `bf16` matmul with 3 or more dimensional tensors on processors with Intel AMX support (dd20965518965ff0f63093c1f90c957cbe9ad3e6, ea1b4a169d3fe59a8c8a5d60e5da30a5167e0b52)
* Fixed performance regression in `fp16` or `bf16` matmul with transposed source and weight tensors on Intel Datacenter GPU Max Series (e45e1aa4fe44e0ba0cfb74d58272fea59c47f683)
* Improved performance of `bf16` matmul with `int4` weights on Intel GPUs (7a15c231c569432ca74f7dd1db260f1f8877980c)
* Fixed runtime error in `fp16` SDPA subgraph with head size `512` on the integrated GPU of Intel Core Ultra (Series 2) processors (bde698584cbc6ca3f02649c8ff743f9b5d3d527e)

View the full release notes at https://github.com/uxlfoundation/oneDNN/releases/tag/v3.8.1.
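
Once the bottles built in this PR are published, end users pick up the release through the normal Homebrew flow. A minimal sketch (general Homebrew usage, not something specified in this PR):

```sh
# Refresh the homebrew-core tap so the 3.8.1 formula is visible locally,
# then upgrade to the newly bottled version.
brew update
brew upgrade onednn
```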


@github-actions github-actions bot added the `bump-formula-pr` label on May 27, 2025

🤖 An automated task has requested bottles to be published to this PR.

Please do not push to this PR branch before the bottle commits have been pushed, as this results in a state that is difficult to recover from. If you need to resolve a merge conflict, please use a merge commit. Do not force-push to this PR branch.

@github-actions github-actions bot added the `CI-published-bottle-commits` label on May 28, 2025
@BrewTestBot enabled auto-merge on May 28, 2025 at 00:40
@BrewTestBot added this pull request to the merge queue on May 28, 2025
Merged via the queue into master with commit 55e9b1e on May 28, 2025
17 checks passed
@BrewTestBot deleted the bump-onednn-3.8.1 branch on May 28, 2025 at 00:47
Labels

* `bump-formula-pr`: PR was created using `brew bump-formula-pr`
* `CI-published-bottle-commits`: The commits for the built bottles have been pushed to the PR branch.