vulkan: support CPY from any type to itself #13695

Merged
merged 1 commit into ggml-org:master on May 23, 2025

Conversation

@jeffbolznv (Collaborator)

Reuse the f16/f32 copy shaders, and just scale the number of elements according to the type size.

Should fix #13684.
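
For readers less familiar with the backend, the idea is that a copy where source and destination have the same type never needs to decode the data: the bytes can be pushed through an existing f16 (2-byte) or f32 (4-byte) copy shader, as long as the dispatched element count times the shader's element width equals the tensor's total byte size. The following is a minimal, self-contained C++ sketch of that arithmetic only; `type_traits`, `scaled_copy_elements`, and the example in `main` are illustrative stand-ins, not the actual ggml-vulkan code.

```cpp
// Sketch (assumed names, not ggml's API): for a same-type copy, reuse an
// f32 or f16 copy shader and scale the element count so that
//   dispatched_elements * shader_elem_size == total bytes of the tensor.
#include <cstddef>
#include <cstdio>

// Hypothetical stand-in for ggml's per-type metadata:
// bytes per block, and number of logical elements per block.
struct type_traits {
    std::size_t block_bytes;
    std::size_t block_elems;
};

// Compute how many shader elements (of width 2 or 4 bytes) cover the
// whole tensor. The caller must pick shader_elem_size so it divides the
// block byte size; block sizes are even, so the f16 (2-byte) shader
// always works, and the f32 shader works when 4 divides block_bytes.
static std::size_t scaled_copy_elements(std::size_t n_elems,
                                        const type_traits & tt,
                                        std::size_t shader_elem_size) {
    const std::size_t n_bytes = (n_elems / tt.block_elems) * tt.block_bytes;
    return n_bytes / shader_elem_size;
}

int main() {
    // Example: a Q4_0-style block geometry (32 elements in 18 bytes).
    type_traits q4_0{18, 32};
    std::size_t n = 4096; // logical element count, a multiple of the block size
    std::size_t dispatch = scaled_copy_elements(n, q4_0, /*shader_elem_size=*/2);
    // 4096 elems -> 128 blocks -> 2304 bytes -> 1152 two-byte elements
    std::printf("dispatch f16 copy shader over %zu 2-byte elements\n", dispatch);
    return 0;
}
```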

@wizardeur

I can confirm that the issue is no longer reproducible.

@slaren linked an issue (#13684) on May 21, 2025 that may be closed by this pull request
@0cc4m (Collaborator) left a comment

Thanks for the fix, LGTM

@0cc4m merged commit 1dcd019 into ggml-org:master on May 23, 2025
46 checks passed
Labels
ggml: changes relating to the ggml tensor library for machine learning
Vulkan: issues specific to the Vulkan backend
Projects
None yet
3 participants