ggml-cpu: x86 feature detection is specific to x86 #13811

Merged: 1 commit into ggml-org:master from x86-feat, May 27, 2025

Conversation

ckastner (Contributor)

I saw this get built on arm64 with GGML_BACKEND_DL=ON and GGML_NATIVE=OFF, and was a bit confused.
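
The PR title suggests the change restricts x86 feature detection to x86 targets, so that a non-x86 build (such as the arm64 configuration above) never compiles it. Below is a minimal sketch of what such an architecture guard can look like in plain C; it is not the actual ggml-cpu code, and the helper name `cpu_has_avx2` and the use of `<cpuid.h>` are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

#if defined(__x86_64__) || defined(__i386__)
// GCC/Clang header; real code would also need to cover MSVC via <intrin.h>.
#include <cpuid.h>

static bool cpu_has_avx2(void) {
    unsigned int eax, ebx, ecx, edx;
    // CPUID leaf 7, subleaf 0: EBX bit 5 reports AVX2 support.
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        return false;
    }
    return (ebx & (1u << 5)) != 0;
}
#else
// On non-x86 targets (e.g. arm64), the CPUID-based detection is never compiled.
static bool cpu_has_avx2(void) {
    return false;
}
#endif

int main(void) {
    printf("AVX2: %s\n", cpu_has_avx2() ? "yes" : "no");
    return 0;
}
```

The build-system side presumably follows the same idea: x86 detection sources and flags are only added when the target architecture is x86, so a GGML_BACKEND_DL=ON, GGML_NATIVE=OFF build on arm64 does not pull them in.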

github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on May 26, 2025
slaren merged commit 7fe03e7 into ggml-org:master on May 27, 2025
46 checks passed
ckastner deleted the x86-feat branch on June 9, 2025 at 15:05
Labels: ggml (changes relating to the ggml tensor library for machine learning)
Projects: none yet
Participants: 2