Incorporate embedding pooling layer fixes by iamlemec · Pull Request #1194 · abetlen/llama-cpp-python

Merged
merged 2 commits into abetlen:main on Feb 15, 2024

Conversation

iamlemec
Contributor

Made some fixes to the pooling layer in llama.cpp that are reflected here. Previously we had to divide the summed token embeddings by the number of tokens in the sequence. Now we can take the pooled embeddings as-is and optionally normalize them. Also changed input truncation to n_batch rather than n_ctx, since n_batch is the buffer we're actually writing to.
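To make the before/after concrete, here is a minimal NumPy sketch of the pooling change; the array shape and values are made up for illustration and only stand in for what llama.cpp computes internally:

```python
import numpy as np

# Hypothetical per-token embeddings for one sequence: (n_tokens, n_embd)
token_embeddings = np.random.rand(7, 384).astype(np.float32)

# Before: the wrapper had to finish the mean pooling itself,
# dividing the summed embeddings by the number of tokens.
pooled_before = token_embeddings.sum(axis=0) / token_embeddings.shape[0]

# After: llama.cpp hands back an already-pooled vector (the mean below is a
# stand-in for that value); the only remaining optional step is L2 normalization.
pooled_after = token_embeddings.mean(axis=0)
normalized = pooled_after / np.linalg.norm(pooled_after)
```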

Embedding values now match SentenceTransformers very closely, usually around 1 - (1e-7) cosine similarity, though there are some remaining issues with tokenizing accented text.
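A rough way to reproduce that comparison is sketched below. The GGUF path and the SentenceTransformers model name are placeholders (use the same embedding model on both sides), not something fixed by this PR:

```python
import numpy as np
from llama_cpp import Llama
from sentence_transformers import SentenceTransformer

text = "café au lait"  # accented text is where the remaining mismatches show up

# Placeholder model files/names: substitute your own GGUF conversion of the model.
llm = Llama(model_path="bge-small-en-v1.5-f16.gguf", embedding=True)
st = SentenceTransformer("BAAI/bge-small-en-v1.5")

a = np.asarray(llm.create_embedding(text)["data"][0]["embedding"], dtype=np.float32)
b = st.encode(text)

cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cos:.8f}")  # typically about 1 - 1e-7
```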

abetlen merged commit 7bb91f0 into abetlen:main on Feb 15, 2024
abetlen
Owner
commented Feb 15, 2024

Thanks @iamlemec
