perf: optimize RandLora dimension calculation with shape caching by srijondasgit · Pull Request #2536 · huggingface/peft


Conversation

srijondasgit

Description

This PR optimizes the RandLora dimension calculation by caching module shapes and improving the dimension-finding logic.

Changes

  • Added per-module shape caching to avoid recomputing module shapes
  • Optimized dimension finding by tracking the maximum input and output dimensions separately (see the sketch below)
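
A minimal sketch of the caching idea, under stated assumptions: `DimFinder`, `find_dim`, and the restriction to `nn.Linear` layers are illustrative placeholders, not the actual PEFT implementation.

```python
from __future__ import annotations

import torch.nn as nn


class DimFinder:
    """Caches per-module shapes so repeated dimension scans skip the lookup."""

    def __init__(self) -> None:
        # module name -> (in_features, out_features)
        self._shape_cache: dict[str, tuple[int, int]] = {}

    def find_dim(self, model: nn.Module) -> tuple[int, int]:
        # Track the running maxima instead of collecting every shape first.
        max_in = max_out = 0
        for name, module in model.named_modules():
            if not isinstance(module, nn.Linear):
                continue
            if name not in self._shape_cache:
                # Computed once per module; this only pays off if the same
                # DimFinder instance is reused across calls.
                self._shape_cache[name] = (module.in_features, module.out_features)
            in_f, out_f = self._shape_cache[name]
            max_in = max(max_in, in_f)
            max_out = max(max_out, out_f)
        return max_in, max_out
```

Note that re-instantiating the finder on every call would make the cache pointless, which is the concern raised in the review below.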

Performance Impact

  • Reduces redundant shape calculations for each module
  • More efficient handling of layers with different dimensions
  • Better memory usage by avoiding repeated shape computations

Testing

  • All RandLora tests pass

@githubnemo
Collaborator

Hi, thanks for the pull request. Maybe I'm missing something, but I fail to see how this change improves performance. In particular, the cache would only be useful if it were shared across `_find_dim` calls, which it currently isn't.

Do you have a test or script to measure the impact of this change? I'd suggest demonstrating the severity of the issue before attempting a fix.
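
One way to measure this would be a micro-benchmark along the following lines; the layer sizes, repeat count, and the `naive_find_dim` baseline are illustrative placeholders, not PEFT's code.

```python
import time

import torch.nn as nn


def naive_find_dim(model: nn.Module) -> tuple[int, int]:
    # Recompute every module's shape on each call (no caching).
    shapes = [
        (m.in_features, m.out_features)
        for m in model.modules()
        if isinstance(m, nn.Linear)
    ]
    return max(s[0] for s in shapes), max(s[1] for s in shapes)


def time_it(fn, model: nn.Module, repeats: int = 100) -> float:
    # Average wall-clock time per call over `repeats` runs.
    start = time.perf_counter()
    for _ in range(repeats):
        fn(model)
    return (time.perf_counter() - start) / repeats


# A stack of linear layers stands in for a transformer's modules.
model = nn.Sequential(*[nn.Linear(512, 512) for _ in range(200)])
print(f"naive: {time_it(naive_find_dim, model):.6f}s per call")
# A cached variant (e.g. the DimFinder sketch above) would be timed the
# same way for a side-by-side comparison.
```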

github-actions bot commented Jun 7, 2025

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

github-actions bot closed this Jun 15, 2025