FIX Make dict learning loss match the paper #19210
Conversation
@jeremiedbb this PR can be rebased on top of the main branch now that #19198 has been merged.
How does this qualitatively impact the existing examples? Apparently no tests were impacted by this change. I am not sure this is a good sign ;)
Assuming it does not impact the examples, this is fine with me, but it needs to be documented in the changelog.
Thanks @jeremiedbb
Co-authored-by: Olivier Grisel <olivier.grisel@gmail.com>
Built on top of #19198 for convenience.
In the paper
the constraint is that the norm of each atom is less than or equal to 1. The current documentation and implementation make it exactly equal to 1. I'm not sure what this implies for the optimization problem, but it might be considered a bug.
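For context, the dictionary learning objective is commonly written in the referenced paper (Mairal et al., 2009) with a ball constraint rather than a sphere constraint; the notation below is illustrative, not taken from the scikit-learn docs:

$$
\min_{D,\,A}\ \frac{1}{2}\lVert X - D A\rVert_F^2 + \alpha \lVert A\rVert_1
\quad \text{subject to} \quad \lVert d_k\rVert_2 \le 1 \ \text{for every atom } d_k,
$$

whereas the current implementation enforces $\lVert d_k\rVert_2 = 1$ for every atom.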
SparsePCA relies on dict learning and has the same issue. In the paper
the constraint on the components is also less than or equal to 1.
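To illustrate the difference between the two constraints, here is a minimal NumPy sketch. The helper names and the row-wise atom layout are illustrative and not taken from the scikit-learn code:

```python
import numpy as np

def normalize_atoms(D):
    """Force every atom (row of D) to have unit l2 norm (the "= 1" behaviour)."""
    norms = np.linalg.norm(D, axis=1, keepdims=True)
    norms[norms == 0] = 1  # leave all-zero atoms untouched
    return D / norms

def project_atoms_to_unit_ball(D):
    """Project every atom onto the l2 unit ball, i.e. ||d_k||_2 <= 1 (the paper's constraint)."""
    norms = np.linalg.norm(D, axis=1, keepdims=True)
    return D / np.maximum(norms, 1.0)  # only rescale atoms whose norm exceeds 1

rng = np.random.default_rng(0)
D = rng.normal(scale=0.3, size=(4, 6))  # some atoms already have norm < 1

print(np.linalg.norm(normalize_atoms(D), axis=1))             # all exactly 1
print(np.linalg.norm(project_atoms_to_unit_ball(D), axis=1))  # all <= 1, small atoms unchanged
```

The point of the sketch is only that projection onto the unit ball leaves atoms with norm below 1 untouched, while normalization rescales them up to the unit sphere.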
This change was also proposed in #9036