Implement Adaptive Lasso / reweighted L1 · Issue #555 · scikit-learn/scikit-learn · GitHub
Implement Adaptive Lasso / reweighted L1 #555


Closed
ogrisel opened this issue Jan 14, 2012 · 7 comments

Comments

@ogrisel
Member
ogrisel commented Jan 14, 2012

Here is a proof of concept implementation by @agramfort

https://gist.github.com/1610922

Implementing this in scikit-learn would require converting it to the scikit-learn API and naming conventions, plus an example comparing it with the classic Lasso and narrative documentation.

What motivated @agramfort's gist was:

http://books.nips.cc/papers/files/nips24/NIPS2011_1135.pdf

See references therein for the original contributions.

@vinayak-mehta
Contributor

I would like to give this a try, but I can't really find the paper at that link.

@amueller
Member

I can't see it either.
Hmm, that paper was fresh when @ogrisel posted it. It would be interesting to know how much traction it has gotten since.

@howthebodyworks

Since the reference request is in this thread, I'm dumping it here rather than #4912.

Adaptive Lasso is huge in my field (statistics) because of its oracle properties, and because reweighting turns out to be essential for robust fitting in a variety of circumstances. I think the original reference here is Zou; here's a preprint: http://users.stat.umn.edu/~zouxx019/Papers/adalasso.pdf

More generally, there are many Lasso variants that can be implemented on top of the basic Lasso given access to coefficient or penalty weightings; see, e.g., this example of fitting an adaptive Lasso using R's glmnet: http://ricardoscr.github.io/how-to-adaptive-lasso.html

If the sklearn.Lasso* estimators exposed such weights, as glmnet does, it would be easy to implement the adaptive Lasso and various other variants (e.g. the trace-norm Lasso from the same NIPS: http://papers.nips.cc/paper/4277-trace-lasso-a-trace-norm-regularization-for-correlated-designs )
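For what it's worth, the glmnet-style reweighting described above can already be approximated with plain `sklearn.linear_model.Lasso` by rescaling columns: divide each feature by its adaptive weight, fit an ordinary Lasso, then divide the coefficients back. A minimal sketch (the pilot estimator, `alpha`, `gamma`, and the toy data are all illustrative choices, not anything prescribed by the thread):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression

# Toy data: 3 informative features out of 10
X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=1.0, random_state=0)

# Step 1: pilot estimate (OLS here; ridge is another common choice)
beta_init = LinearRegression().fit(X, y).coef_

# Step 2: adaptive weights w_j = 1 / |beta_init_j|^gamma (Zou's scheme),
# with a small eps to avoid division by zero
gamma, eps = 1.0, 1e-6
w = 1.0 / (np.abs(beta_init) ** gamma + eps)

# Step 3: rescale columns, fit an ordinary Lasso, undo the rescaling.
# Penalizing beta_j with weight w_j is equivalent to an unweighted
# Lasso on X[:, j] / w_j.
X_w = X / w                       # broadcasts over columns
lasso = Lasso(alpha=0.1).fit(X_w, y)
beta_adaptive = lasso.coef_ / w   # coefficients on the original scale
```

The same rescaling trick iterated a few times gives the reweighted-L1 scheme the issue title refers to.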

@howthebodyworks

NB sample weighting is addressed in #3702

@georged4s
georged4s commented Oct 11, 2022

Is this issue still open, @ogrisel? In this reply from #4912, it is suggested that this be added as an example, perhaps in the Lasso documentation. Also, a reply there raises concerns about the clarity of the mathematical definition of the Adaptive Lasso in the original paper.

Would be great to know your thoughts on this. Thanks!

@mathurinm
Contributor

For the record @georged4s, the skglm package, part of scikit-learn-contrib, implements a fully scikit-learn-compatible IterativeReweightedL1 estimator, which performs such reweighting. See the doc here and an example with the L0.5 penalty here.

Feedback welcome!

@adrinjalali
Member

This seems to be out of scope for scikit-learn and is available elsewhere, so we can close.

@adrinjalali closed this as not planned on Apr 17, 2024